Interpretable human-AI interaction for content creation and machine autonomy


  • Register

    * Register (or log in) to the AI4G Neural Network to add this session to your agenda or watch the replay

  • Deep learning models such as ConvNets and transformers have made huge progress in real-world applications, from image generation to autonomous driving. Researchers mainly focus on building larger and deeper models to improve accuracy and performance on public benchmarks. However, when AI models are deployed in the real world, it is difficult to establish a trustworthy relationship between humans and AI without meaningful interactions. In this talk, Professor Bolei Zhou will present his group's efforts to facilitate interpretable human-AI interaction for image generation and machine autonomy tasks. He will first introduce their work on improving the controllability of deep generative models for interactive image editing. Professor Zhou will then discuss their recent work on human-in-the-loop machine learning for efficiently learning safe and interactive autonomous agents in visual navigation and autonomous driving environments.

    This live event includes a 30-minute networking event hosted on the AI for Good Neural Network. This is your opportunity to ask questions, interact with the panelists and participants, and build connections with the AI for Good community.


    • Start date
      3 October 2022 at 17:00 CEST, Geneva | 08:00 PDT, California | 23:00 CST, Beijing
    • End date
      3 October 2022 at 18:30 CEST, Geneva | 09:30 PDT, California | 00:30 CST, Beijing
    • Duration
      90 minutes (including 30 minutes networking)