Interpretable human-AI interaction for content creation and machine autonomy
Deep learning models such as ConvNets and transformers have made huge progress in real-world applications, from image generation to autonomous driving. Researchers mainly focus on building larger and deeper models to improve accuracy and performance on public benchmarks. However, when AI models are deployed in the real world, it is difficult to establish a trustworthy relationship between humans and AI without meaningful interaction. In this talk, Professor Bolei Zhou will present his group's efforts to facilitate interpretable human-AI interaction for image generation and machine autonomy tasks. He will first introduce their work on improving the controllability of deep generative models for interactive image editing. Professor Zhou will then discuss their recent work on human-in-the-loop machine learning for efficiently training safe and interactive autonomous agents in visual navigation and autonomous driving environments.
This live event includes a 30-minute networking session hosted on the AI for Good Neural Network. This is your opportunity to ask questions, interact with the panelists and participants, and build connections with the AI for Good community.