AI for multisensory experiences: where the senses meet technology
Most of our everyday life experiences are multisensory in nature – they consist of what we see, hear, feel, taste and smell. Almost any experience you can think of, such as enjoying a meal or watching your favourite TV show, involves a magnificent sensory world.
In recent years, many of these experiences have been transformed and capitalised on through technology products and services that adapt the world around us to suit our increasingly computerised environment. AI has the potential to scale human experiences beyond our imagination, creating new avenues to explore and reconnect with our world. At the same time, misuse of AI can amplify negative or false experiences, blurring the line between what we should feel and what we are made to feel.
In this first Let’s Talk, we will connect with Marianna Obrist (University College London) and Carlos Velasco (BI Norwegian Business School, Norway) to discuss the evolution of multisensory experiences and the relationship between AI and the senses. We will talk about how they each came into the field, why they are passionate about their work, and why it matters in an increasingly digitised world. We will also explore what the future of these AI-augmented experiences could look like, how they can help address challenges related to the Sustainable Development Goals, and our shared responsibility in shaping a future that improves our quality of life and the world we live in.