AI for Good stories

Defining the Good in Robotics for Good: Establishing guidelines and standards for societally beneficial assistive robots

How can we ensure that robots will benefit society, making the world a better place? This was the core question of the Robots for Good workshop on May 30th, convening leading experts from academia and industry to discuss questions of morality and technical specifications for socially assistive robots.

by Cindy X. S. Zheng



The workshop was organized by several academic leaders in the field: Selma Šabanović, Associate Professor of Informatics and Cognitive Science at Indiana University Bloomington; Shelly Levy-Tzedek, Associate Professor and Director of the Cognition, Aging & Rehabilitation Laboratory at Ben-Gurion University; Vicky Charisi, Research Fellow at the Berkman Klein Center, Harvard University; and Maja Matarić, Professor in the Computer Science Department at the University of Southern California. From design and development to deployment, the workshop wrestled with the question of how robots must be built to become trustworthy, provide social value, and make a positive impact.

Defining Good for Robotics

In preparation for the workshop, the organizers compiled ten defining questions for ‘Robots for Good’ in a comprehensive report that seeks to concretize the process of defining the inherent goodness of a robotic solution. This report introduced the goal of the workshop: to gather a diverse set of robotics experts to investigate the main ethical considerations, from which standards for the responsible development and deployment of socially assistive robots can be created.

Throughout the sessions, a consensus emerged on the need to comprehensively understand and define what constitutes ‘good’ within the context of assistive robots. The participants grappled with the reality that ‘good’ is not a one-size-fits-all concept. Each community, discipline, and individual might have varying interpretations of what ‘good’ means, making it challenging to establish a universal ethical standard for robotics.

“How exactly do we define good? We published this paper and then the ITU invited us to run this workshop to start creating a community so that all of us together can try to define what good means for the community of robotics and social assisted robots. We are aware that sometimes good might be conflicting for different populations so it’s not straightforward to define.” Vicky Charisi

Perspectives from Robotics experts on realizing Robotics for Good

The introduction was followed by speaker presentations highlighting the participants’ various areas of expertise, accompanied by practical examples.

David Crandall, Luddy Professor of Computer Science at Indiana University Bloomington, started off with insights into the IEEE computer vision and AI community, focusing on the impact of computer vision on the field.

Next, Friederike Eyssel, Professor of Psychology and Head of Lab at CITEC, Bielefeld University, introduced a psychological approach to defining goodness in robots, encouraging her peers to reflect on the ethical, legal, and social implications of deploying robots.

“We have to be mindful of the biases that we bring into the research situation and have to take that into account in order to cater more to diversity.” Friederike Eyssel

Ginevra Castellano, Professor in Intelligent Interactive Systems and Director of the Uppsala Social Robotics Lab in the Department of Information Technology at Uppsala University, presented her study on using a social robot for perinatal depression (PND) screening. Her user interviews revealed that robots can save time, compensate for a lack of resources, and offer a less stigmatizing context. However, they can also make patients feel abandoned by the healthcare system and reduce their societal well-being. She called for regulation, for operationalizing the ethical guidelines of trustworthy AI, and for a multidisciplinary approach involving societal stakeholders.

The following keynote by Alessandra Sciutti, Tenure Track Researcher and Head of the CONTACT Unit at the Italian Institute of Technology (IIT), presented the iCub robot, developed in the RobotCub project, which she uses to study human behaviour. In her talk she warned of several risks, especially those related to language, and agreed with the previous speaker that multidisciplinary discourse can serve as a self-correcting tool for facing those challenges.

Social robots in action

The second half of the speaker presentations came from developers of robotic applications. Rodolphe Hasselvander, CEO of Blue Frog Robotics, presented his robotic platform Buddy, a robotic companion that enhances social interactions for children with autism; for the elderly, it can also be used for health monitoring. Through his process, he found co-creation with stakeholders to be the most effective way to counter misalignment between user needs and the final product.

Randy Gomez, Chief Scientist at Honda Research Institute Japan, attended with the robot Haru, which is designed to be an embodied mediator for children in homes, schools, and hospitals. He described their approach to robotic assistance as strengthening, rather than replacing, the human experience. On that note, Takanori Shibata, Chief Senior Research Scientist at the National Institute of Advanced Industrial Science and Technology (AIST), agreed, introducing the robot seal PARO, which offers animal-assisted therapy without the drawbacks of owning a pet.

Robots in daily life: Between emotional support and emotional dependency

The workshop culminated in a panel discussion that focused on the governance structure for the guidelines and standards, the urgency of the matter, and the exchange of best practices, with an interactive segment at the end to involve the audience.

One main concern was the ambivalence people may feel towards robots, which could be clarified through further research into the effects of robot attachment and removal on users, including the consequences of long-term use outside the laboratory. Participants further agreed on the need for standardized metrics that can accurately assess positive outcomes of socially assistive robots across different countries.

The key topics of discussion were the protection of human rights in robotics and its regulation, privacy concerns, and cultural considerations in the design process. Transparency and training were mentioned as necessary to ensure appropriate use, while the degree of customization of robots was debated on ethical grounds. Researchers also argued that development timelines must be extended, both to improve the product through iterative testing before deployment and to better understand individual and societal effects, which also requires long-term funding.

Ensuring Robotics for Good with interdisciplinary cooperation, transparency, and user co-creation

Participants did not shy away from asking the existential question: why should robots address the problem in the first place? Given the significant technical resources a socially assistive robot requires, other possible solutions to the problems it aims to solve should also be included in the decision-making.

The workshop’s conclusions acknowledged both the potential and the pitfalls of integrating assistive robots into daily life. While robots can significantly enhance human capabilities and well-being by providing companionship, the expert exchange revealed a healthy scepticism about issues surrounding privacy, emotional dependency on machines, and the subtle ways in which robots could influence human decisions.

Maintaining transparency throughout a robot’s lifecycle therefore becomes essential: from its initial design to its everyday interactions with humans, users deserve to know how a robot works, what data it gathers, and how it processes that information. Such clarity is crucial not only for fostering trust but also for respecting individual rights and societal norms. The recurring theme of interdisciplinary cooperation complements this: by bringing together expertise from robotics, ethics, law, and psychology, along with direct input from end-users, the community can strive to create robotic solutions that are not only technologically sound but also ethically grounded and socially responsible.

This workshop explored how robotics can be ethically and effectively harnessed for the greater good of society. Attendees called for a proactive approach to the regulation of assistive robots and advocated a forward-looking regulatory framework that requires a systematic approach to the design and development of socially assistive robots. The workshop also initiated the formation of a community actively involved in assessing socially assistive robots and their ultimate impact, ensuring that they improve individuals’ quality of life and societal well-being overall.

To learn more about ethical guidelines and standardization for socially assistive robots, watch the full webinar below:
