AI has the potential to reveal detailed private information through embedded, mobile and wearable devices, advanced facial recognition and predictive analysis, raising the question of whether current privacy and security standards can still safeguard personal data, individual privacy and anonymity. Growing security vulnerabilities will also affect the safe application of AI technologies in contexts such as autonomous vehicles and drones, biomonitoring, healthcare robotics, and robots responsible for maintaining public order. As AI-powered technologies can self-advance, creating uncertainty about how to apply standard data protection principles of accountability, transparency, consent and control, how can we ensure that proper data privacy and data security measures and standards are in place? This session will aim to discuss and identify strategies to ensure that AI contributes to global security and peace, protects individuals against unauthorized manipulation of AI algorithms, and does not create chaos.
- Hongjiang Zhang, Head of Bytedance Technical Strategy Research Center
- Virginia Dignum, Professor of Ethical & Social Artificial Intelligence, Umeå University
- Drudeisha Madhub, Data Protection Commissioner, Prime Minister's Office
- Brian Witten, Senior Director, Symantec Research Labs (SRL) Worldwide
- Frederike Kaltheuner, Policy Officer, Privacy International
- Konstantinos Karachalios, Managing Director, IEEE Standards Association
- Mark Latonero, Lead Researcher for the Data & Human Rights Program, Data & Society
- Sean McGregor, Technical Lead, XPRIZE Foundation
Algorithm-based machines increasingly learn from and autonomously interact with their environments, thereby developing unexplainable forms of decision making. In many ways this is just as much a new frontier for ethics and risk assessment as it is for emerging AI technologies. Should AI be able to make life-and-death decisions, for example, in deciding how autonomous vehicles behave in the moments preceding a crash? Where does liability rest for harm caused by AI? How can we avoid biases in AI decision-making that cause inequality and discrimination? How can we ensure that a world of increasingly proactive computing remains human-centered, protecting human identity and dignity? This session will discuss the challenges posed by the use of AI in today's world and will aim to identify possible solutions to ensure that the design and operation of AI is, at a minimum, characterized by accountability and respect for human rights and purpose.
- Luka Omladic, Lecturer and Researcher at the Philosophy Department, University of Ljubljana
- Lorna McGregor, Director, Human Rights Centre
- Francesca Rossi, IBM Fellow, IBM T.J. Watson Research Lab
- Wendell Wallach, Carnegie/Uehiro Fellow and Co-director of the AI and Equality Initiative, Carnegie Council for Ethics in International Affairs
- Chinmayi Arun, Research Director, Centre for Communication Governance
AI will eventually be capable of performing not just routine tasks but also the functions of doctors, lawyers, engineers and other professions reliant on expert judgement and specialized qualifications. How will AI's augmentation and elimination of jobs affect the quality of life enjoyed by human beings? Will AI's increasing influence on production processes reduce tax revenues to the detriment of social welfare systems, and is it time to revisit the concept of social welfare spending as more and more people hand over their jobs to machines?
- Barmak Heshmat, PI Research Scientist, Massachusetts Institute of Technology (MIT)
- Ekkehard Ernst, Chief of the Macroeconomic Policies and Jobs Unit, International Labour Organization (ILO)
- Irmgard Nübler, Senior Economist in the Research Department, International Labour Organisation (ILO)
- Olga Memedovic, Chief, Europe and Central Asia Bureau, UNIDO
- Ratika Jain, Executive Director, Manufacturing, Confederation of Indian Industry (CII)
- Manuela Veloso, Herbert A. Simon University Professor, School of Computer Science at Carnegie Mellon University
- Plamen Dimitrov, President, Confederation of Independent Trade Unions of Bulgaria
- Stuart Russell, Professor of Computer Science at the University of California, Berkeley, and Author of 'Human Compatible: Artificial Intelligence and the Problem of Control', UC-Berkeley
- Alexandre Cadain, Co-Founder & CEO, ANIMA
AI technology is enabling a wide range of new ways for humans and machines to interact, from intelligent and autonomous robots to smart spaces, natural language interfaces, and even physical and cognitive human augmentation. These technologies, if well deployed, can have enormous social impact, supporting independence, economic development and engagement, and cultural diversity. What are the most promising applications of novel human-machine interaction in the coming five to ten years, focusing on applications that can support disadvantaged individuals and communities? And what are the associated risks of these technologies, and what steps may developers, communities, and governments need to take to regulate them so that their benefits outweigh their adverse impacts?