AI for Good blog

Navigating AI’s Legal and Ethical Frontiers

By Celia Pizzuto

The AI for Good Global Summit 2024, held in Geneva, brought together leaders and innovators from various sectors to discuss the transformative potential of artificial intelligence (AI). Among the distinguished speakers was Danny Tobey, a partner at DLA Piper, who brought his multifaceted expertise as a lawyer, medical doctor, and software founder to a compelling discussion on the intersection of AI, law, and ethics. Tobey, recognized by the Financial Times as the Innovative Lawyer of the Year in 2023, offered deep insights into the challenges and opportunities that generative AI presents across industries.

“Red teaming is something we’re very focused on right now,” Tobey began, explaining how his firm is approaching the testing of generative AI. In the context of legal practice, red teaming involves rigorously examining AI models to identify and mitigate potential risks. This is particularly crucial in sectors like healthcare, education, and insurance, where the stakes are high.

“As lawyers, we have a really interesting role to play. We have to take these concepts like fairness and transparency and take them from principles and actually help companies come up with a way to prove that they’re aligning with those values,” Tobey explained.

He detailed how DLA Piper conducts legal red teaming by treating AI models as if they were witnesses in a deposition. The firm identifies the legal risks associated with the AI, examining the societal norms and regulations that dictate acceptable behavior in specific industries. Lawyers then rigorously depose the AI models, much like they would a witness in a legal proceeding. This methodical approach ensures that AI systems comply with legal and ethical standards before they are deployed, helping companies avoid potential pitfalls.

The conversation then turned to the commercial advantages of ethical AI development. Tobey agreed with the notion that ethical practices would become a competitive edge. “If for no other reason than the people who don’t take that route are going to be driving off a cliff,” he remarked. He emphasized the importance of upfront investment in making AI safe, reliable, and consistent, highlighting the issue of consistency in generative AI:

“One of the things we’re really helping companies do is set up ways to monitor their AI over time,” Tobey said.

Addressing the balance between innovation and regulation, Tobey drew on his experience as a former software founder. He acknowledged the iterative nature of software development and the need for practical safeguards. Instead of striving for perfection, Tobey emphasized the importance of carefully considering potential failures and implementing safeguards to address them, while still allowing innovation to progress.

A pressing issue in AI development is the potential for bias and inequality. Tobey stressed the importance of defining key terms like fairness, bias, and accessibility. He noted that AI governance has matured, evolving from ethical AI to responsible AI, and now to legal AI.

“We may not be able to agree on fairness as a philosophical concept, but we know what the law says about discrimination, about bias, about infliction of emotional distress,” he explained.

This legal framework provides a practical approach for companies to test and ensure their AI systems meet societal standards.

Tobey also highlighted the significant opportunities AI presents, particularly in improving access to justice. He pointed out that many people worldwide lack access to lawyers or judicial systems because legal processes are slow, cumbersome, and expensive.

“I think AI is an incredible tool for opening up access to legal information,” he said.

To this end, DLA Piper has founded the AI Law and Justice Institute, a non-profit initiative working under AI for Good. The institute aims to bring together experts to develop responsible, consistent, and affordable legal systems, with the first symposium scheduled at Stanford in the fall.

When asked about the most exciting advancements in AI, Tobey pointed to generative AI’s potential as a communication tool. He noted that currently, generative AI is being used primarily as a question-and-answer machine, but he envisions its long-term role as a translator that facilitates natural language communication with all types of AI and technology. This capability allows for more natural interaction with various technologies, democratizing access to AI’s power. However, it also raises the stakes, necessitating greater attention to safety and ethical considerations.

In response to whether AI will replace lawyers, Tobey shared a popular saying:

“AI will not replace lawyers; lawyers who use AI will replace lawyers who don’t.”

He believes that while AI will handle routine tasks, the human elements of legal practice—communication, consensus-building, and negotiation—are irreplaceable. According to Tobey, there is an inherent humanity in interactions and negotiations that ensures lawyers will always be necessary.

Danny Tobey’s insights at the AI for Good Global Summit 2024 underscore the critical role that legal professionals play in the responsible deployment of AI. His vision for integrating rigorous legal standards with cutting-edge technology provides a roadmap for navigating the complex landscape of AI ethics and regulation. As AI continues to evolve, Tobey’s approach offers a balanced perspective on harnessing its potential while safeguarding societal values.


Watch the full interview here.
