AI and the Social Contract: How Sam Altman Envisions Tomorrow’s World
by Alexandra Bustos Iliescu
AI is no longer a futuristic concept; it is a transformative force reshaping industries, economies, and daily life. As AI technologies rapidly advance, the world stands at a crossroads, balancing incredible potential with significant challenges. At the recent AI for Good Global Summit 2024, Sam Altman, CEO of OpenAI, provided a comprehensive look into the current state and future trajectory of AI, addressing its impacts, governance, and the ethical considerations that accompany its rise. The keynote discussion was moderated by Nicholas Thompson, CEO of The Atlantic.
Altman began by discussing the immediate effects of AI on productivity, highlighting software developers as a prime example. He noted that AI tools have significantly accelerated their work processes.
“People can just do their work much faster, more effectively [and] work more on the parts that they want,” Altman said. “Like other sort of technological tools, they become part of a workflow, and it pretty quickly becomes difficult to imagine working without them.”
Altman pointed out that these efficiency gains would extend to industries ranging from education to healthcare.
He emphasized that AI is already making significant strides in enhancing productivity, and he expects these gains to be the first clearly detectable positive outcome of the technology.
However, with great power comes great responsibility, and Altman did not shy away from addressing the potential negative impacts of AI. He raised concerns about cybersecurity, identifying it as a critical area of focus.
“Cybersecurity […] could be quite a problem,” he warned, highlighting the need for vigilance as AI continues to evolve.
This dual perspective underscores the complexity of AI’s impact: while it offers tremendous benefits, it also poses significant risks that must be managed.
As OpenAI embarks on training its next iteration of large language models, Thompson asked Altman about language equity. Altman acknowledged the disparity in performance across different languages and emphasized OpenAI’s commitment to narrowing it.
“One of the things that we’re really pleased with GPT-4 […] is that it is very good at a much wider variety of languages,” Altman noted. “We will make future ones even better, but I think the stat we announced was a good coverage for 97% of people for their primary language.”
He stressed the importance of inclusivity and equity as AI models advance, ensuring that the benefits of AI are accessible to a global audience.
Regarding the level of improvement expected in the new models, Altman predicted significant advancements but remained cautious about setting unrealistic expectations.
“I think the best thing for us […] is to show not tell. We will try to do the best research we can and figure out how to responsibly release whatever we’re able to create. I expect that it’ll be hugely better in some areas and surprisingly not as much better in others,” he said, underscoring the unpredictable nature of AI development.
This cautious optimism reflects a realistic approach to AI innovation, acknowledging both its potential and its limitations.
The conversation also touched on the use of synthetic data for training AI models. Altman acknowledged that OpenAI has experimented with synthetic data but emphasized the importance of high-quality data.
“As long as we can find enough quality data to train our models […] or ways to train and get better at data efficiency […] I think that’s okay,” he remarked.
He expressed hope that future models could learn more efficiently from smaller amounts of data, addressing concerns about the potential “corruption” of AI systems by synthetic data. This focus on data quality is crucial, as the integrity of AI models depends heavily on the datasets used for their training.
Safety in AI development is a paramount concern for OpenAI. Altman discussed the challenges of interpretability in AI models, admitting that while progress has been made, much remains to be understood.
“Safety is going to require like a whole package approach,” he said, highlighting the complexity of ensuring AI models are both effective and secure.
He acknowledged that understanding AI at a granular level is still a work in progress but stressed the importance of continued advances in this area. Altman also cited the recent “Golden Gate Bridge” interpretability work, in which researchers at Anthropic identified and manipulated the internal feature a model uses to represent the bridge, as a notable breakthrough on this question.
When asked whether a balance between capabilities and safety should be maintained, Altman argued against a simplistic separation of the two.
“You’re trying to design this integrated system that is going to safely get you where you want to go,” he explained, likening the process to designing an airplane that is both efficient and safe.
Just as an airplane must carry passengers swiftly and safely to their destination, AI systems must perform their tasks effectively while ensuring they do not cause harm. In this view, safety is a core component of AI innovation rather than an afterthought.
The governance of AI is a critical issue, especially as AI systems become more powerful and widespread. Altman responded to criticisms about OpenAI’s governance structure by pointing to the company’s actions and the safety measures implemented in their models.
“You have to look at our actions, the models that we have released and the work we have done,” he stated, defending OpenAI’s track record.
He emphasized that OpenAI’s commitment to safety is reflected in their rigorous testing and deployment processes.
Addressing broader regulatory concerns, Altman suggested that effective regulation would require empirical observation and iterative improvement. He emphasized the need for a balance between long-term planning and short-term adaptability, given the rapid pace of AI development.
“We don’t know yet how society and this technology are going to co-evolve,” he said, advocating for a flexible approach to regulation.
This perspective recognizes the dynamic nature of AI and the importance of staying adaptable in regulatory frameworks.
Altman touched on the topic of Artificial General Intelligence (AGI), which represents a significant focus for OpenAI. He suggested that AGI could lead to profound changes in society and governance, emphasizing its potential to drive both innovation and ethical challenges. Altman expressed hope for a future where AGI aligns with human values and contributes positively to the world.
He remarked, “We believe in designing for a human-compatible world,” underscoring OpenAI’s commitment to developing AGI that benefits humanity while remaining safe and aligned with societal goals.
Altman also addressed the ethical and societal implications of AI, including its potential to exacerbate or mitigate income inequality. He cited examples of AI tools being used to support non-profit organizations and people in crisis zones, illustrating AI’s potential to benefit the most vulnerable populations.
“You can see ways in which […] AI does more to help the poorest people than the richest people,” he said, expressing optimism about AI’s role in promoting social equity.
This optimistic view highlights the potential of AI to drive positive social change, provided it is deployed thoughtfully. However, Altman also acknowledged the potential need for changes to the social contract as AI continues to transform the economy and labor market. He predicted that AI’s impact would necessitate new approaches to social safety nets and economic structures.
“I don’t think that’ll require any special intervention […] but over a long period of time, I still expect that there will be some change required to the social contract given how powerful we expect this technology to be. I’m not a believer that there won’t be any jobs; I think we always find new things, but I do think the whole structure of society itself will be up for some degree of debate and reconfiguration,” he stated.
This long-term perspective underscores the profound impact AI could have on societal norms and structures. For Altman, this reconfiguration will be driven not by large language model companies but by the dynamics of the broader economy and by societal decisions. He argued that this evolution has been ongoing as the world has grown wealthier, citing the development of social safety nets as a prime example.
In a thought-provoking moment, Altman discussed the possibility that AI could foster a greater sense of humility and awe in humans. He suggested that as AI becomes more capable, it could lead to a broader appreciation of the complexities of the world and humanity’s place within it.
“I would bet that there will be a widespread […] increase in awe for the world and [our] place in the universe,” he said. This philosophical reflection adds a deeper dimension to the conversation about AI.
Sam Altman reflected on the history of science, drawing parallels between past scientific revolutions and the current advancements in AI. He noted that throughout history, scientific discoveries have consistently shifted humanity’s perspective, making us realize our smaller role in the vast universe.
“In some sense, the history of science has been humans becoming less and less at the center,” Altman remarked.
He explained how humanity once believed that the sun revolved around the Earth, a viewpoint known as the geocentric model. This perspective changed with the heliocentric model, which correctly identified that the Earth revolves around the sun. Altman suggested that AI might be another step in this journey, prompting a broader and more humble understanding of our place in the cosmos.
Altman also delved into the practical aspects of AI development and deployment. When asked about the Scarlett Johansson episode, in which a voice model sounded remarkably similar to the actress despite her not participating, Altman clarified, “It’s not her voice… It’s not supposed to be.”
The conversation turned to the future of AI governance, with Thompson probing Altman on the governance model of OpenAI. Altman reiterated the company’s commitment to responsible AI development and the importance of transparency and accountability. He acknowledged past criticisms but emphasized that OpenAI’s track record and ongoing efforts reflect their dedication to safety and ethical considerations.
One of the more radical ideas Altman touched upon was the potential for AI to enable a new form of governance, where individual preferences could be directly inputted into decision-making processes. This concept, which Altman had previously discussed in passing, suggests a future where AI could facilitate a more direct and participatory form of democracy.
“I think it would be a great project for the UN to start talking about how we’re going to collect the alignment set of humanity,” he said.
Altman elaborated on the idea, envisioning a system in which people could express their preferences through AI and have them aggregated into collective decision-making. He highlighted the importance of developing frameworks that account for the diverse perspectives and needs of the global population.
“You can imagine a world where eventually people can chat with ChatGPT about their individual preferences and have that be taken into account for the larger system and certainly how it behaves just for them,” he explained.
Altman concluded with a call to action for policymakers and AI developers to balance the incredible potential of AI with the serious risks it poses. He emphasized the need for holistic consideration of AI’s impact and urged stakeholders to remain vigilant and adaptive.
“Don’t neglect the long term and don’t assume that we’re going to asymptote off here,” he advised.
His closing remarks encapsulate the dual challenge of AI: harnessing its transformative power while safeguarding against its potential dangers.