AI-generated content (AIGC) is reshaping how people create, consume, and trust digital media. What once seemed like niche experimentation has now become part of everyday user experiences, from viral clips on social platforms to emerging tools for journalism, education, and accessibility. But with this transformation come pressing questions: how can platforms ensure content provenance, protect information integrity, and maintain user safety in an AI-driven era?
At the 2025 AI for Good Global Summit, a workshop held in partnership with TikTok set the stage for a discussion moderated by Jade Nester, Director of Data Public Policy Europe at TikTok, on how platforms can navigate the opportunities and risks of AI-generated content.
Platforms leading with transparency
Opening the discussion, Kushal Sagar Prakash, Head of Global Institution Engagement at TikTok, emphasized both sides of the equation. “Over recent years, we have seen firsthand how AI can unlock incredible potential for creators and communities of all kinds,” he said, pointing to examples such as reanimating historical figures to teach history and small businesses using AI to reach new audiences.
At the same time, Prakash cautioned that “given AI is a powerful technology, we know that it needs the right guardrails and transparency to ensure that it’s used for good.”
For platforms, transparency has become the first line of defense against misuse of AI. TikTok’s own steps include an AI-powered alternative text generator for accessibility, mandatory labeling of realistic AI-generated content, and early adoption of C2PA content credentials. As he put it, “we are proud to be the first social media or video sharing platform to launch a tool that helps creators easily label content including AI generated content.” Alongside tools, he argued, platforms must invest in education:
“Tools and rules alone aren’t enough […] communities need to understand why responsible AI matters,” Prakash said.
These initiatives illustrate a dual strategy: making transparency tools available to creators, while also educating communities on why these measures matter.
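To make that labeling step concrete, the sketch below (Python, standard library only) shows the kind of decision logic a platform could apply when choosing whether to surface an “AI-generated” label: show it when the creator self-discloses, or when the uploaded file carries machine-readable provenance metadata pointing to a generative source. The class, field, and function names are illustrative assumptions for this article, not TikTok’s actual implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch of a platform-side labeling decision; the field and
# function names are illustrative and not taken from any real codebase.

@dataclass
class Upload:
    creator_disclosed_ai: bool        # creator ticked an "AI-generated" toggle
    has_ai_provenance_metadata: bool  # e.g. a content credential flagging a generative source
    is_realistic: bool                # realistic-looking media is where labels matter most

def should_show_ai_label(upload: Upload) -> bool:
    """Label realistic content when either signal indicates AI generation."""
    if not upload.is_realistic:
        return False
    return upload.creator_disclosed_ai or upload.has_ai_provenance_metadata

if __name__ == "__main__":
    clip = Upload(creator_disclosed_ai=False,
                  has_ai_provenance_metadata=True,
                  is_realistic=True)
    print(should_show_ai_label(clip))  # True: provenance metadata alone triggers the label
```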
With that framing, a panel of experts from policy, journalism, civil society, and standards bodies joined the conversation to explore how transparency and trust can guide the future of AI-generated content.
Technical standards and provenance
Andrew Jenks, Director of Media Provenance at Microsoft and Chairman of the Coalition for Content Provenance and Authenticity (C2PA), traced how provenance standards evolved in response to deepfakes and synthetic media. “Way back in the day, in 2020, Microsoft partnered up with BBC, CBC, and the New York Times specifically because they were concerned about the coming impact of deep fakes,” he recalled. That work merged with Adobe’s parallel efforts, leading to the creation of C2PA.
What began as an effort to authenticate media has since shifted toward synthetic disclosure. Jenks noted that when ChatGPT appeared in November 2022, many suddenly realized the urgency of distinguishing between what was authentic and what was synthetic, and C2PA proved relatively fit for purpose, leading to its rapid adoption for generative AI disclosure use cases. The coalition now counts hundreds of members, and its technical standards have advanced to a new version incorporating stronger approaches to privacy, attribution, and security.
Yet challenges remain. As Jenks noted, the line between real and synthetic is increasingly blurred.
“Nothing that you see in your daily life is 100% authentic or 100% synthetic anymore. It is some continuum of all of it,” Jenks explained.
The next step, he argued, is making provenance understandable for everyday users.
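For readers curious what a content credential actually carries, the sketch below (Python, standard library only) illustrates the general idea behind C2PA manifests: a set of assertions about how an asset was made, including, for generative AI, an action tagged with the IPTC digital source type for algorithmically generated media. The manifest shown is a simplified, hypothetical JSON stand-in; real credentials are cryptographically signed structures embedded in the asset and are normally read with the official C2PA SDKs or the c2patool command-line tool rather than parsed by hand.

```python
import json

# A simplified, illustrative C2PA-style manifest (real manifests are signed
# binary structures embedded in the asset, not loose JSON like this).
EXAMPLE_MANIFEST = json.dumps({
    "claim_generator": "ExampleGenerator/1.0",
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        # IPTC digital source type used to flag content
                        # produced by a generative model.
                        "digitalSourceType": (
                            "http://cv.iptc.org/newscodes/digitalsourcetype/"
                            "trainedAlgorithmicMedia"
                        ),
                    }
                ]
            },
        }
    ],
})

AI_SOURCE_TYPE = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def is_ai_generated(manifest_json: str) -> bool:
    """Return True if any action declares a generative-AI digital source type."""
    manifest = json.loads(manifest_json)
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion.get("data", {}).get("actions", []):
            if action.get("digitalSourceType") == AI_SOURCE_TYPE:
                return True
    return False

if __name__ == "__main__":
    print(is_ai_generated(EXAMPLE_MANIFEST))  # True for this example
```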
Trust across journalism, creators, and communities
Journalist, author and creator Sophia Smith Galer brought a perspective shaped by working at the intersection of news and online influence. She described how journalists and content creators are having very different conversations about AI. In newsrooms, trust is paramount but fragile:
“Everyone is always worried about trust in news media especially because it looks like we’re losing it from our audiences,” Smith Galer explained.
By contrast, creators often view AI as an opportunity, seeing it as a way to generate income and to solve practical problems like producing B-roll footage.
This divide, she argued, has led to fragmented conversations. While journalists debate whether AI-generated clips are acceptable at all, creators assume audiences will recognize them as synthetic, which may not be true. Smith Galer called for cross-learning: “Online creators and influencers have a lot to learn from journalists and journalists have an awful lot to learn from the other side as well.” At the same time, she noted that creators are often anxious about intellectual property, fearing that their names, voices, or likenesses could be reused without proper credit or compensation.
She also raised the stakes around language and cultural preservation, introducing the concept of “linguicide.” With over 7,000 languages in existence and half at risk of disappearing this century, AI could help document them, but it could also distort them. For languages on the verge of disappearing, she stressed, accuracy becomes especially critical, and AI must be used with extreme care.
Social responsibility in AI-generated content
Mutale Nkonde, CEO of AI for the People, stressed that AI-generated content is as much a social challenge as a technical one. “It’s something that people in the research community would think of as a technosocial problem,” she said, highlighting how engineering fixes alone cannot address risks of bias or harm.
She pointed to a recent case where an AI image generator, in an effort to ensure diversity, produced historically inaccurate depictions. The attempt at proportional representation backfired because it ignored the social and historical context in which those images were situated. She noted that while representation is important, attempts to achieve it without considering the broader social dimensions are likely to fall flat.
Bias, she added, is further compounded by hallucination. When systems generate inaccurate outputs, the errors raise questions of accountability: should the mistake be attributed to the individual creator who used the tool, or to the system itself? Nkonde argued that these risks make closer collaboration essential:
“When we’re creating these models, there has to be a conversation between engineering and policy,” Nkonde emphasized.
She also warned against offloading all responsibility to platforms, arguing that it is unrealistic to expect companies like TikTok to act as “society’s police”, and that governments and larger systems should at least provide a baseline of accountability.
Policy and education for trust
Areeq Chowdhury, Head of Policy for Data and Digital Technologies at the Royal Society, underlined the need for expert involvement in testing AI systems and designing policy. Ahead of the 2023 AI Safety Summit, his team asked climate scientists and infectious disease specialists to probe Meta’s Llama 2 model for weaknesses, quickly exposing vulnerabilities in its safeguards.
“You need expert voices involved in the design of those guardrails because they’re able to rip holes throughout that platform,” Chowdhury explained.
But technical measures alone are not enough. His research showed most users “were not able to detect a deep fake even when they had a warning.” He stressed that education is crucial, especially for marginalized communities, yet underfunded and inconsistent initiatives leave those most vulnerable to misinformation the least equipped to recognize and resist it.
Chowdhury also cautioned that policy debates should not only focus on risks. The Royal Society has recently published a report on disability technology, examining how digital tools can serve people with disabilities across all aspects of life. He cited AI-generated alt text and sign language as promising examples, but emphasized that real progress depends on inclusion: these tools will only succeed if disabled people are directly involved in shaping them.
Trust under pressure
Following the panel, the discussion deepened during an audience Q&A, which brought out some pressing challenges around AI-generated content.
One theme was the sheer accessibility and scale of manipulation tools. As Jenks observed, falsified images are not new, but what once required specialized skills and software can now be produced instantly on a phone, a shift that dramatically expands the scale and speed at which misinformation can spread.
The discussion also addressed trust in high-stakes contexts such as conflict zones. Jenks pointed to how provenance tools like C2PA can help document and verify the status of cultural sites, preventing misinformation about their destruction. Yet, Nkonde cautioned that institutional assurances are not always enough in moments of crisis, where people often turn to trusted local voices instead.
Another concern was the public’s ability to navigate the impact of AI. Nkonde highlighted automation’s effect on employment, arguing that job losses are more often driven by digital systems than by foreign workers. Jenks warned of declining media literacy, observing that young people today are less able to assess online trustworthiness than they were five years ago. Smith Galer added that in the UK, cuts to education budgets have left teachers ill-equipped to deliver digital literacy training.
These exchanges underscored that trust is most vulnerable under conditions of conflict, uncertainty, and social change, and that transparency must be paired with education and community-level engagement to be effective.
Towards a trusted AI content ecosystem
Across platforms, standards bodies, journalism, and civil society, one theme stood out: preserving trust in an AI-driven content ecosystem requires more than technical fixes. Provenance standards like C2PA, platform transparency measures, creator practices, and policy frameworks all contribute pieces of the puzzle. But without education, inclusion, and collaboration, users will remain vulnerable to manipulation and distrust. In the end, the session made clear that transparency is only the starting point. Building trust in AI-generated content is a shared responsibility, one that will determine whether AIGC becomes a force for confusion and harm or for creativity, accessibility, and empowerment.