
AI: The next enabler of media, journalism, and content creation?

In a digital world overloaded with content and short on resources, reaching and engaging new audiences has been a persistent challenge for creative industries. Some top media players have turned to artificial intelligence (AI) for a possible solution.

For Kati Bremme, executive product manager at France Télévisions, AI has been a “faithful companion” to speed the digital transformation of media amid shifting audience expectations.

But the adoption of AI-powered technologies in media has been slow compared with other sectors, she said, speaking at an online event organized by the European Broadcasting Union’s AI and Data Initiative (AIDI).

Lack of resources, limited understanding, and the low number of use cases to date continue to hold back media AI use, she added.

Knowing the user

The AI Maturity Model produced by research and advisory firm Gartner shows media organizations using AI mostly at the “active” and “operational” levels. While AI-powered technologies are being tested and gradually introduced, they are far from pervasive or embedded in the business DNA, Bremme said.

Today’s so-called “narrow AI” builds on machine learning, deep learning, natural language processing (NLP) and natural language generation (NLG).

People consuming media, however, want a fuller experience, which AI can help achieve by putting the user, or “citizen”, at the centre.

Bremme highlighted the four key Cs for media AI to resonate: the citizen, content, context, and container.

It’s about understanding what users are watching, their choice of device, when they choose to watch, and even their routines and habits, like whether they consume content during their weekday commute or with family on a Sunday.

“Data will help you know audiences and can then make you produce better and smarter,” Bremme said.

Working together intelligently

More and more AI use cases are emerging in media creation, production, and distribution value chains.

Recently, when a cast member on France’s soap opera ‘Plus belle la vie’ became ill during shooting, France Télévisions used a body double and deepfake technology to complete her remaining scenes.

Massive amounts of data gathered through social listening help to spot emerging trends and identify weak areas, explained Bremme.

Several years ago, big data analytics led Netflix to commission the US version of the series ‘House of Cards’.

Automated editing and algorithmic news writing are not new either, especially for sports and finance stories, where structured data can be turned into text with simple templates, as sketched below.
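
As an illustration only: automated sports or finance stories are often generated from structured data using data-to-text templates. The snippet below is a minimal, hypothetical sketch of that idea; the team names, fields, and sample data are invented for the example and do not describe any broadcaster’s actual system.

```python
# Minimal, hypothetical data-to-text sketch for an automated sports recap.
# Field names and sample values are invented for illustration only.

match = {
    "home": "Lyon", "away": "Nantes",
    "home_goals": 3, "away_goals": 1,
    "top_scorer": "A. Dupont",
}

def recap(m: dict) -> str:
    """Fill a simple sentence template from structured match data."""
    if m["home_goals"] > m["away_goals"]:
        outcome = f'{m["home"]} beat {m["away"]}'
    elif m["home_goals"] < m["away_goals"]:
        outcome = f'{m["away"]} beat {m["home"]}'
    else:
        outcome = f'{m["home"]} and {m["away"]} drew'
    return (f'{outcome} {m["home_goals"]}-{m["away_goals"]}, '
            f'with {m["top_scorer"]} among the scorers.')

print(recap(match))
# -> Lyon beat Nantes 3-1, with A. Dupont among the scorers.
```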

AI is already widely used in media distribution, meanwhile, through content personalization based on the user’s profile and contextual data. It also supports monetization by suggesting new content based on what resonates well and by proposing content attuned to the user’s mood at the time.
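
One simple way to picture profile-based personalization is to score each catalogue item against a user’s interest profile and rank the results. The sketch below, with invented genre tags and interest weights, shows that idea in its most basic form; production recommendation systems are considerably more elaborate.

```python
# Minimal, hypothetical sketch of profile-based content personalization.
# Genre tags and interest weights are invented for illustration only.

user_profile = {"documentary": 0.8, "sport": 0.1, "drama": 0.5}

catalogue = [
    {"title": "Ocean Worlds",   "tags": {"documentary": 1.0}},
    {"title": "Sunday Match",   "tags": {"sport": 1.0}},
    {"title": "Harbour Lights", "tags": {"drama": 0.7, "documentary": 0.3}},
]

def score(item: dict, profile: dict) -> float:
    """Weighted overlap between an item's tags and the user's interests."""
    return sum(profile.get(tag, 0.0) * weight for tag, weight in item["tags"].items())

# Rank catalogue items by how well they match the profile.
ranked = sorted(catalogue, key=lambda item: score(item, user_profile), reverse=True)
for item in ranked:
    print(f'{item["title"]}: {score(item, user_profile):.2f}')
# Ocean Worlds: 0.80, Harbour Lights: 0.59, Sunday Match: 0.10
```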

Augmented experience

Soon enough, the fledgling combination of human reportage and AI will move towards intelligent collaboration in an “augmented journalism” model, Bremme surmised.

“AI can help us to better fulfil our public service mission to address needs of niche audiences,” Bremme said.

For instance, it can open up new verticals, such as coverage of minor sports events where sending expensive TV crews is not viable. Two-thirds of media leaders, out of 234 surveyed, see AI as the next big enabler for journalism, according to a recent Reuters study.

Making AI creative and keeping it ethical

Bremme hopes for a future where AI is used to bring “more innovation than imitation” through “creative” generative adversarial networks.

AI’s creativity levels can be tested objectively, she added. Just as the earlier Turing Test checked whether a machine could think like a human, today’s Lovelace Test checks whether an AI system can generate something it was not engineered to produce, and whether its designers can account for this new result.

Ethical concerns, however, remain at the forefront amid the blurring of lines between the virtual and the real.

Deepfakes have become mainstream in the media, with AI-generated anchors presenting news on China’s Xinhua or on MBN in South Korea.

The media must take these issues seriously and grapple with bias and filter bubbles, Bremme said. Striking the “right balance” between recommendations by humans and personalization by machines presents a significant challenge, and the ideals of responsibility, transparency and explainability must be upheld, she emphasized.

She called for increasing AI literacy in media organizations as well as among audiences. Collaboration is needed between media teams and with external organizations, including AI start-ups, research centres and even media competitors, to propel the ethical use of AI in media.

“We need ethics by design,” she said.

Learn more about the role of artificial intelligence in media and other areas by joining the ITU AI for Good Global Summit: all online, all year.

 

Image credit: Anthony Shkraba via Pexels
