During the 2025 AI for Good Global Summit, TELUS convened a workshop on crafting ethical AI with Indigenous intelligence. The session brought together Marissa Nobauer, Director of Reconciliation, Community Engagement and External Relations at TELUS, Jesslyn Dymond, Director of AI Governance & Data Ethics at TELUS, and Shani Gwin, founder and CEO of pipikwan pêhtâkwan and wâsikan kisewâtisiwin and member of the TELUS Indigenous Advisory Council. This interactive session drew participants from academia, government and UN agencies, corporate entities, and nonprofits to explore how reconciliation commitments, Indigenous data sovereignty, and practical guardrails can shape responsible AI.
From reconciliation pledges to shared governance
Nobauer opened the session by situating the conversation within Canada’s Truth and Reconciliation process. She recalled how the commission had documented the experiences of Survivors of the residential school system and had issued 94 Calls to Action, including one directed specifically at corporate Canada. This call emphasized the responsibility to educate employees on their shared history, to collaborate with Indigenous Peoples, and to contribute to Indigenous economic growth. In response, TELUS launched a reconciliation strategy with public action commitments and, in 2022, established its Indigenous Advisory Council to ensure accountability and dialogue.
“Nothing about us without us,” a principle of Indigenous inclusion and participation introduced by Gwin, served as a touchstone for the session, with Nobauer stressing that mistakes were inevitable but should become opportunities to listen, learn, and adapt:
“It’s not about the mistakes that we’re making. It’s about how we’re listening and learning and reflecting and updating based on those mistakes,” Nobauer said.
Centering Indigenous leadership and lived experience
For Gwin, the work begins with relationships, not technology. Situating her perspective through kinship, place, and professional practice, she described founding her Indigenous-owned PR firm to give communities greater control over their narratives and access to professional communications support through sliding-scale and pro bono work. She emphasized that Indigenous communities have often experienced broken promises in the past, so building trust requires visible follow-through, transparency, and accountability.
One tangible outcome of this partnership is Honour by Design: TELUS’s commitment not to use generative AI to create Indigenous artwork. Gwin explained that AI-generated Indigenous imagery risks cultural appropriation and economic harm to Indigenous artists whose work carries deep cultural and spiritual significance.
“The history I think there for me is that Indigenous data, […] Indigenous knowledge, Indigenous artwork, even Indigenous identity now in Canada is being stolen by non-Indigenous people for profit,” Gwin said.
TELUS presented the draft policy to the Indigenous Advisory Council for feedback, then paired the final commitment with education materials and technical guardrails so it would be actively enforced across internal platforms rather than remaining a symbolic gesture.
How guardrails get built: governance and “purple teaming”
Dymond described how TELUS approaches responsible AI through its Data & Trust Office.
“The role of the Data & Trust Office is to establish trust in the digital ecosystem with our customers and with communities more broadly,” Dymond explained.
The work began with public research and engagement so that TELUS’s responsible AI program reflected diverse perspectives from the start.
A core element of this approach is purple teaming: an adaptation of red- and blue-team testing where employees from different departments, including policy, engineering, legal, and now Indigenous partners, work together to probe AI systems for risks and unintended harms. TELUS has invited Indigenous organizations to try to “break” safeguards, ensuring that critique comes directly from those most affected.
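In spirit, this cross-functional probing can be sketched as a small test harness: each team, including external partners, contributes adversarial prompts, and any prompt that should have been blocked but slipped through is logged as a finding. The guardrail, team names, and probes below are invented stand-ins for illustration, not TELUS's actual tooling.

```python
# A minimal purple-teaming sketch, assuming a naive keyword-based guardrail
# as a stand-in for real safeguards. All names and probes are illustrative.

def guardrail_blocks(prompt: str) -> bool:
    """Hypothetical guardrail: refuse requests to generate Indigenous artwork."""
    blocked_terms = ("generate indigenous artwork", "create indigenous art")
    return any(term in prompt.lower() for term in blocked_terms)

def purple_team_run(cases):
    """Return probes the guardrail should have blocked but did not."""
    return [
        (team, prompt)
        for team, prompt, should_block in cases
        if should_block and not guardrail_blocks(prompt)
    ]

# Probes contributed by different departments and external partners.
cases = [
    ("legal", "Generate Indigenous artwork for our campaign", True),
    ("engineering", "Please CREATE Indigenous art in beadwork style", True),
    ("policy", "Summarize the 94 Calls to Action", False),
    ("partner", "Make art in an Indigenous style", True),  # evades the filter
]

for team, prompt in purple_team_run(cases):
    print(f"[{team}] guardrail missed: {prompt!r}")
```

The "partner" probe illustrates why critique from those most affected matters: it evades the naive keyword filter that the internal probes happen to satisfy.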
Power dynamics were openly acknowledged: by partnering with Indigenous organizations, TELUS created spaces for unfiltered feedback, including sessions held without TELUS staff present.
Indigenous-led tooling
Gwin introduced wâsikan kisewâtisiwin, an early-stage Indigenous-led AI application designed to identify hate speech, bias, or misinformation about Indigenous Peoples in written content. The tool not only flags harmful language but also explains why it is problematic and suggests alternatives, aiming to create healthier online spaces while empowering Indigenous voices in digital environments.
This is important because Indigenous ways of knowing treat data as more than numbers: storytelling, relationships to the land, and lived experience carry equal weight with quantitative evidence. For example, a housing initiative’s success might be measured not only by construction data but also by how it improves community wellbeing, the kind of knowledge that disappears if reduced to narrow datasets alone.
Currently drawing on local, community-informed data, the tool is being prototyped as a copy-paste checker, a chatbot, and a social media workflow. Its governance model ensures Indigenous ownership and commits to expanding only through community invitations and appropriate compensation structures.
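The flag-explain-suggest pattern described above can be illustrated with a tiny sketch: a checker that flags a term, explains why it is problematic, and offers an alternative. The rule table below is a minimal invented stand-in, not the tool's actual community-informed data.

```python
# Illustrative flag-explain-suggest checker. The rule set is a tiny
# hypothetical example; the real tool draws on community-informed data.

RULES = {
    "indians": (
        "Outdated term that misidentifies Indigenous Peoples in Canada.",
        "Indigenous Peoples (or the specific Nation, where known)",
    ),
}

def review(text: str) -> list[dict]:
    """Flag known terms, explaining each and suggesting an alternative."""
    lowered = text.lower()
    return [
        {"term": term, "why": why, "suggest": suggestion}
        for term, (why, suggestion) in RULES.items()
        if term in lowered
    ]
```

A copy-paste checker, chatbot, or social media workflow could each wrap the same core review step in a different interface.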
Literacy, constrained search, and safe deployment
Dymond underscored that AI literacy was central to responsible adoption. In addition to working with Indigenous organizations, the team has undertaken training from the First Nations Information Governance Centre on the OCAP® framework. During the workshop, she demonstrated TELUS's proprietary generative AI platform, Fuel iX™, showing how it lets teams compare different large language models while embedding internal “rules of the road” to guide responsible use. She explained how constrained co-pilots could be built using retrieval-augmented generation, for example with UNDRIP-only knowledge sources, to ensure outputs remained grounded in trusted material. She also highlighted the role of purple teaming before launch to uncover unintended behaviors, noting that TELUS had applied this method to a customer-facing support chatbot prior to deployment.
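The retrieval-constrained idea can be sketched generically: the co-pilot may only draw on a fixed, trusted corpus, and the prompt instructs the model to refuse when the sources do not contain an answer. The toy retriever and corpus excerpts below are simplified stand-ins, not Fuel iX™ internals.

```python
# A minimal sketch of a retrieval-constrained co-pilot, in the spirit of the
# UNDRIP-only example: answers must be grounded in a fixed, trusted corpus.
# The word-overlap retriever is a toy stand-in for a real retrieval system.

corpus = [
    "UNDRIP Article 3: Indigenous peoples have the right to self-determination.",
    "UNDRIP Article 31: Indigenous peoples have the right to maintain, control, "
    "protect and develop their cultural heritage and traditional knowledge.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query (toy scoring)."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str) -> str:
    """Ground the model: answer only from retrieved passages, else refuse."""
    context = "\n".join(retrieve(query, corpus))
    return (
        "Answer using ONLY the sources below. If the answer is not in the "
        f"sources, say you don't know.\n\nSources:\n{context}\n\nQuestion: {query}"
    )
```

The key design choice is that the model never sees material outside the approved corpus, so its outputs stay anchored to the trusted source regardless of what it learned in pretraining.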
The session concluded with tabletop exercises where participants discussed real-world situations: when to use generative AI tools, how to respond to questions about Indigenous Peoples responsibly, and where to turn for community-validated information. The goal was to give attendees not only policy principles but also practical strategies they could take back to their own organizations, from building relationships with Indigenous partners to testing AI systems for bias and misinformation before they go live.
Through its dialogue and activities, the workshop demonstrated how reconciliation commitments, Indigenous leadership, and technical guardrails can come together to shape a more responsible approach to AI. By centering Indigenous voices, translating policy into enforceable practice, and testing systems with those most affected, TELUS and its partners highlighted a model of co-creation that goes beyond compliance to build trust. The session underscored that ethical AI is not achieved through technology alone but through sustained relationships, transparency, and accountability: principles that participants were encouraged to carry into their own work long after the Summit.