“AI [Artificial Intelligence] is likely to be either the best or worst thing to happen to humanity.” These are the words of a hero of mine, the late, great Stephen Hawking, speaking a couple of years ago at the opening of the Centre for the Future of Intelligence in Cambridge.
Although I don’t entirely agree with Hawking on this occasion, I am glad to see a well-known academic weighing the positives of AI. In some ways his view is more pragmatic than the commentary we are hearing from other well-known leaders across business and government; in my view, we have far more control over the future of AI than scare-mongering, apocalyptic headlines would lead us to believe.
Here’s a selection I have seen recently in the UK: ‘AI the biggest risk we face as a civilisation’, ‘AI is turning racist as it learns from humans’ and, my favourite, ‘ROBOT WARNING: Millions to lose their jobs to AI and we need to act NOW’.
It concerns me that the media narrative is still stuck in the past. Headlines like these are unhelpful at best and at worst threaten to limit the potential we can harness from this emerging technology. They are straight from Hollywood: they make for a good fictional tale, and that is where they should firmly remain.
At Sage, we’re all about doing business the right way, and as the UK’s largest technology company we take very seriously our responsibility to ensure that emerging technologies like AI are developed ethically.
There’s no debate: AI applied ethically is the future for business
It is exciting to see increased academic research into ethical AI and accountability over the last 18 months, but in truth we aren’t seeing enough schools, business leaders, governments or even the companies applying AI take responsibility for developing it ethically. That’s why last year Sage became one of the first companies in the world to address this little-acknowledged issue, calling on other leaders to adopt the principles we use within our own AI team.
Our Ethics of Code is a call to action for other business leaders, but it also shows Sage taking responsibility; we believe that leading by example will bring about the change we wish to see. Developed in-house by our VP of AI, Kriti Sharma, the code is strictly adhered to across all our AI development.
What does this code look like when it’s applied in practice? Firstly, it’s about ensuring that the teams building intelligent machines reflect the diversity of their users. We know the community of people creating scalable AI for businesses is relatively small. While the focus on quality must remain intact, expanding the diversity of the people working on AI is vital to its sustainable future.
Secondly, it’s about the responsibility we carry as business leaders. If you’re developing AI, you should act now to bring creatives, writers, linguists, sociologists and passionate people from non-traditional backgrounds on board. Over time, we must also commit to supporting training programmes that widen the talent pool beyond those who have graduated from red-brick universities. By taking these ethical steps, we will deliver AI that represents, and serves well, our community as a whole.
AI for good
As I’ve already said, Sage believes in doing business the right way. That’s why we believe AI can accelerate the UN’s Sustainable Development Goals. Building on our strengths in tech, we are now using our expertise in this field, through Sage Foundation, to incubate innovative ideas that use emerging technology to help solve humanitarian challenges.
Sage Foundation will preview rAInbow for the very first time at the AI for Good Global Summit.
Research has shown that victims of gender-based violence take a long time to report abuse to the authorities, if they do so at all. This is usually because of social stigma and the fact that the crime is often seen as ‘acceptable’, even by members of local law enforcement.
Working in partnership with the local charity Soul City, rAInbow will be a companion that supports women who are victims of domestic violence, using AI to help users find answers when even asking the question is a struggle. rAInbow will launch in South Africa in August 2018.
AI may replace, but it will also create
We have also developed Sage FutureMakers Lab to help educate young people up to the age of 18 in AI skills, and equally to demonstrate that a career in AI requires many other skills, such as problem solving and conversation design. Currently in a pilot phase, with sessions taking place in towns and cities across the UK and Ireland, we plan to take the initiative all over the world to secure the future success of young people and dispel the myth that you need a PhD to work in this field.
For those of us attending the UN’s AI for Good Global Summit, my call to action is stark: we must all take responsibility for the part we play in securing a positive future for AI.
In truth, there’s a lot of work to do, and the biggest threat we face from AI is failing to embrace the benefits this technology can bring. We need to address skills gaps to alleviate the inevitable war for talent. And we need to address the conversation about ethics and values; this, in my opinion, is where government and industry need to work hand in hand with legal, religious and education leaders to strike the right balance between technical advancement and doing the right thing.
If we act now, we can be beacons of hope, taking back to our communities the story of how AI is changing lives for the better and creating opportunities for the future.
*Views expressed do not necessarily reflect those of ITU.