The Middle East needs to talk about ethical AI

Atul Hegde, Founder of YAAP

For a Luddite, someone opposed to new technologies, the ongoing AI proliferation might seem straight out of a dystopian sci-fi movie. To those who vehemently support AI, today’s status quo is a dream come true. Neither is right: in actuality, we are collectively walking a tightrope between the perils and promises of AI.

While it is perhaps fortunate that the majority find themselves here, unable to settle on a definite stance toward this disruptive technology, ambivalence is not a state to remain in for long. The longer we delay ensuring AI’s ethical use, the harder it becomes to mitigate the risks.

Though AI’s potential risks have received their fair share of press, remedial actions have been sluggish and fragmented. As key stakeholders remain complacent, AI continues to raise ethical concerns: Will it stifle human productivity, curiosity, and creativity? Could a mature AI take humans out of the loop, rendering most of the workforce redundant? Such seemingly alarmist questions aren’t far-fetched if a recent development is anything to go by: the untimely (or timely, depending on how you see it) departure of Dr Geoffrey Hinton from Google, where he spent over a decade working on AI.

Often called the “Godfather of AI”, Dr Hinton is a well-respected luminary in the upper echelons of technology. So, what made him turn away from his life’s work? In his own words, “It is hard to see how you can prevent the bad actors from using it for bad things.” Dr Hinton is now focused on raising awareness of potential risks and probable remedies, hoping to set things right. It is not AI’s implications for jobs, misinformation, and productivity that primarily concern pioneers such as Dr Hinton, but the eventuality in which the ripple effects shake the very foundations of human societies. So, what is truly at stake?

AI mimics humans, and that isn’t without problems

Despite obvious sociocultural evolution, humans’ capacity for harmful conduct remains uncontested. Humans are prone to biases, prejudices, and insecurities, which can be passed on to AI algorithms during development. Most importantly, AI’s decision-making hinges on historical datasets, so if an algorithm draws upon biased data, it will amplify and perpetuate those biases over time, making it difficult to course-correct later. This phenomenon, known as ‘AI bias’, is especially dangerous given the growing range of use cases across sectors.

For example, AI is making inroads into the financial sector, where it is finding application in determining creditworthiness. If an algorithm is shaped by procedural biases, or its dataset is riddled with prejudices against certain social groups, then the decision on who qualifies for loans and who doesn’t becomes questionable. Likewise, as humans unconsciously consume such generative output, they are bound to go deeper down the rabbit hole to a point of no return, oblivious to the erosion of the values that give meaning to their existence. From alienation to moral corruption, unchecked AI adoption could dangerously verge on an “Orwellian” society.

Humans can also instil ethics in AI

As things stand, AI adoption is most pronounced in the private sector. Yet, despite awareness of the potential risks, most companies don’t have a robust ethics framework in place. Fortunately, as AI-based use cases are still nascent, this is an opportune moment to formulate policies and mitigate risks. Plausible considerations include continuous monitoring, implementing appropriate interpretability and explainability techniques, and maintaining hands-on involvement throughout an algorithm’s learning process. That said, companies cannot take a one-size-fits-all approach to AI ethics; organization- and sector-specific considerations are paramount. In fact, a prudent step decision-makers can take is to appoint a ‘Chief Ethics Officer’ entrusted with supervising AI development and its constant adherence to fair practices.

At the same time, governmental oversight of the private sector’s compliance will be critical to mitigating the macroeconomic and sociocultural risks posed by AI. Reports furnished by companies across sectors will enable policymakers to predict the societal impact of AI proliferation and take the steps necessary to keep developments from straying off course. Moreover, oversight allows governing bodies to institutionalize AI adoption by incorporating certain use cases into civic engagement, unlocking benefits such as more efficient administration, lean operations, and optimized resources. By and large, humans can transform AI risks into rewards with timely, appropriate action.

As it turns out, such top-down undertakings are surfacing in the Middle East. In the last few months, the Saudi Data and Artificial Intelligence Authority (SDAIA) has issued AI Ethics Principles, and the UAE’s Digital Dubai has developed a toolkit for ethical AI. Such readiness is commendable, but it must be supplemented by bottom-up efforts from the private sector and individuals, because AI is as much everyone’s responsibility as it is everyone’s problem. With a whole-of-society approach to promoting ethical AI, we can focus on its promises and leave the perils behind. One could argue that we are subjects of a global social experiment, with AI putting our collective conscience to the test. Let’s be on our best behaviour.