Regulating Artificial Intelligence

It’s been a tough couple of weeks, and we want you to know that our thoughts and prayers are with you and your families. We have no doubt that we will emerge from this crisis stronger than ever. But until then, we hope you take care of yourself, stay indoors, and get yourself and your family members vaccinated whenever the opportunity arises.

In the meantime, here is our latest article on AI regulation and what it might mean for the industry.


Policy

The Story

Replacing or augmenting human intelligence with machine intelligence creates new risks. This should hardly be surprising. But if it weren’t evident already, here’s an excerpt from an article in Nature about GPT-3, one of the most sophisticated AI models available for commercial use.

“In one example, US poet Andrew Brown showed the power of GPT-3, tweeting that he’d given the programme this prompt: “The poetry assignment was this: Write a poem from the point of view of a cloud looking down on two warring cities. The clever student poet turned in the following rhyming poem:”

GPT-3 responded:

“I think I’ll start to rain,

Because I don’t think I can stand the pain,

Of seeing you two,

Fighting like you do.”

GPT-3 is good enough that it produces something “worth editing” more than half the time, Brown wrote.”
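For the curious, here’s roughly what prompting GPT-3 looks like in code. This is a minimal sketch, assuming access to OpenAI’s API via their Python library; the model name, placeholder API key and sampling parameters are illustrative choices, not details from the article.

```python
import openai  # assumes the `openai` Python package is installed

openai.api_key = "YOUR_API_KEY"  # placeholder; a real key is needed

# The same prompt Andrew Brown gave the model, per the Nature excerpt above
prompt = (
    "The poetry assignment was this: Write a poem from the point of view "
    "of a cloud looking down on two warring cities. The clever student "
    "poet turned in the following rhyming poem:"
)

# Engine name and sampling parameters here are illustrative assumptions
response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=64,
    temperature=0.7,
)

print(response.choices[0].text.strip())
```

The model simply continues the text it’s given, which is why a well-crafted prompt like Brown’s can coax out something that reads like a finished poem.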

AI is already doing things that seem almost magical. And it’s not just storytelling that it’s good at; it’s pervading all walks of human life. It can drive vehicles with little to no human intervention. It can detect cancer cells and help with facial recognition. Meanwhile, policymakers are using it, businesses are deploying it, and it should come as no surprise that there has been a lot of debate about regulating its use.

So a few days back, the European Union took the first stab at drafting proposals that outline how companies and governments are supposed to use AI. As an article in the New York Times points out — “The draft rules would set limits around the use of artificial intelligence in a range of activities, from self-driving cars to hiring decisions, bank lending, school enrollment selections and the scoring of exams. It would also cover the use of artificial intelligence by law enforcement and court systems — areas considered “high risk” because they could threaten people’s safety or fundamental rights.”

What might these high-risk applications be, you ask?

Let’s start with one application that’s fraught with ethical issues: facial recognition.

A scientific paper published in 2018 showed how you could train algorithms to distinguish the faces of Uyghur people, a Muslim minority ethnic group who’ve been under constant Chinese surveillance. And while an autocratic state like China can get away with such blatant violations of privacy, more liberal countries could end up doing the same if they don’t have stringent regulations prohibiting the use of AI in facial recognition technologies. So the draft rules propose prohibiting the use of facial recognition technology by law enforcement for surveillance purposes. You can’t use live face detection models in public spaces unless the “situations involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack.”

AI systems deployed to subliminally manipulate people into harming themselves or others will also be banned outright. For instance, a few months ago, news media reported that a chatbot built on GPT-3 had advised a fake patient to kill himself when he reported suicidal tendencies.

Other risky applications, like “deepfakes”, AI-generated videos that look remarkably real, will be strictly regulated, and any system responsible for creating such videos will have to clearly label them as computer-generated.

So if you’re a commercial enterprise dabbling with this stuff, you have to make sure these things don’t happen. If you don’t, you’ll be asked to furnish huge penalties: up to €30m ($36m) or 6% of global revenues, whichever is higher. As the Economist put it: "In the case of a firm as big as Facebook, for example, that would come to more than $5bn." These rules would apply to those who develop AI systems as well as organizations that use them commercially.
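To put that cap in perspective, here’s a quick back-of-the-envelope sketch of how the penalty ceiling scales with revenue. The revenue figure below is a hypothetical round number, not a reported one.

```python
def penalty_cap_eur(global_revenue_eur: float) -> float:
    """Ceiling under the EU draft rules: EUR 30m or 6% of global
    revenues, whichever is higher."""
    return max(30_000_000, 0.06 * global_revenue_eur)

# Hypothetical firm with EUR 70bn in global revenues
print(f"EUR {penalty_cap_eur(70e9):,.0f}")  # EUR 4,200,000,000
```

At that scale the 6% leg dominates, which is how you get to fines north of $5bn for a firm the size of Facebook.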

So yeah, the European Union has become the first major institution to outline draft rules on regulating AI, and we hope that more states follow suit soon enough.

Until then…

You can share this article on WhatsApp, LinkedIn and Twitter.