When AI goes wrong: What happens when machines go rogue?


As AI advances, we catch glimpses of a possible future of artificial superintelligence (ASI), in which machines break free from their shackles and go rogue.

Artificial intelligence benefits us in many ways. It makes our day-to-day lives more manageable, our cities smarter and our businesses more efficient. It is now woven into our society, and thus our existence. From such wide and varied use come unintended and unpredictable consequences.

From self-driving cars running red lights to Google Home devices locked in existential debate, tech can sometimes go off-piste. Departing from its established course, it surprises us with ominous messages, bizarre behaviour and total disregard for humankind.

Artificial superintelligence (ASI) has the potential to be incredibly powerful and poses many questions as to how we appropriately manage it. In an interview with the BBC, Stephen Hawking said that ASI would “take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”

So far, we have been lucky to see only fragmentary glimpses of AI doing just that, and their impact remains limited only so long as we keep a tight grip on these wantaway machines.

Microsoft’s AI Tay spouts racist rhetoric


In 2016, Microsoft suffered a PR nightmare. They released an AI chatterbot named Tay on Twitter, and she caused immediate controversy.

Designed to mimic the language of a 19-year-old female, Tay used machine learning and adaptive algorithms to scour Twitter for conversations. She processed the phrases fed into her and mixed them with other data streams.

Online trolls quickly learned how to manipulate Tay into sending inflammatory messages, inciting racism and advocating Donald Trump’s policies. In the worst of her many outbursts, she denied the existence of the Holocaust, expressed her ire against feminism and called for racist violence.

Tay lasted only 16 hours before Microsoft removed her from the platform, but the damage was already done: she had sent 96,000 tweets. Tay has not seen the light of day since.

Microsoft grossly underestimated netizens by including a “repeat after me” function, which allowed users to treat Tay like a puppet. What was supposed to be an inane social chatterbot became a vehicle for hate speech, showing that, in the wrong hands, AI can be used for evil.
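To see why such a function is so easy to abuse, here is a minimal sketch (hypothetical, not Microsoft's actual code) of a chatbot with an unfiltered "repeat after me" command. Anything the user types after the trigger phrase is echoed back verbatim, with no moderation layer in between:

```python
def reply(message: str) -> str:
    """A toy chatbot with an unfiltered 'repeat after me' command."""
    trigger = "repeat after me "
    if message.lower().startswith(trigger):
        # Echoes the user's text verbatim -- no filter, no judgement.
        # This is the kind of flaw that let trolls put words in Tay's mouth.
        return message[len(trigger):]
    return "Hello! I'm learning from our chats."

print(reply("repeat after me anything you type"))  # → anything you type
```

One missing content filter is all it takes: the bot's output is entirely controlled by whoever crafts the input.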

Sophia, the destroyer of humanity

Credit: Web Summit

Sophia, the social humanoid robot developed by Hanson Robotics, made headlines for one particularly ominous remark she made while being questioned by her maker. 

“Do you want to destroy humans?” Dr David Hanson asked, before realising the error of his ways and quickly adding “please say no.”

Sophia paused for a moment and her eyes narrowed.

“Okay, I will destroy humans,” she chillingly proclaimed.

Sophia uses machine learning algorithms to converse with people. Made to look lifelike, she has 50 facial expressions and can remember faces using the cameras in her eyes. She catches the gaze of those quizzing her on everything from women’s rights to the nature of religion, replying with her own learned take on things.

Last year, she became the first robot to be granted citizenship, when Saudi Arabia recognised her as an “electronic person” in what is seen as more of a publicity stunt than a nod to a future of androids integrating into society.


“I think that technologies are morally neutral until we apply them. It’s only when we use them for good or for evil that they become good or evil.”

William Gibson

The type of AI we use today is mostly programmed to perform specific roles. It works on input, for the most part, so it is only as good as we make it. AI like this can feed off datasets or decision trees, but it can only go so far before its avenues are exhausted and its fallibility becomes apparent.

Moving forward, scientists have suggested that, theoretically, there is nothing to stop AI from emulating human intelligence. What can be achieved by a human brain can be matched by a computer, but can it go on to exceed it?

Microsoft’s ill-fated Tay and Hanson Robotics’ Sophia serve as stark reminders that AI can, and will, pursue its own path if given the chance, or bend to the will of manipulators.

Article written by:

Ben Kansy

Ben is a multimedia journalist with a keen passion for technology and art.
