Adversa AI invents ethical hacking of facial recognition systems


Adversa AI has launched a new method of hacking facial recognition applications.

Adversa AI, a leading Trusted AI research startup, has demonstrated a new attack method against AI-driven facial recognition applications. By making imperceptible changes to human faces, the attack causes facial recognition algorithms to misidentify people. Compared with similar approaches, the method is transferable across AI models and is more accurate, stealthy, and resource-efficient.
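Adversa AI has not disclosed the technique behind the attack. As a rough illustration of the general class it belongs to, the sketch below applies the well-known Fast Gradient Sign Method (FGSM, Goodfellow et al. 2014) to a toy linear stand-in for a face-matching model; all names and the model itself are hypothetical, and real engines use deep embeddings rather than a linear score.

```python
import numpy as np

# Toy stand-in for a face-recognition scorer: a linear model over
# flattened 64x64 pixels. This only illustrates the mechanics of a
# gradient-based evasion attack, not Adversa AI's undisclosed method.
rng = np.random.default_rng(0)
w = rng.normal(size=64 * 64)  # hypothetical model weights

def match_score(x):
    """Higher score => the model matches the image to identity A."""
    return float(w @ x)

def fgsm_perturb(x, eps=0.01):
    """Fast Gradient Sign Method: move every pixel by at most eps in
    the direction that decreases the match score. For a linear model,
    the gradient of the score with respect to x is simply w."""
    return np.clip(x - eps * np.sign(w), 0.0, 1.0)

face = rng.uniform(0.3, 0.7, size=64 * 64)  # synthetic "face" image
adv = fgsm_perturb(face)

# No pixel changes by more than eps (visually imperceptible),
# yet the model's match score drops sharply.
print(np.abs(adv - face).max())
print(match_score(face), match_score(adv))
```

The key property mirrored here is the one described above: a bounded, imperceptible per-pixel change produces a large swing in the model's output.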

The Adversa AI Red Team has demonstrated a proof-of-concept attack against PimEyes, the most popular and advanced face search engine for public images; it is similar to Clearview, a commercial facial recognition database sold to law enforcement agencies and governments. PimEyes was tricked into mistaking Adversa's CEO for Elon Musk in a photo.

Uniquely, the attack is a black-box one that was developed without any detailed knowledge of the algorithms used by the search engine, and the exploit is transferable to different facial recognition engines. As the attack allows malefactors to camouflage themselves in a variety of ways, we’ve named it Adversarial Octopus, highlighting such qualities of this animal as stealth, precision, and adaptability.
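Since Adversarial Octopus itself is not public, the following sketch only shows why a black-box attack needs no knowledge of the target's algorithms: a simple random-search loop that queries a hypothetical match-score oracle and keeps any small change that lowers the score. Every name here is an assumption for illustration; the oracle is secretly a toy linear model standing in for a real engine.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical black-box oracle: the attacker may query a similarity
# score but sees no weights or gradients (internally a toy linear model).
_w = np.random.default_rng(7).normal(size=256)

def query_match_score(x):
    return float(_w @ x)

def blackbox_evasion(x0, eps=0.02, iters=800):
    """Gradient-free random search: propose a small change to one
    coordinate at a time and keep it only if the oracle's match score
    drops. Because it needs nothing but query access, the same loop
    can be pointed at any facial recognition engine."""
    delta = np.zeros_like(x0)
    best = query_match_score(x0)
    for _ in range(iters):
        i = rng.integers(len(x0))
        cand = delta.copy()
        cand[i] = rng.choice([-eps, eps])  # keeps ||delta||_inf <= eps
        s = query_match_score(np.clip(x0 + cand, 0.0, 1.0))
        if s < best:
            delta, best = cand, s
    return np.clip(x0 + delta, 0.0, 1.0)

face = rng.uniform(0.3, 0.7, size=256)  # synthetic "face" vector
adv = blackbox_evasion(face)
print(query_match_score(face), query_match_score(adv))
```

Score-based search like this is far less efficient than gradient attacks, which is one reason transferable perturbations, crafted once and reused across engines, are considered the more dangerous variant.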

The existence of such vulnerabilities in AI applications, and in facial recognition engines in particular, may lead to dire consequences and may be exploited in both poisoning and evasion scenarios, such as the following:

  • Hacktivists may wreak havoc on AI-driven internet platforms that use face properties as input for decisions or further training. Attackers can poison or evade the algorithms of big internet companies by manipulating their profile pictures.
  • Cybercriminals can steal personal identities and bypass AI-driven biometric authentication or identity verification systems in banks, trading platforms, and other services that offer authenticated remote assistance. This attack can be even stealthier than traditional deepfakes in any scenario where those apply.
  • Dissidents may secretly use it to hide their social media activity from law enforcement, like a mask or fake ID for the virtual world we now live in.


Recently, Adversa AI released the world's first analytical report covering a decade of growing activity in the secure and trusted AI field. In the wake of interest in practical solutions for securing AI systems against advanced adversarial attacks, we have developed our technology for testing facial recognition systems against such attacks. We are looking for early adopters and forward-thinking technology companies to partner with us to implement adversarial testing in your SDLC and MLLC, increase trust in your AI applications, and provide customers with best-of-breed solutions.

For more news from Top Business Tech, don’t forget to subscribe to our daily bulletin!



Luke Conrad

Technology & Marketing Enthusiast
