Creating sustainable, trustworthy AI

We heard from Anna Felländer, Founder of the AI Sustainability Center, who advised on how organizations can achieve sustainable and trustworthy AI.

The acceleration of AI over the last nineteen months has raised further ethical and societal risks. At the Data Innovation Summit 2021, Anna Felländer, founder of the AI Sustainability Center, explained how organizations can use sustainable AI to address these issues and avoid costly legal, financial and reputational risks.

Felländer founded the AI Sustainability Center in 2018. She believes that AI will become as commonplace as electricity and will continue to revolutionize all aspects of our lives. However, she notes that AI is not without its risks: privacy invasion, discrimination, lost autonomy and disinformation are among the challenges companies face. Beyond this, AI can harm human safety, amounting to a violation of human rights. Unfortunately, much of this escapes human governance, because AI scales quickly and because attention has mainly been focused on the engineering side.

Why is it so hard to mitigate societal risks in AI?

To begin with, Felländer explains that algorithms are optimized for targets, sales and profits. Along the way, the AI makes ethically loaded decisions in pursuit of those targets that remain hidden from the coders. There are no ready-made, go-to-market solutions for governing AI, which is why ethical failures are so costly, both financially and reputationally. Felländer references Microsoft, which came under fire when its facial recognition software failed to recognize Black people. The same bias appears in credit lending, where AI-based automated decisions will authorize credit for a man but not for a woman even when both have the same credit score. Sometimes these risks arise because the coder is unable to translate the company's values into code. Coders are rarely taught the multidisciplinary skillset this requires, and they may be unaware of the context in which the AI will be used, so bias like this is unsurprising. Asymmetric information makes it difficult to know how to mitigate it. Still, every organization needs to consider it when implementing its AI strategy, as the ethical and financial risks are far higher than an organization may initially realize.
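To illustrate the kind of check that can surface the credit bias Felländer describes, here is a minimal sketch of a fairness audit in Python; the decision log, column names and values are hypothetical, not data from the talk.

```python
import pandas as pd

# Hypothetical loan-decision log; columns and values are illustrative.
decisions = pd.DataFrame({
    "gender":       ["M", "F", "M", "F", "M", "F"],
    "credit_score": [700, 700, 650, 650, 720, 720],
    "approved":     [1,   0,   1,   0,   1,   1],
})

# Demographic parity check: compare approval rates across groups.
rates = decisions.groupby("gender")["approved"].mean()
print(rates)

# Flag applicants with identical credit scores but different outcomes --
# the exact pattern Felländer describes.
paired = decisions.merge(decisions, on="credit_score", suffixes=("_a", "_b"))
conflicts = paired[
    (paired["gender_a"] != paired["gender_b"])
    & (paired["approved_a"] != paired["approved_b"])
]
print(conflicts[["credit_score", "gender_a", "approved_a",
                 "gender_b", "approved_b"]])
```

Even an audit this simple makes the asymmetric-information problem visible: the disparity only shows up once someone thinks to group the decisions by a protected attribute.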

Developing a methodology for ethical risks in AI

Felländer has spent years developing a methodology to detect, assess and govern ethical risks in AI applications. The AI Sustainability Center approaches this from a multidisciplinary perspective, through a legal, technical and societal lens. Just as importantly, to activate the whole organization, its framework for detecting, assessing and mitigating these risks visualizes ethical considerations in AI applications that would otherwise go unnoticed. These considerations do not belong in a silo with only the legal or tech team; they are business-critical. Felländer gives examples of the ethical trade-offs in AI solutions:

  • Explainability vs accuracy
  • Fairness vs precision
  • Profits vs values

For example, if an individual had a machine attached to them to measure heart rate, precision would have to be favored over fairness. Translating this to AI ethics, organizations need support in navigating these trade-offs. Felländer notes that this is becoming increasingly essential for high-risk organizations, who will need to explain their AI procedures when the EU Artificial Intelligence Regulation comes into effect in 2024.
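To make the fairness-versus-precision trade-off concrete, here is a minimal sketch using synthetic risk scores; the score distributions, thresholds and group labels are illustrative assumptions, not figures from Felländer's talk. A single global threshold gives the best accuracy but unequal approval rates across the two groups, while group-specific thresholds equalize approval rates at an accuracy cost.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical risk scores for two groups, where group B's scores are
# systematically shifted down (a stand-in for historical bias).
n = 1000
score_a = rng.normal(0.60, 0.15, n)
score_b = rng.normal(0.50, 0.15, n)
label_a = (score_a + rng.normal(0, 0.05, n) > 0.55).astype(int)
label_b = (score_b + rng.normal(0, 0.05, n) > 0.55).astype(int)

def report(thr_a, thr_b):
    """Print accuracy and per-group approval rates for given thresholds."""
    pred_a, pred_b = score_a > thr_a, score_b > thr_b
    acc = np.concatenate([pred_a == label_a, pred_b == label_b]).mean()
    print(f"thresholds ({thr_a:.2f}, {thr_b:.2f}): accuracy={acc:.3f}, "
          f"approval rate A={pred_a.mean():.2f}, B={pred_b.mean():.2f}")

# One global threshold: highest accuracy, unequal approval rates.
report(0.55, 0.55)

# Group-specific thresholds: approval rates equalized, accuracy dips.
report(0.57, 0.47)
```

Which point on that curve to pick is exactly the kind of business-critical judgment Felländer argues cannot be left to the coder alone.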

Organizations are gradually rising to meet this responsibility head-on, with Salesforce among the first US organizations to hire an ethical AI officer. Organizations just moving to explainable AI need to ask: What needs to be explained? What can be explained? To whom do we need to explain it? Companies will now need to scan their vast portfolios of AI and ML models to see what must be explained to the EU.
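As a starting point for "what can be explained", here is a minimal sketch using scikit-learn's permutation importance to rank which inputs actually drive a model's decisions; the model and data are placeholders, not any system Felländer discussed.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Placeholder model and data standing in for a production decision model.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does performance drop when each
# feature is shuffled? Large drops mark the inputs a regulator-facing
# explanation would have to cover.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: importance={imp:.3f}")
```

A screen like this does not explain individual decisions, but it tells an organization where to focus its explainability effort first.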

Ethical AI screening

Felländer says that the AI Sustainability Center has created ethical AI filters. She likens the risks ignored until now to a "dark cloud of pollution" that companies such as the AI Sustainability Center seek to mitigate. She believes the EU's regulation will drive the region forward in creating transparency, giving compliant organizations a competitive edge. The AI Sustainability Center has built an insight engine for ethical AI governance, with an ethical AI profiler that screens AI applications and businesses for ethical and societal risks. It has also developed solutions that can predict risks and recommend mitigation tools. Felländer says the Center supports scaleups, major corporations and recruitment companies. She concludes that all organizations should embrace AI and ethical AI together, and encourages them to educate themselves on the ethics of AI.



Amber Donovan-Stevens

Amber is a Content Editor at Top Business Tech
