Creating sustainable, trustworthy AI

We heard from Anna Felländer, Founder of the AI Sustainability Centre, who advised on how organizations can achieve sustainable and trustworthy AI.

The acceleration of AI over the last nineteen months has raised further ethical and societal risks. Anna Felländer, founder of the AI Sustainability Centre, explained at the Data Innovation Summit 2021 how organizations can use sustainable AI to address these issues and avoid costly legal, financial and reputational risks.

Felländer founded the AI Sustainability Centre in 2018. She believes that AI will become as commonplace as electricity and will continue to revolutionize all aspects of our lives. However, Felländer notes that AI is not without its risks: privacy invasion, discrimination, lost autonomy and disinformation are among the challenges companies face. AI can also harm human safety, leading to violations of human rights. Unfortunately, much of this falls outside human governance, both because AI scales quickly and because attention has mainly been focused on the engineering side.

Why is it so hard to mitigate societal risks in AI?

To begin with, Felländer explains that algorithms are focused on targets, sales and profits. In pursuing those targets, AI makes ethically significant decisions that remain hidden from the coders. There are no go-to-market solutions for AI ethics, which is why it is so costly, both financially and reputationally, when AI presents ethical issues. Felländer references Microsoft, which came under fire for facial recognition software that failed to recognize black people. The same bias is present in credit lending, where AI-based automated decisions will authorize credit for a man but not for a woman, even when both have the same credit score. Sometimes these risks result from the coder being unable to translate the company's values into code: coders are not taught the multidisciplinary skillset this kind of work requires, and they may be unaware of the context in which the AI will be used, so, unsurprisingly, bias like this takes place. Asymmetric information makes such bias difficult to mitigate. Still, it is something all organizations need to consider when implementing their AI strategy, as the ethical and financial risks are far higher than an organization may initially realize.
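The credit-lending bias Felländer describes can be made concrete with a simple fairness check. The sketch below (with invented example records, not data from any real lender) computes the approval-rate gap between two groups, a common "demographic parity" measure; a large gap on applicants with comparable credit scores would flag exactly the kind of discrimination discussed above.

```python
# Illustrative sketch with hypothetical data: measuring the demographic
# parity gap in automated credit decisions.
applicants = [
    # (group, credit_score, approved) -- invented example records
    ("men",   720, True),  ("men",   680, True),  ("men",   650, False),
    ("women", 720, False), ("women", 680, True),  ("women", 650, False),
]

def approval_rate(records, group):
    """Fraction of applicants in `group` whose loan was approved."""
    decisions = [approved for g, _, approved in records if g == group]
    return sum(decisions) / len(decisions)

# A positive gap means the model favours the first group.
gap = approval_rate(applicants, "men") - approval_rate(applicants, "women")
print(f"Demographic parity gap: {gap:.2f}")
```

In practice an organization would run a check like this per model and per protected attribute, alongside score-conditional comparisons, since identical scores should yield identical decisions.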

Developing a methodology for ethical risks in AI

Felländer has spent years developing a methodology to detect, assess and govern ethical risks in AI applications. The AI Sustainability Centre approaches this from a multidisciplinary perspective, through a legal, technical and societal lens. Just as importantly, to activate the whole organization, its framework for detecting, assessing and mitigating these risks visualizes ethical considerations in AI applications that would otherwise go unnoticed. These considerations do not belong in a silo, or only with the legal or tech team; they are business-critical. Felländer gives examples of the ethical trade-offs in AI solutions:

  • Explainability vs accuracy
  • Fairness vs precision
  • Profits vs values 

For example, if an individual had a machine attached to them to measure heart rate, there would need to be a trade-off of precision over fairness. When translating this to AI ethics, organizations need support in navigating these trade-offs. Felländer notes that this is becoming increasingly essential for high-risk organizations, which will need to explain their AI procedures when the EU Artificial Intelligence Regulation comes into effect in 2024.

Organizations are gradually rising to meet this responsibility head-on, with Salesforce being one of the first US organizations to hire an ethical AI officer. Organizations just moving to explainable AI need to ask: What needs to be explained? What can be explained? To whom do we need to explain it? Companies will now need to scan their vast AI and ML estates to see what must be explained to the EU.

Ethical AI screening 

Felländer says that the AI Sustainability Centre has created ethical AI filters. She describes the risks ignored until now as a "dark cloud of pollution" that companies such as the AI Sustainability Centre seek to mitigate. She believes the EU's regulation will drive the region in creating transparency, also giving these organizations a competitive edge. The Centre has built an insight engine for ethical AI governance, with an ethical AI profiler that screens AI applications and businesses against ethical and societal risks. It has also developed its own AI that can predict risks and recommend tools. Felländer says the AI Sustainability Centre supports scaleups, major corporations and recruitment companies. She concludes that all organizations should embrace AI and ethical AI, and encourages organizations to educate themselves on the ethics of AI.

