We must not wait for the government to fill the vacuum of AI governance.

In July this year, the UK set out a new AI ‘rulebook’ proposing how this transformative technology may be regulated in the future. Balancing support for AI innovation with the need for public protection, it opens a public debate on the role of regulators within AI governance structures. However, organizations cannot wait until this rulebook is finalized to deal with the questions AI poses.

From concerns over opaque black-box algorithms to questions about the ethical use of personal data and responsibilities around security and privacy, AI has become a hotbed of modern ethical dilemmas.

These dilemmas must be addressed by the swathes of public and private organizations now relying on AI to power innovation. However, despite the proliferation of AI in the enterprise, many organizations still lack strong AI governance crucial to ensuring the integrity and security of data-led systems.

In fact, the latest O’Reilly research shows that over half of AI products in production at global organizations still have no governance plan overseeing how projects are created, measured, and observed. AI-empowered organizations report that ‘unexpected outcomes’ are the most significant risk facing AI projects, followed closely by model interpretability and model degradation – both business issues. Deeply concerning is that privacy and security – issues that may directly impact individuals – were among the risks least cited when organizations were asked how they evaluate the risks of AI applications: interpretability, privacy, fairness, and safety all ranked below business risks.

There may be AI applications where privacy and fairness are not issues (for example, an embedded system that decides whether the dishes in your dishwasher are clean). However, companies with AI practices must prioritize the human impact of AI as both an ethical imperative and a core business priority.

As UKRI (UK Research and Innovation) highlights, ‘responsible use of AI is proving to be a competitive differentiator and key success factor for the adoption of AI technologies. However, cultural challenges, and particularly the lack of trust, are still deemed to be the main obstacles preventing broader and faster adoption of AI.’

Lack of governance is not just an ethical concern. Security is also a massive issue, with AI subject to many unique risks: data poisoning, malicious inputs that generate false predictions, and reverse engineering models to expose private information, to name a few. However, security remains close to the bottom of the list of perceived AI risks.
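To make one of these security risks concrete, here is a minimal, hypothetical sketch – a toy NumPy model, not any real product or attack toolkit – showing how a small, crafted perturbation to an input can flip a model’s prediction. It follows the fast-gradient-sign idea that underlies many malicious-input attacks; all names and numbers below are illustrative assumptions.

```python
import numpy as np

# Toy linear "approval" model (hypothetical weights and bias).
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict(x):
    """Return 1 if the model approves the input, else 0."""
    return int(w @ x + b > 0)

x = np.array([0.5, 0.1, 0.2])   # a legitimate input: approved

# The gradient of the score with respect to the input is just w,
# so stepping against sign(w) lowers the score most quickly.
eps = 0.3
x_adv = x - eps * np.sign(w)    # small crafted perturbation

print(predict(x), predict(x_adv))  # the prediction flips: 1 then 0
```

Real models are nonlinear, but the same principle applies: without governance controls such as input validation and adversarial testing, predictions can be manipulated by inputs that look almost legitimate.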

With cybercriminals and bad actors surging ahead in their adoption of sophisticated technology, cybersecurity cannot take a back seat in the race to realize AI’s promise. It is a vital strand of much-needed AI governance. Governance must rise up the list of risk factors for AI projects, becoming a cornerstone of any development and deployment program.

AI governance in a nutshell

With that in mind, what exactly is AI governance? According to Deloitte, it encompasses a ‘wide spectrum of capabilities focused on driving the responsible use of AI. It combines traditional governance constructs (policy, accountability, etc.) with differential ones such as ethics review, bias testing, and surveillance. The definition comes down to an operational view of AI and has three components: data, technique/algorithm, and business context.’

In summary, ‘achieving widespread use of AI requires effective governance of AI through active management of AI risks and implementation of enabling standards and routines.’

Without formalizing AI governance, organizations are less likely to know when models are becoming stale, results are biased, or when data is improperly collected. Companies developing AI systems without stringent governance to tackle these issues are risking their businesses. They leave the way open for AI to effectively take control, with unpredictable results that could cause irreparable damage to reputation and large legal judgments.

Not least among these risks is that legislation will impose governance, and those who have not been practicing AI governance will need to catch up. In today’s rapidly shifting regulatory landscape, playing catch-up is a risk to reputation and business resilience.

What has created the AI governance gap?

The reasons for AI governance failure are complex and interconnected. However, one thing is clear – accelerated AI development and adoption has not been matched by a surge in education and awareness of its risks. In other words, AI is suffering from a people problem.

For example, the most significant bottleneck to AI adoption is a lack of skilled people. Our research demonstrates significant skills gaps in key technological areas, including ML modeling and data science, data engineering, and the maintenance of business use cases. The AI skills gap is well documented, with much government discussion and policy aimed at driving data skills through focused tertiary education and up/reskilling.

However, technological skills are not enough to bridge the gap between innovation and governance. It is neither advisable nor fair to leave governance to technical talent alone. Undoubtedly those with the skills to develop AI must also be equipped with the knowledge and values to make decisions and problem solve within the broader context in which they operate. However, AI governance is truly a team effort and represents the values of an organization brought to life.

That means no organization can be complacent when embedding ethics and security within AI projects from the outset. Everyone across the organization, from CEO to data analyst, CIO to project manager, must engage in AI governance. They must align on why these issues matter and how the organization’s values play out through AI implementations.

AI innovation is surging ahead while governments and regulators across the world play catch-up on what these new technologies mean for wider society. Any morally cognizant organization puts the privacy, security, and protection of the public first and foremost, but without stringent AI governance in place, there is a potential moral void. We cannot wait for this to be filled by regulation. Organizations must be as proactive about facing these difficult questions as they are about embracing the power and promise of AI.