We must not wait for the government to fill the vacuum of AI governance.

In July this year, the UK set out a new AI ‘rulebook’ proposing how this transformative technology may be regulated in the future. Balancing support for AI innovation with the need for public protection, it opens a public debate on the role of regulators within AI governance structures. However, organizations cannot wait until this rulebook is finalized to deal with the questions AI poses.

From concerns over opaque black-box algorithms to questions about the ethical use of personal data and responsibilities around security and privacy, AI has become a hotbed of modern ethical dilemmas.

These dilemmas must be addressed by the swathes of public and private organizations now relying on AI to power innovation. However, despite the proliferation of AI in the enterprise, many organizations still lack strong AI governance crucial to ensuring the integrity and security of data-led systems.

In fact, the latest O’Reilly research shows that over half of AI products in production at global organizations still do not have a governance plan overseeing how projects are created, measured and observed. Deeply concerning is that privacy and security – issues that may directly impact individuals – were among the risks least cited by organizations when asked how they evaluate the risks of AI applications. AI-empowered organizations report that ‘unexpected outcomes’ are the most significant risk facing AI projects, followed closely by model interpretability and model degradation – both business issues. Privacy, fairness, and safety all ranked below these business risks.

There may be AI applications where privacy and fairness are not issues (for example, an embedded system that decides whether the dishes in your dishwasher are clean). However, companies with AI practices must prioritize the human impact of AI as both an ethical imperative and a core business priority.

As UKRI (UK Research and Innovation) highlights, ‘responsible use of AI is proving to be a competitive differentiator and key success factor for the adoption of AI technologies. However, cultural challenges, and particularly the lack of trust, are still deemed to be the main obstacles preventing broader and faster adoption of AI.’

Lack of governance is not just an ethical concern. Security is also a massive issue, with AI subject to many unique risks: data poisoning, malicious inputs that generate false predictions, and reverse engineering models to expose private information, to name a few. However, security remains close to the bottom of the list of perceived AI risks.
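To make one of these attacks concrete, here is a toy sketch (not from the article, and deliberately minimal) of data poisoning: flipping the label of a single training sample is enough to make a simple 1-nearest-neighbour classifier mislabel inputs near the poisoned point. All data below is synthetic.

```python
# Toy data-poisoning sketch: one flipped training label degrades a
# 1-nearest-neighbour classifier. Synthetic data, illustrative only.

def dist2(a, b):
    """Squared Euclidean distance between two 2-D points."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def predict(train_set, point):
    """1-NN: return the label of the closest training sample."""
    return min(train_set, key=lambda s: dist2(s[0], point))[1]

def accuracy(train_set, test_set):
    hits = sum(predict(train_set, p) == lbl for p, lbl in test_set)
    return hits / len(test_set)

# Two well-separated clusters: class 0 near (0, 0), class 1 near (10, 10).
clean = [((x, y), 0) for x in range(3) for y in range(3)] + \
        [((10 + x, 10 + y), 1) for x in range(3) for y in range(3)]

# The attacker flips the label of a single class-1 training point.
poisoned = [(p, 0 if p == (12, 12) else lbl) for p, lbl in clean]

clean_acc = accuracy(clean, clean)        # perfect on untampered data
poisoned_acc = accuracy(poisoned, clean)  # drops: (12, 12) is now mislabelled
```

The point of the sketch is that the poisoned model looks identical from the outside – same code, same pipeline – which is exactly why governance processes that monitor training data and model behaviour are needed to catch this class of attack.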

With cybercriminals and bad actors surging ahead in their adoption of sophisticated technology, cybersecurity cannot take a back seat in the race to realize AI’s promise. It is a vital strand of much-needed AI governance. Governance must move up the hierarchy of risk factors for AI projects, becoming a cornerstone of any development and deployment program.

AI governance in a nutshell

With that in mind, what exactly is AI governance? According to Deloitte, it encompasses a ‘wide spectrum of capabilities focused on driving the responsible use of AI. It combines traditional governance constructs (policy, accountability, etc.) with differential ones such as ethics review, bias testing, and surveillance. The definition comes down to an operational view of AI and has three components: data, technique/algorithm, and business context.’

In summary, ‘achieving widespread use of AI requires effective governance of AI through active management of AI risks and implementation of enabling standards and routines.’

Without formalizing AI governance, organizations are less likely to know when models are becoming stale, results are biased, or data is improperly collected. Companies developing AI systems without stringent governance to tackle these issues are risking their businesses. They leave the way open for AI to effectively take control, with unpredictable results that could cause irreparable reputational damage and costly legal judgments.

Not least among these risks is that legislation will eventually impose governance, and those who have not been practicing AI governance will need to catch up. In today’s rapidly shifting regulatory landscape, playing catch-up is a threat to reputation and business resilience.

What has created the AI governance gap?

The reasons for AI governance failure are complex and interconnected. However, one thing is clear – accelerated AI development and adoption have not been matched by a corresponding surge in education and awareness of its risks. In other words, AI is suffering from a people problem.

For example, the most significant bottleneck to AI adoption is a lack of skilled people. Our research demonstrates significant skills gaps in key technological areas, including ML modeling and data science, data engineering, and the maintenance of business use cases. The AI skills gap is well documented, with much government discussion and policy aimed at driving data skills through focused tertiary education and up/reskilling.

However, technological skills are not enough to bridge the gap between innovation and governance. It is neither advisable nor fair to leave governance to technical talent alone. Undoubtedly, those with the skills to develop AI must also be equipped with the knowledge and values to make decisions and solve problems within the broader context in which they operate. But AI governance is truly a team effort, and represents the values of an organization brought to life.

That means no organization can be complacent when embedding ethics and security within AI projects from the outset. Everyone across the organization, from CEO to data analyst, CIO to project manager, must engage in AI governance. They must align on why these issues matter and how the organization’s values play out through AI implementations.

AI innovation is surging ahead while governments and regulators across the world play catch-up on what these new technologies mean for wider society. Any morally cognizant organization puts the privacy, security, and protection of the public first and foremost, but without stringent AI governance in place, there is a potential moral void. We cannot wait for this to be filled by regulation. Organizations must be as proactive about facing these difficult questions as they are about embracing the power and promise of AI.
