The importance of responsible AI in our fast-paced world
Matthew Nolan, Senior Director of Product Marketing at Pegasystems, explains four criteria for developing AI responsibly.
Artificial intelligence (AI) drives practically everything we do online today. Predictive algorithms recommend products and services, diagnose our health, translate our words, and make calculations based on our past behavior. They even determine what news we see and what content shows up in our social media feeds (for better or for worse). What separates AI from other decisioning technology is that it “learns.” And as AI becomes even more embedded in our lives, it will continue to become more autonomous, ultimately acting without human supervision.
As the need for AI-driven technologies increases, so does the pressure on organizations to stay competitive and keep pace. The “AI arms race” has never been more intense – it’s not just about who can build the most sophisticated algorithms; it’s a battle for control of our attention and for influence. As the saying goes, “with great power comes great responsibility,” yet outside of the U.S. and Europe there are far fewer regulations on data collection and usage to protect people’s privacy. The question for many organizations becomes: “How do we make sure our AI models reflect our cultural and ethical values while staying ahead of the competition?” With businesses forced to deploy AI faster – perhaps faster than they are ready to – that speed comes at a cost.
Four tenets of responsible AI
This year has brought the fight for social justice to the forefront of the news cycle. With a spotlight on prejudice and discrimination, organizations driving AI’s usage and development absolutely must do what they can to eliminate bias, explain decisions made by their technology, and take accountability if/when their AI goes rogue. If we truly want to live up to the promise of customer centricity – not just use it as a buzzword – we need to commit to developing that AI responsibly, not just rapidly.
Responsible AI has four critical tenets:
1. Fairness
Organizations need to be proactive and vigilant about policing their AI to ensure it’s fair to everyone. This means building truly unbiased AI data models, proactively monitoring them, and analyzing their outputs. Is the AI treating all genders, ethnicities, age groups, zip codes, incomes, religious groups, etc. the same way?
Fair AI usually isn’t top of mind until a company makes a mistake – and the results can be damaging. When it comes to bias, in some cases AI has to be “better” than the society we live in. For instance, Northpointe’s Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool was found to discriminate against African-American defendants – rating them twice as likely to re-offend as white defendants. The root of the problem was that its decisions were based on skewed justice-system data that was inherently biased and perpetuated an unfair stereotype.
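The kind of monitoring described above can start very simply. The sketch below computes one common fairness metric – the gap in favorable-outcome rates between demographic groups (often called demographic parity) – over a hypothetical audit log; the function names, log format, and alert threshold are illustrative assumptions, not anything prescribed in the article.

```python
from collections import defaultdict

def positive_rates(decisions):
    """Share of favorable outcomes per group.

    `decisions` is a list of (group, favorable) pairs, where `favorable`
    is True when the model granted the positive outcome.
    """
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            favorable[group] += 1
    return {g: favorable[g] / totals[g] for g in totals}

def parity_gap(rates):
    """Largest difference in favorable-outcome rates between any two groups."""
    return max(rates.values()) - min(rates.values())

# Hypothetical audit log: (group, did the model grant the favorable outcome?)
log = [("A", True), ("A", True), ("A", False), ("A", True),
       ("B", True), ("B", False), ("B", False), ("B", False)]

rates = positive_rates(log)   # group A: 0.75, group B: 0.25
gap = parity_gap(rates)       # 0.5 – large enough to trigger a human review
```

Demographic parity is only one of several fairness definitions (equalized odds and calibration are others, and they can conflict); the point is that an automated check like this runs continuously, before a journalist or regulator finds the gap first.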
2. Transparency
Companies need to show how AI arrived at a decision, especially when it comes to highly regulated industries like financial services and insurance. Late last year, a high-profile company was under attack for its credit card offerings, because it supposedly offered higher credit limits for men as compared to women with similar financial circumstances and backgrounds. Because the chosen AI model wasn’t transparent about how it made its decisions – a condition known as “black box” or “opaque” AI – the company appeared to be biased against women.
Companies must be proactive about certifying their algorithms, clearly communicating their policies on bias, and providing a clear explanation of why decisions were made, especially when there’s a problem. They should also consider using transparent and explainable algorithms for regulated/higher-risk use-cases like credit approvals to make it easier for frontline employees to understand and explain their decisions to customers.
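One way to make a higher-risk decision explainable to frontline staff is to use a model whose per-feature contributions double as plain-language “reason codes” – for example, a linear scorecard. The sketch below is a minimal, hypothetical illustration of that idea; the feature names, weights, and threshold are invented for the example and are not any real lender’s model.

```python
# Hypothetical transparent credit scorecard: a linear model whose
# per-feature contributions can be read out as reason codes.
WEIGHTS = {"income_band": 2.0, "payment_history": 3.0, "utilization": -1.5}
THRESHOLD = 5.0

def score_with_reasons(applicant):
    """Return (approved, score, contributions) so staff can explain the call."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    return score >= THRESHOLD, score, contributions

applicant = {"income_band": 2, "payment_history": 1, "utilization": 0.8}
approved, score, reasons = score_with_reasons(applicant)
# Contributions: income 4.0, history 3.0, utilization -1.2 -> score 5.8
top_reason = max(reasons, key=lambda f: abs(reasons[f]))
```

Because every point of the score traces to a named factor, an employee can tell the customer exactly which inputs drove the decision – the opposite of the “black box” situation described above.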
3. Empathy
The need for companies to show empathy for their customers has never been greater. Empathy in AI means that the decisions it makes are relevant, helpful, and put the customer’s needs first. It means looking at a customer’s complete context to understand exactly what they need in the moment. It’s knowing when to sell – but it’s just as important to know when to serve, retain, or simply stay quiet.
For instance, Commonwealth Bank of Australia’s “Benefits Finder” service prompted 600,000 of its customers with information about the cash value of expiring credit card points and how to redeem them. While most companies let points expire so they don’t have to incur the cost, CBA saw proactive communication as a more empathetic way to serve their customers.
Turns out, empathy is good for business too. According to a recent Total Economic Impact Study from Forrester, using AI to develop a strong 1:1 engagement program can drive significant incremental revenue – almost $700M in three years – while minimizing more than $500M in customer churn losses.
4. Robustness
Remember Tay? Microsoft’s Twitter chatbot that went off the rails a few years ago? People were encouraged to interact with it, and within 24 hours the model had become misogynistic and racist because of the data it was ingesting from Twitter conversations. It became a huge joke for some, but it was a turning point for the market – especially for organizations that might be tempted to move too quickly. They realized they needed more robust AI, with built-in protections so it couldn’t be so easily influenced. They also needed to take the time to establish AI rules and guardrails that governed what types of actions were truly “suitable” in given situations.
Most of us don’t think about algorithms until they make mistakes – but organizations need to proactively prevent discrimination by policing themselves and making decisions based on what’s most suitable for the customer.
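The “rules and guardrails” idea above can be sketched as a thin layer between the model and the customer: the model proposes ranked actions, and explicit suitability rules veto any that shouldn’t be taken. The rules, field names, and customer attributes below are hypothetical examples, not a real decisioning product’s schema.

```python
# Hypothetical guardrail layer: model-proposed actions must pass explicit
# suitability rules before anything reaches the customer.
def suitable(action, customer):
    rules = [
        # Never sell to a customer with an open complaint; serve them instead.
        lambda a, c: not (a["type"] == "sell" and c["open_complaint"]),
        # Don't pitch credit products to customers already in hardship.
        lambda a, c: not (a["product"] == "credit" and c["in_hardship"]),
    ]
    return all(rule(action, customer) for rule in rules)

def next_action(ranked_actions, customer):
    """First model-ranked action that passes every guardrail, else stay quiet."""
    for action in ranked_actions:
        if suitable(action, customer):
            return action
    return None  # no suitable action: staying quiet is a valid outcome

customer = {"open_complaint": True, "in_hardship": False}
proposals = [{"type": "sell", "product": "credit"},
             {"type": "serve", "product": "support"}]
chosen = next_action(proposals, customer)  # the "serve" action wins
```

Because the guardrails live outside the learned model, they keep working even when the model drifts – exactly the protection a Tay-style failure calls for.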
Fairness, transparency, empathy, and robustness should be the four key pillars of any company’s responsible AI policy. While companies can’t afford to slow down, we need to coalesce around a foundational set of principles that respect the customer and provide a sustainable (and hopefully profitable) vision for long-term success. That will benefit everyone: it’s not only the right thing to do, it will ultimately protect and bolster our customer relationships, our brands, and our bottom lines, no matter what crisis we face next.