AI for Boards: Three key risks of artificial intelligence
Boards are beginning to understand how important artificial intelligence (AI) is for companies to stay competitive.
For example, auto manufacturer Tesla uses AI to teach its cars to drive autonomously. Unlike any other company, Tesla has a massive database gleaned from billions of road miles driven by Tesla drivers. Every time a Tesla moves along a road, it shares the data it collects with other Tesla cars. Every car is continuously getting smarter. Without AI, no car company in the world could learn as fast as Tesla. Tesla has more data than anyone else today and is getting smarter every day.
However, if done the wrong way, AI can be a detriment to a company. As AI-related issues get added to the agenda, boards of directors need to ensure they are paying much more attention to the critical risk factors AI creates.
There are three key AI risks boards need to keep in mind:
The first risk is bias. The reason bias exists in an AI-based system is that the data we feed AI systems is biased. That data is biased because it comes from real-world business decisions made by humans. In other words, humans are biased, but we have never looked carefully for bias in human decision-making. Now that we can examine what comes out of an AI, we are horrified to see that the AI appears to be biased, but it was us humans all along. From hiring to approving loans, companies need to review the patterns of who or what the technology is benefiting and make adjustments accordingly.
For example, Amazon created an AI system for recruiting, but the company discovered that the system chose white males more often than equally qualified women and minorities. The system was biased because the data used to train the system was predominantly white males. They have since shut down the system.
This discovery of bias in AI is a good thing. It surfaces bias that exists within a decision-making process (like approving a loan) and provides the company with an opportunity to course correct. Everybody wins.
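The pattern review described above can be made concrete. Below is a minimal, hypothetical sketch: the records, group labels, and the "four-fifths" (80%) rule-of-thumb threshold are illustrative assumptions, not data or methods from the article. It computes selection rates by group from a set of hiring decisions and compares the lowest rate to the highest:

```python
# Hypothetical sketch: measuring selection-rate disparity in hiring decisions.
# The records and the four-fifths (80%) threshold are illustrative assumptions.

def selection_rates(decisions):
    """decisions: list of (group, hired) pairs -> {group: hire rate}."""
    totals, hires = {}, {}
    for group, hired in decisions:
        totals[group] = totals.get(group, 0) + 1
        hires[group] = hires.get(group, 0) + (1 if hired else 0)
    return {g: hires[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group's selection rate to the highest group's."""
    return min(rates.values()) / max(rates.values())

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

rates = selection_rates(decisions)
ratio = disparate_impact_ratio(rates)
print(rates)                         # {'group_a': 0.75, 'group_b': 0.25}
print(f"impact ratio: {ratio:.2f}")  # 0.33, well below the 0.8 rule of thumb
```

A real review would control for qualifications and sample size, but even a simple rate comparison like this is enough to surface the kind of pattern that Amazon's recruiting system exposed.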
The second risk is data. Let's assume the board has decided to embrace AI. Directors recognize that data is what gets fed to AI so it can work. However, without vast quantities of high-quality data, AI isn't going to work.
First, some systems within a corporation aren't designed to collect a lot of data. And due to regulations like the European Union's General Data Protection Regulation and the California Consumer Privacy Act, some companies are restricting data collection. A company needs to strike a delicate balance between what data it can collect under these rules and how to maximize the amount of data available to feed AI properly.
Second, many companies have poor data quality, which will significantly hamper their ability to leverage AI. For example, one Fortune 500 company doesn't know who all its customers are. The company has a lot of data about its customers stored in many different systems (accounting, support, customer relationship management, returns, shipping, etc.). But those systems weren't designed with AI in mind, and the data itself is disparate, uncoordinated and often conflicting. That means the data isn't clean enough for AI to learn from, so the company cannot use AI to address the needs of its customers.
Boards need to understand if the foundational element of AI — data — is being worked on now to help prepare the company for AI-based systems.
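To see why "disparate, uncoordinated and often conflicting" data blocks AI, here is a minimal, hypothetical sketch; the system names and customer records are invented for illustration. It merges customer records from two systems keyed by email and flags the fields where the systems disagree:

```python
# Hypothetical sketch: merging customer records from two systems and flagging
# conflicts. System names and records are invented for illustration.

def merge_customers(crm, billing):
    """Merge two {email: {field: value}} maps; return (merged, conflicts)."""
    merged, conflicts = {}, []
    for email in set(crm) | set(billing):
        a, b = crm.get(email, {}), billing.get(email, {})
        # Dict unpacking: values from `billing` override `crm` on shared keys.
        merged[email] = {**a, **b}
        for field in set(a) & set(b):
            if a[field] != b[field]:
                conflicts.append((email, field, a[field], b[field]))
    return merged, conflicts

crm = {"pat@example.com": {"name": "Pat Lee", "city": "Austin"}}
billing = {"pat@example.com": {"name": "Patricia Lee", "city": "Austin"},
           "sam@example.com": {"name": "Sam Roe"}}

merged, conflicts = merge_customers(crm, billing)
print(len(merged))  # 2 customers after the merge
print(conflicts)    # [('pat@example.com', 'name', 'Pat Lee', 'Patricia Lee')]
```

Every flagged conflict is a record a human, or a data-cleaning pipeline, must resolve before the data is trustworthy enough to train on; at Fortune 500 scale, across many more systems, that resolution work is the data-quality investment boards should be asking about.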
The third risk is cybersecurity, probably the most significant threat faced by companies today. When bad actors add AI-based cyberattacks to their arsenal, the threat becomes even more severe.
It's important to acknowledge that the U.S. government is not going to protect your company from these threats. Even if the bad actors are state actors, the government cannot help your company because the problem is too big, and it is engaged in its own AI-based cybersecurity arms race.
AI-based cyberattacks learn from the defenses your company puts up. These systems see what your company is doing to prevent attacks and change their approach based on what they are learning, becoming more and more effective over time.
For example, an unnamed energy firm suffered a financial loss when the head of its UK division thought he was talking to his boss in Germany, when he was actually talking to an AI-based system. The AI mimicked his boss's cadence, tone and accent so well that when he was told to transfer $243,000 to a Hungarian supplier, he moved the money.
The best answer to AI-based attacks is AI-based defense. Your company is in an AI arms race, and you need to invest to stay ahead. Make sure AI-based defense is part of your arsenal and that you are keeping up. Assume that the bad actors are very good at what they do, that they are incorporating AI into their cyberattacks, and that they are coming after your company.
Overall, boards need to learn more about each of these risks so they can plan strategically for the role AI will play in their companies' futures, and ensure the risks are mitigated if they appear.
Glenn Gow has served on boards and guides companies through technology disruptions as a CEO coach.