Director Briefing: AI and the Board

Since the release of ChatGPT on November 30, 2022, artificial intelligence has gone from something out of a sci-fi movie to something we now find in our search engines and office software suites. Even Elon Musk, who has never been prone to understatement, says AI could mean that no one will need a job unless they really want one. So what is a board member to make of AI? To discuss this and understand it a little more fully, I'm joined by George Casey, advanced analytics practice leader at RSM US LLP. George, can you start us off with a general definition of AI?

Absolutely, David, and thank you. It's great to be here. When it comes to artificial intelligence, what we're really thinking about is when a system or a computer can do something that, in the past, we thought required a human. The term has been around for about 70 years, but its definition has shifted over time: from "can it make a decision?" to "it can interpret speech, speak back, interpret images and video, or even stop my car." The idea is ultimately prediction: understanding what I'm asking, identifying the relevant data, estimating how confident the model is in the appropriate response and, if it's confident enough, responding or acting based on how the model was trained by looking at previous data that was scored or graded by an expert.

That's ultimately how we try to classify artificial intelligence scenarios.

From a board perspective, though, given everything that's happened since the launch of ChatGPT, it seems overwhelmingly complex. How can board members break AI down into more manageable chunks in their thinking?

Absolutely. That's where we often frame it as artificial intelligence and machine learning, because there's frequently a lot of value to be had without closing the loop, that is, without the fully automated response or action, simply by being able to make an accurate prediction. When boards ask where to start or how to approach this, we like to describe it as a research agenda: essentially a backlog or to-do list of questions we'd like to answer, and the ability to use these techniques to answer them in different or more accurate ways than we could in the past. One quick example: forecasting as a technique has been around forever. We've always asked, what's going to happen next? How might I use that information in the context of board management or governance to ask, am I hitting my targets?

Are my targets appropriately set for where I want to be? Now we can go beyond the mathematics of predicting a time series and look at massive external data sets to ask what else is going on in the world that will help predict how my business might react. For example, is my business sensitive to inflation? To unemployment? Are there supply chain issues? Discovering those patterns and correlations, and then making a more accurate prediction, is an interesting new approach to an old problem. It starts with the question of what am I going to sell next month, next quarter, next year? What influences my ability to grow or be successful? Driving into those questions is a useful way for boards and leadership teams to understand their priorities in terms of problems to be solved.

That sounds great. But of course there are the scare stories, too. Large language models are trained on enormous data sets; are they violating copyright? What are the risks and ethical considerations boards need to weigh when they're looking at AI implementation?

It's interesting. There's a classic data science quote that says all models are wrong, but some are useful. That's true of AI as well; no model is 100% correct. So in terms of risk management, one of the strategies being employed is the idea of a human in the loop: the ability to review and assess quality. Is this the appropriate response to what I asked for? The challenge is that humans make a lot of mistakes, too. The idea that simply adding a human in the loop will make the process perfect ignores the fact that we've been making mistakes all along. So it comes down to the impact of the mistake.
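The forecasting idea discussed earlier, predicting sales from external signals such as inflation and unemployment rather than from the time series alone, can be sketched with a simple regression. All figures below are invented for illustration; in practice the inputs would come from company records and public economic data.

```python
# Minimal sketch, assuming monthly sales plus two external economic signals.
import numpy as np

# Hypothetical history: monthly sales alongside inflation and unemployment rates.
sales        = np.array([100.0, 104.0, 103.0, 110.0, 108.0, 115.0])
inflation    = np.array([2.0,   2.1,   2.4,   2.2,   2.6,   2.5])
unemployment = np.array([4.0,   3.9,   4.1,   3.8,   4.0,   3.7])

# Fit a simple linear model: sales ~ intercept + inflation + unemployment.
X = np.column_stack([np.ones_like(sales), inflation, unemployment])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)

# The fitted coefficients indicate how sensitive sales are to each factor,
# which is the "am I sensitive to inflation?" question in miniature.
def forecast(infl, unemp):
    return coef[0] + coef[1] * infl + coef[2] * unemp

print(f"Forecast for next month: {forecast(2.6, 3.6):.1f}")
```

A real engagement would use richer models and far more external features, but the governance question is the same: which outside factors actually move the forecast.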


A lot of that is around the use case. Is this something where I can be wrong 5% or 10% of the time and it's still OK? Or do I need 100% accuracy, in which case I need to design the right review process to make sure those checks happen? The other thing, when it comes to deploying these models, is that we have good techniques to assess accuracy: does this do what I would want it to do, or what I would expect a trained human expert to do? If it does, I can start building confidence that it does enough of what I'm expecting. If it doesn't, the question becomes, how good is good enough? We recognize there may still be errors or flaws in the process, but it's better than we were. So some of it speaks to the impact of being wrong, and some to the strategy of designing quality into the model up front and then assessing it over time, so you have a reinforcement learning element: the model gets better because it keeps improving as we get more data and more visibility into what the right answer would have been.

The second part of your question is the ethical considerations. This is important because these models are trained on past data, and the working assumption is that the past data was correct, but things may change. One of the ethical considerations is that we may have been making bad decisions for a long time, and all we're going to do is train the model to keep making those bad decisions.
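The human-in-the-loop pattern described above is often implemented as a confidence threshold: act automatically only when the model's confidence clears a bar chosen for the use case's tolerance for error, and queue everything else for expert review. The function name and threshold below are illustrative, not a specific product's API.

```python
# Hedged sketch of human-in-the-loop routing based on model confidence.

def route(prediction, confidence, threshold=0.95):
    """Auto-apply high-confidence predictions; send the rest to a human."""
    if confidence >= threshold:
        return ("auto", prediction)
    return ("review", prediction)

# A use case that tolerates 5-10% error can run with a lower threshold;
# one that needs near-perfect accuracy keeps the threshold high and
# accepts a larger human review queue.
print(route("approve invoice", 0.99))   # clears the bar, applied automatically
print(route("approve invoice", 0.80))   # below the bar, sent to review
```

Tuning the threshold is exactly the "how good is good enough?" decision: lowering it reduces reviewer workload but raises the impact of being wrong.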

An example we've seen is in human resources. When it comes to hiring, I might say, here's the type of engineer or IT resource who has been successful in the past: here's what they look like, their demographics, their gender, their race. Twenty years ago that may have looked quite different than it does today. But if we tell the model, be trained on that data set and look for people who look like this, we may perpetuate bias or inappropriate selection criteria going forward. The model effectively says, this person can't be good at this job because she doesn't look like the engineers we were hiring 30 or 50 years ago. So that's an important consideration when it comes to the data we use to train models. The positive is that there are good techniques to assess underrepresentation of a certain class of data in the training set, and to rebalance it appropriately, so the model has better awareness of how it should value those attributes when it makes recommendations.
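The underrepresentation check described above can start as a simple measurement: compute each group's share of the training data and flag any group below a chosen floor before the model ever sees the data. The group labels, counts, and 25% floor below are all invented for illustration.

```python
# Minimal sketch of a training-data representation check.
from collections import Counter

# Hypothetical historical hiring records, heavily skewed toward one group.
training_examples = (["group_a"] * 90) + (["group_b"] * 10)

counts = Counter(training_examples)
total = sum(counts.values())
shares = {group: n / total for group, n in counts.items()}

# Flag any group whose share falls below a chosen floor, here 25%.
underrepresented = [g for g, share in shares.items() if share < 0.25]
print(underrepresented)
```

Finding an underrepresented group doesn't fix the bias by itself, but it tells you the historical data needs rebalancing or reweighting before it is used to train a hiring model.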

So let's assume a board is thinking, how do we do something with AI? What are some simple ways a company can get involved with AI that don't create too much complexity but do create the opportunity for a provable business case?

We like to think about it starting with people productivity. That's where a lot of people have been experimenting with tools like ChatGPT: how can I be more efficient, more productive? The quote I like is, you're not going to be replaced by AI; you'll be replaced by someone using AI. So it's really about arming that person or subject matter expert with a tool that makes them more efficient and productive. What we look at, or the board looks at, is where a lot of time is being spent, because that's something we can easily quantify and measure. Is there an automation opportunity, or a true AI opportunity, where we can take certain use cases off the human's workload and support that expert or department with a tool that can do a lot of the basic tasks? Take accounts payable invoices: I get a whole stack of invoices that in the past I had to code, enter into a system, and route for approval. Now, with computer vision, the system can say, I may have never seen this exact invoice, but I'm smart enough to know that if it says "invoice number" and there's a number next to it, that's probably the invoice number.

Or if there's an amount due, that's the amount due. Given some guidance for making decisions, much of that process can be automated, leaving the 10% of exceptions to the AP processor or clerk instead of the 90% that would have gone through automatically anyway. Those are the types of opportunities that are the right way to start. We've already seen some wins in the market, and we have a sense as practitioners of how to do it. They're great early wins for shifting a company from an old, manual, heavily human-based process to a data-driven, automated, digitally transformed one.
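The invoice example above can be sketched as a small rules layer: after OCR or computer vision turns the invoice image into text, pull whatever value sits next to a label like "Invoice Number" or "Amount Due", and route anything the rules can't parse to the AP clerk as an exception. The label patterns and sample invoice text here are illustrative assumptions, not a real product's format.

```python
# Hedged sketch of invoice field extraction with an exception queue.
import re

def extract_fields(text):
    fields = {}
    number = re.search(r"invoice\s*(?:number|no\.?|#)\s*[:#]?\s*(\w+)", text, re.I)
    amount = re.search(r"amount\s*due\s*[:$]?\s*\$?([\d,]+\.?\d*)", text, re.I)
    if number:
        fields["invoice_number"] = number.group(1)
    if amount:
        fields["amount_due"] = amount.group(1)
    return fields

def process(text):
    fields = extract_fields(text)
    # Both fields parsed: flows straight through. Anything else lands in
    # the ~10% exception queue for a human to handle.
    if "invoice_number" in fields and "amount_due" in fields:
        return ("auto", fields)
    return ("exception", fields)

print(process("Invoice Number: 12345\nAmount Due: $1,250.00"))
```

A production system would use a trained extraction model rather than hand-written patterns, but the governance shape is the same: automate the routine 90%, surface the exceptions to the expert.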

About the Author(s)


Gregory P. Shea, Ph.D., is adjunct professor of management and senior fellow at the Wharton Center for Leadership and Change Management, and adjunct senior fellow of the Leonard Davis Institute of Health Economics at the Wharton School.


