Talking (and Thinking) Carefully about AI

To lead change, directors must become more conversant in artificial intelligence.

Artificial intelligence (AI) has secured considerable conversational shelf space, and its share is growing, as reflected by sizable and rapidly increasing private investment. Stanford University’s Institute for Human-Centered AI noted in its March 2022 Artificial Intelligence Index Report 2022 that 103% more money was invested in AI and AI-related startups in 2021 than in 2020 ($96.5 billion versus $46 billion), a trajectory that is expected to continue. How might directors and boards enhance their ability to oversee AI activities in their organizations?

The Role of the Board

You have no doubt heard repeated counsel and admonitions: “Boards oversee. They do not ‘do.’” Or, more colloquially, “Eyes on, hands off.” So goes the principle. 

Naturally, how one operationalizes the line between oversight and doing matters greatly regardless of the issue, as does how one normalizes the conversation around just what constitutes a correctly drawn line. The newer the issue or the relationship (e.g., new board chair, director or CEO), the more important the conversation, since board members and senior managers alike must map the terrain carefully before drawing any lines of demarcation.

Artificial intelligence carries particular significance for boards. Its demonstrated power and potential importance grow even as definitions of it and working familiarity with it vary widely. Hence, boards need to become effectively conversant in it.


Implication 1: The old rules about the role of the board still apply.


Signs of Trouble

You know you’re in trouble when people, such as writers and product and service salespeople, assume that you know what “artificial intelligence,” or “AI,” means. It’s a bit like “emotional intelligence” or “EI” was (and to an extent still is). “Anyone with a few synapses firing knows, and only the uninformed, dysfunctional or simply uncool do not know.” So begins the intimidation and the limitation of crucial learning, even as the definitional barn becomes so big that nearly all of Noah’s animals fit inside it. So too begins an unnecessary and likely ill-defined and risky dependence on experts, especially outside experts.  

Language and definitions matter. Inaccuracy and misdirection lead to more errors and increased costs. Careful use of careful definitions enables careful (and meaningful) oversight discussion. Or, to quote Ludwig Wittgenstein, “The limits of my language are the limits of my world.”

Definitional fuzziness need not indicate malicious intent. In Artificial Intelligence: A Guide for Thinking Humans, a useful primer on AI, Melanie Mitchell notes “a committee of prominent researchers defined the field ‘as a branch of computer science that studies the properties of intelligence by synthesizing intelligence.’”  Setting aside the beckoning abyss of defining intelligence, Mitchell states, “The lack of a precise, universally accepted definition of AI probably has helped the field grow, blossom and advance at an ever-accelerating pace… Practitioners, researchers and developers of AI are instead guided by a rough sense of direction and an imperative to ‘get on with it.’” The article before you focuses on the powerful but more mundane aspect of AI – namely machine learning to mine massive data sets –  not the intriguing possibility of AI replicating or surpassing human intelligence.

Implication 2: Take your own counsel when defining AI. Develop and continually refine your definition of AI to ground necessary conversation and meet your fiduciary obligations concerning AI. Direct your own learning and engage actively with the material and with other learners.


Defining AI

Try this flawed but potentially useful definition of AI: “greatly enhanced algorithmic mining of data that enables finding connections and generating useful results in massive data fields, even refining the search itself, all at speeds far exceeding human capacity.” 
The size of the data fields (and of the necessary computing power), combined with the iterative quality built into its programming, means that AI both resembles “set piece” computing and represents something new. Originally, computers did just that: they computed. AI enables moving so far beyond computation that it is or will become something other than what we term “computing.” At some point, the horseless carriage became an automobile. It’s a different thing, usefully identified as such.
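To make the distinction concrete, the minimal sketch below (hypothetical, not drawn from this article) contrasts a hand-written “set piece” rule with a rule learned from data; the transaction-flagging scenario, the use of scikit-learn and the threshold values are illustrative assumptions.

```python
# Hypothetical contrast between "set piece" computing and machine learning.
# Assumes NumPy and scikit-learn are installed; all data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

# "Set piece" computing: the rule is written by hand and never changes.
def flag_transaction_by_rule(amount: float) -> bool:
    return amount > 10_000  # fixed threshold chosen by a person

# Machine learning: the rule is inferred from labeled historical data.
rng = np.random.default_rng(0)
amounts = rng.uniform(0, 20_000, size=(500, 1))  # synthetic transaction amounts
labels = (amounts[:, 0] + rng.normal(0, 2_000, 500) > 12_000).astype(int)  # noisy labels

model = LogisticRegression().fit(amounts, labels)  # the "threshold" is learned, not written
print(flag_transaction_by_rule(15_000.0))          # hand-written rule
print(model.predict([[9_500.0], [15_000.0]]))      # learned rule applied to new cases
```

The point is not the particular model but that the decision boundary is inferred from the data rather than specified in advance, and can be re-learned as the data change.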

Implication 3: AI is not simply bigger computers. Therefore, referencing computing when discussing AI can clarify or mislead. Take linguistic care. 


AI’s Purpose for an Organization 

AI can help you learn. Pick the topic: your markets, product or service quality, image reading, voice or face recognition, vehicle navigation, financial fraud, screening job applicants, identifying employees who should be promoted, early warning of impending pandemics or expedited identification of best care practices. Pick a question. Refine it enough to provide guidance to an AI search. Load in the variables you think could prove relevant. Program AI to mine for noteworthy patterns. Set AI free inside a data set of any size. In fact, the bigger, the better. Wait for the results. Examine and sort the results as necessary. Adjust programming as deemed appropriate. Utilize your learnings. Debrief and refine protocols. Repeat.
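As one concrete illustration of that loop, consider the minimal sketch below. It is hypothetical rather than drawn from this article: the financial-fraud framing, the column names, the use of scikit-learn’s IsolationForest and the parameter values are all assumptions for illustration.

```python
# Hypothetical sketch of the mine-examine-adjust loop described above.
# Assumes NumPy, pandas and scikit-learn are installed; all data is synthetic.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# 1. Load in the variables you think could prove relevant (invented features here).
data = pd.DataFrame({
    "transaction_amount": rng.lognormal(mean=6, sigma=1, size=10_000),
    "transactions_per_day": rng.poisson(lam=3, size=10_000),
    "account_age_days": rng.integers(30, 3_650, size=10_000),
})

# 2. Program the model to mine for noteworthy patterns, then set it loose on the data.
model = IsolationForest(contamination=0.01, random_state=0)
data["anomaly_flag"] = model.fit_predict(data)  # -1 marks records worth a closer look

# 3. Examine and sort the results; adjust parameters (e.g., contamination) and repeat.
flagged = data[data["anomaly_flag"] == -1].sort_values("transaction_amount", ascending=False)
print(flagged.head())
```

The particulars matter less than the shape of the loop: question, variables, mining, inspection, adjustment, repetition.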

Implication 4: Get the questions right. Check the quality of questions being asked. Sloppy questioning can lead to a sloppy investigation, even to the eerily named phenomenon of “hallucination,” in which the machine produces a finding because it is “supposed to” do so. It’s a spooky cousin of the ghost in the machine.

Implication 5: Certify the programming used. The more executives view AI as a black box, especially a black box held by someone else, the more vulnerable they are to false claims that the data mined yielded information that can knowledgeably inform decision-making. Understand what your organization or vendor does to the data.

Implication 6: Inspect any data set in which you hunt, because:

  • Poor data collection techniques, such as failure to examine inherent survey or sample biases, can contaminate entire data sets. Early computer engineers termed it “GIGO,” or “garbage in, garbage out.” Data contamination happens with the greatest of ease and yields garbage whether or not the recipient recognizes the contamination. Data analysis cannot yield findings of greater quality than the data used. (A basic pre-analysis inspection of this kind is sketched after this list.)
  • Each organization organizes its data in its own way. Should you wish bigger and more diverse data sets, then network with other organizations and establish protocols for data collection, data organization, data sharing and data security—before you need the data.
  • The fight is on regarding what constitutes inappropriate use of personal data. The target is moving and looming ever larger. It will only grow more significant in the near and middle term. 
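
The sketch below, referenced in the first bullet above, shows the kind of basic pre-analysis inspection that bullet calls for. It is hypothetical: the applicant data, the column names and the 80% dominance threshold are invented for illustration.

```python
# Hypothetical sketch of a "garbage in" inspection run before any mining begins.
# Assumes pandas is installed; the data set and thresholds are invented.
import pandas as pd

def inspect_data_set(df: pd.DataFrame, group_column: str) -> None:
    """Print basic data-quality warnings before the data set is mined."""
    print("Missing values per column:")
    print(df.isna().sum())

    print(f"\nDuplicate rows: {df.duplicated().sum()}")

    # Crude sample-bias check: is any one group wildly over-represented?
    shares = df[group_column].value_counts(normalize=True)
    print(f"\nShare of records by {group_column}:")
    print(shares)
    if shares.max() > 0.8:
        print("WARNING: one group dominates the sample; findings may not generalize.")

# Example usage with a small, made-up job-applicant data set.
applicants = pd.DataFrame({
    "region": ["Northeast"] * 90 + ["West"] * 10,
    "score": list(range(100)),
})
inspect_data_set(applicants, group_column="region")
```

None of this replaces a data professional’s audit; it simply illustrates that inspecting a data set is concrete, repeatable work a board can ask about.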

Implication 7: Develop, buy or rent, but be an informed consumer. Regardless of how you pursue the benefits of AI, make sure that you have the necessary expertise in-house to evaluate any AI that you use and any product that it produces. At the very least, such expertise will assist in smart purchasing from and supervision of AI vendors. The expertise will also enable establishment of the compliance protocols essential for appropriate data usage. Consider the composition of your board, its likely current predominance of legal, financial and technical expertise, and the underrepresentation of AI, cybersecurity and ethical expertise. Do you have the correct mix of expertise on the board and in senior management to oversee the digital business of tomorrow?

Implication 8: Educate yourself, the board and senior management in the basics of critical thinking (including cognitive tendencies toward acontextual analysis), research methods, statistics and how they play out in AI. Leadership and the board must be able to effectively oversee retained AI expertise.

Implication 9: AI makes advanced “What if?” thought exercises possible. Strategic planning should make use of this thought technology and include planning for growing AI-related costs. Estimated costs to maintain large AI capability currently run as high as $1 billion per year. Thus, even renting and overseeing such capability becomes a strategic financial issue.

Implication 10: Envision a future filled with AI. AI will come to shape numerous aspects of your organization whether you plan for it or not. Better to plan and lead the change. To lead the change, consider: 

  • If your executive team were to leverage AI broadly and successfully, what stories would characterize your organization in the future?
  • What kind of an organization would make those stories likely?  
  • What actions would make those stories likely to occur?  

Think systemically to convert aspects of the organization into levers of change. (See Shea and Solomon, Leading Successful Change: 8 Keys to Making Change Work.)

Implication 11: Successful implementation of AI requires the same project management and change leadership skills required of other major initiatives. AI is a bright and shiny object to be sure, and will likely remain so. Nonetheless, its successful implementation requires individuals not only to use it but also to employ its outputs skillfully. Hence, project management should solicit and utilize end users’ active, ongoing and informed participation in AI’s design and implementation.

To optimize use of AI, boards must enhance discipline of thought, educate themselves and management, assist in acquiring AI expertise, develop best practices and create a vision for an AI future.


About the Author(s)

Gregory P. Shea, Ph.D.

Gregory P. Shea, Ph.D., is adjunct professor of management and senior fellow at the Wharton Center for Leadership and Change Management and adjunct senior fellow of the Leonard Davis Institute of Health Economics at the Wharton School of the University of Pennsylvania.

