The Board’s Role in Setting Up AI’s Ethical ‘Guardrails’
By Eve Tahmincioglu

Boardroom discussions on artificial intelligence and machine learning’s impact on humanity need to be fast-tracked.

Whenever there are discussions in the boardroom about artificial intelligence (AI) and how the technology may fit into the company’s strategy, Linda Goodspeed likes to bring up two questions: “What decisions are we going to allow machines to make, and how are we going to audit those decisions?”

While Goodspeed, a former CIO who sits on the boards of AEP, AutoZone and Darling Ingredients, says AI and machine learning (ML) implementation is still in the early stages at the companies whose boards she serves, “thoughtful discussions” about how to carefully adopt the technology are critical.

For many companies and their boards, it’s early days for AI adoption, and for ML in particular. But the recent rush to implement the latest and greatest for fear of being left behind is causing some to worry about whether corporate leaders are taking enough time to consider the potential impact on employees, communities and society at large.

AI can have “a very positive impact on people and society with greater efficiencies, sustainability and better ways of living,” says Martin Fiore, the northeast tax leader for EY, who has spearheaded the firm’s Humans Inc. initiative to raise awareness around the ethics-conscious pursuit of new technologies like AI. “But if you don’t build trust, it can be very negative. We’re looking at how we preserve and maximize humanity.”

Boards, he explains, are just beginning to consider these issues.

The best starting point, he stresses, is asking: “Should there be guardrails governing these issues and, if there should be, who should make the decision on what those guardrails look like? What guardrails do you have in place? Do you understand what all functions in the business are doing? Everyone is innovating; do you know the inventory of innovation you have at your organization?”

The AI innovation that has most technology experts worried is machine learning.

“The primary ethical problem with machine learning right now is that because it programs itself by analyzing existing data, and because existing data almost always reflects existing biases, ML can replicate or even amplify those biases,” explains David Weinberger, senior researcher at the Harvard Berkman Center for Internet & Society and author of the forthcoming book “Everyday Chaos: Technology, Complexity, and How We’re Thriving in a New World of Possibility.”
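To make the mechanism Weinberger describes concrete, consider a minimal sketch in Python. The “hiring” data, feature names and numbers below are entirely invented for illustration; the only point is that a model trained on biased historical decisions will reproduce the disparity even when the underlying qualifications are identical across groups.

```python
# A minimal, invented illustration of bias replication in ML.
# All names and numbers here are hypothetical, not real company data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two groups with identical skill distributions.
group = rng.integers(0, 2, size=n)
skill = rng.normal(0.0, 1.0, size=n)

# Biased historical labels: past decision-makers approved group 1
# candidates less often, independent of skill.
penalty = np.where(group == 1, -1.0, 0.0)
hired = (skill + penalty + rng.normal(0.0, 0.5, size=n)) > 0

# Train on the biased outcomes, with group as an input feature.
X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, hired)

# On fresh, equally skilled candidates, the model reproduces the
# historical disparity it was trained on.
test_skill = rng.normal(0.0, 1.0, size=2000)
for g in (0, 1):
    X_test = np.column_stack([np.full(2000, g), test_skill])
    print(f"predicted hire rate, group {g}: {model.predict(X_test).mean():.2f}")
```

Notably, simply dropping the group column does not reliably fix this in practice, because other features can act as proxies for group membership, which is part of why the audit questions later in this article are more than a formality.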

ML essentially programs itself, he continues, “by building insanely large, complex ‘neural networks’ based on the data it has analyzed. Those networks can be too complex for humans to understand.”

Beyond the complexity, there is the question of being realistic about how AI technologies could be used, and that’s where corporate culture should play a role.

“Every great company has a set of values,” explains Ron Williams, the former CEO and chairman of Aetna, who sits on the boards of American Express, Boeing and Johnson & Johnson. He would not speak specifically about the boards on which he serves, but said when implementing AI or any other technology, “human beings have to apply the ethics, morals and values of the enterprise.”

Companies should not just be “plopping technology in the middle of a situation,” says Williams, the author of the forthcoming book “Learning to Lead: Leading Yourself, Leading Others, and Leading an Organization.” “Good companies will look at their R&D expenditures and investments in AI and other technologies to enhance their competitive position, and they will broadly look at the implications for all stakeholders: consumers, employees and others.”

There has been, he continues, “an increasing awareness that there is a multi-stakeholder dimension, and a broader view beyond just the monetary value of a particular activity.”

Indeed, executives are realizing they need to think about the ethics of AI.

A study of business leaders, commissioned by SAS, Accenture Applied Intelligence and Intel and conducted by Forbes Insights in July 2018, found that 70% conduct ethics training for their technologists and 63% have ethics committees in place to review the use of AI. Among the leaders who rated their AI deployment “successful” or “highly successful,” 92% train their technologists in ethics, compared with 48% of other AI adopters.

There are some basic ethical principles to consider when implementing any type of technology, including fairness, transparency, privacy, human rights and accountability, says Anna Bethke, who heads AI for social good in Intel’s Artificial Intelligence Products Group. The critical ethics issue, she adds, is making sure that the technology and applications being built are, overall, “net positive.”

Boards, she advises, should be looking at the bigger picture, the long-term potential harm and potential gains. But, she acknowledges, “it’s really complex. No two people have the exact same idea of what is ethical and what is not.”

To help sort things out, Harvard Berkman Center’s Weinberger provides a list of questions for boards to ask management:

• Will ML help with a task in some substantial way? Will it do something new, or do some existing task faster, more accurately or cheaper?

• What will ML be optimized for? What are the trade-offs? For example, outside the world of ML, some cars are optimized for speed, some for fuel efficiency, some for large family comfort, etc. These optimizations can be mutually exclusive. What exactly will your ML be optimized for? Have all stakeholders agreed to this?

• How thorough and careful will the testing processes be?

• Have all stakeholders — in all their diversity — been a part of the consultation, design and testing processes? Have they been listened to?

• Is the data ML uses representative of the full diversity of all those affected by ML’s outcomes? Are you sure? (A minimal check is sketched after this list.)

• What are the technical methods by which the ML can be forensically investigated in the case of problems? (See the audit-log sketch following the quote below.)
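As a gesture at what the data-representativeness question can mean in practice, here is a small, hypothetical Python check; the group labels and population shares are invented, and a real review would go much further (intersectional groups, outcome representativeness, formal statistical tests).

```python
# A hypothetical first-pass representativeness check: compare each
# group's share of the training data with its share of the affected
# population. Group labels and shares below are invented.
from collections import Counter

def representation_gaps(training_groups, population_shares):
    """Share of training data minus share of population, per group
    (negative values mean the group is under-represented)."""
    counts = Counter(training_groups)
    total = sum(counts.values())
    return {
        group: counts.get(group, 0) / total - share
        for group, share in population_shares.items()
    }

# Example: a training set that badly under-samples group C.
training = ["A"] * 800 + ["B"] * 150 + ["C"] * 50
population = {"A": 0.60, "B": 0.25, "C": 0.15}
for group, gap in representation_gaps(training, population).items():
    print(f"group {group}: {gap:+.2f}")
```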

“Whenever possible, which will be in almost all cases,” he explains, “ML systems ought to be transparent about the procedures in place to clean data of biases, the procedures for addressing problems, the risks, the performance of the system against goals, and, when relevant, how these outcomes compare to the systems the ML is replacing.”
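One concrete way to read his forensic-investigation question, and Goodspeed’s earlier question about auditing machine decisions, is to require that every automated decision leave a reconstructable record. The sketch below shows one hypothetical shape such a record could take; the field names, JSON-lines file and threshold rule are illustrative assumptions rather than an established standard.

```python
# A hypothetical decision audit trail: every automated decision is
# logged with enough context (inputs, model version, score, threshold)
# to reconstruct it later. Field names and file format are illustrative.
import json
import hashlib
from datetime import datetime, timezone

def log_decision(path, model_version, inputs, score, threshold):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the raw inputs so the record is tamper-evident even if
        # sensitive fields are later redacted from the log.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "inputs": inputs,
        "score": score,
        "threshold": threshold,
        "decision": "approve" if score >= threshold else "decline",
    }
    with open(path, "a") as f:  # append-only, one JSON object per line
        f.write(json.dumps(record) + "\n")
    return record

# Example: record one hypothetical credit decision for later review.
log_decision("decisions.jsonl", "risk-model-2019.1",
             {"income": 52000, "tenure_months": 18}, 0.62, 0.5)
```

An append-only log tied to a specific model version is what allows a later review to answer “why was this decision made?” with evidence rather than recollection.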

Issue: 2019 Second Quarter
