Staying Ahead of AI Regulation

A fragmented and quickly evolving AI regulation and enforcement landscape carries significant risks for boards and companies.

Companies around the world, and across most sectors, are racing to take advantage of AI. The rapid uptake of generative AI in the private sector has forced governments and regulators to consider how to ensure their economies and societies seize the benefits of AI while managing its potential risks. Governments and regulators are grappling with whether their current approaches are sufficient to manage AI, how to balance innovation and regulation, and what AI risks and opportunities to prioritize.

While there is a growing international consensus that AI poses risks, challenges and opportunities, there is no consensus on how to regulate it. The United States, European Union and United Kingdom recently joined several other countries in signing the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy, and the Rule of Law (the Convention). The obligations of that first legally binding treaty on AI reflect significant disagreements between its signatories: The Convention is high-level, often vague and lacks robust enforcement mechanisms. In practice, signatory states will retain significant discretion as to whether and how they implement the Convention, especially with respect to the private sector.

The trend of countries adopting different regulatory approaches is likely to continue, but some key themes are emerging. For boards and companies, this means that managing AI risk requires understanding the regulatory landscape and closely monitoring how developments and enforcement trends affect their business sectors.

A Growing Patchwork of AI-Focused Laws


Leading the charge, the European Union and China have introduced wide-ranging national laws specifically targeting AI.

The European Union's AI Act is far-reaching and applies to providers, users, end-product manufacturers, importers and distributors of AI systems. It is backed by fines of up to €35 million or 7% of annual turnover for noncompliance. AI systems posing an “unacceptable level of risk” are prohibited outright, and those considered “high-risk” are subject to especially stringent obligations. For example, developers of high-risk AI systems must conduct a self-conformity assessment, and high-risk AI systems must be registered in an E.U. database. The AI Act will also regulate foundation models and generative AI systems under the label of “general-purpose AI.” It will become applicable in stages between now and August 2, 2027, with the first obligations on companies applying from February 2, 2025.

Brazil, Canada and South Korea are among those countries also considering draft AI laws, although each is proposing a unique approach that diverges from that which the European Union has taken.

In the United States, the future of AI regulation at the federal level following the 2024 presidential and congressional elections remains uncertain. The Biden administration's 2023 Executive Order on AI made several targeted interventions, including in areas relating to advancing the United States' competitive position in the AI space, AI safety and security, privacy, anti-discrimination and civil rights. President-elect Trump has promised to rescind that Executive Order (EO), which many Republicans regard as hindering innovation. Trump has announced he will name David Sacks to serve as an “AI czar” and will nominate Michael Kratsios to head the Office of Science and Technology Policy. Both have significant private sector experience with AI and technology and will have a significant role in developing the administration's AI policy. Trump has indicated he will replace Biden's EO on AI, and his administration will likely take a more “business-friendly” approach to AI regulation. Nevertheless, we expect that the United States will continue to increase restrictions on the export of AI-related technologies to perceived adversaries and that national security risks related to AI will continue to be examined.

To the extent the U.S. government avoids regulating AI at the federal level, states are likely to pick up the slack, as they have done with privacy laws. Some U.S. states, including California, Utah and Colorado, have already passed AI laws. Colorado's law imposes the most significant obligations on both providers and deployers of certain “high-risk” AI systems, although the governor has indicated he expects it to be amended before it goes into force in 2026. Other U.S. states, including Texas, are expected to consider draft AI laws in 2025. While there are many overlaps between these laws, there are also significant differences, including the extent to which they place liability on the developers of AI products or those that deploy them.

Existing Laws Refocused

Most countries have yet to propose comprehensive laws specifically targeting AI, but they can still use their existing enforcement powers to regulate it. Lawmakers are often reluctant to impose new laws, citing the risk of stifling innovation and the fear that the continued rapid advancement of AI technologies will quickly render any legislation out of date. Even so, many jurisdictions, including the United Kingdom, Australia, Japan, Singapore and the United States at the federal level, have noted that their existing regulatory authorities can and will be applied to these new technologies.

Many regulators in the United States and other countries have set their sights on aspects of AI within their existing remits, including data protection and privacy, equality and anti-discrimination, intellectual property (IP), product liability and consumer protection, and misrepresentation. This has led to a raft of new guidance, policy proposals and areas of regulatory focus that businesses developing or deploying AI need to consider.

Privacy regulators, especially in Europe, have led the way in regulating AI by issuing new guidance, launching investigations and bringing enforcement actions against companies over their use of AI.

Several U.S. regulators issued a joint statement earlier this year on the enforcement of civil rights, fair competition, consumer protection and equal opportunity laws as they apply to automated systems. The statement was released by a broad array of regulators, including the Consumer Financial Protection Bureau, Department of Justice, Equal Employment Opportunity Commission, U.S. Federal Trade Commission (FTC), Department of Education, Department of Health and Human Services, Department of Homeland Security, Department of Housing and Urban Development, and Department of Labor.

The FTC has been proactively focusing on AI and has publicly stated its interest in enforcement relating to advertising claims, product misuse to perpetuate fraud and scams, competition concerns, and IP and data privacy. The FTC has increasingly brought AI-related investigations and enforcement actions, including recently announced actions concerning AI-related misrepresentations. It has also announced an investigation into AI-related “surveillance pricing” (i.e., using technologies such as AI, along with personal information about consumers, to categorize individuals and set an individualized price for a product or service). In addition, financial regulators in the United States are clamping down on companies that overstate their AI capabilities to investors and consumers (so-called “AI washing”).

Regulatory Action Is Not the Whole Story

Regulatory action is not the only legal or governance pressure that businesses developing or using AI must navigate.  

Governments around the world are pouring immense effort into encouraging the development of voluntary standards for AI. Customers and other counterparties may increasingly expect compliance with these voluntary standards as evidence of AI governance maturity.

Many companies in the United States are also facing greater scrutiny over their use of AI from shareholders and nongovernmental organizations. Recently, there has been a trend of shareholder petitions seeking further detail about a company's AI strategy.

We've also seen an uptick in private litigation relating to AI, including over alleged misuse of IP rights, discrimination and shareholder actions. In addition, there are increasing numbers of AI-related class actions in the United States, including allegations of unfair and discriminatory outcomes from automated decision-making in processing insurance claims or granting mortgage applications, as well as privacy violations. We expect other jurisdictions, such as the European Union, to follow that trend.

How Boards Should Respond

Companies developing or using AI need to be prepared to navigate a rapidly changing regulatory landscape, a patchwork of varying regulatory regimes in different countries and the risk of multiple investigations by separate national regulators. Boards have a crucial role in ensuring that companies understand and consider the risks and opportunities of AI, and they have duties to do so.

Navigating regulation and other risks is primarily about asking the right questions and putting appropriate processes in place so that the development or use of AI accords with the expectations of regulators and other stakeholders, such as shareholders, staff and customers.

Boards should respond by asking whether their companies have AI governance systems in place, what the remit of those systems is, and whether staffing and resources are sufficient to support those mandates. Governance and risk management necessarily must cover more than regulatory and legal risks, but companies that ignore these risks are likely to find themselves at odds with regulators, and possibly their own customers.

Done well, an effective governance framework can also help boards seize strategic opportunities and avoid commercial missteps. It can help ensure the business is well placed to leverage AI to reduce costs, develop new products and attract investment, while also avoiding wasted resources on high-risk, low-reward projects.

Getting governance for AI right requires ensuring various matters are addressed, including:

  • Ensuring the board has the skills and resources to engage effectively with AI.
  • Understanding the company's governance structure(s), including their mandates, remits and areas of risk management.
  • Evaluating whether the company has sufficient processes to ensure the governance framework is regularly reviewed and remains flexible enough to adapt to the continuing rapid evolution of AI technology and regulation.
  • Understanding how the company has identified and mitigated major risks, and setting expectations with management as to what level of risk acceptance should be elevated to the board.
  • Discussing whether risks related to the company's development or use of AI are significant and material, and how the company plans to comply with or address new and emerging requirements.
  • Ensuring that shareholders are provided with accurate information about the company's use of AI and the limitations and risks related to that usage.

As AI becomes increasingly vital for maintaining a competitive edge, regulatory scrutiny will only intensify. Boards that act now to understand how these evolving requirements may hinder or enhance their AI usage and development will be better placed to seize the opportunities that AI presents.

About the Author(s)

Beth George

Beth George is a partner at Freshfields.

