The regulatory landscape surrounding the use of AI in the workplace is evolving rapidly. As AI systems become more integrated into organizational processes, it is crucial for leadership teams to understand the risks and challenges of adopting AI.
Leadership teams should be aware that the use of AI triggers requirements under both new and existing laws. The following are key legal considerations for organizations deploying AI systems in the workplace.
AI Systems and Employment Discrimination
AI has the potential to enhance hiring processes by improving efficiency and objectivity. However, without proper safeguards, AI may unintentionally lead to discrimination in recruitment and hiring.
Several existing U.S. laws prohibit employment discrimination, including Title VII of the Civil Rights Act of 1964, the Age Discrimination in Employment Act, the Americans with Disabilities Act, the Equal Pay Act and Title II of the Genetic Information Nondiscrimination Act.
These laws apply to all aspects of employment, from job postings and recruiting processes to managing and firing employees. Their protections also extend to employment decisions made with AI tools.
New state laws are also expanding protections for employees and job applicants, specific to AI decision-making.
Illinois. In Illinois, the Illinois Human Rights Act has been amended to address AI, with the amendments taking effect on January 1, 2026. The Act prohibits discrimination against protected classes based on “race, color, religion, sex, national origin, ancestry, age, order of protection status, marital status, mental or physical disability, military status, sexual orientation, pregnancy or unfavorable discharge from military service.”
Building upon this anti-discrimination law, the amended act provides that employers may not use AI systems that have a discriminating effect on employees or job applicants based on any of these protected characteristics.
The amended act also requires employers to provide notice to employees, interns and job applicants when AI is used in employment decision-making, including recruitment, selection, hiring and promotion.
Colorado. Colorado has similar legislation that is, in part, aimed at employment decisions. The Colorado Artificial Intelligence Act includes parameters around “high-risk” systems, which include systems that make “consequential decisions.” These decisions include those related to employment or employment opportunities.
A company using a high-risk system must fulfill certain requirements, including providing notice, documenting its risk management program, implementing governance measures to address possible biases and conducting impact assessments. These systems are subject to additional requirements if they produce an adverse employment decision, including disclosures to the affected employee or applicant.
New York City. Both the Illinois and Colorado laws follow New York City Local Law 144, signed in 2021. Local Law 144 applies to employers and employment agencies in New York City that use “automated employment decision tools” to screen candidates or employees.
Local Law 144 also requires that the AI tool undergo an independent bias audit no more than one year before its use. Employers must also publish a summary of the audit results on their website.
Like the Illinois and Colorado laws, Local Law 144 requires the employer to provide notice to the job applicant or employee that an automated employment decision tool is being used.
AI and Use of Likeness
New AI laws are also shaping protections around digital replicas, requiring consent and transparency when using a person's likeness or voice for commercial purposes.
California. California's Assembly Bill 2602 (AB 2602) establishes legal requirements for employers and their employees with respect to the use of an employee's “digital replica.” The law goes into effect on January 1, 2025.
Under this law, the definition of “digital replica” is broad. It includes a “computer-generated, highly realistic electronic representation” that can be readily identified as the voice or visual likeness of an individual. Under this definition, a digital replica could mean anything from an AI-generated avatar to a synthetic voice, or any other form of realistic replication used to mimic a person's professional presence.
While these tools are invaluable in areas like marketing, entertainment and media, this law mandates strict conditions under which these replicas can be created. Any contract involving the creation or use of a digital replica must contain clear, specific language outlining its intent and scope. Additionally, employers may not penalize workers for refusing to agree to these terms.
Under AB 2602, directors and leaders looking to implement AI-driven digital replicas must recognize that consent for using an employee's likeness cannot be presumed. Instead, informed, voluntary consent is required, and in certain circumstances the employee must be represented by legal counsel or a labor union when agreeing to such terms.
Leaders should focus on transparent communication. Contracts concerning the use of digital replicas of an employee should clearly outline how AI will be used, and this information should be clearly presented to the employee.
AI and Biometric Information
When it comes to AI in the workplace, biometric information cannot be ignored. Employers often use biometric information, sometimes unintentionally, through AI facial recognition, identification, access control and performance monitoring technologies. While these technologies can enhance security and boost efficiency, they can also raise concerns under existing biometric information laws.
California and U.S. State Laws. As of the date of publication, 20 states have comprehensive privacy laws that require notice and opt-out rights with respect to personal information. In addition, the majority of these laws allow consumers to limit the use of their sensitive personal information, such as biometric information. Most states also have data breach notification laws that cover biometric information.
In California, the California Consumer Privacy Act (CCPA) includes a definition for “biometric information” that includes one's “physiological, biological or behavioral characteristics,” including information about their DNA, used to identify an individual. In Colorado, the Colorado Privacy Act defines “biometric identifiers” as “data generated by technological processing, measurement or analysis of an individual's biological, physical or behavioral characteristics” that can be used for identifying an individual. Connecticut's Consumer Data Privacy and Online Monitoring Act and Virginia's Consumer Data Protection Act define “biometric data” similarly, as does Delaware's Personal Data Privacy Act.
Illinois. In Illinois, the Biometric Information Privacy Act (BIPA) restricts how a “private entity” can collect, use or otherwise disclose “biometric identifiers or biometric information.” The BIPA requires employers to obtain explicit written consent from employees before collecting their biometric information. It also requires that employers inform employees of the purpose for which their information is collected and how long it will be used, among other things.
Texas and Washington. In Texas and Washington, similar laws are on the books. Texas's Capture or Use of Biometric Identifiers (CUBI) Act prohibits an employer from capturing a biometric identifier, including a “retina or iris scan, fingerprint, voiceprint, or record of hand or face geometry,” for a commercial purpose unless the individual is informed beforehand and consents.
Washington's Biometric Identifiers Law (BIL) also regulates biometric information in a commercial context. Under BIL, a “biometric identifier” means “data generated by automatic measurements of an individual’s biological characteristics, such as a fingerprint, voiceprint, eye retinas, irises, or other unique biological patterns or characteristics” used to identify a specific person.
While BIL's commercial requirements differ from CUBI's, both laws require limiting the use of biometric data to what is necessary and ensuring that the information is securely stored.
Directors and leadership teams should be especially vigilant when using AI systems that collect or analyze this biometric information. Because this information involves sensitive, personal identifiers, misuse or inadequate protection of biometric data can lead to liability and legal issues.
When using AI to collect or process biometric information, leaders should work to ensure compliance with regulations, obtain informed consent and prioritize secure data-handling practices.
Proposed Rulemaking of Additional AI Regulations
In addition to existing laws on AI and biometric information, there are draft amendments to these laws in California and Colorado.
The draft amendments to the CCPA were discussed at a public hearing on November 8, where the California Privacy Protection Agency's board voted 4-1 to move forward with formal rulemaking. As written, these amendments apply to a wide range of AI systems and require impact assessments and opt-out availability.
Similarly, draft amendments to the Colorado Privacy Act Rules were discussed at a rulemaking hearing on November 7. These amendments would build on the Act's requirements and, if adopted, would require employers to provide notice at or before the collection or processing of any biometric identifiers.
As AI continues to play a transformative role in the workplace, leadership teams must be proactive in understanding the related legal concerns. By integrating AI responsibly and staying informed about the legal landscape, organizations can mitigate risk, protect employee rights and harness the potential of AI technologies.