
California Civil Rights Council weighs rules to prevent AI discrimination in hiring, employment

The California Civil Rights Council weighed changes that would prohibit the discriminatory use of AI in certain hiring and employment practices.
California capitol building (Getty Images)

Last week, the California Civil Rights Council weighed amendments to regulations under the state’s Fair Employment and Housing Act that would prohibit the discriminatory use of artificial intelligence in certain hiring and employment practices conducted by businesses, nonprofits and governments.

The council shared the proposed rules in May, and on Thursday it heard public testimony about them at a hearing at the University of California, Berkeley School of Law. The amendments would make it a clear violation of California law to use an automated decision-making system in a discriminatory way, clarifying existing FEHA rules that protect employees, including state and local government employees, from harassment and discrimination.

In an April 2021 council hearing on algorithms and bias, experts said AI is commonly used at every stage of the hiring process, from recruitment and screening to analyzing applicant interviews and making recommendations based on them, as well as during employment itself.

“The Council has determined that the proposed amendments are not inconsistent or incompatible with existing regulations,” the council’s initial statement of reasons read. “Currently, there are no regulations expressly addressing the use of automated-decision systems to make or assist in making hiring or other employment decisions.”


Under the proposed rules, for example, employers that wish to use AI in their hiring or employment practices could not use a system that screens out, ranks or prioritizes applicants based on their religious creeds, disabilities or medical conditions unless those factors are job-related.

The rules would also prohibit employers from using certain automated-decision systems during the interview process, such as tools that analyze an applicant’s tone of voice, facial expressions or other physical traits tied to their race, national origin, gender or other protected characteristics.

The rules would also require covered employers and entities to maintain employment records, including data created by automated decision-making systems and AI training data, for at least four years, and to conduct anti-bias testing on those systems.
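The proposed rules do not prescribe a particular testing method, but one widely used form of anti-bias testing is an adverse-impact analysis based on the “four-fifths rule” from longstanding EEOC guidance. Below is a minimal sketch in Python, assuming a hypothetical table of screening outcomes with a group label and a hired/not-hired flag; the data and function names are illustrative, not part of the proposed regulations.

```python
# Minimal sketch of one common anti-bias test: an adverse-impact
# ("four-fifths rule") analysis drawn from EEOC guidance. The proposed
# rules do not mandate this method; the data here is hypothetical.
from collections import defaultdict

def selection_rates(records):
    """Compute each group's selection rate (hired / total applicants)."""
    totals, hired = defaultdict(int), defaultdict(int)
    for group, was_hired in records:
        totals[group] += 1
        hired[group] += int(was_hired)
    return {g: hired[g] / totals[g] for g in totals}

def adverse_impact_ratios(records):
    """Compare each group's selection rate to the highest-rate group.

    A ratio below 0.8 (the "four-fifths rule") is a common red flag
    that a screening tool may be having a disparate impact.
    """
    rates = selection_rates(records)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening outcomes: (group label, hired?)
records = (
    [("A", True)] * 48 + [("A", False)] * 52   # group A: 48 of 100 hired
    + [("B", True)] * 30 + [("B", False)] * 70  # group B: 30 of 100 hired
)

for group, ratio in adverse_impact_ratios(records).items():
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"group {group}: impact ratio {ratio:.2f} ({flag})")
```

In practice, audits of automated decision-making systems typically pair this kind of ratio check with statistical significance testing, since a single ratio computed on a small applicant pool can be noisy.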

The California Civil Rights Council said it consulted federal guidance, such as the White House’s Blueprint for an AI Bill of Rights, published in October 2022, and the Equal Employment Opportunity Commission’s guidelines on algorithmic fairness. Both note that using AI in settings such as employment can result in discrimination against minority groups and further systemic inequality.

The council said the proposed rules would also benefit the state by providing guidance that helps employees, applicants, employers and other covered entities better understand their rights and obligations, decreasing the number of employment-related FEHA violations and reducing litigation costs and the burden on the courts.


The council said employers facing alleged violations of the proposed rules could defend their use of the systems by demonstrating that the evaluated criteria were job-related and necessary for business, and that no less-discriminatory alternatives were available.

The California legislature is currently considering a bill that would similarly rein in the use of automated systems in employment processes. AB 2930 would require deployers of automated decision tools used to make a “consequential decision,” such as in employment or housing, to notify the applicant that an automated decision tool is being used. The bill would prohibit automated decision tools from being used if they might enable algorithmic discrimination.

Like the FEHA amendments, the bill would require employers and developers to perform annual impact assessments, which would be provided to the California Privacy Protection Agency, and to take measures to ensure their systems are not engaging in algorithmic discrimination.

Written by Keely Quinlan

Keely Quinlan reports on privacy and digital government for StateScoop. She was an investigative news reporter with Clarksville Now in Tennessee, where she resides, and her coverage included local crimes, courts, public education and public health. Her work has appeared in Teen Vogue, Stereogum and other outlets. She earned her bachelor’s in journalism and master’s in social and cultural analysis from New York University.
