Law designed to stop AI bias in hiring decisions is so ineffective it's slowing similar initiatives

New York's LL144 deemed too easy for employers to sidestep, but researchers hope others can learn from its mistakes

A study into the effectiveness of a New York City law targeting bias in AI hiring algorithms has found the legislation is largely ineffective.

New York City Local Law 144 (LL144) was passed in 2021, came into effect on January 1, 2023, and has been enforced since July 2023. The law requires employers using automated employment decision tools (AEDTs) to audit them annually for race and gender bias, publish those results on their websites, and include notices in job postings that they use such software to make employment decisions.
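The audits the law calls for are, in essence, statistical: under the city's enforcement rules, an auditor calculates how often the tool selects candidates from each race/ethnicity and sex category and compares each group's selection rate against that of the most-selected group, producing an "impact ratio." The Python sketch below shows that core calculation for a simple pass/fail screening tool; the category names and figures are hypothetical, and real audits also cover score-based tools and intersectional categories.

```python
from collections import Counter

def impact_ratios(candidates):
    """Selection rate and impact ratio per demographic category.

    `candidates` is a list of (category, selected) pairs, where
    `selected` is True if the tool advanced the candidate.
    """
    totals = Counter(cat for cat, _ in candidates)
    advanced = Counter(cat for cat, sel in candidates if sel)

    # Selection rate: share of each category's candidates the tool advanced.
    rates = {cat: advanced[cat] / totals[cat] for cat in totals}

    # Impact ratio: each category's rate divided by the highest rate.
    best = max(rates.values())
    return {cat: (rate, rate / best) for cat, rate in rates.items()}

# Hypothetical data: group_b is advanced half as often as group_a.
sample = [("group_a", True)] * 40 + [("group_a", False)] * 60 \
       + [("group_b", True)] * 20 + [("group_b", False)] * 80
for cat, (rate, ratio) in impact_ratios(sample).items():
    print(f"{cat}: selection rate {rate:.2f}, impact ratio {ratio:.2f}")
```

An impact ratio of 0.50 for group_b in this toy example is the kind of disparity that, under LL144, an employer would only have to publish rather than remedy.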

The study, from researchers at Cornell University, nonprofit reviews service Consumer Reports, and the nonprofit Data & Society Research Institute, has yet to be published but was shared with The Register. It found that of 391 employers sampled, only 18 had published the audit reports required under the law. Just 13 employers (11 of whom also published audit reports) included the necessary transparency notices.

LL144 "grants near total discretion for employers to decide if their system is within the scope of the law," Jacob Metcalf, a researcher at Data & Society and one of the study's authors, told The Register. "And there are multiple ways for employers to escape that scope.”

Metcalf told us that LL144 doesn't require companies to take any action if an audit shows an AEDT has led to discriminatory outcomes. That doesn't mean companies found to be using biased AEDTs won't be on the hook, however.

"Employers posting an audit showing disparate impact are open to other forms of action," Metcalf told us. "Civil lawsuits about employment discrimination can be very expensive.”

Metcalf and several of his colleagues are working on a second paper about LL144 that focuses on the experience of auditors reviewing AEDTs used by NYC companies. The Register has viewed the paper, which is currently under peer review. It reports that audits have uncovered cases of discrimination by AEDTs.

"We know from interviews with auditors that employers have paid for these audits and then declined to post them publicly when the numbers are bad," Metcalf told us. "Their legal counsel is more scared of the Equal Employment Opportunity Commission than New York City.”

New York City law slowing adoption of similar rules

Laws similar to LL144 have been considered by other jurisdictions. Those proposals have mostly stalled as lawmakers have become aware that NYC's attempt at preventing AI hiring bias hasn't been effective.

"Sponsors [of the bills] are rethinking their structure. As far as I'm aware there hasn't been any action on similar laws," Metcalf told us. California considered similar legislation in 2022, while Washington D.C. and New York state have also pondered legislation like LL144.

The European Union's AI Act, provisionally agreed to in December 2023, places AI used in the recruiting process in the Act's "high risk" category, meaning such products will need to be reviewed before reaching the market and throughout their lifecycle. The EU has yet to pass its law.

Bias in AI systems has been well documented at this point.

HR software firm Workday has even been sued over allegations that its recruitment software has been used to discriminate against Black applicants - the exact sort of thing that LL144 was designed to combat.

While LL144 has been largely ineffective, the researchers concluded that the law is a first step toward better regulation.

"Anyone working on these laws is experimenting on accountability structures - we don't know yet what works," Metcalf told us. "Nearly everything that civil society critics said [about LL144] has come true, but we learned things in this paper that other people can pick up [for future enforcement efforts].”

One of the most important things that could be learned from LL144, Metcalf said, is that the scope of what constitutes covered use of AEDT software should be broadened.

The language in LL144 is abstract, covering AEDT software only where it is "used to substantially assist or replace discretionary decision making for making employment decisions." What counts as "substantially" is open to interpretation.

If future laws created to combat AEDT discrimination are to be effective, we're told, any such qualifier on the use of AI hiring algorithms needs to be dropped.

"If a system is rendering a score, it's in scope. Period," Metcalf told us. "Giving employers discretion [in deciding if their use of AEDT falls under LL144's scope] creates perverse incentives that undermines actual accountability this law could have achieved." ®
