In the News

3 Hurdles Facing Worker-Side Attys Looking To Tackle AI Bias

Law360

August 15, 2024

The growing use of artificial intelligence in hiring and other workplace decisions has the plaintiffs bar playing catch-up, trying to figure out how and when AI is being used.

Employers often aren’t obligated to disclose when they’re using AI to evaluate applicants, and even if they are, the tools themselves are so-called black boxes. That opacity has so far put the worker side at a big disadvantage, lawyers and employee advocates told Law360.

. . .

For now, applicants must do some of the work themselves. Once they know they’ve been assessed by an AI tool and turned down, said Cohen Milstein Sellers & Toll PLLC partner Christine Webber, an enterprising applicant could start “looking at the pattern of decisions … who has been hired, and come to a reasonable conclusion that this seems to be having an adverse impact.” Webber co-chairs the plaintiff-side firm’s civil rights and employment practice.

Yoshihara added that many employers are not “doing what they should be doing in terms of testing” AI tools for biased impacts before deploying them.

But the Mobley case may prove key, Scherer said. If Workday can’t prove it tested the tool and found it fair, “I don’t see how Workday avoids letting them get under the hood to see what data was used to develop and train it, because that might shed light on whether there actually was a possibility that it had a disparate impact,” Scherer said.

Webber has brought disparate impact cases in the housing sphere, where housing providers that use information from consumer reporting agencies are required to tell rejected applicants that they were turned away based on that information.

She said the information she would want in order to build an employment case includes the tool’s programming instructions, the data it was trained on, whether that data changed over time, and whether the tool relied solely on machine learning or involved human review.

Webber would also be looking to see what parameters were used as controls for the tool, what kinds of feedback the tool received from humans during its development, how it was tested and how it performed on those tests. She’s watching closely to see what comes out of the discovery process in the Mobley case.

“Thinking through what’s the best regulation of these types of tools — which a lot of states are looking at — where are the risk factors, what are the things we need to worry about?” Webber said. “Having the chance to really get your hands in the guts of it and do a little dissection and learn how it works I just think would be interesting.”

Read 3 Hurdles Facing Worker-Side Attys Looking To Tackle AI Bias.