February 15, 2023
The U.S. Equal Employment Opportunity Commission held a public hearing Jan. 31 examining the implications of artificial intelligence technology for equal employment opportunity.
According to EEOC Chair Charlotte A. Burrows:
The goals of this hearing were to both educate a broader audience about the civil rights implications of the use of these technologies and to identify next steps that the Commission can take to prevent and eliminate unlawful bias in employers’ use of these automated technologies.
During the hearing, panelists Pauline Kim, professor at Washington University School of Law in St. Louis, and Manish Raghavan, assistant professor at Massachusetts Institute of Technology, testified about the prevalent misuse of the four-fifths rule in evaluating whether AI selection tools cause adverse impact.
We agree, and we believe this shows that the EEOC should revise its guidelines to abandon the rule.
Whether or not the EEOC eliminates the rule, courts have long made clear that the four-fifths rule is not the test for adverse impact that they will apply.
As the U.S. Court of Appeals for the Ninth Circuit described in Stout v. Potter in 2002, under the four-fifths rule:
[A] selection practice is considered to have a disparate impact if it has a “selection rate for any race, sex, or ethnic group which is less than four-fifths (4/5) (or eighty percent) of the rate of the group with the highest rate.”
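To make the arithmetic in the quoted definition concrete, the short sketch below computes the impact ratio for a hypothetical applicant pool. The group labels, applicant counts, and selection rates are invented for illustration only; they come from neither the hearing testimony nor the case law.

```python
# A minimal sketch of the four-fifths comparison quoted above.
# All numbers are hypothetical and chosen only to illustrate the arithmetic.

def selection_rate(hired: int, applicants: int) -> float:
    """Selection rate = number selected / number who applied."""
    return hired / applicants

# Hypothetical applicant pool: Group A (highest-rate group) and Group B.
rate_a = selection_rate(hired=60, applicants=100)   # 0.60
rate_b = selection_rate(hired=45, applicants=100)   # 0.45

# The impact ratio: the lower group's rate divided by the highest group's rate.
impact_ratio = rate_b / rate_a                      # 0.75

# Under the four-fifths rule of thumb, a ratio below 0.80 is treated as
# evidence of adverse impact.
print(f"Impact ratio: {impact_ratio:.2f}")
print("Flagged under the four-fifths rule" if impact_ratio < 0.8 else "Not flagged")
```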
The rule originates from the Uniform Guidelines on Employee Selection Procedures, which were originally published in 1978. By its own terms, the rule speaks only to what the federal enforcement agencies will generally regard as evidence of adverse impact.
It’s been nearly 45 years since the four-fifths rule was published by the EEOC, but even at that time, courts largely looked to more formal and sophisticated statistical tests for evidence of adverse impact. Since then, the consensus, from the U.S. Supreme Court on down and among leading authorities, has been that the four-fifths rule of thumb is not the governing test and is less probative than formal statistical analysis.
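The gap between the rule of thumb and formal statistical analysis is easy to see with hypothetical numbers. The sketch below uses an ordinary two-proportion z-statistic (courts often describe the result in "standard deviations" rather than p-values) and shows a small sample that fails the four-fifths rule without being statistically significant, alongside a large sample that clears the rule while being highly significant. All figures are invented for illustration.

```python
# A minimal sketch (hypothetical numbers) of how the four-fifths rule of thumb
# and a formal statistical significance test can point in opposite directions.

import math

def impact_ratio(hired_b: int, n_b: int, hired_a: int, n_a: int) -> float:
    """Lower group's selection rate divided by the highest group's rate."""
    return (hired_b / n_b) / (hired_a / n_a)

def two_proportion_z(hired_b: int, n_b: int, hired_a: int, n_a: int) -> float:
    """Standard two-proportion z-statistic: the gap in rates, in standard deviations."""
    p_b, p_a = hired_b / n_b, hired_a / n_a
    pooled = (hired_a + hired_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Small sample: 5 of 10 Group A applicants hired vs. 3 of 10 Group B applicants.
r1, z1 = impact_ratio(3, 10, 5, 10), two_proportion_z(3, 10, 5, 10)
print(f"Small sample: ratio {r1:.2f} (fails 4/5), z = {z1:.2f} (not significant)")

# Large sample: 5,000 of 10,000 vs. 4,100 of 10,000.
r2, z2 = impact_ratio(4100, 10000, 5000, 10000), two_proportion_z(4100, 10000, 5000, 10000)
print(f"Large sample: ratio {r2:.2f} (clears 4/5), z = {z2:.1f} (highly significant)")
```

With only 10 applicants per group, the 0.60 ratio fails the four-fifths test yet sits less than one standard deviation from parity; with 10,000 applicants per group, a 0.82 ratio clears the test even though the disparity exceeds 12 standard deviations.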
As the consensus rejected reliance on the four-fifths rule in favor of statistical significance tests, some proponents of the four-fifths rule argued that Title VII of the Civil Rights Act required showing practical significance in addition to statistical significance, and repackaged the four-fifths rule as a test of practical significance.
While some courts have considered practical significance, the four-fifths rule has not been generally adopted as a test for practical significance, and many courts did not consider practical significance a requirement at all.
Recently, the Supreme Court’s 2009 decision in Ricci v. DeStefano appeared to foreclose reading a practical significance requirement into Title VII, stating that a prima facie case of disparate impact requires plaintiffs to show a statistically significant disparity and nothing more.
The recent push to incorporate a requirement of practical significance into disparate impact analysis, and to make the four-fifths rule the test to show satisfaction of such a requirement, has come in large part from those wanting to enable AI selection devices to be accepted without violating disparate impact rules.
Because AI selection tools provide fertile ground for disparate impact claims, and because vendors want to assure the employers they hope to recruit as customers that their products will not result in Title VII violations, those vendors have cited the four-fifths rule as the basis for claiming that their products will keep employers compliant with Title VII.
Simultaneously, there has been a push to define disparate impact liability under federal and state laws in relation to the four-fifths rule. For example, a proposed 2020 California Senate bill would have deemed disparate impact indicated where the selection rate for any protected class making up 2% or more of the total applicant population is less than four-fifths of the selection rate for the class with the highest rate and where that difference in selection rates is statistically significant, and it would have provided a safe harbor for any selection tool that cleared the four-fifths rule.
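As a rough sketch of how such a statutory test would operate, the function below mirrors the bill’s three conditions as we describe them above: a minimum 2% share of the applicant pool, a selection-rate ratio below four-fifths, and a statistically significant difference. The function and parameter names are our own shorthand, not language from the bill.

```python
# A rough sketch (not statutory text) of the compound test the proposed
# California bill described above would have codified.

def bill_would_indicate_disparate_impact(
    class_share_of_applicants: float,  # protected class's fraction of the applicant pool
    class_rate: float,                 # selection rate of the protected class
    highest_rate: float,               # selection rate of the highest-rate class
    difference_is_significant: bool,   # result of a formal statistical test
) -> bool:
    if class_share_of_applicants < 0.02:
        return False                       # class below the 2% threshold is not counted
    if class_rate >= 0.8 * highest_rate:
        return False                       # safe harbor: the tool clears four-fifths
    return difference_is_significant       # also requires statistical significance
```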
The push to anoint the four-fifths rule as the test for disparate impact is not only unsupported by the case law, but it is also bad policy. The four-fifths rule is a proclamation that practices that select members of a disadvantaged group at a rate up to 20% lower than the most favored group’s rate should be accepted without further scrutiny.
Read the complete article on Law360.