EEOC Commissioner: AI system audits might not ensure compliance with federal anti-bias laws

The emerging patchwork of laws and regulations governing the use of AI in finding, hiring, and promoting workers serves as a reminder that, regardless of the technology in use, a company remains liable for federal civil rights violations, says the EEOC's Keith Sonderling.

Keith Sonderling, commissioner of the US Equal Employment Opportunity Commission (EEOC), has for years been sounding the alarm about the potential for artificial intelligence (AI) to run afoul of federal anti-discrimination laws such as the Civil Rights Act of 1964.

It was not until the advent of ChatGPT, Bard, and other popular generative AI tools, however, that local, state, and national lawmakers began taking notice — and that companies became aware of the pitfalls posed by a technology that promises efficiency by automating business processes.

Instead of the speeches he typically makes to groups of chief human resource officers or labor and employment lawyers, Sonderling has found himself in recent months talking more and more about AI. His focus has been on how companies can stay compliant as they hand over more of the responsibility for hiring and other aspects of corporate HR to algorithms that are vastly faster than humans and capable of parsing thousands of resumes in seconds.

Computerworld spoke with Sonderling about how companies can deal with the collection of local, state, federal, and international laws that have emerged to ensure AI's potential biases are exposed and eliminated. The following are excerpts from that interview:

EEOC Commissioner Keith Sonderling

How have you and the EEOC been involved in addressing AI’s use in human resources and hiring? "I’ve been talking about this for years, but now everyone wants to hear about what I’ve been talking about.

"We’re the regulating body for HR. Usually, the demands on the EEOC commissioner are to talk about workplace trends, workplace discrimination and all those issues. With AI impacting HR specifically, now there’s a lot of interest in that — not just in the terms of the traditional lawyer, or government affairs aspect but more broadly in terms of the technology as a whole.

"It's a technology most laypeople can understand because everyone’s applied for a job, everyone’s been in the workforce. If you’re going to be in the workforce you’re going to be subject to this technology, whether it’s through resume screening…or more advanced programs that determine what kind of worker you are or what positions you should be in. This extends all the way to automating the performance management side of the house. Really, it’s impacted all aspects of HR, so there’s a lot of demand in that.

"More broadly, because I was one of the government officials to talk about this early on, now I talk about broad AI governance for corporations and what they can be doing to implement best practices, policies and procedures internally.

What is your opinion on how various nations and localities are addressing AI regulation? China has moved quickly because it sees both the threat posed by AI and its potential; it wants to get its hooks into the tech. Who’s doing the best job? “That’s why it’s so interesting thinking about how AI is going to be regulated and the different approaches different countries are taking; [there's] the approach the United States broadly is taking, and you’re also seeing cities and states try to address this at the local level. The biggest one is the EU, with its proposed AI Act and its risk-based approach.

"To your point in the debate about regulating AI, will anyone build systems there? The UK is saying come to us because we’re not going to overregulate it. Or are tech companies just going to go develop it in China and forget about all the others."

Why is New York’s Local Law 144 important? “Taking a step back: for cities, states, foreign countries — for anyone who wants to take up the very complex area of algorithmic decision-making laws and try to regulate them — they obviously should be committed, because it not only takes a certain level of expertise in the underlying use of the tool, but also the ability to understand how it works and how it will apply to their citizens.

"What we’re starting to see is a patchwork of different regulatory frameworks that can sometimes cause more confusion than clarity for employers who operate on a national or even international level. I think with a lot of these HR tools, and you see who the early adopters are or who they’re marketed to, it’s generally for larger companies with bigger work forces. Now, I’m not saying there aren’t AI tools made for smaller and mid-sized businesses, because there certainly are. But a lot of it is designed for [those who] need hiring scaled or promotions scaled and need to make employment decisions for a larger workforce. So, they’re going to be subject to these other various requirements if they’re operating in various jurisdictions."

How should companies approach compliance, considering some laws are local, some are state, and some are federal? "What I’m trying to warn companies using these products, when it comes to compliance with these laws, or when they operate in places where there are no laws on the books because legislators don’t understand AI, is this: take a step back. The laws we enforce here at the EEOC have been around since the 1960s. They deal with all aspects of employment decisions: hiring, firing, promotions, wages, training, benefits — basically, all the terms and conditions of employment. Those laws protect against the big-ticket items: race, sex, national origin, pregnancy, religion, LGBT status, disability, age.

"They have been regulated and they’ll continue to be regulated by federal law. So you can’t lose sight of the fact that no matter where you are and regardless of whether your state or city has engaged or will be engaging in algorithmic discrimination standards or laws, you still have federal law requirements.

"New York is the first to come out and broadly regulate employment decisions by AI, but then it’s limited to hiring and promotion. And then it’s limited to sex, race and ethnicity for doing those audits before requiring consent from employees or doing an audit and publishing those audits. All those requirements will only be for hiring and promotions.

"Now, there’s a lot of hiring and promotion going on using these AI tools, but that doesn’t mean it you’re an employer that’s not subject to New York’s Local Law 144 that you shouldn’t be doing audits to begin with. Or if you’re saying, 'OK, I have to do this because New York is requiring me to do [a] pre-deployment audit for race, sex and ethnicity,' well, the EEOC is still going to require compliance with all the laws I just mentioned across the board, regardless."

So, if your AI-assisted applicant tracking system is audited, should you feel secure that you're fully compliant? "You shouldn’t be lulled into a false sense of security that your AI in employment is going to be completely compliant with federal law simply by complying with local laws. We saw this first in Illinois in 2020, when the state came out with its law on facial recognition in employment, which basically said that if you’re going to use facial recognition technology during an interview to assess whether candidates are smiling or blinking, then you need to get consent. They made it more difficult to do [so] for that purpose.

"You can see how fragmented the laws are, where Illinois is saying we’re going to worry about this one aspect of an application for facial recognition in an interview setting. New York is saying our laws are designed for hiring and promotion in these categories. So, OK, I’m not going to use facial recognition technology in Illinois, and I’ll audit for hiring and promotion in New York. But, look, the federal government says you still have to be compliant with all the civil rights laws.

"You could have been doing this since the 1960s, because all these tools are doing is scaling employment decisions. Whether the AI technology is making all the employment decisions or one of many factors in an employment decision; whether it’s simply assisting you with information about a candidate or employer that otherwise you wouldn’t have been able to ascertain without advanced machine learning looking for patterns that a human couldn’t have fast enough. At the end of the day, it’s an employment decision and at the end of the day, only an employer can make an employment decision."

So, where does the liability lie for ensuring AI-infused or machine learning tools are compliant? "All the liability rests with the employer, in the same way it rested with HR using pencil and paper back in the 1960s. You cannot lose sight of the fact that these are just employment decisions being made faster, more efficiently, with more data, and potentially with more transparency. But [hiring] has been regulated for a long time.

"With the uncertain future of federal AI legislation and where it may go, where the EU’s legislation may go, and as more states take this on — California, New Jersey, and New York State wants to get involved — you can’t just sit back and say well, there’s not certainty yet in AI law. You can’t think there’s no AI regulatory body that a senator wants to create; there’s no EU law that will require me to do one, two, three before using it, and think, 'We can just wait and implement this software like we do other software.' That’s just not true.

"When you're dealing with HR, you’re dealing with civil rights in the workplace. You’re dealing with a person’s ability to enter and thrive in the workforce and provide for their family, which is different from other uses. I’m telling you that are laws in existence and will continue to be in existence that employers are familiar with, we just need to apply them to these HR tools in the same way we would with any other employment decision."

Do you believe New York’s Local Law 144 is a good baseline or foundation for other laws to mimic? “I think Local Law 144 is raising employers’ awareness of their ability to audit these employment tools. I think it’s a good thing, in the sense that employers in New York who are hiring are now being forced to do an audit. And it raises awareness that, whether or not you’re being forced to do it, auditing is good compliance.

"Just because a local government is forcing you to do an audit, doesn’t mean you cannot do it yourself. In the sense that employers are now recognizing and investing in how to get AI compliant before it makes a decision involving someone’s livelihood, it’s developing this framework of how to audit AI pre-deployment, post-deployment and how [to] test it. How do we create the framework for AI broadly, whether it’s being used in employment, housing, or credit? It gets companies more familiar with not only spending the resources needed to build these systems or buy them, but the implementation side of it has a compliance aspect to it.

"I think it’s raising awareness in a positive way of performing audits to prevent discrimination. If you find the job candidate recommendation algorithm has a factor in there that’s not necessary for the job, but instead is eliminating a certain class of workers who are qualified but excluded because of age, or race, or national origin, or whatever the algorithm is picking up — if you can see that and prevent it and tweak it, whether by changing the job description or doing more recruiting for certain areas to ensure you have the inclusive job applicant pool or just ensuring the job parameters are necessary — that’s preventing discrimination.

"A big part of our mission here at the EEOC, even though people look at us as an enforcement agency — which we are — is to prevent discrimination and promote equal opportunity in the workplace. Doing these audits in the first place can prevent that."

What makes an AI applicant tracking system problematic in the first place? "A true ATS system is just going to be a repository of applications and a way to look at them. It’s what you’re doing with that data set that can lead to problems: how you’re implementing the AI on that data set, what characteristics you’re looking for within that pool, and how it gets you the flow of candidates. That funnel from the ATS to who you’re going to select for the job is where AI can be helpful. Many times, when we’re looking at a job description or a job recommendation, or the requirements for that job, those in some cases haven’t been updated in years or even decades. Or a lot of times they’ve just been copied and pasted from a competitor. That has the potential to discriminate, because you don’t know if you’re copying a job description that may have historical biases.

"The EEOC is going to look at that and simply say, what were the results? If the results were discrimination, you have the burden of going through every aspect of the characteristics you put into that ATS and can you prove that’s necessary for the job in that location based upon the applicant pool?

"So, it’s not as much the ATS systems that can be problematic, but what machine learning tools are scanning the ATS systems and if it wasn’t a diverse pool of applicants in the first place. That’s a long-winded way of asking: how are you getting into that ATS system and then once the applicant is in that system, what are they being rated on? You can see how historically biases can prevent some [people] from getting into those systems in the first place, and then once you’re in the ATS system, and the next level is what skills or recommendations are not necessary but are discriminatory?"
