NYC May Regulate Hiring Algorithms. Here’s What That Means.

New York City could soon become one of the first U.S. cities to create new rules for the use of algorithms in hiring. Built In spoke with AI expert Julia Stoyanovich about what the proposal does well and where it falls short.

Written by Ellen Glover
Published on Jan. 14, 2021

The New York City Council is considering a bill that would create new rules for the use of hiring algorithms, making it one of the first cities in the United States to attempt to regulate this increasingly common, yet misunderstood, hiring practice.

In short, the bill would require the makers of these tools to complete annual audits to make sure their technology isn’t biased. It also would require that companies notify their job candidates when they have been assessed by these tools.

“Given some of the problems that have arisen with this kind of technology, no matter how well-intentioned it is, people have a definite right to know if it has been used in the hiring process,” Robert Holden, chair of the City Council’s Committee on Technology, told Built In via email. “It’s a major decision that affects a person’s life and livelihood. We have to make sure that [automated decision systems] tech doesn’t disproportionately impact people based on factors like age, gender, race or religion.”

The Committee on Technology is helping oversee what goes into the bill, which was originally proposed last February. A committee spokesperson also told Built In via email that the bill could still see changes before it goes to a vote by the full council, though it's hard to say when that might be. If passed, the legislation could take effect as early as January 2022, according to a recent Wired article.

This news comes at a time when several state and city governments around the country have been working to regulate the use of predictive algorithms. For instance, more than a dozen U.S. cities including San Francisco and Boston have banned government use of facial recognition software. Massachusetts nearly became the first state to do so in December, but Governor Charlie Baker struck the bill down. Meanwhile, 10 U.S. senators called on the Equal Employment Opportunity Commission to create rules on AI hiring tools in order to prevent bias going forward.

Julia Stoyanovich, the director of New York University’s Center for Responsible AI, says these efforts and this proposed bill in NYC are meaningful steps in starting a long overdue conversation about AI — both the benefits and the risks.

“I think that it’s very important for us to understand that, if we want a particular technology in society, that we know how to keep it in check,” Stoyanovich told Built In. “To use technology just because it exists doesn’t make sense. I think that it’s actually harmful. So I hope that we have more nuanced conversations about this.”

Stoyanovich, who has testified in support of NYC’s proposed bill, has already taken steps to start the conversation. Back in December, she and her colleagues at the Center for Responsible AI published a paper documenting their public engagement activities thus far, and she is helping make a comic book on data responsibility called “Mirror, Mirror.” Volume one is currently available in English and Spanish and she says a second volume will be released soon.

Built In spoke with Stoyanovich about these efforts and her thoughts on the city’s proposed legislation — both what it’s doing well and where it falls short. The conversation has been edited for length and clarity.


* * *

What causes a hiring algorithm to discriminate?

Algorithms — hiring algorithms in particular — are just a reflection of how our society has been working. It’s a reflection that has been scaled up and automated.

More concretely, hiring algorithms discriminate because of the objectives they are built with, and because of the way in which they are built. The way they are built is that we take data that reflects the world as it has been so far, and we use that data as a kind of surrogate for experience, to shape how a predictive algorithm is going to make predictions. When we make predictions based on our experiences, we cannot generalize beyond what we have seen — if we have only seen white swans, we will never guess that black swans also exist.

This sort of closed-world assumption is something that the machine learning algorithms used ubiquitously in hiring are also subject to. If you have never seen, for example, Black women in CEO positions, and if that’s not reflected in your data, you’re not going to predict that a person with that demographic profile is going to do well in that position. So you’re limited by your experience, and that experience is in the data.
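To make the "experience is in the data" point concrete, here is a minimal, hypothetical sketch (synthetic data and invented variable names, not drawn from any actual hiring tool) showing how a classifier trained on biased historical hiring decisions reproduces that bias in its own predictions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic candidates: a single "qualification" score plus a protected group flag.
qualification = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Biased historical labels: past decisions favored group 0 at equal qualification.
past_hired = (qualification + 0.8 * (group == 0) + rng.normal(scale=0.5, size=n)) > 0.5

# Train on the historical record, then look at what the model predicts.
X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, past_hired)
pred = model.predict(X)

for g in (0, 1):
    print(f"predicted hire rate for group {g}: {pred[group == g].mean():.2f}")
```

Because group membership correlates with past outcomes in this toy data, the model carries the historical disparity forward even though it was never told to discriminate.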

Of course, the question then is: What can one do?

I’m curious, has there been another state or city that’s tried to do what New York City is trying to do?

To the best of my knowledge, no. I think this is actually a very ambitious attempt.

This New York City bill has two parts. One part is that there needs to be a bias audit for these algorithmic hiring tools when they’re sold, and that it needs to happen periodically. The second part is that individuals need to be notified that they were screened by an algorithm. Also, they need to be told what features and characteristics of their application were used to make a determination as to whether to hire them. This second part, to the best of my knowledge, really has not been explored in the scope of algorithmic systems.

Generally, when we talk about discrimination in hiring, there is this understanding that when decisions are made by human HR managers, they need to be based on criteria that the managers can show are job-relevant. But we have not yet attempted to operationalize this principle for algorithmic decision making — explaining specifically which criteria are used and why. Why are these criteria job-relevant? This is something that is very important, and a very ambitious part of the bill.

As you mentioned earlier, a key part of this legislation has to do with accountability. Do you think this bill goes far enough in holding the companies who make and use this kind of tech accountable?

I don’t think that we can have an accountability structure that is robust if we target only one set of stakeholders here. It absolutely is up to companies that are producing this software to make sure that they’re doing their job. But it’s also up to the companies that are using this software to use it appropriately.

But we cannot really hold these systems accountable until job seekers themselves participate in this oversight.

That’s an interesting way to look at it. In order to better hold these companies accountable for their use of these algorithms, people need to actually know that these algorithms are being used. I think a lot of people don’t know.

They don’t know. Unless we compel employers to disclose this to us, they are not going to disclose.

You seem to be in support of this bill, but do you see any places where it falls short?

I’m absolutely in support of this bill. The reason, again, is that we are using these tools, but we’re not regulating them appropriately, or almost at all. We just cannot have a dangerous set of practices proliferate in society and go unchecked.

Where it falls short is in being specific. In being concrete. I think that we need to be very careful to make sure that it does not become a rubber stamp, where a company says, “Yeah, I’m doing my own internal bias audit, here are the results, I’m meeting the lowest threshold for non-discrimination and, therefore, my software is good to go.”
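For a sense of what a "lowest threshold" can look like in practice, here is a minimal, hypothetical sketch of one widely cited baseline, the four-fifths rule for adverse impact, which compares selection rates between groups. This is not how the bill defines an audit, and the function names and numbers below are invented purely for illustration:

```python
# Hypothetical illustration only: one widely cited baseline check, the
# "four-fifths rule" (selection-rate ratio of at least 0.8 between groups).
# The bill does not define audits this way; the numbers below are invented.

def selection_rate(decisions):
    """Share of candidates in a group who were advanced by the screening tool."""
    return sum(decisions) / len(decisions)

def four_fifths_check(group_a_decisions, group_b_decisions, threshold=0.8):
    """Return the selection-rate ratio and whether it clears the threshold."""
    rate_a = selection_rate(group_a_decisions)
    rate_b = selection_rate(group_b_decisions)
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    return ratio, ratio >= threshold

group_a = [1] * 60 + [0] * 40   # 60% of group A advanced
group_b = [1] * 45 + [0] * 55   # 45% of group B advanced
ratio, passes = four_fifths_check(group_a, group_b)
print(f"selection-rate ratio: {ratio:.2f}; clears 0.8 threshold: {passes}")
```

A single ratio like this, computed internally and reported by the vendor itself, is exactly the kind of minimal check that could become the rubber stamp Stoyanovich warns about.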

We need to make sure that, if audits become a component of this accountability structure, they are done by a third party, and that they are done according to criteria that we all — not just the companies — establish together. There needs to be public input; there needs to be public deliberation.

We need to bring people to the table who actually are members of these groups that have been marginalized, to hear what their concerns are so that we know what to look out for.

This bill will really live or die by how we define bias, how we define audit, and who does that audit.

