Introduction
If you have a job, you are no stranger to the use of technology in hiring. Chances are that you applied to a job on the internet, using a resume that you developed from an online template. The company you applied to likely used an applicant tracking system (ATS) to organize your application materials and track your progress through the hiring cycle. And there was probably automation, perhaps even artificial intelligence (AI), involved at some stage. A 2022 survey by the Society for Human Resource Management (SHRM) found that the use of AI to support HR-related activities is increasing; of the organizations using such technology, 79% focus on automation for recruitment and hiring. Yet as common as automated technology has become in hiring, AI tools raise concerns about algorithmic bias, discrimination, and a lack of transparency. As a result, lawmakers have begun implementing policies to regulate the use of such automation in hiring to ensure fairness, equity, and accountability.
New York City Local Law 144 (NYC LL 144) is a prime example of this trend, as it sets out comprehensive regulations to govern automated employment decision tools (AEDTs). This article will delve into the implications of NYC LL 144, including its historical context, potential advantages and pitfalls, and recommendations for future legislative actions based on Industrial-Organizational (IO) Psychology best practices.
A brief history of technology in hiring
Over the past four decades, technology has revolutionized the way we hire: from posting jobs, to screening applicants, to tracking candidates in an applicant tracking system, to emailing the candidate a formal offer. However, some employers and candidates are skeptical about the use of technology in hiring, and, in some cases, that skepticism is well placed. It's important to recognize that hiring tools, whether driven by human review or artificial intelligence, can introduce bias into the hiring process. As a recent example, just a few years ago Amazon ditched an AI recruiting tool after finding it was biased against women. However, we cannot place the blame wholly on technology. Research has shown that humans bring numerous biases to the hiring process, including biases around gender, attractiveness, and race. If humans are the ones developing the technology behind these tools, it follows that some of these biases may be unintentionally built in. However, all hope is not lost. AI, when developed thoughtfully, can actually mitigate bias in hiring: it can be used to write gender-neutral job descriptions, systematically screen resumes, objectively measure candidates' skills, and much more. AI tools can also be systematically analyzed for bias, with clear bias-related metrics tied directly back to the tools. Given the growing use of technology in hiring and its tumultuous history, it is no surprise that policy experts have pushed for regulation. NYC LL 144 is one of the first major laws that seeks to regulate the use of automated tools in hiring.
The origins of NYC LL 144
Although enforcement of NYC LL 144 officially began in July 2023, its history goes back several years. The law was first proposed in 2020 and was passed by the New York City Council in late 2021. It underwent many iterations over the three years it took to go from proposal to enforcement, with rulemaking led by the NYC Department of Consumer and Worker Protection (DCWP). These iterations included changes to the verbiage and scope, shaped by policy experts and by feedback given at public hearings held in late 2022 and early 2023. Following these sessions, the DCWP finalized the rules in April 2023 and set the enforcement date for July 5, 2023. The law has been in effect since then.
What does the law require?
NYC LL 144 is the first law in the US to regulate the use of automation in hiring. It requires that an AEDT undergo an independent bias audit within the year prior to its use. In addition, employers and employment agencies must publicly post a summary of the results of the most recent bias audit, including key statistics, on their websites.
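To make the audit requirement concrete, the central statistic in the DCWP rules is the impact ratio: the selection rate for each demographic category divided by the selection rate of the most-selected category. Below is a minimal sketch of that calculation; the category names and counts are hypothetical, and a real audit must follow the category definitions and methodology in the published rules.

```python
# Minimal sketch of the impact-ratio calculation at the heart of an
# AEDT bias audit. All category names and counts are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of applicants in a category who advanced past the tool."""
    return selected / applicants

# Hypothetical outcomes of an automated resume screen, by category.
outcomes = {
    "Category A": {"applicants": 400, "selected": 180},
    "Category B": {"applicants": 250, "selected": 90},
    "Category C": {"applicants": 150, "selected": 45},
}

rates = {name: selection_rate(o["selected"], o["applicants"])
         for name, o in outcomes.items()}
top_rate = max(rates.values())  # rate of the most-selected category

for name, rate in rates.items():
    # Impact ratio: this category's rate relative to the highest rate.
    print(f"{name}: selection rate {rate:.2f}, impact ratio {rate / top_rate:.2f}")
```

For context, under the analogous federal "four-fifths rule," an impact ratio below 0.80 is often treated as preliminary evidence of adverse impact; NYC LL 144 itself requires publishing the ratios rather than meeting any threshold.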
Responses to NYC Local Law 144
Because NYC LL 144 is the first law of its kind in the United States, it has naturally generated a lot of buzz. In fact, the first attempt at a public hearing ended with the video conferencing system crashing under the volume of attendees trying to join. Later sessions drew over 250 attendees, many of whom voiced their perspectives on the law. However, as with much pioneering legislation, opinions on the law are decidedly mixed. No matter which side of the argument you fall on, it's important to recognize that the law has both strengths and potential shortcomings.
The good
NYC LL 144 introduces many potential benefits through the regulation of automated tools. First, the law fosters transparency by mandating clear reporting and oversight when deploying automated decision-making systems. This could help curb algorithmic bias, helping ensure that these tools do not disproportionately impact marginalized communities and underrepresented groups. The law's guidelines also encourage continuous monitoring and evaluation of automated tools, which could drive the refinement and improvement of these systems over time. Overall, the transparency required by NYC LL 144 is intended to, and has the potential to, enhance public trust in technology, mitigate potential harms, and pave the way for responsible and equitable innovation within the city.
However, a few important implications of NYC LL 144 could have unintended negative consequences.
The potential bad
Despite the law's positive intent, it remains to be seen whether NYC LL 144 will have a positive impact on the NYC workforce and on the diversity of organizations. If the law is used as a framework for other legislation, new variations could lead organizations to take misguided steps, such as prioritizing compliance over the validity of their hiring tools or incorporating more bias into the hiring process. Consider the following potential challenges.
No validity required: NYC LL 144 does not consider validity evidence. Validation is the process of collecting evidence to evaluate how well a hiring tool (e.g., an assessment) or system measures what it is supposed to measure; several types of validity evidence, including content validity and criterion-related validity, can be used to evaluate a hiring system. Validation is important for establishing that hiring tools are job relevant and predictive of performance, and skipping validation studies leaves both properties unverified. Because the law does not require any validation, NYC LL 144 could inadvertently encourage employers to adopt hiring tools that are not job related, focusing only on demonstrating equality in outcomes (e.g., pass rates) rather than also ensuring the job relevance and predictiveness of hiring measures. Relatedly, employers may opt for tools that claim to measure important predictors of job success but in reality measure nothing at all.
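To illustrate the gap, a criterion-related validation study typically checks whether assessment scores correlate with a later measure of job performance. The sketch below computes that correlation on hypothetical data; a tool could post perfectly equal pass rates in a bias audit while this coefficient sits near zero, meaning the tool predicts nothing about the job.

```python
# Minimal sketch of a criterion-related validity check: does the
# assessment score predict later job performance? Data are hypothetical.
from statistics import correlation  # Pearson's r; requires Python 3.10+

assessment_scores = [62, 71, 55, 88, 90, 47, 76, 69, 83, 58]
performance_ratings = [3.1, 3.6, 2.8, 4.4, 4.2, 2.5, 3.9, 3.3, 4.0, 2.9]

r = correlation(assessment_scores, performance_ratings)
print(f"Criterion-related validity coefficient: r = {r:.2f}")

# A bias audit under NYC LL 144 compares pass rates across groups; it says
# nothing about whether r is meaningfully above zero, which is the
# question validation answers.
```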
The unintentional chilling effect: While NYC LL 144 aims to increase the fairness of hiring practices through transparency, it could actually undermine fairness, leading to worse diversity, equity, and inclusion (DEI) outcomes through the chilling effect. In the workplace, the chilling effect occurs when some aspect of the organization (whether a non-compete agreement or a negative comment from a supervisor) deters an individual from doing something they otherwise would have done. In the case of NYC LL 144, the use of automated tools, combined with publicly posted adverse impact calculations, could lead candidates from underrepresented groups to opt out of the hiring process entirely, because they may feel they already have a lower likelihood of success. This could play out in several ways: candidates deciding not to apply to the organization in the first place, declining to take a pre-hire assessment, or dropping out before an automated interview. Candidates dropping out before the hiring process even begins, as well as at key stages throughout, could have a large yet virtually unmeasurable impact on diversity metrics.
Unintended bias shift: While the law seeks to eliminate bias in automated hiring systems, there’s a risk that it could shift bias to other stages of the hiring process. Employers might opt to swap their automated tools for subjective…