July 5, 2023, marked the start of enforcement of New York City's landmark Automated Employment Decision Tool (AEDT) law, which regulates the use of AI in hiring.
The law requires employers that use AI in hiring to put those systems through an annual independent bias audit assessing whether they discriminate on the basis of race or sex. Its enactment has ignited substantial controversy: proponents say it is an important first step in regulating problematic AI systems, while critics argue it does not go far enough.
Under the new rules, companies must also inform candidates when AI is used to assess their applications and allow them to request more information about what data is collected and analyzed. Violations carry fines of up to $1,500 each.
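To make concrete what such a bias audit typically measures, here is a minimal sketch of an impact-ratio calculation: the selection rate for each demographic category is compared against the rate for the most frequently selected category. The categories and applicant data below are hypothetical, and a real audit would follow the specific categories and methodology set out in the city's rules.

```python
from collections import defaultdict

# Hypothetical applicant records: (demographic category, was_selected)
applicants = [
    ("Category A", True), ("Category A", False), ("Category A", True),
    ("Category B", True), ("Category B", False), ("Category B", False),
    ("Category C", False), ("Category C", True), ("Category C", True),
]

# Count applicants and selections per category.
totals = defaultdict(int)
selected = defaultdict(int)
for category, was_selected in applicants:
    totals[category] += 1
    if was_selected:
        selected[category] += 1

# Selection rate: fraction of applicants in each category who were selected.
selection_rates = {c: selected[c] / totals[c] for c in totals}

# Impact ratio: each category's selection rate divided by the highest rate.
highest_rate = max(selection_rates.values())
impact_ratios = {c: rate / highest_rate for c, rate in selection_rates.items()}

for category in sorted(impact_ratios):
    print(f"{category}: selection rate {selection_rates[category]:.2f}, "
          f"impact ratio {impact_ratios[category]:.2f}")
```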
Several public interest groups and civil rights advocates, including the Center for Democracy & Technology, the Surveillance Technology Oversight Project (S.T.O.P.), the NAACP Legal Defense and Educational Fund, and the New York Civil Liberties Union, argue that the law is "underinclusive." Their primary concern is that it may leave many automated hiring systems uncovered, including those used to screen thousands of candidates.
Albert Fox Cahn, executive director of S.T.O.P., also questions how effective independent auditing can be given the industry's current level of maturity. He warns that the law could create a false sense of security: the audits assess whether an AI system is biased against certain groups, but they are not required to explain how the algorithm reaches its decisions, sidestepping the "explainability" challenges inherent in complex machine learning systems such as deep learning.
However, trade organizations such as BSA, whose members include IBM and Microsoft, say mandated third-party audits are currently "not feasible" given the lack of accepted principles for conducting them.
This groundbreaking law, which parallels provisions of the European Union's AI Act, is part of a growing trend within the United States, with states such as New Jersey and California considering similar legislation.
As AI becomes increasingly entrenched in the hiring process, lawmakers face pressure to address potential harms without imposing impractical requirements on businesses. Criticism aside, the law represents an important step toward regulating AI and mitigating its risks. It pushes companies to look more closely at the training data and algorithms they use and to scrutinize whether their technology perpetuates unfair discrimination. Striking the right balance remains a complex challenge.