A new law went into effect in New York City on Wednesday that requires any business using A.I. in its hiring to submit that software to an audit to prove that it does not result in racist or sexist…
Hell no, speaking as someone who has actually done a lot of hiring. It is very easy to find the top 20 candidates or so based on CVs. The hard part is actually sorting those folks out, which AI cannot do.
AI offers an unknowable bias and unbounded potential for discrimination without consequences. This rubber stamp from NYC is a disaster for civil rights.
AI should not be touching these sorts of decisions; all algorithms need to be fully auditable and replicable.
I’m not talking about the people who might be good for the job. I’m talking about the 600 dime-a-dozen script kiddies who apply for high-level devops jobs or senior engineer roles. A model can reject those just fine, and it takes the load off the human ops people who would otherwise screen them.
That’s simply not how hiring works at most institutions.
For high-traffic, lower-level positions, hiring managers resent being handed these AI tools. You wind up with the candidates best at manipulating the AI, not the most qualified. Their previous method, basic sorting and taking the first acceptable worker (rather than the absolute best), is a much more efficient use of their time.
For higher-level positions, networking plays a much more significant role. Since it’s a much more significant decision, companies are also less likely to entrust it to an AI.
Screening out unserious applicants is easier than you think, and it can be done without a black box of potential lawsuits.