- cross-posted to:
- news@lemmy.world
- technology@lemmy.world
- techtakes@awful.systems
The Microsoft-powered bot says bosses can take workers’ tips and that landlords can discriminate based on source of income
Yet another example of people fundamentally misunderstanding the proper use of LLMs and throwing them into production without any kind of sanity checks on the input and output. As someone who used to work for NYS as a software engineer, this is entirely unsurprising.
Work in HR. Have a very smart boss. Asked me about AI for recruiting, screening and other purposes. Told my boss: wait 5 years, we'll see the catastrophic lawsuits hit the early adopters, then after 5 more years there will be some plug-and-play usable solutions.
Anyone eating up the Big 4's and startups' own horseshit deserves what they get. They've fully demonstrated they don't QC, and LLMs are incredibly immature, especially on critical, difficult-to-parse, contextual, or rapidly changing information.
LLMs are still good for the kind of flowery language you need in HR, but not for any sort of fact-based generation.
Think of it as being creative, not logical.
Yeah not talking about flowery language.
HR needs automated text and audio (pretty much the same thing once transcribed) screening/interviewing. It needs to be able to ask industry- or role-specific questions, generate follow-ups based on responses, and rank the answers accurately, or at least better than humans do (which is a miserable coin flip). 75% of interviewing could be done away with.
Right now however, the quality would be poor and the risk astronomical. I’m sure we’ll get there in 5-10 years. The risk will still be there to a degree, but just like autonomous cars, at some point it will be statistically proven that the AI is less biased and of course more efficient.
The crap part will be subscribing to updates for your AI. "Oh, you want it to ask questions about that new software? $$$"
The biggest thing I've found is that limiting the inputs with a filter and vetting the outputs yields higher-quality results. One project I'm working on takes highly complex language and simplifies it for users. There's no user input, and it's not being used to create anything that isn't already there. It takes highly technical language with lots of acronyms and breaks it down into more understandable units for normal people. Of course, my company is heavily regulated, so we're extremely focused on QA and on ensuring it will never output something that doesn't align correctly with the source.
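A minimal sketch of what that guardrail pattern can look like: a topic whitelist on the input side, deterministic acronym expansion before the model sees the text, and a cheap post-hoc check on the output. All names here (the topic list, the acronym table, the specific checks) are hypothetical illustrations, not the commenter's actual system.

```python
import re

# Hypothetical whitelist: only pre-vetted document topics are ever processed;
# there is no free-form user input reaching the model.
ALLOWED_TOPICS = {"billing", "coverage", "claims"}

# Hypothetical acronym table, expanded deterministically before simplification.
ACRONYMS = {"EOB": "Explanation of Benefits", "PCP": "Primary Care Provider"}

def filter_input(doc_topic: str) -> bool:
    """Reject anything outside the vetted topic list."""
    return doc_topic in ALLOWED_TOPICS

def expand_acronyms(text: str) -> str:
    """Pre-processing: expand known acronyms so the model never has to guess."""
    for short, long in ACRONYMS.items():
        text = re.sub(rf"\b{short}\b", f"{long} ({short})", text)
    return text

def vet_output(original: str, simplified: str) -> bool:
    """Cheap sanity check: the simplified text must not introduce
    numbers that are absent from the source document."""
    src_numbers = set(re.findall(r"\d+(?:\.\d+)?", original))
    out_numbers = set(re.findall(r"\d+(?:\.\d+)?", simplified))
    return out_numbers <= src_numbers
```

The point is that the LLM only ever sits between a constrained input and a verifiable output; anything that fails `vet_output` gets discarded or escalated to a human rather than shown to a user.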
I guess the chat bot is drawing from the data where corpos get away with everything?
Believe them when they say the truth out loud.