https://futurism.com/the-byte/government-ai-worse-summarizing
The upshot: these AI summaries were so bad that the assessors agreed using them could actually create more work down the line, because of the amount of fact-checking they require. If that’s the case, then the purported upsides of using the technology — cost-cutting and time-saving — are seriously called into question.
Continuing to look under LLM rocks of varying size and shininess in search of the solve-every-problem robot god of the future
Observing that newer models perform better than older models on a variety of benchmarks means you want to have an intimate relationship with your computer.
Stating that newer models perform better than older models somehow implies that the newer models are completely living up to the marketing hype, up to and including calling them “artificial intelligence” to begin with.
And yes, it’s a known and established issue that some people who stan for these treat printers do see them as replacements for people, not tools. There’s already an entire startup industry of “AI companions” selling that belief, so what I said isn’t as absurd as you claim. Besides, I said “robot god of the future” there, not “AI waifus,” but there’s certainly a connection that some true believers make between the two concepts.
I didn’t mention marketing, I’m talking about benchmarks. Benchmarks designed to test a machine’s ability to reason like a human. And they’re being improved on constantly. Sorry if that rubs ya the wrong way.
That’s too bad, because “AI” as it stands, and what is branded as “AI,” is not what it claims to be on the label. There are certainly scientific efforts underway to build rudimentary versions of the real thing, but large language models and related technology simply aren’t it, and to believe otherwise is to buy the marketing, whether you accept that or not.
Again, you’re believing in the marketing.
https://bigthink.com/the-future/artificial-general-intelligence-true-ai/
https://time.com/collection/time100-voices/6980134/ai-llm-not-sentient/
You’re not sorry, this isn’t /r/Futurology or /r/Singularity, and the closer of your post only makes it worse.
You seem to have a kind of “head in the sand” approach to this (I get it, we have to protect our egos). Maybe educate yourself on what some of the research in this field looks like.
Here’s a list of a lot of the common benchmarks that are used by researchers all over the world, and have nothing to do with Sam Altman trying to hype OpenAI’s stock price or whatever the latest late stage capitalist shenanigans are in the business world.
I know some people are, but I’m not saying these things are sentient (nice Time link tho lmao). This is a massive leap in logic that you are making. I’m saying, these models are way better at taking standardized tests and shit than they were even months ago and that has implications for labor.
Honestly you sound scared about this stuff.
Even more and there’s so much more text to read. Here we go.
Maybe stop ignoring entire fields of research that, to this date, are still figuring out what biological brains are doing and how they are doing it, instead of just nodding along to what you already want to believe from people who have blinders for anything outside of their field (computers, in this case). It’s a case of someone with a hammer seeing everything as a nail, and you buying into that.
More like tired. If you weren’t so religiously defensive about the apparent advent of whatever you’re hoping for, you’d know that I have on many occasions stated that artificial intelligence is possible and may even be achieved within current lifetimes, but reiterating and refining the currently hyped “AI” product simply isn’t it.
It’s like if people were trying to develop rocketry to achieve space travel, but you and yours were smugly stating that this particularly sharp knife will cut the heavens open, just you wait.
I respect you, but I think you have a hard time separating the players (silicon valley, redditor incels, marketers, hype men) from the game (real science that is getting done that is interesting and miles beyond where we were last year).
I’m not talking about biology or anything else. Just pointing out that if this train keeps moving at its current pace, we’re in for a massive upheaval. I’m not hoping for anything or pushing an agenda. Honestly the best case probably would be if a lot of the detractors are right, and this tech stagnates or plateaus in some way, to give society time to adjust a bit. Or to imagine a world where you don’t die if you don’t have a job. I personally don’t have reason to believe it will stagnate, and am preparing for it not to.
What do your preparations look like?