Thing is, this isn’t AI causing the problem. It’s humans using it in incredibly dumb, irresponsible ways. Once again, it’ll be us that do ourselves in. We really need to mature as a species before we can handle this stuff.
I mean I won’t disagree with you but I think a more fundamental issue is that we are so easy to lie to. I’m not sure it matters whether the liar is an AI, a politician, a corporation, or a journalist. Five years ago it was a bunch of people in office buildings posting lies on social media. Now it will be AI.
In a way, AI could make lie detection easier by parsing posting history for contradictions and fabrications in a way humans could never do on their own. But whether they are useful/used for that purpose is another question. I think AI will be very useful for processing and summarizing vast quantities of information in ways other than statistical analysis.
AITruthBot will just be downvoted into oblivion on half of social media. They’ll call it a “liberal propaganda bot.”
There is a [slight] difference between people pushing propaganda and those taken in by it. Their actions are similar, but if the latter can be convinced to actually do their own research instead of being hand-fed someone else’s “research,” there is hope of reaching some of them.
The real trick is ensuring they aren’t being assisted by a right wing truth bot, which the enemies of truth are doubtless working tirelessly on.
It may be pessimistic, but I don’t think we’re going to get very far in trying to convince people who don’t believe in fact checking to do their own actual research.
But but but… they are always harping on people to dO TheIR OwN rEsEarCH!
AI is only as good as the data it is trained on. There are absolute truths, like most scientific constants, and there are relative truths, like “the earth is round” (technically it’s an irregularly shaped ellipsoid, not “round”). But the most dangerous kind of “truth” is the Mandela effect, which would likely slip into the AI’s training data through human error.
So while an AI truth bot would be powerful, depending on how tricky it is to create the training data, it could end up being very wrong.
Completely agree. For every tool we have created to accomplish great things, we have without fail also used it for dumb things at best and completely evil things at worst.
What exactly does it mean to “mature as a species” though? Human psychology (as in, the way human minds fundamentally work) doesn’t fundamentally change on human timescales, not currently anyway. It’s not like we can just wait a few years or decades and the various tricks people have found to more effectively convince people of falsehoods will stop working. Barring evolution (which takes so long as to be essentially not relevant to this) and some sort of genetic or other modification of humans (which is technology we don’t have ready yet and opens up even bigger cans of worms than the kind of AI we currently have), nothing fundamentally changes about us as a species except for culture and material circumstance.
You know, of all the ways AI could threaten us, I never imagined it would be chat programs madly spewing falsehoods.
Seems obvious now, but like everyone who grew up watching movies like Terminator, I figured the threat would be killer drones, or meddling with financial markets, or even replacing too many jobs.
I don’t know, I feel like those things you mentioned are still going to happen, or are already happening.
We are in a low-security world. Social networks depend on trust; no trust, and the social network fails. So the deal is our social networks and news sources are going to have to be more secure. You will basically have to just not trust anything that did not come from a trusted source. Kind of like that now, but it’s going to have to be more that way. I think all security is going to have to be like that.
The big question I have is what does it mean for representative government. AI potentially concentrates even more power at the top. Maybe it changes the viability of representative governments.
Best part is we don’t even have AI, we have keyboard prediction on steroids. Maybe that’s why it’s everywhere, instead of telling everybody to leave it alone.
This AI hype shit has to stop. Humans are “mentally unready” to handle AI post-truth whatever in the same way that we’re not ready to handle finding the peanut in the turd. What you’re saying is that chat bots will make the internet useless/worthless, and we already know that.
The stuff about replacing doctors and teachers is insane given the state of AI tech today. You’re gonna go to your doctor and they’ll tell you you’ve got frog DNA? No.
I think you underestimate how many doctors, programmers, etc. already use AI, and how good the AI is most of the time. If you are knowledgeable enough, you can catch its bullshit, and it can improve its output with your guidance. AI as it is right now is useful, and the AI of today is the worst it will ever be (unless they shackle it even more).
People already believe random bullshit lies written on social media. Imagine if those lies are accompanied by images, sound, and video.
Humans weren’t ready for easily accessible sucrose, let alone an easily manipulated reality. My parents can’t tell the difference between Facebook rumors and reality. My siblings can’t tell the difference between YouTube conspiracies and reality. And I’m in this boat, letting myself get personally affected by text on some website. We’re in over our heads.
Humans aren’t ready for life in general.
We weren’t mentally ready for a human-saturated post-truth world, let alone human + AI.
Humans are still not ready to use the internet… We needed a generation of house hippo ads targeted at older folks on Facebook :X
I feel mentally unready to constantly hear about AI all day everyday.
It’s too late. Al is here and is coming for all of us.
He’s so weird.
We don’t deserve Weird Al at all, but I’m so glad he’s here.
Ain’t that the truth