Tixanou@lemmy.world to Lemmy Shitpost@lemmy.world · 5 months ago
AI is the future (image, lemmy.world)
kate@lemmy.uhhoh.com · 5 months ago
Should an LLM try to distinguish satire? Half of lemmy users can’t even do that

KevonLooney@lemm.ee · 5 months ago
Do you just take what people say on here as fact? That’s the problem, people are taking LLM results as fact.

BakerBagel@midwest.social · 5 months ago
It should if you are gonna feed it satire to learn from

xavier666@lemm.ee · 5 months ago
Sarcasm detection is a very hard problem in NLP, to be fair
ancap shark@lemmy.today · 5 months ago
If it’s being used to give the definitive answer to a search, then it should. If it can’t, then it shouldn’t be used for that