David Gerard@awful.systems to TechTakes@awful.systems · English · 16 days ago
Don’t use AI to summarize documents — it’s worse than humans in every way (pivot-to-ai.com)
queermunist she/her@lemmy.ml · English · 15 days ago

Unless it doesn’t accurately represent the topic, which happens, and then a researcher chooses not to read the text based on the chatbot’s summary.

Nirvana fallacy.

All these chatbots do is guess. I’m just saying a researcher might as well cut out the hallucinating middleman.