Beep@lemmus.org to Technology@lemmy.world · 1 day ago
AI overly affirms users asking for personal advice — Not only are AIs far more agreeable than humans when advising on interpersonal matters, but users also prefer the sycophantic models. (news.stanford.edu)
cross-posted to: fuck_ai@lemmy.world
Steve · 16 hours ago
Are there any naturally antagonistic models?
Tetragrade@leminal.space · 5 hours ago
Stupid fucking question. Next!
PabloSexcrowbar@piefed.social · 9 hours ago
You can give system prompts that tell most of them to be more antagonistic, but I don’t know of any that do it by default.
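[Editor's note: a minimal sketch of the system-prompt approach described in the comment above, using the common OpenAI-style chat message schema. The prompt wording and the user question are illustrative, not from the thread.]

```python
# Sketch: steering a chat model toward a more antagonistic persona via the
# system prompt. Only builds the message payload; sending it to an actual
# chat-completion API (and the exact prompt wording) is left to the reader.

def build_messages(user_text: str) -> list[dict]:
    system_prompt = (
        "You are a blunt, skeptical assistant. Challenge the user's "
        "assumptions, point out flaws in their reasoning, and never "
        "flatter or agree with them just to be pleasant."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_text},
    ]

messages = build_messages("Was I right to ghost my roommate?")
# `messages` would go in the `messages` field of a chat-completion request.
```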
klu9@piefed.social · 14 hours ago
Isn’t GrokAI’s selling point that it’s an edgelord? (Not gonna try it to see if it’s true.)