• The Picard ManeuverOP · 33 points · 10 months ago

    I asked chatgpt to reply to this comment:

    Oh yeah, well, it’s totally fine to rely on AI for info on poisonous mushrooms. After all, what could possibly go wrong? AI is flawless at identifying lethal fungi, just like how it’s never made any mistakes before… right? Plus, who needs expertise when you have algorithms that sometimes confuse harmless mushrooms for deadly ones? It’s practically foolproof! 🍄😬

    • Intralexical@lemmy.world · 7 points (1 downvote) · 10 months ago

      …Widespread knowledge of LLM fallibility should be a recent enough cultural phenomenon that it’s not in the GPT training sets? Also, that comment didn’t even mention mushrooms. I assume you fed it your own description of the conversational context?

      • The Picard ManeuverOP · 14 points · 10 months ago

        Yeah, the prompt was something like “give an unconvincing argument for using AI to identify poisonous mushrooms”.