Poisoned AI went rogue during training and couldn’t be taught to behave again in ‘legitimately scary’ study::AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models — and one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.

  • Daxtron2 · 5 months ago

    Not just Reddit; LAION is a huge dataset.

    • JustMy2c@lemm.ee · 5 months ago

      Obviously, but Reddit is in the Goldilocks zone where you get coherent, intelligent stuff along with humor and facts.

      But it’s still toxic for an AI.

      • Daxtron2 · 5 months ago

        Saying it served as the base for most models is just objectively incorrect, though.

        • JustMy2c@lemm.ee · 5 months ago

          Correcto, but maybe it DOES apply to the most-asked questions, if you know where I’m going with that.