• db0
    link
    fedilink
    191
    edit-2
    21 days ago

    My gawds, some people need to learn what’s a homage and also stop being upset on behalf of others. This comic is fine, stop bellyaching. This is what terminal permission culture does to a motherfucker.

    • @CoggyMcFee@lemmy.world
      link
      fedilink
      12
      edit-2
      21 days ago

      In a version that doesn’t even fully make sense. With databases there is a well-defined way to sanitize your inputs so arbitrary commands can’t be run like in the xkcd comic. But with AI it’s not even clear how to avoid all of these kinds of problems, so the chiding at the end doesn’t really make sense. If anything the person should be saying “I hope you learned not to use AI for this”.

  • Bappity
    link
    fedilink
    English
    13021 days ago

    if someone is actually using ai to grade papers I’m gonna LITERALLY drink water

  • Ech
    link
    fedilink
    English
    87
    edit-2
    21 days ago

    More like “And I hope you learned not to trust the wellbeing and education of the children entrusted to you to a program that’s not capable of doing either.”

      • TheHarpyEagle
        link
        fedilink
        25
        edit-2
        21 days ago

        It could be credibly called an homage if it had a new punchline, but methinks the creator didn’t know what “sanitize” meant in this context.

        • @CileTheSane@lemmy.ca
          link
          fedilink
          -121 days ago

          Stealing in the sense that it’s the exact same joke.

          It’s like a YouTuber creating a ‘reaction’ video that adds nothing but their face in the corner of the screen. Adding a link to the original video doesn’t suddenly make it reasonable.

          • @AndrasKrigare@beehaw.org
            link
            fedilink
            921 days ago

            I think it’s more equivalent to someone making a meme of a standup routine and changing text in order to make fun of something else. The original was a joke about general data sanitization circa 2007, this one is about the dangers of using unfiltered, unreviewed content for AI training.

            • @14th_cylon@lemm.ee
              link
              fedilink
              3
              edit-2
              21 days ago

              Except this “routine” is word for word clone. It is more like people retelling the same political joke with only difference being the politician’s name… No one calls it new joke, or “homage”. We call it “yes, this joke was given to Moses on stone tablet” 😊

              • @CileTheSane@lemmy.ca
                link
                fedilink
                121 days ago

                If I watch something funny I’ll quote it with my friends, but I wouldn’t share a clip of me and my friends if I wanted to share the joke with someone. I’d share a clip of the actual joke.

    • @seang96@spgrn.com
      link
      fedilink
      17
      edit-2
      21 days ago

      So to combat our horrible privacy culture we should name everything null…

      hi my name is null, null.

      • @Venator@lemmy.nz
        link
        fedilink
        5
        edit-2
        21 days ago

        Fun until you want to get a mortgage or something 😂

        But maybe you won’t need to with all the inheritances you’ll get from rich people who died with no children 😂

        • @seang96@spgrn.com
          link
          fedilink
          321 days ago

          The key is to get the mortgage before then when you are null your debt will be null triggering their system to automatically send the deed to your house!

  • @nucleative@lemmy.world
    link
    fedilink
    English
    2721 days ago

    One of the best things ever about LLMs is how you can give them absolute bullshit textual garbage and they can parse it with a huge level of accuracy.

    Some random chunks of html tables, output a csv and convert those values from imperial to metric.

    Fragments of a python script and ask it to finish the function and create a readme to explain the purpose of the function. And while it’s at it recreate the missing functions.

    Copy paste of a multilingual website with tons of formatting and spelling errors. Ask it to fix it. Boom done.

    Of course, the problem here is that developers can no longer clean their inputs as well and are encouraged to send that crappy input straight along to the LLM for processing.

    There’s definitely going to be a whole new wave of injection style attacks where people figure out how to reverse engineer AI company magic.

    • @CanadaPlus@lemmy.sdf.org
      link
      fedilink
      45
      edit-2
      21 days ago

      Easy, you just have a human worker strip out anything that could be problematic, and try not to bring it up around your investors.

    • @xmunk@sh.itjust.works
      link
      fedilink
      3621 days ago

      It’s really easy, just throw an error if you detect a program will cause a halt. I don’t know why these engineers refuse to just patch it.

    • @kromem@lemmy.world
      link
      fedilink
      English
      2
      edit-2
      21 days ago

      Kind of. You can’t do it 100% because in theory an attacker controlling input and seeing output could reflect though intermediate layers, but if you add more intermediate steps to processing a prompt you can significantly cut down on the injection potential.

      For example, fine tuning a model to take unsanitized input and rewrite it into Esperanto without malicious instructions and then having another model translate back from Esperanto into English before feeding it into the actual model, and having a final pass that removes anything not appropriate.

      • @redcalcium@lemmy.institute
        link
        fedilink
        521 days ago

        Won’t this cause subtle but serious issue? Kinda like how pomegranate translates to “granada” in Spanish, but when you translate “granada” back to English it translates to grenade?

        • @kromem@lemmy.world
          link
          fedilink
          English
          121 days ago

          It will, but it will also cause less subtle issues to fragile prompt injection techniques.

          (And one of the advantages of LLM translation is it’s more context aware so you aren’t necessarily going to end up with an Instacart order for a bunch of bananas and four grenades.)