cross-posted from: https://lemmy.zip/post/15863526

Steven Anderegg allegedly used the Stable Diffusion AI model to generate photos; if convicted, he could face up to 70 years in prison

  • @laughterlaughter@lemmy.world
    30 days ago

    We’re not disagreeing.

    The question was:

    “Is this an abuse image if it was generated?”

    Yes, it is an abuse image.

    Is it actual abuse? Of course not.

    • @Daxtron2
      30 days ago

      And yet it's being treated as though it is.

      • @PoliticalAgitator@lemmy.world
        29 days ago

        Images of children being raped are being treated as images of children being raped. Nobody has ever been caught with child pornography and charged as if they abused the children themselves, nor is anybody advocating that people generating AI child pornography be charged as if they had sexually abused a child.

        Everything is being treated as it always has been, but you’re here arguing that it’s moral and harmless as long as an AI does it, using every semantic trick and shifted goalpost you possibly can.

        It’s been gross as fuck to watch. I know you’re aiming for a kind of “king of rationality, capable of transcending even your disgust of child abuse” thing, but every argument you make is so trivial and unimportant that you’re coming across as someone hoping CSAM becomes more accessible.

      • @laughterlaughter@lemmy.world
        30 days ago

        Well, that’s another story. I just answered your question: “Are these images about abuse even if they’re generated?” Yup, they are.

        “Should people be prosecuted because of them?” Welp, someone with more expertise should answer this. Not me.