Thousands of authors demand payment from AI companies for use of copyrighted works

Thousands of published authors are requesting payment from tech companies for the use of their copyrighted works in training artificial intelligence tools, marking the latest intellectual property challenge to AI development.

  • Melllvar · 11 months ago

    An AI analyzes the words of a query and generates its responses based on word-use probabilities derived from a large corpus of copyrighted texts. That makes its output derivative of those texts in a way that the work of someone applying knowledge learned from them is not.
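    A minimal sketch of the "word-use probabilities" idea, assuming a toy bigram model. Real LLMs are transformer networks, not frequency tables, but the training objective is analogous: predict the next word from statistics gathered over a corpus. All names here are illustrative, not any real system's API.

```python
import random
from collections import Counter, defaultdict

def build_bigram_counts(corpus: str) -> dict:
    """Count how often each word is followed by each other word."""
    words = corpus.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def sample_next(counts: dict, word: str) -> str:
    """Pick a next word in proportion to how often it followed `word`."""
    followers = counts[word]
    choices, weights = zip(*followers.items())
    return random.choices(choices, weights=weights)[0]

# "Train" on a tiny illustrative corpus, then generate by repeated sampling.
corpus = "the cat sat on the mat and the cat sat"
counts = build_bigram_counts(corpus)
word = "the"
for _ in range(6):
    print(word, end=" ")
    word = sample_next(counts, word)
print()
```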

    • planish@sh.itjust.works · 11 months ago

      Why, though?

      Is it because we can’t explain the causal relationships between the words in the text and the human’s output or actions?

      If a very good neuroscientist traced out the engineer’s brain and could prove that, actually, if it wasn’t for the comma on page 73 they wouldn’t have used exactly this kind of bolt in the bridge, now is the human’s output derivative of the text?

      Any rule we make here should treat people who are animals and people who are computers the same.

      And even regardless of that principle, surely a set of AI weights is either not copyrightable or else a sufficiently transformative use of almost anything that could go into it? If it decides to regurgitate what it read, that output could be infringing, same as for a human. But a mere but-for causal connection between one work and another can’t make text that would be non-infringing if written by a human suddenly infringing because it was generated automatically.

      • Melllvar · 11 months ago

        Because word-use probabilities in a text are not the same thing as the information expressed by the text.
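        A concrete illustration of that gap, using plain word counts (a cruder statistic than the sequential probabilities an LLM learns, but the point is the same): two sentences can share identical word statistics while expressing opposite information. The sentences are illustrative.

```python
from collections import Counter

a = "the dog bit the man".split()
b = "the man bit the dog".split()

print(Counter(a) == Counter(b))  # True: identical word-use statistics
print(a == b)                    # False: opposite claims about the world
```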

        > Any rule we make here should treat people who are animals and people who are computers the same.

        W-what?

        • Tangent5280@lemmy.world · 11 months ago

          I think what he meant was that we should treat an AI the same way we treat people: if a person making a derivative work can be hit with a copyright strike, then so can an AI making a derivative work. The same rule should apply to all creators, regardless of whether they are an AI or not.

        • planish@sh.itjust.works · 11 months ago

          In the future, some people might not be human. Or some people might be mostly human, but use computers to do things like fill in for pieces of their brain that got damaged.

          Some people can’t recognize faces, for example, but computers are great at that now, and Apple has that thing that is like Google Glass but better. But a law against doing facial recognition with a computer, allowing it to be done only with a brain, would prevent that solution from working.

          And currently there are a lot of people running around trying to legislate exactly how people’s human bodies are allowed to work inside, over those people’s objections.

          I think we should write laws on the principle that anybody could be a human, or a robot, or a river, or a sentient collection of bees in a trench coat, that is 100% their own business.

          • Melllvar · 11 months ago (edited)

            But the subject under discussion is large language models that exist today.

            > I think we should write laws on the principle that anybody could be a human, or a robot, or a river, or a sentient collection of bees in a trench coat, that is 100% their own business.

            I’m sorry, but that’s ridiculous.

            • planish@sh.itjust.works · 11 months ago

              I have indeed made a list of ridiculous and heretofore unobserved things somebody could be. I’m trying to gesture at a principle here.

              If you can’t make your own hormones, store-bought should be fine. If you are bad at writing, you should be allowed to use a computer to make you good at writing now. If you don’t have legs, you should get to roll, and people should stop expecting you to have legs. None of these differences between people, or in the ways that people choose to do things, should really be important.

              Is there a word for that idea? Is it just what happens to your brain when you try to read the Office of Consensus Maintenance Analog Simulation System?

              • Melllvar · 11 months ago (edited)

                The issue under discussion is whether or not LLM companies should pay royalties on the training data, not the personhood of hypothetical future AGIs.

                • planish@sh.itjust.works · 11 months ago

                  Why should they pay royalties for letting a robot read something that they wouldn’t owe if a person read it?

                    • Melllvar · 11 months ago

                    It’s not reading. It’s word-probability analysis.