• FunkyStuff [he/him]@hexbear.net
    7 months ago

    I don’t disagree that experiences are data. The major distinction I’m making is that the human creative process uses more than just data: we have intention and aesthetics, we make mistakes, change our minds, iterate, etc. For a generative AI, the “creative process” is tokenizing a string, running the tokens through an attention matrix, and plugging that into a thousand different matrices that feed a post-processing layer and spit out an image. At no point does it look at what it’s doing and evaluate how it’s gonna fit into the final picture.

    As for the rest of your reasoning, I neither agree nor disagree, I think we just don’t have the same definition of consciousness.

    • novibe@lemmy.ml
      7 months ago

      I feel your description of what a generative AI does is pretty reductive. The middle part, “plugging the tokens through a thousand different matrices,” is not at all well understood. We don’t know how the AI generates the images or text, and it can’t explain itself.

      And we have ample research showing these models have internal models of the world and can have “thoughts”.

      In any case, what would you say consciousness is? This is a more interesting question to me tbh.

      • FunkyStuff [he/him]@hexbear.net
        7 months ago

        Well, I don’t see the problem: AI can’t explain itself, but it’s nothing more than matrix multiplication with a nonlinearity. Maybe you use a Fourier transform and a kernel instead of scalar weights for a convolutional neural network, maybe it has state instead of being purely feed-forward, but at its core all you’re doing is multiplying matrices and applying a nonlinearity. I don’t know what you mean when you say we don’t know how it generates images and text. It’s literally just doing the thing it was programmed to do?
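        The “matrix multiplication with a nonlinearity” core can be sketched in a few lines of NumPy (the shapes and layer sizes here are purely illustrative):

```python
import numpy as np

def relu(x):
    # Elementwise nonlinearity: negative activations are zeroed out.
    return np.maximum(0.0, x)

def feed_forward(x, weights, biases):
    # The entire forward pass is nothing but repeated
    # matrix multiplies, bias adds, and nonlinearities.
    for W, b in zip(weights, biases):
        x = relu(x @ W + b)
    return x

# Toy network: 4 inputs -> 8 hidden units -> 3 outputs.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 8)), rng.normal(size=(8, 3))]
biases = [np.zeros(8), np.zeros(3)]
out = feed_forward(rng.normal(size=(1, 4)), weights, biases)
print(out.shape)  # (1, 3)
```

        Real generative models are vastly larger and add attention, convolutions, and so on, but the computational primitive is the same.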

        What research? I’d like to see some evidence that these models “think,” given that every LLM I know of works by generating a single word at a time. When you ask a GPT how to bake bread, and the first word it outputs is “Surely!”, it has no clue what explanation it’ll go on to give you. In fact, whether or not it chooses the exact word “Surely!” to start the response has a cascading effect on the rest of the output. And, as I said earlier, LLMs don’t see anything more than the statistical correlations between words. No LLM knows what gravity is, but when you ask it why things fall down, it has enough physics textbooks in its training data to parrot the answer from there.
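        The one-word-at-a-time generation I’m describing is easy to sketch. Here a tiny hand-written bigram table stands in for the real network (the table and function names are made up for illustration); the point is the shape of the loop, not the model:

```python
import random

# Toy stand-in for an LLM: maps the last token to candidate next tokens.
# A real model returns a probability distribution over its whole vocabulary.
BIGRAMS = {
    "how": ["to"], "to": ["bake"], "bake": ["bread"],
    "bread": ["."], ".": ["<end>"],
}

def toy_model(tokens):
    return BIGRAMS.get(tokens[-1], ["<end>"])

def generate(model, prompt, max_new_tokens=10):
    # Autoregressive decoding: one token at a time, each choice
    # conditioned only on what has already been emitted. The model
    # never plans the rest of the sentence in advance.
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        next_token = random.choice(model(tokens))
        if next_token == "<end>":
            break
        tokens.append(next_token)
    return tokens

print(generate(toy_model, ["how"]))  # ['how', 'to', 'bake', 'bread', '.']
```

        Swap the table for a trained transformer and you have the generation loop of an actual GPT: each token is sampled, appended, and fed back in.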

        One of the ways I really broke down the idea that GPTs have any model of thought was playing this game. If AI had any actual model of meaning, it would understand security and it would know not to just tell the player the password. Instead, it will literally blurt it out if you so much as ask it for words that rhyme. You don’t even need to mention “password”: the way GPT works means that if it detects a lot of weight on a certain word in its previous prompt (which naturally would’ve emphasized the password), it’s almost guaranteed to bring it up again. I know it’s not exactly hard proof, but it is fun.

        As for your last question, you’re out of luck, because I’m actually just a Catholic lol. There’s not a lot more to say than that I believe there is a metaphysical dimension to human experience connecting us to a soul. But that’s a completely unscientific belief, to be honest, and it’s not a point I can argue, because it’s not based on evidence.

        • novibe@lemmy.ml
          7 months ago

          It’s not true to say that LLMs just do as they are programmed; that’s not how machine and deep learning work. The programming goes into making the model able to learn and parse data. The results are filtered and weighted, but they are not the result of the programming; they are the result of the training.

          Y’know, like our brain was “programmed” by natural selection and the laws of biology to learn and use certain tools (eyes, touch, thoughts, etc.), and with “training data” (learning, or lived experience) it outputs certain results, which are then filtered and weighted (by parents, school, society)…

          I think LLMs and diffusion models will be a part of the AI mind, generating thoughts like our mind does.

          Regarding the last part: do you think the brain or the mind create the soul, or are part of it?

          I think discussing consciousness is very scientific. To think there’s no point in doing so is to reduce everything to materiality, which is itself unscientific. Unfortunately many people, even scientists, are more scientistic than actually scientific.

          • FunkyStuff [he/him]@hexbear.net
            7 months ago

            I don’t know how much you know about computer science and coding, but if you can program in Python and have some familiarity with NumPy, you can make your own feed-forward neural network from scratch in an afternoon. You can make an AI that plays tic-tac-toe and train it against itself adversarially. It’s a fun project. My point is: yes, they do. LLMs and generative models do as they are programmed. They are no different from a spreadsheet program. The thing that makes them special is the weights and biases that were baked into them by churning through countless terabytes of training data, as you correctly state.

            But it’s not as if AI relies on some secret, arcane mathematical operation that no computer scientist understands. What we don’t understand is why they activate the way they do; we don’t really know why any given part of the network gets activated. That makes sense given the stochastic nature of deep learning: it’s all just convergence on a “pretty good” result after getting put through millions of random examples.
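            To give a sense of scale, here’s roughly what that afternoon project looks like, with XOR standing in for tic-tac-toe to keep the sketch short (the layer sizes, learning rate, and iteration count are arbitrary choices):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR inputs and targets (targets are [0, 1, 1, 0]).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer, randomly initialized.
rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

lr = 1.0
for _ in range(5000):
    # Forward pass: matrix multiplies plus nonlinearities.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: the chain rule, written out by hand.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))
```

            Everything a production model does is this, scaled up enormously, plus architectural tricks like attention; there’s no hidden operation beyond multiply, add, and squash.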

            I think the mind and consciousness are separate from the soul, which precedes their thoughts. But, again, I have absolutely no evidence for that. It’s just dogma.