• MotoAsh@lemmy.world · 6 months ago

    No, because LLMs are just a mathematical blender with ONE goal in mind: construct a good sentence. They have no thoughts, they have no corrective motion, they just spit out sentences.

    You MIGHT get one to pass a Turing test with enough feedback tied in, but at that point the “consciousness” is coming specifically from the systemic complexity, and still very much not from the LLM itself.

    • maynarkh@feddit.nl · 6 months ago

      So you’re saying it’s not good enough for a sentient personality, but it might be good enough for an average politician?

      • MotoAsh@lemmy.world · 6 months ago

        Oh, if we’re talking about what it takes to replace politicians, technology has been capable of that for years.

      • nxdefiant · 6 months ago

        Hell, maybe even above average if the model can update itself in real time.

    • Hotzilla@sopuli.xyz · 6 months ago

      In my opinion you’re giving way too much credit to human beings. We’re mainly just machines that spit out sentences.

      • MotoAsh@lemmy.world · 6 months ago

        No, you’re giving too much credit to LLMs. Thinking LLMs are capable of sentience is as stupid as thinking individual neurons could learn physics.