No, because LLMs are just a mathematical blender with ONE goal in mind: construct a good sentence. They have no thoughts, no capacity for self-correction; they just spit out sentences.
You MIGHT get to passing a Turing test with enough feedback tied in, but at that point the “consciousness” is coming specifically from the systemic complexity, and still very much not from the LLMs.
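To make “just spit out sentences” concrete: generation is literally a loop that picks one next token at a time and appends it. A toy Python sketch, purely illustrative; the hand-written bigram table stands in for the actual trained network, which scores the whole context instead of just the last word:

    import random

    # Toy "language model": bigram lookups stand in for a trained network.
    # Purely illustrative; a real LLM replaces this table with a neural net
    # that assigns probabilities to every possible next token.
    BIGRAMS = {
        "<s>": ["LLMs", "They"],
        "LLMs": ["just"],
        "They": ["just"],
        "just": ["spit", "predict"],
        "spit": ["out"],
        "predict": ["tokens"],
        "out": ["sentences"],
        "sentences": ["</s>"],
        "tokens": ["</s>"],
    }

    def generate(max_tokens=10):
        token, output = "<s>", []
        for _ in range(max_tokens):
            # The whole trick: pick a next token given what came before.
            token = random.choice(BIGRAMS[token])
            if token == "</s>":  # end-of-sentence marker
                break
            output.append(token)
        return " ".join(output)

    print(generate())  # e.g. "LLMs just spit out sentences"

Scale the table up to billions of weights and a long context window and you get an LLM; the loop itself is the same shape.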
So you’re saying it’s not good enough for a sentient personality, but it might be good enough for an average politician?
Oh, if we’re talking about what it takes to replace politicians, technology has been capable of that for years.
Hell, maybe even above average if the model can update itself in real time.
In my opinion, you are giving way too much credit to human beings. We are mainly just machines that spit out sentences.
No, you are giving too much credit to LLMs. Thinking LLMs are capable of sentience is as stupid as thinking individual neurons could learn physics.