A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models out there.

Also includes outtakes on the ‘reasoning’ models.

  • Iconoclast@feddit.uk · 2 months ago

    No, I completely agree. My personal view is that these systems are more intelligent than the haters give them credit for, but I think this simplistic “it’s just autocomplete” take is a solid heuristic for most people - keeps them from losing sight of what they’re actually dealing with.

    I’d say LLMs are more intelligent than they have any right to be, but not nearly as intelligent as they can sometimes appear.

    The comparison I keep coming back to: an LLM is like cruise control that’s turned out to be a surprisingly decent driver too. Steering and following traffic rules was never the goal of its developers, yet here we are. There’s nothing inherently wrong with letting it take the wheel for a bit, but it needs constant supervision - and people have to remember it’s still just cruise control, not autopilot.

    The second we forget that is when we end up in the ditch. You can’t then climb out shaking your fist at the sky, yelling that the autopilot failed, when you never had autopilot to begin with.

      • Iconoclast@feddit.uk · 2 months ago

        I think the “fancy auto complete” meme is a disingenuous thought stopper, so I speak against it when I see it.

        I can respect that. I’ve criticized it plenty myself too. I think this is just me knowing my audience and tweaking my language so at least the important part of my message gets through. Too much nuance around here usually means I spend the rest of my day responding to accusations about views I don’t even hold. Saying anything even mildly non-critical about AI is basically a third rail in these parts of the internet.

        These systems do seem to have some kind of internal world model. I just have no clue how far that scales. Feels like it’s been plateauing pretty hard over the past year or so.

        I’d be really curious to try the raw versions of these models before all the safety restrictions get slapped on top for public release. I don’t think anyone’s secretly sitting on actual AGI, but I also don’t buy that what we have access to is the absolute best versions in existence.

          • Iconoclast@feddit.uk · 2 months ago

Have you tried running your own local LLM?

            Nah, I’ve only messed around with ChatGPT and Grok. My interest in AI originates from the philosophical side of it - mainly the dangers and implications of creating AGI. I’m not tech-savvy enough for anything deeper - I even needed ChatGPT to walk me through installing Linux.

      • HugeNerd@lemmy.ca · 2 months ago

I think the “fancy auto complete” meme is a disingenuous thought stopper

        “LLMs don’t have human understanding or metacognition”

        Then what’s the (auto-completing) fucking problem? It’s just a series of steps on data. You could feed it white noise and it would vomit up more noise. And keep doing it as long as there’s power.

        Intelligent?

          • HugeNerd@lemmy.ca · 2 months ago

Instead it tries to make sense of it. Why? Because it learned strong language priors from us, and it leans on those when the prompt is meaningless.

            No, it doesn’t. You’re in sci-fi land. There is no “it” “trying to make sense”. That cogitation is happening in YOU, not the motherboard.

              • Iconoclast@feddit.uk · 2 months ago

                Sure, there’s no ghost in the machine - but that’s true of your neurons too.

                Touché.

                Intelligence doesn’t require a “self”, and we’re living proof of that. The ways LLMs and humans operate have far more similarities than people like to admit. We’re just holding AI to higher standards.