It just feels too good to be true.

I’m currently using it for formatting technical texts and it’s amazing. It can’t generate them properly on its own, but if I give it the bulk of the info, it makes them pretty af.

Also just talking to it and asking for advice on the most random kinds of issues. It gives seriously good advice. But it makes me worry that I’m volunteering my personal problems and innermost thoughts to a company that will misuse them.

Are these concerns valid?

  • rob64 · 11 months ago

    Do you have any theories as to why this is the case? I haven’t gone anywhere near it, so I have no idea. I imagine it’s tied up with the way it processes things from a language-first perspective, which I gather is why it’s bad at math. I really don’t understand enough to wrap my head around why we can’t seem to combine LLMs with traditional computational logic.
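
    The usual way people seem to combine them is to have the model only translate the question into something structured, and let ordinary code do the actual computing. A rough sketch of that idea (the `ask_model` function here is a made-up stand-in for any text-completion call, not a real API):

    ```python
    # Minimal sketch of pairing an LLM with ordinary computational logic:
    # the model only translates the question into an arithmetic expression,
    # and plain Python does the actual math.
    # `ask_model` is a hypothetical stand-in for any text-completion call.

    import ast
    import operator

    OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}

    def safe_eval(expr: str) -> float:
        """Evaluate a bare arithmetic expression without using eval()."""
        def walk(node):
            if isinstance(node, ast.Expression):
                return walk(node.body)
            if isinstance(node, ast.BinOp) and type(node.op) in OPS:
                return OPS[type(node.op)](walk(node.left), walk(node.right))
            if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                return node.value
            raise ValueError("unsupported expression")
        return walk(ast.parse(expr, mode="eval"))

    def answer_with_calculator(ask_model, question: str) -> str:
        # 1. The language model rewrites the question as an expression.
        expr = ask_model(f"Rewrite as a bare arithmetic expression, nothing else: {question}").strip()
        # 2. Deterministic code computes the result.
        result = safe_eval(expr)
        # 3. Simple formatting (or another model call) phrases the answer.
        return f"{question.strip()} -> {expr} = {result}"
    ```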

      • lloram239@feddit.de · 11 months ago (edited)

        > ChatGPT then internally asks itself to summarize the entire 4000 token history into 500 tokens.

        From my understanding, ChatGPT doesn’t do anything like that by itself. If you want the story summarized, you’ll have to request it, and the summary will show up in the text buffer. There is no hidden internal state that ChatGPT can use to “think”; there is just the text that you see in the text buffer.
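
        So any summarizing has to happen out in the open: you (or a wrapper script) ask for the summary as a normal message and then paste it back in as context. A rough sketch of what that looks like, with `complete` standing in for any plain text-completion call (not a real API):

        ```python
        # Sketch of explicit, visible summarization: the summary is just more
        # text that gets pasted back into the next prompt, nothing hidden.
        # `complete` is a hypothetical stand-in for a plain text-completion call.

        MAX_CHARS = 8000  # crude stand-in for a token limit

        def chat_with_rolling_summary(complete, history: list, user_message: str) -> str:
            history.append(f"User: {user_message}")
            transcript = "\n".join(history)

            if len(transcript) > MAX_CHARS:
                # The "summarize" step is an ordinary request you could read yourself.
                summary = complete("Summarize this conversation in a few sentences:\n" + transcript)
                history[:] = [f"(Summary of earlier conversation) {summary.strip()}", history[-1]]
                transcript = "\n".join(history)

            reply = complete(transcript + "\nAssistant:")
            history.append(f"Assistant: {reply.strip()}")
            return reply
        ```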

        The only hidden text is the initial prompt that turns GPT into a chatbot, along with some start/stop tokens that hand control back to the user (plain GPT will just auto-complete both sides of the conversation).
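
        Schematically, the whole chatbot layer amounts to something like the sketch below: a hidden preamble plus stop markers so the model stops and hands the turn back to you. The preamble text and the <|user|>/<|assistant|> markers are invented for illustration, not the actual tokens.

        ```python
        # Schematic "chatbot layer" over a plain completion model. The preamble
        # and the <|user|>/<|assistant|> markers are illustrative only.
        # `complete` is a hypothetical stand-in for a text-completion call.

        HIDDEN_PREAMBLE = "You are a helpful assistant. Answer the user's messages."

        def build_prompt(visible_history):
            """Flatten the visible chat into one text buffer for the model to continue."""
            lines = [HIDDEN_PREAMBLE]
            for role, text in visible_history:        # e.g. ("user", "Hi there")
                lines.append(f"<|{role}|> {text}")
            lines.append("<|assistant|> ")             # cue the model to reply
            return "\n".join(lines)

        def chat_turn(complete, visible_history, user_message):
            """One turn: append the user's text, autocomplete until a stop marker."""
            visible_history.append(("user", user_message))
            prompt = build_prompt(visible_history)
            reply = complete(prompt, stop=["<|user|>"])  # hand control back to the user
            visible_history.append(("assistant", reply.strip()))
            return reply
        ```

        Without the stop marker, a base model would just keep writing the user’s side of the conversation too.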

        Some experiments like AutoGPT do generate summaries and outlines for larger problems, from what I understand. But ChatGPT is, so far, just a chatbot layer on top of GPT, without any extra cleverness.
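
        The extra cleverness in those experiments lives in an outer script rather than in the model itself: ask for an outline, then loop over the outline items as separate prompts. Roughly (again with a made-up `complete` stand-in):

        ```python
        # Sketch of AutoGPT-style scaffolding: the extra structure is a plain
        # loop around the model, not anything inside it.
        # `complete` is a hypothetical stand-in for a text-completion call.

        def solve_with_outline(complete, task: str) -> list:
            # 1. Ask the model for a short numbered outline of the task.
            outline = complete(f"Write a short numbered outline of steps to: {task}")
            steps = [line.strip() for line in outline.splitlines() if line.strip()]

            # 2. Work through each step as its own prompt, feeding notes forward.
            results, notes = [], ""
            for step in steps:
                result = complete(f"Task: {task}\nNotes so far:\n{notes}\nNow do this step: {step}")
                notes += f"\n{step}: {result.strip()}"
                results.append(result)
            return results
        ```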