• lemmyvore@feddit.nl
    link
    fedilink
    English
    arrow-up
    4
    ·
    2 months ago

    I guess? I’m surprised that the original model was on equal footing to the user prompts to begin with. Why was the removal of the origina training a feature in the first place? It doesn’t make much sense to me to use a specialized model just to discard it.

    It sounds like a very dumb oversight in GPT and it was probably long overdue for fixing.

    • TwilightVulpine@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      ·
      2 months ago

      A dumb oversight but an useful method to identify manufactured artificial manipulation. It’s going to make social media even worse than it already is.

    • jacksilver@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      2 months ago

      Because all of these models are focused on text prediction/QA, the whole idea of “prompts” organically grew out of the functionality when they tried to make it something more useful/powerful. Everything from function calling, agents, now this are just be bolted onto the foundation of LLMs.

      Its why this seems more like a patch than an actual iteration of the technology. They aren’t approaching it at the fundamentals.