• @Octopus1348@lemy.lol
    19 points · 7 months ago

    Anyone else get the feeling that GPT-3.5 is becoming dumber?

    I made an app for myself to chat with GPT; it also had some extra features that ChatGPT didn’t have at the time (but does now). I hadn’t used it for a while (only Bing AI occasionally), and recently I wanted to use it again. I had to fix some API calls because the OpenAI Python module jumped to 1.0.0, but that didn’t touch any prompts (this is important: it’s my app, not ChatGPT, so a prompt change can’t be the cause if I didn’t make one), and I didn’t change which model it uses.
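
    For context, the 1.0.0 release of the openai Python package replaced the old module-level calls (e.g. openai.ChatCompletion.create) with a client object, which is the kind of fix described above. A minimal sketch of the updated call, assuming the app uses the chat completions endpoint with gpt-3.5-turbo (the comment doesn’t show the actual code):

    ```python
    # Sketch of the openai>=1.0.0 calling style (assumed; the commenter's real code isn't shown).
    # The old module-level openai.ChatCompletion.create(...) call was removed in 1.0.0.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; the comment only says it wasn't changed
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)
    ```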

    Once everything was fixed, I started using it again, and it was noticeably dumber than before: it made things up, misspelled the name of a place, and so on.

    This could be intentional, to push people toward ChatGPT Plus and GPT-4. At least GPT-4 is cheaper through the API, and it’s pay-as-you-go rather than a subscription.
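
    As a rough sanity check on that cost claim, a back-of-the-envelope comparison against the flat subscription. All prices below are assumptions from roughly that period, not figures from the comment:

    ```python
    # Back-of-the-envelope cost comparison (all prices are assumptions, not from the comment):
    # - GPT-4 (8K context) API: ~$0.03 per 1K input tokens, ~$0.06 per 1K output tokens
    # - ChatGPT Plus: flat $20 per month
    INPUT_PER_1K = 0.03    # USD, assumed
    OUTPUT_PER_1K = 0.06   # USD, assumed
    PLUS_MONTHLY = 20.00   # USD per month, assumed

    # Suppose a typical exchange uses about 500 input and 500 output tokens.
    cost_per_exchange = 0.5 * INPUT_PER_1K + 0.5 * OUTPUT_PER_1K
    break_even = PLUS_MONTHLY / cost_per_exchange
    print(f"~${cost_per_exchange:.3f} per exchange; "
          f"API stays cheaper below ~{break_even:.0f} exchanges per month")
    ```

    Under those assumptions, light or occasional use comes out well under the flat fee, which is presumably the point being made.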

    • @Daxtron2
      8 points · 7 months ago

      Every time they try to lock it down more, the quality gets noticeably worse.

    • @Smacks@lemmy.world
      3 points · 7 months ago

      I’ve noticed that too. I recall seeing an article about it explaining how to build a nuclear reactor, David Hahn style. I don’t doubt that they’re making it dumber now to get people to buy premium.