• @wick@lemm.ee
    link
    fedilink
    English
    63 months ago

    I guess I just didn’t know that LLMs were set up his way. I figured they were fed massive hash tables of behaviour directly into their robot brains before a text prompt was even plugged in.

    But yea, tested it myself and got the same result.

    • @ilinamorato@lemmy.world
      link
      fedilink
      English
      63 months ago

      They are also that, as I understand it. That’s how the training data is represented, and how the neurons receive their weights. This is just leaning on the scale after the model is already trained.

    • just another dev
      link
      fedilink
      English
      33 months ago

      There are several ways to go about it, like (in order of effectiveness): train your model from scratch, combine a couple of existing models, finetune an existing model with extra data you want it to specialise on, or just slap a system prompt on it. You generally do the last step at any rate, so it’s existence here doesn’t proof the absence of any other steps. (on the other hand, given how readily it disregards these instructions, it does seem likely).

    • @afraid_of_zombies@lemmy.world
      link
      fedilink
      English
      23 months ago

      Some of them let you preload commands. Mine has that. So I can just switch modes while using it. One of them for example is “daughter is on” and it is to write text on a level of a ten year old and be aware it is talking to a ten year old. My eldest daughter is ten