When querying AI, end the query with “provide verifiable citations”. It often vastly reduces the bullshit.

  • sga@lemmings.world · 6 months ago

    It does not. I get your perspective, and I would not even deny that when you added that, you got a better response. What most likely happened is that it also added what seemed to be verifiable sources, but there is no guarantee that those cited sources are correct (or even exist). LLMs (usually) do not have a way to generate less factual or more factual responses; they just give the most "likely" response. Hence, adding "provide verifiable citations" does not really affect factuality; it changes the perspective with which the answer is given.

    For example, compare "a student got x marks" with "assume the kid is smart; the student got y marks". You would most likely guess y > x, but I never told you in which domain(s) the kid was smart, whether the kid was even tested in those domains, or whether the test was fair (I could have rigged it to give my favourite student higher marks).

    With LLMs, your additional "context" slightly changes the "effective weights" (that is the best ELI5 I can do for it; in reality, your additional tokens contribute separate dot products, so the resulting likelihood vector over output tokens changes).
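    The "dot products" point can be sketched with a toy next-token distribution (a toy illustration, not a real model; the vocabulary and shift values are invented). Extra prompt tokens just shift the logits, so citation-shaped tokens become more likely whether or not the citation is real:

```python
import math

def softmax(logits):
    """Turn raw logits into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Toy "vocabulary" of possible next-token behaviours and base logits.
vocab = ["cites a real source", "cites an invented source", "gives no source"]
base_logits = [1.0, 1.2, 2.0]

# Extra prompt tokens ("provide verifiable citations") contribute additional
# dot products to the logits; modeled here as a fixed additive shift that
# boosts citation-shaped tokens -- the real AND the invented one alike.
context_shift = [1.5, 1.5, -0.5]
new_logits = [b + c for b, c in zip(base_logits, context_shift)]

print(softmax(base_logits))
print(softmax(new_logits))
```

    Note that the invented citation gets exactly the same boost as the real one: the answer looks more sourced without being more factual.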

    I added "usually" because one could design a setup (there are things like this in production somewhere) where your additional context is first parsed by a "smaller" (or more specialised) system, which then changes the parameters (temperature, top-k, …) for the actual LLM. The answer may then become more "reproducible", but that still does not guarantee there would be less bullshit.
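    A minimal sketch of that kind of setup, with entirely hypothetical names and thresholds: a small routing step reads the prompt and picks sampling parameters for the main model. Lower temperature and top-k make the output more reproducible, not more factual.

```python
# Hypothetical router: all names and values are illustrative, not any real API.
def route_sampling_params(prompt: str) -> dict:
    # Default sampling parameters for the main LLM.
    params = {"temperature": 0.8, "top_k": 50}
    # If the prompt asks for citations/sources, sample more conservatively:
    # the answer becomes more reproducible, but not more factual.
    if any(w in prompt.lower() for w in ("citation", "source")):
        params = {"temperature": 0.2, "top_k": 10}
    return params

print(route_sampling_params("Explain attention, provide verifiable citations"))
# -> {'temperature': 0.2, 'top_k': 10}
print(route_sampling_params("Explain attention"))
# -> {'temperature': 0.8, 'top_k': 50}
```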

    • texture@lemmy.world · 24 days ago

      It literally does help; I do this.

      Anyway, they said "often", not "guaranteed".

      You act like it wouldn't be helpful at all to have sources provided. Seems like you just hate AI and want everyone else to hate it too.

        • sga@lemmings.world · 23 days ago

        > I get your perspective, and would not even deny that when you added that, you got better response

        > You act like it wouldnt be helpful at all

        What part of the above made you believe I suggested "always"?

        I do not hate AI (at least not as much as the average fediverse person). As a researcher, I know what ML can do. LLMs are fine for language processing, and I even run local models (under 10B parameters). I do not like how a lot of things are going (not getting into that here), but I do not hate AI.

        I even suggested a setup that could be implemented to tool-call whenever it needs verifiable sources, but I never said "never". I would recommend you read my original comment.

          • texture@lemmy.world · 22 days ago

          Oh, no, it was the first sentence that stated "It does not."

          Sorry for the misunderstanding.