• acec@lemmy.world
    10 months ago

    Compile llama.cpp, download a small GGML LLM model, and you will have a quite intelligent assistant running on your phone.
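    A minimal sketch of what that looks like once a quantized model is on the device, here via the llama-cpp-python bindings rather than the raw llama.cpp binary; the model path and prompt are placeholders for whatever small model you downloaded:

        # Sketch: load a small quantized model and generate a reply locally.
        # Requires: pip install llama-cpp-python
        # The model path is a placeholder, not a specific recommended file.
        from llama_cpp import Llama

        llm = Llama(model_path="./models/ggml-model-q4_0.bin", n_ctx=512)

        output = llm(
            "Q: What can a local LLM on a phone help with? A:",
            max_tokens=64,
            stop=["Q:"],
        )
        print(output["choices"][0]["text"])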

    • bassomitron@lemmy.world
      10 months ago

      Would that actually be decent? Even 6B models feel way too rudimentary after experiencing 33B+ models and/or ChatGPT. I haven’t tried those really scaled-down and optimized models, though!