• acec@lemmy.world
    1 year ago

    Compile llama.cpp, download a small GGML LLM model, and you will have a quite intelligent assistant running on your phone.
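
    For reference, the build-and-run steps look roughly like this (on Android you would typically do this inside Termux; the model filename here is a placeholder, and newer llama.cpp releases use the GGUF format and a binary named `llama-cli` instead of `main`):

    ```shell
    # Sketch: clone and build llama.cpp (CPU-only build)
    git clone https://github.com/ggerganov/llama.cpp
    cd llama.cpp
    make

    # Download a small quantized model (GGML in older releases, GGUF in
    # current ones) from e.g. Hugging Face, then run an interactive prompt.
    # "tinyllama-q4_0.gguf" is an illustrative placeholder filename.
    ./main -m models/tinyllama-q4_0.gguf -p "Hello, who are you?" -n 128
    ```

    The 4-bit quantized variants (q4_0 and similar) are what make this feasible on phone-class RAM.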

    • bassomitron@lemmy.world
      1 year ago

      Would that actually be decent? Even 6b models feel way too rudimentary after experiencing 33+b models and/or chatgpt. I haven’t tried those really scaled down and optimized models, though!