SaltyIceteaMaker@lemmy.ml to Linux@lemmy.ml · 1 year ago
any cool ideas what i could do with termux?
acec@lemmy.world · 1 year ago
Compile llama.cpp, download a small GGML LLM model, and you will have a quite intelligent assistant running on your phone.
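The suggestion above can be sketched roughly like this inside Termux. This is a hedged sketch, not from the thread: the Termux package names are assumed, and the model download URL/filename is a hypothetical placeholder (substitute any small quantized GGML model, such as the orca_mini_3b mentioned below).

```shell
# Install build tools via Termux's package manager (package names assumed)
pkg install -y git clang make wget

# Fetch and build llama.cpp
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Download a small quantized GGML model
# (URL and filename are placeholders, not a real link)
wget -O model.bin 'https://example.com/orca-mini-3b.ggmlv3.q4_0.bin'

# Run the model interactively on the phone's CPU
./main -m model.bin -p "Hello, who are you?" -n 128
```

A ~3B-parameter model quantized to 4 bits fits in a few GB of RAM, which is why this is feasible on a phone at all; larger models will typically be killed by Android's memory manager.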
bassomitron@lemmy.world · 1 year ago
Would that actually be decent? Even 6B models feel way too rudimentary after experiencing 33+B models and/or ChatGPT. I haven’t tried those really scaled-down and optimized models, though!
acec@lemmy.world · 1 year ago
Decent enough for a model 50 times smaller than ChatGPT. I use orca_mini_3b.
arthurpizza@lemmy.world · 1 year ago
I got llama to compile, but it crashes every time I try running it.