moe90@feddit.nl to Technology@lemmy.world · English · 8 months ago
Microsoft’s controversial Recall scraper is finally entering public preview (arstechnica.com)
32 comments · 28 upvotes, 0 downvotes
PushButton@lemmy.world · English · 3 points · 8 months ago

During that time, you can easily install Ollama on an old computer.

With a client like Oatmeal, you can save, reload, or delete your sessions as you wish, so your model remembers what you want.

I am running llama3.1:8b; it’s good enough for day-to-day operations.

Need for spyware: 0
Need to take screenshots of my desktop: 0
Need to buy another computer for the hyped chipset: 0
Need for Microsoft bullshit: 0

My old computer is apparently “not good enough” for Windows 11, but it’s surely good enough for my personal AI running on Linux!
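The session-memory setup described above can be sketched against Ollama’s local REST API: the client keeps the conversation history and sends it back with every turn, which is how the model “remembers” the session. This is a minimal sketch assuming a default Ollama install on `localhost:11434`, not the commenter’s exact Oatmeal configuration:

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/chat"  # default Ollama endpoint

def build_chat_payload(history, user_message, model="llama3.1:8b"):
    """Build a request body for Ollama's /api/chat endpoint.

    Replaying the accumulated `history` on every call is what gives
    a stateless local model the appearance of session memory.
    """
    messages = history + [{"role": "user", "content": user_message}]
    return {"model": model, "messages": messages, "stream": False}

def chat(history, user_message):
    """Send one turn to a local Ollama server (requires `ollama serve`)."""
    payload = build_chat_payload(history, user_message)
    req = request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        reply = json.load(resp)["message"]
        # Return the extended history so the next call remembers this turn.
        return history + [payload["messages"][-1], reply]
```

Saving and reloading a session then amounts to serializing the `history` list to disk, which is essentially what dedicated clients do for you.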
x00z@lemmy.world · English · +1 / −1 · 8 months ago

I tried llama3.1:8b and it’s absolutely horrible.
brucethemoose@lemmy.world · English · 1 point · edited 8 months ago

You can use larger “open” models through free or dirt-cheap APIs, though.

TBH local LLMs are still kinda “meh” unless you have a high-VRAM GPU. I agree that 8B is kinda underwhelming, but the step up to something like Qwen 14B is enormous.
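Most hosted providers of open-weight models expose an OpenAI-compatible chat-completions endpoint, so moving from a local 8B model to a larger hosted one is mostly a base-URL and model-name change. In this sketch the `base_url`, `api_key`, and `model` values are placeholders for whichever provider and model you pick, not a specific recommendation:

```python
import json
from urllib import request

def build_request(base_url, api_key, model, prompt):
    """Construct an OpenAI-compatible chat-completions request.

    `base_url` and `model` are provider-specific placeholders; the
    request shape itself is the common OpenAI-style schema.
    """
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=json.dumps(body).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

def ask(base_url, api_key, model, prompt):
    """Send the prompt and return the assistant's reply text."""
    with request.urlopen(build_request(base_url, api_key, model, prompt)) as r:
        return json.load(r)["choices"][0]["message"]["content"]
```

Because the schema is shared, the same function works against a local Ollama server too (it serves an OpenAI-compatible `/v1` endpoint), which makes comparing a local 8B against a hosted 14B a one-line change.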