• PushButton@lemmy.world · 8 months ago

    During that time, you can easily install Ollama on an old computer.

    With a client like Oatmeal, you can save, reload, or delete your sessions as you wish, so your model remembers what you want.

    I am running llama3.1:8b; it’s good enough for day-to-day operations.

    • Need for spyware: 0
    • Need to take screenshots of my desktop: 0
    • Need to buy another computer for the hype chipset: 0
    • Need for Microsoft bullshit: 0

    My old computer is apparently “not good enough” for Windows 11, but it’s surely good enough for my personal AI running on Linux!
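
    For anyone who wants to script against a setup like this: here’s a minimal sketch of querying a local Ollama instance from Python, assuming Ollama is serving on its default port (11434) and the llama3.1:8b model mentioned above; the prompt is just a placeholder.

    ```python
    import requests

    # Query a local Ollama server (default endpoint: http://localhost:11434).
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3.1:8b",  # the model mentioned above
            "prompt": "Summarize the benefits of running an LLM locally.",  # placeholder
            "stream": False,  # return one JSON object instead of a token stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["response"])  # the generated text
    ```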

  • brucethemoose@lemmy.world · 8 months ago

        You can use larger “open” models through free or dirt-cheap APIs though.

        TBH local LLMs are still kinda “meh” unless you have a high-VRAM GPU. I agree that 8B is kinda underwhelming, but the step up to something like Qwen 14B is enormous.
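
        On the API route: a minimal sketch of calling a hosted open model through an OpenAI-compatible chat endpoint; the provider URL, model id, and API key below are placeholders, not any specific service’s real values.

        ```python
        import os
        import requests

        # Hypothetical OpenAI-compatible provider; URL, model id, and key
        # are placeholders, not a specific service's actual values.
        resp = requests.post(
            "https://api.example-provider.com/v1/chat/completions",
            headers={"Authorization": f"Bearer {os.environ['PROVIDER_API_KEY']}"},
            json={
                "model": "qwen-14b",  # stand-in id for a larger open model
                "messages": [{"role": "user", "content": "Hello!"}],
            },
            timeout=120,
        )
        resp.raise_for_status()
        print(resp.json()["choices"][0]["message"]["content"])
        ```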