I am a teacher and I have a LOT of different literature material that I wish to study, and play around with.
I wish to have a self-hosted and reasonably smart LLM into which I can feed all the textual material I have generated over the years. I would be interested to see if this model can answer some of the subjective questions I have set on my exams, or write small paragraphs about the topic I teach.
In terms of hardware, I have an old Lenovo laptop with an NVIDIA graphics card.
P.S: I am not technically very experienced. I run Linux and can do very basic stuff. Never self hosted anything other than LibreTranslate and a pihole!
While you can run an LLM on an “old” laptop with an NVIDIA GPU, it will likely be really slow: think several minutes per response, or much longer. Huggingface.co is a good place to start and has a ton of different LLMs to choose from, ranging from small enough to run on your hardware to ones that won’t fit at all.
As a teacher, you know that research is going to be vital to understanding and implementing this project. There is a plethora of information out there, and no single person’s answer will work perfectly for your wants and your hardware.
When you have figured out your plan and then run into issues, that’s a good point to ask questions with more information about your situation.
I say this because I just went through it myself, not to be an ass.
I’m in the early stages of this myself and haven’t actually run an LLM locally, but the term that steered me in the right direction for what I was trying to do was “RAG”: Retrieval-Augmented Generation.
ragflow.io (terrible name, but a good product) seems to be a good starting point. It’s mainly set up for APIs at the moment, though I found this link about local LLM integration and I’m going to play with it later today: https://github.com/infiniflow/ragflow/blob/main/docs/guides/deploy_local_llm.md
I’d recommend trying LM Studio (https://lmstudio.ai/). You can use it to run language models locally. It has a pretty nice UI and it’s fairly easy to use.
I will say, though, that it sounds like you want to feed a large number of tokens into the model, which will require a model with a large context length and may require a pretty beefy machine.
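To get a feel for whether your material even fits in a model’s context window, here’s a toy back-of-envelope check. The ~4-characters-per-token figure is a common rule of thumb for English text, not an exact number; real tokenizers vary, so treat this as a rough sketch.

```python
# Rough check: does your material fit in a model's context window?
# Assumes the common ~4 chars/token heuristic for English; actual
# tokenizers differ, so this is only an estimate.

def estimate_tokens(text: str) -> int:
    """Approximate token count using the ~4 chars/token rule of thumb."""
    return len(text) // 4

def fits_in_context(text: str, context_length: int, reserve: int = 512) -> bool:
    """True if the text, plus room reserved for the model's answer,
    fits in the given context window."""
    return estimate_tokens(text) + reserve <= context_length

sample = "word " * 2000  # ~10,000 characters of course material
print(estimate_tokens(sample))                        # ~2500 tokens
print(fits_in_context(sample, context_length=4096))   # True
print(fits_in_context(sample, context_length=2048))   # False
```

If your notes blow past the window, that’s exactly the situation where RAG (mentioned elsewhere in this thread) helps: you only feed the relevant chunk, not everything.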
I watched NetworkChuck’s tutorial and just did what he did, but on my MacBook. Any recent MacBook (M-series) will suffice. https://youtu.be/Wjrdr0NU4Sk?si=myYdtKnt_ks_Vdwo
NetworkChuck is the man
You need more than an LLM to do that. You need a cognitive architecture (CA) around the model that includes RAG to store and retrieve the data. I would start with an agent framework that already includes the workflow you’re asking for. Unfortunately I don’t have a name ready for you, but take a look here: https://github.com/slavakurilyak/awesome-ai-agents
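To make the store/retrieve idea concrete, here is a deliberately tiny sketch of the “retrieval” half of RAG: score stored notes by word overlap with the question, then prepend the best match to the prompt you’d hand the LLM. Real systems use embeddings and a vector store; the documents and questions below are made-up examples.

```python
# Toy RAG retrieval: pick the stored note that shares the most words
# with the question, then build an augmented prompt around it.
import re

def words(text: str) -> set[str]:
    """Lowercased word set, ignoring punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question: str, documents: list[str]) -> str:
    """Return the stored document with the largest word overlap."""
    q = words(question)
    return max(documents, key=lambda d: len(q & words(d)))

def build_prompt(question: str, documents: list[str]) -> str:
    """Augment the question with retrieved context before generation."""
    context = retrieve(question, documents)
    return f"Context: {context}\n\nQuestion: {question}"

notes = [
    "Hamlet delays his revenge because he doubts the ghost.",
    "The green light in Gatsby symbolises an unreachable dream.",
]
print(build_prompt("Why does Hamlet delay his revenge?", notes))
```

The point is that the model never needs all your material at once; retrieval narrows it down to what’s relevant per question, which is why RAG works even with modest context windows.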
The easiest way to run local LLMs on older hardware is Llamafile https://github.com/Mozilla-Ocho/llamafile
For non-NVIDIA GPUs, WebGPU is the way to go: https://github.com/abi/secret-llama
https://matilabs.ai/2024/02/07/run-llms-locally/
Haven’t done this yet, but this is a source I saved in response to a similar question a while back.
While this will get you a self-hosted LLM, it is not possible to feed data to it like that out of the box. As far as I know there are two possibilities:
- Take an existing model and fine-tune it with your literature data. How well this works will depend on how much “a lot” means when it comes to the literature.
- Train a model yourself using only your literature data.
Both approaches will require some programming knowledge and an understanding of how an LLM works. Additionally, they will require preparing the unstructured literature data into structured data that can be used to train or fine-tune the model.
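As a rough illustration of what that preparation step could look like, here is a minimal sketch that turns “Q: … A: …” style exam notes into prompt/completion records. The JSONL prompt/completion shape and the `Q:`/`A:` convention are assumptions for illustration only; check what format the fine-tuning tool you pick actually expects.

```python
# Sketch: convert unstructured "Q: ... A: ..." study notes into
# structured prompt/completion records for fine-tuning.
import json

def to_records(raw_text: str) -> list[dict]:
    """Split 'Q: ... A: ...' notes into prompt/completion pairs."""
    records = []
    for block in raw_text.split("Q:"):
        if "A:" not in block:
            continue  # skip preamble or malformed fragments
        question, answer = block.split("A:", 1)
        records.append({"prompt": question.strip(), "completion": answer.strip()})
    return records

notes = """
Q: Who wrote Wuthering Heights?
A: Emily Bronte.
Q: What is the main theme of the novel?
A: Destructive passion and revenge.
"""

# One JSON object per line (JSONL), a common training-data layout.
for record in to_records(notes):
    print(json.dumps(record))
```

Real course material will be messier than this, and cleaning it is usually the bulk of the work, but the idea is the same: every training example needs a clear input and a clear expected output.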
I’m just a CS student, so not an expert in this regard ;)
Thanks for this comment.
My main drive for self hosting is to escape data harvesting and arbitrary query limits, and to say, “I did this.” I fully expect it to be painful and not very fulfilling…
Jan.ai might be a good starting point, or Ollama? There’s https://tales.fromprod.com/2024/111/using-your-own-hardware-for-llms.html which has some guidance on using Jan.ai for both server and client.
There are a few. Very easy if you set it up with Docker.
Best is probably just Ollama with Danswer as a frontend. Danswer will do all the RAG stuff for you, like managing and uploading documents and so on.
Ollama is becoming the standard self-hosted LLM runner, and you can add any models you want that fit on your hardware.
https://ollama.com/blog/ollama-is-now-available-as-an-official-docker-image