I’m interested in hosting something like this, and I’d like to hear about others’ experiences with it.
The main reasons to host this are privacy and integrating my own PKM data (mainly Markdown files).
Feel free to recommend videos, articles, other Lemmy communities, etc.
I’m actively using ollama with Docker to run the llama2:13b model. It generally works fine, but it’s heavy on resources, as expected.
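For reference, this is basically the official ollama Docker setup (assumes an NVIDIA GPU with the container toolkit installed; drop `--gpus=all` for CPU-only):

```sh
# Start the ollama server, persisting downloaded models in a named volume
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Pull and chat with llama2:13b inside the container
docker exec -it ollama ollama run llama2:13b
```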
ollama + codellama works perfectly. I use it from Neovim with a plugin called gen.nvim, I think.
Check out ollama.
There are a lot of models you can pull from the official library.
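For example (substitute whatever model fits your hardware):

```sh
# Download a model from the official library, then start an interactive chat
ollama pull llama2:13b
ollama run llama2:13b
```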
Using ollama, you can also run external GGUF models found in places like Hugging Face if you use a modelfile, with something as simple as:

```sh
# Write a minimal modelfile pointing at a local GGUF file (>| overrides noclobber)
echo "FROM ~/Documents/ollama/models/$model_filepath" >| ~/Documents/ollama/modelfiles/$model_name.modelfile
```
I tried a bunch, but the current state of the art is text-generation-webui, which can load multiple models and has a workflow similar to stable-diffusion-webui.
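If you want to try it, a rough manual install looks like this (a sketch; the repo also ships one-click installers and hardware-specific requirements files, so check the README first):

```sh
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt   # variants exist per GPU/CPU setup; see the README
python server.py --listen         # the UI is served on port 7860 by default
```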
I’ve tried both this and https://github.com/jmorganca/ollama. I liked the latter a lot more; just can’t remember why.
The GUI for ollama is a separate project: https://github.com/ollama-webui/ollama-webui
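If you already run ollama in Docker, the webui can run alongside it; a sketch, assuming the image follows the repo’s ghcr.io naming (verify the port mapping and flags against the project README):

```sh
# Run the web UI and point it at the ollama server on the host
# (image path and port mapping are assumptions; check the README)
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v ollama-webui:/app/backend/data \
  --name ollama-webui \
  ghcr.io/ollama-webui/ollama-webui:main
```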