Saturday, September 27, 2025
local LLM setup with ollama
install ollama:
1. Download Ollama from https://ollama.com/download and double-click to install.
2. Run "ollama serve" to start the server, then "ollama --version" to verify the install.
3. Search the Ollama website for deepseek and find the version to install, e.g.: ollama pull deepseek-r1:8b,
or OpenAI's open-weight model: ollama pull gpt-oss:20b
4. Run the model: ollama run deepseek-r1:8b (ollama list shows what's installed); see the Python sketch after this list for calling the model from a script.
5. Type /bye to quit the chat.
6. Ollama is command-line by default, but for a web-based interface,
install Open WebUI (make sure Python is 3.11):
pip install open-webui
open-webui serve
Visit http://localhost:8080/ in your browser, sign up, and connect to your Ollama models. It looks just like ChatGPT.
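Once ollama serve is running, the model is also reachable over a local HTTP API, which is handy for scripts. Below is a minimal Python sketch against Ollama's REST endpoint; it assumes the server is on the default port 11434 and that deepseek-r1:8b has already been pulled.

import json
import urllib.request

# Ollama's generate endpoint on the default local port.
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask(prompt: str, model: str = "deepseek-r1:8b") -> str:
    """Send one prompt to a local Ollama model and return the full reply."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # one JSON object instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask("Why is the sky blue?"))

Open WebUI talks to this same local API under the hood, so if the script works, the web interface should find your models too.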
uninstall ollama (macOS):
1. ollama ps (see which models are running)
2. ollama stop <model> (stop each running model)
3. Drag the Ollama app to the Trash, or right-click and select Move to Trash
4. rm -rf ~/.ollama (removes downloaded models and settings)
5. rm ~/Library/LaunchAgents/com.ollama.ollama.plist (optional)
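To double-check that nothing is still listening after the cleanup, here's a small Python sketch that probes the default Ollama port (11434 is an assumption; adjust if you changed it):

import urllib.request
import urllib.error

# A running Ollama server answers a plain GET / with "Ollama is running".
try:
    with urllib.request.urlopen("http://localhost:11434", timeout=2) as resp:
        print("Port 11434 still answers; Ollama may still be running:", resp.read().decode())
except (urllib.error.URLError, OSError):
    print("Nothing on port 11434; the server is gone.")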