Saturday, September 27, 2025
Local LLM setup with Ollama
Install Ollama:
1. Download Ollama from https://ollama.com/download and double-click to install.
2. Run "ollama serve" to start the server and "ollama --version" to verify the install.
3. Search the Ollama website for deepseek and find a version to install, e.g.: ollama pull deepseek-r1:8b
or OpenAI's open-weight model: ollama pull gpt-oss:20b
4. Run the model: ollama run deepseek-r1:8b (use "ollama list" to see what's installed). You can also talk to the server directly over HTTP; see the curl sketch after this list.
5. Type /bye to quit the session.
6. Ollama is command-line by default, but for a web-based interface,
install Open WebUI (make sure Python is 3.11; a venv sketch follows this list):
pip install open-webui
open-webui serve
Visit http://0.0.0.0:8080/ in your browser, sign up, and connect to your Ollama models. It looks just like ChatGPT.
The pip route has issues like the Python version dependency; it's cleaner and quicker to install Open WebUI with Docker (see the docker run sketch below).
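To dodge the Python version issue on the pip route, one option is a virtual environment pinned to 3.11. A minimal sketch, assuming python3.11 is already on your PATH:

    # create an isolated environment pinned to Python 3.11
    python3.11 -m venv ~/openwebui-env
    source ~/openwebui-env/bin/activate
    # install and run Open WebUI inside the venv
    pip install open-webui
    open-webui serve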
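For the Docker route, the invocation below follows the pattern from the Open WebUI README: the named volume keeps your data across container restarts, and the host-gateway mapping lets the container reach the Ollama server running on the host. Treat the exact flags and image tag as assumptions and check the project docs:

    docker run -d -p 3000:8080 \
      --add-host=host.docker.internal:host-gateway \
      -v open-webui:/app/backend/data \
      --name open-webui --restart always \
      ghcr.io/open-webui/open-webui:main
    # then visit http://localhost:3000/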
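Under the hood, Open WebUI talks to Ollama over its local HTTP API, which listens on port 11434 by default. You can hit the same API yourself with curl; a quick sketch, using the deepseek-r1:8b model pulled above:

    # one-shot, non-streaming generation against the local Ollama server
    curl http://localhost:11434/api/generate -d '{
      "model": "deepseek-r1:8b",
      "prompt": "Why is the sky blue?",
      "stream": false
    }'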
Uninstall Ollama (macOS):
1. ollama ps to list any running models
2. ollama stop <model> to stop them
3. Drag the Ollama app to the Trash, or right-click and select Move to Trash
4. rm -rf ~/.ollama to remove downloaded models and settings
5. rm ~/Library/LaunchAgents/com.ollama.ollama.plist (optional)
llama3.1:70b uses about 40 GB of memory, and it took around 10 minutes to answer the question below, and still got it wrong:
(base) peterpeng@f6:77:6d:59:8b:d3 ~ % ollama run llama3.1:70b
>>> 小明比赛超过了第二名,他现在是第几名? (Xiao Ming overtook the second-place runner in a race; what place is he in now?)
他应该是第一! (He should be first!)
ollama rm llama3.1:70b
ollama pull gpt-oss:20b
gpt-oss:20b only took 2 seconds to figure it out:
他现在是 **第二名**。 (He is now in **second place**.)
如果小明在比赛中超过了原本的第二名,那么他就取代了那个位置,成为新的第二名,原来第二名的人则跌到第三名。 (If Xiao Ming overtook the runner who was originally in second place, he takes over that position and becomes the new second place; the previous second-place runner drops to third.)
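For quick side-by-side tests like this, you can pass the prompt straight to ollama run instead of opening an interactive session; the model prints its answer and exits:

    # one-shot prompt: "Xiao Ming overtook the second-place runner; what place is he in now?"
    ollama run gpt-oss:20b "小明比赛超过了第二名,他现在是第几名?"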