AI progress, with honesty.
Tools, models, and guides for your AI journey
Ollama is the easiest way to run large language models locally. It handles model downloads and setup, and provides a simple command-line interface for chatting with models. Works on Mac, Windows, and Linux.
curl -fsSL https://ollama.ai/install.sh | sh (Linux)
ollama pull llama3.2 to download your first model
ollama run llama3.2 to start chatting

LM Studio provides a graphical interface for discovering, downloading, and running local LLMs. Perfect for users who prefer clicking over command lines. Includes a built-in chat interface and lets you adjust model settings like temperature and context length.
Full AI assistant with tools & automation
OpenClaw is a desktop AI assistant that builds on Ollama to provide a complete AI experience. It includes local and cloud model support, file analysis, web browsing, code execution, and automation features. Designed for productivity and privacy.
Model hub with 500k+ models
Hugging Face is the GitHub of AI models. Browse hundreds of thousands of open-source models for text, image, audio, and more. Most are free to download and use. Great for finding specialized models or exploring what's possible.
Run these locally with Ollama or LM Studio. All are free and open-source.
| Model | Size | Best For | Install Command |
|---|---|---|---|
| Llama 3.2 3B | ~2GB | General use, balanced performance | ollama pull llama3.2 |
| Llama 3.2 1B | ~1.3GB | Fast responses, low resources | ollama pull llama3.2:1b |
| Phi-4 Mini | ~2.5GB | Reasoning, coding, math | ollama pull phi4-mini |
| Mistral 7B | ~4GB | Efficient, fast, great all-around | ollama pull mistral |
| Qwen 2.5 7B | ~4.7GB | Coding, multilingual, long context | ollama pull qwen2.5 |
| Gemma 3 4B | ~3GB | Google's efficient open model | ollama pull gemma3:4b |
| DeepSeek R1 | ~4.7GB | Advanced reasoning, math, code | ollama pull deepseek-r1 |
| Codellama | ~4GB | Code generation & completion | ollama pull codellama |
Tip: Start with llama3.2:1b if you're unsure or have limited RAM. It's small and fast. Move up to larger models as needed.
🍎 Apple Silicon (M1/M2/M3/M4): Macs with Apple Silicon run local models very efficiently thanks to unified memory. A base M1 Mac mini with 8GB RAM can run most 3-4GB models smoothly.
🖥️ Windows/Linux with NVIDIA GPU: NVIDIA GPUs with 6GB+ VRAM provide excellent performance. RTX 3060 or better recommended. CPU-only works but is slower.
Cloud services require no install, but your data goes to their servers. Good for complex tasks or when you don't have hardware for local models.
Privacy note: When using cloud services, your data is sent to their servers. Avoid sharing sensitive personal or business information.
Fastest way to get started:

1. Download Ollama from ollama.ai (Mac: drag to Applications. Windows: run the installer. Linux: curl command.)
2. Open Terminal and run: ollama pull llama3.2
3. Start chatting: ollama run llama3.2
That's it! You now have a fully local AI assistant running on your computer. Once the model is downloaded, no internet is required, and everything stays on your machine.
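Beyond the interactive chat, the same local model can be reached programmatically: Ollama serves a REST API on localhost port 11434 by default. Here is a minimal sketch in Python using only the standard library; the /api/generate endpoint and port shown are Ollama's defaults, so adjust them if you changed your setup:

```python
import json
import urllib.request

# Ollama's default local endpoint (adjust if you run it elsewhere)
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> bytes:
    """Build the JSON body that Ollama's /api/generate endpoint expects.

    stream=False asks for a single JSON response instead of a stream
    of partial chunks.
    """
    return json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    try:
        print(ask("llama3.2", "Say hello in five words."))
    except OSError:
        # Connection refused: the Ollama server isn't up yet
        print("Ollama is not running; start the app (or `ollama serve`) first.")
```

Because everything goes through localhost, the prompt and the reply never leave your machine; the same script works with any model you have pulled, just by changing the model name.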