
📚 AI Resources

Tools, models, and guides for your AI journey

🛠️ Recommended Tools

🦙 Ollama

Run Llama, Mistral, Qwen, and more locally

Ollama is the easiest way to run large language models locally. It handles model downloading and setup, and provides a simple command-line interface for chatting with models. Works on Mac, Windows, and Linux.

  • One-line install
  • 100+ models available
  • Automatic GPU detection
  • OpenAI-compatible API
  • Free & open source
  • Works offline

Quick Install

  1. Download from ollama.ai (Mac/Windows) or run curl -fsSL https://ollama.ai/install.sh | sh (Linux)
  2. Run ollama pull llama3.2 to download your first model
  3. Run ollama run llama3.2 to start chatting
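Because the API is OpenAI-compatible, you can also talk to Ollama over HTTP once it's running. A minimal sketch, assuming Ollama's default port of 11434 and that llama3.2 has already been pulled; the probe step just lets the script degrade gracefully if the server is off:

```shell
# Ollama serves an OpenAI-compatible API on localhost:11434 while running.
OLLAMA_URL="http://localhost:11434"

# Probe the server first so this works even when Ollama isn't up.
if curl -fsS --max-time 2 "$OLLAMA_URL/api/tags" >/dev/null 2>&1; then
  # Same request shape the OpenAI client libraries send:
  curl -s "$OLLAMA_URL/v1/chat/completions" \
    -H "Content-Type: application/json" \
    -d '{"model":"llama3.2","messages":[{"role":"user","content":"Say hi"}]}'
else
  echo "Ollama server not reachable at $OLLAMA_URL (start it with 'ollama serve')"
fi
```

Anything built for the OpenAI API (editors, chat clients, libraries) can usually be pointed at this URL instead.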
🎨 LM Studio

Beautiful GUI for running local models

LM Studio provides a graphical interface for discovering, downloading, and running local LLMs. Perfect for users who prefer clicking over command lines. Includes a built-in chat interface and lets you adjust model settings like temperature and context length.

  • Visual model browser
  • Built-in chat interface
  • Model parameter controls
  • Local server mode
  • Mac & Windows
  • Free to use

Quick Install

  1. Download from lmstudio.ai
  2. Open the app and browse models in the sidebar
  3. Click a model to download, then start chatting
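Local server mode works much like Ollama's API. A hedged sketch, assuming LM Studio's default server port of 1234 and a model already loaded in the app:

```shell
# LM Studio's local server exposes an OpenAI-compatible API (default port 1234).
LMSTUDIO_URL="http://localhost:1234"

# Probe first so the script degrades gracefully when the server is off.
if curl -fsS --max-time 2 "$LMSTUDIO_URL/v1/models" >/dev/null 2>&1; then
  curl -s "$LMSTUDIO_URL/v1/chat/completions" \
    -H "Content-Type: application/json" \
    -d '{"messages":[{"role":"user","content":"Hello"}]}'
else
  echo "LM Studio server not running; enable the local server inside the app"
fi
```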

OpenClaw

Full AI assistant with tools & automation

OpenClaw is a desktop AI assistant that builds on Ollama to provide a complete AI experience. It includes local and cloud model support, file analysis, web browsing, code execution, and automation features. Designed for productivity and privacy.

  • Local & cloud models
  • File & code analysis
  • Web browsing
  • Automation & skills
  • Memory & context
  • Mac-first
🤗 Hugging Face

Model hub with 500k+ models

Hugging Face is the GitHub of AI models. Browse hundreds of thousands of open-source models for text, image, audio, and more. Most are free to download and use. Great for finding specialized models or exploring what's possible.

  • 500k+ models
  • Free to download
  • Model benchmarks
  • Community & docs

📦 Recommended Models

Run these locally with Ollama or LM Studio. All are free and open-source.

| Model | Size | Best For | Install Command |
| --- | --- | --- | --- |
| Llama 3.2 3B | ~2GB | General use, balanced performance | ollama pull llama3.2 |
| Llama 3.2 1B | ~1.3GB | Fast responses, low resources | ollama pull llama3.2:1b |
| Phi-4 Mini | ~2.5GB | Reasoning, coding, math | ollama pull phi4-mini |
| Mistral 7B | ~4GB | Efficient, fast, great all-around | ollama pull mistral |
| Qwen 2.5 7B | ~4.7GB | Coding, multilingual, long context | ollama pull qwen2.5 |
| Gemma 3 4B | ~3GB | Google's efficient open model | ollama pull gemma3:4b |
| DeepSeek R1 | ~4.7GB | Advanced reasoning, math, code | ollama pull deepseek-r1 |
| Code Llama | ~4GB | Code generation & completion | ollama pull codellama |

Tip: Start with llama3.2:1b if you're unsure or have limited RAM. It's small and fast. Move up to larger models as needed.
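To see which models you've already downloaded and how much disk they use, a small sketch (guarded in case ollama isn't installed yet):

```shell
# Check whether ollama is on PATH, then list downloaded models.
OLLAMA_BIN="$(command -v ollama || true)"

if [ -n "$OLLAMA_BIN" ]; then
  ollama list            # each row shows a model name and its size on disk
  # ollama rm llama3.2   # uncomment to delete a model and free disk space
else
  echo "ollama not installed yet; see the install steps above"
fi
```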

💻 Hardware Requirements

Minimum

  • 4GB RAM
  • 2 CPU cores
  • 5GB disk space
  • 1-2GB models only
  • May be slow

Ideal

  • 16GB+ RAM
  • Apple Silicon or NVIDIA GPU
  • 20GB+ disk space
  • All models available
  • Fast responses

🍎 Apple Silicon (M1/M2/M3/M4): Macs with Apple Silicon run local models very efficiently thanks to unified memory. A base M1 Mac mini with 8GB RAM can run most 3-4GB models smoothly.

🖥️ Windows/Linux with NVIDIA GPU: NVIDIA GPUs with 6GB+ VRAM provide excellent performance. RTX 3060 or better recommended. CPU-only works but is slower.
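A rough way to check your machine against these requirements from the terminal. The macOS and Linux commands below are standard, and the nvidia-smi line only runs when an NVIDIA driver is present:

```shell
# Rough hardware check before downloading models (macOS and Linux differ).
case "$(uname -s)" in
  Darwin)
    # macOS reports total RAM in bytes; convert to GB.
    RAM_GB=$(( $(sysctl -n hw.memsize) / 1024 / 1024 / 1024 )) ;;
  Linux)
    # /proc/meminfo reports MemTotal in kB; convert to GB.
    RAM_GB=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024 / 1024 )) ;;
  *)
    RAM_GB=0 ;;
esac
echo "Detected ~${RAM_GB}GB RAM"

# NVIDIA GPU check: prints total VRAM if a GPU and driver are present.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=memory.total --format=csv
fi
```

Compare the detected RAM against the model sizes in the table above; a model roughly needs its file size in free memory, plus headroom.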

☁️ Cloud Services

No installation required, but your data is sent to remote servers. Good for complex tasks or when your hardware can't run local models.

Privacy note: When using cloud services, your data is sent to their servers. Avoid sharing sensitive personal or business information.

🚀 Quick Start Guide

Fastest way to get started:

1. Download Ollama from ollama.ai (Mac: drag to Applications; Windows: run the installer; Linux: use the curl install command)

2. Open Terminal and run: ollama pull llama3.2

3. Start chatting: ollama run llama3.2

That's it! You now have a fully local AI assistant running on your computer. No internet required, completely private.
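Beyond the interactive chat above, ollama run also accepts a one-shot prompt argument, which makes your local model scriptable. A small guarded sketch:

```shell
# One-shot prompt: ollama prints the answer and exits, so you can pipe it.
HAVE_OLLAMA="$(command -v ollama >/dev/null 2>&1 && echo yes || echo no)"

if [ "$HAVE_OLLAMA" = yes ]; then
  ollama run llama3.2 "Summarize why local models stay private, in one sentence."
else
  echo "ollama not installed yet (see step 1)"
fi
```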