Install and configure OpenClaw on your system
🍎 Apple Silicon note: M1/M2/M3/M4 Macs run local AI efficiently thanks to unified memory. A base 8GB Mac mini can handle small models (roughly 1–3B parameters) smoothly; larger models need more memory, so check your RAM before choosing one.
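Because the model shares unified memory with the operating system, it helps to check how much RAM you actually have before picking a model size. A portable sketch (the `mem_gb` helper name is ours, not part of OpenClaw):

```bash
# Report total system memory in GB:
# Linux exposes it in /proc/meminfo, macOS via sysctl hw.memsize.
mem_gb() {
  if [ -r /proc/meminfo ]; then
    kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
    echo $((kb / 1024 / 1024))
  else
    bytes=$(sysctl -n hw.memsize)
    echo $((bytes / 1024 / 1024 / 1024))
  fi
}
mem_gb
```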
Download the latest macOS release from the official website:
Open the downloaded .dmg file and drag OpenClaw to your Applications folder. On first launch, OpenClaw will request several permissions; go to System Settings (System Preferences on older macOS) → Privacy & Security and enable them for OpenClaw.
OpenClaw uses Ollama for local AI models:
```bash
ollama pull llama3.2
```

OpenClaw will detect Ollama automatically. You can also use cloud models (ChatGPT, Claude) without Ollama.
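"Detect automatically" most likely means probing Ollama's local HTTP API, which listens on port 11434 by default; its `/api/tags` endpoint lists installed models. You can run the same check yourself (the `ollama_reachable` helper name is ours):

```bash
# Check whether Ollama's API is reachable on its default port (11434);
# /api/tags lists the locally installed models.
ollama_reachable() {
  curl -fsS http://localhost:11434/api/tags >/dev/null 2>&1
}
if ollama_reachable; then
  echo "Ollama detected"
else
  echo "Ollama not reachable on port 11434"
fi
```

If this reports Ollama as unreachable, start it with `ollama serve` and try again.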
Download the Windows installer:
Run the downloaded .exe file. Windows may show a "Windows protected your PC" warning; click "More info", then "Run anyway" if you downloaded from the official source.
OpenClaw may request additional permissions; approve these in the Windows security dialogs that appear.
To use local models on Windows, install Ollama and pull a model:

```bash
ollama pull llama3.2
```

Download the Linux package (AppImage or .deb/.rpm):
For AppImage:

```bash
chmod +x openclaw-*.AppImage
./openclaw-*.AppImage
```
For Debian/Ubuntu (.deb):

```bash
sudo dpkg -i openclaw-*.deb
sudo apt-get install -f  # Fix dependencies
```
For Fedora/RHEL (.rpm):

```bash
sudo rpm -i openclaw-*.rpm
```
Install Ollama for local AI models and pull a model:

```bash
curl -fsSL https://ollama.ai/install.sh | sh
ollama pull llama3.2
```
Launch from your application menu or run:

```bash
openclaw
```
After installation, OpenClaw will guide you through setup. Here's what to configure:
Pick a default model — llama3.2 is a good balance of speed and quality. OpenClaw can also remember information across conversations; enable this in Settings.
Connect to messaging platforms in Settings → Integrations.
Make sure Ollama is installed and running:
```bash
# Check if Ollama is running
ollama list

# If not, start it
ollama serve
```
On macOS, make sure OpenClaw still has the permissions you granted under Privacy & Security. If responses are slow or you run out of memory, try a smaller model (e.g. llama3.2:1b instead of llama3.2:3b).
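As a rough rule of thumb, you can map available RAM to a model tag — llama3.2:1b and llama3.2:3b are real Ollama tags, but the cutoff below is our assumption, not official OpenClaw guidance:

```bash
# Rough sketch: suggest an Ollama model tag from available RAM in GB.
# The 16 GB cutoff is an assumption, not official guidance.
suggest_model() {
  if [ "$1" -ge 16 ]; then
    echo "llama3.2:3b"
  else
    echo "llama3.2:1b"
  fi
}
suggest_model 8    # prints llama3.2:1b on an 8 GB machine
```

Pull the suggested tag with `ollama pull` and select it in OpenClaw's model settings.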