
How to Install OpenClaw in 2026

If you want the shortest path from zero to a working install, this is it. The official docs support macOS, Linux, and Windows, recommend Node 24, still support Node 22.16+, and strongly recommend WSL2 on Windows.

Need help getting OpenClaw running?

Book a live troubleshooting or training session at calendly.com/cloudyeti/meet.

Installation usually takes 2 to 5 minutes, and onboarding another 5 to 15, depending on whether you use the installer script or the npm path.

Prerequisites

Before you start, make sure you have a supported Node version. The docs recommend Node 24, and Node 22.16+ is still supported.

Want a dedicated always-on host? The Apple Mac mini M4 (16GB) is the most popular choice for running OpenClaw 24/7. See our Mac mini setup guide.

  • macOS: supported.
  • Linux: supported.
  • Windows: supported, with WSL2 strongly recommended.
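
Before installing, it can save a round trip to confirm your Node version meets the floor above. Here is a minimal sketch; check_node_version is a hypothetical helper, and it only compares the major and minor components of the version string:

```shell
# check_node_version VERSION — succeed if VERSION is >= 22.16 (a sketch).
# Only major.minor are compared; 24.x passes, 22.16+ passes, older fails.
check_node_version() {
  ver="${1#v}"              # strip the leading "v" from e.g. v24.2.0
  major="${ver%%.*}"
  rest="${ver#*.}"
  minor="${rest%%.*}"
  if [ "$major" -gt 22 ]; then return 0; fi
  if [ "$major" -eq 22 ] && [ "$minor" -ge 16 ]; then return 0; fi
  return 1
}

# Fall back to v0.0.0 if node is not installed, so the check simply fails.
check_node_version "$(node --version 2>/dev/null || echo v0.0.0)" \
  && echo "Node version looks fine" \
  || echo "Upgrade Node: 24 recommended, 22.16+ supported"
```

Running it on a machine with Node 24 prints the "looks fine" line; on a machine with no Node at all, it tells you to upgrade.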

If you are on macOS or Linux and install with npm, the docs call out this PATH fix:

export PATH="$(npm prefix -g)/bin:$PATH"
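
If you are unsure whether that directory is already on your PATH, a small sketch like this checks first and only prepends when it is missing (path_contains is a hypothetical helper, not part of OpenClaw):

```shell
# path_contains DIR PATHSTR — succeed if DIR appears as an entry in PATHSTR.
path_contains() {
  case ":$2:" in
    *":$1:"*) return 0 ;;
    *) return 1 ;;
  esac
}

# npm prefix -g prints the global install prefix; binaries live under bin/.
npm_bin="$(npm prefix -g 2>/dev/null)/bin"
if path_contains "$npm_bin" "$PATH"; then
  echo "npm global bin already on PATH: $npm_bin"
else
  export PATH="$npm_bin:$PATH"
fi
```

Note that export only affects the current shell session; to make the fix permanent, add the export line from the docs to your shell profile (for example ~/.zshrc or ~/.bashrc).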

Fastest Install Path

The fastest official route is the installer script:

curl -fsSL https://openclaw.ai/install.sh | bash

The FAQ also shows this as the recommended pair when you want the full guided setup:

curl -fsSL https://openclaw.ai/install.sh | bash
openclaw onboard --install-daemon

If you want to skip onboarding for now, use:

curl -fsSL https://openclaw.ai/install.sh | bash -s -- --no-onboard

After the install finishes, the docs recommend continuing with onboarding so OpenClaw can finish setting up your local environment.

npm Install Path

If you prefer npm, the official path is:

npm install -g openclaw@latest
openclaw onboard --install-daemon

That path is useful if you want a more explicit CLI workflow or you are already managing global Node tools this way.

What Onboarding Does

The CLI onboarding step is:

openclaw onboard

Use it when you want OpenClaw to walk through the first-run setup rather than stopping after the binary is installed. If you skipped onboarding during install, run it afterward.

Quick checklist

  • Confirm your Node version is supported.
  • Install OpenClaw with the script or npm.
  • Run openclaw onboard if onboarding did not run automatically.
  • Use the dashboard to verify the install.

First Successful Chat

Once onboarding is complete, the fastest way to test the install is:

openclaw dashboard

If the dashboard opens, the install is live and ready for a first conversation.

Common Install Gotchas

  • If a global npm install does not resolve correctly, check the PATH fix above.
  • If you are on Windows, use WSL2 unless you have a specific reason not to.
  • If install or onboarding fails after the binary is present, start with the troubleshooting guide below.
  • If you skipped onboarding with --no-onboard, run openclaw onboard afterward.
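
For the first gotcha, a quick diagnostic is to ask your shell where openclaw resolves. A minimal sketch (where_is_openclaw is a hypothetical helper name):

```shell
# where_is_openclaw — report how the openclaw binary resolves, or what to fix.
where_is_openclaw() {
  if command -v openclaw >/dev/null 2>&1; then
    echo "openclaw resolves to: $(command -v openclaw)"
  else
    # Printed literally so you can paste it into your shell profile.
    echo "openclaw not on PATH; add \$(npm prefix -g)/bin to PATH"
  fi
}

where_is_openclaw
```

If the binary is found but a different, stale copy is reported, remove the old install before troubleshooting further.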

For the most common setup errors after install, see the OpenClaw troubleshooting guide. It covers token issues, Docker config problems, blank responses, Windows PATH issues, macOS gateway persistence, and port conflicts.

If you are installing on a dedicated Apple machine, see OpenClaw on Mac Mini. If you want a Qwen-backed setup, see OpenClaw + Qwen.

Still stuck?

We do remote setup, troubleshooting, and training for $100/hour. Book a call at calendly.com/cloudyeti/meet.


