
OpenClaw + Qwen: Verified Setup Paths for OAuth and Ollama

If you searched for "OpenClaw Qwen," there are two official paths: use the built-in Qwen OAuth provider, or run a local Qwen model through Ollama. Here is the shortest version of each that stays inside the official docs.

The Quick Answer

OpenClaw officially supports Qwen OAuth through the qwen-portal provider and also works with local Qwen models through Ollama.

Watch: Qwen 3.6 + Ollama End-to-End Walkthrough

Running a local open-weights model (Qwen 3.6) through an agent runtime looks the same whether you’re pairing it with OpenClaw, Qwen Code, or any other Claude Code alternative. This 16-minute video walks through the full setup — installing Ollama, pulling the model, wiring the agent, and three real coding demos against Claude Code.

Apply the same Ollama wiring below for OpenClaw — the agent runtime is the only piece that differs.

Option 1: Use the Official Qwen OAuth Provider

# 1. Enable the Qwen auth plugin
openclaw plugins enable qwen-portal-auth

# 2. Restart the gateway
openclaw gateway restart

# 3. Start Qwen login and set it as default
openclaw models auth login --provider qwen-portal --set-default

# 4. Switch to the coder model explicitly
openclaw models set qwen-portal/coder-model

The flow uses Qwen’s device-code OAuth with a free-tier limit of 2,000 requests per day.
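For the curious, the device-code flow that `openclaw models auth login` drives can be sketched generically (it follows the standard OAuth 2.0 device-grant pattern, RFC 8628). The endpoint URLs and client ID below are illustrative placeholders, not Qwen's real values; the CLI performs all of this for you.

```shell
# Generic OAuth 2.0 device-code flow (RFC 8628) -- endpoint URLs and the
# client_id are placeholders; `openclaw models auth login` does these steps.
AUTH_HOST="https://auth.example.com"   # placeholder, not Qwen's real host

# 1. The CLI asks the auth server for a device code and a short user code.
cat > device-request.json <<'EOF'
{"client_id": "openclaw-cli", "scope": "model.invoke"}
EOF
# curl -s -X POST "$AUTH_HOST/device/code" -d @device-request.json

# 2. You open the returned verification URL in a browser and type the user
#    code; meanwhile the CLI polls the token endpoint until you approve:
# while :; do
#   curl -s -X POST "$AUTH_HOST/token" \
#     -d grant_type=urn:ietf:params:oauth:grant-type:device_code \
#     -d device_code="$DEVICE_CODE" && break
#   sleep 5
# done

echo "wrote device-request.json (illustration only)"
```

The practical upshot: no API key ever touches your shell history; the token is granted in the browser and cached by the CLI.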

Option 2: Run a Local Qwen Model Through Ollama

# 1. Pull a Qwen model into Ollama
ollama pull qwen2.5-coder:32b

# 2. Let OpenClaw talk to Ollama
export OLLAMA_API_KEY="ollama-local"

# 3. Verify the model is visible
ollama list
openclaw models list

# 4. Set it as the default if needed
openclaw models set ollama/qwen2.5-coder:32b
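Before wiring OpenClaw to Ollama, it is worth confirming the daemon answers on its native API. This sketch queries Ollama's real `/api/tags` endpoint (assuming the default address) and degrades gracefully if the daemon is not running:

```shell
# Check that the Ollama daemon is up and the model has been pulled.
# Assumes Ollama's default address; safe to run when Ollama is down.
OLLAMA_URL="${OLLAMA_URL:-http://localhost:11434}"

if ! command -v curl >/dev/null 2>&1; then
  msg="curl is not installed"
elif curl -sf "$OLLAMA_URL/api/tags" -o tags.json 2>/dev/null; then
  # /api/tags lists every locally pulled model
  if grep -q 'qwen2.5-coder' tags.json; then
    msg="qwen2.5-coder is available"
  else
    msg="daemon is up, but qwen2.5-coder has not been pulled yet"
  fi
else
  msg="Ollama is not reachable at $OLLAMA_URL"
fi
echo "$msg" | tee ollama-check.log
```

If the model does not appear in `tags.json`, rerun `ollama pull qwen2.5-coder:32b` before touching the OpenClaw side.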

The Most Important Ollama Caveat

Do not point OpenClaw at Ollama's OpenAI-compatible /v1 endpoint. The official Ollama page warns that the native API is the reliable path for tool calling.

# Good: native Ollama API
http://ollama-host:11434

# Risky for tool calling
http://ollama-host:11434/v1
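The distinction matters because Ollama's native `/api/chat` endpoint accepts a `tools` array directly in the request body. A minimal request against the model pulled above (the `run_git` tool definition is a made-up example):

```shell
# Minimal native-API chat request with one tool definition.
# POST this to http://ollama-host:11434/api/chat -- not the /v1 path.
cat > chat-request.json <<'EOF'
{
  "model": "qwen2.5-coder:32b",
  "messages": [
    {"role": "user", "content": "What files changed in the last commit?"}
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "run_git",
        "description": "Run a read-only git command",
        "parameters": {
          "type": "object",
          "properties": {
            "args": {"type": "string", "description": "git arguments"}
          },
          "required": ["args"]
        }
      }
    }
  ],
  "stream": false
}
EOF
# curl -s http://ollama-host:11434/api/chat -d @chat-request.json
echo "wrote chat-request.json"
```

When the model decides to call the tool, the native API returns it as a structured `tool_calls` field on the message rather than text you have to parse, which is exactly what an agent runtime like OpenClaw needs.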

How to Verify Your Qwen Setup

openclaw models list
openclaw status
openclaw doctor
openclaw dashboard
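If you repeat this check often, the non-interactive commands can be wrapped in a small script that runs them in order (assuming `openclaw` is on your PATH; the script just prints a hint if it is not):

```shell
# Run the OpenClaw verification commands in sequence.
# Assumes the `openclaw` CLI is installed; otherwise prints a hint.
if command -v openclaw >/dev/null 2>&1; then
  for cmd in "models list" "status" "doctor"; do
    echo "== openclaw $cmd ==" | tee -a preflight.log
    openclaw $cmd
  done
else
  echo "openclaw not found on PATH -- install it first" | tee preflight.log
fi
```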

If you are still setting up OpenClaw itself, start with How to Install OpenClaw. If Qwen is configured but replies are blank or tools fail, jump to the troubleshooting guide.

