OpenClaw + Qwen: Verified Setup Paths for OAuth and Ollama
If you searched for OpenClaw Qwen, there are two official paths: use the built-in Qwen OAuth provider, or run a local Qwen model through Ollama. Here is the shortest version that stays inside the official docs.
Need help getting Qwen working?
Book a live OpenClaw troubleshooting or training session for $100 per hour at saurav.io/meet, or email openclaw@saurav.io.
The Quick Answer
OpenClaw officially supports Qwen OAuth through the qwen-portal provider and also works with local Qwen models through Ollama. Which path you choose depends on whether you want Qwen's hosted OAuth flow or a local model you control yourself.
Option 1: Use the Official Qwen OAuth Provider
The OpenClaw Qwen provider page documents a direct OAuth flow. It exposes two model IDs: qwen-portal/coder-model and qwen-portal/vision-model.
```shell
# 1. Enable the Qwen auth plugin
openclaw plugins enable qwen-portal-auth

# 2. Restart the gateway
openclaw gateway restart

# 3. Start Qwen login and set it as default
openclaw models auth login --provider qwen-portal --set-default

# 4. Switch to the coder model explicitly
openclaw models set qwen-portal/coder-model
```
According to the official provider doc, this flow uses Qwen's device-code OAuth and writes the provider entry into your models.json. The page also notes a free-tier flow for Qwen Coder and Qwen Vision with a published limit of 2,000 requests per day, subject to Qwen's own rate limits.
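The provider doc referenced above does not reproduce the resulting models.json entry verbatim, so the sketch below is only a guess at its shape: the qwen-portal provider key and the two model IDs come from the doc, but every other field name is an assumption, not the documented schema.

```jsonc
{
  "models": {
    "providers": {
      // Provider key from the official doc; the fields inside are assumptions.
      "qwen-portal": {
        "auth": "oauth",                // device-code OAuth flow
        "models": [
          "qwen-portal/coder-model",
          "qwen-portal/vision-model"
        ],
        "default": "qwen-portal/coder-model"
      }
    }
  }
}
```

If the login command above succeeds, you should not need to hand-edit this file; the sketch is only to show where the provider entry lives.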
Source: OpenClaw Qwen Provider
Option 2: Run a Local Qwen Model Through Ollama
If you want local inference, OpenClaw also supports Ollama. The official Ollama docs show qwen3.5:27b as a custom model ID during onboarding, and use ollama/qwen2.5-coder:32b in model selection examples.
```shell
# 1. Pull a Qwen model into Ollama
ollama pull qwen2.5-coder:32b

# 2. Let OpenClaw talk to Ollama
export OLLAMA_API_KEY="ollama-local"

# 3. Verify the model is visible
ollama list
openclaw models list

# 4. Set it as the default if needed
openclaw models set ollama/qwen2.5-coder:32b
```
If you want the model selected by default, configure OpenClaw to use the Ollama provider with a primary model like ollama/qwen2.5-coder:32b. The docs also show Qwen as a valid fallback in the Ollama model configuration.
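Spelled out as config, that primary-plus-fallback setup might look like the sketch below. The models.providers.ollama path matches what the troubleshooting section later references, but the individual key names and the fallback model ID are assumptions for illustration only.

```jsonc
{
  "models": {
    "providers": {
      "ollama": {
        "base_url": "http://localhost:11434",      // native Ollama API, no /v1
        "primary": "ollama/qwen2.5-coder:32b",     // default model
        "fallbacks": ["ollama/qwen2.5-coder:7b"]   // hypothetical fallback ID
      }
    }
  }
}
```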
Source: OpenClaw Ollama Provider
The Most Important Ollama Caveat
If your Ollama server is remote, do not point OpenClaw at the OpenAI-compatible /v1 endpoint unless you have a specific reason to. The official Ollama page warns that native tool calling is the reliable path, and that the OpenAI-compatible mode can break tool use or cause the model to emit raw tool-call JSON as plain text.
```
# Good: native Ollama API
http://ollama-host:11434

# Risky for tool calling
http://ollama-host:11434/v1
```
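If you script your gateway config, a tiny guard can keep the /v1 suffix from sneaking in. This is a hypothetical helper (normalize_ollama_url is not an OpenClaw or Ollama command), shown only to make the good/risky distinction concrete:

```shell
# Hypothetical helper: strip a trailing OpenAI-compatible /v1 suffix
# so the configured base URL always points at the native Ollama API.
normalize_ollama_url() {
  echo "${1%/v1}"
}

normalize_ollama_url "http://ollama-host:11434/v1"   # -> http://ollama-host:11434
normalize_ollama_url "http://ollama-host:11434"      # -> http://ollama-host:11434
```

The `${1%/v1}` parameter expansion removes the suffix only when it is present, so already-correct URLs pass through unchanged.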
If You Already Use the Qwen Code CLI
The official Qwen provider page says OpenClaw can sync credentials from ~/.qwen/oauth_creds.json if you already logged in with the Qwen Code CLI. You still need a models.providers.qwen-portal entry, and the easiest way to create that is to run the same login command shown above.
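A quick sanity check before relying on that sync: confirm the Qwen Code CLI credentials file actually exists. This is plain shell, nothing OpenClaw-specific:

```shell
# Look for the Qwen Code CLI OAuth credentials that OpenClaw can sync.
CREDS="$HOME/.qwen/oauth_creds.json"
if [ -f "$CREDS" ]; then
  echo "found: $CREDS"
else
  echo "missing: $CREDS (run the qwen-portal OAuth login instead)"
fi
```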
How to Verify Your Qwen Setup
Do not guess. Check the model list and the gateway health directly.
```shell
openclaw models list
openclaw status
openclaw doctor
openclaw dashboard
```
If the model does not appear in openclaw models list, the problem is usually one of these:
- The provider login did not complete.
- Ollama is not running.
- You defined an explicit models.providers.ollama block and disabled auto-discovery.
- Your local model is not reporting tool support, so OpenClaw is not auto-listing it.
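To illustrate the explicit-block case: a provider entry that lists models by hand while auto-discovery is off might look like this sketch. The discover key is purely an assumption about the config schema, not a documented flag, and the models.providers.ollama path comes from the bullet above.

```jsonc
{
  "models": {
    "providers": {
      "ollama": {
        "discover": false,                        // assumed flag: turns off auto-discovery
        "models": ["ollama/qwen2.5-coder:32b"]    // only explicitly listed models appear
      }
    }
  }
}
```

If your setup looks like this, either re-enable discovery or add the missing Qwen model ID to the explicit list.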
Which Path Should You Choose?
- Use Qwen OAuth if you want the simplest official hosted Qwen path inside OpenClaw.
- Use Ollama + Qwen if you want a local model and control over the machine that runs it.
If you are still setting up OpenClaw itself, start with How to Install OpenClaw. If Qwen is configured but replies are blank or tools fail, jump to the troubleshooting guide.
Need a Real Person?
We can help you choose between Qwen OAuth, Ollama, and hosted models, then get the setup working live on your machine.
Book OpenClaw setup or training
Use saurav.io/meet or email openclaw@saurav.io. Troubleshooting and training sessions are $100/hour.