How to Connect AWS Bedrock Agents to OpenClaw via MCP
OpenClaw is becoming an MCP server. That means Bedrock Agents, Lambda functions, and any MCP client can call OpenClaw as a tool. This guide covers both directions: using Bedrock models inside OpenClaw, and calling OpenClaw from Bedrock Agents.
There are two ways to connect OpenClaw and AWS Bedrock. Most guides cover only the first. This covers both, including the architecture that actually matters for enterprise teams.
Two Integration Directions
Direction 1: Bedrock models powering OpenClaw. You configure OpenClaw to use Bedrock as its model provider. OpenClaw sends prompts to Bedrock, Bedrock returns completions. This is a config change. It takes five minutes.
Direction 2: Bedrock Agents calling OpenClaw as a tool. You expose OpenClaw as an MCP server. A Bedrock Agent can then invoke OpenClaw’s skills, search its memory, or trigger automations as part of a larger agent workflow. This is architecture. It takes planning.
Direction 1 is table stakes. Direction 2 is where the value is.
Direction 1: Enable Bedrock as a Model Provider
Prerequisites
- An AWS account with Bedrock model access enabled (you must request access to specific models in the Bedrock console)
- OpenClaw running on AWS LightSail, EC2, or any server with network access to AWS APIs
- IAM credentials: either an IAM role (preferred on EC2/LightSail) or access key pair
Step 1: Enable the Bedrock connection
Open the OpenClaw admin panel. Navigate to Connections (or Settings > Connections depending on your version). Find Amazon Bedrock in the provider list and toggle it on.
Step 2: Configure credentials
You have two options.
Option A: IAM role (recommended for LightSail/EC2). If your instance has an IAM role attached with bedrock:InvokeModel and bedrock:ListFoundationModels permissions, OpenClaw will detect the credentials automatically via the EC2 instance metadata service (IMDS). No keys to paste.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream",
        "bedrock:ListFoundationModels"
      ],
      "Resource": "*"
    }
  ]
}
Attach this policy to your instance’s IAM role.
Option B: Access keys. Paste your AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY into the Bedrock connection settings in the admin panel. Set your region (e.g., us-east-1).
Step 3: Select a model
Once credentials are configured, OpenClaw calls the ListFoundationModels API to discover available models. You will see them in the model dropdown. Pick one and set it as the default or assign it to a specific agent.
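If you want to preview what OpenClaw will discover, the same data is available from the ListFoundationModels API. The helper below is an illustrative sketch, not OpenClaw's actual code; it filters a response-shaped payload down to streaming-capable text models (the sample mimics the real response fields modelId, outputModalities, and responseStreamingSupported):

```python
# Illustrative: filter a ListFoundationModels-style response down to
# text models that support streaming, roughly what a model dropdown
# would show. Not OpenClaw's actual code.
def usable_models(response):
    models = []
    for m in response.get("modelSummaries", []):
        if "TEXT" not in m.get("outputModalities", []):
            continue  # skip image/embedding-only models
        if not m.get("responseStreamingSupported", False):
            continue  # OpenClaw streams completions
        models.append(m["modelId"])
    return models

# Sample data shaped like the Bedrock ListFoundationModels response
sample = {
    "modelSummaries": [
        {"modelId": "anthropic.claude-3-5-sonnet-20240620-v1:0",
         "outputModalities": ["TEXT"],
         "responseStreamingSupported": True},
        {"modelId": "stability.stable-diffusion-xl-v1",
         "outputModalities": ["IMAGE"],
         "responseStreamingSupported": False},
    ]
}
print(usable_models(sample))  # ['anthropic.claude-3-5-sonnet-20240620-v1:0']
```

Running the real call requires boto3 (`boto3.client("bedrock").list_foundation_models()`) plus the IAM permissions from the policy above.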
Common gotchas
IMDS credential detection. On LightSail, the instance metadata service works differently than on EC2. If OpenClaw cannot detect IAM credentials automatically, fall back to Option B (access keys). LightSail instances do support IAM roles, but the metadata endpoint can be flaky depending on the container runtime.
Cross-region models. Some Bedrock models are only available in specific regions. Claude 3.5 Sonnet is in us-east-1 and us-west-2. If you configured eu-west-1 but the model you want is not there, change the region in the Bedrock connection settings.
Model access requests. Bedrock does not enable all models by default. You must go to the Bedrock console, navigate to Model access, and request access to each model family (Anthropic, Meta, Mistral, etc.) before they appear in OpenClaw.
Which Bedrock models work best with OpenClaw
| Model | Best for | Cost (per 1K input tokens) | Notes |
|---|---|---|---|
| Claude 3.5 Sonnet | General agent tasks, skill execution | $0.003 | Best balance of quality and speed |
| Claude 3 Haiku | High-volume, low-latency tasks | $0.00025 | Good for triage and routing agents |
| Llama 3.1 70B | Open-weight alternative | $0.00265 | No Anthropic dependency |
| Mistral Large | European compliance requirements | $0.004 | Mistral AI, EU-based company |
For most OpenClaw setups, Claude 3.5 Sonnet via Bedrock is the default recommendation. You get enterprise-grade inference with IAM-based access control, usage logging via CloudTrail, and no direct API key exposure.
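To turn the table's per-1K prices into a monthly budget, the arithmetic is straightforward (input tokens only; output tokens are billed separately at higher rates, so treat this as a lower bound):

```python
# Rough input-token cost from the per-1K prices in the table above.
# Output tokens are priced separately (and higher).
def input_cost(tokens, price_per_1k):
    return tokens / 1000 * price_per_1k

# e.g. 2M input tokens/month on Claude 3.5 Sonnet via Bedrock:
print(round(input_cost(2_000_000, 0.003), 2))  # 6.0
```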
Direction 2: OpenClaw as MCP Server for Bedrock Agents
This is the integration that changes what OpenClaw can do inside an enterprise stack.
What is happening
OpenClaw is being exposed as an MCP (Model Context Protocol) server. MCP is the open protocol that lets AI models call external tools in a standardized way. When OpenClaw runs as an MCP server, any MCP-compatible client — including Bedrock Agents — can call OpenClaw functions: search its memory, execute skills, trigger automations, or start conversations.
This turns OpenClaw from a standalone assistant into a callable service inside a larger AI architecture.
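Concretely, a tool call arriving at an MCP endpoint is just a small JSON document naming a tool and its arguments. The shape below is a hypothetical sketch; the exact schema depends on OpenClaw's MCP implementation:

```python
import json

# Hypothetical tool-call payload; field names are illustrative and
# depend on OpenClaw's actual MCP schema.
call = {
    "tool": "memory_query",
    "arguments": {"query": "Q3 incident postmortems", "limit": 5},
}
payload = json.dumps(call)
print(payload)
```

The server executes the named tool and returns its result, which the MCP client (here, a Bedrock Agent) folds back into its reasoning loop.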
Architecture
+---------------------+       +-------------------+       +------------------+
|    Bedrock Agent    |       |   Lambda Bridge   |       |     OpenClaw     |
|   (Orchestrator)    +------>+  (Action Group)   +------>+    MCP Server    |
|                     |       |                   |       |      :18789      |
| - Reasoning         |       | - Auth            |       |                  |
| - Tool selection    |       | - Request map     |       | - Skills         |
| - Response merge    |       | - Error handle    |       | - Memory         |
+---------------------+       +-------------------+       | - Search         |
                                                          | - Chat           |
                                                          +------------------+
                                                                   |
                                                         +---------+--------+
                                                         |    LightSail     |
                                                         |     Instance     |
                                                         |    (Your VPC)    |
                                                         +------------------+
The Bedrock Agent decides when to call OpenClaw. The Lambda function bridges the Bedrock action group format to OpenClaw’s MCP endpoint. OpenClaw executes the request and returns results.
Step 1: Enable the MCP server in OpenClaw
OpenClaw’s MCP server listens on port 18789 by default. Enable it:
openclaw config set mcp.server.enabled true
openclaw config set mcp.server.port 18789
openclaw config set mcp.server.auth.token "your-secure-token-here"
openclaw restart
Verify it is running:
curl -H "Authorization: Bearer your-secure-token-here" \
http://localhost:18789/mcp/v1/tools
You should get a JSON response listing available MCP tools (search, chat, skill_execute, memory_query, etc.).
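The exact response shape depends on your OpenClaw version, but it is plain JSON you can inspect programmatically. A hypothetical example, parsed to pull out the tool names:

```python
import json

# Hypothetical tools listing; the exact fields depend on your
# OpenClaw version.
raw = """{
  "tools": [
    {"name": "search", "description": "Search memory and knowledge base"},
    {"name": "chat", "description": "Send a message and get a response"},
    {"name": "skill_execute", "description": "Execute a skill by name"},
    {"name": "memory_query", "description": "Query structured memory"}
  ]
}"""
names = [t["name"] for t in json.loads(raw)["tools"]]
print(names)  # ['search', 'chat', 'skill_execute', 'memory_query']
```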
Step 2: Create the Lambda bridge function
The Lambda function translates Bedrock Agent action group requests into MCP calls to OpenClaw.
import json
import urllib3

OPENCLAW_ENDPOINT = "http://<your-lightsail-ip>:18789/mcp/v1"
OPENCLAW_TOKEN = "<your-mcp-token>"

http = urllib3.PoolManager()

def lambda_handler(event, context):
    action = event.get("actionGroup", "")
    function = event.get("function", "")
    parameters = {p["name"]: p["value"] for p in event.get("parameters", [])}

    # Map the Bedrock action group call to an MCP tool invocation
    mcp_request = {
        "tool": function,
        "arguments": parameters
    }

    response = http.request(
        "POST",
        f"{OPENCLAW_ENDPOINT}/invoke",
        headers={
            "Authorization": f"Bearer {OPENCLAW_TOKEN}",
            "Content-Type": "application/json"
        },
        body=json.dumps(mcp_request),
        timeout=urllib3.Timeout(connect=5.0, read=30.0)
    )
    result = json.loads(response.data.decode("utf-8"))

    # Return in the Bedrock Agents Lambda response format
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": action,
            "function": function,
            "functionResponse": {
                "responseBody": {
                    "TEXT": {"body": json.dumps(result)}
                }
            }
        }
    }
Deploy this as a Lambda function in the same region as your Bedrock Agent.
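Before wiring up the action group, you can sanity-check the event mapping locally with a fabricated event (pure logic, no call to OpenClaw). The field names follow the Bedrock Agents Lambda event format:

```python
# Fake Bedrock Agent event, shaped like what the action group sends.
event = {
    "actionGroup": "openclaw-tools",
    "function": "openclaw_search",
    "parameters": [{"name": "query", "value": "deploy runbook"}],
}

# Same mapping logic as the bridge function above.
parameters = {p["name"]: p["value"] for p in event.get("parameters", [])}
mcp_request = {"tool": event["function"], "arguments": parameters}
print(mcp_request)  # {'tool': 'openclaw_search', 'arguments': {'query': 'deploy runbook'}}
```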
Step 3: Create the Bedrock Agent action group
In the Bedrock console:
- Create a new Agent (or open an existing one)
- Navigate to Action groups and click Add
- Select Lambda function as the action group type
- Point it to your Lambda bridge function
- Define the API schema:
{
  "openapi": "3.0.0",
  "info": {
    "title": "OpenClaw MCP Tools",
    "version": "1.0.0"
  },
  "paths": {
    "/search": {
      "post": {
        "operationId": "openclaw_search",
        "summary": "Search OpenClaw memory and knowledge base",
        "parameters": [
          {
            "name": "query",
            "in": "query",
            "required": true,
            "schema": {"type": "string"},
            "description": "The search query"
          }
        ]
      }
    },
    "/skill_execute": {
      "post": {
        "operationId": "openclaw_skill_execute",
        "summary": "Execute an OpenClaw skill by name",
        "parameters": [
          {
            "name": "skill_name",
            "in": "query",
            "required": true,
            "schema": {"type": "string"},
            "description": "Name of the skill to execute"
          },
          {
            "name": "input",
            "in": "query",
            "required": false,
            "schema": {"type": "string"},
            "description": "Input data for the skill"
          }
        ]
      }
    },
    "/chat": {
      "post": {
        "operationId": "openclaw_chat",
        "summary": "Send a message to OpenClaw and get a response",
        "parameters": [
          {
            "name": "message",
            "in": "query",
            "required": true,
            "schema": {"type": "string"},
            "description": "The message to send"
          }
        ]
      }
    }
  }
}
Step 4: Alternative — API schema approach (no Lambda)
If your OpenClaw instance has a public HTTPS endpoint (behind a reverse proxy with TLS), you can skip the Lambda bridge entirely. Create the action group with API schema type and point it directly at your OpenClaw MCP endpoint.
This is simpler but requires:
- A valid TLS certificate on your OpenClaw instance
- A public endpoint (or VPC connectivity from Bedrock)
- Authentication via the API schema’s security definitions
For most setups, the Lambda bridge is more practical because it keeps OpenClaw off the public internet.
Step 5: Secure the connection
Do not expose port 18789 to the internet. Use one of these approaches:
VPC peering (recommended). LightSail instances can be peered with your default VPC. Create a VPC peering connection in the LightSail console, then configure the Lambda function to run inside the same VPC. Traffic stays private.
# LightSail console > Networking > VPC peering > Enable
# Lambda console > Configuration > VPC > Select the peered VPC
# Security group: allow TCP 18789 from Lambda's subnet CIDR only
AWS PrivateLink. For stricter isolation, create a VPC endpoint service that fronts your OpenClaw instance. This gives you a private DNS name that resolves inside the VPC only. More setup, but zero public exposure.
Security group rules. At minimum, restrict port 18789 to your VPC CIDR range:
# On the LightSail instance firewall:
# Allow TCP 18789 from 172.26.0.0/16 (your VPC CIDR)
# Deny TCP 18789 from 0.0.0.0/0
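The ipaddress module in the standard library makes it easy to sanity-check a candidate source range against the VPC CIDR before committing it to the firewall (172.26.0.0/16 is the example CIDR from the rule above):

```python
import ipaddress

# Does a proposed source range for port 18789 fall inside the VPC?
vpc = ipaddress.ip_network("172.26.0.0/16")

def allowed(source_cidr):
    return ipaddress.ip_network(source_cidr).subnet_of(vpc)

print(allowed("172.26.12.0/24"))  # True  (a subnet inside the VPC)
print(allowed("0.0.0.0/0"))       # False (never expose 18789 publicly)
```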
Enterprise Architecture: Per-User Instances
For leadership teams, one shared OpenClaw instance is not enough. Each leader needs their own instance with isolated credentials, memory, and audit trail.
Why isolation matters
A VP of Engineering and a CFO should not share an OpenClaw instance. The VP’s instance has access to GitHub, Jira, and CI/CD pipelines. The CFO’s instance connects to financial systems, Stripe, and accounting APIs. Credential bleed between roles is an unacceptable risk.
The architecture
One LightSail instance per leader. Each instance runs its own OpenClaw deployment with:
- Separate IAM roles (each instance only has permissions for that leader’s tools)
- Isolated memory stores (conversations and knowledge do not cross)
- Individual Bedrock connections (usage tracked per user via CloudTrail)
- Separate MCP endpoints (each callable independently from a central Bedrock Agent)
+-------------------+     +-------------------+     +-------------------+
|   LightSail #1    |     |   LightSail #2    |     |   LightSail #3    |
|  VP Engineering   |     |        CFO        |     |        CTO        |
|  IAM: eng-role    |     |  IAM: fin-role    |     |  IAM: cto-role    |
|  MCP: :18789      |     |  MCP: :18789      |     |  MCP: :18789      |
|  Skills: GitHub,  |     |  Skills: Stripe,  |     |  Skills: All      |
|    Jira, CI/CD    |     |    QuickBooks     |     |                   |
+-------------------+     +-------------------+     +-------------------+
          |                         |                         |
          +-------------------------+-------------------------+
                                    |
                          +---------+-------+
                          |  Bedrock Agent  |
                          | (Orchestrator)  |
                          | Routes to the   |
                          | right instance  |
                          +-----------------+
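Inside the orchestrator's bridge, the routing can be as simple as a lookup table mapping each leader to their instance's MCP endpoint. A minimal sketch with made-up names and private IPs:

```python
# Hypothetical routing table for the central orchestrator; names and
# private IPs are made up for illustration.
INSTANCES = {
    "vp_engineering": "http://172.26.1.10:18789/mcp/v1",
    "cfo": "http://172.26.1.11:18789/mcp/v1",
    "cto": "http://172.26.1.12:18789/mcp/v1",
}

def endpoint_for(user):
    try:
        return INSTANCES[user]
    except KeyError:
        raise ValueError(f"No OpenClaw instance provisioned for {user!r}")

print(endpoint_for("cfo"))  # http://172.26.1.11:18789/mcp/v1
```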
Cost estimate
| Component | Per instance/month | 5 leaders | 10 leaders |
|---|---|---|---|
| LightSail instance ($5 plan) | $5 | $25 | $50 |
| Bedrock API usage (est.) | $10-30 | $50-150 | $100-300 |
| Lambda invocations | <$1 | <$5 | <$10 |
| Data transfer | <$1 | <$5 | <$10 |
| Total | $16-37 | $80-185 | $160-370 |
That is $16-37 per leader per month for a fully isolated, enterprise-grade AI assistant with memory, skills, and Bedrock-powered reasoning.
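The table's totals are simple arithmetic you can adapt to your own headcount:

```python
# Reproduce the per-leader range from the table: instance + Bedrock
# usage + Lambda + data transfer, then scale by team size.
low = 5 + 10 + 0.5 + 0.5   # ~$16/leader/month
high = 5 + 30 + 1 + 1      # $37/leader/month

def team_range(n_leaders):
    return (round(low * n_leaders), round(high * n_leaders))

print(team_range(5))   # (80, 185)
print(team_range(10))  # (160, 370)
```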
Security considerations
- IP restriction. Each LightSail instance should only accept connections from the VPC (Bedrock/Lambda) and the leader’s IP range (for the admin panel). No open ports.
- Audit logging. Enable CloudTrail for Bedrock API calls. Each instance’s IAM role generates its own trail, so you get per-user audit logs out of the box.
- Credential rotation. Use IAM roles, not static keys. If you must use keys, rotate them every 90 days and store them in Secrets Manager, not in the OpenClaw config file.
- Backup. Snapshot each LightSail instance weekly. Memory and skill configs are the valuable artifacts. Rebuilding from scratch takes hours; restoring from a snapshot takes minutes.
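If you do end up with static keys, the 90-day rule is easy to automate. A minimal check (you would feed it the CreateDate returned by `aws iam list-access-keys`):

```python
from datetime import datetime, timezone, timedelta

# Flag static access keys older than the 90-day rotation window.
ROTATION_WINDOW = timedelta(days=90)

def rotation_due(created_at, now=None):
    now = now or datetime.now(timezone.utc)
    return now - created_at > ROTATION_WINDOW

created = datetime(2024, 1, 1, tzinfo=timezone.utc)
print(rotation_due(created, now=datetime(2024, 5, 1, tzinfo=timezone.utc)))  # True
```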
What This Architecture Is Actually Worth
This is not a $100 hobby setup. This is enterprise AI infrastructure.
What you get: a private, self-hosted AI assistant per leader with persistent memory, custom skills, access to any API, Bedrock-powered reasoning, and zero data leaving your AWS account.
What the SaaS alternatives charge:
| SaaS alternative | Per user/month | 10 users/month | You control the data? |
|---|---|---|---|
| Glean | $25-40 | $250-400 | No |
| Lindy.ai | $49-99 | $490-990 | No |
| Dust.tt | $29 | $290 | No |
| Custom GPT (ChatGPT Team) | $30 | $300 | No |
| OpenClaw + Bedrock | $16-37 | $160-370 | Yes |
The OpenClaw + Bedrock setup costs the same or less, and you own everything. The data stays in your AWS account. The skills are yours. The memory is yours. There is no vendor lock-in because OpenClaw is open source and Bedrock supports multiple model providers.
For a 10-person leadership team, you are looking at $160-370/month versus $490-990/month for Lindy or $250-400/month for Glean. And neither Glean nor Lindy gives you the ability to call custom skills, run automations, or integrate with your internal APIs the way OpenClaw does.
The real cost is the architecture work: designing the IAM roles, setting up VPC peering, configuring the MCP endpoints, building the Lambda bridge, and hardening the security. That is a one-time investment that compounds every month.
Next Steps
- Start with Direction 1. Get Bedrock models running inside OpenClaw. Verify credentials, pick a model, confirm inference works. This takes 10 minutes.
- Enable the MCP server. Turn it on, test it locally with curl, confirm your tools are listed.
- Build the Lambda bridge. Deploy the function, create the Bedrock Agent action group, test end to end.
- Lock it down. VPC peering, security groups, auth tokens. Do not skip this.
- Scale to per-user instances if your team needs isolation.
Need help architecting this?
We set up enterprise OpenClaw deployments: Bedrock integration, MCP server configuration, per-user instance architecture, and security hardening. Book a call ($100/hour).