
# WhatsApp Support Bot with OpenClaw and MoltFlow

## Your Customers Are Already on WhatsApp

Here's a question: why force your customers to switch apps just to get support? They're already on WhatsApp, checking messages, talking to friends, running their lives. Making them download your support app, or visit a clunky web portal, adds friction. And friction kills engagement.

What if your support agent lived where your customers already are? Not as a half-baked chatbot that can only answer "What are your hours?", but as a genuinely intelligent AI that understands context, pulls from your knowledge base, and knows when to loop in a human. That's what we're building today.

Using OpenClaw's agent framework for the brains and MoltFlow's WhatsApp API for the messaging layer, you'll have an autonomous support bot running on WhatsApp in under an hour. Let's dig in.

## Architecture: Keep It Simple, Keep It Decoupled

Here's the flow:

  1. Customer sends WhatsApp message → MoltFlow receives it
  2. MoltFlow fires webhook → hits your server with message payload
  3. Your server passes message to OpenClaw agent → agent processes with its skills (FAQ lookup, ticket creation, escalation detection)
  4. Agent generates response → your server sends it back via MoltFlow API
  5. Customer sees reply on WhatsApp → seamless conversation

```text
Customer → WhatsApp → MoltFlow → Your Server → OpenClaw Agent
                           ↑                          ↓
                           └──────── Response ────────┘
```

Why this architecture? Decoupling. Your AI logic (OpenClaw) doesn't know about WhatsApp. Your messaging layer (MoltFlow) doesn't know about AI. If you want to swap OpenClaw for LangChain tomorrow, or add Telegram support next week, you change one piece without touching the others.

This is how you build systems that don't turn into spaghetti six months from now.
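
If you want that seam to be explicit in code, a pair of tiny interfaces does it. The `Messenger` and `Brain` names below are illustrative, not part of either SDK:

```python
from typing import Protocol

class Messenger(Protocol):
    """Anything that can deliver text to a chat (MoltFlow today, Telegram tomorrow)."""
    async def send(self, chat_id: str, text: str) -> None: ...

class Brain(Protocol):
    """Anything that can turn a customer message into a reply (OpenClaw, LangChain, ...)."""
    async def reply(self, sender_id: str, text: str) -> str: ...
```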

## Step 1: Set Up Your MoltFlow Session

Before we can receive messages, we need a WhatsApp session. If you've already done this, skip ahead. If not, here's the quick version:

Create a session to link your WhatsApp Business account:

```bash
curl -X POST https://apiv2.waiflow.app/api/v2/sessions \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "support-bot",
    "status": "starting"
  }'
```

MoltFlow generates a QR code. Scan it with WhatsApp Business on your phone, and the session status changes to `working`. Done.
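
If you'd rather verify the link from code than by watching the dashboard, you can poll the session until it reports `working`. The GET route below is an assumption inferred from the API's URL structure; confirm the exact endpoint in the MoltFlow docs:

```python
import time

import httpx

API_BASE = "https://apiv2.waiflow.app/api/v2"
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}

def wait_until_working(session_name: str, timeout: float = 120.0) -> None:
    """Poll the (assumed) session endpoint until the QR scan completes."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        resp = httpx.get(f"{API_BASE}/sessions/{session_name}", headers=HEADERS)
        if resp.json().get("status") == "working":
            return
        time.sleep(5)  # scanning the QR code takes a moment
    raise TimeoutError(f"session '{session_name}' never reached 'working'")

wait_until_working("support-bot")
```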

Now configure a webhook to forward incoming messages to your server:

```bash
curl -X POST https://apiv2.waiflow.app/api/v2/webhooks \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://your-server.com/webhook/whatsapp",
    "events": ["message"]
  }'
```

Every time a message arrives, MoltFlow POSTs to your webhook URL. That's where the magic happens.
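
Concretely, the handler in Step 3 expects a payload shaped roughly like this: a top-level `session` name plus a `data` object carrying the sender and message body. Treat the exact field names as version-dependent and check them against a real delivery:

```json
{
  "session": "support-bot",
  "event": "message",
  "data": {
    "from": "4915551234567@c.us",
    "body": "Hi, what's your return policy?"
  }
}
```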

For more details, check out our Getting Started with WhatsApp Automation guide.

## Step 2: Create Your OpenClaw Support Agent

OpenClaw is an open-source framework for building autonomous AI agents. Think of it as the control center: you define what your agent can do (its "skills"), give it a system prompt, and let it run.

First, install OpenClaw:

```bash
pip install openclaw-sdk
```

Now define your support agent. Here's the skeleton:

```python
from openclaw import Agent
from openclaw.skills import FAQRetriever, TicketCreator

# Define your agent with a clear system prompt
support_agent = Agent(
    name="WhatsApp Support Bot",
    system_prompt="""You are a helpful customer support agent for our e-commerce platform.
    Your job is to answer customer questions, help troubleshoot issues, and escalate
    complex problems to human agents when necessary.

    Keep responses concise (WhatsApp users expect quick replies).
    Use plain text only—WhatsApp doesn't render markdown well.
    If you don't know the answer, admit it and offer to connect them with a human.
    Be warm but professional. No emojis unless the customer uses them first.""",

    skills=[
        FAQRetriever(knowledge_base="./data/faq.json"),
        TicketCreator(ticketing_system="zendesk"),
    ],

    model="gpt-4o",  # or claude-3-5-sonnet, whatever you prefer
    temperature=0.7
)
```

Key parts:

  • System prompt: Sets the tone. This is where you encode your brand voice and support policies.
  • Skills: Modular capabilities. FAQRetriever searches your knowledge base. TicketCreator opens tickets in your CRM when the agent can't resolve the issue.
  • Model choice: Use whatever LLM fits your budget and performance needs.

### Building Skills: FAQ Retrieval

The FAQRetriever skill needs a knowledge base. Keep it simple for now—a JSON file with question/answer pairs:

```json
{
  "faqs": [
    {
      "question": "What's your return policy?",
      "answer": "We accept returns within 30 days of purchase. Items must be unused with original packaging. Start a return through your order page."
    },
    {
      "question": "How do I track my order?",
      "answer": "Check your order status at example.com/orders using the email you provided at checkout."
    }
  ]
}
```

When a customer asks "Can I return this?", the agent searches the FAQ, finds the closest match, and returns the answer. No need to hard-code every possible phrasing—the LLM understands semantic similarity.
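
The retriever's internals belong to OpenClaw, but the core idea fits in a few lines. Here's a deliberately crude keyword-overlap matcher, useful as a mental model or a framework-free fallback; a real implementation would rank by embedding similarity instead:

```python
import json
import re

def _words(text: str) -> set[str]:
    return set(re.findall(r"[a-z']+", text.lower()))

def naive_faq_match(question: str, path: str = "./data/faq.json") -> str | None:
    """Rank FAQs by shared words with the question; embeddings do this better."""
    with open(path) as f:
        faqs = json.load(f)["faqs"]
    asked = _words(question)
    best = max(faqs, key=lambda faq: len(asked & _words(faq["question"])))
    return best["answer"] if asked & _words(best["question"]) else None
```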

## Step 3: Wire Up the Webhook Handler

Now connect MoltFlow's webhook to your OpenClaw agent. Here's a FastAPI endpoint that does the job:

```python
from fastapi import FastAPI, Request
import httpx
import os
from openclaw import Agent

app = FastAPI()

# Your support agent instance (defined above)
support_agent = Agent(...)

# MoltFlow API credentials (read the token from the environment; don't hardcode it)
MOLTFLOW_API_TOKEN = os.environ["MOLTFLOW_API_TOKEN"]
MOLTFLOW_API_BASE = "https://apiv2.waiflow.app/api/v2"

@app.post("/webhook/whatsapp")
async def handle_whatsapp_message(request: Request):
    """Receives messages from MoltFlow, processes with OpenClaw, replies via API."""

    # Parse incoming webhook payload
    payload = await request.json()

    # Extract relevant fields
    message_text = payload["data"]["body"]  # Customer's message
    sender_id = payload["data"]["from"]  # Customer's WhatsApp ID
    session_name = payload["session"]  # Your MoltFlow session name

    # Pass message to OpenClaw agent for processing
    agent_response = await support_agent.process(
        input_text=message_text,
        context={"sender_id": sender_id}  # Include sender for conversation memory
    )

    # Send agent's response back via MoltFlow API
    async with httpx.AsyncClient() as client:
        await client.post(
            f"{MOLTFLOW_API_BASE}/sessions/{session_name}/messages",
            headers={
                "Authorization": f"Bearer {MOLTFLOW_API_TOKEN}",
                "Content-Type": "application/json"
            },
            json={
                "chatId": sender_id,
                "text": agent_response.text
            }
        )

    return {"status": "ok"}
```

What's happening here:

  1. MoltFlow webhook delivers the message payload
  2. Extract the text and sender ID
  3. Feed it to the OpenClaw agent with agent.process()
  4. Agent returns a response (after consulting its skills)
  5. Send response back to customer via MoltFlow's /messages endpoint

The customer sees a reply within seconds. From their perspective, they're just chatting on WhatsApp. Behind the scenes, an AI agent is searching your knowledge base, evaluating ticket creation rules, and crafting contextual responses.
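
Before exposing the endpoint publicly, it's worth poking it locally. Assuming the app is running via `uvicorn main:app` on port 8000, a fake delivery looks like this:

```python
import httpx

# Simulate a MoltFlow webhook delivery against the local dev server
fake_payload = {
    "session": "support-bot",
    "data": {
        "from": "4915551234567@c.us",
        "body": "What's your return policy?",
    },
}

resp = httpx.post("http://localhost:8000/webhook/whatsapp", json=fake_payload)
print(resp.status_code, resp.json())  # expect: 200 {'status': 'ok'}
```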

## Step 4: Handle Human Escalation

Not everything can be automated. Sometimes the customer needs a real person. Your agent should detect these scenarios and escalate gracefully.

Add escalation logic to your system prompt:

```python
system_prompt="""...
If the customer explicitly asks for a human ("speak to a person", "I want to talk to someone"),
or if you've failed to resolve the issue after 3 exchanges, respond with:
'I'll connect you with one of our support specialists. One moment please.'
Then escalate.
"""

Implement escalation as a skill:

```python
import httpx

from openclaw import Skill

# MOLTFLOW_API_BASE and MOLTFLOW_API_TOKEN are the module-level constants from Step 3

class HumanEscalation(Skill):
    """Forwards conversation to support team WhatsApp group."""

    def __init__(self, support_group_id: str):
        self.support_group_id = support_group_id

    async def execute(self, context: dict):
        """Send message to support group with customer context."""

        sender = context["sender_id"]
        conversation_summary = context.get("conversation_summary", "No summary available")

        # Notify support team via group message
        async with httpx.AsyncClient() as client:
            await client.post(
                f"{MOLTFLOW_API_BASE}/sessions/support-bot/messages",
                headers={"Authorization": f"Bearer {MOLTFLOW_API_TOKEN}"},
                json={
                    "chatId": self.support_group_id,  # Support team group ID
                    "text": f"🚨 Escalation Request\n\nCustomer: {sender}\n\nSummary:\n{conversation_summary}\n\nPlease respond directly to the customer."
                }
            )

        return "Escalated to support team"

Add this skill to your agent's skill list. Now when escalation is needed, the support team gets pinged in their WhatsApp group with full context.
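
Wiring it in is one more entry in the agent's skill list. The group ID below is a placeholder; WhatsApp group chat IDs conventionally end in `@g.us`, and you'd pull the real one from your MoltFlow chats:

```python
support_agent = Agent(
    name="WhatsApp Support Bot",
    system_prompt="...",  # the prompt from Step 2, plus the escalation rules above
    skills=[
        FAQRetriever(knowledge_base="./data/faq.json"),
        TicketCreator(ticketing_system="zendesk"),
        HumanEscalation(support_group_id="1234567890-1234567890@g.us"),  # placeholder
    ],
    model="gpt-4o",
)
```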

## Step 5: Add Conversation Memory

Right now, every message is processed in isolation. The agent doesn't remember what was said two messages ago. That's fine for simple FAQs, but breaks down for multi-turn conversations.

Let's add basic conversation memory using a dictionary (in production, use Redis or a database):

```python
conversation_history = {}  # sender_id -> list of messages

@app.post("/webhook/whatsapp")
async def handle_whatsapp_message(request: Request):
    payload = await request.json()
    message_text = payload["data"]["body"]
    sender_id = payload["data"]["from"]

    # Retrieve or initialize conversation history
    if sender_id not in conversation_history:
        conversation_history[sender_id] = []

    history = conversation_history[sender_id]
    history.append({"role": "user", "content": message_text})

    # Pass full conversation to agent
    agent_response = await support_agent.process(
        input_text=message_text,
        conversation_history=history,
        context={"sender_id": sender_id}
    )

    # Store agent's response in history
    history.append({"role": "assistant", "content": agent_response.text})

    # Keep only last 10 messages to avoid token bloat
    conversation_history[sender_id] = history[-10:]

    # Send response via MoltFlow
    # ... (same as before)
```

Now your agent remembers context. If a customer says "I ordered the blue one" and then "When will it arrive?", the agent knows "it" refers to the blue item from two messages ago.

Production tip: Store conversation history in Redis with a 24-hour TTL. If a customer messages again the next day, start fresh—nobody wants the bot to remember last week's conversation.
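
A minimal sketch of that swap with redis-py follows; the key prefix and JSON serialization are choices of this example, not anything the stack requires:

```python
import json

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)
TTL_SECONDS = 24 * 60 * 60  # conversations expire after a day of silence

def load_history(sender_id: str) -> list:
    raw = r.get(f"chat:{sender_id}")
    return json.loads(raw) if raw else []

def save_history(sender_id: str, history: list) -> None:
    # Trim to the last 10 turns and refresh the 24-hour TTL on every write
    r.set(f"chat:{sender_id}", json.dumps(history[-10:]), ex=TTL_SECONDS)
```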

## Advanced: Multi-Language Support

WhatsApp is global. Your customers might message you in Spanish, Portuguese, Hindi, or Arabic. OpenClaw agents can handle this transparently if you use a multilingual LLM like GPT-4 or Claude.

Update your system prompt:

```python
system_prompt="""...
Detect the customer's language and respond in the same language.
If unsure, default to English.
"""

That's it. The LLM handles translation internally. You don't need separate agents for each language.

## What's Next?

You've built a WhatsApp support bot that:

  • Answers FAQs from your knowledge base
  • Creates tickets when issues can't be resolved
  • Escalates to human agents gracefully
  • Remembers conversation context

MoltFlow's OpenClaw integration makes production-ready support bots achievable in hours, not weeks.

Continue learning:

  • Getting Started with WhatsApp Automation: a deeper look at sessions, webhooks, and sending messages

Sign up for MoltFlow and get your API token in under 60 seconds. Your customers are waiting on WhatsApp—meet them where they are.

> Try MoltFlow Free — 100 messages/month
