Difficulty: Medium · Time: About 20 minutes · Outcome: Master Hermes’ core configuration for all-platform access
Have you ever wondered: what if the same AI assistant could run in Telegram, Lark, and WeCom at the same time, with only one deployment? Hermes Agent does exactly that.
It’s Nous Research’s open-source, self-learning AI Agent. One of its biggest highlights is a built-in universal message gateway: you only need to start a single process on your server to connect 10+ messaging platforms at once, including Telegram, Discord, Slack, Lark, WeCom, Signal, and WhatsApp. Messages are automatically routed, memory is shared across platforms, and a single configuration keeps working everywhere.
In this article, we’ll connect the three most commonly used platforms—Telegram, Lark (Feishu), and WeCom—end to end.
Target Audience
This article is for developers like:
- You have some Linux / command-line experience and want to self-host an AI Agent on a server
- Independent developers or small teams who need multi-platform messaging integration
- Teams evaluating Hermes Agent as a replacement for OpenClaw
TIP
If you don’t have an LLM API Key yet, we recommend getting one from Defapi. Defapi’s Claude Opus 4.6 is $2.5 input / $12.5 output per million tokens—half the official price, with excellent value. For supported API protocols, see the Defapi Claude documentation.
Core Dependencies & Environment
| Dependency | Description |
|---|---|
| Python | 3.11+ (handled automatically by the install script) |
| Operating System | Linux (Ubuntu 22.04+), macOS, or WSL2 |
| Messaging Platform Accounts | Telegram Bot Token / Feishu enterprise self-built app / WeCom |
| LLM API Key | Recommended: Defapi (Claude Opus 4.6 at half price) |
| Docker | Optional; v0.6.0 is officially supported for container deployment |
WARNING
Windows native is not supported. Install WSL2, then operate inside your WSL2 terminal.
Complete Project Structure
hermes-agent/
├── scripts/
│ └── install.sh # One-click install script
└── ~/.hermes/ # Main configuration directory (generated automatically after install)
├── config.yaml # Core configuration (model, toolset)
├── .env # API Keys (do not commit to Git!)
├── skills/ # Skills directory
├── memory/ # Persistent memory
├── sessions/ # Session history (FTS5 full-text search)
└── gateway/ # Gateway configuration (separate YAML per platform)
├── telegram.yaml
├── feishu.yaml
└── wecom.yaml
Step-by-Step
Step 1: One-click Install Hermes Agent
On Linux / macOS / WSL2, run the official install script:
curl -fsSL https://raw.githubusercontent.com/NousResearch/hermes-agent/main/scripts/install.sh | bash
The install script automatically handles Python, Node.js, and all dependencies—no manual intervention needed. After installation, reload your shell:
source ~/.bashrc # bash users
source ~/.zshrc # zsh users
Verify the installation:
hermes doctor
TIP
hermes doctor automatically diagnoses environment issues: Python version, API Key, network connectivity, tool dependencies, and more. We strongly recommend running it once right after installation.
Step 2: Configure the LLM Provider—Connect to Defapi
Next, you need to tell Hermes where to find the model. Add your API Key to the .env file:
hermes config set ANTHROPIC_API_KEY <your-defapi-key>
Defapi is compatible with Anthropic’s v1/messages interface protocol, so Hermes can connect directly. You can also use the hermes model command to choose the model and provider interactively:
hermes model
# Interactive selection:
# 1. Nous Portal
# 2. OpenRouter (200+ models)
# 3. OpenAI
# 4. Defapi / custom endpoint
# 5. Other...
If you use Defapi, in the interactive menu select Custom endpoints, then provide:
API Endpoint: https://api.defapi.org
Model: anthropic/claude-opus-4.6
WARNING
The .env file contains your API Key—never submit it to a Git repository. If you deploy with Docker, inject it via environment variables or Docker Secrets, not by hardcoding it directly in your Dockerfile.
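If you deploy with Docker, one way to inject the key is through a Compose environment variable sourced from the host shell (a sketch only; the image and service names below are assumptions, not from official Hermes docs):

```yaml
# docker-compose.yaml (illustrative; image/service names are assumptions)
services:
  hermes:
    image: hermes-agent:latest
    environment:
      # Resolved from the host shell or a local .env file that stays out of Git
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
    volumes:
      # Persist configuration, memory, and sessions across container restarts
      - ~/.hermes:/root/.hermes
```

This keeps the key out of the image itself; for production, Docker Secrets are the stricter option.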
Step 3: Connect Your First Platform—Telegram Bot
This is the easiest platform to validate, because Telegram’s Bot API is public and doesn’t require enterprise verification.
3.1 Create a Bot
In Telegram, open @BotFather, send /newbot, follow the prompts to name your Bot, and get BOT_TOKEN. The format looks like 123456789:ABCdefGHIjklMNOpqrsTUVwxyz.
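Before pasting the token into your config, a quick sanity check on its shape can catch copy-paste mistakes. The exact secret length varies, so the regex below is deliberately loose—an illustration, not part of Hermes:

```python
import re

# Telegram bot tokens have the rough shape "<numeric id>:<alphanumeric secret>".
# This only validates the general format; it cannot tell you the token is live.
TOKEN_RE = re.compile(r"^\d{6,12}:[A-Za-z0-9_-]{25,}$")

def looks_like_bot_token(token: str) -> bool:
    """Return True if the string has the rough shape of a Telegram bot token."""
    return bool(TOKEN_RE.match(token))
```

A malformed token fails fast here instead of failing silently at gateway startup.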
3.2 Configure the Hermes Gateway
hermes gateway setup
# Select telegram, then go through the interactive configuration
Or manually create ~/.hermes/gateway/telegram.yaml:
platform: telegram
enabled: true
bot_token: "123456789:ABCdefGHIjklMNOpqrsTUVwxyz"
# Access control: allow only specific users
allowed_users:
- your_telegram_username
# Security settings
require_approval_for_dangerous_tools: true
3.3 Verify
After starting the gateway, send a message to the Bot:
hermes gateway start
# Or run in the background:
hermes gateway start --daemon
In Telegram, send /new to the Bot; it should reply with a welcome message. Then send a test message (for example, "Hello") and see whether the AI responds.
TIP
If the Bot doesn’t respond, check: hermes gateway status for the running state, and hermes logs for the latest logs. Common causes are an incorrect Bot Token or a port that’s already in use.
Step 4: Connect Lark (Feishu) — Enterprise App
Lark integration is a bit more complex—you need to create an enterprise self-built app in the Lark Open Platform.
4.1 Create an App in the Lark Open Platform
- Open Lark Open Platform and create an enterprise self-built app
- In “Credentials and Basic Information”, get the App ID and App Secret
- Enable the “Bot” capability
4.2 Configure Event Subscriptions
In the app’s “Event Subscriptions”:
- Set Request URL to: https://your-domain/gateway/feishu/webhook
- Choose Events: im.message.receive_v1 (receive messages)
WARNING
Lark requires the callback URL to be publicly accessible via HTTPS. If your server doesn’t have a public domain, you can use ngrok for temporary tunneling: ngrok http 8080, then put the generated HTTPS URL into Lark.
4.3 Install the App to Your Enterprise
In “Version Management and Publishing”, create and publish a version; otherwise the Bot can’t receive messages.
4.4 Configure the Hermes Gateway
hermes gateway setup
# Select feishu, then enter App ID and App Secret interactively
Manually edit ~/.hermes/gateway/feishu.yaml:
platform: feishu
enabled: true
app_id: "cli_xxxxxxxxxxxxxx"
app_secret: "xxxxxxxxxxxxxxxxxxxx"
verification_token: "your_verification_token"
# Message encryption (optional but recommended)
encrypt_key: "your_encrypt_key"
# Allowed chats and users
allowed_chats: []
require_mention: false
4.5 Verify
In Lark, send /new to the Bot and see if it responds. Lark’s message format is different from Telegram—your Bot can send rich-text message cards, and Hermes adapts automatically.
Step 5: Connect WeCom (WeChat Work)
WeCom integration is similar to Lark: it also uses the callback Webhook mode.
5.1 Create an App in the WeCom Admin Backend
- Log in to WeCom Admin Backend
- Go to “Application Management” → “Create Application” → choose “Enterprise self-built”
- Get AgentId, Secret, and your enterprise CorpId
- Set “Trusted IPs” to your server IP
5.2 Configure Receiving Messages
In the app settings, enable “Message Receiving” mode:
- Fill in URL: https://your-domain/gateway/wecom/webhook
- Choose “Compatible mode” or “Event mode”
5.3 Configure the Hermes Gateway
# ~/.hermes/gateway/wecom.yaml
platform: wecom
enabled: true
corp_id: "wwxxxxxxxxxxxxxx"
agent_id: "1000001"
corp_secret: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
encrypt_mode: "safe" # Safe mode requires setting encoding_aes_key
encoding_aes_key: "your_43_char_encoding_aes_key" # WeCom's EncodingAESKey is 43 characters
5.4 Verify
In WeCom, find your app and send a test message.
TIP
WeCom has strict requirements for IP whitelisting. If you deploy on a cloud server (e.g., DigitalOcean or Hetzner), make sure to add your server’s public IP to the trusted IP list in the admin backend—otherwise WeCom will reject messages directly.
Step 6: Start the Gateway—Run All Three Platforms Once
After you finish configuring all platforms, start the gateway with a single command:
hermes gateway start
Hermes automatically loads all enabled platform configurations under ~/.hermes/gateway/ and starts all adapters concurrently. If one platform fails to start (for example, because the token is wrong), it won’t affect the normal operation of the other platforms.
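The failure-isolation behavior described above can be sketched roughly like this. It is an illustration, not Hermes source; start_fn stands in for a real adapter constructor:

```python
def start_adapters(configs: dict, start_fn) -> dict:
    """Start each enabled platform independently, so a failure in one
    (e.g. a bad token) doesn't prevent the others from coming online."""
    status = {}
    for name, cfg in configs.items():
        if not cfg.get("enabled", False):
            status[name] = "disabled"
            continue
        try:
            start_fn(name, cfg)
            status[name] = "online"
        except Exception as exc:
            # Record the failure and keep going with the remaining platforms.
            status[name] = f"error: {exc}"
    return status
```

The key design choice is the per-platform try/except: one broken adapter degrades to an error status instead of taking the whole gateway down.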
Use hermes gateway status to check each platform’s running status:
Platform Status Users Messages
telegram   ● online   3   142
feishu     ● online   7    89
wecom      ● online   2    15
Stop the gateway with Ctrl+C, or run it in the background with --daemon:
hermes gateway start --daemon
Step 7: Multi-Platform Message Verification
Now you can send messages to the Bot on each of the three platforms for testing:
# Telegram
/new
Hello, introduce yourself
# Lark
/new
Who are you?
# WeCom
/new
What can I help you with?
All three platforms respond identically and share memory: whatever you tell the Bot on Telegram, it will also remember on Lark and WeCom. This is Hermes’ core value: configure once, get a unified experience on every platform.
TIP
If you want each platform to run a different Hermes Agent instance (different models, different skill sets), you can use the new Profiles feature in Hermes v0.6.0: hermes -p telegram profile create, then configure different models and platforms under different profiles. Profiles are completely isolated from each other, with no interference.
Troubleshooting FAQ
1. Telegram Bot Doesn’t Receive Messages
Symptoms: The Bot doesn’t respond at all, and there’s no log entry.
Troubleshooting steps:
# 1. Confirm the token is correct
hermes config get telegram.bot_token
# 2. Confirm the port isn’t already in use
ss -tlnp | grep 8080
# 3. If you use webhook mode, check the Telegram Bot API configuration
# Webhook URL must be publicly accessible; for local dev, use ngrok tunneling
ngrok http 8080
# 4. Reset the webhook (sometimes the Bot gets stuck on an old webhook)
curl -X POST "https://api.telegram.org/bot<YOUR_TOKEN>/deleteWebhook"
2. Lark Callback Verification Fails
Symptoms: Lark Open Platform shows “Callback URL verification failed”.
Troubleshooting steps:
# 1. Confirm the public URL is accessible
curl -X GET "https://your-domain.com/gateway/feishu/webhook"
# 2. Check Lark signature verification
# Lark uses AES to encrypt messages; encrypt_key must match the backend configuration
# Check Hermes logs:
hermes logs --platform feishu
WARNING
Lark’s encryption mode (encrypt_mode) must match the backend setting. If you select “safe mode” in the backend, Hermes must also set encrypt_mode: safe and the corresponding encoding_aes_key.
3. WeCom Messages Have No Response
Symptoms: You send a message, but there is no reply, and no record appears in the logs.
Troubleshooting steps:
# 1. Confirm CorpId / AgentId / Secret are correct
# 2. Most importantly: check the IP whitelist
# WeCom requires the server IP calling the API to be in the trusted IP list
# 3. Get an AccessToken to test
curl "https://qyapi.weixin.qq.com/cgi-bin/gettoken?corpid=wwxxx&corpsecret=secret"
# If it returns "errmsg": "ip not in whitelist", the IP wasn’t whitelisted
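If you script this check, interpreting the gettoken JSON looks roughly like the sketch below. The error-message matching is an assumption based on WeCom's typical hint text, not an exhaustive error table:

```python
import json

def check_gettoken_response(body: str) -> str:
    """Interpret WeCom's gettoken JSON: errcode 0 means success;
    an errmsg mentioning the whitelist means the server IP isn't trusted."""
    resp = json.loads(body)
    code = resp.get("errcode", 0)
    if code == 0:
        return "ok"
    if "not in whitelist" in resp.get("errmsg", ""):
        return "add your server's public trusted IP to the whitelist"
    return f"error {code}: {resp.get('errmsg')}"
```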
4. Model Doesn’t Respond
Symptoms: The Bot receives the message, but the AI never replies; it stays stuck on “thinking”.
Troubleshooting steps:
# 1. Test whether the API key is valid
curl -X POST "https://api.defapi.org/api/v1/messages" \
-H "Authorization: Bearer <your-key>" \
-H "Content-Type: application/json" \
-d '{"model":"anthropic/claude-opus-4.6","messages":[{"role":"user","content":"hi"}],"max_tokens":10}'
# 2. Check model configuration
hermes config get model
# 3. Check fallback_providers (recommended)
# Add a backup Provider in ~/.hermes/config.yaml:
fallback_providers:
- provider: openrouter
api_key: "${OPENROUTER_API_KEY}"
TIP
Besides Defapi, it’s recommended to configure at least one backup Provider. Hermes v0.6.0 supports chained fallback_providers: if the primary Provider times out or returns an error, Hermes automatically switches to the backup Provider to keep the Agent online.
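The chained fallback described above can be sketched like this. The provider callables are stand-ins; Hermes wires the real chain up from fallback_providers in config.yaml:

```python
def complete_with_fallback(providers, prompt):
    """Try each provider in order and return the first successful response.
    Raise only after every provider in the chain has failed."""
    last_error = None
    for call in providers:
        try:
            return call(prompt)
        except Exception as exc:
            # Timeout or API error: remember it and try the next provider.
            last_error = exc
    raise RuntimeError(f"all providers failed: {last_error}")
```

The effect is that a transient outage at the primary provider degrades to a slower response rather than a dead Agent.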
5. Tool Execution Errors
Symptoms: The Bot replies that it can’t perform a certain operation.
Troubleshooting steps:
# 1. Check the currently enabled toolsets
hermes tools
# 2. Enable the toolsets you need
hermes tools enable web_search
hermes tools enable terminal
# 3. Dangerous commands require manual confirmation
# Check configuration:
hermes config get approval_required
# If set to true, all dangerous commands require your manual approval
6. Multi-Platform Concurrency Conflicts
Symptoms: After one platform receives a message, the other platforms also stop responding.
Troubleshooting steps:
Hermes v0.6.0’s token-lock mechanism prevents two platform instances from conflicting when using the same Bot Token. But if you run multiple instances of the same Bot across multiple machines, this problem can still happen.
# Confirm only one Hermes gateway instance is running
ps aux | grep hermes
pkill -f hermes-agent
# Then restart
hermes gateway start
TIP
Using Profiles lets each platform run a fully independent Hermes instance: each profile has its own configuration directory, memory, and sessions. Different profiles are completely isolated—ideal for teams where different members use their own Bot.
Further Reading / Advanced Directions
Migrating from OpenClaw
If you’re already using OpenClaw, Hermes provides an official migration tool that can automatically import SOUL.md, MEMORY.md, Skills, API Keys, and messaging platform configuration:
hermes claw migrate # Interactive full migration
hermes claw migrate --dry-run # Preview first
hermes claw migrate --preset user-data # Don’t migrate sensitive information
MCP Server Mode
Hermes v0.6.0 adds MCP Server capability. With one command, Hermes can expose tools to the outside world:
hermes mcp serve
# Supports two transport protocols: stdio and Streamable HTTP
After connecting, IDEs like Cursor, VS Code, and Zed can call all Hermes tools and conversation capabilities via the MCP protocol.
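For example, a minimal client entry might look like the fragment below. The server name "hermes" is arbitrary, and the exact config file location and schema belong to the IDE (e.g. Cursor's mcp.json), not to Hermes:

```json
{
  "mcpServers": {
    "hermes": {
      "command": "hermes",
      "args": ["mcp", "serve"]
    }
  }
}
```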
Custom Skills System
Hermes’ Skills system supports creating skills automatically from natural-language descriptions. You can also write them manually:
# List existing skills
hermes skills
# Browse Skills Hub (community skills marketplace)
/skills
# Install community skills from agentskills.io
Cron Scheduled Tasks
Use natural language to set scheduled tasks. Hermes will execute them automatically and push the results to any platform:
# Every day at 8am, push a weather report to Telegram
/cron "Every day at 8am, summarize today's weather and send to me" --platform telegram
Conclusion
Hermes Agent’s all-platform gateway design is very elegant: you only need to maintain a single configuration, and one Agent instance can run on any number of messaging platforms. Memory is shared across platforms; configure it once and you’re done. Resource usage is also much lower than running multiple independent Bots.
If you’re still using OpenClaw, or you’re looking for a truly battle-tested open-source AI Agent, Hermes is worth 20 minutes of your time. Pair it with Defapi’s half-priced Claude Opus 4.6 and you can roughly halve your monthly model cost without sacrificing the experience.