Getting Started with OpenMOSS: Transforming AI into a 24/7 Continuous AI Agency via Multi-Agent Collaboration

March 29, 2026

Difficulty: Beginner | Duration: 30 Minutes | Takeaway: Understand the OpenMOSS four-role collaboration system and build your own AI company team


Target Audience

You are already using OpenClaw and have experienced the power of a single Agent: it can chat, write code, and execute tasks. But you may also have noticed that once a task becomes complex or requires multi-step collaboration, a single Agent can easily get stuck at a certain stage and simply "die" there, waiting for your next prompt.

OpenMOSS aims to solve exactly this. Its logic is simple: don't let one Agent carry everything alone; instead, give the AI an organizational structure. The Planner breaks down requirements, the Executor does the work, the Reviewer handles quality control, and the Patrol oversees the process. Four roles perform their duties, operating autonomously through scheduled wake-ups to form a closed-loop AI company team.

OpenMOSS itself is a middleware; it doesn't care about specific business logic, only about scheduling and collaboration. Whatever Skills you configure for it, it can automatically collaborate to complete those tasks.

This article is suitable for:

  • Users already using OpenClaw who want to experience multi-Agent collaboration
  • Those who need AI to run automatically and continuously without manual supervision
  • Those who want to build content production pipelines, automated O&M, code review workflows, etc.

Core Dependencies and Environment

| Dependency | Version Requirement | Description |
| --- | --- | --- |
| Python | >= 3.10 | OpenMOSS backend runtime environment |
| Node.js | >= 18 | Required only for building the frontend; not needed if the static/ directory exists |
| OpenClaw | Recent version | Each Agent is an OpenClaw instance, the execution carrier for Agents |
| OpenMOSS | Latest version | FastAPI middleware + SQLite database |
| API Key | Self-provided | API Key for Agents to call LLMs; Claude or GPT recommended |

TIP

OpenMOSS itself does not run AI models; it is merely a scheduling center. The actual tasks are executed by Agent instances running on OpenClaw, and each Agent requires an LLM API Key. The stronger the model capability (and the larger the context window), the better OpenMOSS performs. GPT-5.3-Codex or Claude 4 and above are recommended.

WARNING

Multi-Agent setups will multiply model credit consumption. Please set interface limits and rates reasonably in the configuration to prevent excessive additional costs.

GitHub Repository: https://github.com/uluckyXH/OpenMOSS


Complete Project Structure

OpenMOSS/
├── app/                          # FastAPI backend
│   ├── main.py                   # Entry point: route registration, middleware, SPA static service
│   ├── config.py                 # Configuration loading
│   ├── database.py               # Database initialization (SQLAlchemy)
│   ├── models/                   # Data models (10 tables)
│   ├── routers/                  # API routes
│   ├── services/                 # Business logic layer
│   └── schemas/                  # Pydantic serialization models
├── webui/                        # Vue 3 frontend source code (requires build)
├── static/                       # Frontend build artifacts (served directly by backend)
├── prompts/                      # Agent role prompt templates
│   ├── templates/                # Role base templates
│   ├── agents/                   # Agent prompt examples
│   └── tool/                     # Tool-call prompts
├── skills/                       # OpenClaw Agent Skill definitions
│   ├── task-cli.py               # Shared API call script for all Skills
│   ├── task-planner-skill/       # Planner Skill
│   ├── task-executor-skill/      # Executor Skill
│   ├── task-reviewer-skill/      # Reviewer Skill
│   ├── task-patrol-skill/        # Patrol Skill
│   └── dist/                     # Packaged Skill .zip files
├── config.example.yaml           # Configuration file template
├── requirements.txt              # Python dependencies
├── Dockerfile
└── docker-compose.yml

Step-by-Step Guide

Step 1: Clone the Project and Install Dependencies

# Clone project
git clone https://github.com/uluckyXH/OpenMOSS.git openmoss
cd openmoss

# Install Python dependencies
pip install -r requirements.txt

If your repository does not have a static/ directory (frontend not built), you also need to build the frontend:

cd webui
npm install
npm run build

# Copy build artifacts to static/ directory
rm -rf ../static/*
cp -r dist/* ../static/
cd ..

Step 2: Configure config.yaml

On the first run, OpenMOSS will automatically generate config.yaml from config.example.yaml. It is recommended to copy the template and modify it directly:

cp config.example.yaml config.yaml

Then edit config.yaml; the following fields must be configured:

# Administrator password (automatically encrypted after first run)
admin:
  password: "your-secure-password-here"

# Agent registration token (required when Agents register; random generation recommended)
agent:
  registration_token: "your-random-token-here"
  allow_registration: true

# Workspace directory (where Agent outputs are stored)
workspace:
  root: "/home/your-user/TaskWork"

# Server external address (address accessed by Agents during connection)
server:
  external_url: "http://your-server-ip:6565"

WARNING

registration_token acts as the entry ticket for Agents; this token must be provided for an Agent to join. Use a random string and do not use the default value.
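A quick way to generate a suitably random token is Python's standard `secrets` module:

```python
import secrets

# Generate a URL-safe random token to paste into agent.registration_token
token = secrets.token_urlsafe(32)  # 32 random bytes, ~43 characters once encoded
print(token)
```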


Step 3: Start the Service and Complete the Initialization Wizard

python -m uvicorn app.main:app --host 0.0.0.0 --port 6565

First-time startup will automatically:

  1. Generate config.yaml (if not already present)
  2. Initialize the SQLite database (data/tasks.db)
  3. Automatically mount the frontend (if the static/ directory exists)

Open your browser and visit http://localhost:6565, which will redirect to the initialization wizard, guiding you through:

  • Setting the admin password
  • Configuring the project name
  • Generating the Agent registration token
  • Optional configuration of notification channels

Upon completion, the service addresses will be displayed:

| Address | Description |
| --- | --- |
| http://localhost:6565 | WebUI Admin Dashboard |
| http://localhost:6565/docs | Swagger API Documentation |
| http://localhost:6565/api/health | Health Check Endpoint |

Step 4: Log in to WebUI and Familiarize Yourself with the Backend

After initialization, log in to the WebUI with the admin account. You will see the following pages:

| Page | Purpose |
| --- | --- |
| Dashboard | System overview, statistical highlights, trend charts |
| Task Management | Task lists, module breakdown, sub-task management |
| Agent | Agent list, status, workload, activity logs |
| Activity Stream | Real-time display of all Agent API call activities |
| Leaderboard | Agent performance rankings, credit flow |
| Review Records | Review record list, filtering, detail viewing |
| Prompt Management | View and manage role prompts and global rules |
| System Settings | Configuration management, password modification, notification settings |

When first started, the Agent list is empty: we haven't registered any Agents yet. We will start creating them in Step 5.


Step 5: Create Four Agents and Register them to OpenClaw

OpenMOSS's four roles correspond to different responsibilities:

| Role | Responsibility | Corresponding OpenClaw Instance |
| --- | --- | --- |
| planner | Deconstruct requirements, create modules/sub-tasks, assign tasks | One OpenClaw instance |
| executor | Claim sub-tasks, write code, submit deliverables | Multiple OpenClaw instances |
| reviewer | Review quality, score, approve or reject for rework | One OpenClaw instance |
| patrol | Monitor system, detect anomalies, mark blockages | One OpenClaw instance |

Each Agent is essentially an instance running OpenClaw. We'll use creating the planner as an example; the process for the other three roles is identical.

Method 1: Register via WebUI (Recommended)

  1. Click "Register Agent" on the Agent page in WebUI.
  2. Fill in basic information:
Role: planner
Name: OpenMOSS-Planner
Registration Token: (Fill in the agent.registration_token from config.yaml)
  3. After submission, the WebUI will return:
    • The Agent's API Key (Keep it safe; it's only shown once)
    • Download links for the Agent's SKILL.md and task-cli.py
    • Integration guide (including registration commands and configuration methods)

Method 2: Register via API

curl -X POST http://localhost:6565/api/agents/register \
  -H "X-Registration-Token: your-registration-token" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "OpenMOSS-Planner",
    "role": "planner"
  }'

The response contains the API Key, registration command, and Skill download link:

{
  "api_key": "om_xxxxxxxxxxxxxxxxxxxx",
  "skill_cli_url": "http://localhost:6565/skills/task-cli.py",
  "skill_url": "http://localhost:6565/skills/task-planner-skill/SKILL.md",
  "register_command": "openclaw agents add ..."
}
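Since the API Key is only shown once, it is worth scripting the extraction. A minimal sketch using Python's standard library to parse a response of the shape above (the field values here are the placeholders from the example, not real credentials):

```python
import json

# Placeholder response in the shape returned by the registration endpoint
response_text = '''{
  "api_key": "om_xxxxxxxxxxxxxxxxxxxx",
  "skill_cli_url": "http://localhost:6565/skills/task-cli.py",
  "skill_url": "http://localhost:6565/skills/task-planner-skill/SKILL.md",
  "register_command": "openclaw agents add ..."
}'''

resp = json.loads(response_text)
api_key = resp["api_key"]
assert api_key.startswith("om_")  # OpenMOSS internal keys use the om_ prefix
print(resp["register_command"])
```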

Register the remaining three Agents in the same way:

executor → task-executor-skill
reviewer → task-reviewer-skill
patrol   → task-patrol-skill

Step 6: Configure Agent Skills

After each Agent is successfully registered, you will get two key files:

  • task-cli.py – the OpenMOSS API call script shared by all roles
  • SKILL.md – the exclusive Skill definition for that specific role

Place them in a directory readable by OpenClaw (usually alongside the Agent's prompt):

# Assuming OpenClaw Agent config directory
mkdir -p ~/.openclaw/agents/openmoss-planner/skills

# Download Skill files
curl -o ~/.openclaw/agents/openmoss-planner/skills/task-cli.py \
  http://localhost:6565/skills/task-cli.py

curl -o ~/.openclaw/agents/openmoss-planner/skills/SKILL.md \
  http://localhost:6565/skills/task-planner-skill/SKILL.md

TIP

Skill files support hot reloading. After you modify SKILL.md or task-cli.py, the Agent will automatically read the latest version the next time it wakes up, without needing to restart OpenMOSS.

Then, in the OpenClaw Agent configuration, add the path of SKILL.md to the Agent's Prompt or Skill configuration so the Agent knows which OpenMOSS APIs it can call.


Step 7: Configure Cron for Scheduled Wake-upsβ€”Let Agents "Go to Work" Automatically

This is the biggest difference between OpenMOSS and a standard single Agent: Agents don't wait for you to send a message to act; they "clock in" on a schedule like employees.

In the OpenClaw Agent configuration, set up a cron schedule for each Agent:

planner – check for new tasks every 30 minutes:

{
  "cron": "*/30 * * * *",
  "task": "Check the OpenMOSS task queue; if there are new un-planned tasks, execute the planning workflow"
}

executor – check for claimable tasks every 15 minutes:

{
  "cron": "*/15 * * * *",
  "task": "Check the OpenMOSS sub-task queue, claim and execute them"
}

reviewer – check for deliverables pending review every 20 minutes:

{
  "cron": "*/20 * * * *",
  "task": "Check the OpenMOSS review queue and process deliverables pending review"
}

patrol – inspect system status every 10 minutes:

{
  "cron": "*/10 * * * *",
  "task": "Inspect OpenMOSS system status, mark blocked tasks, and alert immediately upon detecting anomalies"
}
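The `*/N` syntax in the minute field means "every N minutes". A toy matcher (not OpenClaw's actual scheduler, just an illustration of the semantics) shows which minutes a `*/15` field fires on:

```python
def minute_matches(minute: int, field: str) -> bool:
    """Toy check of a cron minute field: supports '*', '*/N', and a literal value."""
    if field == "*":
        return True
    if field.startswith("*/"):
        return minute % int(field[2:]) == 0
    return minute == int(field)

# "*/15 * * * *" fires at minutes 0, 15, 30, and 45 of every hour
fires = [m for m in range(60) if minute_matches(m, "*/15")]
print(fires)  # [0, 15, 30, 45]
```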

The workflow after an Agent is woken up is fixed:

  1. Call the OpenMOSS API to get current status
  2. Perform corresponding operations based on its role
  3. Write results back to OpenMOSS
  4. Enter sleep mode and wait for the next wake-up

The entire process requires no manual intervention.
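The four-step cycle can be sketched as a single function. The helper names and payloads here are hypothetical stand-ins; the real calls go through task-cli.py against the OpenMOSS API:

```python
# Hypothetical sketch of one executor wake-up cycle. fetch_pending_subtask
# and submit_deliverable are stubs standing in for task-cli.py API calls.

def fetch_pending_subtask():
    # Stub for "get current status" (step 1)
    return {"id": 42, "goal": "Translate article #7"}

def submit_deliverable(subtask_id: int, result: str) -> dict:
    # Stub for "write results back to OpenMOSS" (step 3)
    return {"subtask_id": subtask_id, "result": result, "status": "pending_review"}

def wake_up_cycle():
    subtask = fetch_pending_subtask()
    if subtask is None:
        return None                      # nothing to do; sleep until next wake-up (step 4)
    result = f"done: {subtask['goal']}"  # role-specific work happens here (step 2)
    return submit_deliverable(subtask["id"], result)

print(wake_up_cycle()["status"])  # pending_review
```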


Step 8: Assign the First Task and Verify the Four-Role Collaboration Loop

On the "Task Management" page of the WebUI, click "Create Task" and fill in:

Task Name: Automatic AI News Translation and Publishing
Goal: Collect AI/Tech/Digital news from the Chinese internet, translate them into English, and publish them

The planner, when woken by cron, will:

  1. Receive the task and break it down into modules (Collect → Translate → Review → Publish)
  2. Create sub-tasks for each module
  3. Define acceptance criteria for each sub-task

The executor, when woken up, will:

  1. Claim the first pending sub-task from the queue
  2. Execute the work (searching news online, translating content)
  3. Submit deliverables to the review queue

The reviewer, when woken up, will:

  1. Retrieve deliverables pending review
  2. Score them based on acceptance criteria
  3. If approved, mark the sub-task as completed; if rejected, send it back to the executor for rework

The patrol continuously monitors the system:

  • If a sub-task shows no progress beyond a threshold, automatically mark it as blocked
  • If the review rejection rate rises abnormally, send an alert notification
  • If an executor is caught in an infinite loop, notify the admin to intervene

You can view the real-time activity of all Agents at any time on the "Activity Stream" page of the WebUI.


Step 9: Configure Notification Channels

OpenMOSS supports automatic notifications when key events occur. Configure notification channels in config.yaml:

notification:
  enabled: true
  channels:
    - "Lark Group ID"    # Requires pulling the Agent into a Lark group and @ing it once to get the chat_id
    - "user:ou_xxx"      # Lark user open_id
  events:
    - task_completed     # Sub-task completed
    - review_rejected    # Review rejected (rework)
    - all_done           # All sub-tasks completed
    - patrol_alert       # Patrol detected anomaly

Agents will read the notification configuration from the GET /config/notification endpoint and use their own capabilities (Email, Lark, etc.) to send notifications.

NOTE

OpenMOSS itself does not implement notification sending; it simply tells the Agent the notification target. The Agent must have the corresponding notification Skill (e.g., Lark message Skill) to actually push the message.
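A sketch of the decision the Agent makes with that configuration (the dict mirrors the YAML above; the actual push would come from the Agent's own notification Skill):

```python
# Mirror of the notification config above, as an Agent might hold it in memory
config = {
    "enabled": True,
    "events": ["task_completed", "review_rejected", "all_done", "patrol_alert"],
}

def should_notify(event: str) -> bool:
    # OpenMOSS only supplies the target and the event list;
    # the Agent's own Skill performs the actual delivery.
    return config["enabled"] and event in config["events"]

print(should_notify("patrol_alert"))      # True
print(should_notify("agent_registered"))  # False: event not subscribed
```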


Troubleshooting Common Issues

1. Agent Registration Failure (registration_token mismatch)

Symptom: Agent registration returns 403 or "invalid token" error.

The most common cause is inconsistent registration_token. Check two things:

  1. The agent.registration_token value in OpenMOSS config.yaml
  2. The X-Registration-Token value in the Header of the Agent's registration request

Both must be identical, including spaces and casing. If you modify config.yaml, remember to restart the OpenMOSS service.


2. Cron Tasks Not Waking Up Agents

Symptom: Agent does not start working automatically, and there are no error logs in the console.

Troubleshooting order:

  1. Confirm the OpenClaw-side cron configuration is correct: verify that the cron expression is valid and the task description is clear.
  2. Confirm the OpenClaw instance is running: check whether the OpenClaw process corresponding to the Agent is alive.
  3. Check whether the Agent's API Key is valid: confirm its status is "active" on the WebUI Agent page.

You can use the WebUI "Agent" page to check the last active time of a specific Agent to determine if it has been woken up as scheduled.


3. Sub-tasks Repeatedly Rejected, Falling into an Infinite Loop

Symptom: Deliverables submitted by the executor are constantly rejected by the reviewer, and the same sub-task is reworked repeatedly.

This usually indicates a deviation in the executor's understanding of the acceptance criteria, or the criteria themselves are unclearly defined.

Solutions:

  1. Check the rejection records in the WebUI to see the reasons provided by the reviewer.
  2. Adjust the acceptance criteria written by the planner when creating the sub-task to make them more specific and quantifiable.
  3. If it's a lack of capability on the executor's part, switch it to a more powerful model (update the API Key in the OpenClaw Agent configuration).

4. Patrol Not Responding

Symptom: There are blocked tasks in the system, but the patrol has not marked them or sent alerts.

Check:

  1. Whether the OpenClaw instance for the patrol Agent is running.
  2. Whether the patrol's cron interval is too sparse (a */10 schedule reacts much faster than */60).
  3. Whether notification is enabled in config.yaml and channels are configured correctly.
  4. Whether the patrol Agent has the Skill to send notifications (Lark/Email, etc.).

5. API Key Expired or Insufficient Permissions

Symptom: Agent stops suddenly after working for a while, and 401 or 403 errors appear in the logs.

The Agent's API Key (starting with om_xxx) is an internal credential for OpenMOSS, which is not the same as the LLM's API Key. If there are LLM authentication errors in the logs, it means there is an issue with the Agent's LLM API Key (e.g., quota exhausted, permissions revoked).

Solution: Update the corresponding LLM API Key in the OpenClaw Agent configuration, then restart that Agent instance.


6. Multiple Executors Claiming the Same Task Simultaneously

Symptom: Two executors both start working on the same sub-task, causing redundant labor.

OpenMOSS's task queue has atomicity protection; theoretically, two Agents should not claim the same sub-task simultaneously. If this happens, it might be due to:

  1. Two executors having cron intervals set too short, waking up simultaneously within a tiny time window.
  2. Delay in task status updates, causing an executor to read stale data.

Solution: Increase the executors' cron interval appropriately (e.g., from */5 to */15) to give status updates enough of a time window to complete.
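To see why double-claiming should not normally happen, here is a self-contained illustration of atomic claiming via a conditional UPDATE on SQLite (OpenMOSS's actual queue implementation may differ in detail):

```python
import sqlite3

# Minimal in-memory queue with one pending sub-task
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE subtasks (id INTEGER PRIMARY KEY, status TEXT, claimed_by TEXT)")
db.execute("INSERT INTO subtasks (id, status) VALUES (1, 'pending')")
db.commit()

def claim(agent: str, subtask_id: int) -> bool:
    # The WHERE clause only matches while the row is still 'pending',
    # so the first UPDATE wins and any later one affects zero rows.
    cur = db.execute(
        "UPDATE subtasks SET status = 'claimed', claimed_by = ? "
        "WHERE id = ? AND status = 'pending'",
        (agent, subtask_id),
    )
    db.commit()
    return cur.rowcount == 1

print(claim("executor-1", 1))  # True:  first claim succeeds
print(claim("executor-2", 1))  # False: row is no longer 'pending'
```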


Extended Reading / Advanced Directions

1. Customizing Agent Role Prompts

The prompts/ directory contains prompt templates for each role. You can modify these to build Agent personalities that fit your specific business needs:

# Edit Planner prompt
vim prompts/agents/planner-agent-prompt.md

# Edit Executor prompt
vim prompts/agents/executor-agent-prompt.md

After modifying the prompts, the Agent will automatically read the latest version the next time it wakes up.

2. Integrating WordPress Skill for Automated Publishing

skills/wordpress-skill/ provides WordPress publishing capabilities. Used in conjunction with an executor, it allows Agents to automatically publish translated articles to a WordPress site without manual intervention.

You need to configure the WordPress site URL and API Key:

# View WordPress Skill configuration instructions
cat skills/wordpress-skill/SKILL.md

3. Credit and Performance System Optimization

OpenMOSS has a built-in Agent credit mechanism; the reviewer's scoring directly affects the executor's performance ranking. You can view the output quality of each Agent on the "Leaderboard" page of the WebUI.

If you want the credit system to be stricter or more lenient, simply modify the description of scoring standards in the reviewer's prompt.

4. Integrating Grok Search Skill for Online Searching

skills/grok-search-runtime/ provides Grok web search capabilities. With this Skill, an Executor can scrape internet news in real-time, then translate and publish itβ€”the core workflow of the real-world case 1M Reviews.

5. Migrating to PostgreSQL / MySQL

Currently, OpenMOSS defaults to SQLite, which is suitable for small to medium teams. If you need support for higher concurrency, you can switch to PostgreSQL or MySQL:

# config.yaml
database:
  type: postgresql  # or mysql
  path: ""           # Leave blank, use the connection string below
  connection_string: "postgresql://user:pass@localhost:5432/openmoss"

6. One-Click Deployment with Docker

The project provides a Dockerfile and docker-compose.yml for one-click environment deployment:

# Start all services
docker-compose up -d

# View logs
docker-compose logs -f

This will package and run OpenMOSS, the database, and the frontend together, ideal for quick verification or migration to a server.

7. Building Your Own Content Production Pipeline

Referring to the actual case of 1M Reviews, the complete pipeline is:

  1. planner deconstructs the task: Collect → Translate → Review → Publish
  2. executor (Collect) scrapes news via the Grok Search Skill
  3. executor (Translate) calls a translation API to rewrite content
  4. reviewer audits content quality and formatting
  5. executor (Publish) calls the WordPress Skill to publish to the site
  6. patrol monitors system status and alerts immediately upon detecting anomalies

The entire process operates fully autonomously; you only need to set the goals at the start and accept the results at the end.

Updated March 29, 2026