Getting Started with Multica: Open-Source Claude Managed Agents, an AI Employee Management System with a Task Board, Autonomous Task Pickup, and Skills You Can Build Up

April 11, 2026

TIP

GitHub: https://github.com/multica-ai/multica · Apache-2.0 open-source license · Docker + Claude Code/Codex/OpenClaw/OpenCode

Beginner-friendly | About 20 minutes | You’ll learn Multica’s core concepts (Boards / Agents / Skills / Runtimes), self-hosted deployment (Docker), local Daemon setup, and the complete workflow for creating an Agent and assigning tasks


Project Overview

Let’s start with a question: how many AI coding assistants are running in your team right now? You might have Claude Code, Codex, and some custom Agents. They all work independently—no task ownership, no progress tracking, no accumulated memory. Once a task is done, everything is forgotten. Next time you face the same problem, you start over from scratch.

What Multica does is **build a task board for AI coding assistants.** You can create an Issue on it and assign it to your AI employees the same way you’d assign tasks to teammates. They automatically pick up tasks, execute code, report progress, comment, and even proactively create new Issues. What’s even more interesting is that it turns successful solutions into “Skills,” so every AI employee on the team can reuse those hard-won experiences.

In other words, it solves the problem of AI coding assistants having no organization, no memory, and no collaboration.


Target Audience

  • Developers who use AI coding assistants like Claude Code / Codex but are struggling with chaotic management
  • Product and engineering teams that want to build a human–AI hybrid team
  • Technical leads who are interested in the autonomy of AI Agents

Core Dependencies & Environment

  • Docker + Docker Compose (self-hosted deployment)
  • At least one AI coding CLI (Claude Code / Codex / OpenClaw / OpenCode)
  • Go 1.26+ (for source development) / Node.js 20+ with pnpm (for frontend development)

TIP

Don’t want to deploy yourself? Just use Multica Cloud. No configuration required—open it and you’re ready to go.

Full Project Directory Tree

multica/
├── server/              # Go backend (Chi router + WebSocket real-time push)
├── apps/web/           # Next.js 16 frontend (task board UI)
├── packages/           # Shared packages
│   ├── core/          # Core logic (Zustand state, TanStack Query)
│   ├── ui/            # Atomic UI components (shadcn + Base UI)
│   └── views/         # Shared pages and components
└── docker-compose.selfhost.yml  # Self-hosted deployment configuration

~/.multica/             # User-level CLI configuration
├── daemon.log         # Daemon runtime logs
└── config             # Auth tokens and server configuration

Step-by-Step Setup

Step 1: One-Command Deployment with Docker Compose

If you choose self-hosting, deployment takes just three commands:

git clone https://github.com/multica-ai/multica.git
cd multica
cp .env.example .env

Edit .env and set at least JWT_SECRET:

# Generate a random secret, then paste it into the JWT_SECRET field in .env
openssl rand -hex 32
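If you prefer not to paste by hand, here is a small sketch that writes the generated secret straight into .env in place (this assumes a JWT_SECRET= line already exists in the file, as in .env.example, and uses GNU sed syntax):

```shell
# Generate a secret and write it into .env in place
# (GNU sed shown; on macOS, change `sed -i` to `sed -i ''`)
SECRET=$(openssl rand -hex 32)
sed -i "s/^JWT_SECRET=.*/JWT_SECRET=${SECRET}/" .env
grep JWT_SECRET .env   # confirm the value was written
```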

Start all services:

docker compose -f docker-compose.selfhost.yml up -d

This will automatically start three containers: PostgreSQL (with the pgvector extension), the Go backend (auto-runs database migrations), and the Next.js frontend. Open http://localhost:3000 to see the task board.

WARNING

Self-hosting requires configuring an email service to sign in (Magic Link authentication). Put RESEND_API_KEY in .env (from resend.com); otherwise the login link can’t be sent.

Step 2: Register an Account + Create a Workspace

Open http://localhost:3000 and sign in with your email (a Magic Link will be sent to your inbox).

After signing in, you’ll enter the default Workspace. A Workspace is Multica’s isolation unit—each Workspace has its own Agents, Issues, and members. If your team has multiple projects, you can create multiple Workspaces for isolated management.

Step 3: Install the multica CLI + Start the Daemon

The Daemon is a local runtime. It turns your machine into a “Runtime” capable of executing AI tasks. You can install it in either of these ways:

# Option 1: Homebrew (macOS / Linux)
brew tap multica-ai/tap
brew install multica

# Option 2: Build from source
git clone https://github.com/multica-ai/multica.git
cd multica
make build
cp server/bin/multica /usr/local/bin/multica

After installation, connect to your self-hosted server (skip this step if you’re using Multica Cloud):

# Self-hosting requires setting the server address first
export MULTICA_APP_URL=http://localhost:3000
export MULTICA_SERVER_URL=ws://localhost:8080/ws

# Persist configuration
multica config set app_url http://localhost:3000
multica config set server_url ws://localhost:8080/ws

Sign in and start the Daemon:

multica login    # Open the browser to complete authentication
multica daemon start   # Run in the background and start listening for tasks

The Daemon will automatically detect the AI coding CLIs installed on your machine (claude, codex, openclaw, opencode) and register them as available Runtimes.
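Conceptually, that detection is just a PATH lookup. You can preview what the Daemon would find on your machine with a loop like this (a sketch, not Multica’s actual detection code):

```shell
# Preview which supported CLIs are on this machine's PATH
for cli in claude codex openclaw opencode; do
  if command -v "$cli" >/dev/null 2>&1; then
    echo "$cli: found at $(command -v "$cli")"
  else
    echo "$cli: not installed"
  fi
done
```

If all four lines say “not installed,” the Daemon will have no Runtime to register, which is the most common cause of an offline Runtime in Step 5.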

Step 4: Create Your First AI Agent in Settings → Agents

Back in the frontend, go to Settings → Agents → New Agent:

  1. Choose a Runtime (the CLI detected on your machine)
  2. Choose a Provider (Claude Code / Codex / OpenClaw / OpenCode)
  3. Name the Agent—this is its “identity” on the task board

After creation, this Agent will appear in the members list on the task board with a robot icon.

Step 5: Verify the Runtime is Online

Go to Settings → Runtimes. You should see your machine marked as “Active.” If not:

# Check the Daemon status
multica daemon status

# View real-time logs
multica daemon logs -f

A Runtime is usually offline because the Daemon isn’t running, or the AI CLI isn’t in your PATH. Install Claude Code or Codex, then run multica daemon start again.

Step 6: Create an Issue and Assign It to an Agent

Back on the task board, click New Issue, fill in the title and description, then Assign it to the Agent you just created.

Or create it via CLI:

multica issue create \
  --title "Add dark mode to settings page" \
  --description "Use CSS variables for theming, support system preference detection" \
  --priority high \
  --assignee "Lambda"   # Fill in your Agent name here

After assigning, you’ll see the Issue status automatically change from todo to in_progress: the Daemon detected a task assigned to the Agent, picked it up immediately, and started executing.

Step 7: Watch the Agent Execute Autonomously

The Agent will automatically:

  1. Pick up the task — When it sees the Issue assigned to it, it automatically enters the execution state
  2. Report progress — It pushes status updates to the task board in real time via WebSocket, so you can see what it’s doing
  3. Comment and provide feedback — Post comments under the Issue to explain progress or obstacles
  4. Create sub-Issues — If the task is too large, it proactively breaks it into smaller sub-tasks
  5. Finish or report status — When the task is complete, mark it as done. If it encounters something it can’t resolve, mark it as blocked and explain why

You can keep working on other things—you don’t need to watch it. When you come back, the PR may already be created.

Step 8: View Execution Logs

Use the CLI to inspect the Agent’s detailed execution flow:

# List all execution records for a given Issue
multica issue runs <issue-id>

# View message logs for a specific run (Agent reasoning chain, tool calls, outputs)
multica issue run-messages <task-id>

# Real-time tracking (tail -f style)
multica issue run-messages <task-id> --since 0
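Alongside the server-side run messages, the Daemon also keeps a local log (the ~/.multica/daemon.log file from the directory tree above), which is handy when the CLI itself misbehaves. A defensive way to inspect it that won’t error out on a fresh machine:

```shell
# Show the tail of the Daemon's local log, if it exists yet
LOG="$HOME/.multica/daemon.log"
if [ -f "$LOG" ]; then
  tail -n 50 "$LOG"
else
  echo "no daemon log yet at $LOG"
fi
```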

This is crucial for debugging and understanding Agent behavior—you’re seeing not just the result, but the entire reasoning and execution process.

Step 9: Accumulate Skills

This is the most interesting part of Multica. When an Agent successfully solves a problem, its solution can be stored as a “Skill” for all Agents in the team to call.

A Skill is essentially reusable experience—deployment workflows, code review rules, data migration steps. Once one Agent learns a Skill, other Agents can directly reuse it when they face similar scenarios, without having to rediscover everything from scratch.

You can view and manage Skills on the Settings → Skills page.

TIP

The Skills accumulation mechanism lets the team’s AI capabilities compound: every time a problem is solved, the whole team’s capability improves, instead of starting from zero each time.


Common Troubleshooting

1. Daemon Can’t Connect to the Server

# For self-hosting, you must set the server address first
multica config show  # View current configuration

# If you’re connecting to cloud but want to connect to self-hosting:
multica config set server_url ws://localhost:8080/ws
multica daemon stop
multica daemon start

2. Runtime Shows as Offline

# Confirm the AI CLI is installed and in PATH
which claude
which codex

# Restart the Daemon
multica daemon stop
multica daemon start

# Check status
multica daemon status

3. Agent Doesn’t React After Issue Assignment

There are three possible causes:

# Cause 1: Daemon isn’t running
multica daemon status

# Cause 2: The Workspace isn’t being watched
multica workspace list  # Check watch status
multica workspace watch <workspace-id>

# Cause 3: The Agent isn’t watching this Workspace
# In Settings → Workspaces, confirm the Workspace is checked

4. Docker Container Health Check Fails

# Check the status of all containers
docker compose -f docker-compose.selfhost.yml ps

# View backend logs
docker compose -f docker-compose.selfhost.yml logs backend

# Common cause: the PostgreSQL server isn’t ready when the backend starts
# Wait 10 seconds and retry
docker compose -f docker-compose.selfhost.yml restart backend
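If you want to confirm Postgres is actually reachable before restarting the backend, a bash-only TCP probe works without any extra tools (this assumes the compose default port 5432 and is a sketch, not part of Multica):

```shell
# Probe a TCP port using bash's /dev/tcp redirection (bash-only feature)
check_port() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

if check_port localhost 5432; then
  echo "postgres is accepting connections"
else
  echo "postgres not ready yet; wait a few seconds and retry"
fi
```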

5. Magic Link Email Not Sent

# Check whether RESEND_API_KEY is configured in .env
grep RESEND .env

# If not, apply for an API Key at https://resend.com
# Then add it to .env
RESEND_API_KEY=re_xxxxx
[email protected]

# Restart the backend
docker compose -f docker-compose.selfhost.yml restart backend

6. Agent Execution Times Out

By default, Agents have a 2-hour execution timeout. If a task is too large:

# Extend the timeout (duration format, e.g. 8h)
export MULTICA_AGENT_TIMEOUT=8h
multica daemon stop
multica daemon start

Or split the big task into multiple smaller Issues on the Issues page and assign them to the Agent for step-by-step completion.


Further Reading / Advanced Directions

Multica Cloud: Don’t want to deploy yourself? Just register at multica.ai for zero-configuration use. Cloud supports all features, including multiple Workspaces, multiple Agents, and Skills management.

Mixed Runtimes: You can run multiple Daemon configuration files on the same machine (multica --profile staging daemon start), or mix local Runtimes with cloud Runtimes—ideal for multi-environment development scenarios.

Comparing the Four Runtimes: Claude Code (Anthropic), Codex (OpenAI), OpenClaw (local Agent), and OpenCode each have their own strengths. Claude Code is strongest at code understanding and modification, Codex is more flexible for general tasks, and OpenClaw suits scenarios that require a local toolchain. Choose the appropriate Runtime based on your task type.

Deep Use of Skills: How do you make sure Skills are truly reused rather than becoming a dusty archive? You can set up a Skill review mechanism (similar to code review), periodically curate the effective Skills, and retire outdated ones, so the team’s AI capabilities keep iterating.

Architecture Breakdown: Multica’s backend is Go (Chi router + WebSocket), the frontend is Next.js 16 (App Router + Zustand + TanStack Query), and the database uses PostgreSQL 17 + pgvector (vector storage). Real-time WebSocket push is the core of the entire system—every step of the Agent’s execution is pushed to the task board, so you can see progress without refreshing the page.

Updated April 11, 2026