TIP
GitHub: https://github.com/multica-ai/multica ¡ Apache-2.0 open-source license ¡ Docker + Claude Code/Codex/OpenClaw/OpenCode
Beginner-friendly | About 20 minutes | Youâll learn Multicaâs core concepts (Boards / Agents / Skills / Runtimes), self-hosted deployment (Docker), local Daemon setup, and the complete workflow for creating an Agent and assigning tasks
Project Overview
Letâs start with a question: how many AI coding assistants are running in your team right now? You might have Claude Code, Codex, and some custom Agents. They all work independentlyâno task ownership, no progress tracking, no accumulated memory. Once a task is done, everything is forgotten. Next time you face the same problem, you start over from scratch.
What Multica does is** to build a task board for AI coding assistants.** You can create an Issue on it and assign it to your AI employees the same way youâd assign tasks to teammates. It will automatically pick up tasks, execute code, report progress, comment, and even proactively create new Issues. Whatâs even more interesting is that it turns successful solutions into âSkills,â so the entire teamâs AI employees can reuse those hard-won experiences.
In other words, it solves the problem of AI coding assistants having no organization, no memory, and no collaboration.
Target Audience
- Developers who use AI coding assistants like Claude Code / Codex but are struggling with chaotic management
- Product and engineering teams that want to build a humanâAI hybrid team
- Technical leads who are interested in the autonomy of AI Agents
Core Dependencies & Environment
- Docker + Docker Compose (self-hosted deployment)
- At least one AI coding CLI (Claude Code / Codex / OpenClaw / OpenCode)
- Go 1.26+ (for source development) / Node.js 20+ + pnpm (for frontend development)
TIP
Donât want to deploy yourself? Just use Multica Cloud. No configuration requiredâopen it and youâre ready to go.
Full Project Directory Tree
multica/
âââ server/ # Go backend (Chi router + WebSocket real-time push)
âââ apps/web/ # Next.js 16 frontend (task board UI)
âââ packages/ # Shared packages
â âââ core/ # Core logic (Zustand state, TanStack Query)
â âââ ui/ # Atomic UI components (shadcn + Base UI)
â âââ views/ # Shared pages and components
âââ docker-compose.selfhost.yml # Self-hosted deployment configuration
~/.multica/ # User-level CLI configuration
âââ daemon.log # Daemon runtime logs
âââ config # Auth tokens and server configuration
Step-by-Step Setup
Step 1: One-Command Deployment with Docker Compose
If you choose self-hosting, deployment only takes three lines of commands:
git clone https://github.com/multica-ai/multica.git
cd multica
cp .env.example .env
Edit .env, at least set JWT_SECRET:
JWT_SECRET=$(openssl rand -hex 32)
# Then paste the generated random string into the .env field JWT_SECRET
Start all services:
docker compose -f docker-compose.selfhost.yml up -d
This will automatically start three containers: PostgreSQL (with the pgvector extension), the Go backend (auto-runs database migrations), and the Next.js frontend. Open http://localhost:3000 to see the task board.
WARNING
Self-hosting requires configuring an email service to sign in (Magic Link authentication). Put RESEND_API_KEY in .env (from resend.com); otherwise the login link canât be sent.
Step 2: Register an Account + Create a Workspace
Open http://localhost:3000 and sign in with your email (a Magic Link will be sent to your inbox).
After signing in, youâll enter the default Workspace. A Workspace is Multicaâs isolation unitâeach Workspace has its own Agents, Issues, and members. If your team has multiple projects, you can create multiple Workspaces for isolated management.
Step 3: Install the multica CLI + Start the Daemon
The Daemon is a local runtime. It turns your machine into a âRuntimeâ capable of executing AI tasks. You can install it in either of these ways:
# Option 1: Homebrew (macOS / Linux)
brew tap multica-ai/tap
brew install multica
# Option 2: Build from source
git clone https://github.com/multica-ai/multica.git
cd multica
make build
cp server/bin/multica /usr/local/bin/multica
After installation, connect to your self-hosted server (if itâs cloud Multica, skip this step):
# Self-hosting requires setting the server address first
export MULTICA_APP_URL=http://localhost:3000
export MULTICA_SERVER_URL=ws://localhost:8080/ws
# Persist configuration
multica config set app_url http://localhost:3000
multica config set server_url ws://localhost:8080/ws
Sign in and start the Daemon:
multica login # Open the browser to complete authentication
multica daemon start # Run in the background and start listening for tasks
The Daemon will automatically detect the AI coding CLIs installed on your machine (claude, codex, openclaw, opencode) and register them as available Runtimes.
Step 4: Create Your First AI Agent in Settings â Agents
Back in the frontend, go to Settings â Agents â New Agent:
- Choose a Runtime (the CLI detected on your machine)
- Choose a Provider (Claude Code / Codex / OpenClaw / OpenCode)
- Name the Agentâthis is its âidentityâ on the task board
After creation, this Agent will appear in the members list on the task board with a robot icon.
Step 5: Verify the Runtime is Online
Go to Settings â Runtimes. You should see your machine marked as âActive.â If not:
# Check the Daemon status
multica daemon status
# View real-time logs
multica daemon logs -f
A Runtime is usually offline because the Daemon isnât running, or the AI CLI isnât in your PATH. Install Claude Code or Codex, then run multica daemon start again.
Step 6: Create an Issue and Assign It to an Agent
Back on the task board, click New Issue, fill in the title and description, then Assign it to the Agent you just created.
Or create it via CLI:
multica issue create \
--title "Add dark mode to settings page" \
--description "Use CSS variables for theming, support system preference detection" \
--priority high \
--assignee "Lambda" # Fill in your Agent name here
After assigning, youâll see the Issue status automatically change from todo to in_progress. Thatâs because the Daemon detected that a task was assigned to itâso it immediately picks it up and starts executing.
Step 7: Watch the Agent Execute Autonomously
The Agent will automatically:
- Pick up the task â When it sees the Issue assigned to it, it automatically enters the execution state
- Report progress â It pushes status updates to the task board in real time via WebSocket, so you can see what itâs doing
- Comment and provide feedback â Post comments under the Issue to explain progress or obstacles
- Create sub-Issues â If the task is too large, it proactively breaks it into smaller sub-tasks
- Finish or report status â When the task is complete, mark it as
done. If it encounters something it canât resolve, mark it asblockedand explain why
You can keep working on other thingsâyou donât need to watch it. When you come back, the PR may already be created.
Step 8: View Execution Logs
Use the CLI to inspect the Agentâs detailed execution flow:
# List all execution records for a given Issue
multica issue runs <issue-id>
# View message logs for a specific run (Agent reasoning chain, tool calls, outputs)
multica issue run-messages <task-id>
# Real-time tracking (tail -f style)
multica issue run-messages <task-id> --since 0
This is crucial for debugging and understanding Agent behaviorâyouâre seeing not just the result, but the entire reasoning and execution process.
Step 9: Accumulate Skills
This is the most interesting part of Multica. When an Agent successfully solves a problem, its solution can be stored as a âSkillâ for all Agents in the team to call.
A Skill is essentially reusable experienceâdeployment workflows, code review rules, data migration steps. Once one Agent learns a Skill, other Agents can directly reuse it when they face similar scenarios, without having to rediscover everything from scratch.
You can view and manage Skills on the Settings â Skills page.
TIP
The Skills accumulation mechanism helps the teamâs AI capabilities grow âcompoundedââevery time a problem is solved, the whole teamâs capability improves, instead of starting from zero each time.
Common Troubleshooting
1. Daemon Canât Connect to the Server
# For self-hosting, you must set the server address first
multica config show # View current configuration
# If youâre connecting to cloud but want to connect to self-hosting:
multica config set server_url ws://localhost:8080/ws
multica daemon stop
multica daemon start
2. Runtime Shows as Offline
# Confirm the AI CLI is installed and in PATH
which claude
which codex
# Restart the Daemon
multica daemon stop
multica daemon start
# Check status
multica daemon status
3. Agent Doesnât React After Issue Assignment
There are three possible causes:
# Cause 1: Daemon isnât running
multica daemon status
# Cause 2: The Workspace isnât being watched
multica workspace list # Check watch status
multica workspace watch <workspace-id>
# Cause 3: The Agent isnât watching this Workspace
# In Settings â Workspaces, confirm the Workspace is checked
4. Docker Container Health Check Fails
# Check the status of all containers
docker compose -f docker-compose.selfhost.yml ps
# View backend logs
docker compose -f docker-compose.selfhost.yml logs backend
# Common cause: the PostgreSQL server isnât ready when the backend starts
# Wait 10 seconds and retry
docker compose -f docker-compose.selfhost.yml restart backend
5. Login Fails (Magic Link Doesnât Arrive)
# Check whether RESEND_API_KEY is configured in .env
grep RESEND .env
# If not, apply for an API Key at https://resend.com
# Then add it to .env
RESEND_API_KEY=re_xxxxx
[email protected]
# Restart the backend
docker compose -f docker-compose.selfhost.yml restart backend
6. Agent Execution Times Out
By default, Agents have a 2-hour execution timeout. If a task is too large:
# Extend the timeout (unit: hours)
export MULTICA_AGENT_TIMEOUT=8h
multica daemon stop
multica daemon start
Or split the big task into multiple smaller Issues in the Issue page and assign them to the Agent for step-by-step completion.
Further Reading / Advanced Directions
Multica Cloud: Donât want to deploy yourself? Just register at multica.ai for zero-configuration use. Cloud supports all features, including multiple Workspaces, multiple Agents, and Skills management.
Mixed Runtimes: You can run multiple Daemon configuration files on the same machine (multica --profile staging daemon start), or mix local Runtimes with cloud Runtimesâideal for multi-environment development scenarios.
Four Runtime Comparison: Claude Code (Anthropic), Codex (OpenAI), OpenClaw (local Agent), and OpenCode each have their own strengths. Claude Code is strongest at code understanding and modifications, Codex is more flexible for general tasks, and OpenClaw is suited for scenarios that require a local toolchain. Choose the appropriate Runtime based on your task type.
Deep Use of Skills: How do you make sure Skills are truly reused rather than becoming an archive? You can set up a Skill review mechanism (similar to code review), periodicallyć´ç effective Skills, and retire outdated onesâso the teamâs AI capabilities keep iterating.
Architecture Breakdown: Multicaâs backend is Go (Chi router + WebSocket), the frontend is Next.js 16 (App Router + Zustand + TanStack Query), and the database uses PostgreSQL 17 + pgvector (vector storage). Real-time WebSocket push is the core of the entire systemâevery step of the Agentâs execution is pushed to the task board, so you can see progress without refreshing the page.