OpenLobster Getting Started Guide: A More Secure Alternative to OpenClaw

March 17, 2026

Foreword

By the end of 2025, OpenClaw had broken fully into the mainstream. This self-hosted Agent platform that "lets AI do things for you" saw its GitHub stars skyrocket past 200,000, becoming almost synonymous with AI Agents.

However, this explosive popularity was accompanied by a wave of warnings from security teams: 26% of community Skills contained vulnerabilities, 40,000 instances were exposed on the public internet, and CVEs arrived in batches. The community began to realize that OpenClaw is like a very fast car without seatbelts: great to drive, but things go south quickly when trouble hits.

This is why OpenLobster exists.

It is not a simple UI refresh but a complete security reconstruction. The backend is rewritten in Go, with encrypted storage, authentication enabled by default, and a true multi-user architecture. Some in the community call it "what OpenClaw should have been." Today, we'll see if that claim holds water.

Core Comparison: OpenLobster vs. OpenClaw

Let's take a moment to understand the differences between the two, which will help you decide if it's worth migrating.

Architecture: Node.js → Go

OpenClaw is a Node.js/TypeScript project that depends on a pile of npm packages. OpenLobster is rewritten directly in Go, and the entire backend is a ~66MB static binary.

Actual measurement data:

  • Startup time: 200ms (OpenClaw takes 2-3 seconds)
  • Memory footprint: 30MB (OpenClaw is 150MB+)
  • Deployment: One binary file + config file, no Node environment required

If you are running this on a Raspberry Pi or a low-spec VPS, this difference will be quite significant.

Memory System: Markdown → Graph Database

This is the most fundamental difference.

OpenClaw's "memory" is essentially a MEMORY.md file to which content is appended after every conversation. As concurrent sessions increase, the file becomes a tangled mess. The official documentation even states that "only the main session can write to MEMORY.md", which is phrased like a feature but is really an admission of the design's limits.

OpenLobster uses a true graph database architecture. It has two built-in modes:

  • File Backend: Local GML format, no extra services needed
  • Neo4j Backend: A true graph database supporting complex queries

Every concept established in a session becomes a node; relationships between people and associations between events become typed edges. This means you can truly "query" the AI's memory instead of just having it repeat what was said before.
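
To make this concrete, here is a rough sketch of what a memory graph could look like in GML. The node and edge labels are invented for illustration; OpenLobster's actual schema will differ.

```
graph [
  directed 1
  node [ id 1 label "alice" ]
  node [ id 2 label "project-alpha" ]
  node [ id 3 label "kickoff-meeting" ]
  edge [ source 1 target 2 label "works_on" ]
  edge [ source 3 target 2 label "part_of" ]
]
```

Because edges carry types, a query can walk from a person to a project to an event, rather than scanning flat text for keywords.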

Multi-User Support

OpenClaw has almost no concept of multi-user support. All sessions share a primary memory, and data can get crossed when used across different channels.

OpenLobster fundamentally supports multiple users:

  • Every user on every channel (Telegram, Discord) is a separate entity
  • Independent conversation history
  • Independent tool permissions
  • Independent Pairing process

A Telegram user and a Discord user can converse with the same AI simultaneously without interfering with each other.

Task Scheduling: Heartbeat → Cron

OpenClaw's scheduled tasks are handled by a daemon process that reads a HEARTBEAT.md file every 30 minutes. It’s simple and crude, but it only goes so far.

OpenLobster implements a full task scheduler:

  • Cron expressions handle recurring tasks
  • ISO 8601 handles one-time tasks
  • Task status, next execution time, and execution logs are all visualized in the Dashboard
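
As a sketch, a recurring task and a one-time task might be declared like this. The YAML field names here are assumptions for illustration only; check the Dashboard or your version's schema for the real ones.

```yaml
tasks:
  - name: daily-digest
    schedule: "0 8 * * *"              # cron: every day at 08:00
  - name: launch-reminder
    schedule: "2026-03-20T15:00:00Z"   # ISO 8601: runs exactly once
```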

Security Model: Default Open → Default Auth

This is the biggest change.

OpenClaw does not enable authentication by default, resulting in tens of thousands of instances exposed on Censys. One particular CVE even allowed unauthenticated attackers to call the Agent API directly.

OpenLobster's security strategy:

  • Dashboard requires a Bearer Token by default (OPENLOBSTER_GRAPHQL_AUTH_TOKEN)
  • Configuration files and secrets are all stored with encryption
  • API Keys and channel tokens are no longer written in plaintext in YAML, but stored in an encrypted backend (File or OpenBao)
  • OPENLOBSTER_* environment variables will never leak to terminal tools

WARNING

If you are going to expose your instance to the public internet, remember to configure the authentication Token immediately.
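
A strong token is easy to generate up front. A minimal sketch using openssl (any high-entropy random string works):

```shell
# Generate a 32-byte (64 hex character) random token for the Dashboard
openssl rand -hex 32
```

Pass the result as OPENLOBSTER_GRAPHQL_AUTH_TOKEN when starting the container.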

MCP Integration

OpenClaw's MCP support is basically a demo. OpenLobster implements a full MCP ecosystem:

  • Connect to any Streamable HTTP MCP Server
  • Full OAuth 2.1 flow
  • Tools for each server can be browsed individually
  • User-level permission matrix to control who can use which tools
  • Built-in Marketplace for one-click integration of common services

Environment Preparation

OpenLobster has extremely low hardware requirements, which is one of its advantages over OpenClaw.

Minimum Hardware Requirements

Config     Recommended            Minimum
CPU        2 Cores                1 Core
RAM        1 GB                   512 MB
Storage    10 GB SSD              5 GB
OS         Linux/macOS/Windows    Linux (Docker)

It can run on a Raspberry Pi 3/4, a 512MB RAM VPS, a NAS, or even a $15 LicheeRV Nano. Testing shows it runs effortlessly on a Raspberry Pi 4.

Software Dependencies

  • Docker (Recommended, easiest)
  • Or: Go 1.21+ (if compiling yourself)

Quick Deployment

We will use Docker to demonstrate, as it is the fastest way.

1. Create Config Directory

mkdir -p ~/.openlobster/data ~/.openlobster/workspace

2. Start Container

docker run -p 8080:8080 \
  -e OPENLOBSTER_GRAPHQL_HOST=0.0.0.0 \
  -e OPENLOBSTER_GRAPHQL_AUTH_TOKEN=your-secret-token \
  -e OPENLOBSTER_AGENT_NAME=my-agent \
  -e OPENLOBSTER_DATABASE_DRIVER=sqlite \
  -e OPENLOBSTER_DATABASE_DSN=/app/data/openlobster.db \
  -v ~/.openlobster/data:/app/data \
  -v ~/.openlobster/workspace:/app/workspace \
  -d ghcr.io/neirth/openlobster/openlobster:latest

Explanation of key configurations:

  • OPENLOBSTER_GRAPHQL_AUTH_TOKEN: Dashboard access password, required
  • OPENLOBSTER_AGENT_NAME: The name of your AI assistant
  • OPENLOBSTER_DATABASE_DRIVER=sqlite: Uses SQLite, no extra database service required
  • Port 8080 is the entry point for the GraphQL API and Web UI
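
If you prefer Compose, the docker run invocation above translates directly. This sketch reuses the same image, variables, and mounts:

```yaml
services:
  openlobster:
    image: ghcr.io/neirth/openlobster/openlobster:latest
    ports:
      - "8080:8080"
    environment:
      OPENLOBSTER_GRAPHQL_HOST: 0.0.0.0
      OPENLOBSTER_GRAPHQL_AUTH_TOKEN: your-secret-token
      OPENLOBSTER_AGENT_NAME: my-agent
      OPENLOBSTER_DATABASE_DRIVER: sqlite
      OPENLOBSTER_DATABASE_DSN: /app/data/openlobster.db
    volumes:
      - ~/.openlobster/data:/app/data
      - ~/.openlobster/workspace:/app/workspace
    restart: unless-stopped
```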

3. Verify Startup

curl http://127.0.0.1:8080/health

Returning {"status":"ok"} means it has started successfully.


Initial Configuration

First-Run Wizard

Open http://127.0.0.1:8080 in your browser to enter the Setup Wizard.

TIP

Remember to use the OPENLOBSTER_GRAPHQL_AUTH_TOKEN set during startup as the Bearer Token to log in.

The Setup Wizard will guide you through:

  1. Agent basic configuration (name, description)
  2. Choosing a database (SQLite / PostgreSQL / MySQL)
  3. Choosing a memory backend (File / Neo4j)
  4. Adding an AI Provider

Configure AI Provider

This is the most critical step. OpenLobster supports multiple AI Providers:

  • OpenAI
  • Anthropic (Claude)
  • Ollama (Local models)
  • OpenRouter
  • Docker Model Runner
  • Any OpenAI-compatible interface

TIP

We recommend using the Defapi platform here. It is an API relay service with prices at half the official rate, supporting mainstream models like OpenAI, Claude, and Gemini. It is fully compatible with the v1/chat/completions interface, so you can simply replace the base_url and API Key without modifying code.

Defapi configuration example (using Claude Sonnet as an example):

# Environment variable method
OPENLOBSTER_PROVIDERS_OPENAICOMPAT_API_KEY=Your-Defapi-Key
OPENLOBSTER_PROVIDERS_OPENAICOMPAT_BASE_URL=https://api.defapi.org/v1
OPENLOBSTER_PROVIDERS_OPENAICOMPAT_MODEL=anthropic/claude-sonnet-4-5

Or fill it in the Dashboard's Settings → Providers interface:

Field           Value
Provider Type   OpenAI Compatible
API Key         Your Defapi Key
Base URL        https://api.defapi.org/v1
Model           anthropic/claude-sonnet-4-5

Advantages of Defapi:

  • Price is half of the official rate
  • Interface is fully compatible; no changes needed in OpenLobster
  • Supports major models like Claude, GPT, and Gemini
  • Low latency for access within mainland China

Connecting Communication Channels

OpenLobster supports enabling multiple channels simultaneously.

Telegram

  1. Search for @BotFather in Telegram and create a new bot
  2. Get the Bot Token
  3. Fill in the Token in Dashboard → Settings → Channels → Telegram
  4. After saving, your bot is online

The first time a user sends a message, it will trigger the pairing process to bind the Telegram User ID to the OpenLobster account.

Discord

  1. Create an Application in the Discord Developer Portal
  2. Add a Bot and get the Token
  3. Invite the Bot to your server (the Message Content intent must be enabled)
  4. Fill in the Bot Token in the Dashboard

Other Platforms

The process is similar for WhatsApp (requires the Business API), Slack (Socket Mode), and Twilio SMS: find the corresponding channel in the Dashboard and fill in the credentials.


Troubleshooting Common Issues

1. Dashboard Login Failed

Check if OPENLOBSTER_GRAPHQL_AUTH_TOKEN is set correctly. All API requests must carry it in the Header:

curl -H "Authorization: Bearer your-secret-token" \
  http://127.0.0.1:8080/graphql

2. AI Does Not Reply to Messages

Common reasons:

  • API Key was entered incorrectly
  • Model name mismatch (check case sensitivity, e.g., claude-sonnet-4-5 is not claude-sonnet-4.5)
  • Network issues (ensure the Docker container can access the internet)

Check logs: docker logs <container-id>, which usually contains detailed error info.
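
To narrow the output down to problems, a simple filter helps. The container name openlobster below is a placeholder for your actual container ID or name.

```shell
# Show only error/failure lines from the last 200 log entries
docker logs --tail 200 openlobster 2>&1 | grep -iE 'error|fail'
```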

3. Memory Not Working

If using the File backend, check if the GML files in the ~/.openlobster/data/ directory have content.

If using Neo4j, ensure the Neo4j service is running correctly and connection info is accurate.

4. Channel Connected but No Messages Received

Confirm if the Bot has permissions for the corresponding platform:

  • Telegram: Bot must be in the group and set to allow receiving group messages
  • Discord: Bot needs Read Messages/View Channels permissions

5. MCP Tool Call Failed

MCP servers must be in Streamable HTTP mode. Check:

  • If the server URL is accessible
  • If OAuth configuration is correct (if required)
  • If tool permissions are allowed

6. Startup Failed After Upgrade

Database migrations run automatically, but it is recommended to back up first:

cp -r ~/.openlobster/data ~/.openlobster/data.backup
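
If you upgrade regularly, a timestamp suffix keeps each backup from overwriting the last one. A small sketch:

```shell
# Back up the data directory with a timestamp suffix before upgrading
STAMP=$(date +%Y%m%d-%H%M%S)
cp -r ~/.openlobster/data ~/.openlobster/data.backup-"$STAMP"
echo "backed up to ~/.openlobster/data.backup-$STAMP"
```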

Advanced Directions

MCP Integration

OpenLobster's MCP support is among the most complete and security-conscious implementations in any self-hosted AI Agent platform.

In Dashboard → MCP, you can browse connected servers and available tools. Each tool has detailed parameter descriptions and permission controls.

The community-maintained Marketplace has ready-to-use MCP servers, such as File System, GitHub, Slack, etc. Add them with one click without manual configuration.

Neo4j Deployment

If your use case requires complex memory queries, it is recommended to use Neo4j.

# Start Neo4j
docker run -p 7474:7474 -p 7687:7687 \
  -e NEO4J_AUTH=neo4j/password \
  neo4j:latest

Then switch to the Neo4j backend in OpenLobster settings and enter the connection info.
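
The connection settings can presumably also be supplied as environment variables. The variable names below are hypothetical (they just follow the OPENLOBSTER_* pattern used elsewhere), so verify them against your version's settings screen or docs:

```
# Hypothetical variable names -- confirm against your version's documentation
OPENLOBSTER_MEMORY_BACKEND=neo4j
OPENLOBSTER_NEO4J_URI=bolt://127.0.0.1:7687
OPENLOBSTER_NEO4J_USERNAME=neo4j
OPENLOBSTER_NEO4J_PASSWORD=password
```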

The real value of a graph database is that you can ask the AI, "What happened with that project we discussed last time?" and it can find it by following the relationship chain rather than simple keyword retrieval.

Multi-Instance Cluster

The stateless design of the Go backend makes horizontal scaling easy. For high availability, you can:

  • Have multiple OpenLobster instances share the same Neo4j
  • Share the same PostgreSQL database
  • Add a load balancer in front
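
A minimal load-balancing sketch in nginx; the upstream addresses are placeholders, and TLS is omitted for brevity:

```nginx
upstream openlobster {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}
server {
    listen 80;
    location / {
        # Forward the Bearer token through to the backend
        proxy_set_header Authorization $http_authorization;
        proxy_pass http://openlobster;
    }
}
```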

Closing Thoughts

If you are looking for a secure alternative to OpenClaw or want a self-hosted AI assistant with lower resource consumption, OpenLobster is worth a try.

Updated March 17, 2026