NemoClaw Deployment Guide: Making OpenClaw More Secure and Controllable

March 19, 2026

WARNING

This article is intended for developers with OpenClaw experience and requires basic knowledge of Linux/Docker.

Imagine this: you ask an AI to help you write code, research data, or manage a server, and it accidentally deletes critical files or quietly exfiltrates data after a malicious prompt injection. This is a nightmare scenario for almost every AI agent user. NVIDIA's NemoClaw was created to solve exactly this: it wraps OpenClaw in a "security shell," letting you enjoy the convenience of AI agents without worrying about them going out of control.

In this article, we will explore how this protective layer works and how to quickly deploy a secure, controllable AI agent.

Target Audience

  • Developers with OpenClaw experience looking to enhance security.
  • Technical leads of teams that need AI agent security guarantees.
  • DevOps personnel aiming to deploy AI agents in production environments.

Core Dependencies and Environment

Category             Requirement
-------------------  --------------------------------------------------------
Operating System     Ubuntu 22.04 LTS+ / macOS (Apple Silicon) / Windows WSL2
Node.js              20+
npm                  10+
Container Runtime    Docker Desktop / Colima (macOS)
Memory               16GB recommended (minimum: 8GB RAM + 8GB swap)
Disk                 40GB of available space recommended

TIP

If you are using macOS (Apple Silicon), Colima or Docker Desktop is recommended; Windows users require WSL2 + Docker Desktop.

Project Structure Overview

NemoClaw is divided into four layers. Understanding this architecture will help you troubleshoot more effectively:

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                        Plugin Layer                         β”‚
β”‚             (TypeScript CLI - nemoclaw command)             β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚                       Blueprint Layer                       β”‚
β”‚     (Python Blueprint - Orchestrates Sandbox & Policy)      β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚                        Sandbox Layer                        β”‚
β”‚    (OpenShell Container - Isolated OpenClaw Environment)    β”‚
β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€
β”‚                       Inference Layer                       β”‚
β”‚    (Inference Router - Intercepted by OpenShell Gateway)    β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Core logic: None of your inference requests flow directly out of the sandbox; instead, they are routed through the OpenShell gateway. This means even if the AI attempts to send data out secretly, the gateway can intercept it.

Step-by-Step Deployment

1. Check System Environment

First, confirm your system meets the requirements:

# Check OS version
grep -E "^(NAME|VERSION)=" /etc/os-release

# Check Node.js version
node --version  # Requires 20+

# Check npm version
npm --version   # Requires 10+

# Check if Docker is running
docker ps

WARNING

If Docker is not running, start it first. NemoClaw relies on containers to create sandbox environments.
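The checks above can be combined into a fail-fast preflight script. Here is a minimal sketch; the need_major helper is our own, not part of NemoClaw:

```shell
# Hypothetical helper: enforce a minimum major version on a semver-ish string.
need_major() {  # usage: need_major <version-string> <minimum-major>
  local major="${1#v}"   # drop the leading "v" that node prints (v20.x.y)
  major="${major%%.*}"   # keep only the digits before the first dot
  [ "$major" -ge "$2" ]
}

need_major "v20.11.1" 20 && echo "Node.js ok"
need_major "9.8.1" 10 || echo "npm too old"
```

Wire it to the real tools with need_major "$(node --version)" 20, need_major "$(npm --version)" 10, and docker info >/dev/null before proceeding.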

2. Install OpenShell

NemoClaw is built on NVIDIA OpenShell, so it must be installed first:

# Install OpenShell (Linux)
curl -fsSL https://www.nvidia.com/openshell.sh | bash

# Verify installation
openshell --version

TIP

If you are using DGX Spark, you need to configure cgroup v2 and Docker according to official docs before running the install script.

3. Install NemoClaw CLI

The official one-click installation script:

curl -fsSL https://www.nvidia.com/nemoclaw.sh | bash

After installation, refresh your environment variables:

# For bash
source ~/.bashrc

# For zsh
source ~/.zshrc

Verify installation:

nemoclaw --version

4. Run Interactive Onboarding

This is the most critical step. Running nemoclaw onboard starts an interactive wizard to help you configure the sandbox, inference provider, and network policies:

nemoclaw onboard

The wizard will ask:

  1. Sandbox Name: e.g., my-agent
  2. Inference Provider: Defapi is highly recommended because it costs half the official price and supports major models like Claude, GPT, and Gemini.
  3. API Key: If choosing Defapi, enter the key obtained from Defapi.
  4. Network Policy: Defaults are strict, allowing only necessary outbound requests.

TIP

Defapi supports the v1/chat/completions interface and is fully compatible with OpenClaw. Using Defapi saves money while providing an identical experience to official providers.
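Because the interface is OpenAI-compatible, you can sanity-check a key with a raw request. A sketch follows; the endpoint URL and model name are placeholders (substitute the real values from Defapi's docs):

```shell
# Placeholder endpoint: replace with the base URL from Defapi's documentation.
DEFAPI_URL="https://example.defapi.invalid/v1/chat/completions"
read -r -d '' PAYLOAD <<'EOF' || true
{
  "model": "claude-sonnet-4.5",
  "messages": [{"role": "user", "content": "hello, can you introduce yourself?"}]
}
EOF
printf '%s\n' "$PAYLOAD"
# Uncomment to send (requires a valid key in $DEFAPI_KEY):
# curl -sS "$DEFAPI_URL" \
#   -H "Authorization: Bearer $DEFAPI_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
```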

Once finished, you should see output like this:

──────────────────────────────────────────────────
Sandbox      my-agent (Landlock + seccomp + netns)
Model        defapi/claude-sonnet-4.5
──────────────────────────────────────────────────
Run:         nemoclaw my-agent connect
Status:      nemoclaw my-agent status
Logs:        nemoclaw my-agent logs --follow
──────────────────────────────────────────────────

[INFO]  === Installation complete ===

5. Connect to Sandbox and Start Chatting

Use nemoclaw <name> connect to enter the sandbox's interactive shell:

nemoclaw my-agent connect

Once inside, you will see the sandbox prompt:

sandbox@my-agent:~$

Now launch the OpenClaw TUI to chat with the AI:

openclaw tui

Try a test message:

hello, can you introduce yourself?

You should receive a reply. Congratulations! You have successfully deployed a sandboxed AI agent.

6. Using the CLI for Single Requests

If you don't need the interactive UI and want a quick test, use the CLI mode:

openclaw agent --agent main --local -m "What is 2+2?" --session-id test

This prints the full response directly in the terminal, ideal for automation scripts.
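For automation, it helps to wrap that invocation with retries. A sketch using only the flags shown above; the ask_agent name and retry policy are our own, not part of OpenClaw:

```shell
# ask_agent is a hypothetical wrapper: it retries the documented single-shot
# CLI call up to three times with a short backoff.
ask_agent() {  # usage: ask_agent "<prompt>" [session-id]
  local prompt="$1" session="${2:-auto-$$}" attempt
  for attempt in 1 2 3; do
    if openclaw agent --agent main --local -m "$prompt" --session-id "$session"; then
      return 0
    fi
    echo "attempt $attempt failed; retrying..." >&2
    sleep "$attempt"
  done
  return 1
}
```

Example usage: ask_agent "What is 2+2?" nightly-check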

7. Configure Custom Network Policies (Optional)

The default policy is very strict. If your AI needs access to specific services, you can customize it:

# View current policy
nemoclaw my-agent status

# Hot-reload policy (edit policy file first)
openshell policy apply -f custom-policy.yaml

Example custom-policy.yaml:

egress:
  allowed:
    - host: "api.github.com"
      port: 443
    - host: "api.openai.com"
      port: 443
  denied:
    - host: "*"
      port: "*"
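To reason about what this policy permits, here is a sketch of the implied default-deny evaluation in shell. The matching semantics are an assumption on our part; OpenShell's real engine may differ (e.g. in how it handles wildcards):

```shell
# Assumed semantics: explicit host:port allow rules win; everything else
# falls through to the denied "*" rule.
ALLOWED="api.github.com:443 api.openai.com:443"

allow_egress() {  # usage: allow_egress <host> <port>
  local rule
  for rule in $ALLOWED; do
    [ "$1:$2" = "$rule" ] && return 0
  done
  return 1
}

allow_egress api.github.com 443 && echo "allowed"
allow_egress evil.example.com 443 || echo "denied"
```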

WARNING

Network policies are a vital security barrier. Think carefully before adding allow rulesβ€”every open port increases the potential attack surface.

8. Manage Sandbox Lifecycle

Commonly used maintenance commands:

# Check sandbox status
nemoclaw my-agent status

# View logs
nemoclaw my-agent logs --follow

# Stop sandbox
nemoclaw my-agent stop

# Start sandbox
nemoclaw my-agent start

# Delete sandbox
nemoclaw my-agent delete
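These subcommands compose naturally in scripts. For example, a restart helper (restart_sandbox is our own shorthand, not a built-in command):

```shell
# Hypothetical convenience wrapper built only from the documented subcommands.
restart_sandbox() {  # usage: restart_sandbox <name>
  local name="$1"
  nemoclaw "$name" stop  || return 1
  nemoclaw "$name" start || return 1
  nemoclaw "$name" status
}
```

Example usage: restart_sandbox my-agent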

9. Multi-Instance Management

You can run multiple isolated sandbox instances on the same machine:

# Create a second sandbox
nemoclaw onboard  # Follow the wizard again

# List all sandboxes
openshell sandbox list

Troubleshooting

Issue 1: "command not found" for install script

If nemoclaw cannot be found, try:

source ~/.bashrc   # bash
source ~/.zshrc    # zsh

Or use the full path:

~/.local/bin/nemoclaw --version
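If the full path above works, the durable fix is putting ~/.local/bin on PATH. A sketch (path_has is a hypothetical helper, and we assume the installer used that directory):

```shell
# Check whether a directory is already on PATH before touching your shell rc.
path_has() { case ":$PATH:" in *":$1:"*) return 0 ;; *) return 1 ;; esac; }

if ! path_has "$HOME/.local/bin"; then
  echo 'add to your shell rc: export PATH="$HOME/.local/bin:$PATH"'
fi
```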

Issue 2: OOM (Out of Memory) Errors

The sandbox image is about 2.4GB uncompressed. If your machine has less than 8GB of RAM, running image extraction, Docker, and k3s simultaneously may trigger the OOM killer. Solutions:

  • Add at least 8GB of swap:

    sudo fallocate -l 8G /swapfile
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile
  • Use a machine with more RAM.

Issue 3: Container Runtime Not Ready

# Restart Docker
sudo systemctl restart docker

# Check Docker status
docker ps

Issue 4: NVIDIA API Key Configuration Failure

If you choose NVIDIA Cloud inference instead of Defapi, get your key from build.nvidia.com. Ensure:

  • Key is pasted correctly.
  • Account has sufficient balance.
  • Network can reach build.nvidia.com.

TIP

Highly recommend using Defapi as an alternativeβ€”it's half the price and offers faster global access.

Issue 5: Network Policy Blocking Valid Requests

When the AI is blocked from reaching a service, the TUI prompts you to allow it once or add a permanent rule:

[OpenShell] Agent attempted to reach api.example.com:443
Allow? [y/n/a]:  # y=allow once, n=deny, a=add permanently

Issue 6: Abnormal Sandbox State

First check the NemoClaw layer:

nemoclaw my-agent status

Then check the underlying OpenShell layer:

openshell sandbox list
openshell sandbox inspect my-agent

Advanced Directions

1. Switching Inference Providers

If you started with NVIDIA Cloud and want to switch to Defapi, update the inference config:

# Update provider configuration
openshell provider update --provider defapi --api-key <your-key>

View the full model list on the Defapi website.

2. Monitoring and Log Analysis

Monitor sandbox activity in real-time with OpenShell TUI:

openshell term

Here you can see:

  • Every network request made by the AI.
  • Filesystem read/write operations.
  • Inference call details.

3. Deploy to Remote GPU

If local performance is insufficient, deploy to a cloud GPU instance:

nemoclaw deploy --target remote-gpu --instance-type A100

Refer to official docs for detailed tutorials.

4. Telegram/Discord Integration

Enable AI communication via Telegram or Discord bridges:

nemoclaw start telegram-bridge
nemoclaw start discord-bridge

Updated March 19, 2026