AutoClaw is a stable, well-engineered, and easily scalable AI Agent framework designed specifically for headless systems. Unlike agents that rely on visual parsing, AutoClaw is purely instruction-driven, which gives it stronger engineering properties and far greater stability, making it ideal for servers, CI/CD pipelines, and container environments.
This article will detail how to configure LLM APIs for AutoClaw, with a primary recommendation of using the Defapi platform to achieve half-price calls.
Introduction
AutoClaw is developed based on Node.js and TypeScript, using the OpenAI SDK to interact with various LLMs. By default, it uses the official OpenAI API, but through simple configuration, you can switch to any LLM provider compatible with the OpenAI ChatCompletions interface.
Related Resources
- GitHub Repository: https://github.com/tsingliuwin/autoclaw
- Official Documentation: supports custom API endpoint configuration via baseUrl
- Tech Stack: Node.js, TypeScript, OpenAI SDK, Commander.js
Method 1: Defapi (Recommended - Half-Price Discount)
Defapi is an AI API aggregation platform providing OpenAI-compatible interfaces. Its biggest advantage is that the price is only half of the official rate. For automation scenarios requiring large-scale AI calls, this can significantly reduce costs.
Why Choose Defapi?
- Price Advantage: 50% of the official price, drastically lowering usage costs.
- Fully Compatible: provides a standard v1/chat/completions interface.
- Multi-Model Support: supports GPT-4o, GPT-4o-mini, and many other models.
- Stable and Reliable: Features high availability and low latency.
Configuration Steps
1. Get Defapi API Key
Visit https://defapi.org to register an account and obtain your API Key.
2. Configuration File Method
Create a .autoclaw/setting.json file in the project root directory:
{
  "apiKey": "your-defapi-key",
  "baseUrl": "https://api.defapi.org/v1",
  "model": "openai/gpt-4o"
}
Or create a global configuration file at ~/.autoclaw/setting.json:
mkdir -p ~/.autoclaw
cat > ~/.autoclaw/setting.json <<'EOF'
{
  "apiKey": "your-defapi-key",
  "baseUrl": "https://api.defapi.org/v1",
  "model": "openai/gpt-4o"
}
EOF
3. Environment Variable Method
You can also configure it via environment variables:
export OPENAI_API_KEY="your-defapi-key"
export OPENAI_BASE_URL="https://api.defapi.org/v1"
export OPENAI_MODEL="openai/gpt-4o"
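The environment-variable lookup can be pictured as a pure function. The sketch below is illustrative only: the helper name `resolveFromEnv` and the fallback values are assumptions, not AutoClaw's documented behavior.

```typescript
// Sketch: resolve connection settings from an environment-variable map.
// The variable names mirror those above; the defaults (official OpenAI
// endpoint, gpt-4o) are assumptions for illustration.
interface LlmConfig {
  apiKey: string;
  baseUrl: string;
  model: string;
}

function resolveFromEnv(env: Record<string, string | undefined>): LlmConfig {
  return {
    apiKey: env.OPENAI_API_KEY ?? "",
    baseUrl: env.OPENAI_BASE_URL ?? "https://api.openai.com/v1",
    model: env.OPENAI_MODEL ?? "gpt-4o",
  };
}
```

Passing an env map with only the key and base URL set would fall back to the default model, which is the behavior you want when switching providers without touching every variable.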
4. Interactive Configuration
Run the setup wizard for interactive configuration:
autoclaw setup
Follow the prompts to enter:
- API Key: Your Defapi API Key
- Base URL: https://api.defapi.org/v1
- Model: openai/gpt-4o
Method 2: Official OpenAI API
If you already have an official OpenAI API Key, you can use it directly.
Configuration Method
Configuration File .autoclaw/setting.json:
{
  "apiKey": "sk-...",
  "baseUrl": "https://api.openai.com/v1",
  "model": "gpt-4o"
}
Environment Variables:
export OPENAI_API_KEY="sk-..."
export OPENAI_BASE_URL="https://api.openai.com/v1"
export OPENAI_MODEL="gpt-4o"
Method 3: OpenRouter
OpenRouter is an aggregation platform supporting multiple LLMs, allowing access to hundreds of models through a unified interface.
Configuration Method
Configuration File .autoclaw/setting.json:
{
  "apiKey": "sk-or-v1-...",
  "baseUrl": "https://openrouter.ai/api/v1",
  "model": "openai/gpt-4o"
}
Supported Model Examples
{
  "model": "openai/gpt-4o"
}
{
  "model": "anthropic/claude-3.5-sonnet"
}
{
  "model": "google/gemini-pro"
}
Method 4: DeepSeek
DeepSeek is a leading LLM provider from China, known for its strong cost-performance ratio.
Configuration Method
Configuration File .autoclaw/setting.json:
{
  "apiKey": "sk-...",
  "baseUrl": "https://api.deepseek.com/v1",
  "model": "deepseek-chat"
}
Method 5: Ollama (Local Models)
If you prefer to run models locally, you can use Ollama.
Configuration Method
Configuration File .autoclaw/setting.json:
{
  "apiKey": "ollama",
  "baseUrl": "http://localhost:11434/v1",
  "model": "llama3"
}
Supported Models
Ollama supports various local models, including:
- llama3
- mistral
- codellama
- phi3
Verifying Everything Works
After configuration, run the following command to verify:
Interactive Mode Test
autoclaw
Input a test message:
Hello, can you hear me?
If everything is set up correctly, AutoClaw will reply.
Headless Mode Test
autoclaw "List all files in current directory" --no-interactive
Internal Mechanism Explanation
How AutoClaw Works
At its core, AutoClaw is an agent built on the OpenAI SDK. Here are the key mechanisms:
1. Client Initialization
In src/agent.ts, AutoClaw initializes the client using the OpenAI SDK:
this.client = new OpenAI({
  apiKey: apiKey,
  baseURL: baseURL
});
The baseURL here is flexible, which is why AutoClaw can support various LLM providers seamlessly. Any service provider offering an OpenAI-compatible interface can be integrated.
2. Chat Completions Call
Every time a user provides input, AutoClaw calls the v1/chat/completions interface:
const response = await this.client.chat.completions.create({
  model: this.model,
  messages: this.messages,
  tools: getToolDefinitions() as any,
  tool_choice: "auto"
});
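The response from this call contains either a plain assistant message or a list of tool calls to execute. The following is a simplified sketch of that dispatch step; the `ToolCall` shape mirrors the standard OpenAI chat.completions response format, but the function names and the exact loop in src/agent.ts are assumptions.

```typescript
// Sketch: one dispatch step of an agent loop. If the model requested tool
// calls, run each tool and collect results as "tool" messages; otherwise
// return the assistant's text. Shapes mirror the OpenAI response format.
interface ToolCall {
  id: string;
  function: { name: string; arguments: string }; // arguments is a JSON string
}

type ToolFn = (args: Record<string, unknown>) => string;

function dispatch(
  toolCalls: ToolCall[] | undefined,
  content: string | null,
  tools: Record<string, ToolFn>
): { role: string; content: string }[] {
  if (!toolCalls || toolCalls.length === 0) {
    // No tool calls: the model answered directly.
    return [{ role: "assistant", content: content ?? "" }];
  }
  // Run each requested tool; its output goes back into the message history
  // so the next chat.completions call can see the results.
  return toolCalls.map((call) => ({
    role: "tool",
    content: tools[call.function.name](JSON.parse(call.function.arguments)),
  }));
}
```

In the real agent this step runs in a loop: tool results are appended to `this.messages` and the model is called again until it produces a final answer.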
3. Configuration Priority
AutoClaw uses a hierarchical configuration system with priority from highest to lowest:
1. CLI Arguments (e.g., -m gpt-4o)
2. Environment Variables (OPENAI_API_KEY, etc.)
3. Project Configuration (./.autoclaw/setting.json)
4. Global Configuration (~/.autoclaw/setting.json)
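A minimal sketch of that precedence, assuming a simple last-writer-wins merge (the function name and config shape are illustrative; the actual merge lives inside AutoClaw's config loader):

```typescript
// Sketch: merge config sources so that later arguments override earlier
// ones. Ordering them global -> project -> env -> CLI reproduces the
// priority described above (CLI highest, global config lowest).
type PartialConfig = Partial<{ apiKey: string; baseUrl: string; model: string }>;

function mergeConfig(...sources: PartialConfig[]): PartialConfig {
  const result: PartialConfig = {};
  for (const src of sources) {
    // Only defined fields override; an absent field falls through to
    // whatever a lower-priority source provided.
    if (src.apiKey !== undefined) result.apiKey = src.apiKey;
    if (src.baseUrl !== undefined) result.baseUrl = src.baseUrl;
    if (src.model !== undefined) result.model = src.model;
  }
  return result;
}
```

Calling it as `mergeConfig(globalCfg, projectCfg, envCfg, cliCfg)` lets a CLI flag override everything while an unset flag falls back down the chain.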
This design is ideal for using different API configurations across development, testing, and production environments.
4. Tool Calling Mechanism
AutoClaw supports Function Calling to invoke various built-in tools:
- Shell command execution
- File reading/writing
- Web search
- Screenshot capture
- Email sending
- Notification pushing
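Each built-in tool is advertised to the model via the `tools` array shown earlier. Below is a hedged sketch of what a single entry returned by getToolDefinitions() could look like: the schema layout is the standard OpenAI function-calling format, but the tool name `run_shell` and its parameters are assumptions, not AutoClaw's actual definitions.

```typescript
// Sketch: one tool definition in the OpenAI function-calling format.
// "run_shell" and its parameter schema are illustrative examples only.
const runShellTool = {
  type: "function" as const,
  function: {
    name: "run_shell",
    description: "Execute a shell command and return its stdout.",
    parameters: {
      // JSON Schema describing the arguments the model must supply.
      type: "object",
      properties: {
        command: { type: "string", description: "The command to run" },
      },
      required: ["command"],
    },
  },
};
```

When the model decides a shell command is needed, it emits a tool call naming `run_shell` with a JSON `arguments` string matching this schema, which the agent then executes.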
Common Use Cases
1. CI/CD Automation
Use AutoClaw in CI/CD pipelines for code review and automated testing:
autoclaw "Review the changes in this PR" -y --no-interactive
2. Batch File Processing
Automate the processing of large numbers of files:
autoclaw "Convert all .md files to .txt files" --no-interactive
3. Data Collection and Reporting
Periodically collect data and generate reports:
autoclaw "Generate a summary report of server logs" --no-interactive
4. Containerized Batch Tasks
Run multiple parallel AutoClaw instances in Kubernetes:
# k8s deployment example
apiVersion: apps/v1
kind: Deployment
metadata:
  name: autoclaw-worker
spec:
  replicas: 10
  selector:
    matchLabels:
      app: autoclaw
  template:
    metadata:
      labels:
        app: autoclaw
    spec:
      containers:
        - name: autoclaw
          image: autoclaw:latest
          env:
            - name: OPENAI_API_KEY
              valueFrom:
                secretKeyRef:
                  name: autoclaw-secret
                  key: api-key
5. Scheduled Maintenance Tasks
Combine with cron for scheduled operations:
0 9 * * * autoclaw "Check disk usage and notify if >80%" -y