This tutorial details various methods for configuring different Large Language Model (LLM) APIs in AstrBot to help you quickly set up your own AI chatbot.
Introduction
AstrBot is an open-source AI Agent chatbot platform that supports integration with various instant messaging software, including QQ, Telegram, Feishu, DingTalk, Discord, and more. It supports LLM dialogue, multi-modality, Agents, MCP, skills, knowledge bases, persona settings, and other features.
AstrBot natively supports three API formats: the OpenAI API format, the Google Gemini API format, and the Anthropic API format. Through these interfaces, you can connect to almost all mainstream AI model service providers.
Prerequisites
Before starting, you need:
- Install AstrBot: `uv tool install astrbot`
- Initialize the project: `astrbot init`
- Possess the corresponding API access permissions
- The configuration file is located at `data/cmd_config.json`
Method 1: Using Defapi (Recommended)
If you want to save costs, using Defapi is recommended!
Defapi is an AI model aggregation platform where all model prices are only half of the official price. Defapi supports OpenAI-compatible v1/chat/completions interfaces, making configuration simple and convenient.
Advantages
- Half Price: Claude Sonnet costs only $1.5/M input and $7.5/M output.
- High Compatibility: Perfectly compatible with OpenAI interface formats.
- Stable and Reliable: Fast access speeds within China.
- Rich Model Support: Covers Claude, GPT, Gemini, and many other models.
Main Supported Models
- Claude Opus / Sonnet / Haiku
- GPT-4o / GPT-4o-mini
- Gemini 1.5 Flash
Configuration Steps
1. Visit Defapi to register an account and obtain an API Key.

2. Modify the configuration file `data/cmd_config.json`:
```json
{
  "provider": [
    {
      "id": "defapi-claude",
      "type": "openai_chat_completion",
      "model": "claude-sonnet-4-20250514",
      "key": ["Your-Defapi-API-Key"],
      "api_base": "https://api.defapi.org/v1",
      "enable": true
    }
  ],
  "provider_settings": {
    "default_provider_id": "defapi-claude"
  }
}
```
Or configure via the WebUI:

1. Visit the AstrBot console (default `http://localhost:6185`).
2. Go to the "Service Providers" page.
3. Click "+ Add Service Provider".
4. Select the "OpenAI" type.
5. Fill in the API Key: `Your-Defapi-API-Key`.
6. Fill in the API Base URL: `https://api.defapi.org/v1`.
7. Fill in the model name: `claude-sonnet-4-20250514`.
8. Save the configuration.
3. Verify the configuration. Start AstrBot:

```shell
astrbot
```

Send a message on any supported chat platform, and the Bot will automatically reply using the model from Defapi.
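Before wiring the key into AstrBot, it can help to sanity-check the endpoint with a standalone request. The sketch below builds an OpenAI-compatible `chat/completions` request using the URL and model name from the config above; the key is a placeholder you must replace, and this is only an illustrative test, not part of AstrBot:

```python
import json
import urllib.request

API_BASE = "https://api.defapi.org/v1"       # from the config above
API_KEY = "Your-Defapi-API-Key"              # placeholder: substitute a real key
MODEL = "claude-sonnet-4-20250514"

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Build a v1/chat/completions request in the OpenAI-compatible format."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{API_BASE}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually send the request (requires a valid key):
#   with urllib.request.urlopen(build_chat_request("Hello!")) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```

If this round-trips successfully, the same key, base URL, and model name will work in `data/cmd_config.json`.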
Method 2: Using Official OpenAI API
This is the most direct integration method, suitable for users with an official OpenAI API Key.
Obtain API Key
- Visit the OpenAI Platform.
- Go to the API Keys page to create a new secret key.
Configuration File
```json
{
  "provider": [
    {
      "id": "openai-official",
      "type": "openai_chat_completion",
      "model": "gpt-4o",
      "key": ["sk-xxx"],
      "api_base": "https://api.openai.com/v1",
      "enable": true
    }
  ]
}
```
Method 3: Using OpenRouter
OpenRouter provides a unified API interface to access a variety of models, including Claude, GPT, Gemini, etc.
Obtain API Key
- Visit OpenRouter.
- Register and create an API Key.
Configuration File
```json
{
  "provider": [
    {
      "id": "openrouter",
      "type": "openai_chat_completion",
      "model": "openai/gpt-4o-mini",
      "key": ["sk-or-v1-xxx"],
      "api_base": "https://openrouter.ai/api/v1",
      "enable": true
    }
  ]
}
```
Example Available Models
- openai/gpt-4o
- anthropic/claude-sonnet-4-5
- google/gemini-1.5-flash
Method 4: Using Official Anthropic Claude API
If you want to use Claude models directly, you can use the official Anthropic API.
Obtain API Key
- Visit the Anthropic Console.
- Create an API Key.
Configuration File
```json
{
  "provider": [
    {
      "id": "anthropic-claude",
      "type": "anthropic_chat_completion",
      "model": "claude-sonnet-4-20250514",
      "key": ["sk-ant-api03-xxx"],
      "enable": true
    }
  ]
}
```
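Note that the native Anthropic API differs from the OpenAI format: it uses the `/v1/messages` endpoint, an `x-api-key` header instead of a Bearer token, a required `anthropic-version` header, and a required `max_tokens` field. That is why this provider uses the separate `anthropic_chat_completion` type. An illustrative sketch of the request shape (the key is a placeholder):

```python
import json
import urllib.request

API_KEY = "sk-ant-api03-xxx"   # placeholder
MODEL = "claude-sonnet-4-20250514"

def build_messages_request(prompt: str) -> urllib.request.Request:
    # Native Anthropic format: /v1/messages, x-api-key header,
    # anthropic-version header, and a mandatory max_tokens field.
    payload = {
        "model": MODEL,
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://api.anthropic.com/v1/messages",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "x-api-key": API_KEY,
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
        method="POST",
    )
```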
Method 5: Using Official Google Gemini API
Obtain API Key
- Visit Google AI Studio.
- Create an API Key.
Configuration File
```json
{
  "provider": [
    {
      "id": "gemini-official",
      "type": "googlegenai_chat_completion",
      "model": "gemini-1.5-flash",
      "key": ["AIzaSy-xxx"],
      "enable": true
    }
  ]
}
```
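The native Gemini REST API is again a different shape, which is why this provider uses the `googlegenai_chat_completion` type: the model name appears in the URL path, the key is passed as a query parameter, and prompts are wrapped as `contents` rather than an OpenAI-style `messages` list. An illustrative sketch of that request shape:

```python
def build_gemini_url(model: str, api_key: str) -> str:
    # Gemini addresses the model in the URL path and passes the key as
    # a query parameter rather than an Authorization header.
    return (
        "https://generativelanguage.googleapis.com/v1beta/"
        f"models/{model}:generateContent?key={api_key}"
    )

def build_gemini_payload(prompt: str) -> dict:
    # Gemini wraps the prompt as "contents" with "parts".
    return {"contents": [{"parts": [{"text": prompt}]}]}
```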
Method 6: Using Domestic Chinese Model Providers
AstrBot also supports various Chinese model providers, which offer better pricing and more stable access for users in China.
SiliconFlow
```json
{
  "provider": [
    {
      "id": "siliconflow",
      "type": "openai_chat_completion",
      "model": "Qwen/Qwen2-7B-Instruct",
      "key": ["sk-xxx"],
      "api_base": "https://api.siliconflow.cn/v1",
      "enable": true
    }
  ]
}
```
DeepSeek
```json
{
  "provider": [
    {
      "id": "deepseek",
      "type": "openai_chat_completion",
      "model": "deepseek-chat",
      "key": ["sk-xxx"],
      "api_base": "https://api.deepseek.com/v1",
      "enable": true
    }
  ]
}
```
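All of the OpenAI-compatible providers in this tutorial (Defapi, OpenRouter, SiliconFlow, DeepSeek) share the same entry shape; only `id`, `model`, and `api_base` change. A small hypothetical helper (not part of AstrBot) makes that pattern explicit:

```python
def make_openai_provider(provider_id: str, model: str,
                         api_base: str, key: str) -> dict:
    """Build one entry for the "provider" list in data/cmd_config.json."""
    return {
        "id": provider_id,
        "type": "openai_chat_completion",
        "model": model,
        "key": [key],
        "api_base": api_base,
        "enable": True,
    }

# The two configs above, generated from the same template:
providers = [
    make_openai_provider("siliconflow", "Qwen/Qwen2-7B-Instruct",
                         "https://api.siliconflow.cn/v1", "sk-xxx"),
    make_openai_provider("deepseek", "deepseek-chat",
                         "https://api.deepseek.com/v1", "sk-xxx"),
]
```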
Method 7: Using Environment Variables
For security, it is recommended to use environment variables to load API Keys.
Configuration Method
```json
{
  "provider": [
    {
      "id": "openai-env",
      "type": "openai_chat_completion",
      "model": "gpt-4o",
      "key": ["$OPENAI_API_KEY"],
      "api_base": "https://api.openai.com/v1",
      "enable": true
    }
  ]
}
```
Then set it in your system environment variables:
```shell
export OPENAI_API_KEY="sk-xxx"
```
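Conceptually, a config loader resolves the `$VARNAME` placeholder from the environment at startup. The sketch below is illustrative only, not AstrBot's actual implementation:

```python
import os

def resolve_key(key: str) -> str:
    # Illustrative only: expand a "$VARNAME" placeholder from the
    # environment, mirroring how ["$OPENAI_API_KEY"] in the config
    # might be resolved at startup.
    if key.startswith("$"):
        value = os.environ.get(key[1:])
        if value is None:
            raise KeyError(f"environment variable {key[1:]} is not set")
        return value
    return key   # literal keys pass through unchanged
```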
Verifying Normal Operation
Once configuration is complete, follow these verification steps:
1. Restart the AstrBot service:

```shell
astrbot
```

2. Send a test message on the configured messaging platform.

3. Check the console logs to ensure the API call was successful.

If configured correctly, you should receive a reply from the AI.
Internal Mechanism: The Provider System
AstrBot's Provider system is designed to be flexible; understanding how it works helps you use it more effectively.
Provider Registration Mechanism
AstrBot uses the decorator pattern to register Providers:
```python
@register_provider_adapter(
    "openai_chat_completion",
    "OpenAI API Chat Completion Provider Adapter",
)
class ProviderOpenAIOfficial(Provider):
    pass
```

All Provider implementations are located in the `astrbot/core/provider/sources/` directory.
Request Flow
- User sends a message.
- AstrBot receives the message and builds the context.
- A Provider is selected based on the configuration.
- The Provider converts the request into the corresponding API format.
- Sends the request to the model service provider's API.
- Parses the response and returns the result.
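The registration and selection steps above can be reduced to a toy sketch. This is a simplified illustration of the decorator-registry pattern, not AstrBot's actual code: the decorator maps a config `type` string to an adapter class, and step 3 of the flow looks it up:

```python
_PROVIDER_REGISTRY: dict = {}

def register_provider_adapter(type_name: str, desc: str):
    # Toy version of AstrBot's decorator: record the adapter class
    # under its config "type" string.
    def deco(cls):
        _PROVIDER_REGISTRY[type_name] = cls
        return cls
    return deco

class Provider:
    def chat(self, prompt: str) -> str:
        raise NotImplementedError

@register_provider_adapter("openai_chat_completion", "OpenAI-compatible adapter")
class OpenAIProvider(Provider):
    def chat(self, prompt: str) -> str:
        return f"[openai] {prompt}"   # a real adapter would call the API

def select_provider(config: dict) -> Provider:
    # Step 3 of the request flow: pick the adapter class named by the
    # provider's "type" field and instantiate it.
    return _PROVIDER_REGISTRY[config["type"]]()
```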
Multi-Key Load Balancing
AstrBot supports configuring multiple API Keys to achieve load balancing:
```json
{
  "provider": [
    {
      "id": "openai-multi-key",
      "type": "openai_chat_completion",
      "model": "gpt-4o",
      "key": ["key1", "key2", "key3"],
      "api_base": "https://api.openai.com/v1",
      "enable": true
    }
  ]
}
```
When a specific Key triggers a rate limit, it will automatically switch to another Key.
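An illustrative sketch of that rotation behavior (not AstrBot's actual code): cycle through the configured keys and skip any key that has been marked rate-limited:

```python
import itertools

class KeyPool:
    """Illustrative multi-key rotation with rate-limit fallback."""

    def __init__(self, keys: list):
        self._cycle = itertools.cycle(keys)
        self._limited = set()
        self._size = len(keys)

    def mark_rate_limited(self, key: str) -> None:
        # Called when a request with this key returns 429.
        self._limited.add(key)

    def next_key(self) -> str:
        # Return the next usable key, skipping rate-limited ones.
        for _ in range(self._size):
            key = next(self._cycle)
            if key not in self._limited:
                return key
        raise RuntimeError("all keys are rate-limited")
```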
Error Handling
AstrBot has a comprehensive built-in error-handling mechanism:
- 401/403 Errors: Authentication issues.
- 429 Errors: Rate limiting, automatic Key switching.
- Timeout Errors: Proxies can be configured or service providers changed.
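The error classes above map naturally to distinct recovery actions. A hypothetical sketch of such a mapping (illustrative, not AstrBot's actual handler):

```python
def classify_error(status: int) -> str:
    # Map HTTP status codes from the model API to a recovery action.
    if status in (401, 403):
        return "check the API key and account permissions"
    if status == 429:
        return "rate limited: rotate to the next configured key"
    return "unexpected error: check logs, proxy, or provider status"
```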
Common Use Cases
1. Intelligent Customer Service
Connect AstrBot to WeChat Work or DingTalk to serve as an intelligent customer service agent to answer customer queries.
2. Group Chat Assistant
Use it in QQ groups or Discord as a group assistant, providing information retrieval, translation, and other functions.
3. Role-Playing
Configure personas and system prompts to let the AI play specific roles (such as virtual idols, game NPCs).
4. Personal Assistant
Integrate with Telegram or WeChat as a personal AI assistant to help handle daily tasks.
5. Content Creation
Leverage the AI's text generation capabilities to assist in writing articles and generating creative content.