This tutorial explains how to configure and use Google Gemini models in OpenClaw. OpenClaw supports several integration methods, so you can choose the one that best fits your needs.
Introduction
Google Gemini is a family of multimodal AI models developed by Google DeepMind, supporting text, image, and video input. By using Gemini in OpenClaw, you can access:
- Gemini 2.0 Flash - 1 million token context, suitable for high-frequency fast response scenarios
- Gemini 1.5 Pro - 2 million token context, suitable for complex reasoning and code generation
- Gemini 1.5 Flash - 1 million token context, a balanced choice between speed and efficiency
Method 1: Integrating Gemini via Defapi (Recommended)
Defapi is a platform that aggregates multiple LLM APIs, designed to provide developers with more affordable and stable services.
Advantages of Defapi
- Price Advantage: Only 50% of the official price
- Full Compatibility: Compatible with OpenAI v1/chat/completions, Anthropic v1/messages, Google v1beta/models/, and other standard interfaces
- No Code Changes Required: Switching to Defapi only requires modifying the baseUrl; existing code remains unchanged
- Multi-model Support: Access Gemini, Claude, GPT, and other models through a single platform
Integration Method
Option A: Direct Defapi Call
# Set environment variables
export DEFAPI_API_KEY="your_Defapi_key"
Then configure OpenClaw to use Defapi.
File Path: ~/.openclaw/openclaw.json
{
env: { DEFAPI_API_KEY: "dk-..." },
agents: {
defaults: {
model: { primary: "defapi/gemini-3-flash" },
},
},
models: {
mode: "merge",
providers: {
"defapi": {
baseUrl: "https://api.defapi.org/v1beta",
apiKey: "${DEFAPI_API_KEY}",
api: "google-generative-ai",
models: [
{
id: "gemini-3-flash",
name: "Gemini 3 Flash",
contextWindow: 1000000,
maxTokens: 8192,
},
],
},
},
},
}
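To check the key and route outside OpenClaw, you can send a raw Google-style generateContent request straight to Defapi. This is a sketch based on the baseUrl above; confirm the exact route and auth header against Defapi's documentation.
# Assumed: Defapi mirrors the official v1beta route and accepts the key in the x-goog-api-key header
curl -s "https://api.defapi.org/v1beta/models/gemini-3-flash:generateContent" \
  -H "Content-Type: application/json" \
  -H "x-goog-api-key: $DEFAPI_API_KEY" \
  -d '{"contents": [{"parts": [{"text": "Hello"}]}]}'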
Option B: Via OpenAI Compatible Interface
If your application uses the OpenAI format:
{
models: {
mode: "merge",
providers: {
"defapi-openai": {
baseUrl: "https://api.defapi.org/v1/chat/completions",
apiKey: "${DEFAPI_API_KEY}",
api: "openai-completions",
models: [
{ id: "gemini-3-flash", name: "Gemini 3 Flash", contextWindow: 1000000 },
],
},
},
},
}
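To sanity-check the OpenAI-compatible route outside OpenClaw, you can send a standard chat-completions request directly. This sketch assumes Defapi accepts the key as a Bearer token, which is the usual convention for OpenAI-style endpoints:
curl -s "https://api.defapi.org/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $DEFAPI_API_KEY" \
  -d '{
    "model": "gemini-3-flash",
    "messages": [{"role": "user", "content": "Hello"}]
  }'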
Gemini Models Supported by Defapi
| Model | Input Price | Output Price | Context |
|---|---|---|---|
| Gemini 3 Flash | $0.25/M | $1.50/M | 1M |
| Gemini 3 Pro | $2.50/M | $12.50/M | 1M |
| Gemini 2.0 Flash | See official pricing | See official pricing | 1M |
| Gemini 1.5 Pro | See official pricing | See official pricing | 2M |
Get Defapi
Visit the Defapi Official Website to register an account, obtain an API Key, and start using it.
Method 2: Direct Use of Official Google API
Obtain API Key
- Visit Google AI Studio
- Log in with your Google account
- Click "Get API Key" to create a new key
- Copy the key for later use
CLI Configuration
# Interactive configuration
openclaw onboard --auth-choice google-api-key
# Non-interactive configuration (Environment Variable)
export GOOGLE_API_KEY="your_API_key"
openclaw onboard --google-api-key "$GOOGLE_API_KEY"
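Before wiring the key into OpenClaw, you can confirm it is valid by listing the models it can access through the official Generative Language API:
# Returns a JSON list of available Gemini models if the key is valid
curl -s "https://generativelanguage.googleapis.com/v1beta/models?key=$GOOGLE_API_KEY"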
Configuration File
File Path: ~/.openclaw/openclaw.json
{
env: { GOOGLE_API_KEY: "AIza..." },
agents: { defaults: { model: { primary: "google-generative-ai/gemini-1.5-flash" } } },
}
Supported Models
| Model ID | Context | Use Case |
|---|---|---|
| gemini-2.0-flash-exp | 1M | Fast response, high-frequency calls |
| gemini-1.5-flash | 1M | Balanced speed and efficiency |
| gemini-1.5-pro | 2M | Complex reasoning, coding |
Method 3: Unified Access via OpenRouter
OpenRouter provides a unified API endpoint to access models from multiple model providers simultaneously.
Obtain OpenRouter API Key
- Visit OpenRouter and register an account
- Obtain the API Key from the console
CLI Configuration
export OPENROUTER_API_KEY="sk-or-..."
openclaw onboard --auth-choice apiKey --token-provider openrouter --token "$OPENROUTER_API_KEY"
Configuration File
File Path: ~/.openclaw/openclaw.json
{
env: { OPENROUTER_API_KEY: "sk-or-..." },
agents: {
defaults: {
model: { primary: "openrouter/google/gemini-2.0-flash-exp" },
},
},
}
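You can also verify the key and model slug directly against OpenRouter's OpenAI-compatible endpoint. The model id below is taken from the configuration above; double-check it against OpenRouter's current model list:
curl -s "https://openrouter.ai/api/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -d '{
    "model": "google/gemini-2.0-flash-exp",
    "messages": [{"role": "user", "content": "Hello"}]
  }'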
💡 OpenRouter Advantages
- Multi-provider Price Comparison: Choose Gemini models from different providers
- Unified Interface: Access multiple models with just one API Key
- OpenAI Format Compatibility: Switch models without modifying code
Method 4: Custom Provider Integration
If a platform provides an OpenAI-compatible /v1/chat/completions interface, it can be configured as a custom provider.
Configuration File Example
File Path: ~/.openclaw/openclaw.json
{
agents: {
defaults: {
model: { primary: "custom-gemini/gemini-1.5-flash" },
},
},
models: {
mode: "merge",
providers: {
"custom-gemini": {
baseUrl: "https://your-gemini-proxy.example.com/v1",
apiKey: "${CUSTOM_GEMINI_API_KEY}",
api: "openai-completions",
models: [
{
id: "gemini-1.5-flash",
name: "Gemini 1.5 Flash",
contextWindow: 1000000,
maxTokens: 8192,
},
],
},
},
},
}
Configuration Details
| Field | Description |
|---|---|
| baseUrl | API base URL |
| api | Interface type: openai-completions |
| apiKey | API Key, supports ${ENV_VAR} syntax |
| models[].contextWindow | Context window size |
| models[].maxTokens | Maximum output tokens |
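Before pointing OpenClaw at a custom endpoint, it is worth checking that the proxy really speaks the OpenAI chat-completions format. The URL below is the placeholder from the example configuration; substitute your own:
# Replace the placeholder host with your actual proxy address
curl -s "https://your-gemini-proxy.example.com/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $CUSTOM_GEMINI_API_KEY" \
  -d '{"model": "gemini-1.5-flash", "messages": [{"role": "user", "content": "ping"}]}'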
Thinking Blocks Configuration
Gemini 2.0 models support thinking blocks for extended reasoning. OpenClaw handles the related settings automatically to ensure compatibility, and you can also set a thinking budget explicitly per model:
File Path: ~/.openclaw/openclaw.json
{
agents: {
defaults: {
models: {
"google-generative-ai/gemini-2.0-flash-exp": {
params: {
thinkingConfig: {
thinkingBudget: 8192,
},
},
},
},
},
},
}
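For reference, the thinkingBudget parameter above corresponds roughly to the generationConfig.thinkingConfig field in the raw Generative Language API. Exact support depends on the model version, so treat this as a sketch rather than a guaranteed request shape:
# Assumed mapping to the raw v1beta API; only thinking-capable models honor this field
curl -s "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash-exp:generateContent" \
  -H "Content-Type: application/json" \
  -H "x-goog-api-key: $GOOGLE_API_KEY" \
  -d '{
    "contents": [{"parts": [{"text": "Explain the plan step by step"}]}],
    "generationConfig": {"thinkingConfig": {"thinkingBudget": 8192}}
  }'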
Troubleshooting
401 Error / Invalid API Key
- Verify that the API Key is correct and has not been revoked
- Check if the Key has permission to access the Generative Language API
Rate Limits
- Gemini has tier-based rate limits (Free tier: 15 RPM)
- Monitor usage in the Google AI Studio dashboard
- Solution: use Defapi to obtain higher rate limits
Tool Schema Errors
- Google does not support certain JSON Schema keywords (e.g., patternProperties, additionalProperties)
- OpenClaw automatically strips unsupported keywords
- For complex Schemas, consider breaking them down into simpler tool definitions
Project ID Required
- Set the GOOGLE_CLOUD_PROJECT or GOOGLE_CLOUD_PROJECT_ID environment variable
Cost Comparison
| Integration Method | Gemini 1.5 Flash Input | Gemini 1.5 Flash Output |
|---|---|---|
| Google Official | $0.075/M | $0.30/M |
| Defapi (Half Price) | ~$0.0375/M | ~$0.15/M |
For high-frequency calling scenarios, using Defapi can significantly reduce costs.
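As a rough worked example, take a hypothetical workload of 100M input tokens and 20M output tokens per month on Gemini 1.5 Flash: at official rates this costs about 100 × $0.075 + 20 × $0.30 = $13.50, versus roughly $6.75 at Defapi's half-price rates.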