OpenClaw Gemini API Integration Tutorial

February 12, 2026

This tutorial walks through configuring and using Google Gemini models in OpenClaw. OpenClaw supports several integration methods, so you can pick the one that best fits your needs.


Introduction

Google Gemini is a family of multimodal AI models from Google DeepMind that accepts text, image, and video input. Using Gemini through OpenClaw gives you access to:

  • Gemini 2.0 Flash - 1 million token context, suitable for high-frequency fast response scenarios
  • Gemini 1.5 Pro - 2 million token context, suitable for complex reasoning and code generation
  • Gemini 1.5 Flash - 1 million token context, a balanced choice between speed and efficiency

Method 1: Via Defapi

Defapi is a platform that aggregates multiple LLM APIs and aims to give developers more affordable, stable access.

Advantages of Defapi

  • Price Advantage: Only 50% of the official price
  • Full Compatibility: Compatible with OpenAI v1/chat/completions, Anthropic v1/messages, Google v1beta/models/, and other standard interfaces
  • No Code Changes Required: Switching to Defapi only requires changing the baseUrl; existing code stays the same (see the request example after this list)
  • Multi-model Support: Access Gemini, Claude, GPT, and other models through a single platform
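
As a concrete illustration of the baseUrl swap, the request below sends a standard OpenAI-format chat completion straight to Defapi. It is a minimal sketch: the Bearer-token header and the gemini-3-flash model id (taken from the pricing table further down) are assumptions to check against your Defapi account.

curl https://api.defapi.org/v1/chat/completions \
  -H "Authorization: Bearer $DEFAPI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemini-3-flash",
    "messages": [{"role": "user", "content": "Hello"}]
  }'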

Integration Method

Option A: Direct Defapi Call

# Set environment variables
export DEFAPI_API_KEY="your_Defapi_key"

# Configure OpenClaw to use Defapi (File Path: ~/.openclaw/openclaw.json)
{
  env: { DEFAPI_API_KEY: "dk-..." },
  agents: {
    defaults: {
      model: { primary: "defapi/gemini-3-flash" },
    },
  },
  models: {
    mode: "merge",
    providers: {
      "defapi": {
        baseUrl: "https://api.defapi.org/v1beta",
        apiKey: "${DEFAPI_API_KEY}",
        api: "google-generative-ai",
        models: [
          {
            id: "gemini-3-flash",
            name: "Gemini 3 Flash",
            contextWindow: 1000000,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
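
To confirm the Google-format endpoint before pointing OpenClaw at it, you can call Defapi's v1beta path directly. This sketch mirrors the public Gemini generateContent request; whether Defapi accepts the key as a ?key= query parameter (as Google's API does) or only as a header is an assumption to verify against Defapi's docs.

curl "https://api.defapi.org/v1beta/models/gemini-3-flash:generateContent?key=$DEFAPI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"contents": [{"parts": [{"text": "Say hello"}]}]}'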

Option B: Via OpenAI-Compatible Interface

If your application uses the OpenAI format:

{
  models: {
    mode: "merge",
    providers: {
      "defapi-openai": {
        baseUrl: "https://api.defapi.org/v1/chat/completions",
        apiKey: "${DEFAPI_API_KEY}",
        api: "openai-completions",
        models: [
          { id: "gemini-3-flash", name: "Gemini 3 Flash", contextWindow: 1000000 },
        ],
      },
    },
  },
}

Gemini Models Supported by Defapi

Model               Input Price       Output Price      Context
Gemini 3 Flash      $0.25/M           $1.50/M           1M
Gemini 3 Pro        $2.50/M           $12.50/M          1M
Gemini 2.0 Flash    Official pricing  Official pricing  1M
Gemini 1.5 Pro      Official pricing  Official pricing  2M

Get Defapi

Visit the Defapi official website to register an account and obtain an API Key.


Method 2: Direct Use of Official Google API

Obtain API Key

  1. Visit Google AI Studio
  2. Log in with your Google account
  3. Click "Get API Key" to create a new key
  4. Copy the key for later use

CLI Configuration

# Interactive configuration
openclaw onboard --auth-choice google-api-key

# Non-interactive configuration (Environment Variable)
export GOOGLE_API_KEY="your_API_key"
openclaw onboard --google-api-key "$GOOGLE_API_KEY"
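
Before wiring the key into OpenClaw, a direct call to the official Generative Language API is a quick way to confirm it works:

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$GOOGLE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"contents": [{"parts": [{"text": "Hello"}]}]}'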

Configuration File

File Path: ~/.openclaw/openclaw.json

{
  env: { GOOGLE_API_KEY: "AIza..." },
  agents: { defaults: { model: { primary: "google-generative-ai/gemini-1.5-flash" } } },
}

Supported Models

Model ID                Context    Use Case
gemini-2.0-flash-exp    1M         Fast responses, high-frequency calls
gemini-1.5-flash        1M         Balanced speed and efficiency
gemini-1.5-pro          2M         Complex reasoning, coding

Method 3: Unified Access via OpenRouter

OpenRouter provides a unified API endpoint for accessing models from many providers at once.

Obtain OpenRouter API Key

  1. Visit OpenRouter and register an account
  2. Obtain the API Key from the console

CLI Configuration

export OPENROUTER_API_KEY="sk-or-..."
openclaw onboard --auth-choice apiKey --token-provider openrouter --token "$OPENROUTER_API_KEY"

Configuration File

File Path: ~/.openclaw/openclaw.json

{
  env: { OPENROUTER_API_KEY: "sk-or-..." },
  agents: {
    defaults: {
      model: { primary: "openrouter/google/gemini-2.0-flash-exp" },
    },
  },
}
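
To verify the key and model id outside OpenClaw first, you can call OpenRouter's OpenAI-compatible endpoint directly. The model slug below (google/gemini-2.0-flash-exp) is taken from the config above; check it against OpenRouter's model list before relying on it.

curl https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "google/gemini-2.0-flash-exp",
    "messages": [{"role": "user", "content": "Hello"}]
  }'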

💡 OpenRouter Advantages

  • Multi-provider Price Comparison: Choose Gemini models from different providers
  • Unified Interface: Access multiple models with just one API Key
  • OpenAI Format Compatibility: Switch models without modifying code

Method 4: Custom Provider Integration

If a platform provides an OpenAI-compatible /v1/chat/completions interface, it can be configured as a custom provider.

Configuration File Example

File Path: ~/.openclaw/openclaw.json

{
  agents: {
    defaults: {
      model: { primary: "custom-gemini/gemini-1.5-flash" },
    },
  },
  models: {
    mode: "merge",
    providers: {
      "custom-gemini": {
        baseUrl: "https://your-gemini-proxy.example.com/v1",
        apiKey: "${CUSTOM_GEMINI_API_KEY}",
        api: "openai-completions",
        models: [
          {
            id: "gemini-1.5-flash",
            name: "Gemini 1.5 Flash",
            contextWindow: 1000000,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}

Configuration Details

Field                     Description
baseUrl                   API base URL
api                       Interface type: openai-completions
apiKey                    API key; supports ${ENV_VAR} syntax
models[].contextWindow    Context window size
models[].maxTokens        Maximum output tokens
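
Before pointing OpenClaw at a custom provider, it is worth confirming that the proxy really speaks the OpenAI chat format. The sketch below reuses the placeholder URL and environment variable from the example above; substitute your actual endpoint and key.

curl https://your-gemini-proxy.example.com/v1/chat/completions \
  -H "Authorization: Bearer $CUSTOM_GEMINI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemini-1.5-flash",
    "messages": [{"role": "user", "content": "ping"}]
  }'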

Thinking Blocks Configuration

Gemini 2.0 models support Thinking Blocks for extended reasoning. OpenClaw handles the compatibility details automatically, and you can also set a thinking budget explicitly per model:

File Path: ~/.openclaw/openclaw.json

{
  agents: {
    defaults: {
      models: {
        "google-generative-ai/gemini-2.0-flash-exp": {
          params: {
            thinkingConfig: {
              thinkingBudget: 8192,
            },
          },
        },
      },
    },
  },
}
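
For reference, the config above corresponds roughly to the raw request below. In the public Gemini API the thinking settings sit under generationConfig; whether a given model version accepts thinkingConfig, and whether your provider forwards it unchanged, are assumptions to verify.

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash-exp:generateContent?key=$GOOGLE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "contents": [{"parts": [{"text": "Think through this step by step."}]}],
    "generationConfig": {"thinkingConfig": {"thinkingBudget": 8192}}
  }'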

Troubleshooting

401 Error / Invalid API Key

  • Verify that the API Key is correct and has not been revoked
  • Check whether the key has permission to access the Generative Language API (a quick check is sketched below)
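
A simple way to test a key is to list the models it can see; a successful response returns a model list, while a 400/403 points at an invalid key or the Generative Language API not being enabled for it.

curl "https://generativelanguage.googleapis.com/v1beta/models?key=$GOOGLE_API_KEY"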

Rate Limits

  • Gemini has tier-based rate limits (Free tier: 15 RPM)
  • Monitor usage in the Google AI Studio dashboard
  • Solution: route requests through Defapi for higher rate limits

Tool Schema Errors

  • Google does not support certain JSON Schema keywords (e.g., patternProperties, additionalProperties)
  • OpenClaw automatically strips unsupported keywords
  • For complex schemas, consider breaking them down into simpler tool definitions (see the illustrative snippets below)
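
For example, a tool parameter schema like the first snippet below may be rejected, while the flattened second version passes. Both are illustrative and not taken from a specific OpenClaw tool.

# May trigger schema errors: relies on additionalProperties
{ "type": "object", "additionalProperties": { "type": "string" } }

# Safer: enumerate the fields explicitly
{
  "type": "object",
  "properties": { "name": { "type": "string" }, "value": { "type": "string" } },
  "required": ["name"]
}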

Project ID Required

  • Set the GOOGLE_CLOUD_PROJECT or GOOGLE_CLOUD_PROJECT_ID environment variable
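
For example, in the shell that launches OpenClaw:

export GOOGLE_CLOUD_PROJECT="your-project-id"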

Cost Comparison

Integration Method     Gemini 1.5 Flash Input    Gemini 1.5 Flash Output
Google Official        $0.075/M                  $0.30/M
Defapi (Half Price)    ~$0.0375/M                ~$0.15/M

For high-frequency calling scenarios, using Defapi can significantly reduce costs.
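
As a rough worked example: 100M input tokens and 20M output tokens per month on Gemini 1.5 Flash cost about $13.50 at the official rates (100 × $0.075 + 20 × $0.30), versus roughly $6.75 through Defapi.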

Updated February 12, 2026