ChromeClaw is an AI Agent extension running in the Chrome browser. It requires no server and allows you to chat with various Large Language Models directly through the browser sidebar. This article will detail how to configure different LLM APIs in ChromeClaw, provide multiple integration solutions, and highlight the cost-effective Defapi platform.
Introduction
Built on the OpenClaw project, ChromeClaw uses modern browser sandboxing to provide powerful AI chat capabilities while ensuring security. Unlike traditional AI applications that require server deployment, ChromeClaw needs only a Chrome extension installation and an API Key to work.
ChromeClaw supports major LLM providers including OpenAI, Anthropic, Google, and the aggregator platform OpenRouter. More importantly, it supports custom OpenAI-compatible endpoints, meaning you can connect to any third-party AI service that follows standard protocols, including the Defapi platform recommended in this article.
Official API Integration Methods
OpenAI Configuration
In the ChromeClaw Options page, go to the Models panel and click Add Model to configure:
- Provider: Select `openai`
- Model ID: Enter `gpt-4o` (or another model you need)
- API Key: Enter your OpenAI API Key
- Base URL: Keep the default `https://api.openai.com/v1`
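As a quick sanity check outside ChromeClaw, you can verify that a key and endpoint pair works before saving the configuration. The sketch below (Python, standard library only; the key and prompt are placeholders) builds the same `v1/chat/completions` request an OpenAI-compatible client would send:

```python
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request without sending it."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("https://api.openai.com/v1", "sk-...", "gpt-4o", "Say hello")
print(req.full_url)  # https://api.openai.com/v1/chat/completions
# urllib.request.urlopen(req) would actually send it; that requires a valid key.
```

The same request shape works against any OpenAI-compatible base URL, which is what makes the custom-endpoint configurations later in this article possible.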
Anthropic Claude Configuration
Anthropic's Claude series models are known for their excellent reasoning capabilities and long context processing:
- Provider: Select `anthropic`
- Model ID: Enter `claude-sonnet-4-5-20250929` (recommended) or `claude-opus-4.5`
- API Key: Enter your Anthropic API Key
- Base URL: Keep the default `https://api.anthropic.com`
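Anthropic's native protocol differs from OpenAI's: the Messages API authenticates with an `x-api-key` header plus an `anthropic-version` header, and requires `max_tokens` in the body. A sketch of the request shape (values are placeholders):

```python
import json

def build_anthropic_request(api_key: str, model: str, prompt: str) -> dict:
    """Shape of a native Anthropic v1/messages request (headers + JSON body)."""
    return {
        "url": "https://api.anthropic.com/v1/messages",
        "headers": {
            "x-api-key": api_key,                 # not a Bearer token
            "anthropic-version": "2023-06-01",    # required API version header
            "content-type": "application/json",
        },
        "body": {
            "model": model,
            "max_tokens": 1024,  # required by the Messages API
            "messages": [{"role": "user", "content": prompt}],
        },
    }

req = build_anthropic_request("sk-ant-...", "claude-sonnet-4-5-20250929", "Say hello")
print(json.dumps(req["body"], indent=2))
```

ChromeClaw handles this translation internally when the provider is set to `anthropic`; the sketch only illustrates what goes over the wire.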
Google Gemini Configuration
Google's Gemini series offers outstanding multimodal capabilities:
- Provider: Select `google`
- Model ID: Enter `gemini-2.0-flash` (recommended) or `gemini-1.5-pro`
- API Key: Enter your Google AI Studio API Key
- Base URL: Keep the default `https://generativelanguage.googleapis.com`
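Gemini's REST API is different again: the model name goes into the URL path and the key is passed as a query parameter rather than a header. A minimal sketch of how the endpoint and body are assembled (the key is a placeholder):

```python
def build_gemini_url(base_url: str, model: str, api_key: str) -> str:
    """Gemini addresses the model in the path and takes the key as ?key=..."""
    return f"{base_url}/v1beta/models/{model}:generateContent?key={api_key}"

def build_gemini_body(prompt: str) -> dict:
    """Gemini wraps the prompt in a contents/parts structure."""
    return {"contents": [{"parts": [{"text": prompt}]}]}

url = build_gemini_url("https://generativelanguage.googleapis.com", "gemini-2.0-flash", "AIza...")
print(url)
```

Again, ChromeClaw performs this mapping for you when the provider is `google`; the sketch shows why the Base URL for Gemini has no `/v1` suffix.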
OpenRouter Aggregator Platform
OpenRouter is an aggregator platform that exposes models from many AI providers behind a unified API. Its advantage is access to models from multiple providers through a single endpoint, with flexible billing.
Configuration method:
- Provider: Select `openrouter`
- Model ID: Enter e.g. `openai/gpt-4o` or `anthropic/claude-3.5-sonnet`
- API Key: Enter your OpenRouter API Key
- Base URL: Keep `https://openrouter.ai/api/v1`
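OpenRouter speaks the OpenAI chat protocol, so compared with a direct OpenAI setup only the base URL, the key, and the provider-prefixed model ID change (the key below is a placeholder):

```python
import json
import urllib.request

# Same OpenAI-compatible request shape; only endpoint, key, and model differ.
body = json.dumps({
    "model": "anthropic/claude-3.5-sonnet",  # provider-prefixed model ID
    "messages": [{"role": "user", "content": "Say hello"}],
}).encode("utf-8")
req = urllib.request.Request(
    "https://openrouter.ai/api/v1/chat/completions",
    data=body,
    headers={"Authorization": "Bearer sk-or-...", "Content-Type": "application/json"},
    method="POST",
)
print(req.full_url)
```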
Defapi Platform Integration (Recommended)
The Defapi platform provides access to mainstream AI APIs at half the official price, compatible with the OpenAI, Anthropic, and Google Gemini protocols. For users looking to control costs, it is a highly cost-effective choice.
Core Advantages of Defapi
- Price Advantage: Only 50% of the official price, significantly reducing usage costs.
- Multi-Protocol Support: Full support for v1/chat/completions, v1/messages, v1beta, and other interfaces.
- Rich Models: Covers mainstream models such as OpenAI, Anthropic, and Google.
- Stable and Reliable: Provides interfaces compatible with official APIs.
Using the v1/chat/completions Interface
This is the most universal integration method, suitable for most models supporting OpenAI-compatible protocols:
- Provider: Select `custom`
- Model ID: e.g., `openai/gpt-4o` or `google/gemini-2.0-flash`
- API Key: Enter your Defapi API Key
- Base URL: `https://api.defapi.org/v1`
- API Format: Select `openai-completions`
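Whichever OpenAI-compatible endpoint you point ChromeClaw at (official, OpenRouter, or Defapi's `v1/chat/completions`), the response comes back in the same shape. The sketch below parses a minimal sample response; the `usage` numbers are illustrative:

```python
import json

# A minimal OpenAI-compatible chat response, as any of the endpoints
# configured above would return it (values are illustrative).
sample = json.loads("""
{
  "choices": [
    {"message": {"role": "assistant", "content": "Hello!"}, "finish_reason": "stop"}
  ],
  "usage": {"prompt_tokens": 9, "completion_tokens": 3, "total_tokens": 12}
}
""")

def extract_reply(response: dict) -> str:
    """Pull the assistant text out of an OpenAI-style chat completion."""
    return response["choices"][0]["message"]["content"]

print(extract_reply(sample))  # Hello!
```

This protocol uniformity is exactly why the `custom` provider with the `openai-completions` format can front so many different backends.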
Using the v1/messages Interface
If you need to use Anthropic Claude models, you can configure them via a custom endpoint:
- Provider: Select `custom`
- Model ID: e.g., `anthropic/claude-sonnet-4-5-20250929`
- API Key: Enter your Defapi API Key
- Base URL: `https://api.defapi.org/v1`
- API Format: Select `openai-completions`
Using the v1beta Interface
For Google Gemini models, Defapi provides a specialized interface:
- Provider: Select `custom`
- Model ID: e.g., `google/gemini-2.0-flash`
- API Key: Enter your Defapi API Key
- Base URL: `https://api.defapi.org`
- API Format: Select `openai-completions`
Verifying Configuration
After completing the configuration, ChromeClaw provides a connection test feature. In the model configuration dialog, click the "Test Connection" button; the system will send a test request to the API endpoint and return the result.
If the test is successful, you will see a green check icon; if it fails, a red cross will be displayed with an error message. Common errors include:
- 401 Unauthorized: Invalid or expired API Key.
- 404 Not Found: Incorrect Base URL configuration or the Model ID does not exist.
- Connection Failed: Network issues or the Base URL is unreachable.
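A connection test like this can be sketched as a simple status-code check. The mapping below mirrors the error list above; the function name is illustrative, not ChromeClaw's actual code:

```python
from typing import Optional

def describe_test_result(status: Optional[int]) -> str:
    """Map an HTTP status (None = network failure) to a user-facing message."""
    if status is None:
        return "Connection Failed: network issue or the Base URL is unreachable"
    if status == 401:
        return "401 Unauthorized: invalid or expired API Key"
    if status == 404:
        return "404 Not Found: wrong Base URL or the Model ID does not exist"
    if 200 <= status < 300:
        return "OK: the endpoint accepted the test request"
    return f"HTTP {status}: unexpected response from the endpoint"

print(describe_test_result(401))
```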
Internal Mechanisms of ChromeClaw
ChromeClaw's core model integration logic lives in the `model-adapter.ts` file. When a user initiates a chat request, the system converts the configuration into the format required by the pi-mono library via the `chatModelToPiModel` function. This conversion sets the corresponding API endpoint and authentication headers based on the provider type.
It is worth noting that ChromeClaw uses streaming responses to provide a real-time chat experience. When the model starts generating a reply, the UI displays content incrementally rather than waiting for the full response. This improves the user experience and lets you see output as soon as it is produced.
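In the OpenAI-compatible protocol, streaming arrives as server-sent events, each `data:` line carrying a JSON delta. A minimal accumulator (independent of ChromeClaw's actual implementation) looks like this:

```python
import json

def accumulate_stream(sse_text: str) -> str:
    """Concatenate content deltas from an OpenAI-style SSE stream."""
    reply = []
    for line in sse_text.splitlines():
        if not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload == "[DONE]":  # end-of-stream sentinel
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        reply.append(delta.get("content", ""))
    return "".join(reply)

sample = (
    'data: {"choices":[{"delta":{"content":"Hel"}}]}\n'
    'data: {"choices":[{"delta":{"content":"lo!"}}]}\n'
    'data: [DONE]\n'
)
print(accumulate_stream(sample))  # Hello!
```

A real client would read these lines off the HTTP response as they arrive and render each delta immediately, which is what produces the step-by-step display.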
ChromeClaw also supports Tool Calling, meaning the model can actively call external tools you have configured to complete tasks. To enable this feature, check the `supportsTools` option in the model configuration.
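In the OpenAI-compatible protocol, tools are declared as JSON schemas sent alongside the chat request; the model may then respond with a tool call instead of plain text. The declaration below is a sketch for a hypothetical page-reading tool (the tool name and parameters are illustrative, not ChromeClaw's actual tools):

```python
read_page_tool = {
    "type": "function",
    "function": {
        "name": "read_page",  # hypothetical tool name
        "description": "Read the text content of the current browser tab",
        "parameters": {
            "type": "object",
            "properties": {
                "selector": {
                    "type": "string",
                    "description": "CSS selector limiting which part of the page to read",
                },
            },
            "required": [],
        },
    },
}

# The tool list travels with the chat request; when the model emits a
# tool_call, the client runs the tool and feeds the result back.
request_body = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Summarize this page"}],
    "tools": [read_page_tool],
}
print(request_body["tools"][0]["function"]["name"])
```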
Common Application Scenarios
1. Code Review and Optimization
Combined with ChromeClaw's browser automation tools, you can have it analyze the code structure of current web pages and suggest optimizations. This serves as a real-time code assistant for frontend developers.
2. Content Creation Assistance
Whether writing blog posts, generating marketing copy, or translating multilingual content, ChromeClaw can become your creative partner once configured with a model that supports long contexts.
3. Intelligent Customer Service Systems
ChromeClaw supports WhatsApp and Telegram channel integration. You can configure it as an automated customer service bot to handle FAQs.
4. Data Analysis and Visualization
Through browser automation tools, ChromeClaw can scrape web page data and perform preliminary analysis, helping you quickly gain information insights.
5. Deep Research Tasks
ChromeClaw features a built-in Deep Research tool that can perform multi-step autonomous research—automatically searching, retrieving, and synthesizing information—suitable for scenarios requiring an in-depth understanding of a topic.
Local Model Options
If you have specific privacy requirements or wish to avoid external APIs entirely, ChromeClaw also supports local model execution. Accelerated by WebGPU, lightweight local models such as Qwen3-0.6B can run directly in the browser.
To enable local models, set the `CEB_ENABLE_WEBGPU_MODELS=true` environment variable in the project configuration. The advantage of local models is complete offline availability, though browser performance and model size constraints make them weaker than cloud models.