CoPaw API Integration Guide: Quick Start with GPT, Gemini, and Claude

March 3, 2026

This guide introduces how to integrate various LLM (Large Language Model) providers into CoPaw. CoPaw is a Personal AI Assistant that supports multiple chat channels, scheduled tasks, and custom skills.

Introduction

CoPaw is an open-source AI assistant framework built on AgentScope, supporting platforms such as DingTalk, Feishu, QQ, Discord, and iMessage. Whether you're building a personal AI assistant for your business or developing a multi-channel customer service bot, CoPaw can flexibly connect to your preferred LLM provider.

One of CoPaw's core advantages is its support for multiple model providers via an OpenAI-compatible API interface. This guide will walk you through the integration process, highlighting cost-effective solutions.

Overview

CoPaw uses AgentScope’s OpenAIChatModel to communicate with different LLM providers. All remote providers use the OpenAI-compatible /v1/chat/completions endpoint, making it straightforward to switch between providers or add new ones.

Configuration is stored in the providers.json file in the CoPaw working directory (by default ~/.copaw/.secret/providers.json).

Method 1: Defapi

Defapi offers API access to major LLM providers at approximately half the official price. It is fully compatible with the OpenAI protocol, making it an excellent choice for integrating with CoPaw.

Why Choose Defapi?

  • Cost-Effective: Approximately 50% cheaper than the official API
  • Protocol Compatibility: Fully compatible with OpenAI's /v1/chat/completions
  • Multiple Models: Access to GPT-4o, Claude Sonnet, Gemini Flash, and more
  • Simple Configuration: No complex setup required

Configuration

Edit the CoPaw configuration file to add Defapi as a custom provider:

{
  "custom_providers": {
    "defapi": {
      "id": "defapi",
      "name": "Defapi",
      "default_base_url": "https://api.defapi.cn/v1",
      "api_key_prefix": "",
      "base_url": "https://api.defapi.cn/v1",
      "api_key": "your-defapi-api-key",
      "models": [
        {"id": "gpt-4o-mini", "name": "GPT-4o Mini"},
        {"id": "gpt-4o", "name": "GPT-4o"},
        {"id": "claude-sonnet-4-20250514", "name": "Claude Sonnet 4"},
        {"id": "gemini-2.0-flash", "name": "Gemini 2.0 Flash"}
      ],
      "chat_model": "OpenAIChatModel"
    }
  },
  "active_llm": {
    "provider_id": "defapi",
    "model": "gpt-4o-mini"
  }
}
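Before starting CoPaw, it is worth sanity-checking that active_llm points at a provider and model that are actually defined. The helper below is an illustrative sketch (not part of CoPaw) that assumes the custom_providers layout shown above:

```python
def validate_providers(config: dict) -> list[str]:
    """Return a list of problems with the active_llm selection, if any."""
    errors = []
    active = config.get("active_llm", {})
    provider_id = active.get("provider_id")
    provider = config.get("custom_providers", {}).get(provider_id)
    if provider is None:
        errors.append(f"unknown provider_id: {provider_id!r}")
        return errors
    model_ids = {m["id"] for m in provider.get("models", [])}
    if active.get("model") not in model_ids:
        errors.append(f"model {active.get('model')!r} not listed under {provider_id!r}")
    return errors

# Validate the Defapi configuration from this guide (trimmed to the
# fields the check actually reads).
config = {
    "custom_providers": {
        "defapi": {"id": "defapi", "models": [{"id": "gpt-4o-mini", "name": "GPT-4o Mini"}]},
    },
    "active_llm": {"provider_id": "defapi", "model": "gpt-4o-mini"},
}
print(validate_providers(config))  # []
```

To check a real file, pass it json.load(open(path)) instead of the inline dict.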

Available Models on Defapi

Model                        Use Case
gpt-4o-mini                  Quick, cost-effective responses
gpt-4o                       Complex reasoning and coding
claude-sonnet-4-20250514     Balanced performance
gemini-2.0-flash             Multimodal, rapid responses

Method 2: Built-in Providers

CoPaw comes pre-configured with multiple built-in providers that can be enabled simply by supplying an API key.

OpenAI

{
  "providers": {
    "openai": {
      "base_url": "https://api.openai.com/v1",
      "api_key": "sk-your-key"
    }
  },
  "active_llm": {
    "provider_id": "openai",
    "model": "gpt-4o-mini"
  }
}

ModelScope

{
  "providers": {
    "modelscope": {
      "base_url": "https://api-inference.modelscope.cn/v1",
      "api_key": "ms-your-key"
    }
  }
}

DashScope

{
  "providers": {
    "dashscope": {
      "base_url": "https://dashscope.aliyuncs.com/compatible-mode/v1",
      "api_key": "sk-your-key"
    }
  }
}

Method 3: OpenRouter

OpenRouter aggregates multiple LLM providers through a unified API:

{
  "providers": {
    "openrouter": {
      "base_url": "https://openrouter.ai/api/v1",
      "api_key": "sk-or-your-key"
    }
  },
  "active_llm": {
    "provider_id": "openrouter",
    "model": "openrouter/google/gemini-2.0-flash"
  }
}

Method 4: Local Models

This method is suitable for scenarios requiring offline use or data privacy:

Ollama

{
  "providers": {
    "ollama": {
      "base_url": "http://localhost:11434/v1",
      "api_key": ""
    }
  },
  "active_llm": {
    "provider_id": "ollama",
    "model": "llama3"
  }
}

llama.cpp

pip install 'copaw[llamacpp]'
copaw models download Qwen/Qwen3-4B-GGUF

Verify Configuration

After configuration, validate your settings:

  1. Start CoPaw: copaw app
  2. Open Console: http://127.0.0.1:8088/
  3. Go to Settings → Models
  4. Test provider connection

Alternatively, use the CLI:

copaw models

Internal Mechanism

How CoPaw Connects to LLM Providers

CoPaw’s provider system works through multiple layers:

  1. Configuration Layer (providers/store.py): Reads and writes providers.json, managing API Keys and endpoint URLs.

  2. Registration Layer (providers/registry.py): Maintains a list of built-in and custom providers and their model definitions.

  3. Model Factory (agents/model_factory.py): Creates the appropriate ChatModel instances based on provider configurations.

  4. Agent Runtime (agents/react_agent.py): Uses models in a ReAct (Reasoning + Action) loop to generate responses.

The OpenAI-compatible interface means that when you send a message, CoPaw constructs a POST request to {base_url}/chat/completions, carrying your message and parameters. The response is then parsed to generate the assistant's reply.
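That request can be sketched with nothing but the standard library. The payload fields follow the public chat completions schema; the helper name is ours, not CoPaw's, and the request is assembled but never sent:

```python
import json

def build_chat_request(base_url: str, api_key: str, model: str,
                       messages: list[dict], max_tokens: int = 512) -> dict:
    """Assemble (but do not send) an OpenAI-compatible chat completion request."""
    return {
        "url": f"{base_url}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": messages,
            "max_tokens": max_tokens,
        }),
    }

req = build_chat_request(
    "https://api.defapi.cn/v1", "your-defapi-api-key", "gpt-4o-mini",
    [{"role": "user", "content": "Hello"}],
)
print(req["url"])  # https://api.defapi.cn/v1/chat/completions
```

Because every provider in this guide speaks the same schema, only base_url, api_key, and model differ between them.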

Token Management

CoPaw implements intelligent token management:

  • The context window varies by provider (e.g., GPT-4o: 128K, Claude: 200K)
  • Memory compression helps manage long conversations
  • You can configure max_tokens to control the length of responses
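CoPaw's memory compression is internal to the framework, but the underlying idea can be shown with a naive stand-in. The ~4-characters-per-token estimate below is a rough assumption for English text, not how any provider actually tokenizes:

```python
def estimate_tokens(text: str) -> int:
    """Very rough heuristic: roughly 4 characters per token for English text."""
    return max(1, len(text) // 4)

def trim_to_budget(messages: list[dict], budget: int) -> list[dict]:
    """Drop the oldest non-system messages until the estimate fits the budget."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    while rest and sum(estimate_tokens(m["content"]) for m in system + rest) > budget:
        rest.pop(0)  # discard the oldest conversational turn first
    return system + rest
```

Real compression would summarize dropped turns rather than discard them, but the budget arithmetic is the same.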

Common Use Cases

1. Customer Service Automation

Deploy CoPaw on DingTalk or Discord, utilizing AI-driven responses to handle customer inquiries. Connect with GPT-4o for complex issues, or use smaller models for common FAQs.

2. Content Generation Pipeline

Schedule CoPaw to generate daily newsletters or social media content, using the heartbeat feature to trigger automatic content creation at specified times.

3. Meeting Summaries

Integrate with calendar APIs to automatically summarize meeting notes and action items. The Claude model excels at understanding nuanced discussions.

4. Knowledge Base Q&A

Use CoPaw’s memory search feature to build a retrieval-augmented generation (RAG) system. Upload documents and allow users to query via any supported channel.
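As a toy illustration of the retrieval step only: the sketch below scores documents by plain word overlap (Jaccard similarity), which is far simpler than the vector search a production RAG setup or CoPaw's memory search would use:

```python
def score(query: str, doc: str) -> float:
    """Jaccard similarity between the lowercase word sets of query and doc."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q | d) if q | d else 0.0

def top_k(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

docs = [
    "refund policy for orders",
    "shipping times for european destinations",
    "office holiday schedule",
]
print(top_k("what is the refund policy", docs, k=1))  # ['refund policy for orders']
```

The retrieved passages would then be prepended to the user's question before it is sent to the model.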

5. Multi-language Translation Service

Leverage Gemini’s multi-language capabilities for real-time translation across channels, configuring different language pairs for various user groups.

Troubleshooting

Common Issues

401 Unauthorized

  • Verify that the API key is correct
  • Check if the key has expired or been revoked

Connection Timeout

  • Check your network connection
  • Ensure firewall settings allow outbound HTTPS

Model Not Found

  • Confirm that the model ID is correct
  • Check if the model is available in your region

Ollama Connection Failed

  • Ensure that ollama serve is running
  • Verify that the base URL is http://localhost:11434/v1
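One quick way to tell "Ollama is down" apart from "CoPaw is misconfigured" is to probe the endpoint directly. This stdlib-only sketch assumes the OpenAI-compatible server answers GET {base_url}/models, which recent Ollama versions do:

```python
import urllib.error
import urllib.request

def provider_reachable(base_url: str = "http://localhost:11434/v1",
                       timeout: float = 2.0) -> bool:
    """Return True if an OpenAI-compatible /models endpoint answers at base_url."""
    try:
        with urllib.request.urlopen(f"{base_url}/models", timeout=timeout):
            return True
    except (urllib.error.URLError, OSError, ValueError):
        return False
```

If this returns False while ollama serve is running, check that nothing else is bound to port 11434 and that the base URL includes the /v1 suffix.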

Quick Start Commands

# Install CoPaw
pip install copaw

# Initialize with default configuration
copaw init --defaults

# Start the application
copaw app

# Configure your preferred LLM provider
# Edit ~/.copaw/.secret/providers.json

Conclusion

CoPaw offers flexible LLM integration through its OpenAI-compatible interface. For cost-effective AI assistant deployment, Defapi provides the best value with half-price access to major models. Built-in providers cover most use cases, while custom providers support integration with any OpenAI-compatible API.

Start with Defapi for the best balance of cost and performance, then explore other providers based on your specific needs.

Updated March 3, 2026