# AI Providers
ChatShell supports 40+ AI providers out of the box. Configure as many as you like and switch between them per conversation.
## Adding a Provider

1. Go to Settings → Providers
2. Click Add Provider
3. Select the provider from the list
4. Enter your API key
5. Click Save
API keys are encrypted with AES-256-GCM and stored securely in local SQLite, with the master encryption key in your OS keychain — not in plain text files.
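The storage scheme described above can be sketched as follows. This is a minimal illustration, not ChatShell's actual code: the function names are hypothetical, and in the real app the 32-byte master key would be fetched from the OS keychain rather than generated ad hoc.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Encrypt an API key with AES-256-GCM under a 32-byte master key.
function encryptKey(masterKey: Buffer, plaintext: string): Buffer {
  const iv = randomBytes(12); // standard 96-bit GCM nonce
  const cipher = createCipheriv("aes-256-gcm", masterKey, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  // Store nonce + ciphertext + auth tag together as one blob (e.g. a SQLite column)
  return Buffer.concat([iv, ciphertext, cipher.getAuthTag()]);
}

// Reverse the layout above: 12-byte nonce, ciphertext, 16-byte auth tag.
function decryptKey(masterKey: Buffer, blob: Buffer): string {
  const iv = blob.subarray(0, 12);
  const tag = blob.subarray(blob.length - 16);
  const ciphertext = blob.subarray(12, blob.length - 16);
  const decipher = createDecipheriv("aes-256-gcm", masterKey, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```

GCM's auth tag means a tampered blob fails decryption instead of silently yielding a corrupted key.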
## Supported Providers

### Cloud Providers

Section titled “Cloud Providers”| Provider | Notes |
|---|---|
| OpenAI | GPT-5.2, GPT-5 Mini, GPT-5 Nano, and more |
| Anthropic | Claude Opus 4.6, Claude Sonnet 4.6, Claude Haiku 4.5 |
| Google Gemini | Gemini 3.1 Pro, Gemini 3 Flash |
| Azure OpenAI | Azure-hosted OpenAI models |
| OpenRouter | Unified API for 100+ models |
| DeepSeek | DeepSeek Chat, DeepSeek Reasoner |
| Groq | Groq Compound, GPT-OSS (fast inference) |
| Mistral | Mistral Large, Mistral Small, Codestral |
| Perplexity | Sonar, Sonar Pro, Sonar Reasoning |
| Together AI | Llama, Mistral, and more |
| xAI | Grok 4.1 |
| Cohere | Command R+ |
| Moonshot | Kimi models |
| Hyperbolic | Open-source model hosting |
| MiniMax | MiniMax M2.5 |
| MiniMax CN | MiniMax M2.5 (China endpoint) |
| GitHub Models | Models via GitHub Marketplace |
| Fireworks AI | Fast open-source model inference |
| NVIDIA NIM | NVIDIA-optimized model inference |
| Hugging Face | Inference Endpoints |
| Cerebras | Ultra-fast inference on Wafer-Scale Engine |
| Galadriel | Decentralized AI inference |
| Mira | Mira Network models |
| Alibaba Qwen | Qwen 3.5 Plus, Qwen 3 Max, and more |
| Zhipu AI | GLM-5, GLM-4.7, GLM-4.6 |
| 01.AI | Yi Lightning |
| Baichuan | Baichuan series models |
| Doubao | ByteDance Doubao models |
| Tencent Hunyuan | Hunyuan Turbo, Pro, Standard, Lite |
| Tencent Cloud TI | DeepSeek R1, DeepSeek V3 |
| Baidu Cloud | ERNIE series models |
| SiliconFlow | Open-source model hosting |
| ModelScope | Alibaba ModelScope inference |
| StepFun | Step series models |
| Xirang | CTYun Xirang models |
| Xiaomi MiMo | MiMo V2 Flash |
### Local Providers

| Provider | Description |
|---|---|
| Ollama | Run open-source models locally (auto-discovers installed models) |
| LM Studio | OpenAI-compatible local inference |
| GPUStack | GPU cluster management with OpenAI-compatible API |
| OVMS | OpenVINO Model Server for optimized local inference |
## Custom Endpoints

ChatShell supports any provider with an OpenAI- or Anthropic-compatible API. Add a Custom Provider with:

- Base URL — The API endpoint (e.g., https://api.example.com/v1)
- API Key — Your authentication key (leave blank if not required)
- Models — Enter model IDs manually or use auto-discovery
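To make the "OpenAI-compatible" contract concrete, here is a sketch of how such a custom provider entry maps onto a chat request. The interface and function names are hypothetical, not ChatShell's internals; the URL path, `Authorization` header, and request body follow the standard OpenAI chat-completions shape.

```typescript
// Hypothetical shape of a custom provider entry; field names are illustrative.
interface CustomProvider {
  baseUrl: string; // e.g. https://api.example.com/v1
  apiKey?: string; // left blank if the endpoint needs no auth
}

// Build an OpenAI-compatible POST {baseUrl}/chat/completions request.
function buildChatRequest(p: CustomProvider, model: string, prompt: string) {
  const headers: Record<string, string> = { "Content-Type": "application/json" };
  if (p.apiKey) headers["Authorization"] = `Bearer ${p.apiKey}`; // no key → no auth header
  return {
    url: `${p.baseUrl}/chat/completions`,
    headers,
    body: JSON.stringify({ model, messages: [{ role: "user", content: prompt }] }),
  };
}
```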
## Auto-Discovery

For 30+ providers (e.g., Ollama, OpenRouter), ChatShell can automatically fetch the list of available models. Just configure the base URL (and API key where required) and ChatShell will populate the model list for you.
If auto-discovery doesn’t find the model you need, you can always add it manually by entering the model ID directly.
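For OpenAI-compatible providers, auto-discovery typically amounts to a `GET {baseUrl}/models` call, which returns `{ "data": [{ "id": "..." }, ...] }`. The sketch below assumes that response shape; the function names are illustrative, not ChatShell's API.

```typescript
// Extract plain model IDs from an OpenAI-style /models response,
// skipping malformed entries rather than failing.
function parseModelList(json: unknown): string[] {
  const data = (json as { data?: { id?: unknown }[] }).data ?? [];
  return data
    .map((m) => m.id)
    .filter((id): id is string => typeof id === "string");
}

// Fetch and parse the model list for one provider.
async function discoverModels(baseUrl: string, apiKey?: string): Promise<string[]> {
  const res = await fetch(`${baseUrl}/models`, {
    headers: apiKey ? { Authorization: `Bearer ${apiKey}` } : {},
  });
  if (!res.ok) throw new Error(`discovery failed: HTTP ${res.status}`);
  return parseModelList(await res.json());
}
```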
## API Connectivity Check

Before you start chatting, you can verify that your provider is working correctly. Go to Settings → Providers, select the provider, and use the connectivity check to test the connection right from the settings page.
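One common way to implement such a check (a sketch under assumptions, not necessarily what ChatShell does) is a cheap authenticated request against the models endpoint with a short timeout, so a dead or mistyped endpoint fails fast instead of hanging:

```typescript
// Returns true if the provider answers with a 2xx within the timeout.
async function checkConnectivity(
  baseUrl: string,
  apiKey?: string,
  timeoutMs = 5000,
): Promise<boolean> {
  try {
    const res = await fetch(`${baseUrl}/models`, {
      headers: apiKey ? { Authorization: `Bearer ${apiKey}` } : {},
      signal: AbortSignal.timeout(timeoutMs), // abort slow/dead endpoints
    });
    return res.ok; // a 401 here usually means a bad API key; 404, a wrong base URL
  } catch {
    return false; // network error or timeout
  }
}
```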
## Model Capability Awareness

ChatShell ships with a bundled model capabilities database (sourced from models.dev) that knows what each model can do — vision, tool use, image generation, and more. The UI adapts automatically:
- Vision models accept image attachments via paste or drag-and-drop
- Text-only models disable image input to prevent errors
- Image generation models (e.g., Gemini) can generate images directly in chat
You can refresh the capabilities database from settings to stay up to date with newly released models.
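Conceptually, the database is a lookup from model ID to capability flags that the UI consults before enabling a feature. The snippet below is a hand-written illustration of that idea; the entries and field names are hypothetical, not the real models.dev schema.

```typescript
interface ModelCapabilities {
  vision: boolean;
  tools: boolean;
  imageGeneration: boolean;
}

// A tiny hand-made slice standing in for the bundled database.
const capabilities: Record<string, ModelCapabilities> = {
  "vision-model-example": { vision: true, tools: true, imageGeneration: false },
  "image-gen-model-example": { vision: true, tools: true, imageGeneration: true },
  "text-only-model-example": { vision: false, tools: false, imageGeneration: false },
};

// The UI checks this before accepting a pasted or dropped image;
// unknown models are treated conservatively as text-only.
function canAttachImages(modelId: string): boolean {
  return capabilities[modelId]?.vision ?? false;
}
```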
## Switching Models

You can change the model for any conversation at any time using the model selector in the conversation settings panel. This overrides the assistant’s default model for that conversation only.
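The override semantics reduce to a simple fallback: use the conversation's model if one was chosen, otherwise the assistant's default. A minimal sketch with hypothetical type names:

```typescript
interface Assistant { defaultModel: string }
interface Conversation { modelOverride?: string }

// Per-conversation override wins; otherwise fall back to the assistant default.
function effectiveModel(assistant: Assistant, conv: Conversation): string {
  return conv.modelOverride ?? assistant.defaultModel;
}
```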
## Context Window

Each model has a maximum context window. ChatShell lets you configure how much of the conversation history to include in each request — helping you stay within limits for longer conversations. Configure this in the conversation settings.
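A common way to apply such a limit (a sketch, not ChatShell's actual algorithm) is to walk the history from newest to oldest and keep messages until an estimated token budget is exhausted, so the most recent context always survives. The chars/4 token estimate below is a rough heuristic; real clients use a proper tokenizer.

```typescript
interface Message { role: string; content: string }

// Keep the most recent messages whose combined estimated token count
// fits within maxTokens; older messages are dropped first.
function trimHistory(messages: Message[], maxTokens: number): Message[] {
  const kept: Message[] = [];
  let used = 0;
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = Math.ceil(messages[i].content.length / 4); // rough estimate
    if (used + cost > maxTokens) break;
    kept.unshift(messages[i]); // preserve chronological order
    used += cost;
  }
  return kept;
}
```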