AI Providers

ChatShell supports 40+ AI providers out of the box. Configure as many as you like and switch between them per conversation.

  1. Go to Settings → Providers
  2. Click Add Provider
  3. Select the provider from the list
  4. Enter your API key
  5. Click Save

API keys are encrypted with AES-256-GCM and stored securely in local SQLite, with the master encryption key in your OS keychain — not in plain text files.
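ChatShell's actual implementation is internal, but the scheme described above can be sketched as follows. This is an illustrative example only: the function names are invented, and it uses the third-party `cryptography` package's AESGCM primitive; the real master key would come from the OS keychain rather than being generated in-process.

```python
# Sketch (assumed names, not ChatShell's code): sealing an API key with
# AES-256-GCM before it is written to local storage.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_api_key(master_key: bytes, api_key: str) -> bytes:
    """Return nonce || ciphertext. The master key lives in the OS keychain."""
    nonce = os.urandom(12)  # 96-bit nonce, the standard size for GCM
    ciphertext = AESGCM(master_key).encrypt(nonce, api_key.encode(), None)
    return nonce + ciphertext

def decrypt_api_key(master_key: bytes, blob: bytes) -> str:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(master_key).decrypt(nonce, ciphertext, None).decode()

# Stand-in for the keychain-held master key:
master = AESGCM.generate_key(bit_length=256)
blob = encrypt_api_key(master, "sk-example-123")
```

The stored blob is opaque without the master key, which is why keeping that key in the OS keychain matters.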

| Provider | Notes |
| --- | --- |
| OpenAI | GPT-5.2, GPT-5 Mini, GPT-5 Nano, and more |
| Anthropic | Claude Opus 4.6, Claude Sonnet 4.6, Claude Haiku 4.5 |
| Google Gemini | Gemini 3.1 Pro, Gemini 3 Flash |
| Azure OpenAI | Azure-hosted OpenAI models |
| OpenRouter | Unified API for 100+ models |
| DeepSeek | DeepSeek Chat, DeepSeek Reasoner |
| Groq | Groq Compound, GPT-OSS (fast inference) |
| Mistral | Mistral Large, Mistral Small, Codestral |
| Perplexity | Sonar, Sonar Pro, Sonar Reasoning |
| Together AI | Llama, Mistral, and more |
| xAI | Grok 4.1 |
| Cohere | Command R+ |
| Moonshot | Kimi models |
| Hyperbolic | Open-source model hosting |
| MiniMax | MiniMax M2.5 |
| MiniMax CN | MiniMax M2.5 (China endpoint) |
| GitHub Models | Models via GitHub Marketplace |
| Fireworks AI | Fast open-source model inference |
| NVIDIA NIM | NVIDIA-optimized model inference |
| Hugging Face | Inference Endpoints |
| Cerebras | Ultra-fast inference on Wafer-Scale Engine |
| Galadriel | Decentralized AI inference |
| Mira | Mira Network models |
| Alibaba Qwen | Qwen 3.5 Plus, Qwen 3 Max, and more |
| Zhipu AI | GLM-5, GLM-4.7, GLM-4.6 |
| 01.AI | Yi Lightning |
| Baichuan | Baichuan series models |
| Doubao | ByteDance Doubao models |
| Tencent Hunyuan | Hunyuan Turbo, Pro, Standard, Lite |
| Tencent Cloud TI | DeepSeek R1, DeepSeek V3 |
| Baidu Cloud | ERNIE series models |
| SiliconFlow | Open-source model hosting |
| ModelScope | Alibaba ModelScope inference |
| StepFun | Step series models |
| Xirang | CTYun Xirang models |
| Xiaomi MiMo | MiMo V2 Flash |
ChatShell also supports local inference backends:

| Provider | Description |
| --- | --- |
| Ollama | Run open-source models locally (auto-discovers installed models) |
| LM Studio | OpenAI-compatible local inference |
| GPUStack | GPU cluster management with OpenAI-compatible API |
| OVMS | OpenVINO Model Server for optimized local inference |

ChatShell supports any provider with an OpenAI- or Anthropic-compatible API. Add a Custom Provider with:

  • Base URL — The API endpoint (e.g., https://api.example.com/v1)
  • API Key — Your authentication key (leave blank if not required)
  • Models — Enter model IDs manually or use auto-discovery
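To illustrate what the three fields map to, here is a minimal sketch of the request an OpenAI-compatible endpoint expects. The base URL, API key, and model ID below are placeholders, and the request is built but not sent.

```python
# Sketch: how Custom Provider settings translate into an OpenAI-compatible
# chat request. All values below are placeholders.
import json
import urllib.request

BASE_URL = "https://api.example.com/v1"  # Base URL field
API_KEY = "sk-example"                   # API Key field (may be empty)
MODEL = "my-model"                       # Models field

def build_chat_request(prompt: str) -> urllib.request.Request:
    headers = {"Content-Type": "application/json"}
    if API_KEY:  # providers that need no auth simply omit the header
        headers["Authorization"] = f"Bearer {API_KEY}"
    body = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions", data=body, headers=headers,
        method="POST",
    )

req = build_chat_request("Hello")
# urllib.request.urlopen(req) would actually send it; omitted here.
```

An Anthropic-compatible provider differs mainly in endpoint path and auth header, but the same three settings apply.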

For 30+ providers (e.g., Ollama, OpenRouter), ChatShell can automatically fetch the list of available models. Just configure the base URL (and API key where required) and ChatShell will populate the model list for you.

If auto-discovery doesn’t find the model you need, you can always add it manually by entering the model ID directly.
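For OpenAI-compatible providers, auto-discovery typically relies on the standard `GET {base_url}/models` endpoint, which returns a list of model objects. A small sketch of parsing that response (the sample payload is invented):

```python
# Sketch: extracting model IDs from an OpenAI-compatible /models response.
import json

def parse_model_ids(models_response: str) -> list[str]:
    """Return the "id" of each entry in the response's "data" array."""
    return [m["id"] for m in json.loads(models_response).get("data", [])]

# The shape a local server such as Ollama or LM Studio might return:
sample = '{"object": "list", "data": [{"id": "llama3"}, {"id": "qwen2"}]}'
models = parse_model_ids(sample)
```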

Before you start chatting, you can verify your provider is working correctly. Go to Settings → Providers, select the provider, and use the connectivity check to test the connection right from the settings page.

ChatShell ships with a bundled model capabilities database (sourced from models.dev) that knows what each model can do — vision, tool use, image generation, and more. The UI adapts automatically:

  • Vision models accept image attachments via paste or drag-and-drop
  • Text-only models disable image input to prevent errors
  • Image generation models (e.g., Gemini) can generate images directly in chat
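A rough sketch of how such a capabilities lookup can gate UI features. The model IDs and capability flags below are invented for illustration, not taken from the bundled database:

```python
# Sketch: a per-model capability record driving UI behavior.
# Data is illustrative, not from the models.dev database.
CAPABILITIES = {
    "vision-model":    {"vision": True,  "tools": True,  "image_generation": False},
    "text-only-model": {"vision": False, "tools": True,  "image_generation": False},
}

def allows_image_attachments(model_id: str) -> bool:
    # Unknown models default to text-only, the safe choice.
    return CAPABILITIES.get(model_id, {}).get("vision", False)
```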

You can refresh the capabilities database from settings to stay up to date with newly released models.

You can change the model for any conversation at any time using the model selector in the conversation settings panel. This overrides the assistant’s default model for that conversation only.

Each model has a maximum context window. ChatShell lets you configure how much of the conversation history to include in each request — helping you stay within limits for longer conversations. Configure this in the conversation settings.
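The trimming idea can be sketched as follows. This is a simplified stand-in: a real implementation would count tokens with the model's tokenizer, whereas this sketch approximates cost by word count.

```python
# Sketch: keep the most recent messages that fit within a budget,
# dropping the oldest first. Word count stands in for token count.
def trim_history(messages: list[dict], max_tokens: int) -> list[dict]:
    kept, used = [], 0
    for msg in reversed(messages):           # walk newest to oldest
        cost = len(msg["content"].split())   # crude token estimate
        if used + cost > max_tokens:
            break                            # everything older is dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))              # restore chronological order

history = [
    {"role": "user", "content": "one two three"},
    {"role": "assistant", "content": "four five"},
    {"role": "user", "content": "six"},
]
trimmed = trim_history(history, 3)  # budget fits only the last two messages
```

Trimming from the oldest end preserves the recent turns the model needs most, at the cost of earlier context.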