The WeCareRemote AI assistant gives you control over which AI model powers your conversations and how you authenticate API requests. This page explains the configuration options available to you as a user or administrator.

Documentation Index
Fetch the complete documentation index at: https://docs.wcr.is/llms.txt
Use this file to discover all available pages before exploring further.
Choosing your AI model
WeCareRemote supports a wide range of AI models from multiple providers. Your administrator sets the platform default, but you can override the model on individual API requests by passing the model field in your request body.
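A minimal sketch of a request body that overrides the platform default. The field names other than model and the model identifier shown here are illustrative assumptions, not guaranteed values for your instance:

```python
import json

# Hypothetical chat request body; "messages" structure and the model name
# are assumptions for illustration -- check your instance's API reference.
payload = {
    "messages": [{"role": "user", "content": "Summarize this document."}],
    # Override the administrator's platform default for this request only.
    "model": "claude-3-5-sonnet",
}

body = json.dumps(payload)
```

If the model field is omitted, the request falls back to the default your administrator configured.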
API authentication
When accessing the AI assistant through the REST API, every request must include a Bearer token in the Authorization header.
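A sketch of constructing an authenticated request, using only the Python standard library. The base URL and endpoint path are placeholders, not real WeCareRemote values; substitute your instance's actual address:

```python
import os
import urllib.request

# Read the token from the environment rather than hard-coding it.
# "WCR_API_TOKEN" is an assumed variable name for illustration.
token = os.environ.get("WCR_API_TOKEN", "example-token")

# Placeholder URL -- replace with your instance's real API endpoint.
request = urllib.request.Request(
    "https://example.wecareremote.invalid/api/v1/chat",
    headers={
        # Every REST API request must carry a Bearer token.
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    },
    method="POST",
)
```

Requests without a valid Authorization header should be rejected by the API, so treat the token like a password and keep it out of source control.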
Supported LLM providers
WeCareRemote connects to the following AI providers. Your organization’s administrator controls which providers are active on your instance.
OpenAI
Provides GPT-4o, GPT-4o-mini, GPT-4 Turbo, and GPT-3.5 Turbo. Well-suited for general conversation, summarization, and document Q&A.
Anthropic
Provides Claude 3.5 Sonnet, Claude 3 Haiku, and Claude 3 Opus. Known for nuanced, instruction-following responses.
Google
Provides Gemini 1.5 Pro and Gemini 1.5 Flash via Google AI Studio or Vertex AI. Strong multimodal and long-context performance.
Groq
Provides fast inference for open-weight models including Llama 3.1 and Mixtral. Ideal for low-latency interactions.
Ollama (local)
Runs open-weight models locally on your organization’s own hardware. No data leaves your environment — the recommended option for privacy-sensitive use cases.
AWS Bedrock
Provides managed access to Anthropic Claude, Meta Llama, Amazon Titan, and other models through Amazon’s infrastructure.
Azure OpenAI
Provides GPT models through your organization’s Azure OpenAI resource, with enterprise-grade compliance and data residency controls.
OpenRouter
Aggregates models from many providers — including OpenAI, Anthropic, Google, Meta, and Mistral — through a single API.
DeepSeek
Provides capable reasoning and coding-focused models via the DeepSeek API.
Custom OpenAI-compatible endpoints
Connect any API that follows the OpenAI API format — including self-hosted models, fine-tuned endpoints, or third-party providers not listed above.
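A sketch of what an OpenAI-compatible request looks like. The /v1/chat/completions path and payload shape follow the public OpenAI API format; the base URL and model name below are placeholders for whatever your self-hosted or third-party server exposes:

```python
import json

# Placeholder base URL for a self-hosted OpenAI-compatible server
# (for example, a local inference gateway).
base_url = "http://localhost:8000/v1"

# Standard OpenAI-format endpoint path and request body.
url = f"{base_url}/chat/completions"
payload = {
    "model": "llama-3.1-8b-instruct",  # whichever model the endpoint serves
    "messages": [{"role": "user", "content": "Hello"}],
}

body = json.dumps(payload)
```

Any server that accepts this shape of request can be plugged in as a custom endpoint, which is what makes self-hosted and fine-tuned models interchangeable with the hosted providers listed above.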