The agent endpoint is the primary way to interact with the WeCareRemote AI assistant programmatically. Send a message and receive an AI-generated response, with optional conversation history for context. The agent supports both single-turn requests and multi-turn conversations using thread IDs.

## Documentation index

Fetch the complete documentation index at: https://docs.wcr.is/llms.txt
Use this file to discover all available pages before exploring further.
## Invoke agent

Send a message to the AI assistant and receive a response.

Endpoint: `POST /`
Base URL: `http://your-instance:8080`
Authentication: Bearer token required (see Authentication).
## Request body

- The user message to send to the AI assistant (required).
- `thread_id` (optional): a conversation thread ID to maintain history across requests. If omitted, the agent starts a new conversation with no prior context.
- An optional model override. If omitted, the assistant uses the platform default model configured by your administrator.
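As an illustration, a request body might look like the following. Only `thread_id` is named on this page; `message` and `model` are placeholder field names, not confirmed by the documentation:

```json
{
  "message": "What services do you offer?",
  "thread_id": "thread_abc123",
  "model": "your-model-name"
}
```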
## Response

- The AI assistant's response text.
- `thread_id`: the thread ID for this conversation. Pass this value in subsequent requests to continue the same conversation.
- The name of the model that generated the response.
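A response body might therefore look like this sketch; `response` and `model` are placeholder field names (only `thread_id` is confirmed by this page), and the values are illustrative:

```json
{
  "response": "Here is what I can help with ...",
  "thread_id": "thread_abc123",
  "model": "your-model-name"
}
```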
## Full examples
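This page does not include a concrete example inline, so the following is a minimal Python sketch using only the standard library. The `message` and `model` field names are assumptions (only `thread_id` appears elsewhere on this page), and the base URL and token are placeholders:

```python
import json
import urllib.request

BASE_URL = "http://your-instance:8080"  # placeholder: your WeCareRemote instance
TOKEN = "YOUR_API_TOKEN"                # placeholder: your Bearer token

def build_invoke_request(message, thread_id=None, model=None):
    """Build a POST / request for the agent endpoint.

    Field names other than thread_id are assumptions; check your
    instance's actual schema.
    """
    body = {"message": message}
    if thread_id is not None:
        body["thread_id"] = thread_id
    if model is not None:
        body["model"] = model
    return urllib.request.Request(
        BASE_URL + "/",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_invoke_request("Hello, what can you help me with?")
# Sending it requires a running instance:
# with urllib.request.urlopen(req) as resp:
#     data = json.load(resp)
#     print(data["thread_id"])
```

The same request can be made with any HTTP client; the essential parts are the `Authorization: Bearer` header and a JSON body.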
## Streaming responses

For real-time output as the model generates its response, use the streaming variant of the endpoint. The server sends tokens as they are produced rather than waiting for the full response.

> The streaming endpoint path may vary. Contact your WeCareRemote administrator if the `/stream` path is not available on your instance.

## Continuing a conversation
To maintain conversation history across multiple requests, pass the `thread_id` returned from a previous response back into the next request.
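The threading pattern can be sketched as follows. A stand-in `fake_send` function replaces the real HTTP call so the flow is visible without a live instance; field names other than `thread_id` are assumptions:

```python
def chat_turns(send, messages):
    """Send messages in order, passing the thread_id from each response
    into the next request so the conversation keeps its context."""
    thread_id = None
    replies = []
    for msg in messages:
        body = {"message": msg}
        if thread_id is not None:
            body["thread_id"] = thread_id
        resp = send(body)
        thread_id = resp["thread_id"]
        replies.append(resp["response"])
    return replies, thread_id

seen = []  # request bodies, to show what would go over the wire

def fake_send(body):
    """Stand-in for the real HTTP call; echoes the message back."""
    seen.append(body)
    return {
        "response": f"echo: {body['message']}",
        "thread_id": body.get("thread_id", "thread_1"),
        "model": "default",
    }

replies, tid = chat_turns(fake_send, ["Hi", "And then?"])
```

Note that the first request carries no `thread_id` (a new conversation starts), while every later request reuses the one returned by the server.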
## Error responses

| Status code | Cause |
|---|---|
| 401 Unauthorized | The Bearer token is missing or invalid. |
| 422 Unprocessable Entity | The request body is invalid or missing required fields. |
| 500 Internal Server Error | The model encountered an error or an internal failure occurred. |