

The agent endpoint is the primary way to interact with the WeCareRemote AI assistant programmatically. Send a message and receive an AI-generated response, with optional conversation history for context. The agent supports both single-turn requests and multi-turn conversations using thread IDs.

Invoke agent

Send a message to the AI assistant and receive a response.

Endpoint: POST /
Base URL: http://your-instance:8080
Authentication: Bearer token required (see Authentication).

Request body

{
  "message": "What services are available for refugees?",
  "thread_id": "optional-thread-id-for-history"
}
message (string, required)
The user message to send to the AI assistant.

thread_id (string, optional)
Conversation thread ID used to maintain history across requests. If omitted, the agent starts a new conversation with no prior context.

model (string, optional)
Model override. If omitted, the assistant uses the platform default model configured by your administrator.
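
For reference, a request that sets all three fields might look like the following. The model name is illustrative; available models depend on how your instance is configured.

```json
{
  "message": "What services are available for refugees?",
  "thread_id": "session-abc123",
  "model": "gpt-4o"
}
```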

Response

{
  "response": "WeCareRemote offers several services including...",
  "thread_id": "abc123",
  "model": "gpt-4o"
}
response (string)
The AI assistant's response text.

thread_id (string)
The thread ID for this conversation. Pass this value in subsequent requests to continue the same conversation.

model (string)
The name of the model that generated the response.

Full examples

curl -X POST http://localhost:8080/ \
  -H "Authorization: Bearer your-token" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "What services are available for refugees?",
    "thread_id": "session-abc123"
  }'
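
The same request can be made from Python with httpx, which is used elsewhere on this page. This is a sketch: the base URL, token, and thread ID are the placeholders from the curl example above, and `build_invoke_body` and `ask_agent` are illustrative helper names, not part of the API.

```python
def build_invoke_body(message, thread_id=None):
    # Mirror the curl example: include thread_id only when provided,
    # so a first message starts a fresh conversation.
    body = {"message": message}
    if thread_id is not None:
        body["thread_id"] = thread_id
    return body


def ask_agent(client, message, thread_id=None):
    # client: an httpx.Client configured with base_url="http://localhost:8080"
    # and the "Authorization: Bearer your-token" header (placeholders).
    resp = client.post("/", json=build_invoke_body(message, thread_id))
    resp.raise_for_status()
    return resp.json()
```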

Streaming responses

For real-time output as the model generates its response, use the streaming variant of the endpoint. The server sends tokens as they are produced rather than waiting for the full response.
import httpx

with httpx.stream(
    "POST",
    "http://localhost:8080/stream",
    headers={
        "Authorization": "Bearer your-token",
        "Content-Type": "application/json"
    },
    json={"message": "Tell me about housing support services."}
) as response:
    for chunk in response.iter_text():
        print(chunk, end="", flush=True)
The streaming endpoint path may vary. Contact your WeCareRemote administrator if the /stream path is not available on your instance.

Continuing a conversation

To maintain conversation history across multiple requests, pass the thread_id returned from a previous response back into the next request.
import httpx

client = httpx.Client(
    base_url="http://localhost:8080",
    headers={"Authorization": "Bearer your-token"}
)

# First message — no thread_id needed
first = client.post("/", json={"message": "Hello, I need help finding housing."}).json()
thread_id = first["thread_id"]

# Follow-up — pass thread_id to continue the conversation
second = client.post("/", json={
    "message": "What documents do I need?",
    "thread_id": thread_id
}).json()

print(second["response"])

Error responses

401 Unauthorized: The Bearer token is missing or invalid.
422 Unprocessable Entity: The request body is invalid or missing required fields.
500 Internal Server Error: The model encountered an error or an internal failure occurred.
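
A minimal sketch of mapping these status codes to readable errors on the client side. The `explain_status` and `check_response` helpers are illustrative, not part of the API; the messages are taken from the causes listed above.

```python
STATUS_CAUSES = {
    401: "Unauthorized: the Bearer token is missing or invalid.",
    422: "Unprocessable Entity: the request body is invalid or missing required fields.",
    500: "Internal Server Error: the model encountered an error or an internal failure occurred.",
}


def explain_status(code: int) -> str:
    # Return the documented cause for known codes, a generic message otherwise.
    return STATUS_CAUSES.get(code, f"Unexpected status {code}")


def check_response(response) -> dict:
    # response: an httpx.Response from POST /. Raise with a documented cause
    # on failure; otherwise return the parsed JSON body.
    if response.status_code != 200:
        raise RuntimeError(explain_status(response.status_code))
    return response.json()
```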