Chat Completion
https://api.trylune.ai/chat/completions
Use the chat completions endpoint to generate text responses either from individual Lunes or from Tycho Agent. If you pass individual Lunes, the response will be grounded in relevant context from each Lune's knowledge base. If Tycho Agent is used, Tycho will automatically select the most relevant Lunes for the response.
Request body
The API chat completions endpoint supports the following fields:
stream
boolean or null Optional (defaults to false):
- Toggles response streaming on or off
lunes
list[string] or null Optional (defaults to null):
- Specifies the Lune(s) to use for the request (see Models). If no lunes are specified, Tycho Agent will be used.
include_references
boolean or null Optional (defaults to false):
- Toggles whether web URL references are streamed back in the response content
response_format
object Optional:
- An object specifying the format that the model must output. See Structured Output for more details.
model
string Required:
- Specifies the model to use for the request (see Models)
- Only tycho is supported at this time.
messages
array Required:
- Only user, system, and assistant message types are supported
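As a minimal sketch, the fields above can be assembled into a request body like this (the helper name and its defaults are illustrative, not part of the API):

```python
def build_chat_payload(messages, stream=False, lunes=None,
                       include_references=False, response_format=None):
    # "model" and "messages" are required; the other fields mirror the
    # optional request-body fields and their documented defaults.
    payload = {
        "model": "tycho",  # only "tycho" is supported at this time
        "messages": messages,
        "stream": stream,
        "include_references": include_references,
    }
    if lunes is not None:
        # Omit "lunes" entirely to let Tycho Agent select relevant Lunes.
        payload["lunes"] = lunes
    if response_format is not None:
        payload["response_format"] = response_format
    return payload


payload = build_chat_payload(
    [{"role": "user", "content": "What is an AIMessageChunk?"}],
    stream=True,
)
```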
Request Example
curl https://api.trylune.ai/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $LUNE_API_KEY" \
-d '{
"messages": [
{"role": "user", "content": "What is the format of the AIMessageChunk object in Langchain?"}
],
"model": "tycho",
"stream": true
}'
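The same request can be issued from Python. This sketch uses only the standard library and assumes, like the curl example, that the API key is in the LUNE_API_KEY environment variable:

```python
import json
import os
import urllib.request

API_URL = "https://api.trylune.ai/chat/completions"


def build_request(messages, model="tycho", stream=True):
    # Build an authenticated POST request equivalent to the curl example.
    body = json.dumps({"messages": messages, "model": model, "stream": stream})
    return urllib.request.Request(
        API_URL,
        data=body.encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + os.environ.get("LUNE_API_KEY", ""),
        },
        method="POST",
    )


req = build_request(
    [{"role": "user",
      "content": "What is the format of the AIMessageChunk object in Langchain?"}]
)
# To send: urllib.request.urlopen(req), then read the response body
# (or iterate over it line by line when stream=True).
```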
Response Examples
Streamed responses will be returned as a series of chat.completion.chunk objects.
Non-streamed responses will be returned as a chat.completion object.
{
"id": "chatcmpl-1c3f71d633e043649c718dd577f38699",
"object": "chat.completion",
"created": 1729989163,
"model": "tycho",
"system_fingerprint": "fp_44709d6fcb",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "The AIMessageChunk object in LangChain represents a chunk or partial message generated by an AI model."
},
"logprobs": null,
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 0,
"completion_tokens": 286,
"total_tokens": 286,
"completion_tokens_details": {
"reasoning_tokens": 0
}
},
"url_references": [
"https://python.langchain.com/api_reference/core/messages/langchain_core.messages.ai.add_ai_message_chunks.html"
]
}
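A non-streamed response like the one above can be unpacked as follows. This is a minimal sketch assuming the chat.completion shape shown; whether url_references appears only when include_references is set is an assumption, so the code treats the field as optional:

```python
import json

# The non-streamed chat.completion example above, abridged:
response_json = """
{
  "id": "chatcmpl-1c3f71d633e043649c718dd577f38699",
  "object": "chat.completion",
  "model": "tycho",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The AIMessageChunk object in LangChain represents a chunk or partial message generated by an AI model."
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "url_references": [
    "https://python.langchain.com/api_reference/core/messages/langchain_core.messages.ai.add_ai_message_chunks.html"
  ]
}
"""

resp = json.loads(response_json)
choice = resp["choices"][0]
answer = choice["message"]["content"]
# Treat url_references as optional in case it is omitted.
references = resp.get("url_references", [])
```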