Chat Completion

https://api.trylune.ai/chat/completions

Request body

The chat completions endpoint supports the following request body fields (a combined example appears after the list):

memory boolean or null Optional (defaults to false):

  • Toggles the memory feature on or off.

stream boolean or null Optional (defaults to false):

  • Toggles response streaming on or off.

lunes list[string] or null Optional (defaults to null):

  • Specifies the Lune(s) to use for the request (see Models).

include_references boolean or null Optional (defaults to false):

  • Toggles whether web URL references are streamed back in the response content.

include_web_chat boolean or null Optional (defaults to false):

  • Toggles whether the "continue in web" button is streamed back in the response content.

response_format object Optional:

  • An object specifying the format that the model must output. See Structured Output for more details.

model string Required:

  • Specifies the model to use for the request (see Models).
  • Only tycho is supported at this time.

messages array Required:

  • Only the user, system, and assistant message roles are supported.
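
Putting the optional fields together, a request body might look like the following sketch. The value in lunes is a hypothetical placeholder ID, and response_format is omitted here because its shape is described in Structured Output.

{
  "model": "tycho",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the format of the AIMessageChunk object in Langchain?"}
  ],
  "stream": true,
  "memory": false,
  "lunes": ["lune_abc123"],
  "include_references": true,
  "include_web_chat": false
}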

Request Example

curl https://api.trylune.ai/chat/completions \
   -H "Content-Type: application/json" \
   -H "Authorization: Bearer $LUNE_API_KEY" \
   -d '{
         "messages": [
           {"role": "user", "content": "What is the format of the AIMessageChunk object in Langchain?"}
         ],
         "model": "tycho",
         "stream": true
       }'
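
The same request can be made from Python. The sketch below uses the requests library with "stream": false, so the full chat.completion object shown under Response Examples comes back in a single payload; it assumes the API key is available in the LUNE_API_KEY environment variable.

import os
import requests

# Minimal non-streaming request; mirrors the curl example above.
response = requests.post(
    "https://api.trylune.ai/chat/completions",
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ['LUNE_API_KEY']}",
    },
    json={
        "model": "tycho",
        "stream": False,
        "messages": [
            {
                "role": "user",
                "content": "What is the format of the AIMessageChunk object in Langchain?"
            }
        ],
    },
)
response.raise_for_status()

completion = response.json()
# The assistant's reply is in choices[0].message.content (see Response Examples).
print(completion["choices"][0]["message"]["content"])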

Response Examples

Streamed responses are returned as a series of chat.completion.chunk objects; a consumption sketch follows the example below.

Non-streamed responses are returned as a single chat.completion object, as in the following example:

{
  "id": "chatcmpl-1c3f71d633e043649c718dd577f38699", 
  "object": "chat.completion", 
  "created": 1729989163, 
  "model": "tycho", 
  "system_fingerprint": "fp_44709d6fcb", 
  "choices": [
      {
          "index": 0, 
          "message": {
          "role": "assistant", 
          "content": "The AIMessageChunk object in LangChain represents a chunk or partial message generated by an AI model."
          }, 
          "logprobs": null, 
          "finish_reason": "stop"
      }
  ], 
  "usage": {
      "prompt_tokens": 0,
      "completion_tokens": 286, 
      "total_tokens": 286, 
      "completion_tokens_details": {
          "reasoning_tokens": 0
      }
  }, 
  "url_references": [
      "https://python.langchain.com/api_reference/core/messages/langchain_core.messages.ai.add_ai_message_chunks.html"
  ]
}
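
When "stream": true is set, the response arrives incrementally. The sketch below is one way to consume it and assumes an OpenAI-compatible server-sent-events transport (each chunk on a "data:" line, with a final "data: [DONE]" marker) and a delta field on each chunk's choice; if Lune's streaming format differs, adapt the parsing accordingly.

import json
import os
import requests

# Hedged streaming consumer; assumes OpenAI-style "data:" lines carrying
# chat.completion.chunk objects and a terminating "data: [DONE]" marker.
with requests.post(
    "https://api.trylune.ai/chat/completions",
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ['LUNE_API_KEY']}",
    },
    json={
        "model": "tycho",
        "stream": True,
        "messages": [
            {"role": "user", "content": "What is the format of the AIMessageChunk object in Langchain?"}
        ],
    },
    stream=True,
) as response:
    response.raise_for_status()
    for raw_line in response.iter_lines():
        if not raw_line:
            continue
        line = raw_line.decode("utf-8")
        if line.startswith("data: "):
            line = line[len("data: "):]
        if line.strip() == "[DONE]":
            break
        chunk = json.loads(line)  # a chat.completion.chunk object
        # Assumed chunk shape: choices[0].delta.content holds the next text fragment.
        delta = chunk["choices"][0].get("delta", {})
        print(delta.get("content") or "", end="", flush=True)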