Chat Vision
Authorizations
Bearer authentication header of the form Bearer &lt;token&gt;, where &lt;token&gt; is your auth token.
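A minimal sketch of constructing this header in Python, assuming the token is read from an environment variable named API_TOKEN (a placeholder name, not part of this API):

```python
import os

# Build the Authorization header; API_TOKEN is a placeholder environment
# variable holding your auth token.
headers = {
    "Authorization": f"Bearer {os.environ['API_TOKEN']}",
    "Content-Type": "application/json",
}
```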
Body
The name of the vision model to use.
A list of messages comprising the conversation so far.
The maximum number of tokens to generate in the completion.
The size to which to truncate chat prompts.
What sampling temperature to use. Valid range: 0 < x < 2.
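As a rough illustration of the body fields above, the sketch below assembles a request using a hypothetical OpenAI-style schema. The field names (model, messages, max_tokens, prompt_truncate_len, temperature), the endpoint URL, and the environment variable are assumptions, not confirmed names; check the request schema for the exact names your deployment expects.

```python
# Illustrative request assembly. Field names and the endpoint URL are
# assumptions drawn from common OpenAI-compatible conventions.
import os
import requests

API_URL = "https://api.example.com/v1/chat/completions"  # placeholder endpoint

body = {
    "model": "your-vision-model",   # the vision model to use
    "messages": [                   # conversation so far
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.png"}},
            ],
        }
    ],
    "max_tokens": 256,              # max tokens to generate in the completion
    "prompt_truncate_len": 4096,    # size to which to truncate chat prompts (assumed name)
    "temperature": 0.7,             # sampling temperature, 0 < x < 2
}

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {os.environ['API_TOKEN']}"},
    json=body,
)
print(resp.json()["choices"][0]["message"]["content"])
```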
Response
A unique identifier of the response.
The Unix time in seconds when the response was generated.
The model used for the chat completion.
The list of chat completion choices.
Usage statistics.
For streaming responses, the usage field is included in the very last response chunk returned. Note that returning usage for streaming requests is a popular LLM API extension, so if you use a popular LLM SDK you may be able to access the field directly even if it is not present in the SDK's type signatures.
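As a sketch of how the final chunk's usage can be read when streaming, the snippet below parses an SSE-style stream with plain requests. The endpoint URL, field names, and stream framing are assumptions based on common OpenAI-compatible APIs rather than confirmed details of this endpoint.

```python
# Collect the usage statistics from the last streamed chunk (assumed SSE framing).
import json
import os
import requests

resp = requests.post(
    "https://api.example.com/v1/chat/completions",   # placeholder endpoint
    headers={"Authorization": f"Bearer {os.environ['API_TOKEN']}"},
    json={
        "model": "your-vision-model",
        "messages": [{"role": "user", "content": "Describe the weather."}],
        "stream": True,
    },
    stream=True,
)

usage = None
for line in resp.iter_lines():
    if not line or not line.startswith(b"data: "):
        continue
    payload = line[len(b"data: "):]
    if payload == b"[DONE]":
        break
    chunk = json.loads(payload)
    # Only the very last content chunk is expected to carry usage statistics.
    if chunk.get("usage"):
        usage = chunk["usage"]

print(usage)  # e.g. prompt, completion, and total token counts
```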