# OpenAIChatCompletionRequestParams
Request parameters for creating a chat completion. Based on the OpenAI Chat Completions API.
## Example Usage
```typescript
import { OpenAIChatCompletionRequestParams } from "@meetkai/mka1/models/components";

let value: OpenAIChatCompletionRequestParams = {
  model: "meetkai:functionary-urdu-mini-pak",
  messages: [
    {
      role: "user",
      content: "What is the capital of France?",
    },
  ],
  maxTokens: 100,
  temperature: 0.7,
};
```

## Fields
| Field | Type | Required | Description |
|---|---|---|---|
| model | string | ✔️ | ID of the model to use. You can use the provider:model format or just the model name with a default provider. |
| messages | components.OpenAIRequestMessage[] | ✔️ | A list of messages comprising the conversation so far. At least one message is required. |
| tools | components.OpenAIToolDefinition[] | ➖ | A list of tools the model may call. Use this to provide function definitions the model can invoke. See the tool-calling sketch after this table. |
| toolChoice | any | ➖ | Controls which (if any) tool is called by the model. 'none' means the model will not call any tool, 'auto' lets the model pick, and 'required' forces a tool call. |
| stream | boolean | ➖ | If set, partial message deltas are sent as server-sent events. Note: this field is ignored by the dedicated streaming endpoint and is used only by OpenAI-compatible client endpoints. See the streaming sketch after this table. |
| n | number | ➖ | How many chat completion choices to generate for each input message. Defaults to 1. |
| maxTokens | number | ➖ | The maximum number of tokens that can be generated in the chat completion. The combined length of input tokens and generated tokens is limited by the model's context length. |
| temperature | number | ➖ | What sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. |
| topP | number | ➖ | An alternative to sampling with temperature, called nucleus sampling, where the model considers only the tokens comprising the top_p probability mass. So 0.1 means only the tokens in the top 10% probability mass are considered. |
| frequencyPenalty | number | ➖ | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. |
| presencePenalty | number | ➖ | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. |
| seed | number | ➖ | If specified, the system makes a best effort to sample deterministically. Determinism is not guaranteed, but the same seed should typically return similar results. |
| stop | components.Stop | ➖ | Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence. |
| responseFormat | components.OpenAIChatCompletionRequestParamsResponseFormat | ➖ | An object specifying the format that the model must output. Setting it to { 'type': 'json_object' } enables JSON mode. See the JSON-mode sketch after this table. |
| logprobs | boolean | ➖ | Whether to return log probabilities of the output tokens. If true, the log probability of each output token is returned in the content of message. |
| topLogprobs | number | ➖ | An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true when this parameter is used. See the sampling-controls sketch after this table. |
| user | string | ➖ | A unique identifier representing your end user, which can help monitor and detect abuse. Also used for usage tracking and analytics. |
| streamOptions | components.StreamOptions | ➖ | Options for the streaming response. Only set this when stream is true. |
| parallelToolCalls | boolean | ➖ | Whether to enable parallel function calling during tool use. |
| reasoningEffort | components.ReasoningEffort | ➖ | Constrains effort on reasoning for reasoning models. Lower effort results in faster responses and fewer reasoning tokens. Supported values: 'none', 'minimal', 'low', 'medium', 'high', 'xhigh', or null. |
| autoRouting | boolean | ➖ | When true, the gateway analyzes request complexity and automatically routes between quantized, MoE, and dense variants of the requested model family. |
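
The tool-calling fields (tools, toolChoice, parallelToolCalls) work together: tools declares the functions, toolChoice steers whether and how they are invoked, and parallelToolCalls allows several calls in one turn. The sketch below assumes components.OpenAIToolDefinition follows the standard OpenAI function-tool layout (a `type`/`function` wrapper with JSON Schema `parameters`); the `get_weather` function is hypothetical, and the exact field names should be checked against the components.OpenAIToolDefinition docs.

```typescript
import {
  OpenAIChatCompletionRequestParams,
  OpenAIToolDefinition,
} from "@meetkai/mka1/models/components";

// Hypothetical tool; the nested shape assumes the OpenAI-style
// function-tool layout with JSON Schema parameters.
const getWeather: OpenAIToolDefinition = {
  type: "function",
  function: {
    name: "get_weather",
    description: "Get the current weather for a city",
    parameters: {
      type: "object",
      properties: { city: { type: "string" } },
      required: ["city"],
    },
  },
};

let value: OpenAIChatCompletionRequestParams = {
  model: "meetkai:functionary-urdu-mini-pak",
  messages: [
    { role: "user", content: "What is the weather in Paris?" },
  ],
  tools: [getWeather],
  toolChoice: "auto", // "none" | "auto" | "required", per the table above
  parallelToolCalls: true, // allow multiple tool calls in one turn
};
```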
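A hedged sketch of a streaming request. The includeUsage field is an assumption, mirroring OpenAI's stream_options.include_usage in this SDK's camelCase convention; consult components.StreamOptions for the actual shape.

```typescript
import { OpenAIChatCompletionRequestParams } from "@meetkai/mka1/models/components";

let value: OpenAIChatCompletionRequestParams = {
  model: "meetkai:functionary-urdu-mini-pak",
  messages: [
    { role: "user", content: "Tell me a short story." },
  ],
  stream: true, // deliver partial message deltas as server-sent events
  streamOptions: {
    includeUsage: true, // assumed field name; see components.StreamOptions
  },
};
```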
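For JSON mode, set responseFormat to { type: "json_object" } as described in the table above. As with the OpenAI API this request shape is based on, it is generally advisable to also instruct the model in the prompt to emit JSON.

```typescript
import { OpenAIChatCompletionRequestParams } from "@meetkai/mka1/models/components";

let value: OpenAIChatCompletionRequestParams = {
  model: "meetkai:functionary-urdu-mini-pak",
  messages: [
    { role: "system", content: "Reply with a single JSON object." },
    { role: "user", content: "List three primary colors." },
  ],
  responseFormat: { type: "json_object" }, // enables JSON mode
};
```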
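Finally, a sampling-controls sketch combining the log-probability and reproducibility fields. The string-array form of stop is an assumption about components.Stop (the OpenAI API accepts either a string or an array of strings); verify against the component's definition.

```typescript
import { OpenAIChatCompletionRequestParams } from "@meetkai/mka1/models/components";

let value: OpenAIChatCompletionRequestParams = {
  model: "meetkai:functionary-urdu-mini-pak",
  messages: [
    { role: "user", content: "Complete the sentence: The sky is" },
  ],
  logprobs: true, // required for topLogprobs to take effect
  topLogprobs: 5, // up to 20 alternatives per token position
  seed: 42, // best-effort deterministic sampling
  stop: ["\n"], // assumed string[] form of components.Stop
  maxTokens: 16,
};
```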