Llm.Responses
Overview
Available Operations
- create - Create an agent-powered response with tool support
- list - List all responses with pagination
- get - Retrieve response by ID with status and results
- delete - Permanently delete a response and its data
- cancel - Cancel an in-progress background response
- listInputItems - List paginated input items for a response
create
Create a new AI agent response using advanced language models with autonomous tool usage capabilities. The agent can access various tools including web search, file search, image generation, code interpreter, computer use simulation, and Model Context Protocol (MCP) integrations. Specify the model, input messages or prompts, select which tools to enable, configure modalities (text and audio), set output formats (text or JSON), choose between foreground or background processing, and enable streaming for real-time responses. Background responses run asynchronously and can be retrieved later by ID. The agent autonomously decides when and how to use tools to fulfill requests. Returns a response object containing the agent's output, tool usage logs, metadata, and processing status. Useful for building conversational AI assistants, automated workflows, research agents, and complex multi-step task automation.
Example Usage
```typescript
import { SDK } from "@meetkai/mka1";
import { EventStream } from "@meetkai/mka1/lib/event-streams.js";

const sdk = new SDK({
  serverURL: "https://api.example.com",
  bearerAuth: "<YOUR_BEARER_TOKEN_HERE>",
});

async function run() {
  const result = await sdk.llm.responses.create({
    model: "Taurus",
  });

  // Check if the response is an EventStream instance for union types
  if (result instanceof EventStream) {
    for await (const event of result) {
      // Handle the event
      console.log(event);
    }
  } else {
    console.log(result);
  }
}

run();
```

Standalone function
The standalone function version of this method:
```typescript
import { SDKCore } from "@meetkai/mka1/core.js";
import { llmResponsesCreate } from "@meetkai/mka1/funcs/llmResponsesCreate.js";
import { EventStream } from "@meetkai/mka1/lib/event-streams.js";

// Use `SDKCore` for best tree-shaking performance.
// You can create one instance of it to use across an application.
const sdk = new SDKCore({
  serverURL: "https://api.example.com",
  bearerAuth: "<YOUR_BEARER_TOKEN_HERE>",
});

async function run() {
  const res = await llmResponsesCreate(sdk, {
    model: "Taurus",
  });

  if (res.ok) {
    const { value: result } = res;

    // Check if the response is an EventStream instance for union types
    if (result instanceof EventStream) {
      for await (const event of result) {
        // Handle the event
        console.log(event);
      }
    } else {
      console.log(result);
    }
  } else {
    console.log("llmResponsesCreate failed:", res.error);
  }
}

run();
```

React hooks and utilities
This method can be used in React components through the following hooks and associated utilities.
Check out this guide for information about each of the utilities below and how to get started using React hooks.
```typescript
import {
  // Mutation hook for triggering the API call.
  useLlmResponsesCreateMutation
} from "@meetkai/mka1/react-query/llmResponsesCreate.js";
```

Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| request | components.ResponsesCreateRequest | ✔️ | The request object to use for the request. |
| options | RequestOptions | ➖ | Used to set various options for making HTTP requests. |
| options.fetchOptions | RequestInit | ➖ | Options that are passed to the underlying HTTP request. This can be used, for example, to inject extra headers. All Request options, except method and body, are allowed. |
| options.retries | RetryConfig | ➖ | Enables retrying HTTP requests under certain failure conditions. |
Response
Promise<operations.CreateResponseResponse>
Errors
| Error Type | Status Code | Content Type |
|---|---|---|
| errors.APIError | 4XX, 5XX | */* |
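Beyond the minimal `create` example above, the description mentions enabling tools and background processing. The sketch below shows how such a request might look; note that the `input`, `tools`, and `background` field names are assumptions inferred from the description, not confirmed properties of `components.ResponsesCreateRequest`.

```typescript
import { SDK } from "@meetkai/mka1";
import { EventStream } from "@meetkai/mka1/lib/event-streams.js";

const sdk = new SDK({
  serverURL: "https://api.example.com",
  bearerAuth: "<YOUR_BEARER_TOKEN_HERE>",
});

async function run() {
  // Hypothetical request body: `input`, `tools`, and `background` are assumed
  // field names based on the capabilities described above.
  const result = await sdk.llm.responses.create({
    model: "Taurus",
    input: "Research recent developments in battery chemistry and summarize them.",
    tools: [{ type: "web_search" }, { type: "code_interpreter" }],
    background: true, // run asynchronously; fetch the result later with get()
  });

  if (result instanceof EventStream) {
    // Streaming responses arrive as an event stream.
    for await (const event of result) {
      console.log(event);
    }
  } else {
    // For a background request, the returned object carries the ID needed to
    // retrieve the finished response later via sdk.llm.responses.get.
    console.log(result);
  }
}

run();
```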
list
Retrieve a paginated list of all agent responses for the authenticated user, ordered by creation date. Returns response objects containing their status (in_progress, completed, failed, cancelled, queued, incomplete), the model used, input/output content, tool usage logs, metadata, token usage statistics, and timestamps. Supports cursor-based pagination using 'after' or 'before' parameters to navigate through pages, 'limit' to control page size (1-100, default 20), and 'order' to sort by creation date (asc for oldest first, desc for newest first). Returns a list object with the responses array, pagination cursors (first_id, last_id, has_more), enabling efficient navigation through large result sets. Essential for building response history dashboards, auditing agent interactions, retrieving background job results, monitoring system usage, and managing response lifecycles. Use this to display conversation history, track agent performance, or implement response search and filtering.
Example Usage
```typescript
import { SDK } from "@meetkai/mka1";

const sdk = new SDK({
  serverURL: "https://api.example.com",
  bearerAuth: "<YOUR_BEARER_TOKEN_HERE>",
});

async function run() {
  const result = await sdk.llm.responses.list({});
  console.log(result);
}

run();
```

Standalone function
The standalone function version of this method:
```typescript
import { SDKCore } from "@meetkai/mka1/core.js";
import { llmResponsesList } from "@meetkai/mka1/funcs/llmResponsesList.js";

// Use `SDKCore` for best tree-shaking performance.
// You can create one instance of it to use across an application.
const sdk = new SDKCore({
  serverURL: "https://api.example.com",
  bearerAuth: "<YOUR_BEARER_TOKEN_HERE>",
});

async function run() {
  const res = await llmResponsesList(sdk, {});

  if (res.ok) {
    const { value: result } = res;
    console.log(result);
  } else {
    console.log("llmResponsesList failed:", res.error);
  }
}

run();
```

React hooks and utilities
This method can be used in React components through the following hooks and associated utilities.
Check out this guide for information about each of the utilities below and how to get started using React hooks.
```typescript
import {
  // Query hooks for fetching data.
  useLlmResponsesList,
  useLlmResponsesListSuspense,
  // Utility for prefetching data during server-side rendering and in React
  // Server Components that will be immediately available to client components
  // using the hooks.
  prefetchLlmResponsesList,
  // Utilities to invalidate the query cache for this query in response to
  // mutations and other user actions.
  invalidateLlmResponsesList,
  invalidateAllLlmResponsesList,
} from "@meetkai/mka1/react-query/llmResponsesList.js";
```

Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| request | operations.ListResponsesRequest | ✔️ | The request object to use for the request. |
| options | RequestOptions | ➖ | Used to set various options for making HTTP requests. |
| options.fetchOptions | RequestInit | ➖ | Options that are passed to the underlying HTTP request. This can be used, for example, to inject extra headers. All Request options, except method and body, are allowed. |
| options.retries | RetryConfig | ➖ | Enables retrying HTTP requests under certain failure conditions. |
Response
Promise<components.ResponseListObject>
Errors
| Error Type | Status Code | Content Type |
|---|---|---|
| errors.APIError | 4XX, 5XX | */* |
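To page through every response rather than just the first page, the cursors described above can be chained. This is only a sketch: the `limit`, `order`, and `after` request parameters come from the description, while the `data`, `hasMore`, and `lastId` response properties are assumed camelCase counterparts of the documented responses array and first_id/last_id/has_more fields.

```typescript
import { SDK } from "@meetkai/mka1";

const sdk = new SDK({
  serverURL: "https://api.example.com",
  bearerAuth: "<YOUR_BEARER_TOKEN_HERE>",
});

// Walk every page of responses, newest first. `data`, `hasMore`, and `lastId`
// are assumed property names on components.ResponseListObject.
async function listAllResponses() {
  const all: unknown[] = [];
  let after: string | undefined = undefined;

  for (;;) {
    const page = await sdk.llm.responses.list({ limit: 100, order: "desc", after });
    all.push(...page.data);
    if (!page.hasMore || !page.lastId) {
      break;
    }
    after = page.lastId;
  }
  return all;
}

listAllResponses().then((responses) => console.log(`Fetched ${responses.length} responses`));
```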
get
Retrieve a previously created agent response using its unique ID. Returns complete details including the response status (in_progress, completed, failed, cancelled), the model used, all input messages or prompts, the agent's output (text, JSON, or audio), complete tool usage logs showing which tools were called and their results, metadata such as creation and completion timestamps, token usage statistics, and any error information if the response failed. Essential for retrieving background responses after asynchronous processing completes. Use the optional stream parameter to enable streaming support for real-time updates (currently returns full response). Returns 404 if the response ID doesn't exist. Useful for building status dashboards, retrieving async job results, debugging agent behavior, and auditing tool usage.
Example Usage
```typescript
import { SDK } from "@meetkai/mka1";

const sdk = new SDK({
  serverURL: "https://api.example.com",
  bearerAuth: "<YOUR_BEARER_TOKEN_HERE>",
});

async function run() {
  const result = await sdk.llm.responses.get({
    responseId: "<id>",
  });
  console.log(result);
}

run();
```

Standalone function
The standalone function version of this method:
```typescript
import { SDKCore } from "@meetkai/mka1/core.js";
import { llmResponsesGet } from "@meetkai/mka1/funcs/llmResponsesGet.js";

// Use `SDKCore` for best tree-shaking performance.
// You can create one instance of it to use across an application.
const sdk = new SDKCore({
  serverURL: "https://api.example.com",
  bearerAuth: "<YOUR_BEARER_TOKEN_HERE>",
});

async function run() {
  const res = await llmResponsesGet(sdk, {
    responseId: "<id>",
  });

  if (res.ok) {
    const { value: result } = res;
    console.log(result);
  } else {
    console.log("llmResponsesGet failed:", res.error);
  }
}

run();
```

React hooks and utilities
This method can be used in React components through the following hooks and associated utilities.
Check out this guide for information about each of the utilities below and how to get started using React hooks.
```typescript
import {
  // Query hooks for fetching data.
  useLlmResponsesGet,
  useLlmResponsesGetSuspense,
  // Utility for prefetching data during server-side rendering and in React
  // Server Components that will be immediately available to client components
  // using the hooks.
  prefetchLlmResponsesGet,
  // Utilities to invalidate the query cache for this query in response to
  // mutations and other user actions.
  invalidateLlmResponsesGet,
  invalidateAllLlmResponsesGet,
} from "@meetkai/mka1/react-query/llmResponsesGet.js";
```

Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| request | operations.GetResponseRequest | ✔️ | The request object to use for the request. |
| options | RequestOptions | ➖ | Used to set various options for making HTTP requests. |
| options.fetchOptions | RequestInit | ➖ | Options that are passed to the underlying HTTP request. This can be used, for example, to inject extra headers. All Request options, except method and body, are allowed. |
| options.retries | RetryConfig | ➖ | Enables retrying HTTP requests under certain failure conditions. |
Response
Promise<components.ResponseObject>
Errors
| Error Type | Status Code | Content Type |
|---|---|---|
| errors.APIError | 4XX, 5XX | */* |
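Since background responses finish asynchronously, a common pattern is to poll `get` until the status leaves its in-progress states. The sketch below assumes the returned `components.ResponseObject` exposes a `status` property with the values listed in the descriptions above; that property name is an assumption.

```typescript
import { SDK } from "@meetkai/mka1";

const sdk = new SDK({
  serverURL: "https://api.example.com",
  bearerAuth: "<YOUR_BEARER_TOKEN_HERE>",
});

// Poll a background response until it reaches a terminal state. The `status`
// property and its values are assumptions taken from the descriptions above.
async function waitForResponse(responseId: string, intervalMs = 2000) {
  for (;;) {
    const response = await sdk.llm.responses.get({ responseId });
    if (response.status !== "in_progress" && response.status !== "queued") {
      return response; // completed, failed, cancelled, or incomplete
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}

waitForResponse("<id>").then((response) => console.log(response));
```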
delete
Permanently delete an agent response and all associated data including input messages, output content, tool usage logs, and metadata. This operation cannot be undone - once deleted, the response cannot be retrieved again. Use this to clean up completed responses, remove sensitive data, manage storage quotas, or comply with data retention policies. Returns a deletion confirmation object with the response ID and deleted status. Returns 404 if the response ID doesn't exist or has already been deleted. Note that deleting a response does not cancel it if it's currently in progress - use the cancel endpoint first if you need to stop processing before deletion.
Example Usage
```typescript
import { SDK } from "@meetkai/mka1";

const sdk = new SDK({
  serverURL: "https://api.example.com",
  bearerAuth: "<YOUR_BEARER_TOKEN_HERE>",
});

async function run() {
  const result = await sdk.llm.responses.delete({
    responseId: "<id>",
  });
  console.log(result);
}

run();
```

Standalone function
The standalone function version of this method:
```typescript
import { SDKCore } from "@meetkai/mka1/core.js";
import { llmResponsesDelete } from "@meetkai/mka1/funcs/llmResponsesDelete.js";

// Use `SDKCore` for best tree-shaking performance.
// You can create one instance of it to use across an application.
const sdk = new SDKCore({
  serverURL: "https://api.example.com",
  bearerAuth: "<YOUR_BEARER_TOKEN_HERE>",
});

async function run() {
  const res = await llmResponsesDelete(sdk, {
    responseId: "<id>",
  });

  if (res.ok) {
    const { value: result } = res;
    console.log(result);
  } else {
    console.log("llmResponsesDelete failed:", res.error);
  }
}

run();
```

React hooks and utilities
This method can be used in React components through the following hooks and associated utilities.
Check out this guide for information about each of the utilities below and how to get started using React hooks.
```typescript
import {
  // Mutation hook for triggering the API call.
  useLlmResponsesDeleteMutation
} from "@meetkai/mka1/react-query/llmResponsesDelete.js";
```

Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| request | operations.DeleteResponseRequest | ✔️ | The request object to use for the request. |
| options | RequestOptions | ➖ | Used to set various options for making HTTP requests. |
| options.fetchOptions | RequestInit | ➖ | Options that are passed to the underlying HTTP request. This can be used, for example, to inject extra headers. All Request options, except method and body, are allowed. |
| options.retries | RetryConfig | ➖ | Enables retrying HTTP requests under certain failure conditions. |
Response
Promise<components.DeleteResponseObject>
Errors
| Error Type | Status Code | Content Type |
|---|---|---|
| errors.APIError | 4XX, 5XX | */* |
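As noted above, deletion does not stop a response that is still running, so in-progress responses should be cancelled first. A sketch of that pattern follows; the `status` property name on the retrieved response is an assumption.

```typescript
import { SDK } from "@meetkai/mka1";

const sdk = new SDK({
  serverURL: "https://api.example.com",
  bearerAuth: "<YOUR_BEARER_TOKEN_HERE>",
});

// Cancel an in-progress response before deleting it. The `status` property
// and its values are assumptions based on the descriptions above.
async function cancelAndDelete(responseId: string) {
  const response = await sdk.llm.responses.get({ responseId });
  if (response.status === "in_progress" || response.status === "queued") {
    await sdk.llm.responses.cancel({ responseId });
  }
  return sdk.llm.responses.delete({ responseId });
}

cancelAndDelete("<id>").then((confirmation) => console.log(confirmation));
```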
cancel
Cancel an agent response that is currently processing in the background. Immediately stops any ongoing LLM generation, tool execution, or workflow processing. The response status is updated to 'cancelled' and any partial results are preserved. Useful for stopping long-running agent tasks, managing resource usage, implementing user-initiated cancellations, or enforcing timeout policies. Returns the updated response object with cancelled status. Returns 404 if the response ID doesn't exist. Returns 409 conflict if the response has already completed, failed, or been cancelled - only in-progress responses can be cancelled. Note that cancellation may not be instantaneous if the agent is in the middle of a tool execution. The Temporal workflow (if enabled) is also cancelled to free up background processing resources.
Example Usage
```typescript
import { SDK } from "@meetkai/mka1";

const sdk = new SDK({
  serverURL: "https://api.example.com",
  bearerAuth: "<YOUR_BEARER_TOKEN_HERE>",
});

async function run() {
  const result = await sdk.llm.responses.cancel({
    responseId: "<id>",
  });
  console.log(result);
}

run();
```

Standalone function
The standalone function version of this method:
```typescript
import { SDKCore } from "@meetkai/mka1/core.js";
import { llmResponsesCancel } from "@meetkai/mka1/funcs/llmResponsesCancel.js";

// Use `SDKCore` for best tree-shaking performance.
// You can create one instance of it to use across an application.
const sdk = new SDKCore({
  serverURL: "https://api.example.com",
  bearerAuth: "<YOUR_BEARER_TOKEN_HERE>",
});

async function run() {
  const res = await llmResponsesCancel(sdk, {
    responseId: "<id>",
  });

  if (res.ok) {
    const { value: result } = res;
    console.log(result);
  } else {
    console.log("llmResponsesCancel failed:", res.error);
  }
}

run();
```

React hooks and utilities
This method can be used in React components through the following hooks and associated utilities.
Check out this guide for information about each of the utilities below and how to get started using React hooks.
```typescript
import {
  // Mutation hook for triggering the API call.
  useLlmResponsesCancelMutation
} from "@meetkai/mka1/react-query/llmResponsesCancel.js";
```

Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| request | operations.CancelResponseRequest | ✔️ | The request object to use for the request. |
| options | RequestOptions | ➖ | Used to set various options for making HTTP requests. |
| options.fetchOptions | RequestInit | ➖ | Options that are passed to the underlying HTTP request. This can be used, for example, to inject extra headers. All Request options, except method and body, are allowed. |
| options.retries | RetryConfig | ➖ | Enables retrying HTTP requests under certain failure conditions. |
Response
Promise<components.ResponseObject>
Errors
| Error Type | Status Code | Content Type |
|---|---|---|
| errors.APIError | 4XX, 5XX | */* |
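One of the uses named above is enforcing timeout policies. The sketch below cancels a background response that is still running after a deadline; the `status` property name is an assumption, and only in-progress responses are cancelled to avoid the 409 conflict described above.

```typescript
import { SDK } from "@meetkai/mka1";

const sdk = new SDK({
  serverURL: "https://api.example.com",
  bearerAuth: "<YOUR_BEARER_TOKEN_HERE>",
});

// Cancel a response if it has not finished within `timeoutMs`. The `status`
// property and its values are assumptions based on the descriptions above.
async function cancelIfStillRunning(responseId: string, timeoutMs: number) {
  await new Promise((resolve) => setTimeout(resolve, timeoutMs));

  const response = await sdk.llm.responses.get({ responseId });
  if (response.status === "in_progress" || response.status === "queued") {
    return sdk.llm.responses.cancel({ responseId });
  }
  // Already completed, failed, or cancelled; cancelling again would return 409.
  return response;
}

cancelIfStillRunning("<id>", 60_000).then((response) => console.log(response));
```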
listInputItems
Retrieve a paginated list of all input items (messages, prompts, instructions) that were provided when creating the specified agent response. Each item includes its type (message, system instruction, user prompt), content (text, images, audio, files), timestamps, and ordering information. Supports cursor-based pagination using the 'after' parameter to fetch subsequent pages and 'limit' parameter to control page size (default 20, max 100). Returns a list object containing the input items array, pagination cursors (first_id, last_id, has_more), and total count. Useful for reviewing conversation history, auditing input data, debugging agent behavior, or reconstructing the full context provided to the agent. Returns 404 if the response ID doesn't exist.
Example Usage
```typescript
import { SDK } from "@meetkai/mka1";

const sdk = new SDK({
  serverURL: "https://api.example.com",
  bearerAuth: "<YOUR_BEARER_TOKEN_HERE>",
});

async function run() {
  const result = await sdk.llm.responses.listInputItems({
    responseId: "<id>",
  });
  console.log(result);
}

run();
```

Standalone function
The standalone function version of this method:
```typescript
import { SDKCore } from "@meetkai/mka1/core.js";
import { llmResponsesListInputItems } from "@meetkai/mka1/funcs/llmResponsesListInputItems.js";

// Use `SDKCore` for best tree-shaking performance.
// You can create one instance of it to use across an application.
const sdk = new SDKCore({
  serverURL: "https://api.example.com",
  bearerAuth: "<YOUR_BEARER_TOKEN_HERE>",
});

async function run() {
  const res = await llmResponsesListInputItems(sdk, {
    responseId: "<id>",
  });

  if (res.ok) {
    const { value: result } = res;
    console.log(result);
  } else {
    console.log("llmResponsesListInputItems failed:", res.error);
  }
}

run();
```

React hooks and utilities
This method can be used in React components through the following hooks and associated utilities.
Check out this guide for information about each of the utilities below and how to get started using React hooks.
```typescript
import {
  // Query hooks for fetching data.
  useLlmResponsesListInputItems,
  useLlmResponsesListInputItemsSuspense,
  // Utility for prefetching data during server-side rendering and in React
  // Server Components that will be immediately available to client components
  // using the hooks.
  prefetchLlmResponsesListInputItems,
  // Utilities to invalidate the query cache for this query in response to
  // mutations and other user actions.
  invalidateLlmResponsesListInputItems,
  invalidateAllLlmResponsesListInputItems,
} from "@meetkai/mka1/react-query/llmResponsesListInputItems.js";
```

Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| request | operations.ListInputItemsRequest | ✔️ | The request object to use for the request. |
| options | RequestOptions | ➖ | Used to set various options for making HTTP requests. |
| options.fetchOptions | RequestInit | ➖ | Options that are passed to the underlying HTTP request. This can be used, for example, to inject extra headers. All Request options, except method and body, are allowed. |
| options.retries | RetryConfig | ➖ | Enables retrying HTTP requests under certain failure conditions. |
Response
Promise<components.InputItemListObject>
Errors
| Error Type | Status Code | Content Type |
|---|---|---|
| errors.APIError | 4XX, 5XX | */* |
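To reconstruct the full context that was provided to the agent, the pages can be accumulated with the 'after' cursor, as sketched below. The `data`, `hasMore`, and `lastId` property names are assumed camelCase counterparts of the documented items array and first_id/last_id/has_more fields on `components.InputItemListObject`.

```typescript
import { SDK } from "@meetkai/mka1";

const sdk = new SDK({
  serverURL: "https://api.example.com",
  bearerAuth: "<YOUR_BEARER_TOKEN_HERE>",
});

// Collect every input item for a response. `data`, `hasMore`, and `lastId`
// are assumed property names on components.InputItemListObject.
async function getAllInputItems(responseId: string) {
  const items: unknown[] = [];
  let after: string | undefined = undefined;

  for (;;) {
    const page = await sdk.llm.responses.listInputItems({ responseId, limit: 100, after });
    items.push(...page.data);
    if (!page.hasMore || !page.lastId) {
      break;
    }
    after = page.lastId;
  }
  return items;
}

getAllInputItems("<id>").then((items) => console.log(`Found ${items.length} input items`));
```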