Creates a model response. Provide text or image inputs to generate text or JSON outputs. Supports tool calling and conversation state.

Request example:

curl --request POST \
  --url https://api.vivgrid.com/v1/responses \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "gpt-5.2-codex",
    "input": "Create an Amazon-like web app."
  }'

Response example:

{
  "id": "<string>",
  "object": "<string>",
  "model": "<string>",
  "created_at": 123,
  "status": "<string>",
  "output": [
    {}
  ],
  "usage": {},
  "service_tier": "<string>"
}
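The same request can be sketched in Python. This mirrors the curl example above (endpoint URL, Bearer header, JSON body); the token value is a placeholder, and sending the request is left commented out since it needs a live credential.

```python
import json
import urllib.request

API_URL = "https://api.vivgrid.com/v1/responses"
TOKEN = "<token>"  # placeholder: substitute your real auth token

# Build the same JSON body as the curl example above.
payload = {
    "model": "gpt-5.2-codex",
    "input": "Create an Amazon-like web app.",
}

request = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# With a valid token, send it like this:
# with urllib.request.urlopen(request) as resp:
#     body = json.load(resp)  # a Response object: id, status, output, usage, ...
```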
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
One of gpt-5.1-codex, gpt-5.1-codex-max, or gpt-5.2-codex.
Text, image, file, or chat-style inputs. The Responses API accepts a string or an array of input items/messages.
A system (or developer) message inserted into the model's context.
Whether to run the model response in the background.
Whether to store the generated model response for later retrieval via API.
The unique ID of the previous response to the model. Cannot be used with 'conversation'.
Conversation that this response belongs to. Items are prepended to the request. Cannot be used with 'previous_response_id'.
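Because 'conversation' and 'previous_response_id' are mutually exclusive, a request builder can enforce the rule up front. This is a minimal sketch; the helper name and the example response id are hypothetical, and only fields described on this page are used.

```python
# Hypothetical helper: enforces the documented rule that 'conversation'
# cannot be used together with 'previous_response_id'.
def build_body(model, input, previous_response_id=None, conversation=None):
    if previous_response_id is not None and conversation is not None:
        raise ValueError(
            "'conversation' cannot be used with 'previous_response_id'"
        )
    body = {"model": model, "input": input}
    if previous_response_id is not None:
        body["previous_response_id"] = previous_response_id
    if conversation is not None:
        body["conversation"] = conversation
    return body

# Chaining turns: pass the id of the previous response to carry state forward.
# "resp_123" is a made-up placeholder id.
turn2 = build_body("gpt-5.2-codex", "Add a shopping cart.",
                   previous_response_id="resp_123")
```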
Additional output data to include in the response.
Upper bound for generated tokens (includes visible output tokens and reasoning tokens). Required range: x >= 1
Maximum total built-in tool calls processed in a response. Required range: x >= 0
Whether to allow tool calls in parallel.
Tools the model may call (built-in tools, MCP tools, or developer-defined function tools).
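A developer-defined function tool can be passed in the tools array. The field shape below follows the OpenAI-style Responses function-tool layout, which this API appears to mirror; the page itself does not spell out the fields, so treat this as an illustration, and the function name is hypothetical.

```python
# Assumed OpenAI-style Responses function-tool shape; field names are
# not spelled out on this page, so treat this as an illustration.
get_weather_tool = {
    "type": "function",
    "name": "get_weather",            # hypothetical function name
    "description": "Get the current weather for a city.",
    "parameters": {                   # JSON Schema for the function arguments
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

body = {
    "model": "gpt-5.2-codex",
    "input": "What's the weather in Paris?",
    "tools": [get_weather_tool],
    "tool_choice": "auto",  # let the model select which tool(s) to use
}
```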
How the model should select which tool(s) to use.
Configuration options for a text response from the model (plain text or structured JSON).
Reasoning configuration (gpt-5 and o-series models).
Sampling temperature. Required range: 0 <= x <= 2
Nucleus sampling probability mass (top_p). Required range: 0 <= x <= 1
Number of most likely tokens to return per position with logprobs. Required range: 0 <= x <= 20
Truncation strategy when input exceeds the model context window. Options: auto, disabled
If true, stream the response via server-sent events.
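When streaming, events arrive as server-sent events. The sketch below only demonstrates generic SSE framing (`data:` lines separated by blank lines); the exact event schema and the `[DONE]` sentinel are assumptions, not specified on this page, and the sample buffer stands in for a real network stream.

```python
import json

def iter_sse_data(lines):
    """Yield parsed JSON payloads from generic SSE 'data:' lines.

    The exact event schema is not documented on this page; this only
    demonstrates server-sent-events framing (data: ... / blank line).
    """
    for raw in lines:
        line = raw.strip()
        if line.startswith("data:"):
            data = line[len("data:"):].strip()
            if data == "[DONE]":  # common sentinel; assumed, not specified here
                return
            yield json.loads(data)

# Canned buffer standing in for a streamed response body:
sample = [
    'data: {"delta": "Hel"}',
    '',
    'data: {"delta": "lo"}',
    '',
    'data: [DONE]',
]
chunks = list(iter_sse_data(sample))
text = "".join(c["delta"] for c in chunks)
```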
Options for streaming responses (only when stream=true).
Up to 16 key/value pairs for storing structured metadata.
Reference to a prompt template and its variables.
Cache key used to help optimize prompt caching.
Retention policy for prompt cache (e.g. '24h').
Stable identifier for abuse detection (recommended to hash username/email).
Processing tier (e.g., auto/default/flex/priority).
Hashed user identity for improved monitoring and abuse detection.
OK (non-streaming): a Response object
A model response object returned by the Responses API.
Unix timestamp (seconds).
Output items produced by the model (messages, tool calls, reasoning items, etc.).
Token usage and related accounting fields.
Service tier actually used to process the request (may differ from requested).
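Reading a non-streaming Response object touches only the fields listed above. The sample values here are illustrative, not real API output.

```python
import json
from datetime import datetime, timezone

# Sample payload mirroring the fields listed above; values are made up.
raw = '''{
  "id": "resp_abc",
  "object": "response",
  "model": "gpt-5.2-codex",
  "created_at": 1700000000,
  "status": "completed",
  "output": [{}],
  "usage": {},
  "service_tier": "default"
}'''

response = json.loads(raw)

# created_at is a Unix timestamp in seconds.
created = datetime.fromtimestamp(response["created_at"], tz=timezone.utc)

# service_tier reflects the tier actually used to process the request,
# which may differ from the tier requested.
tier = response["service_tier"]
```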