Vivgrid provides access to a range of powerful AI models for building enterprise-grade AI agents. We select models based on their performance, cost-effectiveness, and suitability for various tasks. Pricing is transparent and matches the rates of the original providers.

Supported Models

  • gpt-4.1
  • gpt-4o
  • gemini-2.5-pro
  • gemini-2.5-flash
  • deepseek-r1
  • deepseek-v3-0324

How to Set Models

You don’t need to specify a model name in your API calls. The model your agent uses is managed on the backend, so switching models requires no code changes. To change your agent’s model, go to the Agent Settings page in the Vivgrid Console.
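As an illustration of what "no model name in the call" means in practice, the sketch below builds a chat request payload with no `model` field. The endpoint URL, header name, and payload shape are assumptions for illustration only, not Vivgrid’s documented API.

```python
# Sketch: building a chat request without a "model" field, since the
# model is selected in the Vivgrid Console rather than in client code.
# NOTE: the URL and payload shape below are hypothetical placeholders.

API_URL = "https://api.example-vivgrid.test/v1/chat/completions"  # hypothetical

def build_chat_request(messages, api_key):
    """Return (headers, payload) for a chat call; note there is no 'model' key."""
    headers = {"Authorization": f"Bearer {api_key}"}
    payload = {"messages": messages}  # no "model": the backend decides
    return headers, payload

headers, payload = build_chat_request(
    [{"role": "user", "content": "Hello"}], api_key="sk-...")
assert "model" not in payload  # switching models needs no code change
```

Because the payload never names a model, changing your agent’s model in the Console takes effect without redeploying client code.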

Pricing

Pricing is calculated in USD per 1 million tokens. The table below details the cost for input, cached, and output tokens for each model.
| Model            | Input Tokens | Cached Tokens | Output Tokens |
| ---------------- | ------------ | ------------- | ------------- |
| gpt-4.1          | $2.00        | $0.50         | $8.00         |
| gpt-4o           | $2.50        | $1.25         | $10.00        |
| gemini-2.5-pro   | $1.25        | $0.31         | $10.00        |
| gemini-2.5-flash | $0.30        | $0.08         | $2.50         |
| deepseek-r1      | $1.35        | -             | $5.40         |
| deepseek-v3-0324 | $1.14        | -             | $4.56         |
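As a quick sanity check on the rates above, a small helper can estimate the cost of a request. The function name is illustrative, and treating cached tokens as a separately billed count (rather than a discounted subset of input) is an assumption.

```python
# Per-million-token USD rates from the pricing table above.
# None means the model has no cached-token rate.
PRICES = {
    "gpt-4.1":          (2.00, 0.50, 8.00),
    "gpt-4o":           (2.50, 1.25, 10.00),
    "gemini-2.5-pro":   (1.25, 0.31, 10.00),
    "gemini-2.5-flash": (0.30, 0.08, 2.50),
    "deepseek-r1":      (1.35, None, 5.40),
    "deepseek-v3-0324": (1.14, None, 4.56),
}

def estimate_cost(model, input_tokens, cached_tokens, output_tokens):
    """Estimate USD cost; cached_tokens are counted separately from
    input_tokens here, which is a simplifying assumption."""
    inp_rate, cached_rate, out_rate = PRICES[model]
    cost = input_tokens / 1e6 * inp_rate + output_tokens / 1e6 * out_rate
    if cached_tokens:
        if cached_rate is None:
            raise ValueError(f"{model} has no cached-token pricing")
        cost += cached_tokens / 1e6 * cached_rate
    return cost

# 1M fresh input tokens and 100K output tokens on gpt-4.1:
print(round(estimate_cost("gpt-4.1", 1_000_000, 0, 100_000), 2))  # → 2.8
```

For example, the same workload on gemini-2.5-flash would cost $0.30 + $0.25 = $0.55, roughly a fifth of the gpt-4.1 price.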

Capabilities

| Model            | Context Window | Max Output Tokens | Function Calling Support |
| ---------------- | -------------- | ----------------- | ------------------------ |
| gpt-4.1          | 1M tokens      | 32K               | Yes                      |
| gpt-4o           | 128K           | 16K               | Yes                      |
| gemini-2.5-pro   | 1M             | 64K               | Yes                      |
| gemini-2.5-flash | 1M             | 64K               | Yes                      |
| deepseek-r1      | 64K            | 8K                | No                       |
| deepseek-v3-0324 | 128K           | 64K               | No                       |

Service Regions

Vivgrid models are globally distributed to reduce latency and support data residency. API requests are routed to the nearest region, and Function Calling Tools are automatically deployed across all regions. The table below outlines the regional availability for each model:
| Model            | Available Regions |
| ---------------- | ----------------- |
| gpt-4.1          | US, EU, APAC      |
| gpt-4o           | US, EU, APAC      |
| gemini-2.5-pro   | Global            |
| gemini-2.5-flash | Global            |
| deepseek-r1      | APAC              |
| deepseek-v3-0324 | APAC              |