This guide will help you get started with the Vivgrid OpenAI Bridge. You will
learn how to extend your AI Agent's abilities with LLM Function Calling.
1
Deploy NextChat to Vercel
NextChat is an open-source, cross-platform ChatGPT/Gemini UI. You can create your own by forking NextChat and deploying it to Vercel. Once deployed, configure the required environment variables on Vercel:

Vercel - Environment Variables Settings Page

BASE_URL: https://api.vivgrid.com
OPENAI_API_KEY: grab it from the Vivgrid Console
2
It works!
Let's try asking the question "Compare amazon and shopify network performance" in your AI application. You will see the following:

OpenAI Chat Completions w/o Function Calling

OpenAI gpt-4o cannot answer this question, but we can extend the model's capabilities with the Function Calling feature. OpenAI has a great cookbook on how to call functions with chat models, but it is complex to implement and maintain:
OpenAI Cookbook: How to call functions with chat models
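For reference, function calling works by declaring your tool in the `tools` array of a Chat Completions request; instead of answering directly, the model can then return a tool call with arguments it inferred from the user's message. A minimal sketch of such a declaration (the function and parameter names here are illustrative, not taken from this guide's example):

```json
{
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_network_latency",
        "description": "Measure the network latency of a given website",
        "parameters": {
          "type": "object",
          "properties": {
            "domain": {
              "type": "string",
              "description": "The domain name to measure, e.g. amazon.com"
            }
          },
          "required": ["domain"]
        }
      }
    }
  ]
}
```

Implementing this loop by hand means managing the tool schema, the tool-call round trip, and the follow-up completion yourself, which is the complexity the next step removes.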
3
Extend your AI application's abilities by calling serverless functions in a strongly-typed language
Let's try to implement a Linux ping Function Calling serverless function in Go. Before this, you need to install the YoMo Framework.
Create a function ping to measure the network performance of a given website:
app.go
Next, we need to wrap it to meet the spec of OpenAI Function Calling; luckily, we have YoMo to help us. First, we need to define the description for our function. This helps OpenAI understand the function, and it is very important for accuracy. What we need to do is implement the Description function in app.go:
app.go
The ping() function requires a domain name as a parameter. We will ask OpenAI to infer the domain name from the user input and pass it via Arguments in tools_call:
app.go
Finally, we need to wrap it as a stateful serverless function:
app.go
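A sketch of what the stateful serverless wrapper can look like. The Context interface below is a local stand-in so the snippet compiles on its own; in the real example the context type comes from the YoMo framework (assumption: the framework offers similar read-arguments and write-result helpers):

```go
package main

import "fmt"

// Context is a local stand-in for YoMo's serverless context, so this sketch
// is self-contained (assumption: the real interface is similar).
type Context interface {
	ReadLLMArguments(v any) error
	WriteLLMResult(result string) error
}

// PingArgs mirrors the Parameter struct: OpenAI infers the domain from the
// user input and passes it back as JSON arguments in the tool call.
type PingArgs struct {
	Domain string `json:"domain"`
}

// measure stands in for the ping helper shown earlier.
func measure(domain string) string {
	return fmt.Sprintf("measured latency for %s", domain)
}

// Handler is the stateful serverless entry point: decode the arguments the
// model inferred, run the measurement, and hand the result back to the LLM.
func Handler(ctx Context) {
	var args PingArgs
	if err := ctx.ReadLLMArguments(&args); err != nil {
		ctx.WriteLLMResult("invalid arguments: " + err.Error())
		return
	}
	ctx.WriteLLMResult(measure(args.Domain))
}
```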
The full source code of this example can be found here: https://github.com/yomorun/llm-function-calling-examples/blob/main/tool-get-ip-and-latency/app.go
4
Run it locally
To test it, or to host it on your own infrastructure, create a .env file with the following content:
.env
then run:
5
Deploy to Vivgrid Geo-distributed Network
Next, your serverless function will be deployed to the Vivgrid Geo-distributed Network. It will be available in multiple regions, and requests will be routed to the nearest region.
First, create a file named yc.yml with the following content:
yc.yml
important: Make sure you have the yc CLI installed; if not, install it by following the instructions here.
then run:
Once deployed, monitor the real-time logs and ask the question "Compare amazon and shopify network performance" again. Your AI Agent can now answer the question with a network performance comparison:
OpenAI Chat Completions w/ Function Calling on Vivgrid
Did you know? Your Stateful Serverless Function will be deployed to multiple regions automatically, bringing computing closer to your users. This reduces latency and improves the user experience. For Free Plan users, there are 7 regions available.