Quick Start
Learn how to deploy NextChat to Vercel and extend its abilities with Function Calling via Vivgrid AI Bridge
This guide will help you get started with the Vivgrid OpenAI Bridge. You will learn how to extend your AI Agent's abilities with LLM Function Calling.
Deploy NextChat to Vercel
NextChat is an open-source, cross-platform ChatGPT/Gemini UI. You can create your own instance by forking NextChat and deploying it to Vercel.
Once deployed, configure the environment variables on Vercel:
Vercel - Environment Variables Settings Page
These are the required environment variables:
BASE_URL
: https://api.vivgrid.com
OPENAI_API_KEY
: Grab it from the Vivgrid Console
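In `.env`-file form, the two settings look like this (the API key is a placeholder — use your own value from the Vivgrid Console):

```
BASE_URL=https://api.vivgrid.com
OPENAI_API_KEY=<your-vivgrid-api-key>
```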
It works!
Let's try asking the question Compare amazon and shopify network performance in your AI application. You will see the following:
OpenAI Chat Completions w/o Function Calling
OpenAI cannot answer this question on its own, but we can extend the gpt-4o model's capabilities with the Function Calling feature. OpenAI has a great cookbook on how to call functions with chat models, but it is complex to implement and maintain:
OpenAI Cookbook: How to call functions with chat models
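For context, doing this by hand means attaching a tool definition like the following to every Chat Completions request (a minimal sketch of the OpenAI `tools` schema — the function name and parameters here are illustrative), then parsing the model's `tool_calls` response, executing the function yourself, and appending the result back into the conversation:

```json
{
  "type": "function",
  "function": {
    "name": "ping_website",
    "description": "Measure network performance for a given website",
    "parameters": {
      "type": "object",
      "properties": {
        "domain": { "type": "string", "description": "Domain name to ping" }
      },
      "required": ["domain"]
    }
  }
}
```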
Extend your AI application's abilities by calling serverless functions in a strongly typed language
Let's implement a Linux ping Function Calling serverless function in Go. Before that, you need to install the YoMo Framework:
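To the best of my recollection, the YoMo README offers an install one-liner, with `go install` as an alternative (verify against the YoMo docs before piping a script to your shell):

```sh
curl -fsSL https://get.yomo.run | sh
# or, with a Go toolchain installed:
go install github.com/yomorun/yomo/cmd/yomo@latest
```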
Create a ping function to measure the network performance of a given website:
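The original snippet is not reproduced here, so below is a minimal sketch of what such a function could look like in Go, shelling out to the system `ping` binary (the function and helper names are my own, not from the original guide):

```go
package main

import (
	"fmt"
	"os/exec"
	"regexp"
)

// Ping shells out to the system ping binary, sending 4 ICMP probes to
// the given domain, and returns the command's combined output.
func Ping(domain string) (string, error) {
	out, err := exec.Command("ping", "-c", "4", domain).CombinedOutput()
	if err != nil {
		return string(out), fmt.Errorf("ping %s: %w", domain, err)
	}
	return string(out), nil
}

// parseAvgRTT pulls the average round-trip time (in ms) out of ping's
// summary line, e.g. "round-trip min/avg/max/stddev = 9.1/12.3/15.0/1.2 ms".
func parseAvgRTT(output string) (string, bool) {
	m := regexp.MustCompile(`= [0-9.]+/([0-9.]+)/`).FindStringSubmatch(output)
	if m == nil {
		return "", false
	}
	return m[1], true
}

func main() {
	// Parse a canned summary line; calling Ping() needs network access.
	avg, _ := parseAvgRTT("round-trip min/avg/max/stddev = 9.123/12.345/15.001/1.234 ms")
	fmt.Println("avg rtt:", avg, "ms") // avg rtt: 12.345 ms
}
```

Returning the raw ping output (rather than only the parsed average) lets the LLM summarize packet loss and jitter as well.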
Next, we need to wrap it to meet the OpenAI Function Calling spec. Luckily, YoMo can help us:
First, we need to define the description for our function. This helps OpenAI understand the function and is very important for accuracy. All we need to do is implement the Description function in app.go:
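The original code is not reproduced here, but by YoMo's LLM function-calling convention (as I understand it), Description is a plain exported function returning natural-language text; the exact wording below is only an example:

```go
package main

import "fmt"

// Description tells the LLM what this function does and when to call it.
// The more precise this text is, the more accurately the model will
// decide to invoke the function.
func Description() string {
	return "Ping a website to measure its network performance, " +
		"including latency and packet loss, for a given domain name."
}

func main() {
	fmt.Println(Description())
}
```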
ping() requires a domain name as a parameter. We will ask OpenAI to infer the domain name from the user input and pass it via Arguments in tool_calls:
Finally, we need to wrap it as a stateful serverless function:
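The actual wrapper uses YoMo's serverless API, which is not reproduced here from memory. As a framework-agnostic sketch, the handler's job is simply to decode the inferred arguments, run the measurement, and return text for the LLM (all names below are my own, not YoMo's API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// pingArgs mirrors the parameter struct the LLM fills in.
type pingArgs struct {
	Domain string `json:"domain"`
}

// handle decodes the tool-call arguments, runs the measurement, and
// returns the text handed back to the model. runPing is injected so
// the wiring can be exercised without network access.
func handle(rawArgs []byte, runPing func(domain string) (string, error)) (string, error) {
	var args pingArgs
	if err := json.Unmarshal(rawArgs, &args); err != nil {
		return "", fmt.Errorf("decode arguments: %w", err)
	}
	out, err := runPing(args.Domain)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("ping result for %s:\n%s", args.Domain, out), nil
}

func main() {
	// A fake ping keeps this sketch self-contained and offline.
	fake := func(domain string) (string, error) { return "avg rtt 12.3 ms", nil }
	res, _ := handle([]byte(`{"domain":"amazon.com"}`), fake)
	fmt.Println(res)
}
```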
Run it locally
To test it, or to host it on your own infrastructure, create a .env file with the following content:
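The exact contents are not shown here; the variable names below follow YoMo's conventions as I recall them, and the values are placeholders — check the Vivgrid docs for the authoritative list:

```
YOMO_SFN_NAME=ping
YOMO_SFN_ZIPPER=zipper.vivgrid.com:9000
YOMO_SFN_CREDENTIAL=<your-credential-from-vivgrid-console>
```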
then run:
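The exact command is not shown above; if memory serves, the YoMo CLI runs a function locally like this (check `yomo --help` for the current syntax):

```sh
yomo run app.go
```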
Deploy to Vivgrid Geo-distributed Network
Next, your serverless function will be deployed to the Vivgrid Geo-distributed Network. It will be available in multiple regions, and requests will be routed to the nearest region automatically.
First, create a file named yc.yml with the following content:
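The exact schema is not reproduced here; a plausible minimal shape is a single application-name entry, but the field name is an assumption — consult the Vivgrid docs for the real format:

```yaml
app-name: ping-function
```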
then run:
important Make sure you have the yc CLI installed; if not, follow the installation guide here.
Once deployed, monitor the real-time logs and ask the question “Compare amazon and shopify network performance” again:
Your AI Agent can now answer the question with network performance comparison:
OpenAI Chat Completions w/ Function Calling on Vivgrid
Did you know? Your Stateful Serverless Function will be deployed to multiple regions automatically, bringing computing closer to your users. This reduces latency and improves the user experience. For Free Plan users, 7 regions are available.