
lizozom/google-gemini-benchmarks


Summary


This project benchmarks the performance of the Gemini 1.5 models (Pro and Flash) when extracting a list of items from an image. The primary objective is to compare JSON and YAML as output formats.

Key Findings

  • Model Performance: Gemini 1.5 Flash responds faster than Gemini 1.5 Pro, as expected.
  • Format Efficiency: YAML output is generated consistently faster than JSON output on both models.
  • Scalability: The gap between YAML and JSON widens as the number of items in the list grows.

These results align with expectations: YAML is a terser output format for LLMs than JSON, since it does not wrap every object in braces or quote every key and string, so the model generates fewer tokens for the same content, and the savings become more pronounced as the output grows.
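
For illustration, here is the same two-item list in both formats (the field names are made up, not the benchmark's actual schema); the JSON version carries extra braces, quotes, and commas that the model must also generate.

JSON:

  [
    { "name": "apple", "count": 3 },
    { "name": "banana", "count": 2 }
  ]

YAML:

  - name: apple
    count: 3
  - name: banana
    count: 2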

For a detailed explanation, refer to this article.

Requirements

  1. Google Cloud Project: Create a Google Cloud Platform (GCP) project and enable the Vertex AI API.
  2. Authentication: Authenticate with your GCP account by running:
gcloud init
  3. Environment Setup: Rename the .env.tpl file to .env and fill in the Google project name and region (see the hypothetical example below).

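A hypothetical .env after this step (the real variable names are whatever .env.tpl defines, so copy them from there):

  # Hypothetical keys; use the names from .env.tpl
  GOOGLE_PROJECT=my-gcp-project
  GOOGLE_REGION=us-central1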

Running the project

To run the project, ensure you have Node.js 18+ installed. Then, execute the following commands:

 npm install
 npm run dev
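
Each benchmark iteration boils down to a timed Vertex AI call that asks the model to list the items in an image in a given format. The sketch below (TypeScript, using the @google-cloud/vertexai SDK) shows the general shape of such a call; the model ID, prompt wording, fixture path, and environment variable names are assumptions for illustration, not the repository's actual code.

  // Sketch of a single timed extraction call; identifiers marked "hypothetical" are assumptions.
  import { readFileSync } from 'node:fs';
  import { VertexAI } from '@google-cloud/vertexai';

  const vertexAI = new VertexAI({
    project: process.env.GOOGLE_PROJECT!,   // hypothetical env names; see .env.tpl
    location: process.env.GOOGLE_REGION!,
  });

  async function timeExtraction(modelId: string, format: 'json' | 'yaml') {
    const model = vertexAI.getGenerativeModel({ model: modelId });
    const image = readFileSync('fixtures/items.jpg').toString('base64'); // hypothetical fixture

    const start = Date.now();
    const result = await model.generateContent({
      contents: [{
        role: 'user',
        parts: [
          { inlineData: { mimeType: 'image/jpeg', data: image } },
          { text: `List every item you see in this image as ${format.toUpperCase()}.` },
        ],
      }],
    });
    const elapsedMs = Date.now() - start;

    const text = result.response.candidates?.[0]?.content?.parts?.[0]?.text ?? '';
    return { elapsedMs, outputChars: text.length };
  }

  // Compare YAML vs. JSON on Gemini 1.5 Flash; the real benchmark repeats this for Pro and larger lists.
  console.log(await timeExtraction('gemini-1.5-flash-001', 'yaml'));
  console.log(await timeExtraction('gemini-1.5-flash-001', 'json'));

The actual project sweeps both models and several list sizes; see the source for the real prompts and measurement code.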
