[Vertex AI] Set ModelContent.role to nil in system instructions
#14558
Conversation
Hello @andrewheard, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Summary of Changes
This pull request addresses an issue where the `role` property in `ModelContent` was being incorrectly counted in the prompt token count when using system instructions with the Developer API backend. To resolve this, I've explicitly set the `role` to `nil` for system instructions. This aligns with Vertex AI documentation and avoids token count discrepancies between `countTokens` and `generateContent`. The changes primarily affect the `GenerativeModel.swift` file and include updates to the unit tests in `VertexComponentTests.swift` to reflect this change.
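For readers who want a quick mental model of the change, here is a minimal sketch of the role-dropping described above. The `ModelContent(role:parts:)` and `GenerativeModel` shapes below are simplified stand-ins, not the actual SDK types, which carry richer part types and more initializer parameters.

```swift
// Minimal sketch only: simplified stand-ins for the SDK's ModelContent and
// GenerativeModel types, illustrating the role-dropping described above.
struct ModelContent: Equatable {
  let role: String?
  let parts: [String]
}

struct GenerativeModel {
  let systemInstruction: ModelContent?

  init(systemInstruction: ModelContent? = nil) {
    // Re-wrap the system instruction with a nil role so the role string is
    // never sent (and never counted toward the prompt token count).
    self.systemInstruction = systemInstruction.map {
      ModelContent(role: nil, parts: $0.parts)
    }
  }
}
```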
Highlights
- Bug Fix: Addresses an issue where the `role` property in `ModelContent` was being incorrectly counted in the prompt token count for system instructions.
- Consistency: Ensures consistency between `countTokens` and `generateContent` by setting `role` to `nil` in system instructions.
- Alignment with Documentation: Aligns the code with Vertex AI documentation by explicitly setting the `role` property to `nil` in system instructions.
Changelog
- FirebaseVertexAI/Sources/GenerativeModel.swift
  - Modified the `systemInstruction` property to set the `role` to `nil` within the `ModelContent` when system instructions are used. This change is implemented to avoid token count discrepancies between `countTokens` and `generateContent` when using the Developer API backend. The change occurs on line 84, where the `systemInstruction` is mapped to a new `ModelContent` object with a `nil` role.
- FirebaseVertexAI/Tests/Unit/VertexComponentTests.swift
  - Updated the unit tests to reflect the change in `GenerativeModel.swift`. Specifically, the tests now expect the `systemInstruction` to have a `nil` role. This ensures that the tests accurately reflect the expected behavior of the code after the fix. The changes occur on lines 221 and 241, where `expectedSystemInstruction` is created with `role: nil`.
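To make the test-side change concrete, here is a hypothetical test in the spirit of the updated expectations, reusing the simplified `ModelContent` and `GenerativeModel` stand-ins from the sketch above. The real `VertexComponentTests.swift` sets up a full Firebase component stack and is structured differently; only the `role: nil` expectation is taken from this PR, and the test name and parts string are placeholders.

```swift
import XCTest

final class SystemInstructionRoleTests: XCTestCase {
  func testSystemInstructionRoleIsSetToNil() {
    // The caller may still pass a role; the model is expected to drop it.
    let model = GenerativeModel(
      systemInstruction: ModelContent(role: "system", parts: ["Be concise."])
    )

    // Hypothetical expected value mirroring the PR's `role: nil` expectation.
    let expectedSystemInstruction = ModelContent(role: nil, parts: ["Be concise."])
    XCTAssertEqual(model.systemInstruction, expectedSystemInstruction)
  }
}
```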
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
In Vertex's realm, where models reside,
A role's shadow caused tokens to hide.
With a nil assignment, the balance is kept,
Token counts align, while the system has slept.
Footnotes
1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution. ↩
Code Review
This pull request addresses an important issue where the `role` property in `ModelContent` was being incorrectly counted in the prompt token count when using the Developer API backend. The solution involves explicitly setting the `role` to `nil` in system instructions to avoid token count discrepancies between `countTokens` and `generateContent`. The code changes are straightforward and well-documented.
Summary of Findings
- Redundant Initialization: In the test files, the `systemInstruction` is initialized with a role, but then immediately overwritten with a `ModelContent` object that has a `nil` role. This could be simplified by directly initializing `expectedSystemInstruction` with the `nil` role.
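As a concrete (hypothetical) illustration of that suggestion, using the simplified `ModelContent` stand-in from the earlier sketch and a placeholder parts string:

```swift
// Instead of deriving the expected value from a role-bearing instance, the
// finding suggests constructing it with the nil role directly. The parts
// string here is a placeholder, not taken from the repository.
let expectedSystemInstruction = ModelContent(role: nil, parts: ["Be concise."])
```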
Merge Readiness
The code changes effectively address the token counting issue and include necessary test updates. While the redundant initialization in the tests is a minor point, addressing it would further improve code clarity. I am unable to directly approve this pull request, and recommend that others review and approve this code before merging. However, I believe that this pull request is ready to be merged after addressing the comments.
This is intentional to verify that the |
The `role` property in `ModelContent` is now ignored (explicitly set to `nil`) in system instructions. This aligns with the Vertex AI documentation (and the Developer API). It works around an issue where the `role` is counted in the prompt token count by `countTokens(...)` but not by `generateContent(...)`. No code changes are required for developers.

Note: Although not shown in this PR, system instructions are used in most of the integration tests.
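For SDK users, nothing changes at the call site. As a rough illustration (not taken from this PR: the model name, prompt text, and exact parameter spellings below are assumptions and should be checked against the current FirebaseVertexAI API), the token counts reported by the two calls are now expected to agree:

```swift
import FirebaseVertexAI

// Rough usage sketch; the model name and prompts are placeholders.
func compareTokenCounts() async throws {
  let model = VertexAI.vertexAI().generativeModel(
    modelName: "gemini-2.0-flash",
    systemInstruction: ModelContent(role: "system", parts: "Answer briefly.")
  )

  // After this fix, the role passed above is dropped before the request is
  // sent, so countTokens and generateContent should agree on prompt tokens.
  let countResponse = try await model.countTokens("What is Firebase?")
  let generateResponse = try await model.generateContent("What is Firebase?")
  print(countResponse.totalTokens)
  print(generateResponse.usageMetadata?.promptTokenCount ?? 0)
}
```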