
feat: added Gemini 2.0 Flash-thinking-exp-01-21 model w… #1202

Merged
merged 1 commit into stackblitz-labs:main on Jan 28, 2025

Conversation

saif78642

…ith 65k token support

Added the new gemini-2.0-flash-thinking-exp-01-21 model to the GoogleProvider's static model configuration. This model supports a significantly increased maxTokenAllowed limit of 65,536 tokens, enabling it to handle larger context windows compared to existing Gemini models (previously capped at 8k tokens). The model is labeled as "Gemini 2.0 Flash-thinking-exp-01-21" for clear identification in the UI/dropdowns.
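As a minimal sketch, the static model entry described above might look like the following. The field names (`name`, `label`, `provider`, `maxTokenAllowed`) and the `ModelInfo` shape are assumptions based on the PR description, not a verified excerpt of the merged GoogleProvider source.

```typescript
// Hypothetical shape of a provider model entry; field names are
// assumptions inferred from the PR description, not the actual source.
interface ModelInfo {
  name: string;           // model id passed to the API
  label: string;          // display name for UI/dropdowns
  provider: string;
  maxTokenAllowed: number;
}

const staticModels: ModelInfo[] = [
  // ...existing Gemini entries, previously capped at 8k tokens...
  {
    name: 'gemini-2.0-flash-thinking-exp-01-21',
    label: 'Gemini 2.0 Flash-thinking-exp-01-21',
    provider: 'Google',
    maxTokenAllowed: 65536, // significantly larger 65k context window
  },
];
```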

Collaborator

@leex279 left a comment

Model working.

@elyzionz

[screenshot attached]

I have a problem: I updated with `git pull`, but the model "Gemini 2.0 Flash-thinking-exp-01-21" does not appear. How can I solve it?

@leex279
Collaborator

leex279 commented Jan 28, 2025

Hi @elyzionz,
this is a PR: a feature that has not yet been fully reviewed or merged to main. You can test it out, but it stays on the dev branch until it is merged to main and then included in a later official release.

[screenshot attached]

@thecodacus thecodacus changed the title feat(GoogleProvider): added Gemini 2.0 Flash-thinking-exp-01-21 model w… feat: added Gemini 2.0 Flash-thinking-exp-01-21 model w… Jan 28, 2025
@thecodacus
Collaborator

I will be adding dynamic model loading for all these providers so we don't have to add them one by one.
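The dynamic model loading mentioned here could be sketched as below: fetch the provider's model list at runtime instead of hardcoding entries. The endpoint and response fields follow Google's public ListModels API (`GET /v1beta/models`, which returns entries with `name`, `displayName`, and `outputTokenLimit`); the `ModelInfo` shape and function names are hypothetical, not the code that was later merged.

```typescript
// Hypothetical app-side model descriptor; field names are assumptions.
interface ModelInfo {
  name: string;
  label: string;
  provider: string;
  maxTokenAllowed: number;
}

// Subset of one entry in Google's ListModels response.
interface GoogleApiModel {
  name: string;            // e.g. "models/gemini-2.0-flash-thinking-exp-01-21"
  displayName: string;
  outputTokenLimit: number;
}

// Map an API entry to the app's descriptor, stripping the "models/" prefix.
function toModelInfo(m: GoogleApiModel): ModelInfo {
  return {
    name: m.name.replace(/^models\//, ''),
    label: m.displayName,
    provider: 'Google',
    maxTokenAllowed: m.outputTokenLimit,
  };
}

// Fetch all Google models dynamically instead of maintaining a static list.
async function fetchGoogleModels(apiKey: string): Promise<ModelInfo[]> {
  const res = await fetch(
    `https://generativelanguage.googleapis.com/v1beta/models?key=${apiKey}`,
  );
  const data = (await res.json()) as { models: GoogleApiModel[] };
  return data.models.map(toModelInfo);
}
```

With this approach, newly released models appear in the dropdown automatically, at the cost of an extra request and the need to handle API failures (e.g. falling back to a small static list).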

@thecodacus thecodacus merged commit 39a0724 into stackblitz-labs:main Jan 28, 2025
4 checks passed
damaradiprabowo added a commit to damaradiprabowo/bolt.diy that referenced this pull request Jan 30, 2025
* feat: add Gemini 2.0 Flash-thinking-exp-01-21 model with 65k token support (stackblitz-labs#1202)

Added the new gemini-2.0-flash-thinking-exp-01-21 model to the GoogleProvider's static model configuration. This model supports a significantly increased maxTokenAllowed limit of 65,536 tokens, enabling it to handle larger context windows compared to existing Gemini models (previously capped at 8k tokens). The model is labeled as "Gemini 2.0 Flash-thinking-exp-01-21" for clear identification in the UI/dropdowns.

* feat: added more dynamic models, sorted and remove duplicate models (stackblitz-labs#1206)

* feat: support for <think></think> tags to allow reasoning tokens formatted in UI (stackblitz-labs#1205)

* feat: enhanced Code Context and Project Summary Features (stackblitz-labs#1191)

* fix: docker prod env variable fix

* lint and typecheck

* removed hardcoded tag

* better summary generation

* improved summary generation for context optimization

* remove think tags from the generation

* fix: issue with alternate message when importing from folder and git (stackblitz-labs#1216)

* fix: tune the system prompt to avoid diff writing (stackblitz-labs#1218)

---------

Co-authored-by: Mohammad Saif Khan <[email protected]>
Co-authored-by: Anirban Kar <[email protected]>
4 participants