feat: added Gemini 2.0 Flash-thinking-exp-01-21 model with 65k token support #1202
Conversation
Added the new gemini-2.0-flash-thinking-exp-01-21 model to the GoogleProvider's static model configuration. This model supports a significantly increased maxTokenAllowed limit of 65,536 tokens, enabling it to handle larger context windows compared to existing Gemini models (previously capped at 8k tokens). The model is labeled as "Gemini 2.0 Flash-thinking-exp-01-21" for clear identification in the UI/dropdowns.
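For context, this is roughly what such a static model entry might look like in the provider's configuration. The `ModelInfo` field names and the list layout below are assumptions inferred from the description above, not a verbatim copy of the repository code:

```typescript
// Sketch only: the exact ModelInfo shape and GoogleProvider layout are
// assumptions based on the PR description, not the actual repository code.
interface ModelInfo {
  name: string;            // model id sent to the API
  label: string;           // human-readable name shown in UI dropdowns
  provider: string;        // provider key, e.g. 'Google'
  maxTokenAllowed: number; // upper bound on tokens the app will request
}

// Existing Gemini entries were capped at 8k tokens; the new entry raises the
// limit to 65,536 for the flash-thinking experimental model.
const googleStaticModels: ModelInfo[] = [
  { name: 'gemini-1.5-flash-latest', label: 'Gemini 1.5 Flash', provider: 'Google', maxTokenAllowed: 8192 },
  {
    name: 'gemini-2.0-flash-thinking-exp-01-21',
    label: 'Gemini 2.0 Flash-thinking-exp-01-21',
    provider: 'Google',
    maxTokenAllowed: 65536,
  },
];
```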
Model working.
Hi @elyzionz,
I will be adding dynamic model loading for all these providers so we don't have to add them one by one.
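As an illustration of what dynamic model loading could look like, here is a rough sketch that queries Google's public model-listing endpoint and maps the response onto the static-model shape sketched earlier. The `fetchGoogleModels` helper name, the response typing, and the fallback token limit are assumptions, not the implementation that was later merged:

```typescript
// Hypothetical sketch of dynamic model discovery against Google's public
// model-listing endpoint; helper name and mapping are assumptions, not the
// repository's actual implementation.
type GoogleModelEntry = {
  name: string;             // e.g. "models/gemini-2.0-flash-thinking-exp-01-21"
  displayName: string;
  outputTokenLimit?: number;
};

async function fetchGoogleModels(apiKey: string) {
  const res = await fetch(
    `https://generativelanguage.googleapis.com/v1beta/models?key=${apiKey}`,
  );

  if (!res.ok) {
    throw new Error(`Failed to list Google models: ${res.status}`);
  }

  const data = (await res.json()) as { models: GoogleModelEntry[] };

  // Map API entries onto the static-model shape used in the sketch above.
  return data.models.map((m) => ({
    name: m.name.replace(/^models\//, ''), // API returns ids like "models/gemini-..."
    label: m.displayName,
    provider: 'Google',
    maxTokenAllowed: m.outputTokenLimit ?? 8192, // fall back to the old 8k cap
  }));
}
```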
* feat: add Gemini 2.0 Flash-thinking-exp-01-21 model with 65k token support (stackblitz-labs#1202)
  Added the new gemini-2.0-flash-thinking-exp-01-21 model to the GoogleProvider's static model configuration. This model supports a significantly increased maxTokenAllowed limit of 65,536 tokens, enabling it to handle larger context windows compared to existing Gemini models (previously capped at 8k tokens). The model is labeled as "Gemini 2.0 Flash-thinking-exp-01-21" for clear identification in the UI/dropdowns.
* feat: added more dynamic models, sorted and remove duplicate models (stackblitz-labs#1206)
* feat: support for <think></think> tags to allow reasoning tokens formatted in UI (stackblitz-labs#1205)
* feat: enhanced Code Context and Project Summary Features (stackblitz-labs#1191)
* fix: docker prod env variable fix
* lint and typecheck
* removed hardcoded tag
* better summary generation
* improved summary generation for context optimization
* remove think tags from the generation
* fix: issue with alternate message when importing from folder and git (stackblitz-labs#1216)
* fix: tune the system prompt to avoid diff writing (stackblitz-labs#1218)

Co-authored-by: Mohammad Saif Khan <[email protected]>
Co-authored-by: Anirban Kar <[email protected]>