feat: support for <think></think> tags to allow reasoning tokens formatted in UI #1205

Merged: 1 commit merged into stackblitz-labs:main on Jan 28, 2025

Conversation

@thecodacus (Collaborator) commented on Jan 28, 2025

Some providers serving DeepSeek models stream the model's thinking tokens directly in the response, wrapped in a <think> tag.
This PR detects that tag and encloses its content in a container for a better user experience.

Example providers: Groq, and Ollama with DeepSeek models.
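
For illustration only (this is not the PR's actual implementation), here is a minimal TypeScript sketch of the idea: split a response into the <think> reasoning segment and the visible answer so the UI can render the reasoning in its own container. The function name splitThinkTags and the return shape are hypothetical, and the sketch assumes the full response text is available rather than a partial stream.

```typescript
// Hypothetical sketch: separate <think>...</think> reasoning from the answer.
// Not the PR's actual code; names and shapes are illustrative only.
interface ParsedResponse {
  reasoning: string | null; // content found between <think> and </think>, if any
  answer: string;           // everything outside the think block
}

function splitThinkTags(raw: string): ParsedResponse {
  const match = raw.match(/<think>([\s\S]*?)<\/think>/);

  if (!match) {
    return { reasoning: null, answer: raw };
  }

  return {
    reasoning: match[1].trim(),
    answer: raw.replace(match[0], '').trim(),
  };
}

// Example: a DeepSeek-style response as streamed by Groq or Ollama.
const sample = '<think>The user wants a haiku about rain.</think>Rain taps the window glass...';
console.log(splitThinkTags(sample));
// -> { reasoning: 'The user wants a haiku about rain.', answer: 'Rain taps the window glass...' }
```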

@thecodacus requested a review from leex279 on January 28, 2025 19:13
@thecodacus merged commit a199295 into stackblitz-labs:main on Jan 28, 2025
4 checks passed
damaradiprabowo added a commit to damaradiprabowo/bolt.diy that referenced this pull request Jan 30, 2025
* feat: add Gemini 2.0 Flash-thinking-exp-01-21 model with 65k token support (stackblitz-labs#1202)

Added the new gemini-2.0-flash-thinking-exp-01-21 model to the GoogleProvider's static model configuration. This model supports a significantly increased maxTokenAllowed limit of 65,536 tokens, enabling it to handle larger context windows compared to existing Gemini models (previously capped at 8k tokens). The model is labeled as "Gemini 2.0 Flash-thinking-exp-01-21" for clear identification in the UI/dropdowns. (A configuration sketch follows this commit list.)

* feat: added more dynamic models, sorted and remove duplicate models (stackblitz-labs#1206)

* feat: support for <think></think> tags to allow reasoning tokens formatted in UI (stackblitz-labs#1205)

* feat: enhanced Code Context and Project Summary Features (stackblitz-labs#1191)

* fix: docker prod env variable fix

* lint and typecheck

* removed hardcoded tag

* better summary generation

* improved summary generation for context optimization

* remove think tags from the generation

* fix: issue with alternate message when importing from folder and git (stackblitz-labs#1216)

* fix: tune the system prompt to avoid diff writing (stackblitz-labs#1218)

---------

Co-authored-by: Mohammad Saif Khan <[email protected]>
Co-authored-by: Anirban Kar <[email protected]>
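
To make the first commit in the list above more concrete, here is a hedged sketch of what such a static model entry might look like. The ModelInfo field names (name, label, provider, maxTokenAllowed) and the pre-existing 8k entry are assumptions inferred from the commit message, not code taken from the repository.

```typescript
// Hypothetical sketch of the GoogleProvider static model entry described above.
// Field names and the existing entry are assumptions, not the repository's actual code.
interface ModelInfo {
  name: string;            // model identifier sent to the API
  label: string;           // human-readable name shown in UI dropdowns
  provider: string;        // provider key
  maxTokenAllowed: number; // per-model token ceiling
}

const googleStaticModels: ModelInfo[] = [
  // Pre-existing Gemini entries were reportedly capped at 8k tokens.
  { name: 'gemini-1.5-flash-latest', label: 'Gemini 1.5 Flash', provider: 'Google', maxTokenAllowed: 8192 },

  // New entry from the commit: a 65,536-token limit for the thinking-exp model.
  {
    name: 'gemini-2.0-flash-thinking-exp-01-21',
    label: 'Gemini 2.0 Flash-thinking-exp-01-21',
    provider: 'Google',
    maxTokenAllowed: 65536,
  },
];
```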