forked from stackblitz/bolt.new
feat: enhanced Code Context and Project Summary Features #1191
Merged
thecodacus
merged 9 commits into
stackblitz-labs:main
from
thecodacus:context-optimization-enhanch
Jan 29, 2025
Conversation
leex279 approved these changes on Jan 28, 2025.
damaradiprabowo added a commit to damaradiprabowo/bolt.diy that referenced this pull request on Jan 30, 2025:

* feat: add Gemini 2.0 Flash-thinking-exp-01-21 model with 65k token support (stackblitz-labs#1202)

  Added the new gemini-2.0-flash-thinking-exp-01-21 model to the GoogleProvider's static model configuration. This model supports a significantly increased maxTokenAllowed limit of 65,536 tokens, enabling it to handle larger context windows compared to existing Gemini models (previously capped at 8k tokens). The model is labeled as "Gemini 2.0 Flash-thinking-exp-01-21" for clear identification in the UI/dropdowns.
* feat: added more dynamic models, sorted and remove duplicate models (stackblitz-labs#1206)
* feat: support for <think></think> tags to allow reasoning tokens formatted in UI (stackblitz-labs#1205)
* feat: enhanced Code Context and Project Summary Features (stackblitz-labs#1191)
* fix: docker prod env variable fix
* lint and typecheck
* removed hardcoded tag
* better summary generation
* improved summary generation for context optimization
* remove think tags from the generation
* fix: issue with alternate message when importing from folder and git (stackblitz-labs#1216)
* fix: tune the system prompt to avoid diff writing (stackblitz-labs#1218)

Co-authored-by: Mohammad Saif Khan <[email protected]>
Co-authored-by: Anirban Kar <[email protected]>
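The Gemini model addition described in the commit above amounts to one entry in the provider's static model list. A minimal sketch of what such an entry could look like follows; only the model name, label, and the 65,536-token `maxTokenAllowed` value come from the commit message, while the `ModelInfo` interface shape and remaining field names are assumptions, not the actual bolt.diy source:

```typescript
// Hypothetical shape of a static model entry. Only maxTokenAllowed, the
// model name, and the label are taken from the commit message; the rest
// of the interface is an assumption for illustration.
interface ModelInfo {
  name: string;            // model identifier sent to the provider API
  label: string;           // display name shown in UI dropdowns
  provider: string;        // owning provider
  maxTokenAllowed: number; // context window cap in tokens
}

const geminiFlashThinking: ModelInfo = {
  name: 'gemini-2.0-flash-thinking-exp-01-21',
  label: 'Gemini 2.0 Flash-thinking-exp-01-21',
  provider: 'Google',
  maxTokenAllowed: 65_536, // up from the 8k cap on earlier Gemini models
};
```

Keeping such entries in a single static list is what makes the follow-up commit (sorting and de-duplicating dynamic models) a simple array operation.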
Enhanced Chat Context and Summary Features
Overview
This PR introduces comprehensive improvements to the chat interface and LLM interaction system, focusing on better context management, chat summaries, and UI enhancements. The changes aim to provide users with better visibility into the conversation context and improve the overall chat experience.
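As a rough illustration of the context-management idea (not the PR's actual implementation; every name below is hypothetical), a project summary can be generated once and prepended as a system message, so the LLM retains high-level project context without receiving every file on every turn:

```typescript
interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

// Hypothetical helper: inject a cached project summary as a leading
// system message so the model sees high-level context without the
// full file contents being resent each turn.
function withProjectSummary(
  summary: string,
  messages: ChatMessage[],
): ChatMessage[] {
  const summaryMessage: ChatMessage = {
    role: 'system',
    content: `Project summary:\n${summary}`,
  };
  return [summaryMessage, ...messages];
}
```

The benefit is that the summary's token cost is fixed and small, while the files it replaces could otherwise dominate the context window.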
Key Changes
1. Chat Interface Enhancements
2. LLM System Improvements
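One LLM-side change noted in the commit history is removing `<think></think>` tags from summary generation output. A minimal sketch of how such stripping could work (the function name is hypothetical, and this is not the PR's actual code):

```typescript
// Hypothetical helper: drop <think>...</think> reasoning blocks from
// model output before using it as a project summary, so internal
// chain-of-thought text never leaks into the stored summary.
function stripThinkTags(text: string): string {
  return text.replace(/<think>[\s\S]*?<\/think>/g, '').trim();
}
```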
Technical Details
AssistantMessage Component
LLM Processing
Testing
Preview