
feat: enhanced Code Context and Project Summary Features #1191

Merged

Conversation

thecodacus
Collaborator

@thecodacus thecodacus commented Jan 27, 2025

Enhanced Chat Context and Summary Features

Overview

This PR introduces comprehensive improvements to the chat interface and LLM interaction system, focusing on better context management, chat summaries, and UI enhancements. The changes aim to provide users with better visibility into the conversation context and improve the overall chat experience.

Key Changes

1. Chat Interface Enhancements

  • Added chat summary and code context display in the info popover
  • Implemented clickable code references that open directly in the workbench
  • Enhanced Popover component with configurable side and alignment options
  • Removed console.log debugging statements from Markdown component
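The clickable code references could be wired up along these lines. This is a hedged sketch only; `parseCodeContext`, `CodeReference`, and the field names are illustrative and not the PR's actual API:

```typescript
// Hypothetical sketch: map file paths from the code context into
// entries a popover can render as clickable workbench links.
interface CodeReference {
  path: string;  // full path handed to the workbench on click
  label: string; // short file name shown in the popover
}

function parseCodeContext(paths: string[]): CodeReference[] {
  return paths.map((path) => ({
    path,
    label: path.split('/').pop() ?? path,
  }));
}
```

A click handler would then pass `ref.path` to whatever opens files in the workbench (the exact store/method name is not shown in this PR).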

2. LLM System Improvements

  • Refactored summary generation with a structured template format
  • Enhanced context selection and management
  • Added message slice optimization for long conversations
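The message-slice optimization for long conversations might look roughly like the following. This is a sketch under assumptions: the real implementation is not shown here, and `sliceMessages` plus its policy (keep the first message for original intent, plus the most recent tail) are illustrative:

```typescript
// Hypothetical sketch of message slicing for long conversations:
// keep the opening message (the original request) and the most
// recent messages, dropping the middle to stay within budget.
interface ChatMessage {
  role: 'user' | 'assistant';
  content: string;
}

function sliceMessages(messages: ChatMessage[], maxMessages: number): ChatMessage[] {
  if (messages.length <= maxMessages) {
    return messages;
  }
  const tail = messages.slice(messages.length - (maxMessages - 1));
  return [messages[0], ...tail];
}
```

The dropped middle is exactly what the generated summary is meant to stand in for, so continuity is preserved without resending every turn.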

Technical Details

AssistantMessage Component

  • Added new state management for chatSummary and codeContext
  • Enhanced popover content with summary and context sections

LLM Processing

  • Implemented structured summary format with project overview, conversation context, and implementation status
  • Enhanced context selection with improved file handling
  • Added support for message slicing to maintain conversation continuity
  • Improved token management and continuation handling
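The structured summary format described above (project overview, conversation context, implementation status) could be modeled like this. A minimal sketch, assuming field and function names that are not taken from the PR itself:

```typescript
// Hypothetical shape of the structured chat summary. The three
// sections mirror those named in the PR description.
interface ChatSummary {
  projectOverview: string;
  conversationContext: string;
  implementationStatus: string;
}

// Render the summary into a single prompt section the LLM can
// consume in place of the sliced-out message history.
function formatSummary(summary: ChatSummary): string {
  return [
    '# Project Overview',
    summary.projectOverview,
    '# Conversation Context',
    summary.conversationContext,
    '# Implementation Status',
    summary.implementationStatus,
  ].join('\n');
}
```

Keeping the template fixed means every continuation request carries the same three sections, which makes the summary easy to parse back out for display in the info popover.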

Testing

  • Verified popover functionality with new summary and context features
  • Confirmed proper handling of long conversations with message slicing

Preview

Screenshot 2025-01-27 at 9 23 58 PM

@thecodacus changed the title from "Context optimization enhanch" to "feat: enhanced Chat Context and Summary Features" on Jan 27, 2025
@thecodacus changed the title from "feat: enhanced Chat Context and Summary Features" to "feat: enhanced Code Context and Project Summary Features" on Jan 27, 2025
@leex279
Collaborator

leex279 commented Jan 28, 2025

  • Font size of the summary text is a bit too small; maybe 2pt larger would be good

Results are very good in my view.

@thecodacus thecodacus merged commit 7016111 into stackblitz-labs:main Jan 29, 2025
3 checks passed
@thecodacus thecodacus deleted the context-optimization-enhanch branch January 29, 2025 12:51
damaradiprabowo added a commit to damaradiprabowo/bolt.diy that referenced this pull request Jan 30, 2025
* feat: add Gemini 2.0 Flash-thinking-exp-01-21 model with 65k token support (stackblitz-labs#1202)

Added the new gemini-2.0-flash-thinking-exp-01-21 model to the GoogleProvider's static model configuration. This model supports a significantly increased maxTokenAllowed limit of 65,536 tokens, enabling it to handle larger context windows compared to existing Gemini models (previously capped at 8k tokens). The model is labeled as "Gemini 2.0 Flash-thinking-exp-01-21" for clear identification in the UI/dropdowns.

* feat: added more dynamic models, sorted and remove duplicate models (stackblitz-labs#1206)

* feat: support for <think></think> tags to allow reasoning tokens formatted in UI (stackblitz-labs#1205)

* feat: enhanced Code Context and Project Summary Features (stackblitz-labs#1191)

* fix: docker prod env variable fix

* lint and typecheck

* removed hardcoded tag

* better summary generation

* improved summary generation for context optimization

* remove think tags from the generation

* fix: issue with alternate message when importing from folder and git (stackblitz-labs#1216)

* fix: tune the system prompt to avoid diff writing (stackblitz-labs#1218)

---------

Co-authored-by: Mohammad Saif Khan <[email protected]>
Co-authored-by: Anirban Kar <[email protected]>