[V1][Pixtral-HF] Add custom slice_encoder_output for Pixtral #13080
base: main
Conversation
* confirm that `offline_inference_vision_language.py` and `benchmark_throughput.py` run * FIXME: the placeholders in the output, however, are empty - will fix in the next commit Signed-off-by: Linkun Chen <[email protected]>
* add a test for Pixtral * fix a typo Signed-off-by: Linkun Chen <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]> Signed-off-by: Linkun Chen <[email protected]>
…ect#10383) Signed-off-by: youkaichao <[email protected]> Signed-off-by: Linkun Chen <[email protected]>
…odels (vllm-project#10374) Signed-off-by: Roger Wang <[email protected]> Signed-off-by: Linkun Chen <[email protected]>
Signed-off-by: Chendi Xue <[email protected]> Signed-off-by: Linkun Chen <[email protected]>
…ject#10394) Signed-off-by: Isotr0py <[email protected]> Signed-off-by: Linkun Chen <[email protected]>
Signed-off-by: youkaichao <[email protected]> Signed-off-by: Linkun Chen <[email protected]>
Signed-off-by: Kunshang Ji <[email protected]> Signed-off-by: Linkun Chen <[email protected]>
…ject#10403) Signed-off-by: imkero <[email protected]> Signed-off-by: Linkun Chen <[email protected]>
vllm-project#10392) Signed-off-by: wchen61 <[email protected]> Signed-off-by: Linkun Chen <[email protected]>
Signed-off-by: Cyrus Leung <[email protected]> Signed-off-by: Linkun Chen <[email protected]>
Signed-off-by: Linkun Chen <[email protected]>
…m-project#10327) Signed-off-by: Isotr0py <[email protected]> Signed-off-by: Linkun Chen <[email protected]>
…vllm-project#10375) Signed-off-by: Hollow Man <[email protected]> Signed-off-by: Linkun Chen <[email protected]>
… optional argument also require it to be passed as kwargs, to avoid breaking existing code. Signed-off-by: Linkun Chen <[email protected]>
mypy is not smart enough to validate kwargs Signed-off-by: Linkun Chen <[email protected]>
Signed-off-by: Linkun Chen <[email protected]>
👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default; only a small subset of checks runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either add the `ready` label to the PR or enable auto-merge. 🚀
Prepare for vllm-project#11409. For the Pixtral model, we need to insert placeholders in the middle of the encoder output so that it fits into the whole soft embedding. This makes the slicing operation tricky. This PR raises an assertion if something is off. Signed-off-by: Linkun Chen <[email protected]>
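To make the tricky part concrete, here is a minimal sketch of the idea (the token IDs and the helper name `scatter_image_embeddings` are made up for illustration; this is not the actual vLLM code): the encoder output only covers the image-patch positions, so it has to be scattered around the break/end slots, and a count mismatch should fail loudly.

```python
import torch

# Hypothetical placeholder token IDs for the sketch.
IMG_TOKEN, IMG_BREAK, IMG_END = 10, 11, 12

def scatter_image_embeddings(
    input_embeds: torch.Tensor,     # [seq_len, hidden]
    placeholder_ids: torch.Tensor,  # [seq_len] prompt token IDs
    start: int,                     # start of this image's placeholder range
    length: int,                    # length of the placeholder range
    encoder_output: torch.Tensor,   # [num_image_tokens, hidden]
) -> None:
    """Copy encoder output only into the IMG_TOKEN positions of the range."""
    window = placeholder_ids[start:start + length]
    is_image = window == IMG_TOKEN
    # If the number of image-token slots does not match the encoder output,
    # something is off (e.g. a wrong placeholder layout) -- fail loudly.
    assert int(is_image.sum()) == encoder_output.shape[0], (
        f"expected {int(is_image.sum())} image embeddings, "
        f"got {encoder_output.shape[0]}")
    # Break/end positions keep their text embeddings; only image slots are filled.
    input_embeds[start:start + length][is_image] = encoder_output
```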
For your reference, I have added a mapping from encoder outputs to embeddings in the outputs of the Molmo multi-modal processor (#12966; see that PR for details).
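Purely for illustration, such a mapping could be expressed along these lines (the class and field names below are invented for the sketch; they are not the structures actually added in #12966):

```python
from dataclasses import dataclass

@dataclass
class EncoderToEmbeddingMap:
    # Row i of the encoder output goes to embedding position positions[i];
    # break/end positions are simply absent from the map.
    positions: list[int]

# Example: a 2x2 patch grid with a break token after the first row and an
# end token at the very end -> positions 2 and 5 are skipped.
example = EncoderToEmbeddingMap(positions=[0, 1, 3, 4])
```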
Prepare for #11409.

This PR allows a model to override the `_gather_encoder_outputs` logic. Models like Pixtral need to take special tokens (the break/end tokens) into consideration while gathering soft tokens. See this comment for more context.
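As a rough illustration of that idea (a sketch only; the class, method, and attribute names below are assumed for the example and are not the exact vLLM V1 interfaces):

```python
import torch

class DefaultGather:
    """Default behaviour: each placeholder position maps 1:1 to one
    encoder-output row, so a contiguous slice is enough."""

    def slice_encoder_output(
        self, encoder_output: torch.Tensor, start_idx: int, end_idx: int
    ) -> torch.Tensor:
        return encoder_output[start_idx:end_idx]


class PixtralStyleGather(DefaultGather):
    """Pixtral-HF interleaves [IMG_BREAK]/[IMG_END] tokens with image tokens,
    so placeholder positions and encoder rows are no longer 1:1."""

    def __init__(self, is_image_token: torch.Tensor):
        # Boolean mask over the full placeholder range: True where the
        # position is a real image-patch token, False for break/end tokens.
        self.is_image_token = is_image_token

    def slice_encoder_output(
        self, encoder_output: torch.Tensor, start_idx: int, end_idx: int
    ) -> torch.Tensor:
        mask = self.is_image_token
        rows_before = int(mask[:start_idx].sum())          # rows consumed before the slice
        rows_inside = int(mask[start_idx:end_idx].sum())   # rows inside the slice
        return encoder_output[rows_before:rows_before + rows_inside]
```

The point of the hook is that only the model knows which placeholder positions actually consume encoder-output rows, so the per-model override owns that bookkeeping.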
Test
Tested by running the example scripts with #11409 patched; results are the same as V0.
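For context, a minimal way to exercise this path might look like the following (the checkpoint name, prompt template, and V1 toggle are assumptions for illustration, not part of this PR):

```python
import os

# Assumption: the VLLM_USE_V1 flag, as used around the time of this PR,
# opts into the V1 engine.
os.environ["VLLM_USE_V1"] = "1"

from PIL import Image
from vllm import LLM, SamplingParams

# Hypothetical choice of the HF-format Pixtral checkpoint and prompt template.
llm = LLM(model="mistral-community/pixtral-12b", max_model_len=8192)
image = Image.open("example.jpg")

outputs = llm.generate(
    {
        "prompt": "<s>[INST][IMG]\nDescribe the image.[/INST]",
        "multi_modal_data": {"image": image},
    },
    SamplingParams(max_tokens=64),
)
print(outputs[0].outputs[0].text)
```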
cc @WoosukKwon @comaniac @ywang96