[Bugfix] Check that number of images matches number of <|image|> tokens with mllama #13911
base: main
Conversation
This check was originally added in vllm-project#11939 but was removed as part of the refactoring in vllm-project#11427.
Signed-off-by: Travis Johnson <[email protected]>
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
# Number of image tags is greater than the number of images provided
prompt = "<|begin_of_text|><|image|><|image|> Compare the two images"  # noqa: E501
# Number of image groups is greater than the number of images provided
prompt = "<|begin_of_text|><|image|> <|image|> Compare the two images"  # noqa: E501
Is this whitespace necessary? If so, please add a note.
Yes. The crash only triggers if there are more groups of <|image|> tokens. I updated the comment from "tags" to "groups" to indicate that.
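For concreteness, here is a rough standalone illustration (plain Python over the raw prompt string, not the tokenized ids vLLM actually works with) of what "groups" means here: consecutive <|image|> tokens count as one group, while tokens separated by other text are separate groups, and each group needs its own image.

```python
# Illustration only: counting "groups" of image tokens in a prompt string.
import re

def count_image_token_groups(prompt: str, image_token: str = "<|image|>") -> int:
    # One or more consecutive image tokens collapse into a single group.
    return len(re.findall(f"(?:{re.escape(image_token)})+", prompt))

# Adjacent tokens form one group; whitespace-separated tokens form two groups.
assert count_image_token_groups("<|begin_of_text|><|image|><|image|> Compare the two images") == 1
assert count_image_token_groups("<|begin_of_text|><|image|> <|image|> Compare the two images") == 2
```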
# Check that the number of image tokens matches the number of images
num_image_tokens = mm_inputs['prompt_token_ids'].count(
    self.info.get_hf_config().image_token_index)
image_data = mm_data.get("image", [])
num_images = 1 if isinstance(image_data, Image) else len(image_data)
if num_image_tokens != num_images:
    raise ValueError(
        f"The number of image tokens ({num_image_tokens}) must be"
        f" the same as the number of images ({num_images})")
Actually, we already check this via BaseMultiModalProcessor._validate_mm_placeholders, so I think this is no longer necessary...
The crash is reproducible on main using the request in #11939.
Looking at _validate_mm_placeholders, it seems to be a check for the placeholders generated for the encoder "prompt replacements", not a check on the number of image tokens going into the decoder.
When you mention "crash", does it occur inside the processor or inside the model? If there is a mismatch, it should throw an error in _validate_mm_placeholders after calling _find_mm_placeholders (in case prompt replacement is done in HF) or _apply_prompt_replacements (in case prompt replacement is done in vLLM).
Thank you for the quick review and guidance!
> When you mention "crash", does it occur inside the processor or inside the model?
In the modeling code, it triggers this assert statement when it tries to generate a cross-attention mask for an image token group that has no tiles / image embeddings.
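Roughly, the invariant that breaks looks like this (a simplified sketch with made-up names, not the actual mllama modeling code):

```python
# Simplified sketch of the failure mode: the cross-attention mask is built per
# group of <|image|> tokens, so a group without corresponding image
# tiles/embeddings trips an assertion deep in the modeling code.
def build_cross_attention_mask(num_token_groups: int, num_images: int) -> None:
    # The real code maps each image-token group to the tiles of its image;
    # here we only show the condition that is violated.
    assert num_token_groups <= num_images, (
        "every <|image|> token group needs image embeddings to attend to")

# Two separated <|image|> tokens (two groups) but only one image provided:
# build_cross_attention_mask(num_token_groups=2, num_images=1)  # AssertionError
```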
In my (limited) understanding, all of the "prompt replacements" and "placeholders" are for injecting additional tokens corresponding to the image embeddings. In the EncDecMultiModalProcessor, the BaseMultiModalProcessor is only called with the encoder prompt (REF). create_encoder_prompt currently inspects the image data when constructing the encoder prompt (REF). Should create_encoder_prompt be changed to count the number of image tokens in the prompt instead?
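Something along these lines is what I have in mind (a hypothetical helper, just to illustrate the question; the name and signature are made up):

```python
# Hypothetical sketch: derive the encoder prompt from the number of <|image|>
# tokens found in the input prompt text, instead of inspecting the image data.
def build_encoder_prompt(prompt: str, image_token: str = "<|image|>") -> str:
    num_image_tokens = prompt.count(image_token)
    # One image token per expected image on the encoder side.
    return image_token * num_image_tokens
```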
This check was originally added in #11939, but was removed as part of the refactoring in #11427.
I also updated a test so it would have caused the crash before this fix was applied.
Relates to #10648