[Bugfix] Check that number of images matches number of <|image|> tokens with mllama #13911

Status: Open. Wants to merge 2 commits into base: main.
Conversation

@tjohnson31415 (Contributor) commented on Feb 26, 2025:

This check was originally added in #11939, but was removed as part of the refactoring in #11427.

I also updated a test so that it would have caused the crash before this fix was applied.

Relates to #10648
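The reinstated check can be sketched independently of vLLM: count the image placeholder occurrences in the prompt and compare against the number of images supplied. The names `IMAGE_TOKEN` and `validate_image_count` below are illustrative, not vLLM's API:

```python
# Minimal sketch of the validation this PR reinstates (illustrative names).
IMAGE_TOKEN = "<|image|>"

def validate_image_count(prompt: str, images: list) -> None:
    """Raise if the number of image placeholders differs from len(images)."""
    num_image_tokens = prompt.count(IMAGE_TOKEN)
    if num_image_tokens != len(images):
        raise ValueError(
            f"The number of image tokens ({num_image_tokens}) must be"
            f" the same as the number of images ({len(images)})")

# One placeholder, one image: passes.
validate_image_count("<|begin_of_text|><|image|> Describe this image", ["img0"])

# Two placeholders, one image: rejected up front instead of crashing later.
try:
    validate_image_count("<|begin_of_text|><|image|> <|image|> Compare", ["img0"])
except ValueError as e:
    print(e)
```

Rejecting the request in the processor keeps the mismatch from reaching the model's cross-attention mask construction, where it previously surfaced as an assertion failure.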


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge.

🚀

- # Number of image tags is greater than the number of images provided
- prompt = "<|begin_of_text|><|image|><|image|> Compare the two images"  # noqa: E501
+ # Number of image groups is greater than the number of images provided
+ prompt = "<|begin_of_text|><|image|> <|image|> Compare the two images"  # noqa: E501
@DarkLight1337 (Member) commented on Feb 26, 2025:

Is this whitespace necessary? If so, please add a note.

@tjohnson31415 (Contributor, Author) replied:

Yes. The crash only triggers if there are more groups of <|image|> tokens than images. I updated the comment from "tags" to "groups" to indicate that.
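The distinction between token count and group count can be illustrated with a small sketch: consecutive <|image|> tokens form a single group, while a separator (here, whitespace) starts a new one. This is an interpretation of the behavior described in the thread, not mllama's actual grouping code:

```python
import re

# Count runs of consecutive <|image|> tokens. Adjacent tokens form one
# group; any intervening text starts a new group. Illustrative only.
def count_image_groups(prompt: str) -> int:
    return len(re.findall(r"(?:<\|image\|>)+", prompt))

print(count_image_groups("<|begin_of_text|><|image|><|image|> Compare"))   # adjacent → 1
print(count_image_groups("<|begin_of_text|><|image|> <|image|> Compare"))  # separated → 2
```

This is why the test prompt needs the whitespace: without it, the two tokens collapse into one group and the pre-fix crash would not reproduce.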

Comment on lines +181 to +189
# Check that the number of image tokens matches the number of images
num_image_tokens = mm_inputs['prompt_token_ids'].count(
    self.info.get_hf_config().image_token_index)
image_data = mm_data.get("image", [])
num_images = 1 if isinstance(image_data, Image) else len(image_data)
if num_image_tokens != num_images:
    raise ValueError(
        f"The number of image tokens ({num_image_tokens}) must be"
        f" the same as the number of images ({num_images})")
@DarkLight1337 (Member) commented on Feb 26, 2025:

Actually, we already check this via BaseMultiModalProcessor._validate_mm_placeholders, so I think this is no longer necessary...

@tjohnson31415 (Contributor, Author) replied:

The crash is reproducible on main using the request in #11939.

Looking at _validate_mm_placeholders, it seems to be a check on the placeholders generated for the encoder "prompt replacements", not a check on the number of image tokens going into the decoder.

@DarkLight1337 (Member) replied:

When you mention "crash", does it occur inside the processor or inside the model? If there is a mismatch, it should throw an error in _validate_mm_placeholders after calling _find_mm_placeholders (in case prompt replacement is done in HF) or _apply_prompt_replacements (in case prompt replacement is done in vLLM).

@tjohnson31415 (Contributor, Author) replied:

Thank you for the quick review and guidance!

When you mention "crash", does it occur inside the processor or inside the model?

In the modeling code, it triggers this assert statement when it tries to generate a cross-attention mask for an image token group that has no tiles / image embeddings.

In my (limited) understanding, all of the "prompt replacements" and "placeholders" are for injecting additional tokens corresponding to the image embeddings. In the EncDecMultiModalProcessor, the BaseMultiModalProcessor is only called with the encoder prompt (REF). create_encoder_prompt currently inspects the image data when constructing the encoder prompt (REF). Should create_encoder_prompt be changed to count the number of image tokens in the prompt instead?
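The assert described above can be pictured with a toy version: the modeling code builds a cross-attention mask per group of consecutive image tokens, so a group with no corresponding image tiles / embeddings has nothing to attend to. All names below are hypothetical, not vLLM's mllama implementation:

```python
# Toy illustration of the failing assert (hypothetical names throughout).

def image_token_group_starts(token_ids, image_token_id):
    """Return start indices of runs of consecutive image tokens."""
    starts = []
    prev_was_image = False
    for i, tok in enumerate(token_ids):
        if tok == image_token_id:
            if not prev_was_image:
                starts.append(i)
            prev_was_image = True
        else:
            prev_was_image = False
    return starts

def build_cross_attention_mask(token_ids, image_token_id, num_images):
    groups = image_token_group_starts(token_ids, image_token_id)
    # Mirrors the assert that fires on main: every image-token group
    # must correspond to one provided image.
    assert len(groups) == num_images, (
        f"{len(groups)} image-token groups but only {num_images} images")
    return groups  # a real implementation would build the mask here

# Two separated image tokens (id 5) with two images: passes.
print(build_cross_attention_mask([1, 5, 2, 5, 3], image_token_id=5, num_images=2))  # → [1, 3]
```

With `num_images=1` the same call trips the assert deep in the model; the processor-level check in this PR rejects such requests before they get that far.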
