[Model] Support VLMs with transformers backend #13754
base: main
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a reduced set of checks runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
```diff
 self.model: PreTrainedModel = AutoModel.from_config(
     self.config,
-    attn_implementation="vllm",
+    attn_implementation={"text_config": "vllm", "vision_config": "eager"},
```
The current way doesn't work with text-only LLMs. Setting `self.config.get_text_config().attn_implementation = "vllm"` would be much more generic, but that needs a fix on the transformers side first.
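A rough sketch of the two configuration styles being compared, assuming a transformers version that accepts a per-sub-config dict (which this PR relies on); `"sdpa"` stands in for vLLM's custom `"vllm"` attention implementation so the snippet stands alone:

```python
from transformers import AutoConfig, AutoModel

config = AutoConfig.from_pretrained("llava-hf/llava-1.5-7b-hf")

# Per-sub-config dict, as in this PR: each sub-config gets its own attention
# implementation. A text-only config has no vision_config entry, which is the
# concern raised above.
model = AutoModel.from_config(
    config,
    attn_implementation={"text_config": "sdpa", "vision_config": "eager"},
)

# Proposed, more generic alternative: set the implementation on the text config,
# which exists for both LLMs and VLMs. Per the comment above, making transformers
# honor this assignment is exactly the fix that is still needed upstream.
config.get_text_config().attn_implementation = "sdpa"
```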
Indeed! Let's work on refactoring the vision models! (Though it's not "that" important, because paged attention is mostly for text; here, using flex / sdpa would work better in general.)
Yeah, making it a new model will solve the problem by itself, so we can use 'sdpa' in `HFMultiModalModel`. We can work on the refactoring later; it's been on my list for a long time.
```python
@MULTIMODAL_REGISTRY.register_processor(MultiModalProcessor,
                                        info=MultiModalProcessingInfo,
                                        dummy_inputs=MultiModalDummyInputsBuilder)
class TransformersModel(nn.Module, SupportsQuant, SupportsMultiModal):
```
Using the same class to support both text-only models and multimodal models makes it difficult to maintain V1 compatibility, see: #13157
Arf, in my mind we would only need one "wrapper" to rule them all 😃 But that makes sense. We could also have something like an `is_multimodal` flag, but that might not work on your side!
Thanks for working on this! The main difficulty of supporting VLMs is not the model implementation itself, but rather the preprocessing code - vLLM V1 in particular requires precise tracking of the placeholder tokens. I see how generalizing …
cc @ywang96
@DarkLight1337 thanks for the review! Yeah, checking more involved models is a good idea to verify that all edge cases are covered; will do so. A few clarifications before that:
Yes, we currently support mixed-modality (non-interleaved) inputs and plan to eventually support interleaved-modality inputs as well.
We assume that the tokens have only gone through the tokenizer. So, placeholder tokens still have to be inserted into the input tokens. It's fine if we leave this unsolved for now - we can fall back to detokenizing the tokens back into text before passing them through the HF processor.
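A rough sketch of that fallback, with a hypothetical helper name (none of this is from the PR; it just illustrates the detokenize-then-reprocess idea):

```python
def apply_hf_processor_from_token_ids(token_ids, images, tokenizer, hf_processor):
    # The incoming prompt is plain token ids with no multimodal placeholders yet,
    # so detokenize back to text first...
    prompt_text = tokenizer.decode(token_ids)
    # ...then let the HF processor insert/expand the image placeholder tokens and
    # compute the pixel values in a single pass.
    return hf_processor(text=prompt_text, images=images, return_tensors="pt")
```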
Thanks for the PR @zucchini-nlp! I'm a bit occupied at the moment but will take a first pass later tonight.
I have no say in this, but I am excited! 🚀
```python
mask = (input_ids == self.config.image_token_index)
mask = mask.unsqueeze(-1).expand_as(inputs_embeds)
multimodal_embeddings = torch.cat(multimodal_embeddings)

# FIXME: The returned multimodal_embeddings must be either a 3D torch.Tensor of
# shape (num_items, feature_size, hidden_size), or a list / tuple of 2D
# torch.Tensors of shape (feature_size, hidden_size), so that
# multimodal_embeddings[i] retrieves the embeddings generated from the i-th
# multimodal data item (e.g., image) of the request.
inputs_embeds = inputs_embeds.masked_scatter(mask, multimodal_embeddings)
return inputs_embeds
```
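As a side note, a minimal sketch of the shape contract described in the FIXME, assuming every image yields the same number of feature tokens (the helper name and sizes are illustrative, not from the PR):

```python
import torch


def split_embeddings_per_item(flat_embeds: torch.Tensor,
                              num_items: int) -> tuple[torch.Tensor, ...]:
    """Split a (num_items * feature_size, hidden_size) tensor into a tuple of
    (feature_size, hidden_size) tensors so that result[i] is the embedding of
    the i-th multimodal item, as the FIXME requires."""
    feature_size = flat_embeds.shape[0] // num_items
    return torch.split(flat_embeds, feature_size, dim=0)


# Example: 2 images, 576 feature tokens each, hidden size 4096.
flat = torch.randn(2 * 576, 4096)
per_item = split_embeddings_per_item(flat, num_items=2)
assert len(per_item) == 2 and per_item[0].shape == (576, 4096)
```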
I know Cyrus has mentioned this, but I'd like to emphasize that for some models we need to consider the padding tokens in between image tokens as part of the feature tokens as well; then, when we merge the embeddings, the mask is created over all of these tokens.
How we handled mistral-format Pixtral is another example of this scenario, in addition to Fuyu.
vllm/vllm/model_executor/models/pixtral.py
Lines 223 to 266 in aabeb26

```python
    def get_multimodal_embeddings(self, **kwargs) -> Optional[NestedTensors]:
        image_input, image_tokens = self._parse_and_validate_image_input(
            **kwargs)
        if image_input is None:
            return None

        vision_embeddings = self._process_image_input(image_input)

        # NOTE: We patch the outputs of the vision encoder with embeddings
        # from `[IMG_BREAK]` and `[IMG_END]` tokens.
        image_embeds = self.language_model.get_input_embeddings(image_tokens)
        image_token_mask = image_tokens == self.vision_args.image_token_id
        image_embeds[image_token_mask] = vision_embeddings

        # NOTE: Image embeddings are split into separate tensors for each image
        # by the indices of `[IMG_END]` token.
        image_end_mask = image_tokens == self.vision_args.image_end_token_id
        split_indices = torch.where(image_end_mask)[0] + 1
        if len(split_indices) <= 1:
            # Do not split, return as tensor of shape [1, fs, hs]
            return image_embeds.unsqueeze(0)

        # If the last split index is the last index in image_tokens, we
        # ignore it to avoid empty split tensor
        if split_indices[-1] == len(image_tokens):
            split_indices = split_indices[:-1]

        image_embeds = image_embeds.tensor_split(split_indices.cpu())
        return image_embeds

    def get_input_embeddings(
        self,
        input_ids: torch.Tensor,
        multimodal_embeddings: Optional[NestedTensors] = None,
    ) -> torch.Tensor:
        inputs_embeds = self.language_model.get_input_embeddings(input_ids)
        if multimodal_embeddings is not None:
            inputs_embeds = merge_multimodal_embeddings(
                input_ids, inputs_embeds, multimodal_embeddings, [
                    self.vision_args.image_token_id,
                    self.vision_args.image_break_token_id,
                    self.vision_args.image_end_token_id,
                ])
        return inputs_embeds
```
I am not familiar with the Fuyu arch, so looking at Pixtral now, I don't totally get why the outputs from `get_multimodal_embeddings` contain image special tokens. Isn't it more straightforward to obtain only the image-related features and merge them using a mask (`ids == special_image_token`)? Or maybe I am missing something here.
On V1 we require each embedding tensor corresponding to an image to be contiguous, and they must be "sliced into" the input embeddings instead of "scattered into" them.
The reason we're doing this is that V1 natively supports chunked prefill, so when we schedule requests, we look not only at the decoder input sequence but also at when and how much of the encoder output / multimodal embeddings is needed, which is why we need to track where they sit in the input sequence.
This location information is represented by `PlaceholderRange`, generated by the input processor, which contains:
(1) `offset`: the index where the multimodal embedding starts in the input sequence;
(2) `length`: how long it is (and obviously the actual size of the corresponding embeddings from the output of `get_multimodal_embeddings` must match this).
Taking a toy example: my input sequence is `[T, T, T, T, I, I, I, I, I, T]`, where `T` is a text token and `I` is an image placeholder token, so we have `offset=4, length=5`. Let's say in each step we schedule 3 tokens for prefilling:
Time 0: we schedule from `start_index=0` to `end_index=2` (`[T, T, T]`); since `2 < offset=4`, we know the image embeddings `image_embeds` are not needed yet.
Time 1: we schedule from `start_index=3` to `end_index=5` (`[T, I, I]`); since `5 >= 4`, we know `image_embeds` is required for this step, so we run the vision encoder to generate it and merge `image_embeds[:(5-4+1)]` into `[T, I, I]`. We cache `image_embeds` because `end_index=5 + 1 = 6 < 4 + 5 = 9 (offset + length)`, indicating it's still needed for the next chunked prefills.
Time 2: we schedule from `start_index=6` to `end_index=8` (`[I, I, I]`); since `8 >= 4`, we merge `image_embeds[(5-4+1):(8-4+1)]` into `[I, I, I]`, and now we can evict `image_embeds` since `end_index=8 + 1 = 9 >= 9`, indicating it's no longer needed for prefilling.
Time 3: we schedule from `start_index=9` to `end_index=9` (`[T]`); since `9 >= 9`, we know `image_embeds` is not required for this step.
As you can see, the entire logic here assumes that the multimodal embeddings are contiguous in the input sequence. If we allowed them to be scattered into the text embeddings, we would need a more complex definition for this location information, which in turn would complicate the scheduling logic.
Hope this helps!
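To make the slicing concrete, here is a toy sketch of the logic described above (not vLLM's actual scheduler code; only the `PlaceholderRange` field names `offset` and `length` come from the comment, everything else is illustrative):

```python
from dataclasses import dataclass
from typing import Optional

import torch


@dataclass
class PlaceholderRange:
    offset: int  # index where the multimodal embedding starts in the input sequence
    length: int  # number of placeholder tokens it covers


def embeds_slice_for_chunk(ph: PlaceholderRange, start_index: int, end_index: int,
                           image_embeds: torch.Tensor) -> Optional[torch.Tensor]:
    """Return the contiguous slice of image_embeds needed for the prefill chunk
    [start_index, end_index] (inclusive), or None if the chunk does not overlap
    the placeholder span."""
    ph_start = ph.offset
    ph_end = ph.offset + ph.length - 1  # index of the last placeholder token
    if end_index < ph_start or start_index > ph_end:
        return None
    lo = max(start_index, ph_start) - ph_start
    hi = min(end_index, ph_end) - ph_start + 1
    return image_embeds[lo:hi]


# Toy example from above: [T, T, T, T, I, I, I, I, I, T] with 3 tokens per step.
ph = PlaceholderRange(offset=4, length=5)
image_embeds = torch.randn(ph.length, 8)  # (length, hidden_size)

for start in (0, 3, 6, 9):
    end = min(start + 2, 9)
    chunk = embeds_slice_for_chunk(ph, start, end, image_embeds)
    rows = 0 if chunk is None else chunk.shape[0]
    print(f"chunk [{start}, {end}]: needs {rows} image embedding rows")
# Prints 0, 2, 3, 0 rows, matching Time 0 through Time 3 above.
```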
This PR adds support for multimodal models in the Transformers backend. As a start, I tested with vanilla LLaVA using the demo scripts from the documentation; the generated outputs matched vLLM's outputs.
For this branch to work, we first need a few changes from transformers, starting from huggingface/transformers#36367. For now I want to ask for feedback on whether this aligns with how vLLM sees things.
cc @Isotr0py @ArthurZucker
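For reference, a minimal sketch of how one might try vanilla LLaVA once this branch and the required transformers changes are installed. The `model_impl="transformers"` switch is how vLLM selects the Transformers backend for text models today; assuming the multimodal path reuses it is a guess on my part, not something stated in this PR:

```python
from PIL import Image
from vllm import LLM, SamplingParams

llm = LLM(
    model="llava-hf/llava-1.5-7b-hf",  # vanilla LLaVA, as tested in the PR
    model_impl="transformers",         # ask vLLM to use the Transformers backend
)

# A blank image just to exercise the multimodal path.
image = Image.new("RGB", (336, 336))
prompt = "USER: <image>\nWhat is shown in this image? ASSISTANT:"

outputs = llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": image}},
    SamplingParams(max_tokens=64),
)
print(outputs[0].outputs[0].text)
```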