[Usage]: Qwen2-VL-2B-Instruct Issue when passing a video URL to /chat/completions #13927

cquil11 opened this issue Feb 26, 2025 · 0 comments

Your current environment

Image: v0.7.3
Run Command: --model Qwen/Qwen2.5-VL-7B-Instruct --port 8080
GPU: NVIDIA H100 PCIe
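
(For context, the full launch command implied by the image tag and arguments above would be roughly the following; the vllm/vllm-openai image name and the --gpus/-p flags are assumptions, not taken from the report.)

docker run --gpus all -p 8080:8080 vllm/vllm-openai:v0.7.3 \
    --model Qwen/Qwen2.5-VL-7B-Instruct \
    --port 8080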

I am referencing this and previously also this (although the latter appears to be dated).

This is the error I get from the server.

error.txt

Here is the script I am using to send the request:

from openai import OpenAI

# vLLM's OpenAI-compatible server ignores the API key unless one was set
# at launch, so a placeholder value is fine here.
openai_api_key = "EMPTY"
# Placeholder: the vLLM endpoint running Qwen 2.5 VL. vLLM serves the
# OpenAI-compatible routes under /v1, e.g. "http://localhost:8080/v1".
openai_api_base = "vLLM endpoint running Qwen 2.5 VL"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

# Placeholder: a publicly readable S3 URL to an .mp4 file.
video_url = "path to public s3 file containing .mp4"
model = "Qwen/Qwen2.5-VL-7B-Instruct"

# vLLM extends the OpenAI chat schema with a "video_url" content part,
# analogous to the standard "image_url" part.
chat_response = client.chat.completions.create(
    model=model,
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What's in this video?"},
            {"type": "video_url", "video_url": {"url": video_url}},
        ],
    }],
)

result = chat_response.choices[0].message.content
print("Chat completion output from video url:", result)
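
(Not part of the original report: a minimal sketch for isolating where the request fails is to replay the same payload with plain requests against the /v1/chat/completions route, so the raw HTTP status and error body are visible without the OpenAI client wrapping them. The localhost host and port below are assumptions.)

import requests

# Same payload as the script above, expressed as a raw JSON body.
payload = {
    "model": "Qwen/Qwen2.5-VL-7B-Instruct",
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "What's in this video?"},
            {"type": "video_url",
             "video_url": {"url": "path to public s3 file containing .mp4"}},
        ],
    }],
}

# Assumed host/port; adjust to the actual endpoint.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json=payload,
    timeout=300,
)
print(resp.status_code)
print(resp.text)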

What am I doing incorrectly?

How would you like to use vllm

I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.