[Usage]: Qwen2-VL-2B-Instruct Issue when passing a video URL to /chat/completions #13927
Labels: usage
Your current environment
Image: `v0.7.3`
Run command: `--model Qwen/Qwen2.5-VL-7B-Instruct --port 8080`
GPU: NVIDIA H100 PCIe
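For context, a sketch of the full launch, assuming `v0.7.3` refers to the official `vllm/vllm-openai` container image (only the tag and the flag line above come from the report; everything else is assumed):

```bash
# Hypothetical reconstruction of the launch; the container's entrypoint
# forwards these flags to vLLM's OpenAI-compatible server.
docker run --gpus all -p 8080:8080 \
  vllm/vllm-openai:v0.7.3 \
  --model Qwen/Qwen2.5-VL-7B-Instruct --port 8080
```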
I am referencing this and previously also this (although it appears to be dated).
This is the error I get from the server.
error.txt
Here is the script I am using to send the request:
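A minimal sketch of such a request (not the original script), assuming the OpenAI Python client, the `--port 8080` endpoint from the run command above, and a placeholder video URL:

```python
# Minimal sketch; the base_url follows from the run command above, while the
# prompt and video URL are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # --port 8080 from the server command
    api_key="EMPTY",                      # vLLM ignores the key unless --api-key is set
)

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-VL-7B-Instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this video."},
                # vLLM extends the OpenAI schema with a "video_url" content
                # part for video-capable models such as Qwen2-VL / Qwen2.5-VL.
                {
                    "type": "video_url",
                    "video_url": {"url": "https://example.com/sample.mp4"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```

Note that the `model` field must match the name the server was launched with (here the `--model` value, unless `--served-model-name` overrides it).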
What am I doing incorrectly?
How would you like to use vllm
I want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.