Poor performance for large (~100MB) payloads #651
I set this up to run under hypercorn (with and without uvloop) by changing the `else` clause in the `main()` function to:

```python
# Uncomment as appropriate

# Hypercorn:
import asyncio

from hypercorn.asyncio import serve, Config

config = Config()
config.bind = ["localhost:8000"]  # as an example configuration setting
asyncio.run(serve(app, config))

# Hypercorn + uvloop:
# import asyncio
# import uvloop
# from hypercorn.asyncio import serve, Config
# config = Config()
# config.bind = ["localhost:8000"]  # as an example configuration setting
# asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
# loop = asyncio.new_event_loop()
# asyncio.set_event_loop(loop)
# loop.run_until_complete(serve(app, config))

# Uvicorn:
# import uvicorn
# uvicorn.run(app)
```

uvicorn handles the request in ~16-17s, hypercorn without uvloop handles it in ~1m6s, and hypercorn with uvloop is even slower at ~1m13s 👀. So maybe this is an issue with starlette itself.

Any insight or suggested lines of investigation would be appreciated!
I believe the issue is this line: `starlette/requests.py`, line 183 (at commit 6a1c7d3).
I think this is related to the quadratic-scaling problem when building strings via repeated `+=` concatenation. I'm going to investigate and see if changing it to appending the chunks to a list and calling `b"".join(...)` at the end fixes it.

Edit: this was indeed the issue. Even for payloads as small as 5MB, in my testing, the proposed change made the server handle the request ~15-20% faster.
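For context, a minimal sketch of the quadratic-scaling problem (the function names here are illustrative, not starlette's actual code): since `bytes` is immutable, each `+=` copies the entire accumulated body so far, giving O(n²) work overall, whereas collecting the chunks in a list and joining once copies each byte exactly once, giving O(n).

```python
def assemble_concat(chunks):
    # O(n^2): every += allocates a new bytes object and copies
    # the whole accumulated body plus the new chunk
    body = b""
    for chunk in chunks:
        body += chunk
    return body


def assemble_join(chunks):
    # O(n): only references are collected in the loop; the single
    # join at the end copies each byte exactly once
    parts = []
    for chunk in chunks:
        parts.append(chunk)
    return b"".join(parts)


# Both produce identical results; only the scaling behavior differs.
chunks = [b"x" * 1024 for _ in range(100)]
assert assemble_concat(chunks) == assemble_join(chunks)
```

With many small chunks (as when reading a ~100MB request body off a stream), the difference between the two grows rapidly with payload size, which matches the observation above that the slowdown shrinks for smaller payloads.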
Closed via #653
Awesome work @dmontagu!
I'm noticing especially slow handling of large request bodies when running in uvicorn, and I'm trying to get to the bottom of it.
If this kind of performance is expected for payloads of this size for any reason, please let me know.
The script below posts a large payload (~100MB in size) to a `starlette.applications.Starlette` endpoint, which just returns a success response. Running via the starlette `TestClient`, I get a response in ~0.65 seconds; running via uvicorn it takes ~17.5 seconds (or ~27x slower). (I'll note that this discrepancy becomes much smaller as the size of the payload decreases -- I think it was about 3-5x for 10MB, and not really significant below 1MB.)
I was able to get speeds comparable to the `TestClient` run's speed using a flask implementation. I also get similar slowdowns when running via gunicorn with a uvicorn worker (I haven't tested other servers; not sure if there are recommended alternatives).
This script can perform three actions:

- run the server, if executed with no extra arguments
- post the payload to the running server, if executed with the argument `--uvicorn-test` (requires the server to have been previously started)
- post the payload via the `TestClient`, if executed with the argument `--asgi-test`
The script performs only a single request, but the speed difference is very consistently this extreme.
I ran cProfile over the server while the slow response (to a single request) was being generated, and by far the line that stood out was one where `body` is a reference to the `starlette.requests.Request.body` method. Nothing else was remotely close in the "Own Time" column. (Only `uvloop.loop.Loop.run_until_complete` was more than 1%, and I think that was just downtime while waiting for me to trigger the request.)

This was originally an issue posted to fastapi (fastapi/fastapi#360), but it seems to be an issue with either uvicorn or starlette. (I am going to cross-post this issue to uvicorn as well.) In that issue, a (more complex) script was posted comparing the performance to that of flask; flask was able to achieve similar performance to what I can get using the ASGI `TestClient`.