Why is TorchServe No Longer Actively Maintained? #3396
Could you clarify your decision? What do you plan to use in the future?
If TorchServe is no longer the way to serve PyTorch models, what else is out there?
The best like-for-like replacement right now is probably NVIDIA Triton with the PyTorch backend, I think.
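For context, Triton's PyTorch backend serves TorchScript models out of a model repository. A rough sketch of exporting a model for it is below; the linear model, the repository path, and the name `my_model` are placeholder assumptions, not anything from this thread:

```python
import os
import torch

# Stand-in model; replace with your own nn.Module.
model = torch.nn.Linear(10, 2).eval()

# Triton's PyTorch (libtorch) backend expects a TorchScript file named
# model.pt inside a versioned model repository:
#   model_repository/
#     my_model/
#       config.pbtxt   # backend: "pytorch", plus input/output tensor specs
#       1/
#         model.pt
os.makedirs("model_repository/my_model/1", exist_ok=True)
traced = torch.jit.trace(model, torch.randn(1, 10))
traced.save("model_repository/my_model/1/model.pt")
```

The server is then typically started with `tritonserver --model-repository=model_repository`.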
Developer of LitServe here: LitServe has a similar API interface and on-par performance, so it's super easy to port your application.
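To give a feel for the porting effort, here is a minimal LitServe sketch; the linear model and the request/response shapes are stand-ins, not a drop-in for any particular TorchServe handler:

```python
import litserve as ls
import torch

class MyLitAPI(ls.LitAPI):
    def setup(self, device):
        # Load the model once per worker; a stand-in linear model here.
        self.device = device
        self.model = torch.nn.Linear(10, 2).to(device).eval()

    def decode_request(self, request):
        # Assumes a JSON body like {"input": [[...], ...]}.
        return torch.tensor(request["input"], device=self.device)

    def predict(self, x):
        with torch.no_grad():
            return self.model(x)

    def encode_response(self, output):
        return {"output": output.tolist()}

if __name__ == "__main__":
    server = ls.LitServer(MyLitAPI(), accelerator="auto")
    server.run(port=8000)
```

The `setup`/`decode_request`/`predict`/`encode_response` split plays roughly the role a TorchServe handler's preprocess/inference/postprocess methods did.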
Just want to share a list of resources to go from here: Ray Serve (https://docs.ray.io/en/latest/serve/index.html). So far I have looked at BentoML and Ray Serve. In LitServe I cannot find any model management (?) or a packaging and management API like TorchServe's. If you are also coming from TorchServe and like open source (like me =), feel free to share your experiences/research, or correct or add anything; I'm still searching/researching. Cheers.
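As a concrete starting point for Ray Serve, a minimal deployment looks roughly like the following; the model, the input format, and the route prefix are placeholder assumptions:

```python
import torch
from ray import serve
from starlette.requests import Request

@serve.deployment(num_replicas=1, ray_actor_options={"num_gpus": 0})
class ModelDeployment:
    def __init__(self):
        # Stand-in model; load your real weights here.
        self.model = torch.nn.Linear(10, 2).eval()

    async def __call__(self, request: Request):
        # Assumes a JSON body like {"input": [[...], ...]}.
        body = await request.json()
        x = torch.tensor(body["input"])
        with torch.no_grad():
            y = self.model(x)
        return {"output": y.tolist()}

serve.run(ModelDeployment.bind(), route_prefix="/predict")
```

Ray Serve's per-deployment replica and resource options (`num_replicas`, `ray_actor_options`) cover some of what TorchServe's management API handled, though not its archive-based packaging.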
Does anyone have any recommendations for alternative frameworks that allow per-model user-provided code like TorchServe's custom handlers?
The "no longer actively maintained" notice should include a date. Especially true for the documentation at pytorch.org/serve Without having to dig I should be able to determine how recently the project has been abandoned. Thankfully google found this repo and the list of releases has the latest release in 2024-09 |
Hi @geodavic, I’d recommend LitServe as a great alternative. As a contributor, I can say it offers a user-friendly interface for serving models with excellent performance. Feel free to try it out and let me know if you have any questions! 😊
@bhimrazy What I miss from TorchServe is its support for separate endpoints and different GPU configurations for multiple models. LitServe, however, doesn't support this, according to https://lightning.ai/docs/litserve/features/multiple-endpoints#multiple-routes.
Hi @yuzhichang, you can easily configure devices, GPUs, and workers while setting up the LitServer (see: LitServer Devices). For multiple endpoints, I’d suggest creating a Docker image for each endpoint and serving them that way. If you’d like to share any thoughts or use cases on multiple endpoints, feel free to add them to this issue: #271. 😊
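If Docker per endpoint feels too heavy, one process-level workaround is to launch one LitServe process per model and pin GPUs with `CUDA_VISIBLE_DEVICES`. A hedged sketch follows; the scripts `model_a_server.py` / `model_b_server.py` and their `--port` flag are hypothetical and would each wrap a `ls.LitServer(...).run(port=...)` call:

```python
import os
import subprocess

# Hypothetical per-model server scripts, one endpoint each.
ENDPOINTS = [
    {"script": "model_a_server.py", "gpu": "0", "port": "8000"},
    {"script": "model_b_server.py", "gpu": "1", "port": "8001"},
]

procs = []
for ep in ENDPOINTS:
    env = os.environ.copy()
    # Each process only sees its assigned GPU, emulating
    # TorchServe-style per-model GPU configuration.
    env["CUDA_VISIBLE_DEVICES"] = ep["gpu"]
    procs.append(
        subprocess.Popen(
            ["python", ep["script"], "--port", ep["port"]], env=env
        )
    )

for p in procs:
    p.wait()
```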