Ray Serve
Model Inference · Open Source · Verified
Provider
Anyscale / Community
Scalable model-serving library built on Ray. Composes multiple ML models and Python functions into a single production endpoint, with fractional GPU support, dynamic request batching, model multiplexing, and autoscaling. Framework-agnostic: works with any Python ML library.
Released
2020-09
API
Available
Price
From $0
License: Apache-2.0