BigDL Cluster Serving is a lightweight, distributed, real-time serving solution that supports a wide range of deep learning models (such as TensorFlow, PyTorch, Caffe, BigDL and OpenVINO models). It provides a simple pub/sub API, so that users can easily send their inference requests to the input queue (using a simple Python API); Cluster Serving will then automatically manage the scale-out and real-time model inference across a large cluster (using distributed streaming frameworks such as Apache Spark Streaming, Apache Flink, etc.).
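The pub/sub workflow described above can be sketched with a minimal in-memory stand-in. The real Cluster Serving client publishes requests to a queue (backed by Redis) and the streaming backend writes predictions to an output store; here both are plain Python objects, and `fake_model` and the request key are purely illustrative, not the actual Cluster Serving API:

```python
import queue
import threading

input_queue = queue.Queue()   # stands in for the Redis-backed input queue
output_results = {}           # stands in for the output result store

def fake_model(tensor):
    # Placeholder "inference": just sum the input values.
    return sum(tensor)

def serving_worker():
    # In Cluster Serving, a Flink / Spark Streaming job plays this role,
    # scaling model inference across the cluster.
    while True:
        item = input_queue.get()
        if item is None:                  # shutdown signal
            input_queue.task_done()
            break
        key, tensor = item
        output_results[key] = fake_model(tensor)
        input_queue.task_done()

worker = threading.Thread(target=serving_worker)
worker.start()

# Client side: enqueue a request keyed by an id, then look up the result.
input_queue.put(("img-001", [0.1, 0.2, 0.3]))
input_queue.put(None)
input_queue.join()                        # wait until all requests drain
worker.join()

print(output_results["img-001"])          # the "prediction" for img-001
```

The point of the pattern is that the client only ever touches the two queues; how many workers consume the input queue (one thread here, a whole cluster in Cluster Serving) is invisible to it.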
The documents in this section help you get started quickly with Cluster Serving.
Key Features Guide
Each guide in this section provides you with in-depth information, concepts and knowledge about Cluster Serving key features.
Cluster Serving Examples and Tutorials