View the runnable example on GitHub

Accelerate TensorFlow Inference using OpenVINO

You can use the InferenceOptimizer.trace(..., accelerator='openvino') API to enable OpenVINO acceleration for TensorFlow inference. It only takes a few lines.

Let’s take an EfficientNetB0 model pretrained on the ImageNet dataset as an example. First, we load the model:

[ ]:
from tensorflow.keras.applications import EfficientNetB0

model = EfficientNetB0(weights='imagenet')

To enable OpenVINO acceleration for your TensorFlow inference, the only change you need to make is to import BigDL-Nano InferenceOptimizer, and trace your TensorFlow model to convert it into an OpenVINO-accelerated module for inference:

[ ]:
import tensorflow as tf
from bigdl.nano.tf.keras import InferenceOptimizer

ov_model = InferenceOptimizer.trace(model,
                                    accelerator="openvino")

📝 Note

When you have a custom model (e.g. one inherited from tf.keras.Model), the parameter input_spec, which should be a (list or tuple of) tf.TensorSpec, is required for the trace function so that the OpenVINO accelerator knows the shape of the model input.
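
For example, here is a minimal sketch of tracing a custom model with input_spec. The custom model and its input shape below are made up purely for illustration:

[ ]:
import tensorflow as tf
from bigdl.nano.tf.keras import InferenceOptimizer

# a hypothetical custom model inherited from tf.keras.Model
class MyCustomModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(10)

    def call(self, inputs):
        return self.dense(inputs)

custom_model = MyCustomModel()
# input_spec tells the OpenVINO accelerator the shape and dtype of the input;
# here we assume batches of 32-dimensional feature vectors
ov_custom_model = InferenceOptimizer.trace(
    custom_model,
    accelerator="openvino",
    input_spec=tf.TensorSpec(shape=(None, 32), dtype=tf.float32)
)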

Please refer to the API documentation for more information on InferenceOptimizer.trace.

You could then run the normal inference steps with the OpenVINO-optimized model:

[ ]:
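# create a dummy input batch: 2 images of shape 224x224 with 3 channels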
x = tf.random.normal(shape=(2, 224, 224, 3))
# use the optimized model here
y_hat = ov_model(x)
predictions = tf.argmax(y_hat, axis=1)
print(predictions)
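
Since EfficientNetB0 is pretrained on ImageNet, you could also map the raw outputs to human-readable labels with Keras’ decode_predictions helper. A short sketch, assuming ov_model returns class probabilities like the original Keras model (the random input above will of course yield meaningless labels):

[ ]:
from tensorflow.keras.applications.efficientnet import decode_predictions

# decode_predictions expects a (batch, 1000) array of ImageNet class
# probabilities and returns the top labels with their scores
for item in decode_predictions(y_hat.numpy(), top=3):
    print(item)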