View the runnable example on GitHub

Save and Load OpenVINO Model#

This example illustrates how to save and load a model accelerated by OpenVINO.

In this example, we use a pretrained ResNet18 model. Then, by calling trace(..., accelerator="openvino"), we can obtain a model accelerated by OpenVINO, provided by BigDL-Nano for inference. By calling save(model=..., path=...), we can save the Nano-optimized model to a folder. By calling load(path=...), we can load the OpenVINO-optimized model back from that folder.

First, prepare the model. We need to load the pretrained ResNet18 model:

[ ]:
import torch
from torchvision.models import resnet18

# load a ResNet18 model pretrained on ImageNet
model_ft = resnet18(pretrained=True)

Accelerate Inference Using OpenVINO#

[ ]:
from bigdl.nano.pytorch import InferenceOptimizer

# trace the model with OpenVINO acceleration; input_sample lets Nano infer the input shape
ov_model = InferenceOptimizer.trace(model_ft,
                                    accelerator="openvino",
                                    input_sample=torch.rand(1, 3, 224, 224))
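
Optionally, you could run a quick inference with the accelerated model before saving it, to verify that the trace succeeded. A minimal sketch (InferenceOptimizer.get_context is also used for inference in the last section of this example):

# optional sanity check: run the OpenVINO-accelerated model once before saving
with InferenceOptimizer.get_context(ov_model):
    dummy_out = ov_model(torch.rand(1, 3, 224, 224))
print(dummy_out.shape)  # expect torch.Size([1, 1000]) for ResNet18 pretrained on ImageNet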

Save Optimized Model#

The model files will be saved in the “./optimized_model_ov” directory.

There are 3 files under optimized_model_ov; users only need the “.bin” and “.xml” files for further usage (see the quick check after the save call below):

  • nano_model_meta.yml: meta information of the saved model checkpoint

  • ov_saved_model.bin: contains the binary data of the model's weights and biases

  • ov_saved_model.xml: model checkpoint for general use; describes the model structure

[ ]:
InferenceOptimizer.save(ov_model, "./optimized_model_ov")
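
If you want to double-check what was written to disk, here is a minimal sketch (assuming it runs from the same working directory as the save call):

import os

# list the files produced by InferenceOptimizer.save
print(sorted(os.listdir("./optimized_model_ov")))
# expected: ['nano_model_meta.yml', 'ov_saved_model.bin', 'ov_saved_model.xml']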

Load the Optimized Model#

[ ]:
loaded_model = InferenceOptimizer.load("./optimized_model_ov")

📝 Note

For a model accelerated by OpenVINO, we save the structure of its network, so the original model is not needed when loading the optimized model.
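
For comparison, models accelerated by other methods generally do require the original model object at load time. A hedged sketch of that case (the path here is hypothetical; this is not needed for OpenVINO models):

# hypothetical: loading a non-OpenVINO optimized model requires passing the original model
# loaded_other_model = InferenceOptimizer.load("./optimized_model_other", model=model_ft)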

Inference with the Loaded Model#

[ ]:
# run inference under the context manager returned by InferenceOptimizer.get_context,
# which applies the proper settings (e.g. thread number) for the loaded model
with InferenceOptimizer.get_context(loaded_model):
    x = torch.rand(2, 3, 224, 224)
    y_hat = loaded_model(x)
    predictions = y_hat.argmax(dim=1)
    print(predictions)
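
Since ResNet18 here is pretrained on ImageNet, y_hat has shape (2, 1000), i.e. one logit per class for each of the 2 input images, and predictions contains the predicted class index for each image.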

📚 Related Readings