View the runnable example on GitHub
Save and Load Optimized JIT Model#
This example illustrates how to save and load a model accelerated by JIT.
In this example, we use a pretrained ResNet18 model. Then, by calling InferenceOptimizer.trace(..., accelerator="jit"), we can obtain a model accelerated by the JIT method. By calling InferenceOptimizer.save(model=..., path=...), we can save the Nano optimized model to a folder. By calling InferenceOptimizer.load(path=...), we can load the JIT optimized model back from that folder.
First, prepare the model. We need to load the pretrained ResNet18 model:
[ ]:
import torch
from torchvision.models import resnet18

# load a ResNet18 pretrained on ImageNet from torchvision
model_ft = resnet18(pretrained=True)
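Since this example only runs inference, it can also help to switch the model to evaluation mode before tracing, so layers such as batch normalization and dropout behave deterministically. This is an optional step shown as a sketch; it is not required by InferenceOptimizer.trace:
[ ]:
# optional: switch BatchNorm/Dropout layers to inference behavior
model_ft.eval()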
Accelerate Inference Using JIT#
[ ]:
from bigdl.nano.pytorch import InferenceOptimizer
jit_model = InferenceOptimizer.trace(model_ft,
                                     accelerator="jit",
                                     input_sample=torch.rand(1, 3, 224, 224))
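As a quick sanity check, you could run the accelerated model once before saving it. This is a minimal sketch that mirrors the inference pattern shown at the end of this example:
[ ]:
# sanity check: run one forward pass with the JIT-accelerated model
with InferenceOptimizer.get_context(jit_model):
    sanity_out = jit_model(torch.rand(1, 3, 224, 224))
print(sanity_out.shape)  # for ResNet18, the expected shape is torch.Size([1, 1000])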
Save Optimized JIT Model#
The model files will be saved to the “./optimized_model_jit” directory.
There are two files under optimized_model_jit; users only need to take the “ckpt.pth” file for further usage:
nano_model_meta.yml: meta information of the saved model checkpoint
ckpt.pth: JIT model checkpoint for general use, describes model structure
[ ]:
# save the accelerated model and its meta information to the given folder
InferenceOptimizer.save(jit_model, "./optimized_model_jit")
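Since ckpt.pth is described above as a JIT checkpoint for general use, it should also be loadable with plain TorchScript outside of Nano. The following is a hedged sketch, assuming the file is a standard torch.jit archive:
[ ]:
import torch
# load the saved checkpoint directly with TorchScript
# (assumption: ckpt.pth is a regular TorchScript archive)
script_model = torch.jit.load("./optimized_model_jit/ckpt.pth")
script_model.eval()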
Load the Optimized Model#
[ ]:
loaded_model = InferenceOptimizer.load("./optimized_model_jit")
📝 Note
For a model accelerated by JIT, we save the structure of its network, so the original model is not needed when loading the optimized model.
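By contrast, for optimized models whose checkpoints store only weights rather than the network structure (for example, a model accelerated by IPEX alone), InferenceOptimizer.load accepts the original model object so the structure can be restored from it. A minimal sketch of that calling pattern, where "./optimized_model_ipex" is a hypothetical path used only for illustration:
[ ]:
# hypothetical path; pass the original model when the checkpoint
# stores weights only and cannot rebuild the network by itself
loaded_ipex_model = InferenceOptimizer.load("./optimized_model_ipex", model=model_ft)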
Inference with the Loaded Model#
[ ]:
with InferenceOptimizer.get_context(loaded_model):
    x = torch.rand(2, 3, 224, 224)
    y_hat = loaded_model(x)
    predictions = y_hat.argmax(dim=1)
    print(predictions)
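To double-check that the loaded model still behaves like the original FP32 model, you can compare their outputs on the same input. A small sketch, assuming a typical float32 tolerance:
[ ]:
# compare the loaded JIT model against the original model
x_check = torch.rand(2, 3, 224, 224)
model_ft.eval()
with torch.no_grad():
    y_original = model_ft(x_check)
with InferenceOptimizer.get_context(loaded_model):
    y_loaded = loaded_model(x_check)
print(torch.allclose(y_original, y_loaded, atol=1e-5))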
📚 Related Readings