# Quantize PyTorch Model for Inference using Intel Neural Compressor
With Intel Neural Compressor (INC) as the quantization engine, you can apply the
`InferenceOptimizer.quantize` API to realize post-training quantization on your PyTorch `nn.Module`.
`InferenceOptimizer.quantize` also supports ONNXRuntime acceleration at the same time through specifying
`accelerator='onnxruntime'`. All acceleration takes only a few lines.
Let's take a pretrained ResNet-18 as an example:

```python
from torchvision.models import resnet18

model = resnet18(pretrained=True)
_, train_dataset, val_dataset = finetune_pet_dataset(model)
```
The full definition of the function `finetune_pet_dataset` can be found in the runnable example.
Then we set the model in evaluation mode:
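A one-line sketch (shown here on a stand-in `nn.Module` so the snippet is self-contained; in this tutorial `model` is the fine-tuned ResNet-18 loaded above):

```python
import torch.nn as nn

# stand-in for the fine-tuned ResNet-18 `model` above
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())
model.eval()  # switch dropout/batchnorm layers to inference behavior
```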
To enable quantization using INC for inference, you could simply import BigDL-Nano `InferenceOptimizer`, and use it to quantize your PyTorch model:
```python
from torch.utils.data import DataLoader

from bigdl.nano.pytorch import InferenceOptimizer

q_model = InferenceOptimizer.quantize(model,
                                      calib_data=DataLoader(train_dataset, batch_size=32))
```
If you want to enable the ONNXRuntime acceleration at the same time, you could just specify the `accelerator` parameter:
```python
from torch.utils.data import DataLoader

from bigdl.nano.pytorch import InferenceOptimizer

q_model = InferenceOptimizer.quantize(model,
                                      accelerator='onnxruntime',
                                      calib_data=DataLoader(train_dataset, batch_size=32))
```
`InferenceOptimizer` will by default quantize your PyTorch
`nn.Module` through static post-training quantization. In this case,
`calib_data` (for calibration data) is required. Batch size is not important to
`calib_data`, as it intends to read 100 samples. And the calibration data may be unlabeled.
If you would like to implement dynamic post-training quantization, you could set the parameter
`approach='dynamic'`. In this case, `calib_data` should be
`None`. Compared to dynamic quantization, static quantization could lead to faster inference as it eliminates the data conversion costs between layers.
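The distinction can be seen with plain PyTorch's dynamic quantization (a different API than `InferenceOptimizer.quantize(..., approach='dynamic')`, but the same idea): weights are quantized ahead of time while activations are quantized on the fly, so no calibration pass is needed:

```python
import torch
from torch.ao.quantization import quantize_dynamic

# a toy float model (stand-in for the ResNet-18 above)
float_model = torch.nn.Sequential(
    torch.nn.Linear(16, 8), torch.nn.ReLU(), torch.nn.Linear(8, 4)
)
# no calibration data: Linear weights become int8, activations are quantized per batch
dq_model = quantize_dynamic(float_model, {torch.nn.Linear}, dtype=torch.qint8)
out = dq_model(torch.randn(2, 16))  # inference works as before
```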
Please refer to API documentation for more information on
You could then do the normal inference steps with the quantized model:
```python
import torch

with InferenceOptimizer.get_context(q_model):
    x = torch.stack([val_dataset[0][0], val_dataset[1][0]])
    # use the quantized model here
    y_hat = q_model(x)
    predictions = y_hat.argmax(dim=1)
    print(predictions)
```