View the runnable example on GitHub

Find Acceleration Method with the Minimum Inference Latency using InferenceOptimizer#

This example illustrates how to apply InferenceOptimizer to quickly find the acceleration method with the minimum inference latency for a trained model, either under specific restrictions or without restrictions. In this example, we first train a ResNet18 model on the cats and dogs dataset. Then, by calling optimize(), we can obtain all available acceleration combinations provided by BigDL-Nano for inference. By calling get_best_model(), we can get the best model under specific restrictions or without restrictions.

First, prepare the model and dataset. In this example, we use a pretrained ResNet18 model and fine-tune it on the cats and dogs dataset.

[ ]:
from torchvision.models import resnet18

model = resnet18(pretrained=True)
_, train_dataset, val_dataset = prepare_model_and_dataset(model, val_size=500)

     The full definition of the function prepare_model_and_dataset can be found in the runnable example; a rough sketch of what it does is shown below for reference.
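
The following is only a minimal sketch of such a helper, not the actual definition from the runnable example: it assumes the cats and dogs images are available locally in an ImageFolder layout, and the data directory, transforms and brief fine-tuning loop are illustrative assumptions.

import torch
from torch import nn
from torch.utils.data import DataLoader, random_split
from torchvision import transforms
from torchvision.datasets import ImageFolder

def prepare_model_and_dataset(model, val_size=500, data_dir="./cats_and_dogs"):
    # Replace the classification head with a 2-class head (cats vs. dogs)
    model.fc = nn.Linear(model.fc.in_features, 2)

    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    dataset = ImageFolder(data_dir, transform=transform)
    train_dataset, val_dataset = random_split(
        dataset, [len(dataset) - val_size, val_size])

    # One brief fine-tuning pass so the new head produces meaningful metrics
    loss_fn = nn.CrossEntropyLoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    model.train()
    for x, y in DataLoader(train_dataset, batch_size=32, shuffle=True):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    model.eval()
    return model, train_dataset, val_dataset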

Obtain available acceleration combinations by optimize#

1. Default search mode#

To find the acceleration method with the minimum inference latency, you could import InferenceOptimizer and call its optimize method. The optimize method will run all possible acceleration combinations and output the result; it takes about 1-2 minutes.

[ ]:
import torch
from torchmetrics.functional.classification import multiclass_accuracy

from bigdl.nano.pytorch import InferenceOptimizer
from torch.utils.data import DataLoader

# Define metric for accuracy calculation
def accuracy(pred, target):
    pred = torch.sigmoid(pred)
    return multiclass_accuracy(pred, target, num_classes=2)

optimizer = InferenceOptimizer()

# To obtain the latency of a single sample, set batch_size=1
train_dataloader = DataLoader(train_dataset, batch_size=1)
val_dataloader = DataLoader(val_dataset)

optimizer.optimize(model=model,
                   training_data=train_dataloader,
                   validation_data=val_dataloader,
                   metric=accuracy,
                   direction="max",
                   thread_num=1,
                   latency_sample_num=100)

The example output of optimizer.optimize is shown below.

 -------------------------------- ---------------------- -------------- ----------------------
|             method             |        status        | latency(ms)  |     metric value     |
 -------------------------------- ---------------------- -------------- ----------------------
|            original            |      successful      |    29.796    |        0.794         |
|              bf16              |      successful      |    16.853    |        0.794         |
|          static_int8           |      successful      |    12.149    |        0.786         |
|         jit_fp32_ipex          |      successful      |    18.647    |        0.794*        |
|  jit_fp32_ipex_channels_last   |      successful      |    21.505    |        0.794*        |
|         jit_bf16_ipex          |      successful      |     9.7      |        0.792         |
|  jit_bf16_ipex_channels_last   |      successful      |     9.84     |        0.792         |
|         openvino_fp32          |      successful      |    24.205    |        0.794*        |
|         openvino_int8          |      successful      |    5.805     |        0.792         |
|        onnxruntime_fp32        |      successful      |    19.792    |        0.794*        |
|    onnxruntime_int8_qlinear    |      successful      |     7.34     |         0.79         |
 -------------------------------- ---------------------- -------------- ----------------------
* means we assume the metric value of the traced model does not change, so we don't recompute metric value to save time.
Optimization cost 94.2s in total.

📝 Note

When specifying the training_data parameter, make sure to set the batch size of the training data to the same batch size you plan to use in the real deployment environment, as the batch size may impact latency (see the short example after this note).

For more information, please refer to the API Documentation.
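
For example, if you expect to serve requests in batches of 32 in deployment, you could build the dataloader passed to optimize with that same batch size (32 here is just an assumed deployment batch size):

# Benchmark with the batch size you plan to use in deployment (assumed to be 32 here)
deploy_dataloader = DataLoader(train_dataset, batch_size=32)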

2. All search mode#

When calling optimize, to keep the running time reasonable, by default we only iterate over the 10 acceleration methods that generally work well, as shown in the table above. However, there are currently 22 acceleration methods in all. If you want to find the globally optimal acceleration method, you can specify search_mode='all' when calling optimize.

[ ]:
optimizer.optimize(model=model,
                   training_data=train_dataloader,
                   validation_data=val_dataloader,
                   metric=accuracy,
                   direction="max",
                   thread_num=1,
                   search_mode='all',
                   latency_sample_num=20)

The example output of optimizer.optimize is shown below.

 -------------------------------- ---------------------- -------------- ----------------------
|             method             |        status        | latency(ms)  |     metric value     |
 -------------------------------- ---------------------- -------------- ----------------------
|            original            |      successful      |    30.457    |        0.794         |
|       fp32_channels_last       |      successful      |    28.973    |        0.794*        |
|           fp32_ipex            |      successful      |    22.663    |        0.794*        |
|    fp32_ipex_channels_last     |      successful      |    22.669    |        0.794*        |
|              bf16              |      successful      |    17.378    |        0.794         |
|       bf16_channels_last       |      successful      |    17.207    |        0.794         |
|           bf16_ipex            |      successful      |    12.634    |        0.792         |
|    bf16_ipex_channels_last     |      successful      |    13.36     |        0.792         |
|          static_int8           |      successful      |    12.317    |        0.786         |
|        static_int8_ipex        |   fail to convert    |     None     |         None         |
|            jit_fp32            |      successful      |    18.114    |        0.794*        |
|     jit_fp32_channels_last     |      successful      |    18.434    |        0.794*        |
|            jit_bf16            |      successful      |    28.988    |        0.794         |
|     jit_bf16_channels_last     |      successful      |    28.907    |        0.794         |
|         jit_fp32_ipex          |      successful      |    18.021    |        0.794*        |
|  jit_fp32_ipex_channels_last   |      successful      |    18.088    |        0.794*        |
|         jit_bf16_ipex          |      successful      |    9.838     |        0.792         |
|  jit_bf16_ipex_channels_last   |      successful      |    10.315    |        0.792         |
|         openvino_fp32          |      successful      |    24.521    |        0.794*        |
|         openvino_int8          |      successful      |    5.774     |        0.794         |
|        onnxruntime_fp32        |      successful      |    19.682    |        0.794*        |
|    onnxruntime_int8_qlinear    |      successful      |    7.726     |         0.79         |
|    onnxruntime_int8_integer    |   fail to convert    |     None     |         None         |
 -------------------------------- ---------------------- -------------- ----------------------
* means we assume the metric value of the traced model does not change, so we don't recompute metric value to save time.
Optimization cost 152.7s in total.

3. Filter acceleration methods#

In some cases, you may just want to test or compare several specific methods. There are two ways to achieve this.

  1. If you only want to test a few methods, you could simply set the includes parameter:

[ ]:
optimizer.optimize(model=model,
                   training_data=train_dataloader,
                   validation_data=val_dataloader,
                   metric=accuracy,
                   direction="max",
                   thread_num=1,
                   includes=["openvino_fp32", "onnxruntime_fp32"],
                   latency_sample_num=100)

The example output of optimizer.optimize is shown below.

 -------------------------------- ---------------------- -------------- ----------------------
|             method             |        status        | latency(ms)  |     metric value     |
 -------------------------------- ---------------------- -------------- ----------------------
|            original            |      successful      |    29.859    |        0.794         |
|         openvino_fp32          |      successful      |    24.334    |        0.794*        |
|        onnxruntime_fp32        |      successful      |    20.872    |        0.794*        |
 -------------------------------- ---------------------- -------------- ----------------------
* means we assume the metric value of the traced model does not change, so we don't recompute metric value to save time.
Optimization cost 22.8s in total.
  2. If you want to test methods with a specific precision / accelerator, or test methods with / without ipex, you could specify the precision / accelerator / use_ipex parameters:

[ ]:
optimizer.optimize(model=model,
                   training_data=train_dataloader,
                   validation_data=val_dataloader,
                   metric=accuracy,
                   direction="max",
                   thread_num=1,
                   accelerator=('openvino', 'jit', None),
                   precision=('fp32', 'bf16'),
                   use_ipex=False,
                   latency_sample_num=100)

The example output of optimizer.optimize is shown below.

 -------------------------------- ---------------------- -------------- ----------------------
|             method             |        status        | latency(ms)  |     metric value     |
 -------------------------------- ---------------------- -------------- ----------------------
|            original            |      successful      |    30.978    |        0.794         |
|       fp32_channels_last       |      successful      |    29.663    |        0.794*        |
|              bf16              |      successful      |    17.12     |        0.794         |
|       bf16_channels_last       |      successful      |    17.709    |        0.794         |
|            jit_fp32            |      successful      |    18.411    |        0.794*        |
|     jit_fp32_channels_last     |      successful      |    18.872    |        0.794*        |
|            jit_bf16            |      successful      |    29.355    |        0.794         |
|     jit_bf16_channels_last     |      successful      |    29.236    |        0.794         |
|         openvino_fp32          |      successful      |    24.312    |        0.794*        |
 -------------------------------- ---------------------- -------------- ----------------------
* means we assume the metric value of the traced model does not change, so we don't recompute metric value to save time.
Optimization cost 60.8s in total.

📝 Note

You must pass a tuple for the accelerator / precision parameters.

In some cases, if you expect that some acceleration methods will not work for your model, will not work well, will run for too long, or will cause exceptions, you could avoid running them by specifying the excludes parameter:

[ ]:
optimizer.optimize(model=model,
                   training_data=train_dataloader,
                   validation_data=val_dataloader,
                   metric=accuracy,
                   direction="max",
                   thread_num=1,
                   excludes=["onnxruntime_int8_qlinear", "openvino_int8"],
                   latency_sample_num=100)

The example output of optimizer.optimize is shown below.

 -------------------------------- ---------------------- -------------- ----------------------
|             method             |        status        | latency(ms)  |     metric value     |
 -------------------------------- ---------------------- -------------- ----------------------
|            original            |      successful      |    31.872    |        0.794         |
|              bf16              |      successful      |    17.326    |        0.794         |
|          static_int8           |      successful      |    12.39     |        0.786         |
|         jit_fp32_ipex          |      successful      |    18.871    |        0.794*        |
|  jit_fp32_ipex_channels_last   |      successful      |    18.453    |        0.794*        |
|         jit_bf16_ipex          |      successful      |    9.863     |        0.792         |
|  jit_bf16_ipex_channels_last   |      successful      |    9.871     |        0.792         |
|         openvino_fp32          |      successful      |    24.585    |        0.794*        |
|        onnxruntime_fp32        |      successful      |    19.452    |        0.794*        |
 -------------------------------- ---------------------- -------------- ----------------------
* means we assume the metric value of the traced model does not change, so we don't recompute metric value to save time.
Optimization cost 53.2s in total.

4. Disable validation during optimization#

If you can’t get a corresponding validation dataloader for your model, or you don’t care about the possible accuracy drop, you could omit the validation_data, metric and direction parameters to disable validation:

[ ]:
optimizer.optimize(model=model,
                   training_data=train_dataloader,
                   thread_num=1,
                   latency_sample_num=100)

The example output of optimizer.optimize is shown below.

 -------------------------------- ---------------------- --------------
|             method             |        status        | latency(ms)  |
 -------------------------------- ---------------------- --------------
|            original            |      successful      |    29.387    |
|              bf16              |      successful      |    16.657    |
|          static_int8           |      successful      |    12.323    |
|         jit_fp32_ipex          |      successful      |    18.645    |
|  jit_fp32_ipex_channels_last   |      successful      |    18.478    |
|         jit_bf16_ipex          |      successful      |    9.964     |
|  jit_bf16_ipex_channels_last   |      successful      |    9.993     |
|         openvino_fp32          |      successful      |    23.547    |
|         openvino_int8          |      successful      |    5.711     |
|        onnxruntime_fp32        |      successful      |    20.283    |
|    onnxruntime_int8_qlinear    |      successful      |    7.141     |
 -------------------------------- ---------------------- --------------
Optimization cost 49.9s in total.

5. More flexible input format#

optimize can accept not only a DataLoader, but also a Tensor or a tuple of Tensors as input, as they will be automatically turned into a DataLoader internally.

📝 Note

This feature is mainly aimed at users who cannot obtain the corresponding dataloader, and at helping users debug.

If you want to maximize the accuracy of the quantized model, please pass in the original training / validation DataLoader whenever possible.

[ ]:
sample = next(iter(train_dataloader))

optimizer.optimize(model=model,
                   training_data=sample,
                   thread_num=1,
                   latency_sample_num=100)
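
For instance, if no dataset is at hand at all, a single fake input tensor could also be used, since validation is disabled here anyway; the 1x3x224x224 shape below is the standard ResNet18 input shape and is just an illustrative assumption:

import torch

# A single dummy sample; optimize() wraps it into a DataLoader internally
fake_input = torch.rand(1, 3, 224, 224)

optimizer.optimize(model=model,
                   training_data=fake_input,
                   thread_num=1,
                   latency_sample_num=100)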

Obtain specific model#

You could call the get_best_model method to obtain the best model under specific restrictions or without restrictions. Here we get the model with minimal latency whose accuracy drop is less than 5%.

[5]:
optimizer.optimize(model=model,
                   training_data=train_dataloader,
                   validation_data=val_dataloader,
                   metric=accuracy,
                   direction="max",
                   thread_num=1,
                   latency_sample_num=100)

acc_model, option = optimizer.get_best_model(accuracy_criterion=0.05)
print("When accuracy drop less than 5%, the model with minimal latency is: ", option)
When accuracy drop less than 5%, the model with minimal latency is:  openvino + int8

📝 Note

If you want to find the best model with the accuracy_criterion parameter, make sure you have called optimize with validation data.
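
Besides accuracy_criterion, get_best_model can also restrict the search to a given accelerator or precision. The snippet below is only a sketch that assumes the accelerator / precision keyword arguments described in the API documentation; check your BigDL-Nano version if they differ:

# Sketch: assumes get_best_model supports an accelerator restriction
ov_model, option = optimizer.get_best_model(accelerator="openvino")
print("The model with minimal latency among OpenVINO methods is: ", option)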

If you want to obtain a specific model even though it doesn’t have the minimal latency, you could call the get_model method and specify method_name. Here we take openvino_fp32 as an example:

[ ]:
openvino_model = optimizer.get_model(method_name='openvino_fp32')

Inference#

Then you could use the obtained model for inference.

[6]:
with InferenceOptimizer.get_context(acc_model):
    x = next(iter(train_dataloader))[0]
    output = acc_model(x)

📝 Note

For all models optimized by InferenceOptimizer.optimize, you need to wrap the inference steps in the automatic context manager InferenceOptimizer.get_context(model=...) provided by Nano. You could refer to here for more detailed usage of the context manager.

Export model#

To export the obtained model, you could simply call the InferenceOptimizer.save method and pass the path to it.

[7]:
save_dir = "./best_model"
InferenceOptimizer.save(acc_model, save_dir)

The model files will be saved in the ./best_model directory. Depending on the type of the obtained model, you only need to take the following files for further usage.

  • OpenVINO

    ov_saved_model.bin: Contains the binary weight and bias data of the model

    ov_saved_model.xml: Model checkpoint for general use, describes the model structure

  • onnxruntime

    onnx_saved_model.onnx: Model checkpoint for general use, describes the model structure

  • int8

    best_model.pt: The model optimized by Intel® Neural Compressor

  • ipex | channels_last | jit | bf16

    ckpt.pt: If jit is in the option, it stores the model optimized using just-in-time compilation; otherwise, it stores the original model weights saved by torch.save(model.state_dict()).

  • Others

    saved_weight.pt: Saved by torch.save(model.state_dict()).
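
To load the exported model back for inference later, you could use the InferenceOptimizer.load method. The sketch below assumes the load(path, model=...) signature from the BigDL-Nano API; for methods that only store weights (e.g. bf16 / ipex / channels_last), the original FP32 model object is needed, so it is passed in as well.

from bigdl.nano.pytorch import InferenceOptimizer

# Load the saved model back from ./best_model; pass the original model in case
# only the weights were stored for the chosen acceleration method
loaded_model = InferenceOptimizer.load("./best_model", model=model)

with InferenceOptimizer.get_context(loaded_model):
    output = loaded_model(x)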