Nano How-to Guides#
Note: This page is still a work in progress; we are adding more guides.
In Nano How-to Guides, you can expect to find task-oriented, bite-sized, and executable examples that show how BigDL-Nano can help you accomplish various tasks smoothly.
Preprocessing Optimization#
PyTorch#
Training Optimization#
PyTorch Lightning#
PyTorch#
How to convert your PyTorch training loop to use TorchNano for acceleration
How to accelerate your PyTorch training loop with the @nano decorator
How to accelerate a PyTorch application on training workloads through Intel® Extension for PyTorch*
How to accelerate a PyTorch application on training workloads through multiple instances
How to use the channels last memory format in your PyTorch application for training
How to conduct BFloat16 Mixed Precision training in your PyTorch application
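The last two guides above cover the channels-last memory format and BFloat16 mixed precision. As a rough illustration of the underlying techniques (not BigDL-Nano's own API, which the guides document), here is what they look like in plain PyTorch:

```python
# A minimal plain-PyTorch sketch of channels-last and BFloat16 autocast;
# the guides above show how BigDL-Nano enables these for you.
import torch
import torch.nn as nn

model = nn.Conv2d(3, 8, kernel_size=3, padding=1)
# Convert the model and inputs to the channels-last (NHWC) memory format.
model = model.to(memory_format=torch.channels_last)
x = torch.randn(4, 3, 16, 16).to(memory_format=torch.channels_last)
target = torch.randn(4, 8, 16, 16)

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

# One training step with the forward pass under BFloat16 autocast on CPU.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    loss = loss_fn(model(x), target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Channels-last tends to help convolution-heavy models on recent Intel CPUs, and BFloat16 keeps FP32's dynamic range while halving memory traffic.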
TensorFlow#
How to accelerate a TensorFlow Keras application on training workloads through multiple instances
How to optimize your model with a sparse Embedding layer and SparseAdam optimizer
How to conduct BFloat16 Mixed Precision training in your TensorFlow Keras application
How to accelerate a TensorFlow Keras customized training loop through multiple instances
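One of the guides above covers BFloat16 mixed precision for Keras. As a rough sketch of the underlying idea in stock TensorFlow (the guide documents BigDL-Nano's own way of enabling it), Keras can run compute in BFloat16 while keeping variables in float32 via a global dtype policy:

```python
# A minimal TensorFlow Keras sketch of BFloat16 mixed precision on a tiny
# embedding model; the guides above show BigDL-Nano's own utilities.
import numpy as np
import tensorflow as tf

# Compute in bfloat16, keep trainable variables in float32.
tf.keras.mixed_precision.set_global_policy("mixed_bfloat16")

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=100, output_dim=8),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

x = np.random.randint(0, 100, size=(32, 10))
y = np.random.rand(32, 1).astype("float32")
history = model.fit(x, y, epochs=1, verbose=0)

# Restore the default policy so the global state does not leak.
tf.keras.mixed_precision.set_global_policy("float32")
```

The sparse-Embedding/SparseAdam guide above addresses a complementary cost: updating only the embedding rows actually seen in a batch instead of the whole table.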
General#
Inference Optimization#
OpenVINO#
PyTorch#
How to find the accelerated method with minimal latency using InferenceOptimizer
How to accelerate a PyTorch inference pipeline through ONNXRuntime
How to accelerate a PyTorch inference pipeline through OpenVINO
How to accelerate a PyTorch inference pipeline through JIT/IPEX
How to quantize your PyTorch model in INT8 for inference using Intel Neural Compressor
How to enable automatic context management for PyTorch inference on Nano-optimized models
How to accelerate a PyTorch inference pipeline through multiple instances
How to accelerate a PyTorch inference pipeline using Intel ARC series dGPU
How to accelerate PyTorch inference using async multi-stage pipeline