Nano How-to Guides
=========================

.. note::
    This page is still a work in progress. We are adding more guides.

In Nano How-to Guides, you can expect to find task-oriented, bite-sized, and executable examples. These examples show you how BigDL-Nano can help you accomplish various tasks smoothly.

Preprocessing Optimization
---------------------------

PyTorch
~~~~~~~~~~~~~~~~~~~~~~~~~

* `How to accelerate a computer vision data processing pipeline `_

Training Optimization
-------------------------

PyTorch Lightning
~~~~~~~~~~~~~~~~~~~~~~~~~

* `How to accelerate a PyTorch Lightning application on training workloads through Intel® Extension for PyTorch* `_
* `How to accelerate a PyTorch Lightning application on training workloads through multiple instances `_
* `How to use the channels last memory format in your PyTorch Lightning application for training `_
* `How to conduct BFloat16 Mixed Precision training in your PyTorch Lightning application `_

PyTorch
~~~~~~~~~~~~~~~~~~~~~~~~~

* |convert_pytorch_training_torchnano|_
* |use_nano_decorator_pytorch_training|_
* `How to accelerate a PyTorch application on training workloads through Intel® Extension for PyTorch* `_
* `How to accelerate a PyTorch application on training workloads through multiple instances `_
* `How to use the channels last memory format in your PyTorch application for training `_
* `How to conduct BFloat16 Mixed Precision training in your PyTorch application `_

.. |use_nano_decorator_pytorch_training| replace:: How to accelerate your PyTorch training loop with ``@nano`` decorator
.. _use_nano_decorator_pytorch_training: Training/PyTorch/use_nano_decorator_pytorch_training.html
.. |convert_pytorch_training_torchnano| replace:: How to convert your PyTorch training loop to use ``TorchNano`` for acceleration
.. _convert_pytorch_training_torchnano: Training/PyTorch/convert_pytorch_training_torchnano.html

TensorFlow
~~~~~~~~~~~~~~~~~~~~~~~~~

* `How to accelerate a TensorFlow Keras application on training workloads through multiple instances `_
* |tensorflow_training_embedding_sparseadam_link|_
* `How to conduct BFloat16 Mixed Precision training in your TensorFlow application `_

.. |tensorflow_training_embedding_sparseadam_link| replace:: How to optimize your model with a sparse ``Embedding`` layer and ``SparseAdam`` optimizer
.. _tensorflow_training_embedding_sparseadam_link: Training/TensorFlow/tensorflow_training_embedding_sparseadam.html

General
~~~~~~~~~~~~~~~~~~~~~~~~~

* `How to choose the number of processes for multi-instance training `_

Inference Optimization
-------------------------

OpenVINO
~~~~~~~~~~~~~~~~~~~~~~~~~

* `How to run inference on OpenVINO model `_
* `How to run asynchronous inference on OpenVINO model `_
* `How to accelerate a PyTorch / TensorFlow inference pipeline on Intel GPUs through OpenVINO `_

PyTorch
~~~~~~~~~~~~~~~~~~~~~~~~~

* `How to accelerate a PyTorch inference pipeline through ONNXRuntime `_
* `How to accelerate a PyTorch inference pipeline through OpenVINO `_
* `How to accelerate a PyTorch inference pipeline through JIT/IPEX `_
* `How to accelerate a PyTorch inference pipeline through multiple instances `_
* `How to quantize your PyTorch model for inference using Intel Neural Compressor `_
* `How to quantize your PyTorch model for inference using OpenVINO Post-training Optimization Tools `_
* |pytorch_inference_context_manager_link|_
* `How to save and load optimized IPEX model `_
* `How to save and load optimized JIT model `_
* `How to save and load optimized ONNXRuntime model `_
* `How to save and load optimized OpenVINO model `_
* `How to find the accelerated method with minimal latency using InferenceOptimizer `_

.. |pytorch_inference_context_manager_link| replace:: How to use context manager through ``get_context``
.. _pytorch_inference_context_manager_link: Inference/PyTorch/pytorch_context_manager.html

TensorFlow
~~~~~~~~~~~~~~~~~~~~~~~~~

* `How to accelerate a TensorFlow inference pipeline through ONNXRuntime `_
* `How to accelerate a TensorFlow inference pipeline through OpenVINO `_
* `How to save and load optimized ONNXRuntime model in TensorFlow `_
* `How to save and load optimized OpenVINO model in TensorFlow `_

Install
-------------------------

* `How to install BigDL-Nano in Google Colab `_
* `How to install BigDL-Nano on Windows `_