
Site Navigation

  • User guide
  • Powered by
  • Orca
  • Nano
  • DLlib
  • Chronos
  • Friesian
  • PPML
  • Contributor guide
  • Cluster serving
  • Presentations
  • Blogs

Section Navigation

  • BigDL-Nano Document
  • Nano in 5 minutes
  • Installation
  • Key Features
    • PyTorch Training
    • PyTorch Inference
    • PyTorch CUDA Patch
    • TensorFlow Training
    • TensorFlow Inference
    • AutoML
  • Tutorials
    • BigDL-Nano PyTorch Trainer Quickstart
    • BigDL-Nano PyTorch TorchNano Quickstart
    • BigDL-Nano PyTorch ONNXRuntime Acceleration Quickstart
    • BigDL-Nano PyTorch OpenVINO Acceleration Quickstart
    • BigDL-Nano PyTorch Quantization with ONNXRuntime accelerator Quickstart
    • BigDL-Nano PyTorch Quantization with INC Quickstart
    • BigDL-Nano PyTorch Quantization with POT Quickstart
    • BigDL-Nano TensorFlow Training Quickstart
    • BigDL-Nano TensorFlow SparseEmbedding and SparseAdam
    • BigDL-Nano TensorFlow Quantization Quickstart
  • How-to Guides
    • Accelerate Computer Vision Data Processing Pipeline
    • Accelerate PyTorch Lightning Training using Intel® Extension for PyTorch*
    • Accelerate PyTorch Lightning Training using Multiple Instances
    • Use Channels Last Memory Format in PyTorch Lightning Training
    • Use BFloat16 Mixed Precision for PyTorch Lightning Training
    • Convert PyTorch Training Loop to Use TorchNano
    • Use @nano Decorator to Accelerate PyTorch Training Loop
    • Accelerate PyTorch Training using Intel® Extension for PyTorch*
    • Accelerate PyTorch Training using Multiple Instances
    • Use Channels Last Memory Format in PyTorch Training
    • Use BFloat16 Mixed Precision for PyTorch Training
    • Accelerate TensorFlow Keras Training using Multiple Instances
    • Apply SparseAdam Optimizer for Large Embeddings
    • Use BFloat16 Mixed Precision for TensorFlow Keras Training
    • Choose the Number of Processes for Multi-Instance Training
    • OpenVINO Inference using Nano API
    • OpenVINO Asynchronous Inference using Nano API
    • Accelerate Inference on Intel GPUs Using OpenVINO
    • Accelerate PyTorch Inference using ONNXRuntime
    • Accelerate PyTorch Inference using OpenVINO
    • Accelerate PyTorch Inference using JIT/IPEX
    • Accelerate PyTorch Inference using Multiple Instances
    • Quantize PyTorch Model for Inference using Intel Neural Compressor
    • Quantize PyTorch Model for Inference using OpenVINO Post-training Optimization Tools
    • Automatic inference context management by get_context
    • Save and Load Optimized IPEX Model
    • Save and Load Optimized JIT Model
    • Save and Load ONNXRuntime Model
    • Save and Load OpenVINO Model
    • Find Acceleration Method with the Minimum Inference Latency using InferenceOptimizer
    • Accelerate TensorFlow Inference using ONNXRuntime
    • Accelerate TensorFlow Inference using OpenVINO
    • Save and Load ONNXRuntime Model in TensorFlow
    • Save and Load OpenVINO Model in TensorFlow
    • Install BigDL-Nano in Google Colab
    • Install BigDL-Nano on Windows
  • Tips and Known Issues
  • Troubleshooting Guide
  • API Reference
    • Nano PyTorch API
    • Nano TensorFlow API
    • Nano HPO API

Nano Key Features#

  • PyTorch Training

  • PyTorch Inference

  • PyTorch CUDA Patch

  • TensorFlow Training

  • TensorFlow Inference

  • AutoML


© Copyright 2020, BigDL Authors.

Created using Sphinx 4.5.0.