Accelerate PyTorch Lightning Training using Intel® Extension for PyTorch*
The bigdl.nano.pytorch.Trainer API extends the PyTorch Lightning Trainer with multiple integrated optimizations. You can instantiate a BigDL-Nano Trainer with use_ipex=True to apply Intel® Extension for PyTorch* (also known as IPEX) for an extra performance boost on Intel hardware.
Before starting your PyTorch Lightning application, it is highly recommended to run source bigdl-nano-init to set several environment variables based on your current hardware. Empirically, these variables bring a significant performance increase for most PyTorch Lightning training workloads.
Suppose you have defined a LightningModule and created the training and validation dataloaders:

model = MyLightningModule()
train_loader, val_loader = create_dataloaders()
The definitions of MyLightningModule and create_dataloaders can be found in the runnable example.
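For a self-contained starting point, below is a minimal sketch of what MyLightningModule and create_dataloaders might look like; the tiny classifier and the random stand-in data are illustrative assumptions, not the definitions used in the runnable example:

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl

class MyLightningModule(pl.LightningModule):
    # A small placeholder classifier; the real example uses its own model definition.
    def __init__(self, num_classes=10):
        super().__init__()
        self.model = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 32 * 32, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )
        self.loss_fn = nn.CrossEntropyLoss()

    def forward(self, x):
        return self.model(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = self.loss_fn(self(x), y)
        self.log("train_loss", loss)
        return loss

    def validation_step(self, batch, batch_idx):
        x, y = batch
        loss = self.loss_fn(self(x), y)
        self.log("val_loss", loss)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

def create_dataloaders(batch_size=32):
    # Random tensors stand in for a real dataset in this sketch.
    train_ds = TensorDataset(torch.randn(256, 3, 32, 32), torch.randint(0, 10, (256,)))
    val_ds = TensorDataset(torch.randn(64, 3, 32, 32), torch.randint(0, 10, (64,)))
    train_loader = DataLoader(train_ds, batch_size=batch_size, shuffle=True)
    val_loader = DataLoader(val_ds, batch_size=batch_size)
    return train_loader, val_loader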
To use IPEX for better performance, you could simply import the BigDL-Nano Trainer and set use_ipex to True:

from bigdl.nano.pytorch import Trainer

trainer = Trainer(max_epochs=5, use_ipex=True)
You could then do the normal training (and evaluation) steps with the IPEX-accelerated trainer:
trainer.fit(model, train_dataloaders=train_loader)
trainer.validate(model, dataloaders=val_loader)
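If you want to estimate how much IPEX helps on your own workload, one simple approach is to time the same fit call with and without use_ipex. The small timing harness below is a sketch (time_fit is a hypothetical helper, not part of the BigDL-Nano API), and it reuses the MyLightningModule and create_dataloaders names from above:

import time
from bigdl.nano.pytorch import Trainer

def time_fit(use_ipex):
    # Hypothetical helper: train one epoch and return the wall-clock time.
    model = MyLightningModule()
    train_loader, _ = create_dataloaders()
    trainer = Trainer(max_epochs=1, use_ipex=use_ipex)
    start = time.perf_counter()
    trainer.fit(model, train_dataloaders=train_loader)
    return time.perf_counter() - start

baseline_time = time_fit(use_ipex=False)
ipex_time = time_fit(use_ipex=True)
print(f"default: {baseline_time:.1f}s, IPEX: {ipex_time:.1f}s")

Note that the observed gain depends on your Intel hardware and on whether you have sourced bigdl-nano-init before launching the script.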
📚 Related Readings