View the runnable example on GitHub
Accelerate PyTorch Lightning Training using Intel® Extension for PyTorch*
The bigdl.nano.pytorch.Trainer API extends PyTorch Lightning Trainer with multiple integrated optimizations. You can instantiate a BigDL-Nano Trainer with use_ipex=True to apply Intel® Extension for PyTorch* (also known as IPEX) for an extra performance boost on Intel hardware.
📝 Note
Before starting your PyTorch Lightning application, it is highly recommended to run source bigdl-nano-init to set several environment variables based on your current hardware. Empirically, these variables bring significant performance improvements for most PyTorch Lightning training workloads.
Let’s take a self-defined LightningModule (based on a ResNet-18 model pretrained on the ImageNet dataset) and dataloaders to fine-tune the model on the OxfordIIITPet dataset as an example:
model = MyLightningModule()
train_loader, val_loader = create_dataloaders()
The definitions of MyLightningModule and create_dataloaders can be found in the runnable example.
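For context, here is a minimal sketch of what these two pieces might look like; the actual definitions live in the runnable example, and the hyperparameters, transforms, and metric names below are illustrative assumptions:

import torch
from torch import nn
from torch.utils.data import DataLoader, random_split
import pytorch_lightning as pl
from torchvision import transforms
from torchvision.datasets import OxfordIIITPet
from torchvision.models import resnet18

class MyLightningModule(pl.LightningModule):
    def __init__(self, num_classes=37):  # OxfordIIITPet has 37 classes
        super().__init__()
        # start from an ImageNet-pretrained ResNet-18 and replace the classifier head
        self.model = resnet18(pretrained=True)
        self.model.fc = nn.Linear(self.model.fc.in_features, num_classes)
        self.criterion = nn.CrossEntropyLoss()

    def forward(self, x):
        return self.model(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = self.criterion(self(x), y)
        self.log("train_loss", loss)
        return loss

    def validation_step(self, batch, batch_idx):
        x, y = batch
        self.log("val_loss", self.criterion(self(x), y))

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=1e-2, momentum=0.9)

def create_dataloaders(batch_size=32):
    # standard ImageNet-style preprocessing (illustrative choice)
    transform = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    dataset = OxfordIIITPet(root="data", transform=transform, download=True)
    # hold out 600 samples for validation (arbitrary split size)
    train_set, val_set = random_split(dataset, [len(dataset) - 600, 600])
    return (DataLoader(train_set, batch_size=batch_size, shuffle=True),
            DataLoader(val_set, batch_size=batch_size))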
To use IPEX for better performance, you could simply import the BigDL-Nano Trainer and set use_ipex to True.
from bigdl.nano.pytorch import Trainer
trainer = Trainer(max_epochs=5, use_ipex=True)
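Since BigDL-Nano's Trainer extends the PyTorch Lightning Trainer, standard Lightning arguments should still work alongside use_ipex. For instance, a sketch combining it with an early-stopping callback (this assumes your LightningModule logs a val_loss metric):

from pytorch_lightning.callbacks import EarlyStopping

trainer = Trainer(max_epochs=5, use_ipex=True,
                  callbacks=[EarlyStopping(monitor="val_loss")])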
You could then do the normal training (and evaluation) steps with the IPEX-accelerated trainer:
trainer.fit(model, train_dataloaders=train_loader)
trainer.validate(model, dataloaders=val_loader)
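As in standard PyTorch Lightning, trainer.validate returns a list of metric dictionaries (one per validation dataloader), so you could capture the results for inspection; the val_loss key below assumes your LightningModule logs that metric:

results = trainer.validate(model, dataloaders=val_loader)
print(results[0]["val_loss"])  # metrics logged during validation_step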
📚 Related Readings