View the runnable example on GitHub

Use Channels Last Memory Format in PyTorch Training#

TorchNano (bigdl.nano.pytorch.TorchNano) supports storing models and tensors in the channels last memory format, i.e. NHWC (batch size, height, width, channels), instead of the classic contiguous NCHW order. We provide 2 ways to enable this: A) subclass TorchNano or B) use the @nano decorator. You can choose whichever suits your (preferred) code structure.
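If you are new to this memory format, the short plain-PyTorch sketch below shows what the conversion means: the logical NCHW shape is unchanged, only the underlying memory layout (the strides) is reordered. With TorchNano you do not write this yourself; setting channels_last=True takes care of the conversion for you.

import torch

# a 4-D tensor is allocated in contiguous NCHW order by default
x = torch.randn(32, 3, 224, 224)
print(x.is_contiguous())                                    # True
print(x.is_contiguous(memory_format=torch.channels_last))   # False

# converting to channels last keeps the logical (N, C, H, W) shape
# and only rearranges how the data is laid out in memory
x = x.to(memory_format=torch.channels_last)
print(x.shape)                                              # torch.Size([32, 3, 224, 224])
print(x.is_contiguous(memory_format=torch.channels_last))   # True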

📝 Note

Before starting your PyTorch application, it is highly recommended to run source bigdl-nano-init to set several environment variables based on your current hardware. Empirically, these variables will greatly improve performance for most PyTorch applications on training workloads.

A) Subclass TorchNano#

In general, two steps are required if you choose to subclass TorchNano:

  1. import and subclass TorchNano, and override its train() method

  2. instantiate it with channels_last=True, then call the train() method

For step 1, you can refer to this page (for consistency, we use the same model and dataset as an example). Supposing that you already have a well-defined subclass MyNano, the line below instantiates it with the channels last memory format enabled and calls its train() method.

[ ]:
MyNano(channels_last=True).train()

      The detailed definition of MyNano can be found in the runnable example.
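If you have not written the subclass yet, here is a minimal sketch of the pattern, with a toy model and synthetic dataset standing in for the model and data used in the runnable example (they are placeholders, not the actual MyNano definition). The key points are overriding train() and routing the model, optimizer and dataloader through self.setup(); with channels_last=True at instantiation, TorchNano handles the NHWC conversion there, so no changes to the module or dataloaders are needed.

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from bigdl.nano.pytorch import TorchNano

class MyNano(TorchNano):
    def train(self):
        # toy stand-ins for the model and dataset used in the runnable example
        model = nn.Sequential(nn.Conv2d(3, 8, kernel_size=3), nn.ReLU(),
                              nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                              nn.Linear(8, 10))
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
        dataset = TensorDataset(torch.randn(64, 3, 32, 32),
                                torch.randint(0, 10, (64,)))
        train_loader = DataLoader(dataset, batch_size=16)
        loss_func = nn.CrossEntropyLoss()

        # hand everything to TorchNano; with channels_last=True the model and
        # batches are prepared in channels last (NHWC) format here
        model, optimizer, train_loader = self.setup(model, optimizer, train_loader)

        model.train()
        for epoch in range(2):
            for data, target in train_loader:
                optimizer.zero_grad()
                loss = loss_func(model(data), target)
                self.backward(loss)  # use self.backward(loss) instead of loss.backward()
                optimizer.step()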

📝 Note

Channels last memory format (NHWC) is currently only implemented for 4-dimensional NCHW tensors.

To use the channels last memory format, there is no need to modify your torch.nn.Module or dataloaders; the only change to make is setting channels_last=True.

B) Use @nano decorator#

The @nano decorator is very convenient: if you have already defined a PyTorch training function that takes a model, optimizers, and dataloaders as parameters, you only need to add 2 new lines (import the decorator and wrap the training function) to enjoy the features brought by BigDL-Nano. You can learn its usage and notes from here. The only difference when using the channels last memory format is that you should specify the decorator as @nano(channels_last=True).

[ ]:
from tqdm import tqdm
from bigdl.nano.pytorch import nano # import nano decorator

@nano(channels_last=True) # apply the decorator to the training loop
def training_loop(model, optimizer, train_loader, num_epochs, loss_func):

    for epoch in range(num_epochs):

        model.train()
        train_loss, num = 0, 0
        with tqdm(train_loader, unit="batch") as tepoch:
            for data, target in tepoch:
                tepoch.set_description(f"Epoch {epoch}")
                optimizer.zero_grad()
                output = model(data)
                loss = loss_func(output, target)
                loss.backward()
                optimizer.step()
                loss_value = loss.sum()
                train_loss += loss_value
                num += 1
                tepoch.set_postfix(loss=loss_value)
            print(f'Train Epoch: {epoch}, avg_loss: {train_loss / num}')

      A runnable example including this training_loop can be found here.
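To make the snippet above easier to try out, here is a hedged usage sketch that calls the decorated training_loop with a toy model and synthetic data (both are illustrative placeholders; the runnable example builds its own model and dataloader):

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# toy model and synthetic data purely for illustration
model = nn.Sequential(nn.Conv2d(3, 8, kernel_size=3), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
train_loader = DataLoader(TensorDataset(torch.randn(64, 3, 32, 32),
                                        torch.randint(0, 10, (64,))),
                          batch_size=16)

# the decorator prepares the model, optimizer and dataloader (including the
# channels last conversion) before the wrapped function body runs
training_loop(model, optimizer, train_loader, num_epochs=2,
              loss_func=nn.CrossEntropyLoss())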
