Nano HPO API

Search Space

class bigdl.nano.automl.hpo.space.Categorical(*data, prefix=None)[source]

Examples.

>>> a = space.Categorical('a', 'b', 'c', 'd')
>>> b = space.Categorical('resnet50', AutoObj())

Nested search space for hyperparameters which are categorical.

Such a hyperparameter takes one value out of the discrete set of provided options. The first value in the list of options will be the default value that gets tried first during HPO.

Parameters
  • data – search space or python built-in objects. The first value will be the default value tried first during HPO, e.g. space.Dict(hp1=space.Int(1, 2), hp2=space.Int(4, 5)).

  • prefix – string (optional). This is useful for distinguishing the same hyperparameter in the same layer when a layer is used more than once in the model. Defaults to None.
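
For illustration, a minimal sketch of passing a Categorical space where a plain value would normally go. Attaching it to a keras layer as shown requires an HPO-enabled layer and is an assumed usage pattern, not something stated in this reference:

>>> import bigdl.nano.automl.hpo.space as space
>>> # 'relu' is listed first, so it is the default tried first during HPO
>>> activation = space.Categorical('relu', 'tanh', 'sigmoid')
>>> # assumed usage inside an HPO-enabled model definition
>>> layer = Dense(units=16, activation=activation)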

class bigdl.nano.automl.hpo.space.Real(lower, upper, default=None, log=False, prefix=None)[source]

Examples.

>>> learning_rate = space.Real(0.01, 0.1, log=True)

Search space for a numeric hyperparameter that takes continuous values.

Parameters
  • lower – a float. The lower bound of the search space (minimum possible value of hyperparameter)

  • upper – a float. The upper bound of the search space (maximum possible value of hyperparameter)

  • default – a float (optional). Default value tried first during hyperparameter optimization

  • log – boolean (optional). whether to search the values on a logarithmic rather than linear scale. This is useful for numeric hyperparameters (such as learning rates) whose search space spans many orders of magnitude.

  • prefix – string (optional). This is useful for distinguishing the same hyperparameter in the same layer when a layer is used more than once in the model. Defaults to None.
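
A short sketch combining the default and log options described above:

>>> import bigdl.nano.automl.hpo.space as space
>>> # values between 1e-4 and 1e-1 are searched on a log scale; 1e-3 is tried first
>>> learning_rate = space.Real(1e-4, 1e-1, default=1e-3, log=True)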

class bigdl.nano.automl.hpo.space.Int(lower, upper, default=None, prefix=None)[source]

Examples.

>>> range = space.Int(0, 100)

Search space for a numeric hyperparameter that takes integer values.

Parameters
  • lower – int. The lower bound of the search space (minimum possible value of hyperparameter)

  • upper – int. The upper bound of the search space (maximum possible value of hyperparameter)

  • default – int (optional). Default value tried first during hyperparameter optimization

  • prefix – string (optional). This is useful for distinguishing the same hyperparameter in the same layer when a layer is used more than once in the model. Defaults to None.
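
A short sketch showing the default option; using such a space for the width of a layer is an assumed usage, not part of this reference:

>>> import bigdl.nano.automl.hpo.space as space
>>> # integers in [32, 512] are searched; 128 is tried first
>>> hidden_units = space.Int(32, 512, default=128)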

class bigdl.nano.automl.hpo.space.Bool(default=None, prefix=None)[source]

Examples.

>>> pretrained = space.Bool()

Search space for a hyperparameter that is either True or False.

space.Bool() serves as shorthand for: space.Categorical(True, False)
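
The shorthand can be written out explicitly, e.g. for a flag such as pretrained:

>>> import bigdl.nano.automl.hpo.space as space
>>> pretrained = space.Bool()
>>> # equivalent explicit form
>>> pretrained = space.Categorical(True, False)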

HPO for Tensorflow

bigdl.nano.automl.tf.keras.Model

class bigdl.nano.automl.tf.keras.Model.Model(**kwargs)[source]

A tf.keras.Model with HPO capabilities.

Initializer.

search(resume=False, target_metric=None, n_parallels=1, target_metric_mode='last', **kwargs)

Run the hyperparameter search.

Parameters
  • resume – bool, optional. whether to resume the previous tuning. Defaults to False.

  • target_metric – str, optional. the target metric to optimize. Defaults to “accuracy”.

  • n_parallels – number of parallel processes to run trials.

  • target_metric_mode – which epoch's target metric to report as the final result. Possible options are:

    'max': maximum value of all epochs' results
    'min': minimum value of all epochs' results
    'last': result of the last epoch
    'auto': use max mode if direction is maximize, min mode if direction is minimize

  • kwargs – model.fit arguments (e.g. batch_size, validation_data, etc.) and search backend arguments (e.g. n_trials, pruner, etc.) are allowed in kwargs.

search_summary()

Retrieve a summary of trials.

Returns

A summary of all the trials. Currently the entire study is returned to allow more flexibility for further analysis and visualization.
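
As a hedged end-to-end sketch: the enable_hpo_tf() call, the layer construction with spaces, and the exact backend kwargs (n_trials, direction) are assumptions based on the typical BigDL-Nano HPO workflow and the parameter descriptions above; only search() and search_summary() are documented here.

    import numpy as np
    import tensorflow as tf

    import bigdl.nano.automl as nano_automl
    nano_automl.hpo_config.enable_hpo_tf()          # assumed: enables HPO-aware tf.keras layers

    import bigdl.nano.automl.hpo.space as space
    from bigdl.nano.automl.tf.keras import Model

    # functional-API model whose Dense width is a search space (assumed layer behavior)
    inputs = tf.keras.Input(shape=(784,))
    x = tf.keras.layers.Dense(units=space.Int(32, 128), activation='relu')(inputs)
    outputs = tf.keras.layers.Dense(10, activation='softmax')(x)
    model = Model(inputs=inputs, outputs=outputs)
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

    # dummy data, for illustration only
    x_train = np.random.rand(256, 784).astype('float32')
    y_train = np.random.randint(0, 10, size=(256,))

    # kwargs mix model.fit arguments and search backend arguments, as documented above;
    # n_trials and direction are assumed backend argument names
    model.search(n_trials=4, direction='maximize',
                 target_metric='accuracy', target_metric_mode='last',
                 x=x_train, y=y_train, validation_split=0.2,
                 batch_size=32, epochs=2)

    study = model.search_summary()                  # the whole study, for analysis/visualization
    model.fit(x_train, y_train, batch_size=32, epochs=2)   # assumed: fit uses the best trial's config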

bigdl.nano.automl.tf.keras.Sequential

class bigdl.nano.automl.tf.keras.Sequential.Sequential(layers=None, name=None)[source]

A tf.keras.Sequential with HPO capabilities.

Initializer.

Parameters
  • layers – a list of layers (optional). Defaults to None.

  • name – str (optional). Name of the model. Defaults to None.

search(resume=False, target_metric=None, n_parallels=1, target_metric_mode='last', **kwargs)

Run the hyperparameter search.

Parameters
  • resume – bool, optional. whether to resume the previous tuning. Defaults to False.

  • target_metric – str, optional. the target metric to optimize. Defaults to “accuracy”.

  • n_parallels – number of parallel processes to run trials.

  • target_metric_mode – which epoch's target metric to report as the final result. Possible options are:

    'max': maximum value of all epochs' results
    'min': minimum value of all epochs' results
    'last': result of the last epoch
    'auto': use max mode if direction is maximize, min mode if direction is minimize

  • kwargs – model.fit arguments (e.g. batch_size, validation_data, etc.) and search backend arguments (e.g. n_trials, pruner, etc.) are allowed in kwargs.

search_summary()

Retrieve a summary of trials.

Returns

A summary of all the trials. Currently the entire study is returned to allow more flexibility for further analysis and visualization.
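
A shorter sketch for the Sequential case; the same caveats as the Model example above apply (layer imports, HPO enabling, and backend kwargs are assumptions):

    import numpy as np
    import bigdl.nano.automl.hpo.space as space
    from bigdl.nano.automl.tf.keras import Sequential
    from tensorflow.keras.layers import Dense, Flatten   # assumed to be HPO-enabled after setup

    model = Sequential()
    model.add(Flatten(input_shape=(28, 28)))
    model.add(Dense(units=space.Categorical(64, 128), activation='relu'))
    model.add(Dense(10, activation='softmax'))
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])

    x = np.random.rand(128, 28, 28).astype('float32')    # dummy data, for illustration only
    y = np.random.randint(0, 10, size=(128,))
    model.search(n_trials=2, target_metric='accuracy', target_metric_mode='last',
                 x=x, y=y, validation_split=0.2, batch_size=32, epochs=2)
    study = model.search_summary()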

HPO for PyTorch

bigdl.nano.pytorch.Trainer

class bigdl.nano.pytorch.Trainer(*args: Any, **kwargs: Any)[source]

Trainer for BigDL-Nano PyTorch.

This Trainer extends the PyTorch Lightning Trainer by adding various options to accelerate PyTorch training.

A PyTorch Lightning trainer that uses BigDL-Nano optimizations.

Parameters
  • num_processes – number of processes in distributed training. default: 4.

  • use_ipex – whether we use ipex as accelerator for trainer. default: False.

  • cpu_for_each_process – A list of length num_processes, each containing a list of indices of cpus each process will be using. default: None, and the cpu will be automatically and evenly distributed among processes.

  • precision – Double precision (64), full precision (32), half precision (16) or bfloat16 precision (bf16). Defaults to 32. Enables IPEX bfloat16 weight prepack when use_ipex=True and precision='bf16'.
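
A brief construction sketch using the options above; the max_epochs argument is assumed to pass through to the underlying PyTorch Lightning Trainer:

    from bigdl.nano.pytorch import Trainer

    trainer = Trainer(num_processes=2,        # distributed training over 2 processes
                      use_ipex=True,          # use IPEX as the accelerator
                      precision='bf16',       # enables IPEX bf16 weight prepack (per the note above)
                      max_epochs=5)           # assumed PyTorch Lightning pass-through argument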

search(model, resume: bool = False, target_metric=None, n_parallels=1, acceleration=False, input_sample=None, **kwargs)[source]

Run the HPO search.

Parameters
  • model – The model to be searched. It should be an auto model.

  • resume – whether to resume the previous search or start a new one. Defaults to False.

  • target_metric – the target metric to optimize. Defaults to None.

  • n_parallels – the number of parallel processes for running trials.

  • acceleration – Whether to automatically consider the model after inference acceleration in the search process. It will only take effect if target_metric contains “latency”. Default value is False.

  • input_sample – A set of inputs for tracing. Defaults to None, which is fine if the model has been traced before or is a LightningModule with a dataloader attached.

Returns

The model with study meta info attached.
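
A hedged usage sketch. Making a LightningModule searchable via the bigdl.nano.automl.hpo.plmodel decorator, the use_hpo Trainer flag, and the direction/max_epochs kwargs are assumptions drawn from the typical BigDL-Nano HPO workflow; this reference only documents search() itself.

    import bigdl.nano.automl.hpo as hpo
    import bigdl.nano.automl.hpo.space as space
    from bigdl.nano.pytorch import Trainer

    # `Net` is assumed to be a pytorch_lightning.LightningModule decorated with
    # @hpo.plmodel() so that it becomes an "auto model" accepting search spaces.
    model = Net(hidden_size=space.Int(32, 128),
                learning_rate=space.Real(1e-4, 1e-2, log=True))

    trainer = Trainer(max_epochs=2, use_hpo=True)        # use_hpo flag is an assumption
    best_model = trainer.search(model,
                                target_metric='val_loss',
                                direction='minimize',    # assumed backend argument
                                n_trials=10,
                                max_epochs=2)            # extra kwargs go to the backend/fit
    trainer.fit(best_model)                              # train the model returned by search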

search_summary()[source]

Retrieve a summary of trials.

Returns

A summary of all the trials. Currently the entire study is returned to allow more flexibility for further analysis and visualization.
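
Continuing the sketch above, and assuming the returned study exposes an Optuna-style interface (the backend arguments mentioned earlier, such as n_trials and pruner, suggest Optuna, but this reference does not say so), it can be inspected directly:

    study = trainer.search_summary()
    print(study.best_trial.params)        # best hyperparameter values found
    df = study.trials_dataframe()         # per-trial results for further analysis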