Orca API

orca.learn.bigdl.estimator

class bigdl.orca.learn.bigdl.estimator.Estimator[source]

Bases: object

static from_bigdl(*, model, loss=None, optimizer=None, metrics=None, feature_preprocessing=None, label_preprocessing=None, model_dir=None)[source]

Construct an Estimator with BigDL model, loss function and Preprocessing for feature and label data.

Parameters
  • model – BigDL Model to be trained.

  • loss – BigDL criterion.

  • optimizer – BigDL optimizer.

  • metrics – An evaluation metric or a list of evaluation metrics.

  • feature_preprocessing

    Used when data in fit and predict is a Spark DataFrame. The param converts the data in feature column to a Tensor or to a Sample directly. It expects a List of Int as the size of the converted Tensor, or a Preprocessing[F, Tensor[T]]

    If a List of Int is set as feature_preprocessing, it can only handle the case that feature column contains the following data types: Float, Double, Int, Array[Float], Array[Double], Array[Int] and MLlib Vector. The feature data are converted to Tensors with the specified sizes before sending to the model. Internally, a SeqToTensor is generated according to the size, and used as the feature_preprocessing.

    Alternatively, the user can set feature_preprocessing as a Preprocessing[F, Tensor[T]] that transforms the feature data to a Tensor[T]. Some pre-defined Preprocessing are provided in the package bigdl.dllib.feature. Multiple Preprocessing can be combined as a ChainedPreprocessing.

    The feature_preprocessing will also be copied to the generated NNModel and applied to feature column during transform.

  • label_preprocessing – Used when data in fit and predict is a Spark DataFrame. Similar to feature_preprocessing, but applied to the label data.

  • model_dir – The path to save model. During the training, if checkpoint_trigger is defined and triggered, the model will be saved to model_dir.

Returns
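
A minimal construction sketch follows. The Estimator.from_bigdl call uses the signature above; the dllib layer, criterion, optimizer and Orca metrics import paths are assumptions based on the BigDL 2.x package layout, and feature_preprocessing=[2] assumes a numeric feature column with 2 elements.

from bigdl.orca.learn.bigdl.estimator import Estimator
from bigdl.orca.learn.metrics import Accuracy                            # assumed Orca metrics path
from bigdl.dllib.nn.layer import Sequential, Linear, ReLU, LogSoftMax    # assumed dllib paths
from bigdl.dllib.nn.criterion import ClassNLLCriterion
from bigdl.dllib.optim.optimizer import Adam

# A tiny 2-class classifier over 2-dimensional features.
model = Sequential().add(Linear(2, 8)).add(ReLU()).add(Linear(8, 2)).add(LogSoftMax())

est = Estimator.from_bigdl(model=model,
                           loss=ClassNLLCriterion(),
                           optimizer=Adam(),
                           metrics=[Accuracy()],
                           # A List of Int: the feature column is converted to a Tensor of size [2].
                           feature_preprocessing=[2],
                           label_preprocessing=[1],
                           model_dir="/tmp/bigdl_ckpt")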

class bigdl.orca.learn.bigdl.estimator.BigDLEstimator(*, model, loss, optimizer=None, metrics=None, feature_preprocessing=None, label_preprocessing=None, model_dir=None)[source]

Bases: bigdl.orca.learn.spark_estimator.Estimator

fit(data, epochs, batch_size=32, feature_cols='features', label_cols='label', caching_sample=True, validation_data=None, validation_trigger=None, checkpoint_trigger=None)[source]

Train this BigDL model with train data.

Parameters
  • data – train data. It can be XShards or Spark DataFrame. If data is XShards, each partition is a dictionary of {‘x’: feature, ‘y’: label}, where feature(label) is a numpy array or a list of numpy arrays.

  • epochs – Number of epochs to train the model.

  • batch_size – Batch size used for training. Default: 32.

  • feature_cols – Feature column name(s) of data. Only used when data is a Spark DataFrame. Default: “features”.

  • label_cols – Label column name(s) of data. Only used when data is a Spark DataFrame. Default: “label”.

  • caching_sample – whether to cache the Samples after preprocessing. Default: True

  • validation_data – Validation data. XShards and Spark DataFrame are supported. If data is XShards, each partition is a dictionary of {‘x’: feature, ‘y’: label}, where feature(label) is a numpy array or a list of numpy arrays.

  • validation_trigger – Orca Trigger to trigger validation computation.

  • checkpoint_trigger – Orca Trigger to set a checkpoint.

Returns
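
A hedged sketch of a fit call on in-memory data wrapped as XShards; XShards.partition and the trigger import path are assumptions about the Orca data and trigger packages, and the labels are 1-based class indices as BigDL classification criteria typically expect.

import numpy as np
from bigdl.orca.data import XShards              # assumed XShards entry point
from bigdl.orca.learn.trigger import EveryEpoch  # assumed Orca trigger path

# Each partition of the XShards is a dict of {'x': features, 'y': labels}.
train_shards = XShards.partition({
    "x": np.random.rand(100, 2).astype("float32"),
    "y": np.random.randint(1, 3, (100, 1)).astype("float32"),   # 1-based labels (assumption)
})

est.fit(train_shards,
        epochs=5,
        batch_size=32,
        validation_data=train_shards,      # the same shards are reused purely for illustration
        validation_trigger=EveryEpoch(),
        checkpoint_trigger=EveryEpoch())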

predict(data, batch_size=4, feature_cols='features', sample_preprocessing=None)[source]

Predict input data

Parameters
  • data – predict input data. It can be XShards or Spark DataFrame. If data is XShards, each partition is a dictionary of {‘x’: feature}, where feature is a numpy array or a list of numpy arrays.

  • batch_size – Batch size used for inference. Default: 4.

  • feature_cols – Feature column name(s) of data. Only used when data is a Spark DataFrame. Default: “features”.

  • sample_preprocessing – Used when data is a Spark DataFrame. If the user wants to change the default feature_preprocessing specified in Estimator.from_bigdl, a new sample_preprocessing can be passed here.

Returns

predicted result. If input data is Spark DataFrame, the predict result is a DataFrame which includes original columns plus ‘prediction’ column. The ‘prediction’ column can be FloatType, VectorUDT or Array of VectorUDT depending on model outputs shape. If input data is an XShards, the predict result is a XShards, each partition of the XShards is a dictionary of {‘prediction’: result}, where result is a numpy array or a list of numpy arrays.
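
For Spark DataFrame input, a brief illustrative call follows; df and its “features” column are assumptions.

# df is an assumed Spark DataFrame with a numeric "features" column.
pred_df = est.predict(df, batch_size=4, feature_cols="features")
pred_df.select("features", "prediction").show(5)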

evaluate(data, batch_size=32, feature_cols='features', label_cols='label')[source]

Evaluate model.

Parameters
  • data – validation data. It can be XShards or Spark DataFrame. If data is XShards, each partition is a dictionary of {‘x’: feature, ‘y’: label}, where feature(label) is a numpy array or a list of numpy arrays.

  • batch_size – Batch size used for validation. Default: 32.

  • feature_cols – (Not supported yet) Feature column name(s) of data. Only used when data is a Spark DataFrame. Default: None.

  • label_cols – (Not supported yet) Label column name(s) of data. Only used when data is a Spark DataFrame. Default: None.

Returns

get_model()[source]

Get the trained BigDL model

Returns

The trained BigDL model

save(model_path)[source]

Save the BigDL model to model_path

Parameters

model_path – path to save the trained model.

Returns

load(checkpoint, optimizer=None, loss=None, feature_preprocessing=None, label_preprocessing=None, model_dir=None, is_checkpoint=False)[source]

Load existing BigDL model or checkpoint

Parameters
  • checkpoint – Path to the existing model or checkpoint.

  • optimizer – BigDL optimizer.

  • loss – BigDL criterion.

  • feature_preprocessing

    Used when data in fit and predict is a Spark DataFrame. The param converts the data in feature column to a Tensor or to a Sample directly. It expects a List of Int as the size of the converted Tensor, or a Preprocessing[F, Tensor[T]]

    If a List of Int is set as feature_preprocessing, it can only handle the case that feature column contains the following data types: Float, Double, Int, Array[Float], Array[Double], Array[Int] and MLlib Vector. The feature data are converted to Tensors with the specified sizes before sending to the model. Internally, a SeqToTensor is generated according to the size, and used as the feature_preprocessing.

    Alternatively, the user can set feature_preprocessing as a Preprocessing[F, Tensor[T]] that transforms the feature data to a Tensor[T]. Some pre-defined Preprocessing are provided in the package bigdl.dllib.feature. Multiple Preprocessing can be combined as a ChainedPreprocessing.

    The feature_preprocessing will also be copied to the generated NNModel and applied to feature column during transform.

  • label_preprocessing – Used when data in fit and predict is a Spark DataFrame. Similar to feature_preprocessing, but applied to the label data.

  • model_dir – The path to save model. During the training, if checkpoint_trigger is defined and triggered, the model will be saved to model_dir.

  • is_checkpoint – Whether the path is a checkpoint or a saved BigDL model. Default: False.

Returns

The loaded estimator object.

load_orca_checkpoint(path, version=None, prefix=None)[source]

Load an existing checkpoint. To load a specific checkpoint, please provide both version and prefix. If version is None, then the latest checkpoint under the specified directory will be loaded.

Parameters
  • path – Path to the existing checkpoint (or directory containing Orca checkpoint files).

  • version – checkpoint version, which is the suffix of the model.* file, e.g., for the model.4 file, the version is 4. If it is None, the latest checkpoint will be loaded.

  • prefix – optimMethod prefix, for example ‘optimMethod-Sequentialf53bddcc’

Returns

clear_gradient_clipping()[source]

Clear gradient clipping parameters. In this case, gradient clipping will not be applied. In order to take effect, it needs to be called before fit.

Returns

set_constant_gradient_clipping(min, max)[source]

Set constant gradient clipping during the training process. In order to take effect, it needs to be called before fit.

Parameters
  • min – The minimum value to clip by.

  • max – The maximum value to clip by.

Returns

set_l2_norm_gradient_clipping(clip_norm)[source]

Clip gradient to a maximum L2-Norm during the training process. In order to take effect, it needs to be called before fit.

Parameters

clip_norm – Gradient L2-Norm threshold.

Returns

get_train_summary(tag=None)[source]

Get the scalar from model train summary.

This method will return a list of summary data of [iteration_number, scalar_value, timestamp].

Parameters

tag – The string variable representing the scalar wanted.

get_validation_summary(tag=None)[source]

Get the scalar from model validation summary.

This method will return a list of summary data of [iteration_number, scalar_value, timestamp]. Note that the metric and tag may not be consistent. Please look up the following table to pass the tag parameter. The left side is your metric during compile; the right side is the tag you should pass.

>>> 'Accuracy'                  |   'Top1Accuracy'
>>> 'BinaryAccuracy'            |   'Top1Accuracy'
>>> 'CategoricalAccuracy'       |   'Top1Accuracy'
>>> 'SparseCategoricalAccuracy' |   'Top1Accuracy'
>>> 'AUC'                       |   'AucScore'
>>> 'HitRatio'                  |   'HitRate@k' (k is Top-k)
>>> 'Loss'                      |   'Loss'
>>> 'MAE'                       |   'MAE'
>>> 'NDCG'                      |   'NDCG'
>>> 'TFValidationMethod'        |   '${name + " " + valMethod.toString()}'
>>> 'Top5Accuracy'              |   'Top5Accuracy'
>>> 'TreeNNAccuracy'            |   'TreeNNAccuracy()'
>>> 'MeanAveragePrecision'      |   'MAP@k' (k is Top-k) (BigDL)
>>> 'MeanAveragePrecision'      |   'PascalMeanAveragePrecision' (Zoo)
>>> 'StatelessMetric'           |   '${name}'
Parameters

tag – The string variable representing the scalar wanted.
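
For example, if the estimator was compiled with an Accuracy metric, the corresponding validation tag is 'Top1Accuracy'. A short sketch, assuming fit() has already run with validation data:

train_loss = est.get_train_summary(tag="Loss")
val_acc = est.get_validation_summary(tag="Top1Accuracy")
# Each entry is [iteration_number, scalar_value, timestamp].
for iteration, value, timestamp in val_acc[:5]:
    print(iteration, value)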

orca.learn.tf.estimator

class bigdl.orca.learn.tf.estimator.Estimator[source]

Bases: bigdl.orca.learn.spark_estimator.Estimator

fit(data, epochs, batch_size=32, feature_cols=None, label_cols=None, validation_data=None, session_config=None, checkpoint_trigger=None, auto_shard_files=False)[source]

Train the model with train data.

Parameters
  • data – train data. It can be XShards, Spark DataFrame, tf.data.Dataset. If data is XShards, each partition can be a Pandas DataFrame or a dictionary of {‘x’: feature, ‘y’: label}, where feature(label) is a numpy array or a tuple of numpy arrays.

  • epochs – number of epochs to train.

  • batch_size – total batch size for each iteration. Default: 32.

  • feature_cols – feature column names if train data is Spark DataFrame or XShards of Pandas DataFrame.

  • label_cols – label column names if train data is Spark DataFrame or XShards of Pandas DataFrame.

  • validation_data – validation data. Validation data type should be the same as train data.

  • session_config – tensorflow session configuration for training. Should be an object of tf.ConfigProto.

  • checkpoint_trigger – when to trigger checkpoint during training. Should be a bigdl.orca.learn.trigger, e.g. EveryEpoch(), SeveralIteration(num_iterations), etc.

  • auto_shard_files – whether to automatically detect if the dataset is file-based and apply sharding on files, otherwise sharding on records. Default is False.

predict(data, batch_size=4, feature_cols=None, auto_shard_files=False)[source]

Predict input data

Parameters
  • data – data to be predicted. It can be XShards, Spark DataFrame. If data is XShards, each partition can be a Pandas DataFrame or a dictionary of {‘x’: feature}, where feature is a numpy array or a tuple of numpy arrays.

  • batch_size – batch size per thread

  • feature_cols – list of feature column names if input data is Spark DataFrame or XShards of Pandas DataFrame.

  • auto_shard_files – whether to automatically detect if the dataset is file-based and apply sharding on files, otherwise sharding on records. Default is False.

Returns

predicted result. If input data is XShards or tf.data.Dataset, the predict result is a XShards, each partition of the XShards is a dictionary of {‘prediction’: result}, where the result is a numpy array or a list of numpy arrays. If input data is Spark DataFrame, the predict result is a DataFrame which includes original columns plus ‘prediction’ column. The ‘prediction’ column can be FloatType, VectorUDT or Array of VectorUDT depending on model outputs shape.

evaluate(data, batch_size=32, feature_cols=None, label_cols=None, auto_shard_files=False)[source]

Evaluate model.

Parameters
  • data – evaluation data. It can be XShards, Spark DataFrame, tf.data.Dataset. If data is XShards, each partition can be a Pandas DataFrame or a dictionary of {‘x’: feature, ‘y’: label}, where feature(label) is a numpy array or a tuple of numpy arrays. If data is tf.data.Dataset, each element is a tuple of input tensors.

  • batch_size – batch size per thread.

  • feature_cols – feature column names if train data is Spark DataFrame or XShards of Pandas DataFrame.

  • label_cols – label column names if train data is Spark DataFrame or XShards of Pandas DataFrame.

  • auto_shard_files – whether to automatically detect if the dataset is file-based and apply sharding on files, otherwise sharding on records. Default is False.

Returns

evaluation result as a dictionary of {‘metric name’: metric value}

get_model()[source]

Get the trained Tensorflow model

Returns

Trained model

save(model_path)[source]

Save model to model_path

Parameters

model_path – path to save the trained model.

Returns

load(model_path)[source]

Load existing model

Parameters

model_path – Path to the existing model.

Returns

clear_gradient_clipping()[source]

Clear gradient clipping parameters. In this case, gradient clipping will not be applied. In order to take effect, it needs to be called before fit.

Returns

set_constant_gradient_clipping(min, max)[source]

Set constant gradient clipping during the training process. In order to take effect, it needs to be called before fit.

Parameters
  • min – The minimum value to clip by.

  • max – The maximum value to clip by.

Returns

set_l2_norm_gradient_clipping(clip_norm)[source]

Clip gradient to a maximum L2-Norm during the training process. In order to take effect, it needs to be called before fit.

Parameters

clip_norm – Gradient L2-Norm threshold.

Returns

get_train_summary(tag=None)[source]

Get the scalar from model train summary.

This method will return a list of summary data of [iteration_number, scalar_value, timestamp].

Parameters

tag – The string variable representing the scalar wanted.

get_validation_summary(tag=None)[source]

Get the scalar from model validation summary.

This method will return a list of summary data of [iteration_number, scalar_value, timestamp]. Note that the metric and tag may not be consistent. Please look up the following table to pass the tag parameter. The left side is your metric during compile; the right side is the tag you should pass.

>>> 'Accuracy'                  |   'Top1Accuracy'
>>> 'BinaryAccuracy'            |   'Top1Accuracy'
>>> 'CategoricalAccuracy'       |   'Top1Accuracy'
>>> 'SparseCategoricalAccuracy' |   'Top1Accuracy'
>>> 'AUC'                       |   'AucScore'
>>> 'HitRatio'                  |   'HitRate@k' (k is Top-k)
>>> 'Loss'                      |   'Loss'
>>> 'MAE'                       |   'MAE'
>>> 'NDCG'                      |   'NDCG'
>>> 'TFValidationMethod'        |   '${name + " " + valMethod.toString()}'
>>> 'Top5Accuracy'              |   'Top5Accuracy'
>>> 'TreeNNAccuracy'            |   'TreeNNAccuracy()'
>>> 'MeanAveragePrecision'      |   'MAP@k' (k is Top-k) (BigDL)
>>> 'MeanAveragePrecision'      |   'PascalMeanAveragePrecision' (Zoo)
>>> 'StatelessMetric'           |   '${name}'
Parameters

tag – The string variable representing the scalar wanted.

save_tf_checkpoint(path)[source]

Save tensorflow checkpoint in this estimator.

Parameters

path – tensorflow checkpoint path.

load_tf_checkpoint(path)[source]

Load tensorflow checkpoint to this estimator.

Parameters

path – tensorflow checkpoint path.

save_keras_model(path, overwrite=True)[source]

Save tensorflow keras model in this estimator.

Parameters
  • path – keras model save path.

  • overwrite – Whether to silently overwrite any existing file at the target location.

save_keras_weights(filepath, overwrite=True, save_format=None)[source]

Save tensorflow keras model weights in this estimator.

Parameters
  • filepath – keras model weights save path.

  • overwrite – Whether to silently overwrite any existing file at the target location.

  • save_format – Either ‘tf’ or ‘h5’. A filepath ending in ‘.h5’ or ‘.keras’ will default to HDF5 if save_format is None. Otherwise None defaults to ‘tf’.

load_keras_weights(filepath, by_name=False)[source]

Load tensorflow keras model weights in this estimator.

Parameters
  • filepath – keras model weights save path.

  • by_name – Boolean, whether to load weights by name or by topological order. Only topological loading is supported for weight files in TensorFlow format.

load_orca_checkpoint(path, version=None)[source]

Load Orca checkpoint. To load a specific checkpoint, please provide a version. If version is None, then the latest checkpoint will be loaded.

Parameters
  • path – checkpoint directory which contains model.* and optimMethod-TFParkTraining.* files.

  • version – checkpoint version, which is the suffix of the model.* file, e.g., for the model.4 file, the version is 4.

static from_graph(*, inputs, outputs=None, labels=None, loss=None, optimizer=None, metrics=None, clip_norm=None, clip_value=None, updates=None, sess=None, model_dir=None, backend='bigdl')[source]

Create an Estimator for a TensorFlow graph.

Parameters
  • inputs – input tensorflow tensors.

  • outputs – output tensorflow tensors.

  • labels – label tensorflow tensors.

  • loss – The loss tensor of the TensorFlow model, should be a scalar

  • optimizer – tensorflow optimization method.

  • clip_norm – float >= 0. Gradients will be clipped when their L2 norm exceeds this value.

  • clip_value – a float >= 0 or a tuple of two floats. If clip_value is a float, gradients will be clipped when their absolute value exceeds this value. If clip_value is a tuple of two floats, gradients will be clipped when their value is less than clip_value[0] or larger than clip_value[1].

  • metrics – metric tensor.

  • updates – Collection for the update ops. For example, when performing batch normalization, the moving_mean and moving_variance should be updated and the user should add tf.GraphKeys.UPDATE_OPS to updates. Default is None.

  • sess – the current TensorFlow Session. If you want to use a pre-trained model, you should use the Session to load the pre-trained variables and pass it to the estimator.

  • model_dir – location to save model checkpoint and summaries.

  • backend – backend for the estimator. Currently it can only be “bigdl”.

Returns

an Estimator object.
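
A minimal sketch of wrapping a TF 1.x graph; it uses TF 1.x-style APIs (tf.placeholder, tf.layers, tf.train), and the metrics dict format (name to scalar tensor) is an assumption.

import tensorflow as tf
from bigdl.orca.learn.tf.estimator import Estimator

# A tiny softmax-classifier graph built with TF 1.x-style placeholders.
images = tf.placeholder(tf.float32, shape=(None, 28 * 28))
labels = tf.placeholder(tf.int32, shape=(None,))
logits = tf.layers.dense(images, 10)
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
acc = tf.reduce_mean(tf.cast(
    tf.equal(tf.argmax(logits, axis=1, output_type=tf.int32), labels), tf.float32))

est = Estimator.from_graph(inputs=[images],
                           outputs=[logits],
                           labels=[labels],
                           loss=loss,
                           optimizer=tf.train.AdamOptimizer(),
                           metrics={"acc": acc},   # assumed format: name -> scalar tensor
                           model_dir="/tmp/tf_graph_ckpt")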

static from_keras(keras_model, metrics=None, model_dir=None, optimizer=None, backend='bigdl')[source]

Create an Estimator from a tensorflow.keras model. The model must be compiled.

Parameters
  • keras_model – the tensorflow.keras model, which must be compiled.

  • metrics – user specified metric.

  • model_dir – location to save model checkpoint and summaries.

  • optimizer – an optional orca optimMethod that will override the optimizer in keras_model.compile

  • backend – backend for the estimator. Currently it can only be “bigdl”.

Returns

an Estimator object.
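
A short sketch with a compiled tf.keras model (TF 1.x Keras assumed; the layer sizes are illustrative).

import tensorflow as tf
from bigdl.orca.learn.tf.estimator import Estimator

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

est = Estimator.from_keras(model, model_dir="/tmp/keras_ckpt")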

static load_keras_model(path)[source]

Create Estimator by loading an existing keras model (with weights) from HDF5 file.

Parameters

path – String. The path to the pre-defined model.

Returns

Orca TF Estimator.

bigdl.orca.learn.tf.estimator.is_tf_data_dataset(data)[source]
bigdl.orca.learn.tf.estimator.to_dataset(data, batch_size, batch_per_thread, validation_data, feature_cols, label_cols, hard_code_batch_size, sequential_order, shuffle, auto_shard_files, memory_type='DRAM')[source]
bigdl.orca.learn.tf.estimator.save_model_dir(model_dir)[source]
class bigdl.orca.learn.tf.estimator.TensorFlowEstimator(*, inputs, outputs, labels, loss, optimizer, clip_norm, clip_value, metrics, updates, sess, model_dir)[source]

Bases: bigdl.orca.learn.tf.estimator.Estimator

fit(data, epochs=1, batch_size=32, feature_cols=None, label_cols=None, validation_data=None, session_config=None, checkpoint_trigger=None, auto_shard_files=False, feed_dict=None)[source]

Train this graph model with train data.

Parameters
  • data – train data. It can be XShards, Spark DataFrame, tf.data.Dataset. If data is XShards, each partition can be a Pandas DataFrame or a dictionary of {‘x’: feature, ‘y’: label}, where feature(label) is a numpy array or a tuple of numpy arrays. If data is tf.data.Dataset, each element is a tuple of input tensors.

  • epochs – number of epochs to train.

  • batch_size – total batch size for each iteration.

  • feature_cols – feature column names if train data is Spark DataFrame or XShards of Pandas DataFrame.

  • label_cols – label column names if train data is Spark DataFrame or XShards of Pandas DataFrame.

  • validation_data – validation data. Validation data type should be the same as train data.

  • auto_shard_files – whether to automatically detect if the dataset is file-based and apply sharding on files, otherwise sharding on records. Default is False.

  • session_config – tensorflow session configuration for training. Should be an object of tf.ConfigProto.

  • feed_dict – a dictionary. The key is TensorFlow tensor, usually a placeholder, the value of the dictionary is a tuple of two elements. The first one of the tuple is the value to feed to the tensor in training phase and the second one is the value to feed to the tensor in validation phase.

  • checkpoint_trigger – when to trigger checkpoint during training. Should be a bigdl.orca.learn.trigger, e.g. EveryEpoch(), SeveralIteration(num_iterations), etc.

predict(data, batch_size=4, feature_cols=None, auto_shard_files=False)[source]

Predict input data

Parameters
  • data – data to be predicted. It can be XShards, Spark DataFrame. If data is XShards, each partition can be a Pandas DataFrame or a dictionary of {‘x’: feature}, where feature is a numpy array or a tuple of numpy arrays.

  • batch_size – batch size per thread

  • feature_cols – list of feature column names if input data is Spark DataFrame or XShards of Pandas DataFrame.

  • auto_shard_files – whether to automatically detect if the dataset is file-based and apply sharding on files, otherwise sharding on records. Default is False.

Returns

predicted result. If input data is XShards or tf.data.Dataset, the predict result is a XShards, each partition of the XShards is a dictionary of {‘prediction’: result}, where the result is a numpy array or a list of numpy arrays. If input data is Spark DataFrame, the predict result is a DataFrame which includes original columns plus ‘prediction’ column. The ‘prediction’ column can be FloatType, VectorUDT or Array of VectorUDT depending on model outputs shape.

evaluate(data, batch_size=32, feature_cols=None, label_cols=None, auto_shard_files=False)[source]

Evaluate model.

Parameters
  • data – evaluation data. It can be XShards, Spark DataFrame, tf.data.Dataset. If data is XShards, each partition can be a Pandas DataFrame or a dictionary of {‘x’: feature, ‘y’: label}, where feature(label) is a numpy array or a tuple of numpy arrays. If data is tf.data.Dataset, each element is a tuple of input tensors.

  • batch_size – batch size per thread.

  • feature_cols – feature column names if train data is Spark DataFrame or XShards of Pandas DataFrame.

  • label_cols – label column names if train data is Spark DataFrame or XShards of Pandas DataFrame.

  • auto_shard_files – whether to automatically detect if the dataset is file-based and apply sharding on files, otherwise sharding on records. Default is False.

Returns

evaluation result as a dictionary of {‘metric name’: metric value}

save_tf_checkpoint(path)[source]

Save tensorflow checkpoint in this estimator.

Parameters

path – tensorflow checkpoint path.

load_tf_checkpoint(path)[source]

Load tensorflow checkpoint to this estimator.

Parameters

path – tensorflow checkpoint path.

get_model()[source]

get_model is not supported in the tensorflow graph estimator.

save(model_path)[source]

Save model (tensorflow checkpoint) to model_path

Parameters

model_path – path to save the trained model.

Returns

load(model_path)[source]

Load existing model (tensorflow checkpoint) from model_path.

Parameters

model_path – Path to the existing tensorflow checkpoint.

Returns

clear_gradient_clipping()[source]

Clear gradient clipping is not supported in TensorFlowEstimator.

set_constant_gradient_clipping(min, max)[source]

Set constant gradient clipping is not supported in TensorFlowEstimator. Please pass the clip_value to Estimator.from_graph.

set_l2_norm_gradient_clipping(clip_norm)[source]

Set l2 norm gradient clipping is not supported in TensorFlowEstimator. Please pass the clip_norm to Estimator.from_graph.

shutdown()[source]

Close TensorFlow session and release resources.

class bigdl.orca.learn.tf.estimator.KerasEstimator(keras_model, metrics, model_dir, optimizer)[source]

Bases: bigdl.orca.learn.tf.estimator.Estimator

fit(data, epochs=1, batch_size=32, feature_cols=None, label_cols=None, validation_data=None, session_config=None, checkpoint_trigger=None, auto_shard_files=False)[source]

Train this keras model with train data.

Parameters
  • data – train data. It can be XShards, Spark DataFrame, tf.data.Dataset. If data is XShards, each partition can be a Pandas DataFrame or a dictionary of {‘x’: feature, ‘y’: label}, where feature(label) is a numpy array or a tuple of numpy arrays. If data is tf.data.Dataset, each element is [feature tensor tuple, label tensor tuple]

  • epochs – number of epochs to train.

  • batch_size – total batch size for each iteration.

  • feature_cols – feature column names if train data is Spark DataFrame or XShards of Pandas DataFrame.

  • label_cols – label column names if train data is Spark DataFrame or XShards of Pandas DataFrame.

  • validation_data – validation data. Validation data type should be the same as train data.

  • session_config – tensorflow session configuration for training. Should be an object of tf.ConfigProto.

  • checkpoint_trigger – when to trigger checkpoint during training. Should be a bigdl.orca.learn.trigger, e.g. EveryEpoch(), SeveralIteration(num_iterations), etc.

  • auto_shard_files – whether to automatically detect if the dataset is file-based and apply sharding on files, otherwise sharding on records. Default is False.

predict(data, batch_size=4, feature_cols=None, auto_shard_files=False)[source]

Predict input data

Parameters
  • data – data to be predicted. It can be XShards, Spark DataFrame, or tf.data.Dataset. If data is XShards, each partition can be a Pandas DataFrame or a dictionary of {‘x’: feature}, where feature is a numpy array or a tuple of numpy arrays. If data is tf.data.Dataset, each element is feature tensor tuple

  • batch_size – batch size per thread

  • feature_cols – list of feature column names if input data is Spark DataFrame or XShards of Pandas DataFrame.

  • auto_shard_files – whether to automatically detect if the dataset is file-based and apply sharding on files, otherwise sharding on records. Default is False.

Returns

predicted result. If input data is XShards or tf.data.Dataset, the predict result is also a XShards, and the schema for each result is: {‘prediction’: predicted numpy array or list of predicted numpy arrays}. If input data is Spark DataFrame, the predict result is a DataFrame which includes original columns plus ‘prediction’ column. The ‘prediction’ column can be FloatType, VectorUDT or Array of VectorUDT depending on model outputs shape.

evaluate(data, batch_size=32, feature_cols=None, label_cols=None, auto_shard_files=False)[source]

Evaluate model.

Parameters
  • data – evaluation data. It can be XShards, Spark DataFrame, tf.data.Dataset. If data is XShards, each partition can be a Pandas DataFrame or a dictionary of {‘x’: feature, ‘y’: label}, where feature(label) is a numpy array or a tuple of numpy arrays. If data is tf.data.Dataset, each element is [feature tensor tuple, label tensor tuple]

  • batch_size – batch size per thread.

  • feature_cols – feature column names if train data is Spark DataFrame or XShards of Pandas DataFrame.

  • label_cols – label column names if train data is Spark DataFrame or XShards of Pandas DataFrame.

  • auto_shard_files – whether to automatically detect if the dataset is file-based and apply sharding on files, otherwise sharding on records. Default is False.

Returns

evaluation result as a dictionary of {‘metric name’: metric value}

save_keras_model(path, overwrite=True)[source]

Save tensorflow keras model in this estimator.

Parameters
  • path – keras model save path.

  • overwrite – Whether to silently overwrite any existing file at the target location.

get_model()[source]

Get the trained Keras model

Returns

The trained Keras model

save(model_path, overwrite=True)[source]

Save model to model_path

Parameters
  • model_path – path to save the trained model.

  • overwrite – Whether to silently overwrite any existing file at the target location.

Returns

load(model_path)[source]

Load existing keras model

Parameters

model_path – Path to the existing keras model.

Returns

clear_gradient_clipping()[source]

Clear gradient clipping parameters. In this case, gradient clipping will not be applied. In order to take effect, it needs to be called before fit.

Returns

set_constant_gradient_clipping(min, max)[source]

Set constant gradient clipping during the training process. In order to take effect, it needs to be called before fit.

Parameters
  • min – The minimum value to clip by.

  • max – The maximum value to clip by.

Returns

set_l2_norm_gradient_clipping(clip_norm)[source]

Clip gradient to a maximum L2-Norm during the training process. In order to take effect, it needs to be called before fit.

Parameters

clip_norm – Gradient L2-Norm threshold.

Returns

save_keras_weights(filepath, overwrite=True, save_format=None)[source]

Save tensorflow keras model weights in this estimator.

Parameters
  • filepath – keras model weights save path.

  • overwrite – Whether to silently overwrite any existing file at the target location.

  • save_format – Either ‘tf’ or ‘h5’. A filepath ending in ‘.h5’ or ‘.keras’ will default to HDF5 if save_format is None. Otherwise None defaults to ‘tf’.

load_keras_weights(filepath, by_name=False)[source]

Load tensorflow keras model weights in this estimator.

Parameters
  • filepath – keras model weights save path.

  • by_name – Boolean, whether to load weights by name or by topological order. Only topological loading is supported for weight files in TensorFlow format.

orca.learn.tf2.estimator

class bigdl.orca.learn.tf2.estimator.Estimator[source]

Bases: object

static from_keras(*, model_creator, config=None, verbose=False, workers_per_node=1, compile_args_creator=None, backend='ray', cpu_binding=False, log_to_driver=True, model_dir=None, **kwargs)[source]

Create an Estimator for tensorflow 2.

Parameters
  • model_creator – (dict -> Model) This function takes in the config dict and returns a compiled TF model.

  • config – (dict) configuration passed to ‘model_creator’, ‘data_creator’. Also contains fit_config, which is passed into model.fit(data, **fit_config) and evaluate_config which is passed into model.evaluate.

  • verbose – (bool) Prints output of one model if true.

  • workers_per_node – (Int) worker number on each node. default: 1.

  • compile_args_creator – (dict -> dict of loss, optimizer and metrics) Only used when the backend=”horovod”. This function takes in the config dict and returns a dictionary like {“optimizer”: tf.keras.optimizers.SGD(lr), “loss”: “mean_squared_error”, “metrics”: [“mean_squared_error”]}

  • backend – (string) You can choose “horovod”, “ray” or “spark” as backend. Default: ray.

  • cpu_binding – (bool) Whether to bind threads to specific CPUs. Default: False

  • log_to_driver – (bool) Whether to display the executor log on the driver in cluster mode. Default: True. This option is only for the “spark” backend.

  • model_dir – (str) The directory to save model states. It is required for the “spark” backend. For cluster mode, it should be a shared filesystem path which can be accessed by executors.
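
A minimal model_creator sketch for the Ray backend; the model architecture and hyperparameters are illustrative.

import tensorflow as tf
from bigdl.orca.learn.tf2.estimator import Estimator

def model_creator(config):
    # Takes the config dict and returns a compiled tf.keras model.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(config.get("lr", 1e-3)),
                  loss="mse",
                  metrics=["mae"])
    return model

est = Estimator.from_keras(model_creator=model_creator,
                           config={"lr": 1e-3},
                           workers_per_node=2,
                           backend="ray")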

static latest_checkpoint(checkpoint_dir)[source]
bigdl.orca.learn.tf2.estimator.make_data_creator(refs)[source]
bigdl.orca.learn.tf2.estimator.data_length(data)[source]

orca.learn.tf2.tf2_ray_estimator

Orca TF2Estimator with backend of “horovod” or “ray”.

class bigdl.orca.learn.tf2.ray_estimator.TensorFlow2Estimator(model_creator, compile_args_creator=None, config=None, verbose=False, backend='ray', workers_per_node=1, cpu_binding=False)[source]

Bases: bigdl.orca.learn.ray_estimator.Estimator

fit(data, epochs=1, batch_size=32, verbose=1, callbacks=None, validation_data=None, class_weight=None, steps_per_epoch=None, validation_steps=None, validation_freq=1, data_config=None, feature_cols=None, label_cols=None)[source]

Train this tensorflow model with train data.

Parameters
  • data – train data. It can be XShards, Spark DataFrame, Ray Dataset or creator function which returns Iter or DataLoader. If data is XShards, each partition can be a Pandas DataFrame or a dictionary of {‘x’: feature, ‘y’: label}, where feature(label) is a numpy array or a tuple of numpy arrays.

  • epochs – Number of epochs to train the model. Default: 1.

  • batch_size – Batch size used for training. Default: 32.

  • verbose – Prints output of one model if true.

  • callbacks – List of Keras compatible callbacks to apply during training.

  • validation_data – validation data. Validation data type should be the same as train data.

  • class_weight – Optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function. This can be useful to tell the model to “pay more attention” to samples from an under-represented class.

  • steps_per_epoch – Total number of steps (batches of samples) before declaring one epoch finished and starting the next epoch. If steps_per_epoch is None, the epoch will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the steps_per_epoch argument.

  • validation_steps – Total number of steps (batches of samples) to draw before stopping when performing validation at the end of every epoch. Default: None.

  • validation_freq – Only relevant if validation data is provided. Integer or collections_abc.Container instance (e.g. list, tuple, etc.). If an integer, specifies how many training epochs to run before a new validation run is performed, e.g. validation_freq=2 runs validation every 2 epochs. If a Container, specifies the epochs on which to run validation, e.g. validation_freq=[1, 2, 10] runs validation at the end of the 1st, 2nd, and 10th epochs.

  • data_config – An optional dictionary that can be passed to data creator function. If data is a Ray Dataset, specifies output_signature same as in tf.data.Dataset.from_generator (If label_cols is specified, a 2-element tuple of tf.TypeSpec objects corresponding to (features, label). Otherwise, a single tf.TypeSpec corresponding to features tensor).

  • feature_cols – Feature column name(s) of data. Only used when data is a Spark DataFrame, an XShards of Pandas DataFrame or a Ray Dataset. Default: None.

  • label_cols – Label column name(s) of data. Only used when data is a Spark DataFrame, an XShards of Pandas DataFrame or a Ray Dataset. Default: None.

Returns
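
A brief usage sketch on a Spark DataFrame; df and its column names are assumptions.

# df is an assumed Spark DataFrame with a numeric "features" column and a "label" column.
est.fit(df,
        epochs=2,
        batch_size=64,
        feature_cols=["features"],
        label_cols=["label"],
        validation_data=df,     # reused here only for illustration
        validation_freq=1)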

evaluate(data, batch_size=32, num_steps=None, verbose=1, sample_weight=None, callbacks=None, data_config=None, feature_cols=None, label_cols=None)[source]

Evaluates the model on the validation data set.

Parameters
  • data – evaluate data. It can be XShards, Spark DataFrame, Ray Dataset or creator function which returns Iter or DataLoader. If data is XShards, each partition can be a Pandas DataFrame or a dictionary of {‘x’: feature, ‘y’: label}, where feature(label) is a numpy array or a tuple of numpy arrays.

  • batch_size – Batch size used for evaluation. Default: 32.

  • num_steps – Total number of steps (batches of samples) before declaring the evaluation round finished. Ignored with the default value of None.

  • verbose – Prints output of one model if true.

  • sample_weight – Optional Numpy array of weights for the training samples, used for weighting the loss function. You can either pass a flat (1D) Numpy array with the same length as the input samples (1:1 mapping between weights and samples), or in the case of temporal data, you can pass a 2D array with shape (samples, sequence_length), to apply a different weight to every timestep of every sample.

  • callbacks – List of Keras compatible callbacks to apply during evaluation.

  • data_config – An optional dictionary that can be passed to data creator function. If data is a Ray Dataset, specifies output_signature same as in tf.data.Dataset.from_generator (If label_cols is specified, a 2-element tuple of tf.TypeSpec objects corresponding to (features, label). Otherwise, a single tf.TypeSpec corresponding to features tensor).

  • feature_cols – Feature column name(s) of data. Only used when data is a Spark DataFrame, an XShards of Pandas DataFrame or a Ray Dataset. Default: None.

  • label_cols – Label column name(s) of data. Only used when data is a Spark DataFrame, an XShards of Pandas DataFrame or a Ray Dataset. Default: None.

Returns

validation result

process_ray_dataset(shard, label_cols, feature_cols, data_config)[source]
predict(data, batch_size=None, verbose=1, steps=None, callbacks=None, data_config=None, feature_cols=None, min_partition_num=None)[source]

Predict the input data

Parameters
  • data – predict input data. It can be XShards, Spark DataFrame or orca.data.tf.data.Dataset. If data is XShards, each partition can be a Pandas DataFrame or a dictionary of {‘x’: feature}, where feature is a numpy array or a tuple of numpy arrays.

  • batch_size – Batch size used for inference. Default: None.

  • verbose – Prints output of one model if true.

  • steps – Total number of steps (batches of samples) before declaring the prediction round finished. Ignored with the default value of None.

  • callbacks – List of Keras compatible callbacks to apply during prediction.

  • data_config – An optional dictionary that can be passed to data creator function.

  • feature_cols – Feature column name(s) of data. Only used when data is a Spark DataFrame or an XShards of Pandas DataFrame. Default: None.

  • min_partition_num – Int. An optional param for repartition the input data when data is an orca.data.tf.data.Dataset. If min_partition_num != None, the input data will be repartitioned to max(min_partition_num, worker_num) partitions. This parameter is usually used to improve the prediction performance when the model is a customized Keras model, and the number of input partitions is significantly larger than the number of workers. Note that if you set this parameter, the order of the prediction results is not guaranteed to be the same as the input order, so you need to add id information to the input to identify the corresponding prediction results. Default: None.

Returns

get_model(sample_input=None)[source]

Returns the learned model.

Returns

the learned model.

save_checkpoint(checkpoint)[source]

Saves the model at the provided checkpoint.

Parameters

checkpoint – (str) Path to the target checkpoint file.

load_checkpoint(checkpoint, **kwargs)[source]

Loads the model from the provided checkpoint.

Parameters

checkpoint – (str) Path to target checkpoint file.

save(filepath, overwrite=True, include_optimizer=True, save_format=None, signatures=None, options=None)[source]

Saves the model to Tensorflow SavedModel or a single HDF5 file.

Parameters
  • filepath – String, PathLike, path to SavedModel or H5 file to save the model. It can be local/hdfs/s3 filepath

  • overwrite – Whether to silently overwrite any existing file at the target location, or provide the user with a manual prompt.

  • include_optimizer – If True, save optimizer’s state together.

  • save_format – Either ‘tf’ or ‘h5’, indicating whether to save the model to Tensorflow SavedModel or HDF5. Defaults to ‘tf’ in TF 2.X, and ‘h5’ in TF 1.X.

  • signatures – Signatures to save with the SavedModel. Applicable to the ‘tf’ format only. Please see the signatures argument in tf.saved_model.save for details.

  • options – (only applies to SavedModel format) tf.saved_model.SaveOptions object that specifies options for saving to SavedModel.

load(filepath, custom_objects=None, compile=True, options=None)[source]

Loads a model saved via estimator.save().

Parameters
  • filepath – (str) Path of saved model (SavedModel or H5 file). It can be local/hdfs filepath

  • custom_objects – Optional dictionary mapping names (strings) to custom classes or functions to be considered during deserialization.

  • compile – Boolean, whether to compile the model after loading.

  • options – Optional tf.saved_model.LoadOptions object that specifies options for loading from SavedModel.

save_weights(filepath, overwrite=True, save_format=None, options=None)[source]

Save the model weights at the provided filepath.

Parameters
  • filepath – String or PathLike, path to the file to save the weights to. When saving in TensorFlow format, this is the prefix used for checkpoint files (multiple files are generated). Note that the ‘.h5’ suffix causes weights to be saved in HDF5 format.

  • overwrite – Whether to silently overwrite any existing file at the target location, or provide the user with a manual prompt.

  • save_format – Either ‘tf’ or ‘h5’. A filepath ending in ‘.h5’ or ‘.keras’ will default to HDF5 if save_format is None. Otherwise None defaults to ‘tf’.

  • options – Optional tf.train.CheckpointOptions object that specifies options for saving weights.

Returns

load_weights(filepath, by_name=False, skip_mismatch=False, options=None)[source]

Load tensorflow keras model weights from the provided path.

Parameters
  • filepath – String, path to the weights file to load. For weight files in TensorFlow format, this is the file prefix (the same as was passed to save_weights). This can also be a path to a SavedModel saved from model.save.

  • by_name – Boolean, whether to load weights by name or by topological order. Only topological loading is supported for weight files in TensorFlow format.

  • skip_mismatch – Boolean, whether to skip loading of layers where there is a mismatch in the number of weights, or a mismatch in the shape of the weight (only valid when by_name=True).

  • options – Optional tf.train.CheckpointOptions object that specifies options for loading weights.

Returns

shutdown()[source]

Shuts down workers and releases resources.

orca.learn.tf2.tf2_spark_estimator

Orca TF2Estimator with backend of “spark”.

class bigdl.orca.learn.tf2.pyspark_estimator.SparkTFEstimator(model_creator, config=None, compile_args_creator=None, verbose=False, workers_per_node=1, model_dir=None, log_to_driver=True, **kwargs)[source]

Bases: object

fit(data, epochs=1, batch_size=32, verbose=1, callbacks=None, validation_data=None, class_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None, validation_freq=1, data_config=None, feature_cols=None, label_cols=None)[source]

Train this tensorflow model with train data.

Parameters
  • data – train data. It can be XShards, Spark DataFrame or a creator function which returns Iter or DataLoader. If data is XShards, each partition can be a Pandas DataFrame or a dictionary of {‘x’: feature, ‘y’: label}, where feature(label) is a numpy array or a tuple of numpy arrays.

  • epochs – Number of epochs to train the model. Default: 1.

  • batch_size – Batch size used for training. Default: 32.

  • verbose – Prints output of one model if true.

  • callbacks – List of Keras compatible callbacks to apply during training.

  • validation_data – validation data. Validation data type should be the same as train data.

  • class_weight – Optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function. This can be useful to tell the model to “pay more attention” to samples from an under-represented class.

Returns

evaluate(data, batch_size=32, num_steps=None, verbose=1, sample_weight=None, callbacks=None, data_config=None, feature_cols=None, label_cols=None)[source]

Evaluates the model on the validation data set.

Parameters
  • data – evaluate data. It can be XShards, Spark DataFrame or a creator function which returns Iter or DataLoader. If data is XShards, each partition can be a Pandas DataFrame or a dictionary of {‘x’: feature, ‘y’: label}, where feature(label) is a numpy array or a tuple of numpy arrays.

  • validation_data – validation data. Validation data type should be the same as train data.

  • batch_size – Batch size used for evaluation. Default: 32.

  • verbose – Prints output of one model if true.

  • callbacks – List of Keras compatible callbacks to apply during evaluation.

  • class_weight – Optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function. This can be useful to tell the model to “pay more attention” to samples from an under-represented class.

Returns

validation result

predict(data, batch_size=None, verbose=1, steps=None, callbacks=None, data_config=None, feature_cols=None)[source]

Predict the input data.

Parameters
  • data – predict input data. It can be XShards or Spark DataFrame. If data is XShards, each partition can be a Pandas DataFrame or a dictionary of {‘x’: feature}, where feature is a numpy array or a tuple of numpy arrays.

  • batch_size – Batch size used for inference. Default: None.

  • verbose – Prints output of one model if true.

  • steps – Total number of steps (batches of samples) before declaring the prediction round finished. Ignored with the default value of None.

  • callbacks – List of Keras compatible callbacks to apply during prediction.

  • data_config – An optional dictionary that can be passed to data creator function.

  • feature_cols – Feature column name(s) of data. Only used when data is a Spark DataFrame or an XShards of Pandas DataFrame. Default: None.

Returns

save_weights(filepath, overwrite=True, save_format=None)[source]

Save model weights at the provided path.

Parameters
  • filepath – String or PathLike, path to the file to save the weights to. When saving in TensorFlow format, this is the prefix used for checkpoint files (multiple files are generated). Note that the ‘.h5’ suffix causes weights to be saved in HDF5 format. It can be a local, hdfs, or s3 filepath.

  • overwrite – Whether to silently overwrite any existing file at the target location, or provide the user with a manual prompt.

  • save_format – Either ‘tf’ or ‘h5’. A filepath ending in ‘.h5’ or ‘.keras’ will default to HDF5 if save_format is None. Otherwise None defaults to ‘tf’.

load_weights(filepath, by_name=False)[source]

Load tensorflow keras model weights in this estimator.

Parameters
  • filepath – keras model weights save path.

  • by_name – Boolean, whether to load weights by name or by topological order. Only topological loading is supported for weight files in TensorFlow format.

save(filepath, overwrite=True, include_optimizer=True, save_format=None, signatures=None, options=None)[source]

Saves the model to Tensorflow SavedModel or a single HDF5 file.

Parameters
  • filepath – String, PathLike, path to SavedModel or H5 file to save the model. It can be local/hdfs/s3 filepath

  • overwrite – Whether to silently overwrite any existing file at the target location, or provide the user with a manual prompt.

  • include_optimizer – If True, save optimizer’s state together.

  • save_format – Either ‘tf’ or ‘h5’, indicating whether to save the model to Tensorflow SavedModel or HDF5. Defaults to ‘tf’ in TF 2.X, and ‘h5’ in TF 1.X.

  • signatures – Signatures to save with the SavedModel. Applicable to the ‘tf’ format only. Please see the signatures argument in tf.saved_model.save for details.

  • options – (only applies to SavedModel format) tf.saved_model.SaveOptions object that specifies options for saving to SavedModel.

load(filepath, custom_objects=None, compile=True)[source]

Loads a model saved via estimator.save().

Parameters
  • filepath – (str) Path of saved model.

  • custom_objects – Optional dictionary mapping names (strings) to custom classes or functions to be considered during deserialization.

  • compile – Boolean, whether to compile the model after loading.

  • options – Optional tf.saved_model.LoadOptions object that specifies options for loading from SavedModel.

get_model()[source]

Returns the learned model.

Returns

the learned model.

shutdown()[source]

Shutdown estimator and release resources.

orca.learn.pytorch.estimator

class bigdl.orca.learn.pytorch.estimator.Estimator[source]

Bases: object

static from_torch(*, model, optimizer, loss=None, metrics=None, scheduler_creator=None, training_operator_cls=<class 'bigdl.orca.learn.pytorch.training_operator.TrainingOperator'>, initialization_hook=None, config=None, scheduler_step_freq='batch', use_tqdm=False, workers_per_node=1, model_dir=None, backend='bigdl', sync_stats=False, log_level=20, log_to_driver=True)[source]

Create an Estimator for torch.

Parameters
  • model – PyTorch model or model creator function if backend=”bigdl”, PyTorch model creator function if backend=”horovod” or “ray”

  • optimizer – Orca/PyTorch optimizer or optimizer creator function if backend=”bigdl” , PyTorch optimizer creator function if backend=”horovod” or “ray”

  • loss – PyTorch loss or loss creator function if backend=”bigdl”, PyTorch loss creator function if backend=”horovod” or “ray”

  • metrics – Orca validation methods for evaluate.

  • scheduler_creator – parameter for horovod and ray backends. a learning rate scheduler wrapping the optimizer. You will need to set scheduler_step_freq="epoch" for the scheduler to be incremented correctly.

  • config – parameter config dict to create model, optimizer, loss and data.

  • scheduler_step_freq – parameter for horovod and ray backends. “batch”, “epoch” or None. This will determine when scheduler.step is called. If “batch”, step will be called after every optimizer step. If “epoch”, step will be called after one pass of the DataLoader. If a scheduler is passed in, this value is expected to not be None.

  • use_tqdm – parameter for horovod and ray backends. You can monitor training progress if use_tqdm=True.

  • workers_per_node – parameter for horovod and ray backends. worker number on each node. default: 1.

  • model_dir – parameter for bigdl and spark backend. The path to save model. During the training, if checkpoint_trigger is defined and triggered, the model will be saved to model_dir.

  • backend – You can choose “horovod”, “ray”, “bigdl” or “spark” as backend. Default: bigdl.

  • sync_stats – Whether to sync metrics across all distributed workers after each epoch. If set to False, only rank 0’s metrics are printed. This param only works for the horovod, ray and pyspark backends. For the spark backend, the printed metrics are always synced. This param only affects the printed metrics; the returned metrics are always averaged across workers. Default: False.

  • log_level – Set the log_level of each distributed worker. This param only works for the horovod, ray and pyspark backends.

  • log_to_driver – (bool) Whether to display the executor log on the driver in cluster mode. Default: True. This option is only for the “spark” backend.

Returns

an Estimator object.
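
A hedged sketch for the default “bigdl” backend, where the model, optimizer and loss can be passed directly; the Orca metrics import path is an assumption. For the “horovod”, “ray” or “spark” backends, creator functions taking the config dict would be passed instead.

import torch
import torch.nn as nn
from bigdl.orca.learn.pytorch.estimator import Estimator
from bigdl.orca.learn.metrics import Accuracy   # assumed Orca metrics path

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))

est = Estimator.from_torch(model=model,
                           optimizer=torch.optim.SGD(model.parameters(), lr=0.01),
                           loss=nn.CrossEntropyLoss(),
                           metrics=[Accuracy()],
                           backend="bigdl")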

static latest_checkpoint(checkpoint_dir)[source]

orca.learn.pytorch.pytorch_ray_estimator

Orca Pytorch Estimator with backend of “horovod” or “ray”.

class bigdl.orca.learn.pytorch.pytorch_ray_estimator.PyTorchRayEstimator(*, model_creator, optimizer_creator, loss_creator=None, metrics=None, scheduler_creator=None, training_operator_cls=<class 'bigdl.orca.learn.pytorch.training_operator.TrainingOperator'>, initialization_hook=None, config=None, scheduler_step_freq='batch', use_tqdm=False, backend='ray', workers_per_node=1, sync_stats=True, log_level=20)[source]

Bases: bigdl.orca.learn.ray_estimator.Estimator

fit(data, epochs=1, batch_size=32, profile=False, reduce_results=True, info=None, feature_cols=None, label_cols=None, validation_data=None, callbacks=[])[source]

Trains a PyTorch model given training data for several epochs. Calls TrainingOperator.train_epoch() on N parallel workers simultaneously underneath the hood.

Parameters
  • data – An instance of SparkXShards, a Ray Dataset, a Spark DataFrame or a function that takes config and batch_size as argument and returns a PyTorch DataLoader for training.

  • epochs – The number of epochs to train the model. Default is 1.

  • batch_size – The number of samples per batch for each worker. Default is 32. The total batch size would be workers_per_node*num_nodes. If your training data is a function, you can set batch_size to be the input batch_size of the function for the PyTorch DataLoader.

  • profile – Boolean. Whether to return time stats for the training procedure. Default is False.

  • reduce_results – Boolean. Whether to average all metrics across all workers into one dict. If a metric is a non-numerical value (or nested dictionaries), one value will be randomly selected among the workers. If False, returns a list of dicts for all workers. Default is True.

  • info – An optional dictionary that can be passed to the TrainingOperator for train_epoch and train_batch.

  • feature_cols – feature column names if data is Spark DataFrame or Ray Dataset.

  • label_cols – label column names if data is Spark DataFrame or Ray Dataset.

  • validation_data – validation data. Validation data type should be the same as train data.

  • callbacks – A list for all callbacks.

Returns

A list of dictionary of metrics for every training epoch. If reduce_results is False, this will return a nested list of metric dictionaries whose length will be equal to the total number of workers. You can also provide custom metrics by passing in a custom training_operator_cls when creating the Estimator.
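
A sketch using a DataLoader creator function as the training data; the synthetic dataset is purely illustrative and est is assumed to be a PyTorchRayEstimator created via Estimator.from_torch.

import torch
from torch.utils.data import DataLoader, TensorDataset

def train_loader_creator(config, batch_size):
    # Takes config and batch_size and returns a PyTorch DataLoader.
    x = torch.randn(1024, 4)
    y = torch.randint(0, 2, (1024,))
    return DataLoader(TensorDataset(x, y), batch_size=batch_size, shuffle=True)

stats = est.fit(train_loader_creator, epochs=2, batch_size=32)
print(stats)   # one metrics dict per epoch, averaged across workers by default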

predict(data, batch_size=32, feature_cols=None, profile=False)[source]

Using this PyTorch model to make predictions on the data.

Parameters
  • data – An instance of SparkXShards, a Ray Dataset or a Spark DataFrame

  • batch_size – The number of samples per batch for each worker. Default is 32.

  • profile – Boolean. Whether to return time stats for the training procedure. Default is False.

  • feature_cols – feature column names if data is a Spark DataFrame or Ray Dataset.

Returns

A SparkXShards or a list that contains the predictions with key “prediction” in each shard

evaluate(data, batch_size=32, num_steps=None, profile=False, info=None, feature_cols=None, label_cols=None)[source]

Evaluates a PyTorch model given validation data. Note that only accuracy for classification with zero-based label is supported by default. You can override validate_batch in TrainingOperator for other metrics. Calls TrainingOperator.validate() on N parallel workers simultaneously underneath the hood.

Parameters
  • data – An instance of SparkXShards, a Spark DataFrame, a Ray Dataset or a function that takes config and batch_size as argument and returns a PyTorch DataLoader for validation.

  • batch_size – The number of samples per batch for each worker. Default is 32. The total batch size would be workers_per_node*num_nodes. If your validation data is a function, you can set batch_size to be the input batch_size of the function for the PyTorch DataLoader.

  • num_steps – The number of batches to compute the validation results on. This corresponds to the number of times TrainingOperator.validate_batch is called.

  • profile – Boolean. Whether to return time stats for the training procedure. Default is False.

  • info – An optional dictionary that can be passed to the TrainingOperator for validate.

  • feature_cols – feature column names if train data is Spark DataFrame or Ray Dataset.

  • label_cols – label column names if train data is Spark DataFrame or Ray Dataset.

Returns

A dictionary of metrics for the given data, including validation accuracy and loss. You can also provide custom metrics by passing in a custom training_operator_cls when creating the Estimator.
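
A minimal evaluate sketch, assuming a hypothetical val_loader_creator function with the same (config, batch_size) signature as the training creator above:

val_stats = est.evaluate(data=val_loader_creator, batch_size=64, num_steps=10)
print(val_stats)   # a dict of metrics such as validation accuracy and loss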

get_model()[source]

Returns the learned PyTorch model.

Returns

The learned PyTorch model.

save(model_path)[source]

Saves the Estimator state (including model and optimizer) to the provided model_path.

Parameters

model_path – (str) Path to save the model.

Returns

load(model_path)[source]

Loads the Estimator state (including model and optimizer) from the provided model_path.

Parameters

model_path – (str) Path to the existing model.

save_checkpoint(model_path)[source]

Manually saves the Estimator state (including model and optimizer) to the provided model_path.

Parameters

model_path – (str) Path to save the model. Both local and remote path are supported. e.g. “/tmp/estimator.ckpt” or “hdfs:///tmp/estimator.ckpt”

Returns

None

load_checkpoint(model_path)[source]

Loads the Estimator state (including model and optimizer) from the provided model_path.

Parameters

model_path – (str) Path to the existing model. Both local and remote path are supported. e.g. “/tmp/estimator.ckpt” or “hdfs:///tmp/estimator.ckpt”

Returns

None
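
A short checkpointing sketch using the HDFS path from the example above (the path and the pre-built est are illustrative):

est.save_checkpoint("hdfs:///tmp/estimator.ckpt")
# ... later, on an Estimator created with the same model definition:
est.load_checkpoint("hdfs:///tmp/estimator.ckpt")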

shutdown(force=False)[source]

Shuts down workers and releases resources.

Returns

get_state_dict()[source]
load_state_dict(state_dict, blocking=True)[source]
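
get_state_dict and load_state_dict are not documented above; assuming they transfer the estimator state as a PyTorch-style state dict, usage would look like the following (the second estimator and the blocking semantics are assumptions):

state = est.get_state_dict()
other_est.load_state_dict(state, blocking=True)   # other_est: hypothetical; blocking semantics assumed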

orca.learn.pytorch.pytorch_spark_estimator

Orca PyTorch Estimator with the “bigdl” backend.

class bigdl.orca.learn.pytorch.pytorch_spark_estimator.PyTorchSparkEstimator(model, loss, optimizer, config=None, metrics=None, model_dir=None, bigdl_type='float')[source]

Bases: bigdl.orca.learn.spark_estimator.Estimator

fit(data, epochs=1, batch_size=None, feature_cols=None, label_cols=None, validation_data=None, checkpoint_trigger=None)[source]

Train this torch model with train data.

Parameters
  • data – Train data. It can be an XShards, a Spark DataFrame, a PyTorch DataLoader, or a PyTorch DataLoader creator function that takes config and batch_size as arguments and returns a PyTorch DataLoader for training. If data is an XShards, each partition can be a Pandas DataFrame or a dictionary of {‘x’: feature, ‘y’: label}, where feature (label) is a numpy array or a list of numpy arrays.

  • epochs – Number of epochs to train the model. Default: 1.

  • batch_size – Batch size used for training. Only used when data is an XShards. Default: 32.

  • feature_cols – Feature column name(s) of data. Only used when data is a Spark DataFrame or an XShards of Pandas DataFrame. Default: None.

  • label_cols – Label column name(s) of data. Only used when data is a Spark DataFrame or an XShards of Pandas DataFrame. Default: None.

  • validation_data – Validation data. XShards, PyTorch DataLoader and PyTorch DataLoader creator function are supported. If data is XShards, each partition can be a Pandas DataFrame or a dictionary of {‘x’: feature, ‘y’: label}, where feature(label) is a numpy array or a list of numpy arrays.

  • checkpoint_trigger – Orca Trigger to set a checkpoint.

Returns

The trained estimator object.
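
For illustration, a minimal fit sketch against a Spark DataFrame. The Estimator.from_torch(..., backend="bigdl") construction, the Accuracy metric import, and the df columns are assumptions used only for context:

import torch
import torch.nn as nn
from bigdl.orca.learn.pytorch import Estimator
from bigdl.orca.learn.metrics import Accuracy   # assumed metric helper

model = nn.Sequential(nn.Linear(10, 2))
est = Estimator.from_torch(model=model,
                           loss=nn.CrossEntropyLoss(),
                           optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
                           metrics=[Accuracy()],
                           backend="bigdl")       # assumed construction
est.fit(data=df, epochs=2, batch_size=64,         # df: hypothetical Spark DataFrame
        feature_cols=["features"], label_cols=["label"])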

predict(data, batch_size=4, feature_cols=None)[source]

Predict input data.

Parameters
  • data – Data to be predicted. It can be an XShards or a Spark DataFrame. If it is an XShards, each partition can be a Pandas DataFrame or a dictionary of {‘x’: feature}, where feature is a numpy array or a list of numpy arrays.

  • batch_size – Batch size used for inference. Default: 4.

  • feature_cols – Feature column name(s) of data. Only used when data is a Spark DataFrame or an XShards of Pandas DataFrame. Default: None.

Returns

The predicted result, which is an XShards. Each partition of the XShards is a dictionary of {‘prediction’: result}, where result is a numpy array or a list of numpy arrays.

evaluate(data, batch_size=None, feature_cols=None, label_cols=None, validation_metrics=None)[source]

Evaluate model.

Parameters
  • data – Evaluation data. It can be an XShards, a Spark DataFrame, a PyTorch DataLoader, or a PyTorch DataLoader creator function. If data is an XShards, each partition can be a Pandas DataFrame or a dictionary of {‘x’: feature, ‘y’: label}, where feature (label) is a numpy array or a list of numpy arrays.

  • batch_size – Batch size used for evaluation. Only used when data is a SparkXShards.

  • feature_cols – Feature column name(s) of data. Only used when data is a Spark DataFrame or an XShards of Pandas DataFrame. Default: None.

  • label_cols – Label column name(s) of data. Only used when data is a Spark DataFrame or an XShards of Pandas DataFrame. Default: None.

  • validation_metrics – Orca validation metrics to be computed on validation_data.

Returns

validation results.
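
A hedged predict/evaluate sketch on the estimator above (df and xshards_of_x are hypothetical inputs):

pred_shards = est.predict(xshards_of_x, batch_size=4)    # xshards_of_x: hypothetical XShards of {'x': features}
print(pred_shards.collect()[0]["prediction"])            # each partition: {'prediction': result}

metrics = est.evaluate(data=df, batch_size=64,           # df: hypothetical Spark DataFrame
                       feature_cols=["features"], label_cols=["label"])
print(metrics)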

get_model()[source]

Get the trained PyTorch model.

Returns

The trained PyTorch model.

save(model_path)[source]

Saves the Estimator state (including model and optimizer) to the provided model_path.

Parameters

model_path – path to save the model.

Returns

model_path

load(model_path)[source]

Load the Estimator state (model, and possibly optimizer) from the provided model_path. The model file should be generated by the save method of this estimator, or by torch.save(state_dict, model_path), where state_dict can be obtained by the state_dict() method of a PyTorch model.

Parameters

model_path – path to the saved model.

Returns
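
A short save/load sketch (paths are hypothetical). As noted above, load also accepts a file written by torch.save(state_dict, model_path):

path = est.save("/tmp/torch_estimator")           # returns the model_path
est.load("/tmp/torch_estimator")

# alternatively, load a raw state dict saved from a plain PyTorch model:
# torch.save(model.state_dict(), "/tmp/model_state.pt")
# est.load("/tmp/model_state.pt")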

load_orca_checkpoint(path, version=None, prefix=None)[source]

Load an existing checkpoint. To load a specific checkpoint, provide both version and prefix. If version is None, the latest checkpoint will be loaded.

Parameters
  • path – Path to the existing checkpoint (or directory containing Orca checkpoint files).

  • version – Checkpoint version, which is the numeric suffix of the model.* file; e.g., for the file model.4, the version is 4. If it is None, the latest checkpoint will be loaded.

  • prefix – optimMethod prefix, for example ‘optimMethod-TorchModelf53bddcc’.

Returns
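
A hedged sketch of resuming from an Orca checkpoint directory (the path is hypothetical; the prefix format follows the example above):

est.load_orca_checkpoint("/tmp/orca_ckpt")        # load the latest checkpoint

est.load_orca_checkpoint("/tmp/orca_ckpt", version=4,
                         prefix="optimMethod-TorchModelf53bddcc")   # load a specific version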

get_train_summary(tag=None)[source]

Get the scalar from model train summary.

This method will return a list of summary data of [iteration_number, scalar_value, timestamp].

Parameters

tag – The name of the scalar to retrieve.

get_validation_summary(tag=None)[source]

Get the scalar from model validation summary.

This method will return a list of summary entries of the form [iteration_number, scalar_value, timestamp]. Note that the metric name and the summary tag may not be identical; use the table below to find the tag to pass. The left column is the metric used during compile, and the right column is the corresponding tag.

'Accuracy'                  |   'Top1Accuracy'
'BinaryAccuracy'            |   'Top1Accuracy'
'CategoricalAccuracy'       |   'Top1Accuracy'
'SparseCategoricalAccuracy' |   'Top1Accuracy'
'AUC'                       |   'AucScore'
'HitRatio'                  |   'HitRate@k' (k is Top-k)
'Loss'                      |   'Loss'
'MAE'                       |   'MAE'
'NDCG'                      |   'NDCG'
'TFValidationMethod'        |   '${name + " " + valMethod.toString()}'
'Top5Accuracy'              |   'Top5Accuracy'
'TreeNNAccuracy'            |   'TreeNNAccuracy()'
'MeanAveragePrecision'      |   'MAP@k' (k is Top-k) (BigDL)
'MeanAveragePrecision'      |   'PascalMeanAveragePrecision' (Zoo)
'StatelessMetric'           |   '${name}'

Parameters

tag – The name of the scalar to retrieve.
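
For example, the summaries can be queried by tag after training with validation enabled (a hedged sketch; it assumes summary data has been collected during fit):

train_loss = est.get_train_summary(tag="Loss")
val_acc = est.get_validation_summary(tag="Top1Accuracy")   # the 'Accuracy' metric maps to tag 'Top1Accuracy'
# each entry has the form [iteration_number, scalar_value, timestamp]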

clear_gradient_clipping()[source]

Clear gradient clipping parameters; after this call, gradient clipping will not be applied. In order to take effect, it needs to be called before fit.

Returns

set_constant_gradient_clipping(min, max)[source]

Set constant gradient clipping during the training process. In order to take effect, it needs to be called before fit.

Parameters
  • min – The minimum value to clip by.

  • max – The maximum value to clip by.

Returns

set_l2_norm_gradient_clipping(clip_norm)[source]

Clip gradient to a maximum L2-Norm during the training process. In order to take effect, it needs to be called before fit.

Parameters

clip_norm – Gradient L2-Norm threshold.

Returns
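
Gradient clipping must be configured before calling fit; a brief sketch (est and df are the hypothetical estimator and DataFrame from the earlier sketches):

est.set_constant_gradient_clipping(min=-5.0, max=5.0)     # clip each gradient value into [-5, 5]
# or clip by total L2 norm instead:
# est.set_l2_norm_gradient_clipping(clip_norm=2.0)
est.fit(data=df, epochs=1, batch_size=64,
        feature_cols=["features"], label_cols=["label"])
est.clear_gradient_clipping()                              # disable clipping for subsequent fits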

orca.learn.openvino.estimator

class bigdl.orca.learn.openvino.estimator.Estimator[source]

Bases: object

static from_openvino(*, model_path)[source]

Load an OpenVINO Estimator.

Parameters

model_path – String. The file path to the OpenVINO IR xml file.

class bigdl.orca.learn.openvino.estimator.OpenvinoEstimator(*, model_path)[source]

Bases: bigdl.orca.learn.spark_estimator.Estimator

fit(data, epochs, batch_size=32, feature_cols=None, label_cols=None, validation_data=None, checkpoint_trigger=None)[source]

Fit is not supported in OpenVINOEstimator

predict(data, feature_cols=None, batch_size=4, input_cols=None)[source]

Predict input data

Parameters
  • batch_size – Int. Batch size for prediction. Default is 4.

  • data – Data to be predicted. XShards, Spark DataFrame, numpy array and list of numpy arrays are supported. If data is an XShards, each partition is a dictionary of {‘x’: feature}, where feature is a numpy array or a list of numpy arrays.

  • feature_cols – Feature column name(s) of data. Only used when data is a Spark DataFrame. Default: None.

  • input_cols – Str or list of str. The model input names, in order. Users can specify the input order using this parameter. If input_cols=None, the default OpenVINO model input list will be used. Default: None.

Returns

The predicted result. If the input data is an XShards, the result is an XShards; each partition of the XShards is a dictionary of {‘prediction’: result}, where result is a numpy array or a list of numpy arrays. If the input data is a numpy array or a list of numpy arrays, the result is a numpy array or a list of numpy arrays.
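
A minimal sketch of loading an OpenVINO IR and predicting on a numpy array (the model path and the input shape are hypothetical):

import numpy as np
from bigdl.orca.learn.openvino.estimator import Estimator

est = Estimator.from_openvino(model_path="/path/to/model.xml")    # hypothetical IR path
batch = np.random.rand(8, 3, 224, 224).astype(np.float32)         # hypothetical input batch
preds = est.predict(batch, batch_size=4)                          # numpy in, numpy (or list) out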

evaluate(data, batch_size=32, feature_cols=None, label_cols=None)[source]

Evaluate is not supported in OpenVINOEstimator

get_model()[source]

Get_model is not supported in OpenVINOEstimator

save(model_path)[source]

Save is not supported in OpenVINOEstimator

load(model_path)[source]

Load an OpenVINO model.

Parameters

model_path – String. The file path to the OpenVINO IR xml file.

Returns

set_tensorboard(log_dir, app_name)[source]

Set_tensorboard is not supported in OpenVINOEstimator

clear_gradient_clipping()[source]

Clear_gradient_clipping is not supported in OpenVINOEstimator

set_constant_gradient_clipping(min, max)[source]

Set_constant_gradient_clipping is not supported in OpenVINOEstimator

set_l2_norm_gradient_clipping(clip_norm)[source]

Set_l2_norm_gradient_clipping is not supported in OpenVINOEstimator

get_train_summary(tag=None)[source]

Get_train_summary is not supported in OpenVINOEstimator

get_validation_summary(tag=None)[source]

Get_validation_summary is not supported in OpenVINOEstimator

load_orca_checkpoint(path, version)[source]

Load_orca_checkpoint is not supported in OpenVINOEstimator

AutoML