neuraxle.metaopt.trial

Neuraxle’s Trial Classes

Trial objects used by AutoML algorithm classes.

Classes

TRIAL_STATUS

Trial status.

Trial(hyperparams, main_metric_name, …)

Trial data container for AutoML.

TrialSplit(trial, split_number, …)

One split of a trial.

Trials(trials)

Data object containing AutoML trials.

class neuraxle.metaopt.trial.TRIAL_STATUS[source]

Trial status.

FAILED = 'failed'[source]
PLANNED = 'planned'[source]
STARTED = 'started'[source]
SUCCESS = 'success'[source]
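A stand-alone Python equivalent of these four status values (the real class is the TRIAL_STATUS above; this toy version exists only so the values can be exercised in isolation):

```python
from enum import Enum

class TrialStatus(Enum):
    # Mirrors the documented TRIAL_STATUS members and their string values
    FAILED = 'failed'
    PLANNED = 'planned'
    STARTED = 'started'
    SUCCESS = 'success'

print(TrialStatus.SUCCESS.value)  # success
```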
class neuraxle.metaopt.trial.Trial(hyperparams: neuraxle.hyperparams.space.HyperparameterSamples, main_metric_name: str, save_trial_function: Callable, status: Optional[neuraxle.metaopt.trial.TRIAL_STATUS] = None, pipeline: neuraxle.base.BaseStep = None, validation_splits: List[TrialSplit] = None, cache_folder: str = None, error: str = None, error_traceback: str = None, start_time: datetime.datetime = None, end_time: datetime.datetime = None)[source]

Trial data container for AutoML. A Trial contains the results for each validation split. Each trial split contains both the training set results, and the validation set results.

See also

AutoML, TrialSplit, HyperparamsRepository, BaseHyperparameterSelectionStrategy, RandomSearchHyperparameterSelectionStrategy, DataContainer

_get_trial_hash(hp_dict: Dict[KT, VT])[source]

Hash hyperparams with md5 to create a trial hash.

Parameters

hp_dict – hyperparams dict

Returns

trial hash (md5 hex digest of the hyperparams)
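The docstring only states that md5 is used; a minimal sketch of hashing a hyperparams dict this way (the exact serialization Neuraxle applies before hashing is an assumption):

```python
import hashlib

def trial_hash(hp_dict: dict) -> str:
    # Serialize the hyperparams dict to a string, then md5 it;
    # the hex digest can serve as a stable trial identifier.
    return hashlib.md5(str(hp_dict).encode('utf-8')).hexdigest()

h = trial_hash({'learning_rate': 0.01, 'n_estimators': 100})
print(h)  # a 32-character hex digest
```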

static from_json(update_trial_function: Callable, trial_json: Dict[KT, VT]) → neuraxle.metaopt.trial.Trial[source]
get_trained_pipeline(split_number: int = 0)[source]

Get the trained pipeline of the given validation split.

Parameters

split_number – split number to get trained pipeline from

Returns

get_validation_score() → float[source]

Return the latest validation score for the main scoring metric. Returns the average score for all validation splits.

Returns

validation score
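The averaging over splits can be illustrated with hypothetical scores (one latest score per validation split):

```python
# Hypothetical latest validation scores, one per validation split
split_scores = [0.82, 0.78, 0.85]

# The trial-level validation score is the average over all splits
trial_score = sum(split_scores) / len(split_scores)
print(round(trial_score, 4))  # 0.8167
```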

is_higher_score_better() → bool[source]

Return True if higher scores are better for the main metric.

Returns

if higher score is better

new_validation_split(pipeline: neuraxle.base.BaseStep, delete_pipeline_on_completion: bool = True) → neuraxle.metaopt.trial.TrialSplit[source]

Create a new trial split. A trial has a single split when the validation splitter function is a plain validation split, and one or more splits when it is kfold_cross_validation_split.

Parameters
  • pipeline – pipeline to fit on this trial split

  • delete_pipeline_on_completion – whether to delete the pipeline upon completion

Returns

one trial split
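A toy k-fold index generator illustrates why kfold_cross_validation_split yields several trial splits while a plain validation split yields one (this splitter is illustrative, not Neuraxle's implementation):

```python
def kfold_indices(n_samples: int, k: int):
    # Yield (train_indices, validation_indices) pairs, one per fold.
    fold_size = n_samples // k
    indices = list(range(n_samples))
    for i in range(k):
        validation = indices[i * fold_size:(i + 1) * fold_size]
        train = indices[:i * fold_size] + indices[(i + 1) * fold_size:]
        yield train, validation

splits = list(kfold_indices(10, 5))
print(len(splits))  # 5 -- one TrialSplit would be created per fold
```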

save_model()[source]

Save fitted model in the trial hash folder.

save_trial() → neuraxle.metaopt.trial.Trial[source]

Save the trial to the hyperparams repository.

Returns

set_failed(error: Exception) → neuraxle.metaopt.trial.Trial[source]

Set failed trial with exception.

Parameters

error – caught exception

Returns

self
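The typical calling pattern wraps the fit in a try/except and hands the exception to set_failed; a minimal stand-in mirroring the documented fields (status, error, error_traceback):

```python
import traceback

class ToyTrial:
    # Toy stand-in for Trial, keeping only the failure-related fields
    def set_failed(self, error: Exception) -> 'ToyTrial':
        self.status = 'failed'
        self.error = str(error)
        self.error_traceback = traceback.format_exc()
        return self  # returning self enables method chaining

trial = ToyTrial()
try:
    raise ValueError('fit diverged')  # stands in for a failing pipeline fit
except ValueError as e:
    trial.set_failed(e)

print(trial.status, trial.error)  # failed fit diverged
```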

set_hyperparams(hyperparams: neuraxle.hyperparams.space.HyperparameterSamples) → neuraxle.metaopt.trial.Trial[source]

Set trial hyperparams.

Parameters

hyperparams – trial hyperparams

Returns

self

set_main_metric_name(name: str) → neuraxle.metaopt.trial.Trial[source]

Set trial main metric name.

Parameters

name – main metric name

Returns

self

set_success() → neuraxle.metaopt.trial.Trial[source]

Set trial status to success.

Returns

self

to_json()[source]
update_final_trial_status()[source]

Set the final trial status according to the status of its validation splits.

class neuraxle.metaopt.trial.TrialSplit(trial: neuraxle.metaopt.trial.Trial, split_number: int, main_metric_name: str, status: Optional[neuraxle.metaopt.trial.TRIAL_STATUS] = None, error: Exception = None, error_traceback: str = None, metrics_results: Dict[KT, VT] = None, start_time: datetime.datetime = None, end_time: datetime.datetime = None, pipeline: neuraxle.base.BaseStep = None, delete_pipeline_on_completion: bool = True)[source]

One split of a trial.

See also

AutoML, HyperparamsRepository, BaseHyperparameterSelectionStrategy, RandomSearchHyperparameterSelectionStrategy, DataContainer

add_metric_results_train(name: str, score: float, higher_score_is_better: bool)[source]

Add a train metric result in the metric results dictionary.

Parameters
  • name – name of the metric

  • score – score

  • higher_score_is_better – if higher score is better or not for this metric

Returns

add_metric_results_validation(name: str, score: float, higher_score_is_better: bool)[source]

Add a validation metric result in the metric results dictionary.

Parameters
  • name – name of the metric

  • score – score

  • higher_score_is_better – if higher score is better or not for this metric

Returns
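The two methods above append scores into a shared metric-results dictionary; a toy stand-in (the nested field names are assumptions, not Neuraxle's exact schema):

```python
metrics_results = {}

def add_metric_results_train(name: str, score: float, higher_score_is_better: bool):
    # Create the metric entry on first use, then append the train score.
    entry = metrics_results.setdefault(name, {
        'train_values': [],
        'validation_values': [],
        'higher_score_is_better': higher_score_is_better,
    })
    entry['train_values'].append(score)

add_metric_results_train('mse', 0.12, higher_score_is_better=False)
add_metric_results_train('mse', 0.10, higher_score_is_better=False)
print(metrics_results['mse']['train_values'])  # [0.12, 0.1]
```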

fit_trial_split(train_data_container: neuraxle.data_container.DataContainer, context: neuraxle.base.ExecutionContext) → neuraxle.metaopt.trial.TrialSplit[source]

Fit the trial split pipeline with the training data container.

Parameters
  • train_data_container – training data container

  • context – execution context

Returns

trial split with its fitted pipeline.

static from_json(trial: neuraxle.metaopt.trial.Trial, trial_split_json: Dict[KT, VT]) → neuraxle.metaopt.trial.TrialSplit[source]

Create a trial split object from json.

Parameters
  • trial – parent trial

  • trial_split_json – trial json

Returns

get_metric_names() → List[str][source]
get_metric_train_results(metric_name)[source]
get_metric_validation_results(metric_name)[source]
get_pipeline()[source]

Return the trained pipeline.

Returns

get_validation_score()[source]

Return the latest validation score for the main scoring metric.

Returns

get_validation_scores()[source]

Return the validation scores for the main scoring metric.

Returns

is_higher_score_better() → bool[source]

Return True if higher scores are better for the main metric.

Returns

is_new_best_score()[source]

Return True if the latest validation score is the new best score.

Returns
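The comparison behind is_new_best_score can be sketched as follows (a sketch of the logic under the metric's score direction, not the actual implementation):

```python
def is_new_best_score(scores: list, higher_score_is_better: bool = True) -> bool:
    # Compare the latest score against the best of the earlier ones,
    # respecting whether higher or lower is better for this metric.
    previous = scores[:-1]
    if not previous:
        return True  # the first score is trivially the best so far
    best_previous = max(previous) if higher_score_is_better else min(previous)
    if higher_score_is_better:
        return scores[-1] > best_previous
    return scores[-1] < best_previous

print(is_new_best_score([0.70, 0.75, 0.80]))  # True
print(is_new_best_score([0.30, 0.25, 0.28], higher_score_is_better=False))  # False
```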

is_success()[source]

Return True if the trial split status is success.

predict_with_pipeline(data_container: neuraxle.data_container.DataContainer, context: neuraxle.base.ExecutionContext) → neuraxle.data_container.DataContainer[source]

Predict data with the fitted trial split pipeline.

Parameters
  • data_container – data container to predict

  • context – execution context

Returns

predicted data container

save_parent_trial() → neuraxle.metaopt.trial.TrialSplit[source]

Save parent trial.

Returns

self

set_failed(error: Exception) → neuraxle.metaopt.trial.TrialSplit[source]

Set failed trial with exception.

Parameters

error – caught exception

Returns

self

set_main_metric_name(name: str) → neuraxle.metaopt.trial.TrialSplit[source]

Set main metric name.

Parameters

name – main metric name.

Returns

self

set_success() → neuraxle.metaopt.trial.TrialSplit[source]

Set trial status to success.

Returns

self

to_json() → dict[source]

Return the trial split in JSON format.

Returns

class neuraxle.metaopt.trial.Trials(trials: List[neuraxle.metaopt.trial.Trial] = None)[source]

Data object containing AutoML trials.

See also

AutoMLSequentialWrapper, RandomSearch, HyperparamsRepository, MetaStepMixin, BaseStep

append(trial: neuraxle.metaopt.trial.Trial)[source]

Add a new trial.

Parameters

trial – new trial

Returns

filter(status: neuraxle.metaopt.trial.TRIAL_STATUS) → neuraxle.metaopt.trial.Trials[source]

Get all the trials with the given trial status.

Parameters

status – trial status

Returns
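Filtering by status amounts to a list comprehension over the stored trials; a toy version using (id, status) pairs and the documented status strings:

```python
trials = [(1, 'success'), (2, 'failed'), (3, 'success'), (4, 'planned')]

def filter_by_status(trials, status):
    # Keep only the trials whose status matches.
    return [t for t in trials if t[1] == status]

print(filter_by_status(trials, 'success'))  # [(1, 'success'), (3, 'success')]
```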

get_best_hyperparams() → neuraxle.hyperparams.space.HyperparameterSamples[source]

Get best hyperparams from all trials.

Returns
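Picking the best hyperparams reduces to an argmax (or argmin) over trial scores; a sketch with hypothetical results, assuming a higher score is better:

```python
# Hypothetical (hyperparams, validation score) pairs from finished trials
results = [
    ({'lr': 0.1}, 0.78),
    ({'lr': 0.01}, 0.85),
    ({'lr': 0.001}, 0.80),
]

best_hyperparams, best_score = max(results, key=lambda r: r[1])
print(best_hyperparams)  # {'lr': 0.01}
```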

get_metric_names() → List[str][source]
get_number_of_split()[source]
is_higher_score_better() → bool[source]

Return True if a higher score is better for the main metric.

Returns

split_good_and_bad_trials(quantile_threshold: float, number_of_good_trials_max_cap: int) → Tuple[neuraxle.metaopt.trial.Trials, neuraxle.metaopt.trial.Trials][source]
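One plausible reading of this signature: rank the trials, take the best quantile as the "good" group, and cap that group's size (the actual split criterion may differ):

```python
def split_good_and_bad(scores, quantile_threshold, number_of_good_trials_max_cap,
                       higher_is_better=True):
    # Rank trial scores from best to worst, keep the top quantile as "good",
    # capped at number_of_good_trials_max_cap; the rest are "bad".
    ranked = sorted(scores, reverse=higher_is_better)
    n_good = min(round(len(ranked) * quantile_threshold),
                 number_of_good_trials_max_cap)
    return ranked[:n_good], ranked[n_good:]

good, bad = split_good_and_bad([0.9, 0.5, 0.7, 0.8, 0.6],
                               quantile_threshold=0.4,
                               number_of_good_trials_max_cap=10)
print(good)  # [0.9, 0.8]
```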