neuraxle.metaopt.data.reporting
Module-level documentation for neuraxle.metaopt.data.reporting. An inheritance diagram, including dependencies to other base modules of Neuraxle, accompanies this page (not reproduced here).
Neuraxle's AutoML Metric Reporting classes.
The classes are split by reporting level for the metric analysis:
ProjectReport
ClientReport
RoundReport
TrialReport
TrialSplitReport
These AutoML reports are used to extract information from the nested dataclasses, for instance to create visualizations.
Classes
neuraxle.metaopt.data.reporting._filter_df_hps(wildcarded_hps: OrderedDict[str, Any], wildcards_to_keep: List[str] = None) → OrderedDict[str, Any]
Filters the hyperparameters to keep only the wildcards listed in wildcards_to_keep. Also converts non-numeric hyperparameter values to strings.
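The filtering described above can be sketched as follows. This is an illustrative, stdlib-only re-implementation written from the docstring, not Neuraxle's actual code; the exact matching and stringification rules are assumptions.

```python
from collections import OrderedDict
from numbers import Number
from typing import Any, List, Optional


def filter_hps_sketch(
    wildcarded_hps: "OrderedDict[str, Any]",
    wildcards_to_keep: Optional[List[str]] = None,
) -> "OrderedDict[str, Any]":
    # Keep only the wildcarded hyperparameter keys that were asked for;
    # if no filter is given, keep everything.
    kept = OrderedDict(
        (k, v) for k, v in wildcarded_hps.items()
        if wildcards_to_keep is None or k in wildcards_to_keep
    )
    # Stringify non-numeric values so they fit uniformly in a column.
    return OrderedDict(
        (k, v if isinstance(v, Number) else str(v)) for k, v in kept.items()
    )
```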
class neuraxle.metaopt.data.reporting.BaseReport(dc: SubDataclassT)
Bases: typing.Generic

A report wraps a dataclass of the same subclass level as itself, so that it can dig into that dataclass to observe it, for instance to generate statistics or query its information. The dataclasses represent the results of one or several AutoML optimization rounds. These AutoML reports are used to extract information from the nested dataclasses, for instance to create visualizations.

See also

neuraxle.metaopt.data.vanilla.BaseDataclass
neuraxle.metaopt.data.reporting.BaseAggregate
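The report-wraps-a-dataclass pattern can be sketched as below. The dataclass names and fields here are simplified stand-ins invented for illustration, not Neuraxle's actual dataclasses.

```python
from dataclasses import dataclass, field
from typing import List


# Hypothetical, simplified stand-ins for the nested result dataclasses.
@dataclass
class TrialDataclass:
    trial_number: int
    validation_scores: List[float] = field(default_factory=list)


@dataclass
class RoundDataclass:
    trials: List[TrialDataclass] = field(default_factory=list)


class RoundReportSketch:
    """Wraps a dataclass at its own level and queries it, as a report does."""

    def __init__(self, dc: RoundDataclass):
        self._dataclass = dc

    def list_avg_validation_scores(self) -> List[float]:
        # Dig into the nested dataclass to compute a statistic per trial.
        return [
            sum(t.validation_scores) / len(t.validation_scores)
            for t in self._dataclass.trials if t.validation_scores
        ]


report = RoundReportSketch(RoundDataclass(trials=[
    TrialDataclass(0, [0.8, 0.9]),
    TrialDataclass(1, [0.7, 0.7]),
]))
```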
class neuraxle.metaopt.data.reporting.ClientReport(dc: SubDataclassT)
Bases: neuraxle.metaopt.data.reporting.BaseReport

CLIENT_ID_COLUMN_NAME = 'client_name'
class neuraxle.metaopt.data.reporting.RoundReport(dc: SubDataclassT)
Bases: neuraxle.metaopt.data.reporting.BaseReport

ROUND_ID_COLUMN_NAME = 'round_number'

SUMMARY_STATUS_COLUMNS_NAME = 'status'

main_metric_name

get_best_trial(metric_name: str = None) → Optional[neuraxle.metaopt.data.reporting.TrialReport]
Return the trial report with the best score among all trials, provided that the trial has a score and was successful.

get_best_trial_id(metric_name: str = None) → Union[str, int, None]
Get the best trial id among all trials. Returns None if there is no successful trial with such a score.

get_best_hyperparams(metric_name: str = None) → neuraxle.hyperparams.space.HyperparameterSamples
Get the best hyperparams among all trials.

is_higher_score_better(metric_name: str = None) → bool
Return True if a higher score is better. If metric_name is None, the optimizer's metric is used.

get_n_val_splits()
Finds the number of validation splits on record in this round's first trial.

best_result_summary(metric_name: str = None, use_wildcards: bool = False) → Tuple[float, int, neuraxle.base.TrialStatus, OrderedDict[str, Any]]
Return the best result summary for the given metric, as the tuple (score, trial_number, status, hyperparams_flat_dict).

summary(metric_name: str = None, use_wildcards: bool = False) → List[Tuple[float, int, OrderedDict[str, Any]]]
Get a summary of the round. Values in the returned triplet tuples are (score, trial_number, hyperparams), sorted by score such that the best score is first.

get_all_hyperparams(as_flat: bool = True, use_wildcards: bool = False) → List[OrderedDict[str, Any]]
Get the list of hyperparams of all trials.

list_hyperparameters_wildcards(discard_singles=False) → List[str]
Returns a list of all the hyperparameter wildcards used in the round. Discarding singles prunes out the hyperparameters whose values never vary.
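The discard_singles pruning can be sketched as below; this is an illustrative stdlib-only re-implementation written from the docstring, not the library's code, and the per-trial hyperparameter dicts are hypothetical input data.

```python
from typing import Any, Dict, List


def list_wildcards_sketch(
    trials_hps: List[Dict[str, Any]], discard_singles: bool = False
) -> List[str]:
    # Collect every hyperparameter wildcard seen across the round's trials,
    # preserving first-seen order.
    wildcards: List[str] = []
    for hps in trials_hps:
        for key in hps:
            if key not in wildcards:
                wildcards.append(key)
    if discard_singles:
        # Prune wildcards whose values never vary across trials.
        wildcards = [
            w for w in wildcards
            if len({repr(hps.get(w)) for hps in trials_hps}) > 1
        ]
    return wildcards


trials_hps = [
    {"*__lr": 0.1, "*__depth": 3},
    {"*__lr": 0.01, "*__depth": 3},
]
```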
list_successful_avg_validation_scores() → List[float]
Returns a list of all the average validation scores on record, for successful trials only.

successful_trials
Returns a list of all the successful trials on record.
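The best-score-first ordering used by the round summary can be sketched as follows. This is an illustrative stdlib-only sketch, not Neuraxle's implementation; the triplet input and the higher_score_is_better flag mirror the summary and is_higher_score_better docstrings above.

```python
from typing import Any, Dict, List, Tuple


def round_summary_sketch(
    trials: List[Tuple[float, int, Dict[str, Any]]],
    higher_score_is_better: bool = True,
) -> List[Tuple[float, int, Dict[str, Any]]]:
    # Sort (score, trial_number, hyperparams) triplets so the best score is
    # first: descending when higher is better, ascending otherwise.
    return sorted(trials, key=lambda t: t[0], reverse=higher_score_is_better)


trials = [
    (0.78, 0, {"*__lr": 0.1}),
    (0.91, 1, {"*__lr": 0.01}),
    (0.85, 2, {"*__lr": 0.05}),
]
```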
class neuraxle.metaopt.data.reporting.TrialReport(dc: SubDataclassT)
Bases: neuraxle.metaopt.data.reporting.BaseReport

TRIAL_ID_COLUMN_NAME = 'trial_number'

get_hyperparams() → neuraxle.hyperparams.space.RecursiveDict
Get the hyperparameters of the trial.

get_avg_validation_score(metric_name: str, over_time=False) → Union[float, List[float], None]
Returns the average, across validation splits, of each split's best validation score for the specified metric.

Parameters:
metric_name – The name of the metric to use.
over_time – If True, return all the average scores over time instead of the best average score.

Returns:
The validation score.
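The averaging described above can be sketched as below; an illustrative stdlib-only sketch written from the docstring, not Neuraxle's code. The per-split, per-epoch score layout and the higher_score_is_better flag are assumptions.

```python
from typing import List, Optional, Union


def avg_validation_score_sketch(
    splits_scores_over_time: List[List[float]],
    higher_score_is_better: bool = True,
    over_time: bool = False,
) -> Union[float, List[float], None]:
    # splits_scores_over_time[i][e] is split i's validation score at epoch e.
    if not splits_scores_over_time:
        return None
    n_splits = len(splits_scores_over_time)
    if over_time:
        # One average per epoch, across splits (epochs assumed aligned).
        n_epochs = min(len(s) for s in splits_scores_over_time)
        return [
            sum(s[e] for s in splits_scores_over_time) / n_splits
            for e in range(n_epochs)
        ]
    # Average of each split's best score.
    best = max if higher_score_is_better else min
    return sum(best(s) for s in splits_scores_over_time) / n_splits
```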
class neuraxle.metaopt.data.reporting.TrialSplitReport(dc: SubDataclassT)
Bases: neuraxle.metaopt.data.reporting.BaseReport

TRIAL_SPLIT_ID_COLUMN_NAME = 'split_number'

get_hyperparams() → neuraxle.hyperparams.space.RecursiveDict
Get the hyperparameters of the trial.
class neuraxle.metaopt.data.reporting.MetricResultsReport(dc: SubDataclassT)
Bases: neuraxle.metaopt.data.reporting.BaseReport

METRIC_COLUMN_NAME = 'metric'

EPOCH_COLUMN_NAME = 'epoch'

TRAIN_VAL_COLUMN_NAME = 'phase'

metric_name

get_valid_scores() → List[float]
Return the validation scores for the given scoring metric.

get_final_validation_score() → float
Return the latest validation score for the given scoring metric.

get_best_validation_score() → Optional[float]
Return the best validation score for the given scoring metric.
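The final-versus-best distinction above can be sketched as follows: the final score is simply the latest one on record, while the best score depends on whether the metric is maximized or minimized. This is an illustrative stdlib-only sketch, not Neuraxle's implementation, and the higher_score_is_better flag is an assumption.

```python
from typing import List, Tuple


def final_and_best_scores_sketch(
    valid_scores: List[float], higher_score_is_better: bool = True
) -> Tuple[float, float]:
    # Latest score on record.
    final = valid_scores[-1]
    # Best score, respecting the metric's direction.
    best = max(valid_scores) if higher_score_is_better else min(valid_scores)
    return final, best
```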