Externally hosted experiments
- class previsionio.experiment_version.ExternallyHostedExperimentVersion(**experiment_version_info)
  Bases: previsionio.experiment_version.BaseExperimentVersion
  Class for externally hosted experiment objects.
- best_model
  Get the model with the best predictive performance over all models (including Blend models), where the best performance corresponds to a minimal loss.
  Returns: Model with the best performance in the experiment, or None if no model matched the search filter.
  Return type: Model or None
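The selection rule above (minimal loss wins; None when nothing matches) can be sketched with a local stand-in. FakeModel and its `loss` attribute are illustrative assumptions, not the SDK's Model class:

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class FakeModel:
    # Hypothetical stand-in for previsionio's Model; `loss` is assumed.
    name: str
    loss: float


def pick_best_model(models: List[FakeModel]) -> Optional[FakeModel]:
    # Best performance corresponds to a minimal loss; None when no
    # model matched the search filter.
    if not models:
        return None
    return min(models, key=lambda m: m.loss)


models = [FakeModel("LR", 0.31), FakeModel("XGB-Blend", 0.27), FakeModel("RF", 0.29)]
best = pick_best_model(models)
print(best.name)  # the minimal-loss model, here XGB-Blend
```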
- delete()
  Delete an experiment version from the actual [client] workspace.
  Raises:
  - PrevisionException – If the experiment version does not exist
  - requests.exceptions.ConnectionError – Error processing the request
- delete_prediction(prediction_id: str)
  Delete a prediction in the list for the current experiment from the actual [client] workspace.
  Parameters: prediction_id (str) – Unique id of the prediction to delete
  Returns: Deletion process results
  Return type: dict
- delete_predictions()
  Delete all predictions in the list for the current experiment from the actual [client] workspace.
  Returns: Deletion process results
  Return type: dict
- done
  Get a flag indicating whether or not the experiment is currently done.
  Returns: Done status
  Return type: bool
- fastest_model
  Get the model that predicts with the lowest response time.
  Returns: Model with the lowest response time
  Return type: Model
- classmethod from_id(_id: str) → previsionio.experiment_version.BaseExperimentVersion
  Get an experiment version from the platform by its unique id.
  Parameters: _id (str) – Unique id of the experiment version to retrieve
  Returns: Fetched experiment version
  Return type: BaseExperimentVersion
  Raises: PrevisionException – Any error while fetching data from the platform or parsing the result
- get_holdout_predictions(full: bool = False)
  Retrieve the list of holdout predictions for the current experiment from the client workspace (with the full prediction objects if necessary).
  Parameters: full (bool) – If true, return full holdout prediction objects (else only metadata)
- get_predictions(full: bool = False)
  Retrieve the list of predictions for the current experiment from the client workspace (with the full prediction objects if necessary).
  Parameters: full (bool) – If true, return full prediction objects (else only metadata)
- holdout_dataset
  Get the Dataset object corresponding to the holdout dataset of this experiment version.
  Returns: Associated holdout dataset
  Return type: Dataset
- model_class
- models
  Get the list of models generated for the current experiment version. Only the models that are done training are retrieved.
  Returns: List of models found by the platform for the experiment
  Return type: list(Model)
- new_version()
  Create a new external experiment version from this version (on the platform). The external_models parameter is mandatory; the other parameters are copied from the current version and then overridden by those provided.
  Parameters:
  - external_models (list(tuple)) – The external models to add to the experiment version to create. Each tuple contains 2 items describing an external model as follows:
    - the name you want to give to the model
    - the path to a yaml file containing metadata about the model
  - holdout_dataset (Dataset, optional) – Reference to the holdout dataset object to use as the holdout dataset
  - target_column (str, optional) – The name of the target column for this experiment version
  - metric (metrics.Enum, optional) – Specific metric to use for the experiment version
  - pred_dataset (Dataset, optional) – Reference to the pred dataset object (default: None)
  - description (str, optional) – The description of this experiment version (default: None)
  Returns: Newly created external experiment version object
  Return type: ExternallyHostedExperimentVersion
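As a sketch of the call shape, each two-item tuple in external_models pairs a display name with the path to a YAML metadata file. The names and file paths below are hypothetical, and the call itself is only shown in a comment because it requires a live platform connection:

```python
# Hypothetical external model descriptors: (model name, path to YAML metadata).
external_models = [
    ("my-sklearn-model", "models/sklearn_model.yaml"),
    ("my-onnx-model", "models/onnx_model.yaml"),
]

# Shown but not executed here (needs a configured client and an existing version):
# new_experiment_version = experiment_version.new_version(
#     external_models=external_models,
#     target_column="target",
#     description="second external version",
# )
```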
- print_info()
  Print all info on the experiment.
- running
  Get a flag indicating whether or not the experiment is currently running.
  Returns: Running status
  Return type: bool
- score
  Get the current score of the experiment (i.e. the score of the model currently considered the best performance-wise for this experiment).
  Returns: Experiment score (or infinity if not available)
  Return type: float
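The "infinity if not available" convention can be mirrored with a tiny local helper; `current_score` and its `best_loss` argument are illustrative stand-ins, not part of the SDK:

```python
def current_score(best_loss):
    # Hypothetical helper mirroring the documented behaviour: return the
    # score of the current best model, or infinity when none is available.
    return best_loss if best_loss is not None else float("inf")


print(current_score(0.27))  # 0.27
print(current_score(None))  # inf
```

Using infinity as the sentinel keeps comparisons simple: any real loss compares as strictly better than "no score yet".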
- status
  Get a flag indicating whether or not the experiment is currently running.
  Returns: Running status
  Return type: bool
- stop()
  Stop an experiment (stopping all nodes currently in progress).
- train_dataset
  Get the Dataset object corresponding to the training dataset of the experiment.
  Returns: Associated training dataset
  Return type: Dataset
- update_status()
  Get an update on the status of a resource.
  Parameters: specific_url (str, optional) – Specific (already parametrized) url to fetch the resource from (otherwise the url is built from the resource type and unique _id)
  Returns: Updated status info
  Return type: dict
- wait_until(condition, raise_on_error: bool = True, timeout: float = 14400.0)
  Wait until the condition is fulfilled, then break.
  Parameters:
  - condition (func: (BaseExperimentVersion) -> bool) – Function used to check the break condition
  - raise_on_error (bool, optional) – If true the function will stop on error, otherwise it will continue waiting (default: True)
  - timeout (float, optional) – Maximal amount of time to wait before forcing exit (default: 14400.0)
  Example:
  experiment.wait_until(lambda experimentv: len(experimentv.models) > 3)
  Raises: PrevisionException – If the resource could not be fetched or there was a timeout
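The polling behaviour described above can be mimicked with a plain loop. The stub class, the standalone `wait_until` function, and the fixed polling interval below are assumptions for illustration, not the SDK's actual implementation (which raises PrevisionException rather than TimeoutError):

```python
import time


class StubExperimentVersion:
    # Hypothetical stand-in for an experiment version whose model count
    # grows by one on each status refresh.
    def __init__(self):
        self.models = []

    def update_status(self):
        self.models.append(object())


def wait_until(resource, condition, timeout: float = 14400.0, poll: float = 0.01):
    # Poll until condition(resource) is true, raising on timeout.
    deadline = time.monotonic() + timeout
    while not condition(resource):
        if time.monotonic() > deadline:
            raise TimeoutError("condition not fulfilled before timeout")
        resource.update_status()
        time.sleep(poll)


experiment = StubExperimentVersion()
wait_until(experiment, lambda ev: len(ev.models) > 3, timeout=5.0)
print(len(experiment.models))  # more than 3 once the loop breaks
```

Passing the resource into the condition, as the documented example does with `lambda experimentv: ...`, keeps the break test decoupled from any particular experiment instance.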