Experiment Deployment

class previsionio.experiment_deployment.ExperimentDeployment(_id: str, name: str, experiment_id: str, current_version: int, versions: List[Dict], deploy_state: str, current_type_violation_policy: str, access_type: str, project_id: str, training_type: str, models: List[Dict], url: str = None, **kwargs)

Bases: previsionio.experiment_deployment.BaseExperimentDeployment

create_api_key() → Dict

Create an API key for the experiment deployment in the actual [client] workspace.

Raises:
  • PrevisionException – If the experiment deployment does not exist
  • requests.exceptions.ConnectionError – Error processing the request
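Example (a minimal sketch; the deployment id is a placeholder):

from previsionio.experiment_deployment import ExperimentDeployment

deployment = ExperimentDeployment.from_id('<deployment_id>')
key_info = deployment.create_api_key()  # dict describing the newly created key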
delete()

Delete an experiment deployment from the actual [client] workspace.

Raises:
  • PrevisionException – If the experiment deployment does not exist
  • requests.exceptions.ConnectionError – Error processing the request
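Example (assuming a deployment object fetched as above):

deployment.delete()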
classmethod from_id(_id: str) → previsionio.experiment_deployment.BaseExperimentDeployment

Get a deployed experiment from the platform by its unique id.

Parameters:_id (str) – Unique id of the experiment deployment to retrieve
Returns:Fetched deployed experiment
Return type:BaseExperimentDeployment
Raises:PrevisionException – Any error while fetching data from the platform or parsing result
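Example (a sketch; the id is a placeholder and the attribute names follow the constructor signature above):

deployment = ExperimentDeployment.from_id('<deployment_id>')
print(deployment.name, deployment.deploy_state)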
get_api_keys() → List[Dict]

Fetch the API keys (client id and client secret) of the experiment deployment from the actual [client] workspace.

Raises:
  • PrevisionException – If the experiment deployment does not exist
  • requests.exceptions.ConnectionError – Error processing the request
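Example (a sketch; the exact fields of each returned dict are not guaranteed here):

for api_key in deployment.get_api_keys():
    print(api_key)  # each entry holds a client id / client secret pair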
classmethod list(project_id: str, all: bool = True) → List[previsionio.experiment_deployment.ExperimentDeployment]

List all the available experiment deployments in the current active [client] workspace.

Warning

Contrary to the parent list() function, this method returns actual ExperimentDeployment objects rather than plain dictionaries with the corresponding data.

Parameters:
  • project_id (str) – project id
  • all (bool, optional) – Whether to force the SDK to load all items of the given type (by calling the paginated API several times). Otherwise, the query will only return the first page of results.
Returns:Fetched experiment deployment objects
Return type:list(ExperimentDeployment)
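Example (the project id is a placeholder):

deployments = ExperimentDeployment.list('<project_id>')
for deployment in deployments:
    print(deployment.name)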

list_predictions() → List[previsionio.prediction.DeploymentPrediction]

List all the available predictions in the current active [client] workspace.

Returns:Fetched deployed prediction objects
Return type:list(DeploymentPrediction)
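Example (assuming a deployment object fetched with from_id()):

predictions = deployment.list_predictions()  # list of DeploymentPrediction objects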
new_version(name: str, main_model: previsionio.model.Model, challenger_model: previsionio.model.Model = None) → previsionio.experiment_deployment.BaseExperimentDeployment

Create a new experiment deployment version.

Parameters:
  • name (str) – experiment deployment name
  • main_model (Model) – main model
  • challenger_model (Model, optional) – challenger model. Main and challenger models should be in the same experiment
Returns:The registered experiment deployment object in the current project
Return type:BaseExperimentDeployment

Raises:
  • PrevisionException – Any error while creating the experiment deployment on the platform or parsing the result
  • Exception – For any other unknown error
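Example (a sketch; main_model and challenger_model are assumed to be existing Model objects from the same experiment):

new_deployment = deployment.new_version(
    'my deployment v2',
    main_model=main_model,
    challenger_model=challenger_model,  # optional
)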
predict_from_dataset(dataset: previsionio.dataset.Dataset) → previsionio.prediction.DeploymentPrediction

Make a prediction for a dataset stored in the current active [client] workspace (using the current SDK dataset object).

Parameters:dataset (Dataset) – Dataset resource to make a prediction for
Returns:The registered prediction object in the current workspace
Return type:DeploymentPrediction
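Example (a sketch; Dataset.from_id is assumed here and the dataset id is a placeholder):

from previsionio.dataset import Dataset

dataset = Dataset.from_id('<dataset_id>')
prediction = deployment.predict_from_dataset(dataset)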
update_status(specific_url: str = None) → Dict

Get an update on the status of a resource.

Parameters:specific_url (str, optional) – Specific (already parametrized) url to fetch the resource from (otherwise the url is built from the resource type and unique _id)
Returns:Updated status info
Return type:dict
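Example (a sketch):

status = deployment.update_status()
print(status)  # dict with the latest status information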
wait_until(condition, timeout: float = 3600.0)

Wait until the condition is fulfilled, then return.

Parameters:
  • condition (func: (BaseExperimentVersion) -> bool) – Function to use to check the break condition
  • raise_on_error (bool, optional) – If true then the function will stop on error, otherwise it will continue waiting (default: True)
  • timeout (float, optional) – Maximal amount of time to wait before forcing exit

Example:

experiment.wait_until(lambda experimentv: len(experimentv.models) > 3)
Raises:PrevisionException – If the resource could not be fetched or there was a timeout.
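Example (a deployment-oriented sketch; the 'done' value for deploy_state is an assumption, not documented above):

deployment.wait_until(lambda d: d.deploy_state == 'done')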
class previsionio.experiment_deployment.ExternallyHostedModelDeployment(_id: str, name: str, experiment_id: str, current_version: int, versions: List[Dict], deploy_state: str, project_id: str, training_type: str, models: List[Dict], current_type_violation_policy: str, **kwargs)

Bases: previsionio.experiment_deployment.BaseExperimentDeployment

delete()

Delete an experiment deployment from the actual [client] workspace.

Raises:
  • PrevisionException – If the experiment deployment does not exist
  • requests.exceptions.ConnectionError – Error processing the request
classmethod from_id(_id: str) → previsionio.experiment_deployment.BaseExperimentDeployment

Get a deployed experiment from the platform by its unique id.

Parameters:_id (str) – Unique id of the experiment deployment to retrieve
Returns:Fetched deployed experiment
Return type:BaseExperimentDeployment
Raises:PrevisionException – Any error while fetching data from the platform or parsing result
classmethod list(project_id: str, all: bool = True) → List[previsionio.experiment_deployment.ExperimentDeployment]

List all the available experiment deployments in the current active [client] workspace.

Warning

Contrary to the parent list() function, this method returns actual ExperimentDeployment objects rather than plain dictionaries with the corresponding data.

Parameters:
  • project_id (str) – project id
  • all (bool, optional) – Whether to force the SDK to load all items of the given type (by calling the paginated API several times). Otherwise, the query will only return the first page of results.
Returns:Fetched experiment deployment objects
Return type:list(ExperimentDeployment)

list_log_bulk_predictions() → List[Dict]

List all the available log bulk predictions.

Returns:Fetched log bulk predictions
Return type:list(dict)
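Example (a minimal sketch; the deployment id is a placeholder):

from previsionio.experiment_deployment import ExternallyHostedModelDeployment

ext_deployment = ExternallyHostedModelDeployment.from_id('<deployment_id>')
logs = ext_deployment.list_log_bulk_predictions()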
log_bulk_prediction(input_file_path: str, output_file_path: str, model_role: str = 'main') → Dict

Log bulk prediction from local parquet files.

Parameters:
  • input_file_path (str) – Path to an input parquet file
  • output_file_path (str) – Path to an output parquet file
  • model_role (str, optional) – main / challenger
Raises:
  • PrevisionException – If error while logging bulk prediction
  • requests.exceptions.ConnectionError – Error processing the request
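Example (a sketch; the parquet file paths are placeholders):

ext_deployment.log_bulk_prediction(
    input_file_path='input.parquet',
    output_file_path='output.parquet',
    model_role='main',
)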
log_unit_prediction(_input: Dict, output: Dict, model_role: str = 'main', deployment_version: int = None) → Dict

Log unit prediction.

Parameters:
  • _input (dict) – input prediction data
  • output (dict) – output prediction data
  • model_role (str, optional) – main / challenger
  • deployment_version (int, optional) – deployment version to use. Last version is used by default
Raises:
  • PrevisionException – If error while logging unit prediction
  • requests.exceptions.ConnectionError – Error processing the request
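Example (a sketch; the input/output dictionaries are placeholders whose keys depend on the deployed model's schema):

ext_deployment.log_unit_prediction(
    _input={'feature_1': 42.0},
    output={'prediction': 1},
    model_role='main',
)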
new_version(name: str, main_model: previsionio.model.Model, challenger_model: previsionio.model.Model = None) → previsionio.experiment_deployment.BaseExperimentDeployment

Create a new experiment deployment version.

Parameters:
  • name (str) – experiment deployment name
  • main_model (Model) – main model
  • challenger_model (Model, optional) – challenger model. Main and challenger models should be in the same experiment
Returns:The registered experiment deployment object in the current project
Return type:BaseExperimentDeployment

Raises:
  • PrevisionException – Any error while creating the experiment deployment on the platform or parsing the result
  • Exception – For any other unknown error
update_status(specific_url: str = None) → Dict

Get an update on the status of a resource.

Parameters:specific_url (str, optional) – Specific (already parametrized) url to fetch the resource from (otherwise the url is built from the resource type and unique _id)
Returns:Updated status info
Return type:dict
wait_until(condition, timeout: float = 3600.0)

Wait until the condition is fulfilled, then return.

Parameters:
  • condition (func: (BaseExperimentVersion) -> bool) – Function to use to check the break condition
  • raise_on_error (bool, optional) – If true then the function will stop on error, otherwise it will continue waiting (default: True)
  • timeout (float, optional) – Maximal amount of time to wait before forcing exit

Example:

experiment.wait_until(lambda experimentv: len(experimentv.models) > 3)
Raises:PrevisionException – If the resource could not be fetched or there was a timeout.