Deployed model¶
Prevision.io’s SDK allows you to make predictions from a model deployed with the Prevision.io platform.
import previsionio as pio

# Initialize the deployed model object from the URL of the model,
# your client id and client secret for this model, and your credentials
model = pio.DeployedModel(prevision_app_url, client_id, client_secret)

# Make a prediction
prediction, confidence, explain = model.predict(
    predict_data={'feature1': 1, 'feature2': 2},
    use_confidence=True,
    explain=True,
)
class previsionio.deployed_model.DeployedModel(prevision_app_url: str, client_id: str, client_secret: str, prevision_token_url: str = None)¶
DeployedModel class to interact with a deployed model.
Parameters:
- prevision_app_url (str) – URL of the app. Can be retrieved on your app dashboard.
- client_id (str) – Your app client id. Can be retrieved on your app dashboard.
- client_secret (str) – Your app client secret. Can be retrieved on your app dashboard.
- prevision_token_url (str) – URL to get the OAuth2 token of the deployed model. Required only when working on-premise (custom IP address); otherwise it is retrieved automatically.
predict(predict_data: Dict, use_confidence: bool = False, explain: bool = False)¶
Get a prediction on a single instance using the best model of the experiment.
Parameters:
- predict_data (dict) – Features and values of the instance to predict on.
- use_confidence (bool, optional) – Whether to return a confidence estimate along with the prediction (default: False).
- explain (bool, optional) – Whether to return an explanation along with the prediction (default: False).
Returns: Tuple containing the prediction value, confidence and explanation. For a regression problem type, the confidence is a list. For a multiclassification problem type, the prediction value is a string.
Return type: tuple
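The returned tuple can be unpacked directly. A minimal sketch of handling the result, using a stand-in object in place of a live deployment so the example is self-contained (the stub's return values are assumptions for illustration, not real model output):

```python
# Stand-in for a deployed regression model: predict() returns the same
# (prediction, confidence, explain) tuple shape as DeployedModel.predict.
class StubDeployedModel:
    def predict(self, predict_data, use_confidence=False, explain=False):
        prediction = 42.0  # predicted value (a float for regression)
        # For a regression problem type, confidence is a list (e.g. an interval)
        confidence = [40.1, 43.9] if use_confidence else None
        # Explanation: per-feature contributions (values here are made up)
        explanation = {'feature1': 0.7, 'feature2': 0.3} if explain else None
        return prediction, confidence, explanation

model = StubDeployedModel()
prediction, confidence, explain = model.predict(
    predict_data={'feature1': 1, 'feature2': 2},
    use_confidence=True,
    explain=True,
)
print(prediction)   # 42.0
print(confidence)   # [40.1, 43.9] -- a list for regression problems
```

When use_confidence and explain are left at their defaults, the corresponding tuple elements carry no extra information, so the tuple shape stays the same regardless of the flags.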
request(endpoint: str, method, files: Dict = None, data: Dict = None, allow_redirects: bool = True, content_type: str = None, check_response: bool = True, message_prefix: str = None, **requests_kwargs)¶
Make a request on the desired endpoint with the specified method & data.
Requires initialization.
Parameters:
- endpoint (str) – Endpoint to request, relative to the app URL.
- method – HTTP method to use for the request.
- files (dict, optional) – Files to send with the request.
- data (dict, optional) – Data to send with the request.
- allow_redirects (bool, optional) – Whether to follow redirects (default: True).
- content_type (str, optional) – Content type of the request.
- check_response (bool, optional) – Whether to check the response status (default: True).
- message_prefix (str, optional) – Prefix for error messages.
Returns: request response
Raises: Exception – Error if the URL or token is not configured
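request gives low-level access to any endpoint of the deployed app, with authentication handled by the initialized model object. A hedged sketch of such a call, again using a stand-in object so the example runs without a live deployment; the '/healthz' path and the 'GET' method value are assumptions for illustration, not documented routes of the app:

```python
# Stand-in exposing the same request(endpoint, method, ...) signature as
# DeployedModel.request; a real call would hit the deployed app over HTTP
# with the model's OAuth2 token attached.
class StubDeployedModel:
    def request(self, endpoint, method, check_response=True, **requests_kwargs):
        class Response:
            status_code = 200
        return Response()

model = StubDeployedModel()
# Hypothetical health-check call: endpoint and method value are assumptions.
# check_response=False skips the SDK's own status check so we can inspect it.
response = model.request('/healthz', method='GET', check_response=False)
print(response.status_code)  # 200
```

Because request requires the model to be initialized, calling it on an object whose URL or token is not configured raises an Exception, as noted above.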