Deployed model
Prevision.io’s SDK allows you to make predictions from a model deployed on the Prevision.io platform.
import previsionio as pio

# Initialize the deployed model object from the url of the model, your client id
# and client secret for this model, and your credentials
model = pio.DeployedModel(prevision_app_url, client_id, client_secret)

# Make a prediction
prediction, confidence, explain = model.predict(
    predict_data={'feature1': 1, 'feature2': 2},
    use_confidence=True,
    explain=True,
)
class previsionio.deployed_model.DeployedModel(prevision_app_url: str, client_id: str, client_secret: str, prevision_token_url: str = None)
DeployedModel class to interact with a deployed model.
Parameters:
- prevision_app_url (str) – URL of the app. Can be retrieved on your app dashboard.
- client_id (str) – Your app client id. Can be retrieved on your app dashboard.
- client_secret (str) – Your app client secret. Can be retrieved on your app dashboard.
- prevision_token_url (str) – URL used to fetch the access token. Should be https://accounts.prevision.io/auth/realms/prevision.io/protocol/openid-connect/token if you’re in the cloud, or a custom IP address if installed on-premise.
predict(predict_data: Dict, use_confidence: bool = False, explain: bool = False)
Get a prediction on a single instance using the best model of the usecase.
Parameters:
- predict_data (Dict) – Features to predict on, as a mapping from feature name to value.
- use_confidence (bool) – Whether to also return a confidence estimate.
- explain (bool) – Whether to also return an explanation of the prediction.
Returns: Tuple containing the prediction value, confidence and explanation. For regression problem types, the confidence is a list. For multiclassification problem types, the prediction value is a string.
Return type: tuple
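To make the shape of the returned tuple concrete, here is a hypothetical stand-in for predict — the function name, classes and values below are illustrative only, not the SDK's actual output — showing how the three elements unpack for a multiclassification problem:

```python
# Hypothetical stand-in for DeployedModel.predict, illustrating only the
# shape of its 3-tuple return value (the real method needs live credentials).
def predict_stub(predict_data, use_confidence=False, explain=False):
    prediction = 'class_a'  # multiclassification: prediction value is a string
    confidence = [0.7, 0.2, 0.1] if use_confidence else None
    explanation = {'feature1': 0.6, 'feature2': 0.4} if explain else None
    return prediction, confidence, explanation

prediction, confidence, explanation = predict_stub(
    {'feature1': 1, 'feature2': 2},
    use_confidence=True,
    explain=True,
)
```

Note that all three variables are always returned; when use_confidence or explain is left at False, the corresponding elements are simply empty.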
request(endpoint, method, files=None, data=None, allow_redirects=True, content_type=None, check_response=True, message_prefix=None, **requests_kwargs)
Make a request on the desired endpoint with the specified method & data.
Requires initialization.
Returns: the request response
Raises: Exception – if the url/token is not configured
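As a rough sketch of what such a wrapper might assemble — assuming the SDK joins the app URL with the endpoint and attaches a bearer token; build_request_kwargs and every name below are assumptions for illustration, not the SDK's actual internals:

```python
from urllib.parse import urljoin

# Hypothetical helper mimicking what a request() wrapper could build before
# delegating to an HTTP client: full URL, auth header, and payload.
def build_request_kwargs(prevision_app_url, endpoint, method,
                         token, data=None, content_type=None):
    headers = {'Authorization': f'Bearer {token}'}
    if content_type:
        headers['Content-Type'] = content_type
    return {
        'method': method,
        # ensure a single slash between the app URL and the endpoint
        'url': urljoin(prevision_app_url + '/', endpoint.lstrip('/')),
        'headers': headers,
        'json': data,
    }

kwargs = build_request_kwargs(
    'https://my-app.cloud.prevision.io',  # hypothetical app URL
    '/predict',
    'POST',
    token='example-token',
    data={'feature1': 1},
)
```

The kwargs dict built here could then be passed straight to requests.request(**kwargs); the real method forwards any extra **requests_kwargs the same way.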