API Reference

Advanced Options

class datarobot.helpers.AdvancedOptions(weights: Optional[str] = None, response_cap: Union[bool, float, None] = None, blueprint_threshold: Optional[int] = None, seed: Optional[int] = None, smart_downsampled: Optional[bool] = None, majority_downsampling_rate: Optional[float] = None, offset: Optional[List[str]] = None, exposure: Optional[str] = None, accuracy_optimized_mb: Optional[bool] = None, scaleout_modeling_mode: Optional[str] = None, events_count: Optional[str] = None, monotonic_increasing_featurelist_id: Optional[str] = None, monotonic_decreasing_featurelist_id: Optional[str] = None, only_include_monotonic_blueprints: Optional[bool] = None, allowed_pairwise_interaction_groups: Optional[List[Tuple[str, ...]]] = None, blend_best_models: Optional[bool] = None, scoring_code_only: Optional[bool] = None, prepare_model_for_deployment: Optional[bool] = None, consider_blenders_in_recommendation: Optional[bool] = None, min_secondary_validation_model_count: Optional[int] = None, shap_only_mode: Optional[bool] = None, autopilot_data_sampling_method: Optional[str] = None, run_leakage_removed_feature_list: Optional[bool] = None, autopilot_with_feature_discovery: Optional[bool] = False, feature_discovery_supervised_feature_reduction: Optional[bool] = None, exponentially_weighted_moving_alpha: Optional[float] = None, external_time_series_baseline_dataset_id: Optional[str] = None, use_supervised_feature_reduction: Optional[bool] = True, primary_location_column: Optional[str] = None, protected_features: Optional[List[str]] = None, preferable_target_value: Optional[str] = None, fairness_metrics_set: Optional[str] = None, fairness_threshold: Optional[str] = None, bias_mitigation_feature_name: Optional[str] = None, bias_mitigation_technique: Optional[str] = None, include_bias_mitigation_feature_as_predictor_variable: Optional[bool] = None, default_monotonic_increasing_featurelist_id: Optional[str] = None, default_monotonic_decreasing_featurelist_id: Optional[str] = None)

Used when setting the target of a project to specify advanced options for the modeling process.

Parameters:
weights : string, optional

The name of a column indicating the weight of each row

response_cap : bool or float in [0.5, 1), optional

Defaults to None here, but the server default is False. If specified, it is the quantile of the response distribution to use for response capping.

blueprint_threshold : int, optional

Number of hours models are permitted to run before being excluded from later autopilot stages. The minimum value is 1.

seed : int, optional

a seed to use for randomization

smart_downsampled : bool, optional

whether to use smart downsampling to throw away excess rows of the majority class. Only applicable to classification and zero-boosted regression projects.

majority_downsampling_rate : float, optional

the percentage between 0 and 100 of the majority rows that should be kept. Specify only if using smart downsampling. May not cause the majority class to become smaller than the minority class.

offset : list of str, optional

(New in version v2.6) the list of the names of the columns containing the offset of each row

exposure : string, optional

(New in version v2.6) the name of a column containing the exposure of each row

accuracy_optimized_mb : bool, optional

(New in version v2.6) Include additional, longer-running models that will be run by the autopilot and available to run manually.

scaleout_modeling_mode : string, optional

(Deprecated in 2.28. Will be removed in 2.30) DataRobot no longer supports scaleout models. Please remove any usage of this parameter as it will be removed from the API soon.

events_count : string, optional

(New in version v2.8) the name of a column specifying events count.

monotonic_increasing_featurelist_id : string, optional

(new in version 2.11) the id of the featurelist that defines the set of features with a monotonically increasing relationship to the target. If None, no such constraints are enforced. When specified, this will set a default for the project that can be overridden at model submission time if desired.

monotonic_decreasing_featurelist_id : string, optional

(new in version 2.11) the id of the featurelist that defines the set of features with a monotonically decreasing relationship to the target. If None, no such constraints are enforced. When specified, this will set a default for the project that can be overridden at model submission time if desired.

only_include_monotonic_blueprints : bool, optional

(new in version 2.11) when true, only blueprints that support enforcing monotonic constraints will be available in the project or selected for the autopilot.

allowed_pairwise_interaction_groups : list of tuple, optional

(New in version v2.19) For GA2M models - specify groups of columns for which pairwise interactions will be allowed. E.g. if set to [(A, B, C), (C, D)] then GA2M models will allow interactions between columns AxB, BxC, AxC, CxD. All others (AxD, BxD) will not be considered.

blend_best_models: bool, optional

(New in version v2.19) blend best models during Autopilot run

scoring_code_only: bool, optional

(New in version v2.19) Keep only models that can be converted to scorable java code during Autopilot run

shap_only_mode: bool, optional

(New in version v2.21) Keep only models that support SHAP values during Autopilot run. Use SHAP-based insights wherever possible. Defaults to False.

prepare_model_for_deployment: bool, optional

(New in version v2.19) Prepare model for deployment during Autopilot run. The preparation includes creating reduced feature list models, retraining best model on higher sample size, computing insights and assigning “RECOMMENDED FOR DEPLOYMENT” label.

consider_blenders_in_recommendation: bool, optional

(New in version 2.22.0) Include blenders when selecting a model to prepare for deployment in an Autopilot Run. Defaults to False.

min_secondary_validation_model_count: int, optional

(New in version v2.19) Compute “All backtest” scores (datetime models) or cross validation scores for the specified number of highest ranking models on the Leaderboard, if over the Autopilot default.

autopilot_data_sampling_method: str, optional

(New in version v2.23) one of datarobot.enums.DATETIME_AUTOPILOT_DATA_SAMPLING_METHOD. Applicable for OTV projects only, defines if autopilot uses “random” or “latest” sampling when iteratively building models on various training samples. Defaults to “random” for duration-based projects and to “latest” for row-based projects.

run_leakage_removed_feature_list: bool, optional

(New in version v2.23) Run Autopilot on Leakage Removed feature list (if exists).

autopilot_with_feature_discovery: bool, default ``False``, optional

(New in version v2.23) If true, autopilot will run on a feature list that includes features found via search for interactions.

feature_discovery_supervised_feature_reduction: bool, optional

(New in version v2.23) Run supervised feature reduction for feature discovery projects.

exponentially_weighted_moving_alpha: float, optional

(New in version v2.26) Defaults to None. A value between 0 and 1 (inclusive) indicating the alpha parameter used in the exponentially weighted moving average within the feature derivation window.

external_time_series_baseline_dataset_id: str, optional

(New in version v2.26) If provided, will generate metrics scaled by external model predictions metric for time series projects. The external predictions catalog must be validated before autopilot starts, see Project.validate_external_time_series_baseline and external baseline predictions documentation for further explanation.

use_supervised_feature_reduction: bool, default ``True``, optional

Time Series only. When true, during feature generation DataRobot runs a supervised algorithm to retain only qualifying features. Setting to false can severely impact autopilot duration, especially for datasets with many features.

primary_location_column: str, optional.

The name of the primary location column.

protected_features: list of str, optional.

(New in version v2.24) A list of project features to mark as protected for Bias and Fairness testing calculations. Max number of protected features allowed is 10.

preferable_target_value: str, optional.

(New in version v2.24) A target value that should be treated as a favorable outcome for the prediction. For example, if we want to check gender discrimination when giving a loan and our target is named is_bad, then the favorable outcome for the prediction would be No, meaning the loan is good, which is what we treat as a favorable result for the lender.

fairness_metrics_set: str, optional.

(New in version v2.24) Metric to use for calculating fairness. Can be one of proportionalParity, equalParity, predictionBalance, trueFavorableAndUnfavorableRateParity or favorableAndUnfavorablePredictiveValueParity. Used and required only if Bias & Fairness in AutoML feature is enabled.

fairness_threshold: str, optional.

(New in version v2.24) Threshold value for the fairness metric. Can be in a range of [0.0, 1.0]. If the relative (i.e. normalized) fairness score is below the threshold, the user will see a visual indication.

bias_mitigation_feature_name : str, optional

The feature from protected features that will be used in a bias mitigation task to mitigate bias

bias_mitigation_technique : str, optional

One of datarobot.enums.BiasMitigationTechnique. Options:

  • ‘preprocessingReweighing’
  • ‘postProcessingRejectionOptionBasedClassification’

The technique by which we’ll mitigate bias, which will inform which bias mitigation task we insert into blueprints.

include_bias_mitigation_feature_as_predictor_variable : bool, optional

Whether we should also use the mitigation feature as an input to the modeler just like any other categorical feature used for training, i.e. do we want the model to “train on” this feature in addition to using it for bias mitigation.

default_monotonic_increasing_featurelist_id : str, optional

Returned from server on Project GET request - not able to be updated by user

default_monotonic_decreasing_featurelist_id : str, optional

Returned from server on Project GET request - not able to be updated by user

Examples

import datarobot as dr

# Configure advanced options to pass when setting the target of a project.
advanced_options = dr.AdvancedOptions(
    weights='weights_column',
    offset=['offset_column'],
    exposure='exposure_column',
    response_cap=0.7,
    blueprint_threshold=2,
    smart_downsampled=True,
    majority_downsampling_rate=75.0,
)
update_individual_options(**kwargs) → None

Update individual attributes of an instance of AdvancedOptions.
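
For illustration, a minimal sketch of updating options after construction (the chosen attribute values are arbitrary):

import datarobot as dr

advanced_options = dr.AdvancedOptions(seed=42)
# Update individual attributes in place instead of rebuilding the object.
advanced_options.update_individual_options(
    blend_best_models=True,
    prepare_model_for_deployment=True,
)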

Anomaly Assessment

class datarobot.models.anomaly_assessment.AnomalyAssessmentRecord(status, status_details, start_date, end_date, prediction_threshold, preview_location, delete_location, latest_explanations_location, **record_kwargs)

Object which keeps metadata about the anomaly assessment insight for a particular subset, backtest and series, along with the links needed to retrieve the anomaly assessment data.

New in version v2.25.

Notes

Record contains:

  • record_id : the ID of the record.
  • project_id : the project ID of the record.
  • model_id : the model ID of the record.
  • backtest : the backtest of the record.
  • source : the source of the record.
  • series_id : the series id of the record for the multiseries projects.
  • status : the status of the insight.
  • status_details : the explanation of the status.
  • start_date : the ISO-formatted timestamp of the first prediction in the subset. Will be None if status is not AnomalyAssessmentStatus.COMPLETED.
  • end_date : the ISO-formatted timestamp of the last prediction in the subset. Will be None if status is not AnomalyAssessmentStatus.COMPLETED.
  • prediction_threshold : the threshold, all rows with anomaly scores greater or equal to it have shap explanations computed. Will be None if status is not AnomalyAssessmentStatus.COMPLETED.
  • preview_location : URL to retrieve predictions preview for the subset. Will be None if status is not AnomalyAssessmentStatus.COMPLETED.
  • latest_explanations_location : the URL to retrieve the latest predictions with the shap explanations. Will be None if status is not AnomalyAssessmentStatus.COMPLETED.
  • delete_location : the URL to delete anomaly assessment record and relevant insight data.
Attributes:
record_id: str

The ID of the record.

project_id: str

The ID of the project the record belongs to.

model_id: str

The ID of the model the record belongs to.

backtest: int or “holdout”

The backtest of the record.

source: “training” or “validation”

The source of the record

series_id: str or None

The series id of the record for the multiseries projects. Defined only for the multiseries projects.

status: str

The status of the insight. One of datarobot.enums.AnomalyAssessmentStatus

status_details: str

The explanation of the status.

start_date: str or None

See start_date info in Notes for more details.

end_date: str or None

See end_date info in Notes for more details.

prediction_threshold: float or None

See prediction_threshold info in Notes for more details.

preview_location: str or None

See preview_location info in Notes for more details.

latest_explanations_location: str or None

See latest_explanations_location info in Notes for more details.

delete_location: str

The URL to delete anomaly assessment record and relevant insight data.

classmethod list(project_id, model_id, backtest=None, source=None, series_id=None, limit=100, offset=0, with_data_only=False)

Retrieve the list of the anomaly assessment records for the project and model. Output can be filtered and limited.

Parameters:
project_id: str

The ID of the project the record belongs to.

model_id: str

The ID of the model the record belongs to.

backtest: int or “holdout”

The backtest to filter records by.

source: “training” or “validation”

The source to filter records by.

series_id: str, optional

The series id to filter records by. Can be specified for multiseries projects.

limit: int, optional

100 by default. At most this many results are returned.

offset: int, optional

This many results will be skipped.

with_data_only: bool, False by default

Filter by status == AnomalyAssessmentStatus.COMPLETED. If True, records with no data or not supported will be omitted.

Returns:
list of AnomalyAssessmentRecord

The anomaly assessment records.
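
For illustration, a minimal sketch of listing records for the holdout subset (the project and model IDs are hypothetical):

from datarobot.models.anomaly_assessment import AnomalyAssessmentRecord

records = AnomalyAssessmentRecord.list(
    project_id="5a8ac9ab07a57a0001be501e",  # hypothetical project ID
    model_id="5a8ac9ab07a57a0001be501f",    # hypothetical model ID
    backtest="holdout",
    source="validation",
    with_data_only=True,  # keep only records whose insight data is computed
)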

classmethod compute(project_id, model_id, backtest, source, series_id=None)

Request anomaly assessment insight computation on the specified subset.

Parameters:
project_id: str

The ID of the project to compute insight for.

model_id: str

The ID of the model to compute insight for.

backtest: int or “holdout”

The backtest to compute insight for.

source: “training” or “validation”

The source to compute insight for.

series_id: str, optional

The series id to compute insight for. Required for multiseries projects.

Returns:
AnomalyAssessmentRecord

The anomaly assessment record.
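
A minimal sketch of requesting the insight for the holdout subset (the IDs are hypothetical):

from datarobot.models.anomaly_assessment import AnomalyAssessmentRecord

record = AnomalyAssessmentRecord.compute(
    project_id="5a8ac9ab07a57a0001be501e",  # hypothetical project ID
    model_id="5a8ac9ab07a57a0001be501f",    # hypothetical model ID
    backtest="holdout",
    source="validation",
)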

delete()

Delete anomaly assessment record with preview and explanations.

get_predictions_preview()

Retrieve aggregated predictions statistics for the anomaly assessment record.

Returns:
AnomalyAssessmentPredictionsPreview
get_latest_explanations()

Retrieve latest predictions along with shap explanations for the most anomalous records.

Returns:
AnomalyAssessmentExplanations
get_explanations(start_date=None, end_date=None, points_count=None)

Retrieve predictions along with shap explanations for the most anomalous records in the specified date range or for the defined number of points. Two out of the three parameters start_date, end_date and points_count must be specified.

Parameters:
start_date: str, optional

The start of the date range to get explanations in. Example: 2020-01-01T00:00:00.000000Z

end_date: str, optional

The end of the date range to get explanations in. Example: 2020-10-01T00:00:00.000000Z

points_count: int, optional

The number of the rows to return.

Returns:
AnomalyAssessmentExplanations
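
For illustration, a sketch of fetching explanations for a date range on an existing record (the dates reuse the examples above):

# `record` is an AnomalyAssessmentRecord obtained via list() or compute().
explanations = record.get_explanations(
    start_date="2020-01-01T00:00:00.000000Z",
    end_date="2020-10-01T00:00:00.000000Z",
)
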
get_explanations_data_in_regions(regions, prediction_threshold=0.0)

Get predictions along with explanations for the specified regions, sorted by predictions in descending order.

Parameters:
regions: list of preview_bins

For each region explanations will be retrieved and merged.

prediction_threshold: float, optional

If specified, only points with score greater or equal to the threshold will be returned.

Returns:
dict in a form of {‘explanations’: explanations, ‘shap_base_value’: shap_base_value}
class datarobot.models.anomaly_assessment.AnomalyAssessmentExplanations(shap_base_value, data, start_date, end_date, count, **record_kwargs)

Object which keeps predictions along with shap explanations for the most anomalous records in the specified date range or for the defined number of points.

New in version v2.25.

Notes

AnomalyAssessmentExplanations contains:

  • record_id : the id of the corresponding anomaly assessment record.
  • project_id : the project ID of the corresponding anomaly assessment record.
  • model_id : the model ID of the corresponding anomaly assessment record.
  • backtest : the backtest of the corresponding anomaly assessment record.
  • source : the source of the corresponding anomaly assessment record.
  • series_id : the series id of the corresponding anomaly assessment record for the multiseries projects.
  • start_date : the ISO-formatted first timestamp in the response. Will be None if there is no data in the specified range.
  • end_date : the ISO-formatted last timestamp in the response. Will be None if there is no data in the specified range.
  • count : The number of points in the response.
  • shap_base_value : the shap base value.
  • data : list of DataPoint objects in the specified date range.

DataPoint contains:

  • shap_explanation : None or an array of up to 10 ShapleyFeatureContribution objects. Only rows with the highest anomaly scores have Shapley explanations calculated. Value is None if prediction is lower than prediction_threshold.
  • timestamp (str) : ISO-formatted timestamp for the row.
  • prediction (float) : The output of the model for this row.

ShapleyFeatureContribution contains:

  • feature_value (str) : the feature value for this row. First 50 characters are returned.
  • strength (float) : the shap value for this feature and row.
  • feature (str) : the feature name.
Attributes:
record_id: str

The ID of the record.

project_id: str

The ID of the project the record belongs to.

model_id: str

The ID of the model the record belongs to.

backtest: int or “holdout”

The backtest of the record.

source: “training” or “validation”

The source of the record.

series_id: str or None

The series id of the record for the multiseries projects. Defined only for the multiseries projects.

start_date: str or None

The ISO-formatted datetime of the first row in the data.

end_date: str or None

The ISO-formatted datetime of the last row in the data.

data: array of `data_point` objects or None

See data info in Notes for more details.

shap_base_value: float

Shap base value.

count: int

The number of points in the data.

classmethod get(project_id, record_id, start_date=None, end_date=None, points_count=None)

Retrieve predictions along with shap explanations for the most anomalous records in the specified date range or for the defined number of points. Two out of the three parameters start_date, end_date and points_count must be specified.

Parameters:
project_id: str

The ID of the project.

record_id: str

The ID of the anomaly assessment record.

start_date: str, optional

The start of the date range to get explanations in. Example: 2020-01-01T00:00:00.000000Z

end_date: str, optional

The end of the date range to get explanations in. Example: 2020-10-01T00:00:00.000000Z

points_count: int, optional

The number of the rows to return.

Returns:
AnomalyAssessmentExplanations
class datarobot.models.anomaly_assessment.AnomalyAssessmentPredictionsPreview(start_date, end_date, preview_bins, **record_kwargs)

Aggregated predictions over time for the corresponding anomaly assessment record. Intended to find the bins with highest anomaly scores.

New in version v2.25.

Notes

AnomalyAssessmentPredictionsPreview contains:

  • record_id : the id of the corresponding anomaly assessment record.
  • project_id : the project ID of the corresponding anomaly assessment record.
  • model_id : the model ID of the corresponding anomaly assessment record.
  • backtest : the backtest of the corresponding anomaly assessment record.
  • source : the source of the corresponding anomaly assessment record.
  • series_id : the series id of the corresponding anomaly assessment record for the multiseries projects.
  • start_date : the ISO-formatted timestamp of the first prediction in the subset.
  • end_date : the ISO-formatted timestamp of the last prediction in the subset.
  • preview_bins : list of PreviewBin objects. The aggregated predictions for the subset. Bins boundaries may differ from actual start/end dates because this is an aggregation.

PreviewBin contains:

  • start_date (str) : the ISO-formatted datetime of the start of the bin.
  • end_date (str) : the ISO-formatted datetime of the end of the bin.
  • avg_predicted (float or None) : the average prediction of the model in the bin. None if there are no entries in the bin.
  • max_predicted (float or None) : the maximum prediction of the model in the bin. None if there are no entries in the bin.
  • frequency (int) : the number of the rows in the bin.
Attributes:
record_id: str

The ID of the record.

project_id: str

The ID of the project the record belongs to.

model_id: str

The ID of the model the record belongs to.

backtest: int or “holdout”

The backtest of the record.

source: “training” or “validation”

The source of the record

series_id: str or None

The series id of the record for the multiseries projects. Defined only for the multiseries projects.

start_date: str

the ISO-formatted timestamp of the first prediction in the subset.

end_date: str

the ISO-formatted timestamp of the last prediction in the subset.

preview_bins: list of preview_bin objects.

The aggregated predictions for the subset. See more info in Notes.

classmethod get(project_id, record_id)

Retrieve aggregated predictions over time.

Parameters:
project_id: str

The ID of the project.

record_id: str

The ID of the anomaly assessment record.

Returns:
AnomalyAssessmentPredictionsPreview
find_anomalous_regions(max_prediction_threshold=0.0)

Sort preview bins by max_predicted value and select those with a max predicted value greater or equal to the max prediction threshold. Sort the result by max predicted value in descending order.

Parameters:
max_prediction_threshold: float, optional

Return bins with maximum anomaly score greater or equal to max_prediction_threshold.

Returns:
preview_bins: list of preview_bin

Filtered and sorted preview bins
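
For illustration, a minimal sketch combining the predictions preview with this helper (the threshold value is arbitrary):

# `record` is an AnomalyAssessmentRecord obtained via list() or compute().
preview = record.get_predictions_preview()
# Bins whose maximum anomaly score is at least 0.8, sorted by max_predicted descending.
hot_bins = preview.find_anomalous_regions(max_prediction_threshold=0.8)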

Batch Predictions

class datarobot.models.BatchPredictionJob(data: Dict[str, Any], completed_resource_url: Optional[str] = None)

A Batch Prediction Job is used to score large data sets on prediction servers using the Batch Prediction API.

Attributes:
id : str

the id of the job

classmethod score(deployment: DeploymentType, intake_settings: Optional[IntakeSettings] = None, output_settings: Optional[OutputSettings] = None, csv_settings: Optional[CsvSettings] = None, timeseries_settings: Optional[TimeSeriesSettings] = None, num_concurrent: Optional[int] = None, chunk_size: Optional[Union[int, str]] = None, passthrough_columns: Optional[List[str]] = None, passthrough_columns_set: Optional[str] = None, max_explanations: Optional[int] = None, max_ngram_explanations: Optional[Union[int, str]] = None, threshold_high: Optional[float] = None, threshold_low: Optional[float] = None, prediction_warning_enabled: Optional[bool] = None, include_prediction_status: bool = False, skip_drift_tracking: bool = False, prediction_instance: Optional[PredictionInstance] = None, abort_on_error: bool = True, column_names_remapping: Optional[Dict[str, str]] = None, include_probabilities: bool = True, include_probabilities_classes: Optional[List[str]] = None, download_timeout: Optional[int] = 120, download_read_timeout: Optional[int] = 660, upload_read_timeout: Optional[int] = 600, explanations_mode: Optional[PredictionExplanationsMode] = None) → BatchPredictionJob

Create new batch prediction job, upload the scoring dataset and return a batch prediction job.

The default intake and output options are both localFile, which requires the caller to pass the file parameter and either download the results using the download() method afterwards or pass a path to a file where the scored data will be saved.

Returns:
BatchPredictionJob

Instance of BatchPredictionJob

Attributes:
deployment : Deployment or string ID

Deployment which will be used for scoring.

intake_settings : dict (optional)

A dict configuring how data is coming from. Supported options:

  • type : string, either localFile, s3, azure, gcp, dataset, jdbc, snowflake, synapse or bigquery

Note that to pass a dataset, you not only need to specify the type parameter as dataset, but you must also set the dataset parameter as a dr.Dataset object.

To score from a local file, add this parameter to the settings:

  • file : file-like object, string path to file or a pandas.DataFrame of scoring data

To score from S3, add the next parameters to the settings:

  • url : string, the URL to score (e.g.: s3://bucket/key)
  • credential_id : string (optional)
  • endpoint_url : string (optional), any non-default endpoint URL for S3 access (omit to use the default)

To score from JDBC, add the next parameters to the settings:

  • data_store_id : string, the ID of the external data store connected to the JDBC data source (see Database Connectivity).
  • query : string (optional if table, schema and/or catalog is specified), a self-supplied SELECT statement of the data set you wish to predict.
  • table : string (optional if query is specified), the name of specified database table.
  • schema : string (optional if query is specified), the name of specified database schema.
  • catalog : string (optional if query is specified), (new in v2.22) the name of specified database catalog.
  • fetch_size : int (optional), Changing the fetchSize can be used to balance throughput and memory usage.
  • credential_id : string (optional) the ID of the credentials holding information about a user with read-access to the JDBC data source (see Credentials).
output_settings : dict (optional)

A dict configuring how scored data is to be saved. Supported options:

  • type : string, either localFile, s3, azure, gcp, jdbc, snowflake, synapse or bigquery

To save scored data to a local file, add this parameter to the settings:

  • path : string (optional), path to save the scored data as CSV. If a path is not specified, you must download the scored data yourself with job.download(). If a path is specified, the call will block until the job is done. If there are no other jobs currently processing for the targeted prediction instance, uploading, scoring and downloading will happen in parallel without waiting for a full job to complete. Otherwise, it will still block, but start downloading the scored data as soon as it starts generating data. This is the fastest method to get predictions.

To save scored data to S3, add the next parameters to the settings:

  • url : string, the URL for storing the results (e.g.: s3://bucket/key)
  • credential_id : string (optional)
  • endpoint_url : string (optional), any non-default endpoint URL for S3 access (omit to use the default)

To save scored data to JDBC, add the next parameters to the settings:

  • data_store_id : string, the ID of the external data store connected to the JDBC data source (see Database Connectivity).
  • table : string, the name of specified database table.
  • schema : string (optional), the name of specified database schema.
  • catalog : string (optional), (new in v2.22) the name of specified database catalog.
  • statement_type : string, the type of insertion statement to create, one of datarobot.enums.AVAILABLE_STATEMENT_TYPES.
  • update_columns : list(string) (optional), a list of strings containing those column names to be updated in case statement_type is set to a value related to update or upsert.
  • where_columns : list(string) (optional), a list of strings containing those column names to be selected in case statement_type is set to a value related to insert or update.
  • credential_id : string, the ID of the credentials holding information about a user with write-access to the JDBC data source (see Credentials).
  • create_table_if_not_exists : bool (optional), If no existing table is detected, attempt to create it before writing data with the strategy defined in the statementType parameter.
csv_settings : dict (optional)

CSV intake and output settings. Supported options:

  • delimiter : string (optional, default ,), fields are delimited by this character. Use the string tab to denote TSV (TAB separated values). Must be either a one-character string or the string tab.
  • quotechar : string (optional, default ``"``), fields containing the delimiter must be quoted using this character.
  • encoding : string (optional, default utf-8), encoding for the CSV files. For example (but not limited to): shift_jis, latin_1 or mskanji.
timeseries_settings : dict (optional)

Configuration for time-series scoring. Supported options:

  • type : string, must be forecast or historical (default if not passed is forecast). forecast mode makes predictions using forecast_point or rows in the dataset without target. historical enables bulk prediction mode which calculates predictions for all possible forecast points and forecast distances in the dataset within predictions_start_date/predictions_end_date range.
  • forecast_point : datetime (optional), forecast point for the dataset, used for the forecast predictions, by default value will be inferred from the dataset. May be passed if timeseries_settings.type=forecast.
  • predictions_start_date : datetime (optional), used for historical predictions in order to override date from which predictions should be calculated. By default value will be inferred automatically from the dataset. May be passed if timeseries_settings.type=historical.
  • predictions_end_date : datetime (optional), used for historical predictions in order to override the date up to which predictions should be calculated. By default the value will be inferred automatically from the dataset. May be passed if timeseries_settings.type=historical.
  • relax_known_in_advance_features_check : bool, (default False). If True, missing values in the known in advance features are allowed in the forecast window at the prediction time. If omitted or False, missing values are not allowed.
num_concurrent : int (optional)

Number of concurrent chunks to score simultaneously. Defaults to the available number of cores of the deployment. Lower it to leave resources for real-time scoring.

chunk_size : string or int (optional)

Which strategy should be used to determine the chunk size. Can be either a named strategy or a fixed size in bytes.

  • auto: use fixed or dynamic based on flipper
  • fixed: use 1MB for explanations, 5MB for regular requests
  • dynamic: use dynamic chunk sizes
  • int: use this many bytes per chunk

passthrough_columns : list[string] (optional)

Keep these columns from the scoring dataset in the scored dataset. This is useful for correlating predictions with source data.

passthrough_columns_set : string (optional)

To pass through every column from the scoring dataset, set this to all. Takes precedence over passthrough_columns if set.

max_explanations : int (optional)

Compute prediction explanations for this number of features.

max_ngram_explanations : int or str (optional)

Compute text explanations for this number of ngrams. Set to all to return all ngram explanations, or set to a positive integer value to limit the number of ngram explanations returned. By default, no ngram explanations will be computed or returned.

threshold_high : float (optional)

Only compute prediction explanations for predictions above this threshold. Can be combined with threshold_low.

threshold_low : float (optional)

Only compute prediction explanations for predictions below this threshold. Can be combined with threshold_high.

explanations_mode : PredictionExplanationsMode, optional

Mode of prediction explanations calculation for multiclass models, if not specified - server default is to explain only the predicted class, identical to passing TopPredictionsMode(1).

prediction_warning_enabled : boolean (optional)

Add prediction warnings to the scored data. Currently only supported for regression models.

include_prediction_status : boolean (optional)

Include the prediction_status column in the output, defaults to False.

skip_drift_tracking : boolean (optional)

Skips drift tracking on any predictions made from this job. This is useful when running non-production workloads to not affect drift tracking and cause unnecessary alerts. Defaults to False.

prediction_instance : dict (optional)

Defaults to instance specified by deployment or system configuration. Supported options:

  • hostName : string
  • sslEnabled : boolean (optional, default true). Set to false to run prediction requests from the batch prediction job without SSL.
  • datarobotKey : string (optional), if running a job against a prediction instance in the Managed AI Cloud, you must provide the organization level DataRobot-Key
  • apiKey : string (optional), by default, prediction requests will use the API key of the user that created the job. This allows you to make requests on behalf of other users.
abort_on_error : boolean (optional)

Default behavior is to abort the job if too many rows fail scoring. This will free up resources for other jobs that may score successfully. Set to false to unconditionally score every row no matter how many errors are encountered. Defaults to True.

column_names_remapping : dict (optional)

Mapping with column renaming for output table. Defaults to {}.

include_probabilities : boolean (optional)

Flag that enables returning of all probability columns. Defaults to True.

include_probabilities_classes : list (optional)

List the subset of classes if a user doesn’t want all the classes. Defaults to [].

download_timeout : int (optional)

New in version 2.22.

If using localFile output, wait this many seconds for the download to become available. See download().

download_read_timeout : int (optional, default 660)

New in version 2.22.

If using localFile output, wait this many seconds for the server to respond between chunks.

upload_read_timeout: int (optional, default 600)

New in version 2.28.

If using localFile intake, wait this many seconds for the server to respond after whole dataset upload.
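
For illustration, a minimal sketch of scoring a local file against a deployment (the deployment ID and file paths are hypothetical):

import datarobot as dr

job = dr.BatchPredictionJob.score(
    deployment="5dc5b1015e6e762a6241f9aa",  # hypothetical deployment ID
    intake_settings={
        "type": "localFile",
        "file": "to_predict.csv",  # path, file-like object or pandas.DataFrame
    },
    output_settings={
        "type": "localFile",
        "path": "predicted.csv",  # blocks until the scored data has been written
    },
)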

classmethod score_to_file(deployment: DeploymentType, intake_path, output_path: str, **kwargs)

Create new batch prediction job, upload the scoring dataset and download the scored CSV file concurrently.

Will block until the entire file is scored.

Refer to the score method for details on the other kwargs parameters.

Returns:
BatchPredictionJob

Instance of BatchPredictionJob

Attributes:
deployment : Deployment or string ID

Deployment which will be used for scoring.

intake_path : file-like object/string path to file/pandas.DataFrame

Scoring data

output_path : str

Filename to save the result under
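
A minimal sketch (the deployment ID and paths are hypothetical):

import datarobot as dr

job = dr.BatchPredictionJob.score_to_file(
    "5dc5b1015e6e762a6241f9aa",  # hypothetical deployment ID
    "to_predict.csv",            # scoring data
    "predicted.csv",             # where the scored CSV will be written
)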

classmethod score_s3(deployment: DeploymentType, source_url: str, destination_url: str, credential=None, endpoint_url: Optional[str] = None, **kwargs)

Create new batch prediction job, with a scoring dataset from S3 and writing the result back to S3.

This returns immediately after the job has been created. You must poll for job completion using get_status() or wait_for_completion() (see datarobot.models.Job)

Refer to the score method for details on the other kwargs parameters.

Returns:
BatchPredictionJob

Instance of BatchPredictionJob

Attributes:
deployment : Deployment or string ID

Deployment which will be used for scoring.

source_url : string

The URL for the prediction dataset (e.g.: s3://bucket/key)

destination_url : string

The URL for the scored dataset (e.g.: s3://bucket/key)

credential : string or Credential (optional)

The AWS Credential object or credential id

endpoint_url : string (optional)

Any non-default endpoint URL for S3 access (omit to use the default)
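
A minimal sketch (the deployment ID, bucket and credential ID are hypothetical):

import datarobot as dr

job = dr.BatchPredictionJob.score_s3(
    deployment="5dc5b1015e6e762a6241f9aa",  # hypothetical deployment ID
    source_url="s3://mybucket/to_predict.csv",
    destination_url="s3://mybucket/predicted.csv",
    credential="5a8ac9ab07a57a0001be501f",  # hypothetical credential ID
)
job.wait_for_completion()  # poll until scoring has finished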

classmethod score_azure(deployment: DeploymentType, source_url: str, destination_url: str, credential=None, **kwargs)

Create new batch prediction job, with a scoring dataset from Azure blob storage and writing the result back to Azure blob storage.

This returns immediately after the job has been created. You must poll for job completion using get_status() or wait_for_completion() (see datarobot.models.Job).

Refer to the score method for details on the other kwargs parameters.

Returns:
BatchPredictionJob

Instance of BatchPredictionJob

Attributes:
deployment : Deployment or string ID

Deployment which will be used for scoring.

source_url : string

The URL for the prediction dataset (e.g.: https://storage_account.blob.endpoint/container/blob_name)

destination_url : string

The URL for the scored dataset (e.g.: https://storage_account.blob.endpoint/container/blob_name)

credential : string or Credential (optional)

The Azure Credential object or credential id

classmethod score_gcp(deployment: DeploymentType, source_url: str, destination_url: str, credential=None, **kwargs)

Create new batch prediction job, with a scoring dataset from Google Cloud Storage and writing the result back to one.

This returns immediately after the job has been created. You must poll for job completion using get_status() or wait_for_completion() (see datarobot.models.Job).

Refer to the score method for details on the other kwargs parameters.

Returns:
BatchPredictionJob

Instance of BatchPredictionJob

Attributes:
deployment : Deployment or string ID

Deployment which will be used for scoring.

source_url : string

The URL for the prediction dataset (e.g.: http(s)://storage.googleapis.com/[bucket]/[object])

destination_url : string

The URL for the scored dataset (e.g.: http(s)://storage.googleapis.com/[bucket]/[object])

credential : string or Credential (optional)

The GCP Credential object or credential id

classmethod score_from_existing(batch_prediction_job_id: str) → datarobot.models.batch_prediction_job.BatchPredictionJob

Create a new batch prediction job based on the settings from a previously created one

Returns:
BatchPredictionJob

Instance of BatchPredictionJob

Attributes:
batch_prediction_job_id: str

ID of the previous batch prediction job
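
A minimal sketch (the job ID is hypothetical):

import datarobot as dr

# Re-run a job with the same intake, output and scoring settings as a previous one.
job = dr.BatchPredictionJob.score_from_existing("5dc5b1015e6e762a6241f9ab")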

classmethod score_pandas(deployment: DeploymentType, df: pd.DataFrame, read_timeout: int = 660, **kwargs) → Tuple[BatchPredictionJob, pd.DataFrame]

Run a batch prediction job, with a scoring dataset from a pandas dataframe. The output from the prediction will be joined to the passed DataFrame and returned.

Use column_names_remapping to drop or rename columns in the output.

This method blocks until the job has completed or raises an exception on errors.

Refer to the score method for details on the other kwargs parameters.

Returns:
BatchPredictionJob

Instance of BatchPredictionJob

pandas.DataFrame

The original dataframe merged with the predictions

Attributes:
deployment : Deployment or string ID

Deployment which will be used for scoring.

df : pandas.DataFrame

The dataframe to score
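
A minimal sketch (the deployment ID and input file are hypothetical):

import pandas as pd
import datarobot as dr

df = pd.read_csv("to_predict.csv")  # hypothetical scoring data
job, scored_df = dr.BatchPredictionJob.score_pandas(
    "5dc5b1015e6e762a6241f9aa",  # hypothetical deployment ID
    df,
)
# scored_df is the original dataframe with the prediction columns joined on.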

classmethod get(batch_prediction_job_id: str) → datarobot.models.batch_prediction_job.BatchPredictionJob

Get batch prediction job

Returns:
BatchPredictionJob

Instance of BatchPredictionJob

Attributes:
batch_prediction_job_id: str

ID of batch prediction job

download(fileobj, timeout: int = 120, read_timeout: int = 660) → None

Downloads the CSV result of a prediction job

Attributes:
fileobj: file-like object

Write CSV data to this file-like object

timeout : int (optional, default 120)

New in version 2.22.

Seconds to wait for the download to become available.

The download will not be available before the job has started processing. In case other jobs are occupying the queue, processing may not start immediately.

If the timeout is reached, the job will be aborted and RuntimeError is raised.

Set to -1 to wait infinitely.

read_timeout : int (optional, default 660)

New in version 2.22.

Seconds to wait for the server to respond between chunks.
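
For illustration, a minimal sketch of downloading results from an existing job (the job ID is hypothetical):

import datarobot as dr

job = dr.BatchPredictionJob.get("5dc5b1015e6e762a6241f9ac")  # hypothetical job ID
with open("predicted.csv", "wb") as f:
    job.download(f)  # waits for the job and streams the scored CSV into the file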

delete(ignore_404_errors: bool = False) → None

Cancel this job. If this job has not finished running, it will be removed and canceled.

get_status()

Get status of batch prediction job

Returns:
BatchPredictionJob status data

Dict with job status

classmethod list_by_status(statuses: Optional[List[str]] = None) → List[datarobot.models.batch_prediction_job.BatchPredictionJob]

Get a collection of jobs for a specific set of statuses

Returns:
List[BatchPredictionJob]

List of jobs with the requested statuses

Attributes:
statuses

List of statuses to filter jobs by ([ABORTED|COMPLETED…]). If statuses is not provided, all jobs for the user are returned.
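
A minimal sketch using two of the statuses mentioned above:

import datarobot as dr

jobs = dr.BatchPredictionJob.list_by_status(["COMPLETED", "ABORTED"])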

class datarobot.models.BatchPredictionJobDefinition(id: Optional[str] = None, name: Optional[str] = None, enabled: Optional[bool] = None, schedule: Optional[Schedule] = None, batch_prediction_job=None, created: Optional[str] = None, updated: Optional[str] = None, created_by=None, updated_by=None, last_failed_run_time: Optional[str] = None, last_successful_run_time: Optional[str] = None, last_started_job_status: Optional[str] = None, last_scheduled_run_time: Optional[str] = None)
classmethod get(batch_prediction_job_definition_id: str) → datarobot.models.batch_prediction_job.BatchPredictionJobDefinition

Get batch prediction job definition

Returns:
BatchPredictionJobDefinition

Instance of BatchPredictionJobDefinition

Examples

>>> import datarobot as dr
>>> definition = dr.BatchPredictionJobDefinition.get('5a8ac9ab07a57a0001be501f')
>>> definition
BatchPredictionJobDefinition(60912e09fd1f04e832a575c1)
Attributes:
batch_prediction_job_definition_id: str

ID of batch prediction job definition

classmethod list() → List[datarobot.models.batch_prediction_job.BatchPredictionJobDefinition]

Get all job definitions

Returns:
List[BatchPredictionJobDefinition]

List of job definitions the user has access to see

Examples

>>> import datarobot as dr
>>> definition = dr.BatchPredictionJobDefinition.list()
>>> definition
[
    BatchPredictionJobDefinition(60912e09fd1f04e832a575c1),
    BatchPredictionJobDefinition(6086ba053f3ef731e81af3ca)
]
classmethod create(enabled: bool, batch_prediction_job, name: Optional[str] = None, schedule: Optional[Schedule] = None) → BatchPredictionJobDefinition

Creates a new batch prediction job definition to be run either at scheduled interval or as a manual run.

Returns:
BatchPredictionJobDefinition

Instance of BatchPredictionJobDefinition

Examples

>>> import datarobot as dr
>>> job_spec = {
...    "num_concurrent": 4,
...    "deployment_id": "foobar",
...    "intake_settings": {
...        "url": "s3://foobar/123",
...        "type": "s3",
...        "format": "csv"
...    },
...    "output_settings": {
...        "url": "s3://foobar/123",
...        "type": "s3",
...        "format": "csv"
...    },
...}
>>> schedule = {
...    "day_of_week": [
...        1
...    ],
...    "month": [
...        "*"
...    ],
...    "hour": [
...        16
...    ],
...    "minute": [
...        0
...    ],
...    "day_of_month": [
...        1
...    ]
...}
>>> definition = dr.BatchPredictionJobDefinition.create(
...    enabled=False,
...    batch_prediction_job=job_spec,
...    name="some_definition_name",
...    schedule=schedule
... )
>>> definition
BatchPredictionJobDefinition(60912e09fd1f04e832a575c1)
Attributes:
enabled : bool (default False)

Whether or not the definition should be active on a scheduled basis. If True, schedule is required.

batch_prediction_job: dict

The job specifications for your batch prediction job. It requires the same job input parameters as score(), but it will not start scoring; it only stores the specification as a definition for later use.

name : string (optional)

The name you want your job to be identified with. Must be unique across the organization’s existing jobs. If you don’t supply a name, a random one will be generated for you.

schedule : dict (optional)

The schedule payload defines at what intervals the job should run, which can be combined in various ways to construct complex scheduling terms if needed. In all of the elements in the objects, you can supply either an asterisk ["*"] denoting “every” time denomination or an array of integers (e.g. [1, 2, 3]) to define a specific interval.

The schedule payload is split up in the following items:

Minute:

The minute(s) of the day that the job will run. Allowed values are either ["*"] meaning every minute of the day or [0 ... 59]

Hour: The hour(s) of the day that the job will run. Allowed values are either ["*"] meaning every hour of the day or [0 ... 23].

Day of Month: The date(s) of the month that the job will run. Allowed values are either [1 ... 31] or ["*"] for all days of the month. This field is additive with dayOfWeek, meaning the job will run both on the date(s) defined in this field and the day specified by dayOfWeek (for example, dates 1st, 2nd, 3rd, plus every Tuesday). If dayOfMonth is set to ["*"] and dayOfWeek is defined, the scheduler will trigger on every day of the month that matches dayOfWeek (for example, Tuesday the 2nd, 9th, 16th, 23rd, 30th). Invalid dates such as February 31st are ignored.

Month: The month(s) of the year that the job will run. Allowed values are either [1 ... 12] or ["*"] for all months of the year. Strings, either 3-letter abbreviations or the full name of the month, can be used interchangeably (e.g., “jan” or “october”). Months that are not compatible with dayOfMonth are ignored, for example {"dayOfMonth": [31], "month":["feb"]}

Day of Week: The day(s) of the week that the job will run. Allowed values are [0 .. 6], where (Sunday=0), or ["*"], for all days of the week. Strings, either 3-letter abbreviations or the full name of the day, can be used interchangeably (e.g., “sunday”, “Sunday”, “sun”, or “Sun”, all map to [0]). This field is additive with dayOfMonth, meaning the job will run both on the date specified by dayOfMonth and the day defined in this field.

update(enabled: bool, batch_prediction_job=None, name: Optional[str] = None, schedule: Optional[Schedule] = None) → BatchPredictionJobDefinition

Updates a job definition with the changed specs.

Takes the same input as create()

Returns:
BatchPredictionJobDefinition

Instance of the updated BatchPredictionJobDefinition

Examples

>>> import datarobot as dr
>>> job_spec = {
...    "num_concurrent": 5,
...    "deployment_id": "foobar_new",
...    "intake_settings": {
...        "url": "s3://foobar/123",
...        "type": "s3",
...        "format": "csv"
...    },
...    "output_settings": {
...        "url": "s3://foobar/123",
...        "type": "s3",
...        "format": "csv"
...    },
...}
>>> schedule = {
...    "day_of_week": [
...        1
...    ],
...    "month": [
...        "*"
...    ],
...    "hour": [
...        "*"
...    ],
...    "minute": [
...        30, 59
...    ],
...    "day_of_month": [
...        1, 2, 6
...    ]
...}
>>> definition = dr.BatchPredictionJobDefinition.get('5a8ac9ab07a57a0001be501f')
>>> definition = definition.update(
...    enabled=False,
...    batch_prediction_job=job_spec,
...    name="updated_definition_name",
...    schedule=schedule
... )
>>> definition
BatchPredictionJobDefinition(60912e09fd1f04e832a575c1)
Attributes:
enabled : bool (default False)

Same as enabled in create().

batch_prediction_job: dict

Same as batch_prediction_job in create().

name : string (optional)

Same as name in create().

schedule : dict

Same as schedule in create().

run_on_schedule(schedule: Schedule) → BatchPredictionJobDefinition

Sets the run schedule of an already created job definition.

If the job was previously not enabled, this will also set the job to enabled.

Returns:
BatchPredictionJobDefinition

Instance of the updated BatchPredictionJobDefinition with the new / updated schedule.

Examples

>>> import datarobot as dr
>>> definition = dr.BatchPredictionJobDefinition.create('...')
>>> schedule = {
...    "day_of_week": [
...        1
...    ],
...    "month": [
...        "*"
...    ],
...    "hour": [
...        "*"
...    ],
...    "minute": [
...        30, 59
...    ],
...    "day_of_month": [
...        1, 2, 6
...    ]
...}
>>> definition.run_on_schedule(schedule)
BatchPredictionJobDefinition(60912e09fd1f04e832a575c1)
Attributes:
schedule : dict

Same as schedule in create().

run_once() → datarobot.models.batch_prediction_job.BatchPredictionJob

Manually submits a batch prediction job to the queue, based off of an already created job definition.

Returns:
BatchPredictionJob

Instance of BatchPredictionJob

Examples

>>> import datarobot as dr
>>> definition = dr.BatchPredictionJobDefinition.create('...')
>>> job = definition.run_once()
>>> job.wait_for_completion()
delete() → None

Deletes the job definition and disables any future schedules of this job if any. If a scheduled job is currently running, this will not be cancelled.

Examples

>>> import datarobot as dr
>>> definition = dr.BatchPredictionJobDefinition.get('5a8ac9ab07a57a0001be501f')
>>> definition.delete()

Blueprint

class datarobot.models.Blueprint(id: Optional[str] = None, processes: Optional[List[str]] = None, model_type: Optional[str] = None, project_id: Optional[str] = None, blueprint_category: Optional[str] = None, monotonic_increasing_featurelist_id: Optional[str] = None, monotonic_decreasing_featurelist_id: Optional[str] = None, supports_monotonic_constraints: Optional[bool] = None, recommended_featurelist_id: Optional[str] = None, supports_composable_ml: Optional[bool] = None)

A Blueprint which can be used to fit models

Attributes:
id : str

the id of the blueprint

processes : list of str

the processes used by the blueprint

model_type : str

the model produced by the blueprint

project_id : str

the project the blueprint belongs to

blueprint_category : str

(New in version v2.6) Describes the category of the blueprint and the kind of model it produces.

recommended_featurelist_id: str or None

(New in v2.18) The ID of the feature list recommended for this blueprint. If this field is not present, then there is no recommended feature list.

supports_composable_ml : bool or None

(New in version v2.26) whether this blueprint is supported in Composable ML.

classmethod get(project_id: str, blueprint_id: str) → datarobot.models.blueprint.Blueprint

Retrieve a blueprint.

Parameters:
project_id : str

The project’s id.

blueprint_id : str

Id of blueprint to retrieve.

Returns:
blueprint : Blueprint

The queried blueprint.
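
A minimal sketch (the project and blueprint IDs are hypothetical):

import datarobot as dr

blueprint = dr.models.Blueprint.get(
    "5c1d4904211c0a061bc93013",  # hypothetical project ID
    "6086ba053f3ef731e81af3ca",  # hypothetical blueprint ID
)
print(blueprint.model_type, blueprint.processes)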

get_chart() → datarobot.models.blueprint.BlueprintChart

Retrieve a chart.

Returns:
BlueprintChart

The current blueprint chart.

get_documents() → List[datarobot.models.blueprint.BlueprintTaskDocument]

Get documentation for tasks used in the blueprint.

Returns:
list of BlueprintTaskDocument

All documents available for blueprint.

classmethod from_data(data: Union[Dict[str, Any], List[Dict[str, Any]]]) → T

Instantiate an object of this class using a dict.

Parameters:
data : dict

Correctly snake_cased keys and their values.

classmethod from_server_data(data: Union[Dict[str, Any], List[Dict[str, Any]]], keep_attrs: Optional[Iterable[str]] = None) → T

Instantiate an object of this class using the data directly from the server, meaning that the keys may have the wrong camel casing

Parameters:
data : dict

The directly translated dict of JSON from the server. No casing fixes have taken place

keep_attrs : iterable

List, set or tuple of the dotted namespace notations for attributes to keep within the object structure even if their values are None

class datarobot.models.BlueprintTaskDocument(title: Optional[str] = None, task: Optional[str] = None, description: Optional[str] = None, parameters: Optional[List[ParameterType]] = None, links: Optional[List[LinkType]] = None, references: Optional[List[ReferenceType]] = None)

Document describing a task from a blueprint.

Attributes:
title : str

Title of document.

task : str

Name of the task described in document.

description : str

Task description.

parameters : list of dict(name, type, description)

Parameters that task can receive in human-readable format.

links : list of dict(name, url)

External links used in document

references : list of dict(name, url)

References used in the document. When no link is available, url equals None.

class datarobot.models.BlueprintChart(nodes: List[Dict[str, str]], edges: List[Tuple[str, str]])

A Blueprint chart that can be used to understand data flow in blueprint.

Attributes:
nodes : list of dict (id, label)

Chart nodes; each id is unique within the chart.

edges : list of tuple (id1, id2)

Directions of data flow between blueprint chart nodes.

classmethod get(project_id: str, blueprint_id: str) → datarobot.models.blueprint.BlueprintChart

Retrieve a blueprint chart.

Parameters:
project_id : str

The project’s id.

blueprint_id : str

Id of blueprint to retrieve chart.

Returns:
BlueprintChart

The queried blueprint chart.

to_graphviz() → str

Get blueprint chart in graphviz DOT format.

Returns:
unicode

String representation of chart in graphviz DOT language.

class datarobot.models.ModelBlueprintChart(nodes: List[Dict[str, str]], edges: List[Tuple[str, str]])

A Blueprint chart that can be used to understand data flow in a model. A model blueprint chart represents a reduced repository blueprint chart containing only the elements that were used to build this particular model.

Attributes:
nodes : list of dict (id, label)

Chart nodes; each id is unique within the chart.

edges : list of tuple (id1, id2)

Directions of data flow between blueprint chart nodes.

classmethod get(project_id: str, model_id: str) → datarobot.models.blueprint.ModelBlueprintChart

Retrieve a model blueprint chart.

Parameters:
project_id : str

The project’s id.

model_id : str

Id of model to retrieve model blueprint chart.

Returns:
ModelBlueprintChart

The queried model blueprint chart.

to_graphviz() → str

Get blueprint chart in graphviz DOT format.

Returns:
unicode

String representation of chart in graphviz DOT language.
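
For illustration, a minimal sketch of rendering the chart of a trained model (the IDs are hypothetical):

import datarobot as dr

chart = dr.models.ModelBlueprintChart.get(
    "5c1d4904211c0a061bc93013",  # hypothetical project ID
    "5a8ac9ab07a57a0001be5010",  # hypothetical model ID
)
print(chart.to_graphviz())  # DOT source that can be rendered with graphviz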

Calendar File

class datarobot.CalendarFile(calendar_end_date: Optional[str] = None, calendar_start_date: Optional[str] = None, created: Optional[str] = None, id: Optional[str] = None, name: Optional[str] = None, num_event_types: Optional[int] = None, num_events: Optional[int] = None, project_ids: Optional[List[str]] = None, role: Optional[str] = None, multiseries_id_columns: Optional[List[str]] = None)

Represents the data for a calendar file.

For more information about calendar files, see the calendar documentation.

Attributes:
id : str

The id of the calendar file.

calendar_start_date : str

The earliest date in the calendar.

calendar_end_date : str

The last date in the calendar.

created : str

The date this calendar was created, i.e. uploaded to DR.

name : str

The name of the calendar.

num_event_types : int

The number of different event types.

num_events : int

The number of events this calendar has.

project_ids : list of strings

A list containing the projectIds of the projects using this calendar.

multiseries_id_columns: list of str or None

A list of columns in the calendar which uniquely identify events for different series. Currently, only one column is supported. If multiseries id columns are not provided, the calendar is considered to be single series.

role : str

The access role the user has for this calendar.

classmethod create(file_path: str, calendar_name: Optional[str] = None, multiseries_id_columns: Optional[List[str]] = None) → datarobot.models.calendar_file.CalendarFile

Creates a calendar using the given file. For information about calendar files, see the calendar documentation

The provided file must be a CSV in the format:

Date,   Event,          Series ID,    Event Duration
<date>, <event_type>,   <series id>,  <event duration>
<date>, <event_type>,              ,  <event duration>

A header row is required, and the “Series ID” and “Event Duration” columns are optional.

Once the CalendarFile has been created, pass its ID with the DatetimePartitioningSpecification when setting the target for a time series project in order to use it.

Parameters:
file_path : string

A string representing a path to a local csv file.

calendar_name : string, optional

A name to assign to the calendar. Defaults to the name of the file if not provided.

multiseries_id_columns : list of str or None

A list of the names of multiseries id columns to define which series an event belongs to. Currently only one multiseries id column is supported.

Returns:
calendar_file : CalendarFile

Instance with initialized data.

Raises:
AsyncProcessUnsuccessfulError

Raised if there was an error processing the provided calendar file.

Examples

# Creating a calendar with a specified name
cal = dr.CalendarFile.create('/home/calendars/somecalendar.csv',
                                         calendar_name='Some Calendar Name')
cal.id
>>> 5c1d4904211c0a061bc93013
cal.name
>>> Some Calendar Name

# Creating a calendar without specifying a name
cal = dr.CalendarFile.create('/home/calendars/somecalendar.csv')
cal.id
>>> 5c1d4904211c0a061bc93012
cal.name
>>> somecalendar.csv

# Creating a calendar with multiseries id columns
cal = dr.CalendarFile.create('/home/calendars/somemultiseriescalendar.csv',
                             calendar_name='Some Multiseries Calendar Name',
                             multiseries_id_columns=['series_id'])
cal.id
>>> 5da9bb21962d746f97e4daee
cal.name
>>> Some Multiseries Calendar Name
cal.multiseries_id_columns
>>> ['series_id']
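
A minimal sketch of using the created calendar when setting the target of a time series project (the project object, target name, and 'date' column are illustrative assumptions):

# Passing the calendar id through the partitioning specification
spec = dr.DatetimePartitioningSpecification(
    datetime_partition_column='date',
    use_time_series=True,
    calendar_id=cal.id)
project.set_target(target='some_target', partitioning_method=spec)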
classmethod create_calendar_from_dataset(dataset_id: str, dataset_version_id: Optional[str] = None, calendar_name: Optional[str] = None, multiseries_id_columns: Optional[List[str]] = None, delete_on_error: Optional[bool] = False) → datarobot.models.calendar_file.CalendarFile

Creates a calendar using the given dataset. For information about calendar files, see the calendar documentation.

The provided dataset must have the following format:

Date,   Event,          Series ID,    Event Duration
<date>, <event_type>,   <series id>,  <event duration>
<date>, <event_type>,              ,  <event duration>

The “Series ID” and “Event Duration” columns are optional.

Once the CalendarFile has been created, pass its ID with the DatetimePartitioningSpecification when setting the target for a time series project in order to use it.

Parameters:
dataset_id : string

The identifier of the dataset from which to create the calendar.

dataset_version_id : string, optional

The identifier of the dataset version from which to create the calendar.

calendar_name : string, optional

A name to assign to the calendar. Defaults to the name of the dataset if not provided.

multiseries_id_columns : list of str, optional

A list of the names of multiseries id columns to define which series an event belongs to. Currently only one multiseries id column is supported.

delete_on_error : boolean, optional

Whether to delete the calendar file from the Catalog if it is not valid.

Returns:
calendar_file : CalendarFile

Instance with initialized data.

Raises:
AsyncProcessUnsuccessfulError

Raised if there was an error processing the provided calendar file.

Examples

# Creating a calendar from a dataset
dataset = dr.Dataset.create_from_file('/home/calendars/somecalendar.csv')
cal = dr.CalendarFile.create_calendar_from_dataset(
    dataset.id, calendar_name='Some Calendar Name'
)
cal.id
>>> 5c1d4904211c0a061bc93013
cal.name
>>> Some Calendar Name

# Creating a calendar from a new dataset version
new_dataset_version = dr.Dataset.create_version_from_file(
    dataset.id, '/home/calendars/anothercalendar.csv'
)
cal = dr.CalendarFile.create_calendar_from_dataset(
    new_dataset_version.id, dataset_version_id=new_dataset_version.version_id
)
cal.id
>>> 5c1d4904211c0a061bc93012
cal.name
>>> anothercalendar.csv
classmethod create_calendar_from_country_code(country_code: str, start_date: datetime.datetime, end_date: datetime.datetime) → datarobot.models.calendar_file.CalendarFile

Generates a calendar based on the provided country code and the dataset start and end dates. The provided country code should be uppercase and 2-3 characters long. See CalendarFile.get_allowed_country_codes for a list of allowed country codes.

Parameters:
country_code : string

The country code for the country to use for generating the calendar.

start_date : datetime.datetime

The earliest date to include in the generated calendar.

end_date : datetime.datetime

The latest date to include in the generated calendar.

Returns:
calendar_file : CalendarFile

Instance with initialized data.
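
Examples

A minimal sketch of generating a preloaded calendar (the country code and date range are illustrative; see get_allowed_country_codes below for valid codes):

import datetime
cal = dr.CalendarFile.create_calendar_from_country_code(
    'US',
    start_date=datetime.datetime(2018, 1, 1),
    end_date=datetime.datetime(2020, 1, 1))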

classmethod get_allowed_country_codes(offset: Optional[int] = None, limit: Optional[int] = None) → List[CountryCode]

Retrieves the list of allowed country codes that can be used for generating the preloaded calendars.

Parameters:
offset : int

Optional, defaults to 0. This many results will be skipped.

limit : int

Optional, defaults to 100, maximum 1000. At most this many results are returned.

Returns:
list

A list of dicts, each of which represents an allowed country code. Each item has the following structure:

classmethod get(calendar_id: str) → datarobot.models.calendar_file.CalendarFile

Gets the details of a calendar, given the id.

Parameters:
calendar_id : str

The identifier of the calendar.

Returns:
calendar_file : CalendarFile

The requested calendar.

Raises:
DataError

Raised if the calendar_id is invalid, i.e. the specified CalendarFile does not exist.

Examples

cal = dr.CalendarFile.get(some_calendar_id)
cal.id
>>> some_calendar_id
classmethod list(project_id: Optional[str] = None, batch_size: Optional[int] = None) → List[datarobot.models.calendar_file.CalendarFile]

Gets the details of all calendars this user has view access for.

Parameters:
project_id : str, optional

If provided, will filter for calendars associated only with the specified project.

batch_size : int, optional

The number of calendars to retrieve in a single API call. If specified, the client may make multiple calls to retrieve the full list of calendars. If not specified, an appropriate default will be chosen by the server.

Returns:
calendar_list : list of CalendarFile

A list of CalendarFile objects.

Examples

calendars = dr.CalendarFile.list()
len(calendars)
>>> 10
classmethod delete(calendar_id: str) → None

Deletes the calendar specified by calendar_id.

Parameters:
calendar_id : str

The id of the calendar to delete. The requester must have OWNER access for this calendar.

Raises:
ClientError

Raised if an invalid calendar_id is provided.

Examples

# Deleting with a valid calendar_id
dr.CalendarFile.delete(some_calendar_id)
dr.CalendarFile.get(some_calendar_id)
>>> ClientError: Item not found
classmethod update_name(calendar_id: str, new_calendar_name: str) → int

Changes the name of the specified calendar to the specified name. The requester must have at least READ_WRITE permissions on the calendar.

Parameters:
calendar_id : str

The id of the calendar to update.

new_calendar_name : str

The new name to set for the specified calendar.

Returns:
status_code : int

200 for success

Raises:
ClientError

Raised if an invalid calendar_id is provided.

Examples

response = dr.CalendarFile.update_name(some_calendar_id, some_new_name)
response
>>> 200
cal = dr.CalendarFile.get(some_calendar_id)
cal.name
>>> some_new_name
classmethod share(calendar_id: str, access_list: List[datarobot.models.sharing.SharingAccess]) → int

Shares the calendar with the specified users, assigning the specified roles.

Parameters:
calendar_id : str

The id of the calendar to update

access_list:

A list of dr.SharingAccess objects. Specify None for the role to delete a user’s access from the specified CalendarFile. For more information on specific access levels, see the sharing documentation.

Returns:
status_code : int

200 for success

Raises:
ClientError

Raised if unable to update permissions for a user.

AssertionError

Raised if access_list is invalid.

Examples

# assuming some_user is a valid user, share this calendar with some_user
sharing_list = [dr.SharingAccess(some_user_username,
                                 dr.enums.SHARING_ROLE.READ_WRITE)]
response = dr.CalendarFile.share(some_calendar_id, sharing_list)
response
>>> 200

# delete some_user from this calendar, assuming they have access of some kind already
delete_sharing_list = [dr.SharingAccess(some_user_username,
                                        None)]
response = dr.CalendarFile.share(some_calendar_id, delete_sharing_list)
response
>>> 200

# Attempt to add an invalid user to a calendar
invalid_sharing_list = [dr.SharingAccess(invalid_username,
                                         dr.enums.SHARING_ROLE.READ_WRITE)]
dr.CalendarFile.share(some_calendar_id, invalid_sharing_list)
>>> ClientError: Unable to update access for this calendar
classmethod get_access_list(calendar_id: str, batch_size: Optional[int] = None) → List[datarobot.models.sharing.SharingAccess]

Retrieve a list of users that have access to this calendar.

Parameters:
calendar_id : str

The id of the calendar to retrieve the access list for.

batch_size : int, optional

The number of access records to retrieve in a single API call. If specified, the client may make multiple calls to retrieve the full list of calendars. If not specified, an appropriate default will be chosen by the server.

Returns:
access_control_list : list of SharingAccess

A list of SharingAccess objects.

Raises:
ClientError

Raised if user does not have access to calendar or calendar does not exist.
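
Examples

A minimal sketch of inspecting who can access a calendar (some_calendar_id and the printed output are illustrative):

access_list = dr.CalendarFile.get_access_list(some_calendar_id)
[(access.username, access.role) for access in access_list]
>>> [('user@example.com', 'OWNER')]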

Automated Documentation

class datarobot.models.automated_documentation.AutomatedDocument(entity_id=None, document_type=None, output_format=None, locale=None, template_id=None, id=None, filepath=None, created_at=None)

An automated documentation object.

New in version v2.24.

Attributes:
document_type : str or None

Type of automated document. You can specify MODEL_COMPLIANCE or AUTOPILOT_SUMMARY, depending on your account settings. Required for document generation.

entity_id : str or None

ID of the entity to generate the document for. It can be a model ID or a project ID. Required for document generation.

output_format : str or None

Format of the generated document, either docx or html. Required for document generation.

locale : str or None

Localization of the document, dependent on your account settings. Default setting is EN_US.

template_id : str or None

Template ID to use for the document outline. Defaults to standard DataRobot template. See the documentation for ComplianceDocTemplate for more information.

id : str or None

ID of the document. Required to download or delete a document.

filepath : str or None

Path to save a downloaded document to. Either include a file path and name or the file will be saved to the directory from which the script is launched.

created_at : datetime or None

Document creation timestamp.

classmethod list_available_document_types()

Get a list of all available document types and locales.

Returns:
List of dicts

Examples

import datarobot as dr

dr.Client(token=my_token, endpoint=endpoint)
doc_types = dr.AutomatedDocument.list_available_document_types()
is_model_compliance_initialized

Check if model compliance documentation pre-processing is initialized. Model compliance documentation pre-processing must be initialized before generating documentation for a custom model.

Returns:
Tuple of (boolean, string)
  • boolean flag is whether model compliance documentation pre-processing is initialized
  • string value is the initialization status
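
Examples

A minimal sketch of checking the flag before generating documentation for a custom model (assuming doc is an AutomatedDocument configured as in the initialize_model_compliance example below, and that this member is accessed as the property listed above):

initialized, status = doc.is_model_compliance_initialized
if not initialized:
    doc.initialize_model_compliance()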
initialize_model_compliance()

Initialize model compliance documentation pre-processing. Must be called before generating documentation for a custom model.

Returns:
Tuple of (boolean, string)
  • boolean flag is whether model compliance documentation pre-processing is initialized
  • string value is the initialization status

Examples

import datarobot as dr

dr.Client(token=my_token, endpoint=endpoint)

# NOTE: entity_id is either a model id or a model package id
doc = dr.AutomatedDocument(
        document_type="MODEL_COMPLIANCE",
        entity_id="6f50cdb77cc4f8d1560c3ed5",
        output_format="docx",
        locale="EN_US")

doc.initialize_model_compliance()
generate(max_wait: int = 600) → requests.models.Response

Request generation of an automated document.

Required attributes to request document generation: document_type, entity_id, and output_format.

Returns:
requests.models.Response

Examples

import datarobot as dr

dr.Client(token=my_token, endpoint=endpoint)

doc = dr.AutomatedDocument(
        document_type="MODEL_COMPLIANCE",
        entity_id="6f50cdb77cc4f8d1560c3ed5",
        output_format="docx",
        locale="EN_US",
        template_id="50efc9db8aff6c81a374aeec",
        filepath="/Users/username/Documents/example.docx"
        )

doc.generate()
doc.download()
download()

Download a generated Automated Document. Document ID is required to download a file.

Returns:
requests.models.Response

Examples

Generating and downloading the generated document:

import datarobot as dr

dr.Client(token=my_token, endpoint=endpoint)

doc = dr.AutomatedDocument(
        document_type="AUTOPILOT_SUMMARY",
        entity_id="6050d07d9da9053ebb002ef7",
        output_format="docx",
        filepath="/Users/username/Documents/Project_Report_1.docx"
        )

doc.generate()
doc.download()

Downloading an earlier generated document when you know the document ID:

import datarobot as dr

dr.Client(token=my_token, endpoint=endpoint)
doc = dr.AutomatedDocument(id='5e8b6a34d2426053ab9a39ed')
doc.download()

Notice that filepath was not set for this document. In this case, the file is saved to the directory from which the script was launched.

Downloading a document chosen from a list of earlier generated documents:

import datarobot as dr

dr.Client(token=my_token, endpoint=endpoint)

model_id = "6f5ed3de855962e0a72a96fe"
docs = dr.AutomatedDocument.list_generated_documents(entity_ids=[model_id])
doc = docs[0]
doc.filepath = "/Users/me/Desktop/Recommended_model_doc.docx"
doc.download()
delete()

Delete a document using its ID.

Returns:
requests.models.Response

Examples

import datarobot as dr

dr.Client(token=my_token, endpoint=endpoint)
doc = dr.AutomatedDocument(id="5e8b6a34d2426053ab9a39ed")
doc.delete()

If you don’t know the document ID, you can follow the same workflow to get the ID as in the examples for the AutomatedDocument.download method.

classmethod list_generated_documents(document_types=None, entity_ids=None, output_formats=None, locales=None, offset=None, limit=None)

Get information about all previously generated documents available for your account. The information includes document ID and type, ID of the entity it was generated for, time of creation, and other information.

Parameters:
document_types : List of str or None

Query for one or more document types.

entity_ids : List of str or None

Query generated documents by one or more entity IDs.

output_formats : List of str or None

Query for one or more output formats.

locales : List of str or None

Query generated documents by one or more locales.

offset: int or None

Number of items to skip. Defaults to 0 if not provided.

limit: int or None

Number of items to return, maximum number of items is 1000.

Returns:
List of AutomatedDocument objects, where each object contains attributes described in
AutomatedDocument

Examples

To get a list of all generated documents:

import datarobot as dr

dr.Client(token=my_token, endpoint=endpoint)
docs = AutomatedDocument.list_generated_documents()

To get a list of all AUTOPILOT_SUMMARY documents:

import datarobot as dr

dr.Client(token=my_token, endpoint=endpoint)
docs = AutomatedDocument.list_generated_documents(document_types=["AUTOPILOT_SUMMARY"])

To get a list of 5 recently created automated documents in html format:

import datarobot as dr

dr.Client(token=my_token, endpoint=endpoint)
docs = AutomatedDocument.list_generated_documents(output_formats=["html"], limit=5)

To get a list of automated documents created for specific entities (projects or models):

import datarobot as dr

dr.Client(token=my_token, endpoint=endpoint)
docs = AutomatedDocument.list_generated_documents(
    entity_ids=["6051d3dbef875eb3be1be036",
                "6051d3e1fbe65cd7a5f6fde6",
                "6051d3e7f86c04486c2f9584"]
    )

Note that the list of results contains AutomatedDocument objects, which means that you can execute class-related methods on them. Here’s how you can list, download, and then delete from the server all automated documents related to a certain entity:

import datarobot as dr

dr.Client(token=my_token, endpoint=endpoint)

ids = ["6051d3dbef875eb3be1be036", "5fe1d3d55cd810ebdb60c517f"]
docs = AutomatedDocument.list_generated_documents(entity_ids=ids)
for doc in docs:
    doc.download()
    doc.delete()

Class Mapping Aggregation Settings

For multiclass projects with many unique values in the target column, you can specify parameters for aggregating rare values to improve modeling performance and decrease the runtime and resource usage of the resulting models.

class datarobot.helpers.ClassMappingAggregationSettings(max_unaggregated_class_values: Optional[int] = None, min_class_support: Optional[int] = None, excluded_from_aggregation: Optional[List[str]] = None, aggregation_class_name: Optional[str] = None)

Class mapping aggregation settings. For multiclass projects, this allows fine control over which target values will be preserved as classes. Classes that are not preserved will either be aggregated into a single “catch everything else” class (for multiclass projects) or ignored (for multilabel projects). All attributes are optional; if not specified, server-side defaults will be used.

Attributes:
max_unaggregated_class_values : int, optional

Maximum number of unique values allowed before aggregation kicks in.

min_class_support : int, optional

Minimum number of instances necessary for each target value in the dataset. All values with fewer instances will be aggregated.

excluded_from_aggregation : list, optional

List of target values that are guaranteed to be kept as is, regardless of other settings.

aggregation_class_name : str, optional

If some of the values are aggregated, this is the name of the aggregation class that will replace them.
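
A minimal sketch of building the settings (the class names and thresholds are illustrative, and the keyword used to pass them when setting the project target is an assumption):

from datarobot.helpers import ClassMappingAggregationSettings

settings = ClassMappingAggregationSettings(
    max_unaggregated_class_values=20,
    min_class_support=100,
    excluded_from_aggregation=['class_a', 'class_b'],
    aggregation_class_name='OTHER')
# Assumed keyword; pass the settings when setting the project target.
project.set_target(target='some_target',
                   class_mapping_aggregation_settings=settings)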

Client Configuration

datarobot.client.Client(token: Optional[str] = None, endpoint: Optional[str] = None, config_path: Optional[str] = None, connect_timeout: Optional[int] = None, user_agent_suffix: Optional[str] = None, ssl_verify: bool = True, max_retries: Union[int, urllib3.util.retry.Retry, None] = None, token_type: str = 'Token') → datarobot.rest.RESTClientObject

Configures the global API client for the Python SDK with optional configuration. Missing configuration is read from environment variables or the config file.

Parameters:
token : str, optional

API token

endpoint : str, optional

Base url of API

config_path : str, optional

Alternate location of config file

connect_timeout : int, optional

How long the client should be willing to wait when establishing a connection with the server.

user_agent_suffix : str, optional

Additional text that is appended to the User-Agent HTTP header when communicating with the DataRobot REST API. This can be useful for identifying different applications that are built on top of the DataRobot Python Client, which can aid debugging and help track usage.

ssl_verify : bool or str, optional

Whether to verify the SSL certificate. Can also be set to a path to certificates of trusted certification authorities.

max_retries : int or datarobot.rest.Retry, optional

Either an integer number of times to retry connection errors, or a urllib3.util.retry.Retry object to configure retries.

token_type: str, “Token” by default

Authentication token type: “Token” or “Bearer”. “Bearer” is for a DataRobot OAuth2 token; “Token” is for a token generated in Developer Tools.

Returns:
The RESTClientObject instance created.
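
A minimal sketch of configuring the client explicitly at the start of a script (the token value is a placeholder):

import datarobot as dr

dr.Client(token='your-api-token', endpoint='https://app.datarobot.com/api/v2')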
datarobot.client.set_client(client: datarobot.rest.RESTClientObject) → Optional[datarobot.rest.RESTClientObject]

Configure the global HTTP client for the Python SDK. Returns previous instance.

datarobot.client.client_configuration(*args, **kwargs)

This context manager can be used to temporarily change the global HTTP client.

In multithreaded scenarios, it is highly recommended to use a fresh manager object per thread.

DataRobot does not recommend nesting these contexts.

Parameters:
args : Parameters passed to datarobot.client.Client()
kwargs : Keyword arguments passed to datarobot.Client()

Examples

from datarobot.client import client_configuration
from datarobot.models import Project

with client_configuration(token="api-key-here", endpoint="https://host-name.com"):
    Project.list()

from datarobot.client import Client, client_configuration
from datarobot.models import Project

Client()  # Interact with DataRobot using the default configuration.
Project.list()

with client_configuration(config_path="/path/to/a/drconfig.yaml"):
    # Interact with DataRobot using a different configuration.
    Project.list()
class datarobot.rest.RESTClientObject(auth: str, endpoint: str, connect_timeout: Optional[int] = 6.05, verify: bool = True, user_agent_suffix: Optional[str] = None, max_retries: Union[int, urllib3.util.retry.Retry, None] = None, authentication_type: Optional[str] = None)
Parameters:
connect_timeout

timeout for http request and connection

headers

headers for outgoing requests

open_in_browser() → None

Opens the DataRobot app in a web browser, or logs the URL if a browser is not available.

Clustering

class datarobot.models.ClusteringModel(id=None, processes=None, featurelist_name=None, featurelist_id=None, project_id=None, sample_pct=None, training_row_count=None, training_duration=None, training_start_date=None, training_end_date=None, model_type=None, model_category=None, is_frozen=None, is_n_clusters_dynamically_determined=None, blueprint_id=None, metrics=None, project=None, monotonic_increasing_featurelist_id=None, monotonic_decreasing_featurelist_id=None, n_clusters=None, has_empty_clusters=None, supports_monotonic_constraints=None, is_starred=None, prediction_threshold=None, prediction_threshold_read_only=None, model_number=None, parent_model_id=None, use_project_settings=None, supports_composable_ml=None)

ClusteringModel extends the Model class. It provides properties and methods specific to clustering projects.

compute_insights(max_wait: int = 600) → List[datarobot.models.cluster_insight.ClusterInsight]

Compute and retrieve cluster insights for the model. This method waits for the job computing cluster insights to complete and returns the results when it finishes. If the computation takes longer than the specified max_wait, an exception is raised.

Parameters:
max_wait: int

Maximum number of seconds to wait before giving up.

Returns:
List of ClusterInsight
Raises:
ClientError

Server rejected creation due to client error. Most likely cause is bad project_id or model_id.

AsyncFailureError

If any of the responses from the server are unexpected

AsyncProcessUnsuccessfulError

If the cluster insights computation has failed or was cancelled.

AsyncTimeoutError

If the cluster insights computation did not resolve in time

insights

Return actual list of cluster insights if already computed.

Returns:
List of ClusterInsight
clusters

Return actual list of Clusters.

Returns:
List of Cluster
update_cluster_names(cluster_name_mappings: List[Tuple[str, str]]) → List[datarobot.models.cluster.Cluster]

Change many cluster names at once based on list of name mappings.

Parameters:
cluster_name_mappings: List of tuples

Cluster name mappings, each consisting of the current cluster name and the new cluster name. Example:

cluster_name_mappings = [
    ("current cluster name 1", "new cluster name 1"),
    ("current cluster name 2", "new cluster name 2")]
Returns:
List of Cluster
Raises:
datarobot.errors.ClientError

Server rejected update of cluster names. Possible reasons include: incorrect format of mapping, mapping introduces duplicates.

update_cluster_name(current_name: str, new_name: str) → List[datarobot.models.cluster.Cluster]

Change cluster name from current_name to new_name.

Parameters:
current_name: str

Current cluster name.

new_name: str

New cluster name.

Returns:
List of Cluster
Raises:
datarobot.errors.ClientError

Server rejected update of cluster names.
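
A minimal end-to-end sketch (the project and model ids, cluster names, and retrieval via the inherited get method are illustrative assumptions):

import datarobot as dr

model = dr.models.ClusteringModel.get('<project_id>', '<model_id>')
insights = model.compute_insights()   # waits for the insight job to finish
print([cluster.name for cluster in model.clusters])
model.update_cluster_names([
    ('Cluster 1', 'High value customers'),
    ('Cluster 2', 'At-risk customers')])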

class datarobot.models.cluster.Cluster(**kwargs)

Representation of a single cluster.

Attributes:
name: str

Current cluster name

percent: float

Percent of data contained in the cluster. This value is reported after cluster insights are computed for the model.

classmethod list(project_id: str, model_id: str) → List[datarobot.models.cluster.Cluster]

Retrieve a list of clusters in the model.

Parameters:
project_id: str

ID of the project that the model is part of.

model_id: str

ID of the model.

Returns:
List of clusters
classmethod update_multiple_names(project_id: str, model_id: str, cluster_name_mappings: List[Tuple[str, str]]) → List[datarobot.models.cluster.Cluster]

Update many clusters at once based on list of name mappings.

Parameters:
project_id: str

ID of the project that the model is part of.

model_id: str

ID of the model.

cluster_name_mappings: List of tuples

Cluster name mappings, each consisting of the current and new names for the cluster. Example:

cluster_name_mappings = [
    ("current cluster name 1", "new cluster name 1"),
    ("current cluster name 2", "new cluster name 2")]
Returns:
List of clusters
Raises:
datarobot.errors.ClientError

Server rejected update of cluster names.

ValueError

Invalid cluster name mapping provided.

classmethod update_name(project_id: str, model_id: str, current_name: str, new_name: str) → List[datarobot.models.cluster.Cluster]

Change cluster name from current_name to new_name

Parameters:
project_id: str

ID of the project that the model is part of.

model_id: str

ID of the model.

current_name: str

Current cluster name

new_name: str

New cluster name

Returns:
List of Cluster
class datarobot.models.cluster_insight.ClusterInsight(**kwargs)

Holds data on all insights related to a feature, as well as a breakdown per cluster.

Parameters:
feature_name: str

Name of a feature from the dataset.

feature_type: str

Type of feature.

insights : List of classes (ClusterInsight)

The list provides information regarding the importance of a specific feature in relation to each cluster. The results help in understanding how the model groups data and what each cluster represents.

feature_impact: float

Impact of a feature ranging from 0 to 1.

classmethod compute(project_id: str, model_id: str, max_wait: int = 600) → List[datarobot.models.cluster_insight.ClusterInsight]

Starts computation of cluster insights for the model and, if successful, returns the computed ClusterInsights. This method allows the calculation to continue for a specified time; if it does not complete in that time, the request is cancelled.

Parameters:
project_id: str

ID of the project to begin creation of cluster insights for.

model_id: str

ID of the project model to begin creation of cluster insights for.

max_wait: int

Maximum number of seconds to wait before canceling the request.

Returns:
List[ClusterInsight]
Raises:
ClientError

Server rejected creation due to client error. Most likely cause is bad project_id or model_id.

AsyncFailureError

If any of the responses from the server are unexpected.

AsyncProcessUnsuccessfulError

If the cluster insights computation failed or was cancelled.

AsyncTimeoutError

If the cluster insights computation did not resolve within the specified time limit (max_wait).

Compliance Documentation Templates

class datarobot.models.compliance_doc_template.ComplianceDocTemplate(id, creator_id, creator_username, name, org_id=None, sections=None)

A compliance documentation template. Templates are used to customize contents of AutomatedDocument.

New in version v2.14.

Notes

Each section dictionary has the following schema:

  • title : title of the section
  • type : type of section. Must be one of “datarobot”, “user” or “table_of_contents”.

Each type of section has a different set of attributes, described below.

Sections of type "datarobot" represent sections owned by DataRobot. DataRobot sections have the following additional attributes:

  • content_id : The identifier of the content in this section. You can get the default template with get_default for a complete list of possible DataRobot section content ids.
  • sections : list of sub-section dicts nested under the parent section.

Sections of type "user" represent sections with user-defined content. Those sections may contain text written by the user and have the following additional fields:

  • regularText : regular text of the section, optionally separated by \n to split paragraphs.
  • highlightedText : highlighted text of the section, optionally separated by \n to split paragraphs.
  • sections : list of sub-section dicts nested under the parent section.

Sections of type "table_of_contents" represent a table of contents and have no additional attributes.
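
For illustration, a sections list following this schema might look like the sketch below (the content id and text are placeholders); such a list can be passed to create or update described later in this section:

sections = [
    {'title': 'Table of Contents', 'type': 'table_of_contents'},
    {'title': 'Project Overview',
     'type': 'user',
     'regularText': 'First paragraph.\nSecond paragraph.',
     'highlightedText': 'Internal use only.'},
    {'title': 'Model Details',
     'type': 'datarobot',
     'content_id': '<datarobot_section_content_id>',
     'sections': []},
]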

Attributes:
id : str

the id of the template

name : str

the name of the template.

creator_id : str

the id of the user who created the template

creator_username : str

username of the user who created the template

org_id : str

the id of the organization the template belongs to

sections : list of dicts

the sections of the template describing the structure of the document. Section schema is described in Notes section above.

classmethod get_default(template_type=None)

Get a default DataRobot template. This template is used for generating compliance documentation when no template is specified.

Parameters:
template_type : str or None

Type of the template. Currently supported values are “normal” and “time_series”

Returns:
template : ComplianceDocTemplate

the default template object with sections attribute populated with default sections.

classmethod create_from_json_file(name, path)

Create a template with the specified name and sections in a JSON file.

This is useful when working with sections in a JSON file. Example:

default_template = ComplianceDocTemplate.get_default()
default_template.sections_to_json_file('path/to/example.json')
# ... edit example.json in your editor
my_template = ComplianceDocTemplate.create_from_json_file(
    name='my template',
    path='path/to/example.json'
)
Parameters:
name : str

the name of the template. Must be unique for your user.

path : str

the path to find the JSON file at

Returns:
template : ComplianceDocTemplate

the created template

classmethod create(name, sections)

Create a template with the specified name and sections.

Parameters:
name : str

the name of the template. Must be unique for your user.

sections : list

list of section objects

Returns:
template : ComplianceDocTemplate

the created template

classmethod get(template_id)

Retrieve a specific template.

Parameters:
template_id : str

the id of the template to retrieve

Returns:
template : ComplianceDocTemplate

the retrieved template

classmethod list(name_part=None, limit=None, offset=None)

Get a paginated list of compliance documentation template objects.

Parameters:
name_part : str or None

Return only the templates with names matching the specified string. The matching is case-insensitive.

limit : int

The number of records to return. The server will use a (possibly finite) default if not specified.

offset : int

The number of records to skip.

Returns:
templates : list of ComplianceDocTemplate

the list of template objects

sections_to_json_file(path, indent=2)

Save the sections of the template to a json file at the specified path.

Parameters:
path : str

the path to save the file to

indent : int

indentation to use in the json file.

update(name=None, sections=None)

Update the name or sections of an existing doc template.

Note that default or non-existent templates cannot be updated.

Parameters:
name : str, optional

the new name for the template

sections : list of dicts

list of sections
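
Examples

A minimal sketch of appending a user section to an existing template (the template id and text are illustrative):

from datarobot.models.compliance_doc_template import ComplianceDocTemplate

template = ComplianceDocTemplate.get('<template_id>')
sections = template.sections
sections.append({'title': 'Additional Notes',
                 'type': 'user',
                 'regularText': 'Reviewed by the validation team.'})
template.update(sections=sections)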

delete()

Delete the compliance documentation template.

Confusion Chart

class datarobot.models.confusion_chart.ConfusionChart(source, data, source_model_id)

Confusion Chart data for model.

Notes

ClassMetrics is a dict containing the following:

  • class_name (string) name of the class
  • actual_count (int) number of times this class is seen in the validation data
  • predicted_count (int) number of times this class has been predicted for the validation data
  • f1 (float) F1 score
  • recall (float) recall score
  • precision (float) precision score
  • was_actual_percentages (list of dict) one vs all actual percentages in format specified below.
    • other_class_name (string) the name of the other class
    • percentage (float) the percentage of the times this class was predicted when it was actually the other class (from 0 to 1)
  • was_predicted_percentages (list of dict) one vs all predicted percentages in format specified below.
    • other_class_name (string) the name of the other class
    • percentage (float) the percentage of the times this class was the actual class when the other class was predicted (from 0 to 1)
  • confusion_matrix_one_vs_all (list of list) 2d list representing 2x2 one vs all matrix.
    • This represents the True/False Negative/Positive rates as integers for each class. The data structure looks like:
    • [ [ True Negative, False Positive ], [ False Negative, True Positive ] ]
Attributes:
source : str

Confusion Chart data source. Can be ‘validation’, ‘crossValidation’ or ‘holdout’.

raw_data : dict

All of the raw data for the Confusion Chart

confusion_matrix : list of list

The NxN confusion matrix

classes : list

The names of each of the classes

class_metrics : list of dicts

List of dicts with schema described as ClassMetrics above.

source_model_id : str

ID of the model this confusion chart represents; in some cases, insights from the parent of a frozen model may be used.
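
A minimal sketch of reading the chart data (assuming model is a multiclass Model object and that the chart is retrieved through the model's get_confusion_chart method):

chart = model.get_confusion_chart('validation')
chart.classes            # names of the classes
chart.confusion_matrix   # NxN matrix of counts
chart.class_metrics[0]   # per-class metrics as described in the Notes above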

Credentials

class datarobot.models.Credential(credential_id: Optional[str] = None, name: Optional[str] = None, credential_type: Optional[str] = None, creation_date: Optional[datetime.datetime] = None, description: Optional[str] = None)
classmethod list() → List[datarobot.models.credential.Credential]

Returns list of available credentials.

Returns:
credentials : list of Credential instances

contains a list of available credentials.

Examples

>>> import datarobot as dr
>>> data_sources = dr.Credential.list()
>>> data_sources
[
    Credential('5e429d6ecf8a5f36c5693e03', 'my_s3_cred', 's3'),
    Credential('5e42cc4dcf8a5f3256865840', 'my_jdbc_cred', 'jdbc'),
]
classmethod get(credential_id: str) → datarobot.models.credential.Credential

Gets the Credential.

Parameters:
credential_id : str

the identifier of the credential.

Returns:
credential : Credential

the requested credential.

Examples

>>> import datarobot as dr
>>> cred = dr.Credential.get('5a8ac9ab07a57a0001be501f')
>>> cred
Credential('5e429d6ecf8a5f36c5693e03', 'my_s3_cred', 's3'),
delete() → None

Deletes the Credential from the credential store.

Examples

>>> import datarobot as dr
>>> cred = dr.Credential.get('5a8ac9ab07a57a0001be501f')
>>> cred.delete()
classmethod create_basic(name: str, user: str, password: str, description: Optional[str] = None) → datarobot.models.credential.Credential

Creates the credentials.

Parameters:
name : str

the name to use for this set of credentials.

user : str

the username to store for this set of credentials.

password : str

the password to store for this set of credentials.

description : str, optional

the description to use for this set of credentials.

Returns:
credential : Credential

the created credential.

Examples

>>> import datarobot as dr
>>> cred = dr.Credential.create_basic(
...     name='my_basic_cred',
...     user='username',
...     password='password',
... )
>>> cred
Credential('5e429d6ecf8a5f36c5693e03', 'my_basic_cred', 'basic'),
classmethod create_oauth(name: str, token: str, refresh_token: str, description: Optional[str] = None) → datarobot.models.credential.Credential

Creates the OAUTH credentials.

Parameters:
name : str

the name to use for this set of credentials.

token: str

the OAUTH token

refresh_token: str

The OAUTH refresh token

description : str, optional

the description to use for this set of credentials.

Returns:
credential : Credential

the created credential.

Examples

>>> import datarobot as dr
>>> cred = dr.Credential.create_oauth(
...     name='my_oauth_cred',
...     token='XXX',
...     refresh_token='YYY',
... )
>>> cred
Credential('5e429d6ecf8a5f36c5693e03', 'my_oauth_cred', 'oauth'),
classmethod create_s3(name: str, aws_access_key_id: Optional[str] = None, aws_secret_access_key: Optional[str] = None, aws_session_token: Optional[str] = None, description: Optional[str] = None) → datarobot.models.credential.Credential

Creates the S3 credentials.

Parameters:
name : str

the name to use for this set of credentials.

aws_access_key_id : str, optional

the AWS access key id.

aws_secret_access_key : str, optional

the AWS secret access key.

aws_session_token : str, optional

the AWS session token.

description : str, optional

the description to use for this set of credentials.

Returns:
credential : Credential

the created credential.

Examples

>>> import datarobot as dr
>>> cred = dr.Credential.create_s3(
...     name='my_s3_cred',
...     aws_access_key_id='XXX',
...     aws_secret_access_key='YYY',
...     aws_session_token='ZZZ',
... )
>>> cred
Credential('5e429d6ecf8a5f36c5693e03', 'my_s3_cred', 's3'),
classmethod create_azure(name: str, azure_connection_string: str, description: Optional[str] = None) → datarobot.models.credential.Credential

Creates the Azure storage credentials.

Parameters:
name : str

the name to use for this set of credentials.

azure_connection_string : str

the Azure connection string.

description : str, optional

the description to use for this set of credentials.

Returns:
credential : Credential

the created credential.

Examples

>>> import datarobot as dr
>>> cred = dr.Credential.create_azure(
...     name='my_azure_cred',
...     azure_connection_string='XXX',
... )
>>> cred
Credential('5e429d6ecf8a5f36c5693e03', 'my_azure_cred', 'azure'),
classmethod create_gcp(name: str, gcp_key: Union[str, Dict[str, str], None] = None, description: Optional[str] = None) → datarobot.models.credential.Credential

Creates the GCP credentials.

Parameters:
name : str

the name to use for this set of credentials.

gcp_key : str | dict

the GCP key in json format or parsed as dict.

description : str, optional

the description to use for this set of credentials.

Returns:
credential : Credential

the created credential.

Examples

>>> import datarobot as dr
>>> cred = dr.Credential.create_gcp(
...     name='my_gcp_cred',
...     gcp_key='XXX',
... )
>>> cred
Credential('5e429d6ecf8a5f36c5693e03', 'my_gcp_cred', 'gcp'),

Custom Models

class datarobot.models.custom_model_version.CustomModelFileItem(id, file_name, file_path, file_source, created_at=None)

A file item attached to a DataRobot custom model version.

New in version v2.21.

Attributes:
id: str

id of the file item

file_name: str

name of the file item

file_path: str

path of the file item

file_source: str

source of the file item

created_at: str, optional

ISO-8601 formatted timestamp of when the version was created

class datarobot.CustomInferenceModel(**kwargs)

A custom inference model.

New in version v2.21.

Attributes:
id: str

id of the custom model

name: str

name of the custom model

language: str

programming language of the custom model. Can be “python”, “r”, “java” or “other”

description: str

description of the custom model

target_type: datarobot.TARGET_TYPE

target type of the custom inference model. Values: [datarobot.TARGET_TYPE.BINARY, datarobot.TARGET_TYPE.REGRESSION, datarobot.TARGET_TYPE.MULTICLASS, datarobot.TARGET_TYPE.UNSTRUCTURED, datarobot.TARGET_TYPE.ANOMALY]

target_name: str, optional

Target feature name; it is optional (and ignored if provided) for the datarobot.TARGET_TYPE.UNSTRUCTURED or datarobot.TARGET_TYPE.ANOMALY target types

latest_version: datarobot.CustomModelVersion or None

latest version of the custom model if the model has a latest version

deployments_count: int

number of deployments of the custom model

target_name: str

custom model target name

positive_class_label: str

for binary classification projects, a label of a positive class

negative_class_label: str

for binary classification projects, a label of a negative class

prediction_threshold: float

for binary classification projects, a threshold used for predictions

training_data_assignment_in_progress: bool

flag describing if training data assignment is in progress

training_dataset_id: str, optional

id of a dataset assigned to the custom model

training_dataset_version_id: str, optional

id of a dataset version assigned to the custom model

training_data_file_name: str, optional

name of assigned training data file

training_data_partition_column: str, optional

name of a partition column in a training dataset assigned to the custom model

created_by: str

username of the user who created the custom model

updated_at: str

ISO-8601 formatted timestamp of when the custom model was updated

created_at: str

ISO-8601 formatted timestamp of when the custom model was created

network_egress_policy: datarobot.NETWORK_EGRESS_POLICY, optional

Determines whether the given custom model is isolated, or can access the public network. Can be either ‘datarobot.NONE’ or ‘datarobot.PUBLIC’

maximum_memory: int, optional

The maximum memory that might be allocated by the custom-model. If exceeded, the custom-model will be killed by k8s

replicas: int, optional

A fixed number of replicas that will be deployed in the cluster

classmethod list(is_deployed=None, search_for=None, order_by=None)

List custom inference models available to the user.

New in version v2.21.

Parameters:
is_deployed: bool, optional

flag for filtering custom inference models. If set to True, only deployed custom inference models are returned. If set to False, only custom inference models that are not deployed are returned

search_for: str, optional

string for filtering custom inference models - only custom inference models that contain the string in name or description will be returned. If not specified, all custom models will be returned

order_by: str, optional

property to sort custom inference models by. Supported properties are “created” and “updated”. Prefix the attribute name with a dash to sort in descending order, e.g. order_by=’-created’. By default, the order_by parameter is None which will result in custom models being returned in order of creation time descending

Returns:
List[CustomInferenceModel]

a list of custom inference models.

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

classmethod get(custom_model_id)

Get custom inference model by id.

New in version v2.21.

Parameters:
custom_model_id: str

id of the custom inference model

Returns:
CustomInferenceModel

retrieved custom inference model

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status.

datarobot.errors.ServerError

if the server responded with 5xx status.

download_latest_version(file_path)

Download the latest custom inference model version.

New in version v2.21.

Parameters:
file_path: str

path to create a file with custom model version content

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status.

datarobot.errors.ServerError

if the server responded with 5xx status.

classmethod create(name, target_type, target_name=None, language=None, description=None, positive_class_label=None, negative_class_label=None, prediction_threshold=None, class_labels=None, class_labels_file=None, network_egress_policy=None, maximum_memory=None, replicas=None)

Create a custom inference model.

New in version v2.21.

Parameters:
name: str

name of the custom inference model

target_type: datarobot.TARGET_TYPE

target type of the custom inference model. Values: [datarobot.TARGET_TYPE.BINARY, datarobot.TARGET_TYPE.REGRESSION, datarobot.TARGET_TYPE.MULTICLASS, datarobot.TARGET_TYPE.UNSTRUCTURED]

target_name: str, optional

Target feature name; it is optional (and ignored if provided) for the datarobot.TARGET_TYPE.UNSTRUCTURED target type

language: str, optional

programming language of the custom learning model

description: str, optional

description of the custom learning model

positive_class_label: str, optional

custom inference model positive class label for binary classification

negative_class_label: str, optional

custom inference model negative class label for binary classification

prediction_threshold: float, optional

custom inference model prediction threshold

class_labels: List[str], optional

custom inference model class labels for multiclass classification Cannot be used with class_labels_file

class_labels_file: str, optional

path to file containing newline separated class labels for multiclass classification. Cannot be used with class_labels

network_egress_policy: datarobot.NETWORK_EGRESS_POLICY, optional

Determines whether the given custom model is isolated, or can access the public network. Can be either ‘datarobot.NONE’ or ‘datarobot.PUBLIC’

maximum_memory: int, optional

The maximum memory that might be allocated by the custom-model. If exceeded, the custom-model will be killed by k8s

replicas: int, optional

A fixed number of replicas that will be deployed in the cluster

Returns:
CustomInferenceModel

created a custom inference model

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status
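
A minimal sketch of creating a binary classification custom inference model (the name, target, and labels are illustrative):

import datarobot as dr

custom_model = dr.CustomInferenceModel.create(
    name='my custom model',
    target_type=dr.TARGET_TYPE.BINARY,
    target_name='readmitted',
    positive_class_label='True',
    negative_class_label='False',
    language='python')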

classmethod copy_custom_model(custom_model_id)

Create a custom inference model by copying existing one.

New in version v2.21.

Parameters:
custom_model_id: str

id of the custom inference model to copy

Returns:
CustomInferenceModel

created a custom inference model

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

update(name=None, language=None, description=None, target_name=None, positive_class_label=None, negative_class_label=None, prediction_threshold=None, class_labels=None, class_labels_file=None)

Update custom inference model properties.

New in version v2.21.

Parameters:
name: str, optional

new custom inference model name

language: str, optional

new custom inference model programming language

description: str, optional

new custom inference model description

target_name: str, optional

new custom inference model target name

positive_class_label: str, optional

new custom inference model positive class label

negative_class_label: str, optional

new custom inference model negative class label

prediction_threshold: float, optional

new custom inference model prediction threshold

class_labels: List[str], optional

custom inference model class labels for multiclass classification Cannot be used with class_labels_file

class_labels_file: str, optional

path to file containing newline separated class labels for multiclass classification. Cannot be used with class_labels

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status.

datarobot.errors.ServerError

if the server responded with 5xx status.

refresh()

Update custom inference model with the latest data from server.

New in version v2.21.

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

delete()

Delete custom inference model.

New in version v2.21.

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

assign_training_data(dataset_id, partition_column=None, max_wait=600)

Assign training data to the custom inference model.

New in version v2.21.

Parameters:
dataset_id: str

the id of the training dataset to be assigned

partition_column: str, optional

name of a partition column in the training dataset

max_wait: int, optional

max time to wait for the training data assignment. If set to None, the method will return without waiting. Defaults to 600 seconds (10 minutes)

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status
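
A minimal sketch of assigning training data from the AI Catalog (the custom model id and file path are illustrative):

import datarobot as dr

custom_model = dr.CustomInferenceModel.get('<custom_model_id>')
dataset = dr.Dataset.create_from_file('/home/user/Documents/myModel/training_data.csv')
custom_model.assign_training_data(dataset.id)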

class datarobot.CustomModelTest(**kwargs)

A custom model test.

New in version v2.21.

Attributes:
id: str

test id

custom_model_image_id: str

id of a custom model image

image_type: str

the type of the image, either CUSTOM_MODEL_IMAGE_TYPE.CUSTOM_MODEL_IMAGE if the testing attempt is using a CustomModelImage as its model or CUSTOM_MODEL_IMAGE_TYPE.CUSTOM_MODEL_VERSION if the testing attempt is using a CustomModelVersion with dependency management

overall_status: str

a string representing the testing status. The status can be one of:
  • ‘not_tested’: the check was not run
  • ‘failed’: the check failed
  • ‘succeeded’: the check succeeded
  • ‘warning’: the check resulted in a warning, or in a non-critical failure
  • ‘in_progress’: the check is in progress

detailed_status: dict

detailed testing status - maps the testing types to their status and message. The keys of the dict are one of ‘errorCheck’, ‘nullValueImputation’, ‘longRunningService’, or ‘sideEffects’. The values are dicts with ‘message’ and ‘status’ keys.

created_by: str

a user who created a test

dataset_id: str, optional

id of a dataset used for testing

dataset_version_id: str, optional

id of a dataset version used for testing

completed_at: str, optional

ISO-8601 formatted timestamp of when the test has completed

created_at: str, optional

ISO-8601 formatted timestamp of when the version was created

network_egress_policy: datarobot.NETWORK_EGRESS_POLICY, optional

Determines whether the given custom model is isolated, or can access the public network. Can be either ‘datarobot.NONE’ or ‘datarobot.PUBLIC’

maximum_memory: int, optional

The maximum memory that might be allocated by the custom-model. If exceeded, the custom-model will be killed by k8s

replicas: int, optional

A fixed number of replicas that will be deployed in the cluster

classmethod create(custom_model_id, custom_model_version_id, dataset_id=None, max_wait=600, network_egress_policy=None, maximum_memory=None, replicas=None)

Create and start a custom model test.

New in version v2.21.

Parameters:
custom_model_id: str

the id of the custom model

custom_model_version_id: str

the id of the custom model version

dataset_id: str, optional

The id of the testing dataset for non-unstructured custom models. Ignored and not required for unstructured models.

max_wait: int, optional

max time to wait for test completion. If set to None, the method will return without waiting.

network_egress_policy: datarobot.NETWORK_EGRESS_POLICY, optional

Determines whether the given custom model is isolated, or can access the public network. Can be either ‘datarobot.NONE’ or ‘datarobot.PUBLIC’

maximum_memory: int, optional

The maximum memory that might be allocated by the custom-model. If exceeded, the custom-model will be killed by k8s

replicas: int, optional

A fixed number of replicas that will be deployed in the cluster

Returns:
CustomModelTest

created custom model test

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status
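
A minimal sketch of starting a test and checking its outcome (the ids are illustrative; the call blocks until completion or until max_wait is reached):

import datarobot as dr

test = dr.CustomModelTest.create(
    custom_model_id='<custom_model_id>',
    custom_model_version_id='<custom_model_version_id>',
    dataset_id='<testing_dataset_id>')
print(test.overall_status)   # e.g. 'succeeded'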

classmethod list(custom_model_id)

List custom model tests.

New in version v2.21.

Parameters:
custom_model_id: str

the id of the custom model

Returns:
List[CustomModelTest]

a list of custom model tests

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

classmethod get(custom_model_test_id)

Get custom model test by id.

New in version v2.21.

Parameters:
custom_model_test_id: str

the id of the custom model test

Returns:
CustomModelTest

retrieved custom model test

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status.

datarobot.errors.ServerError

if the server responded with 5xx status.

get_log()

Get log of a custom model test.

New in version v2.21.

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

get_log_tail()

Get log tail of a custom model test.

New in version v2.21.

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

cancel()

Cancel custom model test that is in progress.

New in version v2.21.

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

refresh()

Update custom model test with the latest data from server.

New in version v2.21.

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

class datarobot.CustomModelVersion(**kwargs)

A version of a DataRobot custom model.

New in version v2.21.

Attributes:
id: str

id of the custom model version

custom_model_id: str

id of the custom model

version_minor: int

a minor version number of custom model version

version_major: int

a major version number of custom model version

is_frozen: bool

a flag if the custom model version is frozen

items: List[CustomModelFileItem]

a list of file items attached to the custom model version

base_environment_id: str

id of the environment to use with the model

base_environment_version_id: str

id of the environment version to use with the model

label: str, optional

short human readable string to label the version

description: str, optional

custom model version description

created_at: str, optional

ISO-8601 formatted timestamp of when the version was created

dependencies: List[CustomDependency]

the parsed dependencies of the custom model version if the version has a valid requirements.txt file

network_egress_policy: datarobot.NETWORK_EGRESS_POLICY, optional

Determines whether the given custom model is isolated, or can access the public network. Can be either ‘datarobot.NONE’ or ‘datarobot.PUBLIC’

maximum_memory: int, optional

The maximum memory that might be allocated by the custom-model. If exceeded, the custom-model will be killed by k8s

replicas: int, optional

A fixed number of replicas that will be deployed in the cluster

required_metadata_values: List[RequiredMetadataValue]

Additional parameters required by the execution environment. The required keys are defined by the fieldNames in the base environment’s requiredMetadataKeys.

classmethod from_server_data(data, keep_attrs=None)

Instantiate an object of this class using the data directly from the server, meaning that the keys may have the wrong camel casing

Parameters:
data : dict

The directly translated dict of JSON from the server. No casing fixes have taken place

keep_attrs : iterable

List, set or tuple of the dotted namespace notations for attributes to keep within the object structure even if their values are None

classmethod create_clean(custom_model_id, base_environment_id, is_major_update=True, folder_path=None, files=None, network_egress_policy=None, maximum_memory=None, replicas=None, required_metadata_values=None)

Create a custom model version without files from previous versions.

New in version v2.21.

Parameters:
custom_model_id: str

the id of the custom model

base_environment_id: str

the id of the base environment to use with the custom model version

is_major_update: bool

the flag defining if a custom model version will be a minor or a major version. Defaults to True

folder_path: str, optional

the path to a folder containing files to be uploaded. Each file in the folder is uploaded under path relative to a folder path

files: list, optional

the list of tuples, where the values in each tuple are the local filesystem path and the path the file should be placed at in the model. If the list contains strings, their basenames will be used as the in-model paths. Example: [(“/home/user/Documents/myModel/file1.txt”, “file1.txt”), (“/home/user/Documents/myModel/folder/file2.txt”, “folder/file2.txt”)] or [“/home/user/Documents/myModel/file1.txt”, “/home/user/Documents/myModel/folder/file2.txt”]

network_egress_policy: datarobot.NETWORK_EGRESS_POLICY, optional

Determines whether the given custom model is isolated, or can access the public network. Can be either ‘datarobot.NONE’ or ‘datarobot.PUBLIC’

maximum_memory: int, optional

The maximum memory that might be allocated by the custom-model. If exceeded, the custom-model will be killed by k8s

replicas: int, optional

A fixed number of replicas that will be deployed in the cluster

required_metadata_values: List[RequiredMetadataValue]

Additional parameters required by the execution environment. The required keys are defined by the fieldNames in the base environment’s requiredMetadataKeys.

Returns:
CustomModelVersion

created custom model version

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status
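
Examples

A minimal sketch of creating a fresh version from a local folder; the IDs and path below are placeholders, not values from a real installation.

import datarobot as dr

# Upload every file found under the folder as a brand-new (major) version.
version = dr.CustomModelVersion.create_clean(
    custom_model_id='custom-model-id',
    base_environment_id='base-environment-id',
    folder_path='/home/user/Documents/myModel',
)
print(version.id, version.version_major, version.version_minor)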

classmethod create_from_previous(custom_model_id, base_environment_id, is_major_update=True, folder_path=None, files=None, files_to_delete=None, network_egress_policy=None, maximum_memory=None, replicas=None, required_metadata_values=None)

Create a custom model version containing files from a previous version.

New in version v2.21.

Parameters:
custom_model_id: str

the id of the custom model

base_environment_id: str

the id of the base environment to use with the custom model version

is_major_update: bool, optional

the flag defining whether the new custom model version will be a minor or a major version. Defaults to True.

folder_path: str, optional

the path to a folder containing files to be uploaded. Each file in the folder is uploaded under its path relative to the folder path

files: list, optional

a list of tuples, where each tuple contains the local filesystem path and the path the file should be placed at within the model. If the list contains strings, the file basenames are used as the in-model paths. Example: [("/home/user/Documents/myModel/file1.txt", "file1.txt"), ("/home/user/Documents/myModel/folder/file2.txt", "folder/file2.txt")] or ["/home/user/Documents/myModel/file1.txt", "/home/user/Documents/myModel/folder/file2.txt"]

files_to_delete: list, optional

the list of file item ids to be deleted. Example: ["5ea95f7a4024030aba48e4f9", "5ea6b5da402403181895cc51"]

network_egress_policy: datarobot.NETWORK_EGRESS_POLICY, optional

Determines whether the given custom model is isolated, or can access the public network. Can be either ‘datarobot.NONE’ or ‘datarobot.PUBLIC’

maximum_memory: int, optional

The maximum memory that might be allocated by the custom-model. If exceeded, the custom-model will be killed by k8s

replicas: int, optional

A fixed number of replicas that will be deployed in the cluster

required_metadata_values: List[RequiredMetadataValue]

Additional parameters required by the execution environment. The required keys are defined by the fieldNames in the base environment’s requiredMetadataKeys.

Returns:
CustomModelVersion

created custom model version

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status
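
Examples

A minimal sketch of creating a minor version that replaces one file and drops another; all IDs and paths are placeholders.

import datarobot as dr

version = dr.CustomModelVersion.create_from_previous(
    custom_model_id='custom-model-id',
    base_environment_id='base-environment-id',
    is_major_update=False,  # bump the minor version only
    files=[('/home/user/Documents/myModel/custom.py', 'custom.py')],
    files_to_delete=['file-item-id-to-remove'],
)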

classmethod list(custom_model_id)

List custom model versions.

New in version v2.21.

Parameters:
custom_model_id: str

the id of the custom model

Returns:
List[CustomModelVersion]

a list of custom model versions

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

classmethod get(custom_model_id, custom_model_version_id)

Get custom model version by id.

New in version v2.21.

Parameters:
custom_model_id: str

the id of the custom model

custom_model_version_id: str

the id of the custom model version to retrieve

Returns:
CustomModelVersion

retrieved custom model version

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status.

datarobot.errors.ServerError

if the server responded with 5xx status.

download(file_path)

Download custom model version.

New in version v2.21.

Parameters:
file_path: str

path to create a file with custom model version content

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status.

datarobot.errors.ServerError

if the server responded with 5xx status.

update(description=None, required_metadata_values=None)

Update custom model version properties.

New in version v2.21.

Parameters:
description: str

new custom model version description

required_metadata_values: List[RequiredMetadataValue]

Additional parameters required by the execution environment. The required keys are defined by the fieldNames in the base environment’s requiredMetadataKeys.

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status.

datarobot.errors.ServerError

if the server responded with 5xx status.

refresh()

Update custom model version with the latest data from server.

New in version v2.21.

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

get_feature_impact(with_metadata=False)

Get custom model feature impact.

New in version v2.23.

Parameters:
with_metadata : bool

The flag indicating if the result should include the metadata as well.

Returns:
feature_impacts : list of dict

The feature impact data. Each item is a dict with the keys ‘featureName’, ‘impactNormalized’, and ‘impactUnnormalized’, and ‘redundantWith’.

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status.

datarobot.errors.ServerError

if the server responded with 5xx status.

calculate_feature_impact(max_wait=600)

Calculate custom model feature impact.

New in version v2.23.

Parameters:
max_wait: int, optional

max time to wait for the feature impact calculation. If set to None, the method returns without waiting. Defaults to 600 seconds (10 minutes).

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status
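
Examples

A minimal sketch of computing feature impact for an existing version and reading the results; the IDs are placeholders.

import datarobot as dr

version = dr.CustomModelVersion.get('custom-model-id', 'custom-model-version-id')
version.calculate_feature_impact(max_wait=600)  # blocks until the job completes
for impact in version.get_feature_impact():
    print(impact['featureName'], impact['impactNormalized'])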

class datarobot.models.execution_environment.RequiredMetadataKey(**kwargs)

Definition of a metadata key that custom models using this environment must define

New in version v2.25.

Attributes:
field_name: str

The required field key. This value will be added as an environment variable when running custom models.

display_name: str

A human readable name for the required field.

class datarobot.models.CustomModelVersionConversion(**kwargs)

A conversion of a DataRobot custom model version.

New in version v2.27.

Attributes:
id: str

ID of the custom model version conversion.

custom_model_version_id: str

ID of the custom model version.

created: str

ISO-8601 timestamp of when the custom model conversion was created.

main_program_item_id: str or None

ID of the main program item.

log_message: str or None

The conversion output log message.

generated_metadata: dict or None

The dict contains two items: ‘outputDataset’ & ‘outputColumns’.

conversion_succeeded: bool

Whether the conversion succeeded or not.

conversion_in_progress: bool

Whether a given conversion is in progress or not.

should_stop: bool

Whether the user asked to stop a conversion.

classmethod run_conversion(custom_model_id, custom_model_version_id, main_program_item_id, max_wait=None)

Initiate a new custom model version conversion.

Parameters:
custom_model_id : str

The associated custom model ID.

custom_model_version_id : str

The associated custom model version ID.

main_program_item_id : str

The selected main program item ID. This should be one of the SAS items in the associated custom model version.

max_wait: int or None

Max wait time in seconds. If None, the method returns without waiting.

Returns:
conversion_id : str

The ID of the newly created conversion entity.

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status
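
Examples

A minimal sketch of running a conversion and inspecting the outcome; the IDs are placeholders.

from datarobot.models import CustomModelVersionConversion

conversion_id = CustomModelVersionConversion.run_conversion(
    custom_model_id='custom-model-id',
    custom_model_version_id='custom-model-version-id',
    main_program_item_id='main-program-item-id',
    max_wait=600,  # wait up to 10 minutes for the conversion to finish
)
conversion = CustomModelVersionConversion.get(
    'custom-model-id', 'custom-model-version-id', conversion_id
)
print(conversion.conversion_succeeded, conversion.log_message)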

classmethod stop_conversion(custom_model_id, custom_model_version_id, conversion_id)

Stop a conversion that is in progress.

Parameters:
custom_model_id : str

ID of the associated custom model.

custom_model_version_id : str

ID of the associated custom model version.

conversion_id : str

ID of a conversion that is in-progress.

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status.

datarobot.errors.ServerError

if the server responded with 5xx status.

classmethod get(custom_model_id, custom_model_version_id, conversion_id)

Get custom model version conversion by id.

New in version v2.27.

Parameters:
custom_model_id: str

The ID of the custom model.

custom_model_version_id: str

The ID of the custom model version.

conversion_id: str

The ID of the conversion to retrieve.

Returns:
CustomModelVersionConversion

Retrieved custom model version conversion.

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status.

datarobot.errors.ServerError

if the server responded with 5xx status.

classmethod get_latest(custom_model_id, custom_model_version_id)

Get latest custom model version conversion for a given custom model version.

New in version v2.27.

Parameters:
custom_model_id: str

The ID of the custom model.

custom_model_version_id: str

The ID of the custom model version.

Returns:
CustomModelVersionConversion or None

Retrieved latest conversion for a given custom model version.

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status.

datarobot.errors.ServerError

if the server responded with 5xx status.

classmethod list(custom_model_id, custom_model_version_id)

Get custom model version conversions list per custom model version.

New in version v2.27.

Parameters:
custom_model_id: str

The ID of the custom model.

custom_model_version_id: str

The ID of the custom model version.

Returns:
List[CustomModelVersionConversion]

Retrieved conversions for a given custom model version.

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status.

datarobot.errors.ServerError

if the server responded with 5xx status.

class datarobot.CustomModelVersionDependencyBuild(**kwargs)

Metadata about a DataRobot custom model version’s dependency build

New in version v2.22.

Attributes:
custom_model_id: str

id of the custom model

custom_model_version_id: str

id of the custom model version

build_status: str

the status of the custom model version’s dependency build

started_at: str

ISO-8601 formatted timestamp of when the build was started

completed_at: str, optional

ISO-8601 formatted timestamp of when the build has completed

classmethod get_build_info(custom_model_id, custom_model_version_id)

Retrieve information about a custom model version’s dependency build

New in version v2.22.

Parameters:
custom_model_id: str

the id of the custom model

custom_model_version_id: str

the id of the custom model version

Returns:
CustomModelVersionDependencyBuild

the dependency build information

classmethod start_build(custom_model_id, custom_model_version_id, max_wait=600)

Start the dependency build for a custom model version.

New in version v2.22.

Parameters:
custom_model_id: str

the id of the custom model

custom_model_version_id: str

the id of the custom model version

max_wait: int, optional

max time to wait for build completion. If set to None, the method returns without waiting.

get_log()

Get log of a custom model version dependency build.

New in version v2.22.

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

cancel()

Cancel custom model version dependency build that is in progress.

New in version v2.22.

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

refresh()

Update custom model version dependency build with the latest data from server.

New in version v2.22.

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status
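
Examples

A minimal sketch of building the dependencies declared in a version's requirements.txt and reading the build log; the IDs are placeholders.

import datarobot as dr

dr.CustomModelVersionDependencyBuild.start_build(
    custom_model_id='custom-model-id',
    custom_model_version_id='custom-model-version-id',
    max_wait=600,  # block until the build completes or times out
)
build = dr.CustomModelVersionDependencyBuild.get_build_info(
    'custom-model-id', 'custom-model-version-id'
)
print(build.build_status)
print(build.get_log())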

class datarobot.ExecutionEnvironment(**kwargs)

An execution environment entity.

New in version v2.21.

Attributes:
id: str

the id of the execution environment

name: str

the name of the execution environment

description: str, optional

the description of the execution environment

programming_language: str, optional

the programming language of the execution environment. Can be “python”, “r”, “java” or “other”

is_public: bool, optional

public accessibility of the environment; visible only to admin users

created_at: str, optional

ISO-8601 formatted timestamp of when the execution environment version was created

latest_version: ExecutionEnvironmentVersion, optional

the latest version of the execution environment

classmethod create(name, description=None, programming_language=None, required_metadata_keys=None)

Create an execution environment.

New in version v2.21.

Parameters:
name: str

execution environment name

description: str, optional

execution environment description

programming_language: str, optional

programming language of the environment to be created. Can be “python”, “r”, “java” or “other”. Default value - “other”

required_metadata_keys: List[RequiredMetadataKey]

Definition of the metadata keys that custom models using this environment must define

Returns:
ExecutionEnvironment

created execution environment

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status
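
Examples

A minimal sketch of registering a new Python environment; the name and description are arbitrary.

import datarobot as dr

environment = dr.ExecutionEnvironment.create(
    name='My Python environment',
    description='Base environment for my custom models',
    programming_language='python',
)
print(environment.id)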

classmethod list(search_for=None)

List execution environments available to the user.

New in version v2.21.

Parameters:
search_for: str, optional

the string for filtering execution environment - only execution environments that contain the string in name or description will be returned.

Returns:
List[ExecutionEnvironment]

a list of execution environments.

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

classmethod get(execution_environment_id)

Get execution environment by its id.

New in version v2.21.

Parameters:
execution_environment_id: str

ID of the execution environment to retrieve

Returns:
ExecutionEnvironment

retrieved execution environment

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

delete()

Delete execution environment.

New in version v2.21.

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

update(name=None, description=None, required_metadata_keys=None)

Update execution environment properties.

New in version v2.21.

Parameters:
name: str, optional

new execution environment name

description: str, optional

new execution environment description

required_metadata_keys: List[RequiredMetadataKey]

Definition of the metadata keys that custom models using this environment must define

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

refresh()

Update execution environment with the latest data from server.

New in version v2.21.

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

class datarobot.ExecutionEnvironmentVersion(**kwargs)

A version of a DataRobot execution environment.

New in version v2.21.

Attributes:
id: str

the id of the execution environment version

environment_id: str

the id of the execution environment the version belongs to

build_status: str

the status of the execution environment version build

label: str, optional

the label of the execution environment version

description: str, optional

the description of the execution environment version

created_at: str, optional

ISO-8601 formatted timestamp of when the execution environment version was created

docker_context_size: int, optional

The size of the uploaded Docker context in bytes if available or None if not

docker_image_size: int, optional

The size of the built Docker image in bytes if available or None if not

classmethod create(execution_environment_id, docker_context_path, label=None, description=None, max_wait=600)

Create an execution environment version.

New in version v2.21.

Parameters:
execution_environment_id: str

the id of the execution environment

docker_context_path: str

the path to a docker context archive or folder

label: str, optional

short human readable string to label the version

description: str, optional

execution environment version description

max_wait: int, optional

max time to wait for a final build status (“success” or “failed”). If set to None, the method returns without waiting.

Returns:
ExecutionEnvironmentVersion

created execution environment version

Raises:
datarobot.errors.AsyncTimeoutError

if version did not reach final state during timeout seconds

datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status
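
Examples

A minimal sketch of building an environment version from a local Docker context folder; the ID and path are placeholders.

import datarobot as dr

environment_version = dr.ExecutionEnvironmentVersion.create(
    execution_environment_id='execution-environment-id',
    docker_context_path='/home/user/Documents/my_environment',
    label='v1',
    max_wait=600,  # wait for the image build to reach "success" or "failed"
)
print(environment_version.build_status)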

classmethod list(execution_environment_id, build_status=None)

List execution environment versions available to the user.

New in version v2.21.

Parameters:
execution_environment_id: str

the id of the execution environment

build_status: str, optional

build status of the execution environment version to filter by. See datarobot.enums.EXECUTION_ENVIRONMENT_VERSION_BUILD_STATUS for valid options

Returns:
List[ExecutionEnvironmentVersion]

a list of execution environment versions.

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

classmethod get(execution_environment_id, version_id)

Get execution environment version by id.

New in version v2.21.

Parameters:
execution_environment_id: str

the id of the execution environment

version_id: str

the id of the execution environment version to retrieve

Returns:
ExecutionEnvironmentVersion

retrieved execution environment version

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status.

datarobot.errors.ServerError

if the server responded with 5xx status.

download(file_path)

Download execution environment version.

New in version v2.21.

Parameters:
file_path: str

path to create a file with execution environment version content

Returns:
ExecutionEnvironmentVersion

retrieved execution environment version

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status.

datarobot.errors.ServerError

if the server responded with 5xx status.

get_build_log()

Get execution environment version build log and error.

New in version v2.21.

Returns:
Tuple[str, str]

retrieved execution environment version build log and error. If there is no build error - None is returned.

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status.

datarobot.errors.ServerError

if the server responded with 5xx status.

refresh()

Update execution environment version with the latest data from server.

New in version v2.21.

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

Custom Tasks

class datarobot.CustomTask(id: str, target_type: datarobot.enums.CUSTOM_TASK_TARGET_TYPE, latest_version: Optional[datarobot.models.custom_task_version.CustomTaskVersion], created_at: str, updated_at: str, name: str, description: str, language: datarobot.enums.Enum, created_by: str, calibrate_predictions: Optional[bool] = None)

A custom task. This can be in a partial state or a complete state. When latest_version is None, the empty task has been initialized with some metadata; it is not yet usable for actual training. Once the first CustomTaskVersion has been created, you can put the CustomTask in UserBlueprints to train Models in Projects.

New in version v2.26.

Attributes:
id: str

id of the custom task

name: str

name of the custom task

language: str

programming language of the custom task. Can be “python”, “r”, “java” or “other”

description: str

description of the custom task

target_type: datarobot.enums.CUSTOM_TASK_TARGET_TYPE

the target type of the custom task. One of:

  • datarobot.enums.CUSTOM_TASK_TARGET_TYPE.BINARY
  • datarobot.enums.CUSTOM_TASK_TARGET_TYPE.REGRESSION
  • datarobot.enums.CUSTOM_TASK_TARGET_TYPE.MULTICLASS
  • datarobot.enums.CUSTOM_TASK_TARGET_TYPE.ANOMALY
  • datarobot.enums.CUSTOM_TASK_TARGET_TYPE.TRANSFORM
latest_version: datarobot.CustomTaskVersion or None

latest version of the custom task if the task has a latest version. If the latest version is None, the custom task is not ready for use in user blueprints. You must create its first CustomTaskVersion before you can use the CustomTask

created_by: str

username of the user who created the custom task

updated_at: str

ISO-8601 formatted timestamp of when the custom task was updated

created_at: str

ISO-8601 formatted timestamp of when the custom task was created

calibrate_predictions: bool

whether anomaly predictions should be calibrated to be between 0 and 1 by DR. Only applies to custom estimators with target type datarobot.enums.CUSTOM_TASK_TARGET_TYPE.ANOMALY.

classmethod from_server_data(data: Union[Dict[str, Any], List[Dict[str, Any]]], keep_attrs: Optional[Iterable[str]] = None) → datarobot.models.custom_task.CustomTask

Instantiate an object of this class using the data directly from the server, meaning that the keys may have the wrong camel casing

Parameters:
data : dict

The directly translated dict of JSON from the server. No casing fixes have taken place

keep_attrs : iterable

List, set or tuple of the dotted namespace notations for attributes to keep within the object structure even if their values are None

classmethod list(order_by: Optional[str] = None, search_for: Optional[str] = None) → List[datarobot.models.custom_task.CustomTask]

List custom tasks available to the user.

New in version v2.26.

Parameters:
search_for: str, optional

string for filtering custom tasks - only tasks that contain the string in name or description will be returned. If not specified, all custom tasks will be returned

order_by: str, optional

property to sort custom tasks by. Supported properties are “created” and “updated”. Prefix the attribute name with a dash to sort in descending order, e.g. order_by=’-created’. By default, the order_by parameter is None which will result in custom tasks being returned in order of creation time descending

Returns:
List[CustomTask]

a list of custom tasks.

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

classmethod get(custom_task_id: str) → datarobot.models.custom_task.CustomTask

Get custom task by id.

New in version v2.26.

Parameters:
custom_task_id: str

id of the custom task

Returns:
CustomTask

retrieved custom task

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status.

datarobot.errors.ServerError

if the server responded with 5xx status.

classmethod copy(custom_task_id: str) → datarobot.models.custom_task.CustomTask

Create a custom task by copying existing one.

New in version v2.26.

Parameters:
custom_task_id: str

id of the custom task to copy

Returns:
CustomTask
Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

classmethod create(name: str, target_type: datarobot.enums.CUSTOM_TASK_TARGET_TYPE, language: Optional[datarobot.enums.Enum] = None, description: Optional[str] = None, calibrate_predictions: Optional[bool] = None, **kwargs) → datarobot.models.custom_task.CustomTask

Creates only the metadata for a custom task. This task will not be usable until you have created a CustomTaskVersion attached to this task.

New in version v2.26.

Parameters:
name: str

name of the custom task

target_type: datarobot.enums.CUSTOM_TASK_TARGET_TYPE

the target type, based on the following values. Anything else will raise an error:

  • datarobot.enums.CUSTOM_TASK_TARGET_TYPE.BINARY
  • datarobot.enums.CUSTOM_TASK_TARGET_TYPE.REGRESSION
  • datarobot.enums.CUSTOM_TASK_TARGET_TYPE.MULTICLASS
  • datarobot.enums.CUSTOM_TASK_TARGET_TYPE.ANOMALY
  • datarobot.enums.CUSTOM_TASK_TARGET_TYPE.TRANSFORM
language: str, optional

programming language of the custom task. Can be “python”, “r”, “java” or “other”

description: str, optional

description of the custom task

calibrate_predictions: bool, optional

whether anomaly predictions should be calibrated to be between 0 and 1 by DR. If None, uses the default value from the DR app (True). Only applies to custom estimators with target type datarobot.enums.CUSTOM_TASK_TARGET_TYPE.ANOMALY.

Returns:
CustomTask
Raises:
datarobot.errors.ClientError

if the server responded with 4xx status.

datarobot.errors.ServerError

if the server responded with 5xx status.
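
Examples

A minimal sketch of creating the metadata for a regression task; a CustomTaskVersion still has to be added before the task can be used.

import datarobot as dr
from datarobot.enums import CUSTOM_TASK_TARGET_TYPE

task = dr.CustomTask.create(
    name='My regression estimator',
    target_type=CUSTOM_TASK_TARGET_TYPE.REGRESSION,
    language='python',
    description='Created via the API',
)
print(task.id, task.latest_version)  # latest_version is None at this point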

update(name: Optional[str] = None, language: Optional[datarobot.enums.Enum] = None, description: Optional[str] = None, **kwargs) → None

Update custom task properties.

New in version v2.26.

Parameters:
name: str, optional

new custom task name

language: str, optional

new custom task programming language

description: str, optional

new custom task description

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status.

datarobot.errors.ServerError

if the server responded with 5xx status.

refresh() → None

Update custom task with the latest data from server.

New in version v2.26.

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

delete() → None

Delete custom task.

New in version v2.26.

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

download_latest_version(file_path: str) → None

Download the latest custom task version.

New in version v2.26.

Parameters:
file_path: str

the full path of the target zip file

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status.

datarobot.errors.ServerError

if the server responded with 5xx status.

get_access_list() → List[datarobot.models.sharing.SharingAccess]

Retrieve access control settings of this custom task.

New in version v2.27.

Returns:
list of SharingAccess

share(access_list: List[datarobot.models.sharing.SharingAccess]) → None

Update the access control settings of this custom task.

New in version v2.27.

Parameters:
access_list : list of SharingAccess

A list of SharingAccess to update.

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

Examples

Transfer access to the custom task from old_user@datarobot.com to new_user@datarobot.com

import datarobot as dr

new_access = dr.SharingAccess("new_user@datarobot.com",
                              dr.enums.SHARING_ROLE.OWNER, can_share=True)
access_list = [dr.SharingAccess("old_user@datarobot.com", None), new_access]

dr.CustomTask.get('custom-task-id').share(access_list)
class datarobot.models.custom_task_version.CustomTaskFileItem(id, file_name, file_path, file_source, created_at=None)

A file item attached to a DataRobot custom task version.

New in version v2.26.

Attributes:
id: str

id of the file item

file_name: str

name of the file item

file_path: str

path of the file item

file_source: str

source of the file item

created_at: str

ISO-8601 formatted timestamp of when the version was created

class datarobot.CustomTaskVersion(id, custom_task_id, version_major, version_minor, label, created_at, is_frozen, items, description=None, base_environment_id=None, maximum_memory=None, base_environment_version_id=None, dependencies=None, required_metadata_values=None, arguments=None)

A version of a DataRobot custom task.

New in version v2.26.

Attributes:
id: str

id of the custom task version

custom_task_id: str

id of the custom task

version_minor: int

a minor version number of custom task version

version_major: int

a major version number of custom task version

label: str

short human readable string to label the version

created_at: str

ISO-8601 formatted timestamp of when the version was created

is_frozen: bool

a flag if the custom task version is frozen

items: List[CustomTaskFileItem]

a list of file items attached to the custom task version

description: str, optional

custom task version description

base_environment_id: str, optional

id of the environment to use with the task

base_environment_version_id: str, optional

id of the environment version to use with the task

dependencies: List[CustomDependency]

the parsed dependencies of the custom task version if the version has a valid requirements.txt file

required_metadata_values: List[RequiredMetadataValue]

Additional parameters required by the execution environment. The required keys are defined by the fieldNames in the base environment’s requiredMetadataKeys.

arguments: List[UserBlueprintTaskArgument]

A list of custom task version arguments.

classmethod from_server_data(data, keep_attrs=None)

Instantiate an object of this class using the data directly from the server, meaning that the keys may have the wrong camel casing

Parameters:
data : dict

The directly translated dict of JSON from the server. No casing fixes have taken place

keep_attrs : iterable

List, set or tuple of the dotted namespace notations for attributes to keep within the object structure even if their values are None

classmethod create_clean(custom_task_id, base_environment_id, maximum_memory=None, is_major_update=True, folder_path=None, required_metadata_values=None)

Create a custom task version without files from previous versions.

New in version v2.26.

Parameters:
custom_task_id: str

the id of the custom task

base_environment_id: str

the id of the base environment to use with the custom task version

is_major_update: bool, optional

if the current version is 2.3, True would set the new version at 3.0; False would set the new version at 2.4. Defaults to True.

folder_path: str, optional

the path to a folder containing files to be uploaded. Each file in the folder is uploaded under its path relative to the folder path

required_metadata_values: List[RequiredMetadataValue]

Additional parameters required by the execution environment. The required keys are defined by the fieldNames in the base environment’s requiredMetadataKeys.

maximum_memory: int

The amount of memory, in bytes, that the custom task's inference containers may run with.

Returns:
CustomTaskVersion

created custom task version

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status
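
Examples

A minimal sketch of uploading task code from a local folder as a clean version; the IDs and path are placeholders.

import datarobot as dr

task_version = dr.CustomTaskVersion.create_clean(
    custom_task_id='custom-task-id',
    base_environment_id='base-environment-id',
    folder_path='/home/user/Documents/myTask',
)
print(task_version.version_major, task_version.version_minor)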

classmethod create_from_previous(custom_task_id, base_environment_id, is_major_update=True, folder_path=None, files_to_delete=None, required_metadata_values=None, maximum_memory=None)

Create a custom task version containing files from a previous version.

New in version v2.26.

Parameters:
custom_task_id: str

the id of the custom task

base_environment_id: str

the id of the base environment to use with the custom task version

is_major_update: bool, optional

if the current version is 2.3, True would set the new version at 3.0; False would set the new version at 2.4. Defaults to True.

folder_path: str, optional

the path to a folder containing files to be uploaded. Each file in the folder is uploaded under its path relative to the folder path

files_to_delete: list, optional

the list of file item ids to be deleted. Example: ["5ea95f7a4024030aba48e4f9", "5ea6b5da402403181895cc51"]

required_metadata_values: List[RequiredMetadataValue]

Additional parameters required by the execution environment. The required keys are defined by the fieldNames in the base environment’s requiredMetadataKeys.

maximum_memory: int

The amount of memory, in bytes, that the custom task's inference containers may run with.

Returns:
CustomTaskVersion

created custom task version

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

classmethod list(custom_task_id)

List custom task versions.

New in version v2.26.

Parameters:
custom_task_id: str

the id of the custom task

Returns:
List[CustomTaskVersion]

a list of custom task versions

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

classmethod get(custom_task_id, custom_task_version_id)

Get custom task version by id.

New in version v2.26.

Parameters:
custom_task_id: str

the id of the custom task

custom_task_version_id: str

the id of the custom task version to retrieve

Returns:
CustomTaskVersion

retrieved custom task version

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status.

datarobot.errors.ServerError

if the server responded with 5xx status.

download(file_path)

Download custom task version.

New in version v2.26.

Parameters:
file_path: str

path to create a file with custom task version content

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status.

datarobot.errors.ServerError

if the server responded with 5xx status.

update(description=None, required_metadata_values=None)

Update custom task version properties.

New in version v2.26.

Parameters:
description: str

new custom task version description

required_metadata_values: List[RequiredMetadataValue]

Additional parameters required by the execution environment. The required keys are defined by the fieldNames in the base environment’s requiredMetadataKeys.

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status.

datarobot.errors.ServerError

if the server responded with 5xx status.

refresh()

Update custom task version with the latest data from server.

New in version v2.26.

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

start_dependency_build()

Start the dependency build for a custom task version and return build status.

New in version v2.27.

Returns:
CustomTaskVersionDependencyBuild

DTO of custom task version dependency build.

start_dependency_build_and_wait(max_wait)

Start the dependency build for a custom task version and wait while pulling status.

New in version v2.27.

Parameters:
max_wait: int

max time to wait for a build completion

Returns:
CustomTaskVersionDependencyBuild

DTO of custom task version dependency build.

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

datarobot.errors.AsyncTimeoutError

Raised if the dependency build is not finished after max_wait.
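
Examples

A minimal sketch of building the dependencies for a task version that ships a requirements.txt; the IDs are placeholders.

import datarobot as dr

task_version = dr.CustomTaskVersion.get('custom-task-id', 'custom-task-version-id')
build = task_version.start_dependency_build_and_wait(max_wait=600)
print(build)  # DTO describing the finished dependency build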

cancel_dependency_build()

Cancel custom task version dependency build that is in progress.

New in version v2.27.

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

get_dependency_build()

Retrieve information about a custom task version’s dependency build.

New in version v2.27.

Returns:
CustomTaskVersionDependencyBuild

DTO of custom task version dependency build.

download_dependency_build_log(file_directory='.')

Get log of a custom task version dependency build.

New in version v2.27.

Parameters:
file_directory: str (optional, default is “.”)

Directory path where the downloaded file will be saved.

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

Database Connectivity

class datarobot.DataDriver(id: Optional[str] = None, creator: Optional[str] = None, base_names: Optional[List[str]] = None, class_name: Optional[str] = None, canonical_name: Optional[str] = None)

A data driver

Attributes:
id : str

the id of the driver.

class_name : str

the Java class name for the driver.

canonical_name : str

the user-friendly name of the driver.

creator : str

the id of the user who created the driver.

base_names : list of str

a list of the file name(s) of the jar files.

classmethod list() → List[datarobot.models.driver.DataDriver]

Returns list of available drivers.

Returns:
drivers : list of DataDriver instances

contains a list of available drivers.

Examples

>>> import datarobot as dr
>>> drivers = dr.DataDriver.list()
>>> drivers
[DataDriver('mysql'), DataDriver('RedShift'), DataDriver('PostgreSQL')]
classmethod get(driver_id: str) → datarobot.models.driver.DataDriver

Gets the driver.

Parameters:
driver_id : str

the identifier of the driver.

Returns:
driver : DataDriver

the required driver.

Examples

>>> import datarobot as dr
>>> driver = dr.DataDriver.get('5ad08a1889453d0001ea7c5c')
>>> driver
DataDriver('PostgreSQL')
classmethod create(class_name: str, canonical_name: str, files: List[str]) → datarobot.models.driver.DataDriver

Creates the driver. Only available to admin users.

Parameters:
class_name : str

the Java class name for the driver.

canonical_name : str

the user-friendly name of the driver.

files : list of str

a list of local file system paths to the jar file(s) for the driver.

Returns:
driver : DataDriver

the created driver.

Raises:
ClientError

raised if the user is not granted the Can manage JDBC database drivers feature

Examples

>>> import datarobot as dr
>>> driver = dr.DataDriver.create(
...     class_name='org.postgresql.Driver',
...     canonical_name='PostgreSQL',
...     files=['/tmp/postgresql-42.2.2.jar']
... )
>>> driver
DataDriver('PostgreSQL')
update(class_name: Optional[str] = None, canonical_name: Optional[str] = None) → None

Updates the driver. Only available to admin users.

Parameters:
class_name : str

the Java class name for the driver.

canonical_name : str

the user-friendly name of the driver.

Raises:
ClientError

raised if the user is not granted the Can manage JDBC database drivers feature

Examples

>>> import datarobot as dr
>>> driver = dr.DataDriver.get('5ad08a1889453d0001ea7c5c')
>>> driver.canonical_name
'PostgreSQL'
>>> driver.update(canonical_name='postgres')
>>> driver.canonical_name
'postgres'
delete() → None

Removes the driver. Only available to admin users.

Raises:
ClientError

raised if the user is not granted the Can manage JDBC database drivers feature

class datarobot.Connector(id: Optional[str] = None, creator_id: Optional[str] = None, configuration_id: Optional[str] = None, base_name: Optional[str] = None, canonical_name: Optional[str] = None)

A connector

Attributes:
id : str

the id of the connector.

creator_id : str

the id of the user who created the connector.

base_name : str

the file name of the jar file.

canonical_name : str

the user-friendly name of the connector.

configuration_id : str

the id of the configuration of the connector.

classmethod list() → List[datarobot.models.connector.Connector]

Returns list of available connectors.

Returns:
connectors : list of Connector instances

contains a list of available connectors.

Examples

>>> import datarobot as dr
>>> connectors = dr.Connector.list()
>>> connectors
[Connector('ADLS Gen2 Connector'), Connector('S3 Connector')]
classmethod get(connector_id: str) → datarobot.models.connector.Connector

Gets the connector.

Parameters:
connector_id : str

the identifier of the connector.

Returns:
connector : Connector

the required connector.

Examples

>>> import datarobot as dr
>>> connector = dr.Connector.get('5fe1063e1c075e0245071446')
>>> connector
Connector('ADLS Gen2 Connector')
classmethod create(file_path: str) → datarobot.models.connector.Connector

Creates the connector from a jar file. Only available to admin users.

Parameters:
file_path : str

the local file system path to the jar file for the connector.

Returns:
connector : Connector

the created connector.

Raises:
ClientError

raised if the user is not granted the Can manage connectors feature

Examples

>>> import datarobot as dr
>>> connector = dr.Connector.create('/tmp/connector-adls-gen2.jar')
>>> connector
Connector('ADLS Gen2 Connector')
update(file_path: str) → datarobot.models.connector.Connector

Updates the connector with new jar file. Only available to admin users.

Parameters:
file_path : str

the local file system path to the new jar file for the connector.

Returns:
connector : Connector

the updated connector.

Raises:
ClientError

raised if the user is not granted the Can manage connectors feature

Examples

>>> import datarobot as dr
>>> connector = dr.Connector.get('5fe1063e1c075e0245071446')
>>> connector.base_name
'connector-adls-gen2.jar'
>>> connector.update('/tmp/connector-s3.jar')
>>> connector.base_name
'connector-s3.jar'
delete() → None

Removes the connector. Only available to admin users.

Raises:
ClientError

raised if the user is not granted the Can manage connectors feature

class datarobot.DataStore(data_store_id: Optional[str] = None, data_store_type: Optional[str] = None, canonical_name: Optional[str] = None, creator: Optional[str] = None, updated: Optional[datetime.datetime] = None, params: Optional[datarobot.models.data_store.DataStoreParameters] = None, role: Optional[str] = None)

A data store. Represents a database.

Attributes:
id : str

The id of the data store.

data_store_type : str

The type of data store.

canonical_name : str

The user-friendly name of the data store.

creator : str

The id of the user who created the data store.

updated : datetime.datetime

The time of the last update

params : DataStoreParameters

A list specifying data store parameters.

role : str

Your access role for this data store.

classmethod list() → List[datarobot.models.data_store.DataStore]

Returns list of available data stores.

Returns:
data_stores : list of DataStore instances

contains a list of available data stores.

Examples

>>> import datarobot as dr
>>> data_stores = dr.DataStore.list()
>>> data_stores
[DataStore('Demo'), DataStore('Airlines')]
classmethod get(data_store_id: str) → datarobot.models.data_store.DataStore

Gets the data store.

Parameters:
data_store_id : str

the identifier of the data store.

Returns:
data_store : DataStore

the required data store.

Examples

>>> import datarobot as dr
>>> data_store = dr.DataStore.get('5a8ac90b07a57a0001be501e')
>>> data_store
DataStore('Demo')
classmethod create(data_store_type: str, canonical_name: str, driver_id: str, jdbc_url: str) → datarobot.models.data_store.DataStore

Creates the data store.

Parameters:
data_store_type : str

the type of data store.

canonical_name : str

the user-friendly name of the data store.

driver_id : str

the identifier of the DataDriver.

jdbc_url : str

the full JDBC url, for example jdbc:postgresql://my.dbaddress.org:5432/my_db.

Returns:
data_store : DataStore

the created data store.

Examples

>>> import datarobot as dr
>>> data_store = dr.DataStore.create(
...     data_store_type='jdbc',
...     canonical_name='Demo DB',
...     driver_id='5a6af02eb15372000117c040',
...     jdbc_url='jdbc:postgresql://my.db.address.org:5432/perftest'
... )
>>> data_store
DataStore('Demo DB')
update(canonical_name: Optional[str] = None, driver_id: Optional[str] = None, jdbc_url: Optional[str] = None) → None

Updates the data store.

Parameters:
canonical_name : str

optional, the user-friendly name of the data store.

driver_id : str

optional, the identifier of the DataDriver.

jdbc_url : str

optional, the full JDBC url, for example jdbc:postgresql://my.dbaddress.org:5432/my_db.

Examples

>>> import datarobot as dr
>>> data_store = dr.DataStore.get('5ad5d2afef5cd700014d3cae')
>>> data_store
DataStore('Demo DB')
>>> data_store.update(canonical_name='Demo DB updated')
>>> data_store
DataStore('Demo DB updated')
delete() → None

Removes the DataStore

test(username: str, password: str) → TestResponse

Tests database connection.

Parameters:
username : str

the username for database authentication.

password : str

the password for database authentication. The password is encrypted at server side and never saved / stored

Returns:
message : dict

message with status.

Examples

>>> import datarobot as dr
>>> data_store = dr.DataStore.get('5ad5d2afef5cd700014d3cae')
>>> data_store.test(username='db_username', password='db_password')
{'message': 'Connection successful'}
schemas(username: str, password: str) → SchemasResponse

Returns list of available schemas.

Parameters:
username : str

the username for database authentication.

password : str

the password for database authentication. The password is encrypted at server side and never saved / stored

Returns:
response : dict

dict with database name and list of str - available schemas

Examples

>>> import datarobot as dr
>>> data_store = dr.DataStore.get('5ad5d2afef5cd700014d3cae')
>>> data_store.schemas(username='db_username', password='db_password')
{'catalog': 'perftest', 'schemas': ['demo', 'information_schema', 'public']}
tables(username: str, password: str, schema: Optional[str] = None) → TablesResponse

Returns list of available tables in schema.

Parameters:
username : str

optional, the username for database authentication.

password : str

optional, the password for database authentication. The password is encrypted at server side and never saved / stored

schema : str

optional, the schema name.

Returns:
response : dict

dict with catalog name and tables info

Examples

>>> import datarobot as dr
>>> data_store = dr.DataStore.get('5ad5d2afef5cd700014d3cae')
>>> data_store.tables(username='db_username', password='db_password', schema='demo')
{'tables': [{'type': 'TABLE', 'name': 'diagnosis', 'schema': 'demo'}, {'type': 'TABLE',
'name': 'kickcars', 'schema': 'demo'}, {'type': 'TABLE', 'name': 'patient',
'schema': 'demo'}, {'type': 'TABLE', 'name': 'transcript', 'schema': 'demo'}],
'catalog': 'perftest'}
classmethod from_server_data(data: Union[Dict[str, Any], List[Dict[str, Any]]], keep_attrs: Optional[List[str]] = None) → datarobot.models.data_store.DataStore

Instantiate an object of this class using the data directly from the server, meaning that the keys may have the wrong camel casing

Parameters:
data : dict

The directly translated dict of JSON from the server. No casing fixes have taken place

keep_attrs : iterable

List, set or tuple of the dotted namespace notations for attributes to keep within the object structure even if their values are None

get_access_list() → List[datarobot.models.sharing.SharingAccess]

Retrieve what users have access to this data store

New in version v2.14.

Returns:
list of SharingAccess

share(access_list: List[datarobot.models.sharing.SharingAccess]) → None

Modify the ability of users to access this data store

New in version v2.14.

Parameters:
access_list : list of SharingAccess

the modifications to make.

Raises:
datarobot.ClientError :

if you do not have permission to share this data store, if the user you’re sharing with doesn’t exist, if the same user appears multiple times in the access_list, or if these changes would leave the data store without an owner.

Examples

Transfer access to the data store from old_user@datarobot.com to new_user@datarobot.com

import datarobot as dr

new_access = dr.SharingAccess("new_user@datarobot.com",
                              dr.enums.SHARING_ROLE.OWNER, can_share=True)
access_list = [dr.SharingAccess("old_user@datarobot.com", None), new_access]

dr.DataStore.get('my-data-store-id').share(access_list)
class datarobot.DataSource(data_source_id: Optional[str] = None, data_source_type: Optional[str] = None, canonical_name: Optional[str] = None, creator: Optional[str] = None, updated: Optional[datetime.datetime] = None, params: Optional[datarobot.models.data_source.DataSourceParameters] = None, role: Optional[str] = None)

A data source. Represents a data request.

Attributes:
id : str

the id of the data source.

type : str

the type of data source.

canonical_name : str

the user-friendly name of the data source.

creator : str

the id of the user who created the data source.

updated : datetime.datetime

the time of the last update.

params : DataSourceParameters

a list specifying data source parameters.

role : str or None

if a string, represents a particular level of access and should be one of datarobot.enums.SHARING_ROLE. For more information on the specific access levels, see the sharing documentation. If None, can be passed to a share function to revoke access for a specific user.

classmethod list() → List[datarobot.models.data_source.DataSource]

Returns list of available data sources.

Returns:
data_sources : list of DataSource instances

contains a list of available data sources.

Examples

>>> import datarobot as dr
>>> data_sources = dr.DataSource.list()
>>> data_sources
[DataSource('Diagnostics'), DataSource('Airlines 100mb'), DataSource('Airlines 10mb')]
classmethod get(data_source_id: str) → TDataSource

Gets the data source.

Parameters:
data_source_id : str

the identifier of the data source.

Returns:
data_source : DataSource

the requested data source.

Examples

>>> import datarobot as dr
>>> data_source = dr.DataSource.get('5a8ac9ab07a57a0001be501f')
>>> data_source
DataSource('Diagnostics')
classmethod create(data_source_type: str, canonical_name: str, params: datarobot.models.data_source.DataSourceParameters) → TDataSource

Creates the data source.

Parameters:
data_source_type : str

the type of data source.

canonical_name : str

the user-friendly name of the data source.

params : DataSourceParameters

a list specifying data source parameters.

Returns:
data_source : DataSource

the created data source.

Examples

>>> import datarobot as dr
>>> params = dr.DataSourceParameters(
...     data_store_id='5a8ac90b07a57a0001be501e',
...     query='SELECT * FROM airlines10mb WHERE "Year" >= 1995;'
... )
>>> data_source = dr.DataSource.create(
...     data_source_type='jdbc',
...     canonical_name='airlines stats after 1995',
...     params=params
... )
>>> data_source
DataSource('airlines stats after 1995')
update(canonical_name: Optional[str] = None, params: Optional[datarobot.models.data_source.DataSourceParameters] = None) → None

Updates the data source.

Parameters:
canonical_name : str

optional, the user-friendly name of the data source.

params : DataSourceParameters

optional, a list specifying data source parameters.

Examples

>>> import datarobot as dr
>>> data_source = dr.DataSource.get('5ad840cc613b480001570953')
>>> data_source
DataSource('airlines stats after 1995')
>>> params = dr.DataSourceParameters(
...     query='SELECT * FROM airlines10mb WHERE "Year" >= 1990;'
... )
>>> data_source.update(
...     canonical_name='airlines stats after 1990',
...     params=params
... )
>>> data_source
DataSource('airlines stats after 1990')
delete() → None

Removes the DataSource

classmethod from_server_data(data: ServerDataType, keep_attrs: Optional[Iterable[str]] = None) → TDataSource

Instantiate an object of this class using the data directly from the server, meaning that the keys may have the wrong camel casing

Parameters:
data : dict

The directly translated dict of JSON from the server. No casing fixes have taken place

keep_attrs : iterable

List, set or tuple of the dotted namespace notations for attributes to keep within the object structure even if their values are None

get_access_list() → List[datarobot.models.sharing.SharingAccess]

Retrieve what users have access to this data source

New in version v2.14.

Returns:
list of SharingAccess

share(access_list: List[datarobot.models.sharing.SharingAccess]) → None

Modify the ability of users to access this data source

New in version v2.14.

Parameters:
access_list : list of SharingAccess

The modifications to make.

Raises:
datarobot.ClientError:

If you do not have permission to share this data source, if the user you’re sharing with doesn’t exist, if the same user appears multiple times in the access_list, or if these changes would leave the data source without an owner.

Examples

Transfer access to the data source from old_user@datarobot.com to new_user@datarobot.com

from datarobot.enums import SHARING_ROLE
from datarobot.models.data_source import DataSource
from datarobot.models.sharing import SharingAccess

new_access = SharingAccess(
    "[email protected]",
    SHARING_ROLE.OWNER,
    can_share=True,
)
access_list = [
    SharingAccess("[email protected]", SHARING_ROLE.OWNER, can_share=True),
    new_access,
]

DataSource.get('my-data-source-id').share(access_list)
create_dataset(username: Optional[str] = None, password: Optional[str] = None, do_snapshot: Optional[bool] = None, persist_data_after_ingestion: Optional[bool] = None, categories: Optional[List[str]] = None, credential_id: Optional[str] = None, use_kerberos: Optional[bool] = None) → datarobot.models.dataset.Dataset

Create a Dataset from this data source.

New in version v2.22.

Parameters:
username: string, optional

The username for database authentication.

password: string, optional

The password (in cleartext) for database authentication. The password will be encrypted on the server side in scope of HTTP request and never saved or stored.

do_snapshot: bool, optional

If unset, uses the server default: True. If true, creates a snapshot dataset; if false, creates a remote dataset. Creating snapshots from non-file sources requires an additional permission, Enable Create Snapshot Data Source.

persist_data_after_ingestion: bool, optional

If unset, uses the server default: True. If true, will enforce saving all data (for download and sampling) and will allow a user to view extended data profile (which includes data statistics like min/max/median/mean, histogram, etc.). If false, will not enforce saving data. The data schema (feature names and types) still will be available. Specifying this parameter to false and doSnapshot to true will result in an error.

categories: list[string], optional

An array of strings describing the intended use of the dataset. The current supported options are “TRAINING” and “PREDICTION”.

credential_id: string, optional

The ID of the set of credentials to use instead of user and password. Note that with this change, username and password will become optional.

use_kerberos: bool, optional

If unset, uses the server default: False. If true, use kerberos authentication for database authentication.

Returns:
response: Dataset

The Dataset created from the uploaded data
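
Examples

A minimal sketch of materializing a snapshot dataset from an existing data source using stored credentials; the IDs are placeholders.

import datarobot as dr

data_source = dr.DataSource.get('data-source-id')
dataset = data_source.create_dataset(
    credential_id='credential-id',  # or pass username= and password= instead
    do_snapshot=True,
    categories=['TRAINING'],
)
print(dataset.id, dataset.name)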

class datarobot.DataSourceParameters(data_store_id: Optional[str] = None, table: Optional[str] = None, schema: Optional[str] = None, partition_column: Optional[str] = None, query: Optional[str] = None, fetch_size: Optional[int] = None)

Data request configuration

Attributes:
data_store_id : str

the id of the DataStore.

table : str

optional, the name of specified database table.

schema : str

optional, the name of the schema associated with the table.

partition_column : str

optional, the name of the partition column.

query : str

optional, the user specified SQL query.

fetch_size : int

optional, a user specified fetch size in the range [1, 20000]. By default a fetchSize will be assigned to balance throughput and memory usage

Datasets

class datarobot.models.Dataset(dataset_id: str, version_id: str, name: str, categories: List[str], created_at: str, is_data_engine_eligible: bool, is_latest_version: bool, is_snapshot: bool, processing_state: str, created_by: Optional[str] = None, data_persisted: Optional[bool] = None, size: Optional[int] = None, row_count: Optional[int] = None)

Represents a Dataset returned from the api/v2/datasets/ endpoints.

Attributes:
id: string

The ID of this dataset

name: string

The name of this dataset in the catalog

is_latest_version: bool

Whether this dataset version is the latest version of this dataset

version_id: string

The object ID of the catalog_version the dataset belongs to

categories: list(string)

An array of strings describing the intended use of the dataset. The supported options are “TRAINING” and “PREDICTION”.

created_at: string

The date when the dataset was created

created_by: string, optional

Username of the user who created the dataset

is_snapshot: bool

Whether the dataset version is an immutable snapshot of data which has previously been retrieved and saved to DataRobot

data_persisted: bool, optional

If true, user is allowed to view extended data profile (which includes data statistics like min/max/median/mean, histogram, etc.) and download data. If false, download is not allowed and only the data schema (feature names and types) will be available.

is_data_engine_eligible: bool

Whether this dataset can be a data source of a data engine query.

processing_state: string

Current ingestion process state of the dataset

row_count: int, optional

The number of rows in the dataset.

size: int, optional

The size of the dataset as a CSV in bytes.

get_uri() → str
Returns:
url : str

Permanent static hyperlink to this dataset in AI Catalog.

classmethod upload(source: Union[str, pandas.core.frame.DataFrame, io.IOBase]) → TDataset

This method covers Dataset creation from local materials (file & DataFrame) and a URL.

Parameters:
source: str, pd.DataFrame or file object

Pass a URL, filepath, file or DataFrame to create and return a Dataset.

Returns:
response: Dataset

The Dataset created from the uploaded data source.

Raises:
InvalidUsageError

If the source parameter cannot be determined to be a URL, filepath, file or DataFrame.

Examples

from datarobot.models.dataset import Dataset

# Upload a local file
dataset_one = Dataset.upload("./data/examples.csv")

# Create a dataset via URL
dataset_two = Dataset.upload(
    "https://raw.githubusercontent.com/curran/data/gh-pages/dbpedia/cities/data.csv"
)

# Create a dataset from a pandas DataFrame (my_df is an existing DataFrame)
dataset_three = Dataset.upload(my_df)

# Create a dataset from an open file object
with open("./data/examples.csv", "rb") as file_pointer:
    dataset_four = Dataset.create_from_file(filelike=file_pointer)
classmethod create_from_file(file_path: Optional[str] = None, filelike: Optional[io.IOBase] = None, categories: Optional[List[str]] = None, read_timeout: int = 600, max_wait: int = 600) → TDataset

A blocking call that creates a new Dataset from a file. Returns when the dataset has been successfully uploaded and processed.

Warning: This function does not clean up its open files. If you pass a filelike, you are responsible for closing it. If you pass a file_path, this will create a file object from the file_path but will not close it.

Parameters:
file_path: string, optional

The path to the file. This will create a file object pointing to that file but will not close it.

filelike: file, optional

An open and readable file object.

categories: list[string], optional

An array of strings describing the intended use of the dataset. The current supported options are “TRAINING” and “PREDICTION”.

read_timeout: int, optional

The maximum number of seconds to wait for the server to respond indicating that the initial upload is complete

max_wait: int, optional

Time in seconds after which dataset creation is considered unsuccessful

Returns:
response: Dataset

A fully armed and operational Dataset

classmethod create_from_in_memory_data(data_frame: Optional[pandas.core.frame.DataFrame] = None, records: Optional[List[Dict[str, Any]]] = None, categories: Optional[List[str]] = None, read_timeout: int = 600, max_wait: int = 600, fname: Optional[str] = None) → TDataset

A blocking call that creates a new Dataset from in-memory data. Returns when the dataset has been successfully uploaded and processed.

The data can be either a pandas DataFrame or a list of dictionaries with identical keys.

Parameters:
data_frame: DataFrame, optional

The data frame to upload

records: list[dict], optional

A list of dictionaries with identical keys to upload

categories: list[string], optional

An array of strings describing the intended use of the dataset. The current supported options are “TRAINING” and “PREDICTION”.

read_timeout: int, optional

The maximum number of seconds to wait for the server to respond indicating that the initial upload is complete

max_wait: int, optional

Time in seconds after which dataset creation is considered unsuccessful

fname: string, optional

The file name, “data.csv” by default

Returns:
response: Dataset

The Dataset created from the uploaded data.

Raises:
InvalidUsageError

If neither a DataFrame nor a list of records is passed.
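
Examples

A minimal usage sketch added for illustration; the record keys and values are placeholder data.

from datarobot.models.dataset import Dataset

# Build a small dataset from a list of dicts with identical keys
records = [
    {"age": 42, "income": 55000, "defaulted": "No"},
    {"age": 31, "income": 48000, "defaulted": "Yes"},
]
dataset = Dataset.create_from_in_memory_data(records=records, categories=["TRAINING"])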

classmethod create_from_url(url: str, do_snapshot: Optional[bool] = None, persist_data_after_ingestion: Optional[bool] = None, categories: Optional[List[str]] = None, max_wait: int = 600) → TDataset

A blocking call that creates a new Dataset from data stored at a url. Returns when the dataset has been successfully uploaded and processed.

Parameters:
url: string

The URL to use as the source of data for the dataset being created.

do_snapshot: bool, optional

If unset, uses the server default: True. If true, creates a snapshot dataset; if false, creates a remote dataset. Creating snapshots from non-file sources may be disabled by the permission, Disable AI Catalog Snapshots.

persist_data_after_ingestion: bool, optional

If unset, uses the server default: True. If true, will enforce saving all data (for download and sampling) and will allow a user to view the extended data profile (which includes data statistics like min/max/median/mean, histogram, etc.). If false, will not enforce saving data. The data schema (feature names and types) will still be available. Setting this parameter to false while doSnapshot is true will result in an error.

categories: list[string], optional

An array of strings describing the intended use of the dataset. The current supported options are “TRAINING” and “PREDICTION”.

max_wait: int, optional

Time in seconds after which dataset creation is considered unsuccessful

Returns:
response: Dataset

The Dataset created from the uploaded data
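
Examples

A brief sketch added for illustration; the URL is the sample file referenced elsewhere in this document, and do_snapshot=False requests a remote (non-snapshot) dataset.

from datarobot.models.dataset import Dataset

dataset = Dataset.create_from_url(
    "https://s3.amazonaws.com/datarobot_test/kickcars-sample-200.csv",
    do_snapshot=False,
)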

classmethod create_from_data_source(data_source_id: str, username: Optional[str] = None, password: Optional[str] = None, do_snapshot: Optional[bool] = None, persist_data_after_ingestion: Optional[bool] = None, categories: Optional[List[str]] = None, credential_id: Optional[str] = None, use_kerberos: Optional[bool] = None, credential_data: Optional[Dict[str, str]] = None, max_wait: int = 600) → TDataset

A blocking call that creates a new Dataset from data stored at a DataSource. Returns when the dataset has been successfully uploaded and processed.

New in version v2.22.

Parameters:
data_source_id: string

The ID of the DataSource to use as the source of data.

username: string, optional

The username for database authentication.

password: string, optional

The password (in cleartext) for database authentication. The password will be encrypted on the server side within the scope of the HTTP request and never saved or stored.

do_snapshot: bool, optional

If unset, uses the server default: True. If true, creates a snapshot dataset; if false, creates a remote dataset. Creating snapshots from non-file sources may be disabled by the permission, Disable AI Catalog Snapshots.

persist_data_after_ingestion: bool, optional

If unset, uses the server default: True. If true, will enforce saving all data (for download and sampling) and will allow a user to view the extended data profile (which includes data statistics like min/max/median/mean, histogram, etc.). If false, will not enforce saving data. The data schema (feature names and types) will still be available. Setting this parameter to false while doSnapshot is true will result in an error.

categories: list[string], optional

An array of strings describing the intended use of the dataset. The current supported options are “TRAINING” and “PREDICTION”.

credential_id: string, optional

The ID of the set of credentials to use instead of username and password. If provided, username and password become optional.

use_kerberos: bool, optional

If unset, uses the server default: False. If true, use kerberos authentication for database authentication.

credential_data: dict, optional

The credentials to authenticate with the database, to use instead of user/password or credential ID.

max_wait: int, optional

Time in seconds after which dataset creation is considered unsuccessful

Returns:
response: Dataset

The Dataset created from the uploaded data
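
Examples

A brief sketch added for illustration; the data source and credential IDs are placeholders.

from datarobot.models.dataset import Dataset

dataset = Dataset.create_from_data_source(
    data_source_id="5ebc210b42a4d35f52652f5c",  # placeholder DataSource ID
    credential_id="5ebc211142a4d35f52652f5d",   # placeholder stored credential ID
    do_snapshot=True,
)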

classmethod create_from_query_generator(generator_id: str, dataset_id: Optional[str] = None, dataset_version_id: Optional[str] = None, max_wait: int = 600) → TDataset

A blocking call that creates a new Dataset from the query generator. Returns when the dataset has been successfully processed. If optional parameters are not specified the query is applied to the dataset_id and dataset_version_id stored in the query generator. If specified they will override the stored dataset_id/dataset_version_id, e.g. to prep a prediction dataset.

Parameters:
generator_id: str

The id of the query generator to use.

dataset_id: str, optional

The id of the dataset to apply the query to.

dataset_version_id: str, optional

The id of the dataset version to apply the query to. If not specified the latest version associated with dataset_id (if specified) is used.

max_wait : int

optional, the maximum number of seconds to wait before giving up.

Returns:
response: Dataset

The Dataset created from the query generator

classmethod get(dataset_id: str) → TDataset

Get information about a dataset.

Parameters:
dataset_id : string

the id of the dataset

Returns:
dataset : Dataset

the queried dataset

classmethod delete(dataset_id: str) → None

Soft-deletes a dataset. A deleted dataset cannot be retrieved, listed, or otherwise acted upon, except to un-delete it.

Parameters:
dataset_id: string

The id of the dataset to mark for deletion

Returns:
None
classmethod un_delete(dataset_id: str) → None

Un-deletes a previously deleted dataset. If the dataset was not deleted, nothing happens.

Parameters:
dataset_id: string

The id of the dataset to un-delete

Returns:
None
classmethod list(category: Optional[str] = None, filter_failed: Optional[bool] = None, order_by: Optional[str] = None) → List[TDataset]

List all datasets a user can view.

Parameters:
category: string, optional

If specified, only dataset versions that have the specified category will be included in the results. Categories identify the intended use of the dataset; supported categories are “TRAINING” and “PREDICTION”.

filter_failed: bool, optional

If unset, uses the server default: False. Whether datasets that failed during import should be excluded from the results. If True, invalid datasets will be excluded.

order_by: string, optional

If unset, uses the server default: “-created”. The sort order applied to the catalog list. Valid options are “created” (ascending order by creation datetime) and “-created” (descending order by creation datetime).

Returns:
list[Dataset]

a list of datasets the user can view

classmethod iterate(offset: Optional[int] = None, limit: Optional[int] = None, category: Optional[str] = None, order_by: Optional[str] = None, filter_failed: Optional[bool] = None) → Generator[TDataset, None, None]

Get an iterator for the requested datasets a user can view. This lazily retrieves results. It does not get the next page from the server until the current page is exhausted.

Parameters:
offset: int, optional

If set, this many results will be skipped

limit: int, optional

Specifies the size of each page retrieved from the server. If unset, uses the server default.

category: string, optional

If specified, only dataset versions that have the specified category will be included in the results. Categories identify the intended use of the dataset; supported categories are “TRAINING” and “PREDICTION”.

filter_failed: bool, optional

If unset, uses the server default: False. Whether datasets that failed during import should be excluded from the results. If True, invalid datasets will be excluded.

order_by: string, optional

If unset, uses the server default: “-created”. The sort order applied to the catalog list. Valid options are “created” (ascending order by creation datetime) and “-created” (descending order by creation datetime).

Yields:
Dataset

An iterator of the datasets the user can view
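
Examples

A brief sketch added for illustration, showing lazy pagination over the catalog.

from datarobot.models.dataset import Dataset

# Additional pages are only fetched from the server as the loop consumes them
for dataset in Dataset.iterate(category="TRAINING", order_by="-created"):
    print(dataset.name, dataset.row_count)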

update() → None

Updates the Dataset attributes in place with the latest information from the server.

Returns:
None
modify(name: Optional[str] = None, categories: Optional[List[str]] = None) → None

Modifies the Dataset name and/or categories. Updates the object in place.

Parameters:
name: string, optional

The new name of the dataset

categories: list[string], optional

A list of strings describing the intended use of the dataset. The supported options are “TRAINING” and “PREDICTION”. If any categories were previously specified for the dataset, they will be overwritten.

Returns:
None
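
Examples

A brief sketch added for illustration; the dataset ID is a placeholder.

from datarobot.models.dataset import Dataset

dataset = Dataset.get("5ebc210b42a4d35f52652f5c")
dataset.modify(name="My renamed dataset", categories=["TRAINING"])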
share(access_list: List[datarobot.models.sharing.SharingAccess], apply_grant_to_linked_objects: bool = False) → None

Modify the ability of users to access this dataset

Parameters:
access_list: list of datarobot.SharingAccess

The modifications to make.

apply_grant_to_linked_objects: bool

If true for any users being granted access to the dataset, grant the user read access to any linked objects such as DataSources and DataStores that may be used by this dataset. Ignored if no such objects are relevant for dataset, defaults to False.

Raises:
datarobot.ClientError:

If you do not have permission to share this dataset, if the user you’re sharing with doesn’t exist, if the same user appears multiple times in the access_list, or if these changes would leave the dataset without an owner.

Examples

Transfer access to the dataset from old_user@datarobot.com to new_user@datarobot.com

from datarobot.enums import SHARING_ROLE
from datarobot.models.dataset import Dataset
from datarobot.models.sharing import SharingAccess

new_access = SharingAccess(
    "[email protected]",
    SHARING_ROLE.OWNER,
    can_share=True,
)
access_list = [
    SharingAccess(
        "[email protected]",
        SHARING_ROLE.OWNER,
        can_share=True,
        can_use_data=True,
    ),
    new_access,
]

Dataset.get('my-dataset-id').share(access_list)
get_details() → datarobot.models.dataset.DatasetDetails

Gets the details for this Dataset

Returns:
DatasetDetails
get_all_features(order_by: Optional[str] = None) → List[datarobot.models.feature.DatasetFeature]

Get a list of all the features for this dataset.

Parameters:
order_by: string, optional

If unset, uses the server default: ‘name’. How the features should be ordered. Can be ‘name’ or ‘featureType’.

Returns:
list[DatasetFeature]
iterate_all_features(offset: Optional[int] = None, limit: Optional[int] = None, order_by: Optional[str] = None) → Generator[datarobot.models.feature.DatasetFeature, None, None]

Get an iterator for the requested features of a dataset. This lazily retrieves results. It does not get the next page from the server until the current page is exhausted.

Parameters:
offset: int, optional

If set, this many results will be skipped.

limit: int, optional

Specifies the size of each page retrieved from the server. If unset, uses the server default.

order_by: string, optional

If unset, uses the server default: ‘name’. How the features should be ordered. Can be ‘name’ or ‘featureType’.

Yields:
DatasetFeature
get_featurelists() → List[datarobot.models.featurelist.DatasetFeaturelist]

Get DatasetFeaturelists created on this Dataset

Returns:
feature_lists: list[DatasetFeaturelist]
create_featurelist(name: str, features: List[str]) → datarobot.models.featurelist.DatasetFeaturelist

Create a new dataset featurelist

Parameters:
name : str

the name of the dataset featurelist to create. Names must be unique within the dataset, or the server will return an error.

features : list of str

the names of the features to include in the dataset featurelist. Each feature must be a dataset feature.

Returns:
featurelist : DatasetFeaturelist

the newly created featurelist

Examples

dataset = Dataset.get('1234deadbeeffeeddead4321')
dataset_features = dataset.get_all_features()
selected_features = [feat.name for feat in dataset_features][:5]  # select first five
new_flist = dataset.create_featurelist('Simple Features', selected_features)
get_file(file_path: Optional[str] = None, filelike: Optional[io.IOBase] = None) → None

Retrieves all the originally uploaded data in CSV form. Writes it to either the file or a filelike object that can write bytes.

Only one of file_path or filelike can be provided and it must be provided as a keyword argument (i.e. file_path=’path-to-write-to’). If a file-like object is provided, the user is responsible for closing it when they are done.

The user must also have permission to download data.

Parameters:
file_path: string, optional

The destination to write the file to.

filelike: file, optional

A file-like object to write to. The object must be able to write bytes. The user is responsible for closing the object

Returns:
None
get_as_dataframe() → pandas.core.frame.DataFrame

Retrieves all the originally uploaded data in a pandas DataFrame.

New in version v3.0.

Returns:
pd.DataFrame
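
Examples

A brief sketch added for illustration; the dataset ID and output path are placeholders.

from datarobot.models.dataset import Dataset

dataset = Dataset.get("5ebc210b42a4d35f52652f5c")

# Write the originally uploaded data to a local CSV file
dataset.get_file(file_path="./downloaded_data.csv")

# Or load it directly into a pandas DataFrame
df = dataset.get_as_dataframe()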
get_projects() → List[datarobot.models.dataset.ProjectLocation]

Retrieves the Dataset’s projects as ProjectLocation named tuples.

Returns:
locations: list[ProjectLocation]
create_project(project_name: Optional[str] = None, user: Optional[str] = None, password: Optional[str] = None, credential_id: Optional[str] = None, use_kerberos: Optional[bool] = None, credential_data: Optional[Dict[str, str]] = None) → datarobot.models.project.Project

Create a datarobot.models.Project from this dataset

Parameters:
project_name: string, optional

The name of the project to be created. If not specified, will be “Untitled Project” for database connections, otherwise the project name will be based on the file used.

user: string, optional

The username for database authentication.

password: string, optional

The password (in cleartext) for database authentication. The password will be encrypted on the server side within the scope of the HTTP request and never saved or stored.

credential_id: string, optional

The ID of the set of credentials to use instead of user and password.

use_kerberos: bool, optional

Server default is False. If true, use kerberos authentication for database authentication.

credential_data: dict, optional

The credentials to authenticate with the database, to use instead of user/password or credential ID.

Returns:
Project
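
Examples

A brief sketch added for illustration; the dataset ID and project name are placeholders.

from datarobot.models.dataset import Dataset

dataset = Dataset.get("5ebc210b42a4d35f52652f5c")
project = dataset.create_project(project_name="Project from AI Catalog dataset")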
classmethod create_version_from_file(dataset_id: str, file_path: Optional[str] = None, filelike: Optional[io.IOBase] = None, categories: Optional[List[str]] = None, read_timeout: int = 600, max_wait: int = 600) → TDataset

A blocking call that creates a new Dataset version from a file. Returns when the new dataset version has been successfully uploaded and processed.

Warning: This function does not clean up its open files. If you pass a filelike, you are responsible for closing it. If you pass a file_path, this will create a file object from the file_path but will not close it.

New in version v2.23.

Parameters:
dataset_id: string

The ID of the dataset for which a new version will be created

file_path: string, optional

The path to the file. This will create a file object pointing to that file but will not close it.

filelike: file, optional

An open and readable file object.

categories: list[string], optional

An array of strings describing the intended use of the dataset. The current supported options are “TRAINING” and “PREDICTION”.

read_timeout: int, optional

The maximum number of seconds to wait for the server to respond indicating that the initial upload is complete

max_wait: int, optional

Time in seconds after which dataset version creation is considered unsuccessful

Returns:
response: Dataset

A fully armed and operational Dataset version
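
Examples

A brief sketch added for illustration; the dataset ID and file path are placeholders.

from datarobot.models.dataset import Dataset

new_version = Dataset.create_version_from_file(
    dataset_id="5ebc210b42a4d35f52652f5c",
    file_path="./data/examples_updated.csv",
)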

classmethod create_version_from_in_memory_data(dataset_id: str, data_frame: Optional[pandas.core.frame.DataFrame] = None, records: Optional[List[Dict[str, Any]]] = None, categories: Optional[List[str]] = None, read_timeout: int = 600, max_wait: int = 600) → TDataset

A blocking call that creates a new Dataset version for a dataset from in-memory data. Returns when the dataset has been successfully uploaded and processed.

The data can be either a pandas DataFrame or a list of dictionaries with identical keys.

New in version v2.23.

Parameters:
dataset_id: string

The ID of the dataset for which a new version will be created

data_frame: DataFrame, optional

The data frame to upload

records: list[dict], optional

A list of dictionaries with identical keys to upload

categories: list[string], optional

An array of strings describing the intended use of the dataset. The current supported options are “TRAINING” and “PREDICTION”.

read_timeout: int, optional

The maximum number of seconds to wait for the server to respond indicating that the initial upload is complete

max_wait: int, optional

Time in seconds after which dataset version creation is considered unsuccessful

Returns:
response: Dataset

The Dataset version created from the uploaded data

Raises:
InvalidUsageError

If neither a DataFrame nor a list of records is passed.

classmethod create_version_from_url(dataset_id: str, url: str, categories: Optional[List[str]] = None, max_wait: int = 600) → TDataset

A blocking call that creates a new Dataset from data stored at a url for a given dataset. Returns when the dataset has been successfully uploaded and processed.

New in version v2.23.

Parameters:
dataset_id: string

The ID of the dataset for which a new version will be created

url: string

The URL to use as the source of data for the dataset being created.

categories: list[string], optional

An array of strings describing the intended use of the dataset. The current supported options are “TRAINING” and “PREDICTION”.

max_wait: int, optional

Time in seconds after which dataset version creation is considered unsuccessful

Returns:
response: Dataset

The Dataset version created from the uploaded data

classmethod create_version_from_data_source(dataset_id: str, data_source_id: str, username: Optional[str] = None, password: Optional[str] = None, categories: Optional[List[str]] = None, credential_id: Optional[str] = None, use_kerberos: Optional[bool] = None, credential_data: Optional[Dict[str, str]] = None, max_wait: int = 600) → TDataset

A blocking call that creates a new Dataset from data stored at a DataSource. Returns when the dataset has been successfully uploaded and processed.

New in version v2.23.

Parameters:
dataset_id: string

The ID of the dataset for which a new version will be created

data_source_id: string

The ID of the DataSource to use as the source of data.

username: string, optional

The username for database authentication.

password: string, optional

The password (in cleartext) for database authentication. The password will be encrypted on the server side within the scope of the HTTP request and never saved or stored.

categories: list[string], optional

An array of strings describing the intended use of the dataset. The current supported options are “TRAINING” and “PREDICTION”.

credential_id: string, optional

The ID of the set of credentials to use instead of username and password. If provided, username and password become optional.

use_kerberos: bool, optional

If unset, uses the server default: False. If true, use kerberos authentication for database authentication.

credential_data: dict, optional

The credentials to authenticate with the database, to use instead of user/password or credential ID.

max_wait: int, optional

Time in seconds after which dataset version creation is considered unsuccessful

Returns:
response: Dataset

The Dataset version created from the uploaded data

classmethod from_data(data: Union[Dict[str, Any], List[Dict[str, Any]]]) → T

Instantiate an object of this class using a dict.

Parameters:
data : dict

Correctly snake_cased keys and their values.

classmethod from_server_data(data: Union[Dict[str, Any], List[Dict[str, Any]]], keep_attrs: Optional[Iterable[str]] = None) → T

Instantiate an object of this class using the data directly from the server, meaning that the keys may have the wrong camel casing

Parameters:
data : dict

The directly translated dict of JSON from the server. No casing fixes have taken place

keep_attrs : iterable

List, set or tuple of the dotted namespace notations for attributes to keep within the object structure even if their values are None

open_in_browser() → None

Opens the class’ relevant web browser location. If the default browser is not available, the URL is logged.

Note: If text-mode browsers are used, the calling process will block until the user exits the browser.

class datarobot.DatasetDetails(dataset_id: str, version_id: str, categories: List[str], created_by: str, created_at: str, data_source_type: str, error: str, is_latest_version: bool, is_snapshot: bool, is_data_engine_eligible: bool, last_modification_date: str, last_modifier_full_name: str, name: str, uri: str, processing_state: str, data_persisted: Optional[bool] = None, data_engine_query_id: Optional[str] = None, data_source_id: Optional[str] = None, description: Optional[str] = None, eda1_modification_date: Optional[str] = None, eda1_modifier_full_name: Optional[str] = None, feature_count: Optional[int] = None, feature_count_by_type: Optional[List[datarobot.models.dataset.FeatureTypeCount]] = None, row_count: Optional[int] = None, size: Optional[int] = None, tags: Optional[List[str]] = None)

Represents a detailed view of a Dataset. The to_dataset method creates a Dataset from this details view.

Attributes:
dataset_id: string

The ID of this dataset

name: string

The name of this dataset in the catalog

is_latest_version: bool

Whether this dataset version is the latest version of this dataset

version_id: string

The object ID of the catalog_version the dataset belongs to

categories: list(string)

An array of strings describing the intended use of the dataset. The supported options are “TRAINING” and “PREDICTION”.

created_at: string

The date when the dataset was created

created_by: string

Username of the user who created the dataset

is_snapshot: bool

Whether the dataset version is an immutable snapshot of data which has previously been retrieved and saved to DataRobot

data_persisted: bool, optional

If true, user is allowed to view extended data profile (which includes data statistics like min/max/median/mean, histogram, etc.) and download data. If false, download is not allowed and only the data schema (feature names and types) will be available.

is_data_engine_eligible: bool

Whether this dataset can be a data source of a data engine query.

processing_state: string

Current ingestion process state of the dataset

row_count: int, optional

The number of rows in the dataset.

size: int, optional

The size of the dataset as a CSV in bytes.

data_engine_query_id: string, optional

ID of the source data engine query

data_source_id: string, optional

ID of the datasource used as the source of the dataset

data_source_type: string

the type of the datasource that was used as the source of the dataset

description: string, optional

the description of the dataset

eda1_modification_date: string, optional

the ISO 8601 formatted date and time when the EDA1 for the dataset was updated

eda1_modifier_full_name: string, optional

the user who was the last to update EDA1 for the dataset

error: string

details of exception raised during ingestion process, if any

feature_count: int, optional

total number of features in the dataset

feature_count_by_type: list[FeatureTypeCount]

number of features in the dataset grouped by feature type

last_modification_date: string

the ISO 8601 formatted date and time when the dataset was last modified

last_modifier_full_name: string

full name of user who was the last to modify the dataset

tags: list[string]

list of tags attached to the item

uri: string

the uri to the datasource, e.g.:

  • ‘file_name.csv’
  • ‘jdbc:DATA_SOURCE_GIVEN_NAME/SCHEMA.TABLE_NAME’
  • ‘jdbc:DATA_SOURCE_GIVEN_NAME/<query>’ - for query based datasources
  • ‘https://s3.amazonaws.com/datarobot_test/kickcars-sample-200.csv’

classmethod get(dataset_id: str) → TDatasetDetails

Get details for a Dataset from the server

Parameters:
dataset_id: str

The id for the Dataset from which to get details

Returns:
DatasetDetails
to_dataset() → datarobot.models.dataset.Dataset

Build a Dataset object from the information in this object

Returns:
Dataset
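
Examples

A brief sketch added for illustration; the dataset ID is a placeholder.

from datarobot import DatasetDetails

details = DatasetDetails.get("5ebc210b42a4d35f52652f5c")
print(details.feature_count, details.row_count)

# Build a Dataset object from the details without another full lookup
dataset = details.to_dataset()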

Data Engine Query Generator

class datarobot.DataEngineQueryGenerator(**generator_kwargs)

DataEngineQueryGenerator is used to set up time series data prep.

New in version v2.27.

Attributes:
id: str

id of the query generator

query: str

text of the generated Spark SQL query

datasets: list(QueryGeneratorDataset)

datasets associated with the query generator

generator_settings: QueryGeneratorSettings

the settings used to define the query

generator_type: str

“TimeSeries” is the only supported type

classmethod create(generator_type, datasets, generator_settings)

Creates a query generator entity.

New in version v2.27.

Parameters:
generator_type : str

Type of data engine query generator

datasets : List[QueryGeneratorDataset]

Source datasets in the Data Engine workspace.

generator_settings : dict

Data engine generator settings of the given generator_type.

Returns:
query_generator : DataEngineQueryGenerator

The created generator

Examples

import datarobot as dr
from datarobot.models.data_engine_query_generator import (
   QueryGeneratorDataset,
   QueryGeneratorSettings,
)
dataset = QueryGeneratorDataset(
   alias='My_Awesome_Dataset_csv',
   dataset_id='61093144cabd630828bca321',
   dataset_version_id=1,
)
settings = QueryGeneratorSettings(
   datetime_partition_column='date',
   time_unit='DAY',
   time_step=1,
   default_numeric_aggregation_method='sum',
   default_categorical_aggregation_method='mostFrequent',
)
g = dr.DataEngineQueryGenerator.create(
   generator_type='TimeSeries',
   datasets=[dataset],
   generator_settings=settings,
)
g.id
>>>'54e639a18bd88f08078ca831'
g.generator_type
>>>'TimeSeries'
classmethod get(generator_id)

Gets information about a query generator.

Parameters:
generator_id : str

The identifier of the query generator you want to load.

Returns:
query_generator : DataEngineQueryGenerator

The queried generator

Examples

import datarobot as dr
g = dr.DataEngineQueryGenerator.get(generator_id='54e639a18bd88f08078ca831')
g.id
>>>'54e639a18bd88f08078ca831'
g.generator_type
>>>'TimeSeries'
create_dataset(dataset_id=None, dataset_version_id=None, max_wait=600)

A blocking call that creates a new Dataset from the query generator. Returns when the dataset has been successfully processed. If optional parameters are not specified the query is applied to the dataset_id and dataset_version_id stored in the query generator. If specified they will override the stored dataset_id/dataset_version_id, i.e. to prep a prediction dataset.

Parameters:
dataset_id: str, optional

The id of the unprepped dataset to apply the query to

dataset_version_id: str, optional

The version_id of the unprepped dataset to apply the query to

Returns:
response: Dataset

The Dataset created from the query generator
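
Examples

A brief sketch added for illustration, continuing the generator created in the examples above.

import datarobot as dr

g = dr.DataEngineQueryGenerator.get(generator_id='54e639a18bd88f08078ca831')
prepped_dataset = g.create_dataset()  # applies the query to the generator's stored dataset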

Datetime Trend Plots

class datarobot.models.datetime_trend_plots.AccuracyOverTimePlotsMetadata(project_id, model_id, forecast_distance, resolutions, backtest_metadata, holdout_metadata, backtest_statuses, holdout_statuses)

Accuracy over Time metadata for datetime model.

New in version v2.25.

Notes

Backtest/holdout status is a dict containing the following:

  • training: string
    Status of the backtest/holdout training. One of datarobot.enums.DATETIME_TREND_PLOTS_STATUS
  • validation: string
    Status of the backtest/holdout validation. One of datarobot.enums.DATETIME_TREND_PLOTS_STATUS

Backtest/holdout metadata is a dict containing the following:

  • training: dict
    Start and end dates for the backtest/holdout training.
  • validation: dict
    Start and end dates for the backtest/holdout validation.

Each dict in the training and validation in backtest/holdout metadata is structured like:

  • start_date: datetime.datetime or None
    The datetime of the start of the chart data (inclusive). None if chart data is not computed.
  • end_date: datetime.datetime or None
    The datetime of the end of the chart data (exclusive). None if chart data is not computed.
Attributes:
project_id: string

The project ID.

model_id: string

The model ID.

forecast_distance: int or None

The forecast distance for which the metadata was retrieved. None for OTV projects.

resolutions: list of string

A list of datarobot.enums.DATETIME_TREND_PLOTS_RESOLUTION, which represents available time resolutions for which plots can be retrieved.

backtest_metadata: list of dict

List of backtest metadata dicts. The list index of metadata dict is the backtest index. See backtest/holdout metadata info in Notes for more details.

holdout_metadata: dict

Holdout metadata dict. See backtest/holdout metadata info in Notes for more details.

backtest_statuses: list of dict

List of backtest statuses dict. The list index of status dict is the backtest index. See backtest/holdout status info in Notes for more details.

holdout_statuses: dict

Holdout status dict. See backtest/holdout status info in Notes for more details.

class datarobot.models.datetime_trend_plots.AccuracyOverTimePlot(project_id, model_id, start_date, end_date, resolution, bins, statistics, calendar_events)

Accuracy over Time plot for datetime model.

New in version v2.25.

Notes

Bin is a dict containing the following:

  • start_date: datetime.datetime
    The datetime of the start of the bin (inclusive).
  • end_date: datetime.datetime
    The datetime of the end of the bin (exclusive).
  • actual: float or None
    Average actual value of the target in the bin. None if there are no entries in the bin.
  • predicted: float or None
    Average prediction of the model in the bin. None if there are no entries in the bin.
  • frequency: int or None
    Indicates the number of values averaged in the bin.

Statistics is a dict containing the following:

  • durbin_watson: float or None
    The Durbin-Watson statistic for the chart data. Value is between 0 and 4. Durbin-Watson statistic is a test statistic used to detect the presence of autocorrelation at lag 1 in the residuals (prediction errors) from a regression analysis. More info https://wikipedia.org/wiki/Durbin%E2%80%93Watson_statistic

Calendar event is a dict containing the following:

  • name: string
    Name of the calendar event.
  • date: datetime
    Date of the calendar event.
  • series_id: string or None
    The series ID for the event. If this event does not specify a series ID, then this will be None, indicating that the event applies to all series.
Attributes:
project_id: string

The project ID.

model_id: string

The model ID.

resolution: string

The resolution that is used for binning. One of datarobot.enums.DATETIME_TREND_PLOTS_RESOLUTION

start_date: datetime.datetime

The datetime of the start of the chart data (inclusive).

end_date: datetime.datetime

The datetime of the end of the chart data (exclusive).

bins: list of dict

List of plot bins. See bin info in Notes for more details.

statistics: dict

Statistics for plot. See statistics info in Notes for more details.

calendar_events: list of dict

List of calendar events for the plot. See calendar events info in Notes for more details.

class datarobot.models.datetime_trend_plots.AccuracyOverTimePlotPreview(project_id, model_id, start_date, end_date, bins)

Accuracy over Time plot preview for datetime model.

New in version v2.25.

Notes

Bin is a dict containing the following:

  • start_date: datetime.datetime
    The datetime of the start of the bin (inclusive).
  • end_date: datetime.datetime
    The datetime of the end of the bin (exclusive).
  • actual: float or None
    Average actual value of the target in the bin. None if there are no entries in the bin.
  • predicted: float or None
    Average prediction of the model in the bin. None if there are no entries in the bin.
Attributes:
project_id: string

The project ID.

model_id: string

The model ID.

start_date: datetime.datetime

The datetime of the start of the chart data (inclusive).

end_date: datetime.datetime

The datetime of the end of the chart data (exclusive).

bins: list of dict

List of plot bins. See bin info in Notes for more details.

class datarobot.models.datetime_trend_plots.ForecastVsActualPlotsMetadata(project_id, model_id, resolutions, backtest_metadata, holdout_metadata, backtest_statuses, holdout_statuses)

Forecast vs Actual plots metadata for datetime model.

New in version v2.25.

Notes

Backtest/holdout status is a dict containing the following:

  • training: dict
    Dict containing each of datarobot.enums.DATETIME_TREND_PLOTS_STATUS as dict key, and list of forecast distances for particular status as dict value.
  • validation: dict
    Dict containing each of datarobot.enums.DATETIME_TREND_PLOTS_STATUS as dict key, and list of forecast distances for particular status as dict value.

Backtest/holdout metadata is a dict containing the following:

  • training: dict
    Start and end dates for the backtest/holdout training.
  • validation: dict
    Start and end dates for the backtest/holdout validation.

Each dict in the training and validation in backtest/holdout metadata is structured like:

  • start_date: datetime.datetime or None
    The datetime of the start of the chart data (inclusive). None if chart data is not computed.
  • end_date: datetime.datetime or None
    The datetime of the end of the chart data (exclusive). None if chart data is not computed.
Attributes:
project_id: string

The project ID.

model_id: string

The model ID.

resolutions: list of string

A list of datarobot.enums.DATETIME_TREND_PLOTS_RESOLUTION, which represents available time resolutions for which plots can be retrieved.

backtest_metadata: list of dict

List of backtest metadata dicts. The list index of metadata dict is the backtest index. See backtest/holdout metadata info in Notes for more details.

holdout_metadata: dict

Holdout metadata dict. See backtest/holdout metadata info in Notes for more details.

backtest_statuses: list of dict

List of backtest statuses dict. The list index of status dict is the backtest index. See backtest/holdout status info in Notes for more details.

holdout_statuses: dict

Holdout status dict. See backtest/holdout status info in Notes for more details.

class datarobot.models.datetime_trend_plots.ForecastVsActualPlot(project_id, model_id, forecast_distances, start_date, end_date, resolution, bins, calendar_events)

Forecast vs Actual plot for datetime model.

New in version v2.25.

Notes

Bin is a dict containing the following:

  • start_date: datetime.datetime
    The datetime of the start of the bin (inclusive).
  • end_date: datetime.datetime
    The datetime of the end of the bin (exclusive).
  • actual: float or None
    Average actual value of the target in the bin. None if there are no entries in the bin.
  • forecasts: list of float
    A list of average forecasts for the model for each forecast distance. Empty if there are no forecasts in the bin. Each index in the forecasts list maps to forecastDistances list index.
  • error: float or None
    Average absolute residual value of the bin. None if there are no entries in the bin.
  • normalized_error: float or None
    Normalized average absolute residual value of the bin. None if there are no entries in the bin.
  • frequency: int or None
    Indicates the number of values averaged in the bin.

Calendar event is a dict containing the following:

  • name: string
    Name of the calendar event.
  • date: datetime
    Date of the calendar event.
  • series_id: string or None
    The series ID for the event. If this event does not specify a series ID, then this will be None, indicating that the event applies to all series.
Attributes:
project_id: string

The project ID.

model_id: string

The model ID.

forecast_distances: list of int

A list of forecast distances that were retrieved.

resolution: string

The resolution that is used for binning. One of datarobot.enums.DATETIME_TREND_PLOTS_RESOLUTION

start_date: datetime.datetime

The datetime of the start of the chart data (inclusive).

end_date: datetime.datetime

The datetime of the end of the chart data (exclusive).

bins: list of dict

List of plot bins. See bin info in Notes for more details.

calendar_events: list of dict

List of calendar events for the plot. See calendar events info in Notes for more details.

class datarobot.models.datetime_trend_plots.ForecastVsActualPlotPreview(project_id, model_id, start_date, end_date, bins)

Forecast vs Actual plot preview for datetime model.

New in version v2.25.

Notes

Bin is a dict containing the following:

  • start_date: datetime.datetime
    The datetime of the start of the bin (inclusive).
  • end_date: datetime.datetime
    The datetime of the end of the bin (exclusive).
  • actual: float or None
    Average actual value of the target in the bin. None if there are no entries in the bin.
  • predicted: float or None
    Average prediction of the model in the bin. None if there are no entries in the bin.
Attributes:
project_id: string

The project ID.

model_id: string

The model ID.

start_date: datetime.datetime

The datetime of the start of the chart data (inclusive).

end_date: datetime.datetime

The datetime of the end of the chart data (exclusive).

bins: list of dict

List of plot bins. See bin info in Notes for more details.

class datarobot.models.datetime_trend_plots.AnomalyOverTimePlotsMetadata(project_id, model_id, resolutions, backtest_metadata, holdout_metadata, backtest_statuses, holdout_statuses)

Anomaly over Time metadata for datetime model.

New in version v2.25.

Notes

Backtest/holdout status is a dict containing the following:

  • training: string
    Status of the backtest/holdout training. One of datarobot.enums.DATETIME_TREND_PLOTS_STATUS
  • validation: string
    Status of the backtest/holdout validation. One of datarobot.enums.DATETIME_TREND_PLOTS_STATUS

Backtest/holdout metadata is a dict containing the following:

  • training: dict
    Start and end dates for the backtest/holdout training.
  • validation: dict
    Start and end dates for the backtest/holdout validation.

Each dict in the training and validation in backtest/holdout metadata is structured like:

  • start_date: datetime.datetime or None
    The datetime of the start of the chart data (inclusive). None if chart data is not computed.
  • end_date: datetime.datetime or None
    The datetime of the end of the chart data (exclusive). None if chart data is not computed.
Attributes:
project_id: string

The project ID.

model_id: string

The model ID.

resolutions: list of string

A list of datarobot.enums.DATETIME_TREND_PLOTS_RESOLUTION, which represents available time resolutions for which plots can be retrieved.

backtest_metadata: list of dict

List of backtest metadata dicts. The list index of metadata dict is the backtest index. See backtest/holdout metadata info in Notes for more details.

holdout_metadata: dict

Holdout metadata dict. See backtest/holdout metadata info in Notes for more details.

backtest_statuses: list of dict

List of backtest statuses dict. The list index of status dict is the backtest index. See backtest/holdout status info in Notes for more details.

holdout_statuses: dict

Holdout status dict. See backtest/holdout status info in Notes for more details.

class datarobot.models.datetime_trend_plots.AnomalyOverTimePlot(project_id, model_id, start_date, end_date, resolution, bins, calendar_events)

Anomaly over Time plot for datetime model.

New in version v2.25.

Notes

Bin is a dict containing the following:

  • start_date: datetime.datetime
    The datetime of the start of the bin (inclusive).
  • end_date: datetime.datetime
    The datetime of the end of the bin (exclusive).
  • predicted: float or None
    Average prediction of the model in the bin. None if there are no entries in the bin.
  • frequency: int or None
    Indicates the number of values averaged in the bin.

Calendar event is a dict containing the following:

  • name: string
    Name of the calendar event.
  • date: datetime
    Date of the calendar event.
  • series_id: string or None
    The series ID for the event. If this event does not specify a series ID, then this will be None, indicating that the event applies to all series.
Attributes:
project_id: string

The project ID.

model_id: string

The model ID.

resolution: string

The resolution that is used for binning. One of datarobot.enums.DATETIME_TREND_PLOTS_RESOLUTION

start_date: datetime.datetime

The datetime of the start of the chart data (inclusive).

end_date: datetime.datetime

The datetime of the end of the chart data (exclusive).

bins: list of dict

List of plot bins. See bin info in Notes for more details.

calendar_events: list of dict

List of calendar events for the plot. See calendar events info in Notes for more details.

class datarobot.models.datetime_trend_plots.AnomalyOverTimePlotPreview(project_id, model_id, prediction_threshold, start_date, end_date, bins)

Anomaly over Time plot preview for datetime model.

New in version v2.25.

Notes

Bin is a dict containing the following:

  • start_date: datetime.datetime
    The datetime of the start of the bin (inclusive).
  • end_date: datetime.datetime
    The datetime of the end of the bin (exclusive).
Attributes:
project_id: string

The project ID.

model_id: string

The model ID.

prediction_threshold: float

Only bins with predictions exceeding this threshold are returned in the response.

start_date: datetime.datetime

The datetime of the start of the chart data (inclusive).

end_date: datetime.datetime

The datetime of the end of the chart data (exclusive).

bins: list of dict

List of plot bins. See bin info in Notes for more details.

Deployment

class datarobot.models.Deployment(id: str, label: Optional[str] = None, description: Optional[str] = None, status: Optional[str] = None, default_prediction_server: Optional[PredictionServer] = None, model: Optional[ModelDict] = None, capabilities: Optional[Dict[str, Any]] = None, prediction_usage: Optional[PredictionUsage] = None, permissions: Optional[List[str]] = None, service_health: Optional[Health] = None, model_health: Optional[Health] = None, accuracy_health: Optional[Health] = None, importance: Optional[str] = None, fairness_health: Optional[Health] = None, governance: Optional[Dict[str, Any]] = None, owners: Optional[Dict[str, Any]] = None, prediction_environment: Optional[Dict[str, Any]] = None)

A deployment created from a DataRobot model.

Attributes:
id : str

the id of the deployment

label : str

the label of the deployment

description : str

the description of the deployment

status : str

(New in version v2.29) deployment status

default_prediction_server : dict

Information about the default prediction server for the deployment. Accepts the following values:

  • id: str. Prediction server ID.
  • url: str, optional. Prediction server URL.
  • datarobot-key: str. Corresponds to the PredictionServer’s “snake_cased” datarobot_key parameter that allows you to verify and access the prediction server.
importance : str, optional

deployment importance

model : dict

information on the model of the deployment

capabilities : dict

information on the capabilities of the deployment

prediction_usage : dict

information on the prediction usage of the deployment

permissions : list

(New in version v2.18) user’s permissions on the deployment

service_health : dict

information on the service health of the deployment

model_health : dict

information on the model health of the deployment

accuracy_health : dict

information on the accuracy health of the deployment

fairness_health : dict

information on the fairness health of a deployment

governance : dict

information on approval and change requests of a deployment

owners : dict

information on the owners of a deployment

prediction_environment : dict

information on the prediction environment of a deployment

classmethod create_from_learning_model(model_id: str, label: str, description: Optional[str] = None, default_prediction_server_id: Optional[str] = None, importance: Optional[str] = None, prediction_threshold: Optional[float] = None, status: Optional[str] = None) → TDeployment

Create a deployment from a DataRobot model.

New in version v2.17.

Parameters:
model_id : str

id of the DataRobot model to deploy

label : str

a human-readable label of the deployment

description : str, optional

a human-readable description of the deployment

default_prediction_server_id : str, optional

an identifier of a prediction server to be used as the default prediction server

importance : str, optional

deployment importance

prediction_threshold : float, optional

threshold used for binary classification in predictions

status : str, optional

deployment status

Returns:
deployment : Deployment

The created deployment

Examples

from datarobot import Project, Deployment
project = Project.get('5506fcd38bd88f5953219da0')
model = project.get_models()[0]
deployment = Deployment.create_from_learning_model(model.id, 'New Deployment')
deployment
>>> Deployment('New Deployment')
classmethod create_from_custom_model_version(custom_model_version_id: str, label: str, description: Optional[str] = None, default_prediction_server_id: Optional[str] = None, max_wait: int = 600, importance: Optional[str] = None) → TDeployment

Create a deployment from a DataRobot custom model image.

Parameters:
custom_model_version_id : str

id of the DataRobot custom model version to deploy. The version must have a base_environment_id.

label : str

a human readable label of the deployment

description : str, optional

a human readable description of the deployment

default_prediction_server_id : str, optional

an identifier of a prediction server to be used as the default prediction server

max_wait : int, optional

seconds to wait for successful resolution of a deployment creation job. Deployment supports making predictions only after a deployment creating job has successfully finished

importance : str, optional

deployment importance

Returns:
deployment : Deployment

The created deployment
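
Examples

A brief sketch added for illustration; the custom model version ID and prediction server ID are placeholders.

from datarobot import Deployment

deployment = Deployment.create_from_custom_model_version(
    custom_model_version_id="5c0a979859b00004ba52e431",
    label="Custom model deployment",
    default_prediction_server_id="5c939e08962d741e34f609f0",
    max_wait=3600,
)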

classmethod list(order_by: Optional[str] = None, search: Optional[str] = None, filters: Optional[datarobot.models.deployment.DeploymentListFilters] = None) → List[TDeployment]

List all deployments a user can view.

New in version v2.17.

Parameters:
order_by : str, optional

(New in version v2.18) the order to sort the deployment list by, defaults to label

Allowed attributes to sort by are:

  • label
  • serviceHealth
  • modelHealth
  • accuracyHealth
  • recentPredictions
  • lastPredictionTimestamp

If the sort attribute is preceded by a hyphen, deployments will be sorted in descending order, otherwise in ascending order.

For health related sorting, ascending means failing, warning, passing, unknown.

search : str, optional

(New in version v2.18) case insensitive search against deployment’s label and description.

filters : datarobot.models.deployment.DeploymentListFilters, optional

(New in version v2.20) an object containing all filters that you’d like to apply to the resulting list of deployments. See DeploymentListFilters for details on usage.

Returns:
deployments : list

a list of deployments the user can view

Examples

from datarobot import Deployment
deployments = Deployment.list()
deployments
>>> [Deployment('New Deployment'), Deployment('Previous Deployment')]
from datarobot import Deployment
from datarobot.models.deployment import DeploymentListFilters
from datarobot.enums import DEPLOYMENT_SERVICE_HEALTH_STATUS
filters = DeploymentListFilters(
    role='OWNER',
    service_health=[DEPLOYMENT_SERVICE_HEALTH_STATUS.FAILING]
)
filtered_deployments = Deployment.list(filters=filters)
filtered_deployments
>>> [Deployment('Deployment I Own w/ Failing Service Health')]
classmethod get(deployment_id: str) → TDeployment

Get information about a deployment.

New in version v2.17.

Parameters:
deployment_id : str

the id of the deployment

Returns:
deployment : Deployment

the queried deployment

Examples

from datarobot import Deployment
deployment = Deployment.get(deployment_id='5c939e08962d741e34f609f0')
deployment.id
>>>'5c939e08962d741e34f609f0'
deployment.label
>>>'New Deployment'
predict_batch(source: Union[str, pandas.core.frame.DataFrame, io.IOBase], passthrough_columns: Optional[List[str]] = None, download_timeout: Optional[int] = None, download_read_timeout: Optional[int] = None, upload_read_timeout: Optional[int] = None) → pandas.core.frame.DataFrame

Using a deployment, make batch predictions and return results as a DataFrame.

If a DataFrame is passed as source, then the prediction results are merged with the original DataFrame and a new DataFrame is returned.

New in version v3.0.

Parameters:
source: str, pd.DataFrame or file object

Pass a filepath, file, or DataFrame for making batch predictions.

passthrough_columns : list[string] (optional)

Keep these columns from the scoring dataset in the scored dataset. This is useful for correlating predictions with source data.

download_timeout: int, optional

Wait this many seconds for the download to become available. See datarobot.models.BatchPredictionJob.score().

download_read_timeout: int, optional

Wait this many seconds for the server to respond between chunks. See datarobot.models.BatchPredictionJob.score().

upload_read_timeout: int, optional

Wait this many seconds for the server to respond after a whole dataset upload. See datarobot.models.BatchPredictionJob.score().

Returns:
pd.DataFrame

Prediction results in a pandas DataFrame.

Raises:
InvalidUsageError

If the source parameter cannot be determined to be a filepath, file, or DataFrame.

Examples

from datarobot.models.deployment import Deployment

deployment = Deployment.get("<MY_DEPLOYMENT_ID>")
prediction_results_as_dataframe = deployment.predict_batch(
    source="./my_local_file.csv",
)
get_uri() → str
Returns:
url : str

Deployment’s overview URI

update(label: Optional[str] = None, description: Optional[str] = None, importance: Optional[str] = None) → None

Update the label and description of this deployment.

New in version v2.19.

delete() → None

Delete this deployment.

New in version v2.17.

activate(max_wait: int = 600) → None

Activates this deployment. When it succeeds, the deployment status becomes active.

New in version v2.29.

Parameters:
max_wait : int, optional

The maximum time to wait for deployment activation to complete before erroring

deactivate(max_wait: int = 600) → None

Deactivates this deployment. When it succeeds, the deployment status becomes inactive.

New in version v2.29.

Parameters:
max_wait : int, optional

The maximum time to wait for deployment deactivation to complete before erroring
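
Examples

A brief sketch added for illustration; the deployment ID is the placeholder used elsewhere in these examples.

from datarobot import Deployment

deployment = Deployment.get(deployment_id='5c939e08962d741e34f609f0')
deployment.deactivate()  # stop serving predictions while the deployment is inactive
deployment.activate()    # resume serving predictions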

replace_model(new_model_id: str, reason: str, max_wait: int = 600) → None

Replace the model used in this deployment. To confirm model replacement eligibility, use validate_replacement_model() beforehand.

New in version v2.17.

Model replacement is an asynchronous process, which means some preparatory work may be performed after the initial request is completed. This function will not return until all preparatory work is fully finished.

Predictions made against this deployment will start using the new model as soon as the request is completed. There will be no interruption for predictions throughout the process.

Parameters:
new_model_id : str

The id of the new model to use. If replacing the deployment’s model with a CustomInferenceModel, a specific CustomModelVersion ID must be used.

reason : MODEL_REPLACEMENT_REASON

The reason for the model replacement. Must be one of ‘ACCURACY’, ‘DATA_DRIFT’, ‘ERRORS’, ‘SCHEDULED_REFRESH’, ‘SCORING_SPEED’, or ‘OTHER’. This value will be stored in the model history to keep track of why a model was replaced

max_wait : int, optional

(new in version 2.22) The maximum time to wait for model replacement job to complete before erroring

Examples

from datarobot import Deployment
from datarobot.enums import MODEL_REPLACEMENT_REASON
deployment = Deployment.get(deployment_id='5c939e08962d741e34f609f0')
deployment.model['id'], deployment.model['type']
>>>('5c0a979859b00004ba52e431', 'Decision Tree Classifier (Gini)')

deployment.replace_model('5c0a969859b00004ba52e41b', MODEL_REPLACEMENT_REASON.ACCURACY)
deployment.model['id'], deployment.model['type']
>>>('5c0a969859b00004ba52e41b', 'Support Vector Classifier (Linear Kernel)')
validate_replacement_model(new_model_id: str) → Tuple[str, str, Dict[str, Any]]

Validate a model can be used as the replacement model of the deployment.

New in version v2.17.

Parameters:
new_model_id : str

the id of the new model to validate

Returns:
status : str

status of the validation, will be one of ‘passing’, ‘warning’ or ‘failing’. If the status is passing or warning, use replace_model() to perform a model replacement. If the status is failing, refer to checks for more detail on why the new model cannot be used as a replacement.

message : str

message for the validation result

checks : dict

explain why the new model can or cannot replace the deployment’s current model
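
Examples

A sketch of validating a candidate model before replacement; the deployment and model IDs are placeholders.

from datarobot import Deployment
from datarobot.enums import MODEL_REPLACEMENT_REASON
deployment = Deployment.get(deployment_id='5c939e08962d741e34f609f0')
status, message, checks = deployment.validate_replacement_model('5c0a969859b00004ba52e41b')
if status == 'passing':
    deployment.replace_model('5c0a969859b00004ba52e41b', MODEL_REPLACEMENT_REASON.ACCURACY)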

get_features() → List[FeatureDict]

Retrieve the list of features needed to make predictions on this deployment.

Returns:
features: list

a list of feature dict

Notes

Each feature dict contains the following structure:

  • name : str, feature name
  • feature_type : str, feature type
  • importance : float, numeric measure of the relationship strength between the feature and target (independent of model or other features)
  • date_format : str or None, the date format string for how this feature was interpreted, null if not a date feature, compatible with https://docs.python.org/2/library/time.html#time.strftime.
  • known_in_advance : bool, whether the feature was selected as known in advance in a time series model, false for non-time series models.

Examples

from datarobot import Deployment
deployment = Deployment.get(deployment_id='5c939e08962d741e34f609f0')
features = deployment.get_features()
features[0]['feature_type']
>>>'Categorical'
features[0]['importance']
>>>0.133
submit_actuals(data: Union[pd.DataFrame, List[Actual]], batch_size: int = 10000) → None

Submit actuals for processing. The actuals submitted will be used to calculate accuracy metrics.

Parameters:
data : list or pandas.DataFrame

If data is a list, each item should be a dict-like object with the following keys and values; if data is a pandas.DataFrame, it should contain the following columns:

  • association_id : str, a unique identifier used with a prediction, max length 128 characters
  • actual_value : str or int or float, the actual value of a prediction; should be numeric for deployments with regression models or string for deployments with classification models
  • was_acted_on : bool, optional, indicates if the prediction was acted on in a way that could have affected the actual outcome
  • timestamp : datetime or string in RFC3339 format, optional. If the datetime provided does not have a timezone, we assume it is UTC.

batch_size : int, optional

The maximum number of actuals in each request.

Raises:
ValueError

if the input data is not a list of dict-like objects or a pandas.DataFrame, or if the input data is empty

Examples

from datarobot import Deployment, AccuracyOverTime
deployment = Deployment.get(deployment_id='5c939e08962d741e34f609f0')
data = [{
    'association_id': '439917',
    'actual_value': 'True',
    'was_acted_on': True
}]
deployment.submit_actuals(data)
get_predictions_by_forecast_date_settings() → ForecastDateSettings

Retrieve predictions by forecast date settings of this deployment.

New in version v2.27.

Returns:
settings : dict

Predictions by forecast date settings of the deployment, a dict with the following format:

enabled : bool

True if predictions by forecast date is enabled for this deployment. To update this setting, see update_predictions_by_forecast_date_settings().

column_name : string

The column name in prediction datasets to be used as forecast date.

datetime_format : string

The datetime format of the forecast date column in prediction datasets.

update_predictions_by_forecast_date_settings(enable_predictions_by_forecast_date: bool, forecast_date_column_name: Optional[str] = None, forecast_date_format: Optional[str] = None, max_wait: int = 600) → None

Update predictions by forecast date settings of this deployment.

New in version v2.27.

Updating predictions by forecast date setting is an asynchronous process, which means some preparatory work may be performed after the initial request is completed. This function will not return until all preparatory work is fully finished.

Parameters:
enable_predictions_by_forecast_date : bool

Set to True to turn predictions by forecast date on, or False to turn it off.

forecast_date_column_name: string, optional

The column name in prediction datasets to be used as the forecast date. Ignored if enable_predictions_by_forecast_date is set to False.

forecast_date_format: string, optional

The datetime format of the forecast date column in prediction datasets. Ignored if enable_predictions_by_forecast_date is set to False.

max_wait : int, optional

seconds to wait for successful resolution

Examples

# To set predictions by forecast date settings to the same default settings you see when using
# the DataRobot web application, you use your 'Deployment' object like this:
deployment.update_predictions_by_forecast_date_settings(
   enable_predictions_by_forecast_date=True,
   forecast_date_column_name="date (actual)",
   forecast_date_format="%Y-%m-%d",
)
get_challenger_models_settings() → ChallengerModelsSettings

Retrieve challenger models settings of this deployment.

New in version v2.27.

Returns:
settings : dict

Challenger models settings of the deployment, a dict with the following format:

enabled : bool

True if challenger models are enabled for this deployment. To update existing challenger_models settings, see update_challenger_models_settings().

update_challenger_models_settings(challenger_models_enabled: bool, max_wait: int = 600) → None

Update challenger models settings of this deployment.

New in version v2.27.

Updating challenger models setting is an asynchronous process, which means some preparatory work may be performed after the initial request is completed. This function will not return until all preparatory work is fully finished.

Parameters:
challenger_models_enabled : bool

Set to True to turn challenger models on, or False to turn them off.

max_wait : int, optional

seconds to wait for successful resolution
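
Examples

A minimal sketch of enabling challenger models and confirming the setting; the deployment ID is a placeholder.

from datarobot import Deployment
deployment = Deployment.get(deployment_id='5c939e08962d741e34f609f0')
deployment.update_challenger_models_settings(challenger_models_enabled=True)
settings = deployment.get_challenger_models_settings()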

get_segment_analysis_settings() → SegmentAnalysisSettings

Retrieve segment analysis settings of this deployment.

New in version v2.27.

Returns:
settings : dict

Segment analysis settings of the deployment containing two items with keys enabled and attributes, which are further described below.

enabled : bool

True if segment analysis is enabled for this deployment. To update the existing setting, see update_segment_analysis_settings().

attributes : list

The segment attributes selected for tracking. To create or update existing segment analysis attributes, see update_segment_analysis_settings().

update_segment_analysis_settings(segment_analysis_enabled: bool, segment_analysis_attributes: Optional[List[str]] = None, max_wait: int = 600) → None

Update segment analysis settings of this deployment.

New in version v2.27.

Updating segment analysis setting is an asynchronous process, which means some preparatory work may be performed after the initial request is completed. This function will not return until all preparatory work is fully finished.

Parameters:
segment_analysis_enabled : bool

Set to True to turn segment analysis on, or False to turn it off.

segment_analysis_attributes: list, optional

A list of strings that gives the segment attributes selected for tracking.

max_wait : int, optional

seconds to wait for successful resolution
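
Examples

An illustrative sketch; the deployment ID and segment attribute names are placeholders.

from datarobot import Deployment
deployment = Deployment.get(deployment_id='5c939e08962d741e34f609f0')
deployment.update_segment_analysis_settings(
    segment_analysis_enabled=True,
    segment_analysis_attributes=['country', 'device_type'],
)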

get_drift_tracking_settings() → DriftTrackingSettings

Retrieve drift tracking settings of this deployment.

New in version v2.17.

Returns:
settings : dict

Drift tracking settings of the deployment containing two nested dicts with keys target_drift and feature_drift, which are further described below.

Target drift setting contains:

enabled : bool

True if target drift tracking is enabled for this deployment. To create or update existing target_drift settings, see update_drift_tracking_settings().

Feature drift setting contains:

enabled : bool

True if feature drift tracking is enabled for this deployment. To create or update existing feature_drift settings, see update_drift_tracking_settings().

update_drift_tracking_settings(target_drift_enabled: Optional[bool] = None, feature_drift_enabled: Optional[bool] = None, max_wait: int = 600) → None

Update drift tracking settings of this deployment.

New in version v2.17.

Updating drift tracking setting is an asynchronous process, which means some preparatory work may be performed after the initial request is completed. This function will not return until all preparatory work is fully finished.

Parameters:
target_drift_enabled : bool, optional

if target drift tracking is to be turned on

feature_drift_enabled : bool, optional

if feature drift tracking is to be turned on

max_wait : int, optional

seconds to wait for successful resolution
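
Examples

A minimal sketch of turning on both target and feature drift tracking; the deployment ID is a placeholder.

from datarobot import Deployment
deployment = Deployment.get(deployment_id='5c939e08962d741e34f609f0')
deployment.update_drift_tracking_settings(target_drift_enabled=True, feature_drift_enabled=True)
settings = deployment.get_drift_tracking_settings()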

get_association_id_settings() → str

Retrieve association ID setting for this deployment.

New in version v2.19.

Returns:
association_id_settings : dict in the following format:
column_names : list[string], optional

names of the columns used as the association ID

required_in_prediction_requests : bool, optional

whether the association ID column is required in prediction requests

update_association_id_settings(column_names: Optional[List[str]] = None, required_in_prediction_requests: Optional[bool] = None, max_wait: int = 600) → None

Update association ID setting for this deployment.

New in version v2.19.

Parameters:
column_names : list[string], optional

names of the columns to be used as the association ID; currently only a list of one string is supported

required_in_prediction_requests : bool, optional

whether the association ID column is required in prediction requests

max_wait : int, optional

seconds to wait for successful resolution
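
Examples

A sketch of configuring an association ID column; the deployment ID and column name are placeholders.

from datarobot import Deployment
deployment = Deployment.get(deployment_id='5c939e08962d741e34f609f0')
deployment.update_association_id_settings(
    column_names=['transaction_id'],
    required_in_prediction_requests=True,
)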

get_predictions_data_collection_settings() → Dict[str, bool]

Retrieve predictions data collection settings of this deployment.

New in version v2.21.

Returns:
predictions_data_collection_settings : dict in the following format:
enabled : bool

True if predictions data collection is enabled for this deployment. To update existing predictions_data_collection settings, see update_predictions_data_collection_settings().

update_predictions_data_collection_settings(enabled: bool, max_wait: int = 600) → None

Update predictions data collection settings of this deployment.

New in version v2.21.

Updating predictions data collection setting is an asynchronous process, which means some preparatory work may be performed after the initial request is completed. This function will not return until all preparatory work is fully finished.

Parameters:
enabled: bool

if predictions data collection is to be turned on

max_wait : int, optional

seconds to wait for successful resolution

get_prediction_warning_settings() → PredictionWarningSettings

Retrieve prediction warning settings of this deployment.

New in version v2.19.

Returns:
settings : dict in the following format:
enabled : bool

True if prediction warnings are enabled for this deployment. To create or update existing prediction_warning settings, see update_prediction_warning_settings().

custom_boundaries : dict or None
If None, the default boundaries for the model are used. Otherwise, contains the following keys:
upper : float

All predictions greater than provided value are considered anomalous

lower : float

All predictions less than provided value are considered anomalous

update_prediction_warning_settings(prediction_warning_enabled: bool, use_default_boundaries: Optional[bool] = None, lower_boundary: Optional[float] = None, upper_boundary: Optional[float] = None, max_wait: int = 600) → None

Update prediction warning settings of this deployment.

New in version v2.19.

Parameters:
prediction_warning_enabled : bool

If prediction warnings should be turned on.

use_default_boundaries : bool, optional

If default boundaries of the model should be used for the deployment.

upper_boundary : float, optional

All predictions greater than provided value will be considered anomalous

lower_boundary : float, optional

All predictions less than provided value will be considered anomalous

max_wait : int, optional

seconds to wait for successful resolution
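
Examples

A sketch of enabling prediction warnings with custom boundaries; the deployment ID and boundary values are placeholders.

from datarobot import Deployment
deployment = Deployment.get(deployment_id='5c939e08962d741e34f609f0')
deployment.update_prediction_warning_settings(
    prediction_warning_enabled=True,
    use_default_boundaries=False,
    lower_boundary=0.0,
    upper_boundary=100.0,
)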

get_prediction_intervals_settings() → PredictionIntervalsSettings

Retrieve prediction intervals settings for this deployment.

New in version v2.19.

Returns:
dict in the following format:
enabled : bool

Whether prediction intervals are enabled for this deployment

percentiles : list[int]

List of enabled prediction intervals’ sizes for this deployment. Currently we only support one percentile at a time.

Notes

Note that prediction intervals are only supported for time series deployments.

update_prediction_intervals_settings(percentiles: List[int], enabled: bool = True, max_wait: int = 600) → None

Update prediction intervals settings for this deployment.

New in version v2.19.

Parameters:
percentiles : list[int]

The prediction intervals percentiles to enable for this deployment. Currently we only support setting one percentile at a time.

enabled : bool, optional (defaults to True)

Whether to enable showing prediction intervals in the results of predictions requested using this deployment.

max_wait : int, optional

seconds to wait for successful resolution

Raises:
AssertionError

If percentiles is in an invalid format

AsyncFailureError

If any of the responses from the server are unexpected

AsyncProcessUnsuccessfulError

If the prediction intervals calculation job has failed or has been cancelled.

AsyncTimeoutError

If the prediction intervals calculation job did not resolve in time

Notes

Updating prediction intervals settings is an asynchronous process, which means some preparatory work may be performed before the settings request is completed. This function will not return until all work is fully finished.

Note that prediction intervals are only supported for time series deployments.
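
Examples

A minimal sketch for a time series deployment; the deployment ID and the 80th percentile are placeholders.

from datarobot import Deployment
deployment = Deployment.get(deployment_id='5c939e08962d741e34f609f0')
deployment.update_prediction_intervals_settings(percentiles=[80])
settings = deployment.get_prediction_intervals_settings()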

get_service_stats(model_id: Optional[str] = None, start_time: Optional[datetime.datetime] = None, end_time: Optional[datetime.datetime] = None, execution_time_quantile: Optional[float] = None, response_time_quantile: Optional[float] = None, slow_requests_threshold: Optional[float] = None) → datarobot.models.service_stats.ServiceStats

Retrieve value of service stat metrics over a certain time period.

New in version v2.18.

Parameters:
model_id : str, optional

the id of the model

start_time : datetime, optional

start of the time period

end_time : datetime, optional

end of the time period

execution_time_quantile : float, optional

quantile for executionTime, defaults to 0.5

response_time_quantile : float, optional

quantile for responseTime, defaults to 0.5

slow_requests_threshold : float, optional

threshold for slowRequests, defaults to 1000

Returns:
service_stats : ServiceStats

the queried service stats metrics information
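
Examples

An illustrative sketch; the deployment ID and time window are placeholders.

from datetime import datetime
from datarobot import Deployment
deployment = Deployment.get(deployment_id='5c939e08962d741e34f609f0')
service_stats = deployment.get_service_stats(
    start_time=datetime(2019, 7, 1),
    end_time=datetime(2019, 8, 1),
)
service_stats.metrics  # dict of service stat metrics for the period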

get_service_stats_over_time(metric: Optional[str] = None, model_id: Optional[str] = None, start_time: Optional[datetime.datetime] = None, end_time: Optional[datetime.datetime] = None, bucket_size: Optional[str] = None, quantile: Optional[float] = None, threshold: Optional[int] = None) → datarobot.models.service_stats.ServiceStatsOverTime

Retrieve information about how a service stat metric changes over a certain time period.

New in version v2.18.

Parameters:
metric : SERVICE_STAT_METRIC, optional

the service stat metric to retrieve

model_id : str, optional

the id of the model

start_time : datetime, optional

start of the time period

end_time : datetime, optional

end of the time period

bucket_size : str, optional

time duration of a bucket, in ISO 8601 time duration format

quantile : float, optional

quantile for ‘executionTime’ or ‘responseTime’, ignored when querying other metrics

threshold : int, optional

threshold for ‘slowQueries’, ignored when querying other metrics

Returns:
service_stats_over_time : ServiceStatsOverTime

the queried service stats metric over time information

get_target_drift(model_id: Optional[str] = None, start_time: Optional[datetime.datetime] = None, end_time: Optional[datetime.datetime] = None, metric: Optional[str] = None) → datarobot.models.data_drift.TargetDrift

Retrieve target drift information over a certain time period.

New in version v2.21.

Parameters:
model_id : str

the id of the model

start_time : datetime

start of the time period

end_time : datetime

end of the time period

metric : str

(New in version v2.22) metric used to calculate the drift score

Returns:
target_drift : TargetDrift

the queried target drift information

get_feature_drift(model_id: Optional[str] = None, start_time: Optional[datetime.datetime] = None, end_time: Optional[datetime.datetime] = None, metric: Optional[str] = None) → List[datarobot.models.data_drift.FeatureDrift]

Retrieve drift information for deployment’s features over a certain time period.

New in version v2.21.

Parameters:
model_id : str

the id of the model

start_time : datetime

start of the time period

end_time : datetime

end of the time period

metric : str

(New in version v2.22) The metric used to calculate the drift score. Allowed values include psi, kl_divergence, dissimilarity, hellinger, and js_divergence.

Returns:
feature_drift_data : [FeatureDrift]

the queried feature drift information

get_accuracy(model_id: Optional[str] = None, start_time: Optional[datetime.datetime] = None, end_time: Optional[datetime.datetime] = None, start: Optional[datetime.datetime] = None, end: Optional[datetime.datetime] = None, target_classes: Optional[List[str]] = None) → datarobot.models.accuracy.Accuracy

Retrieve values of accuracy metrics over a certain time period.

New in version v2.18.

Parameters:
model_id : str

the id of the model

start_time : datetime

start of the time period

end_time : datetime

end of the time period

target_classes : list[str], optional

Optional list of target class strings

Returns:
accuracy : Accuracy

the queried accuracy metrics information

get_accuracy_over_time(metric: Optional[datarobot.enums.ACCURACY_METRIC] = None, model_id: Optional[str] = None, start_time: Optional[datetime.datetime] = None, end_time: Optional[datetime.datetime] = None, bucket_size: Optional[str] = None, target_classes: Optional[List[str]] = None) → datarobot.models.accuracy.AccuracyOverTime

Retrieve information about how an accuracy metric changes over a certain time period.

New in version v2.18.

Parameters:
metric : ACCURACY_METRIC

the accuracy metric to retrieve

model_id : str

the id of the model

start_time : datetime

start of the time period

end_time : datetime

end of the time period

bucket_size : str

time duration of a bucket, in ISO 8601 time duration format

target_classes : list[str], optional

Optional list of target class strings

Returns:
accuracy_over_time : AccuracyOverTime

the queried accuracy metric over time information

update_secondary_dataset_config(secondary_dataset_config_id: str, credential_ids: Optional[List[str]] = None) → str

Update the secondary dataset config used by Feature discovery model for a given deployment.

New in version v2.23.

Parameters:
secondary_dataset_config_id: str

Id of the secondary dataset config

credential_ids: list or None

List of DatasetsCredentials used by the secondary datasets

Examples

from datarobot import Deployment
deployment = Deployment(deployment_id='5c939e08962d741e34f609f0')
config = deployment.update_secondary_dataset_config('5df109112ca582033ff44084')
config
>>> '5df109112ca582033ff44084'
get_secondary_dataset_config() → str

Get the secondary dataset config used by Feature discovery model for a given deployment.

New in version v2.23.

Returns:
secondary_dataset_config : SecondaryDatasetConfigurations

Id of the secondary dataset config

Examples

from datarobot import Deployment
deployment = Deployment(deployment_id='5c939e08962d741e34f609f0')
deployment.update_secondary_dataset_config('5df109112ca582033ff44084')
config = deployment.get_secondary_dataset_config()
config
>>> '5df109112ca582033ff44084'
get_prediction_results(model_id: Optional[str] = None, start_time: Optional[datetime.datetime] = None, end_time: Optional[datetime.datetime] = None, actuals_present: Optional[bool] = None, offset: Optional[int] = None, limit: Optional[int] = None) → List[Dict[str, Any]]

Retrieve a list of prediction results of the deployment.

New in version v2.24.

Parameters:
model_id : str

the id of the model

start_time : datetime

start of the time period

end_time : datetime

end of the time period

actuals_present : bool

filters prediction results to only those with actuals present, or to only those with missing actuals

offset : int

this many results will be skipped

limit : int

at most this many results are returned

Returns:
prediction_results: list[dict]

a list of prediction results

Examples

from datarobot import Deployment
deployment = Deployment.get(deployment_id='5c939e08962d741e34f609f0')
results = deployment.get_prediction_results()
download_prediction_results(filepath: str, model_id: Optional[str] = None, start_time: Optional[datetime.datetime] = None, end_time: Optional[datetime.datetime] = None, actuals_present: Optional[bool] = None, offset: Optional[int] = None, limit: Optional[int] = None) → None

Download prediction results of the deployment as a CSV file.

New in version v2.24.

Parameters:
filepath : str

path of the csv file

model_id : str

the id of the model

start_time : datetime

start of the time period

end_time : datetime

end of the time period

actuals_present : bool

filters prediction results to only those with actuals present, or to only those with missing actuals

offset : int

this many results will be skipped

limit : int

at most this many results are returned

Examples

from datarobot import Deployment
deployment = Deployment.get(deployment_id='5c939e08962d741e34f609f0')
results = deployment.download_prediction_results('path_to_prediction_results.csv')
download_scoring_code(filepath: str, source_code: bool = False, include_agent: bool = False, include_prediction_explanations: bool = False, include_prediction_intervals: bool = False) → None

Retrieve scoring code of the current deployed model.

New in version v2.24.

Parameters:
filepath : str

path of the scoring code file

source_code : bool

whether source code or binary of the scoring code will be retrieved

include_agent : bool

whether the scoring code retrieved will include tracking agent

include_prediction_explanations : bool

whether the scoring code retrieved will include prediction explanations

include_prediction_intervals : bool

whether the scoring code retrieved will support prediction intervals

Notes

When setting include_agent, include_prediction_explanations, or include_prediction_intervals to True, downloading the scoring code can take considerably longer.

Examples

from datarobot import Deployment
deployment = Deployment.get(deployment_id='5c939e08962d741e34f609f0')
results = deployment.download_scoring_code('path_to_scoring_code.jar')
delete_monitoring_data(model_id: str, start_time: Optional[datetime.datetime] = None, end_time: Optional[datetime.datetime] = None, max_wait: int = 600) → None

Delete deployment monitoring data.

Parameters:
model_id : str

id of the model to delete monitoring data

start_time : datetime, optional

start of the time period to delete monitoring data

end_time : datetime, optional

end of the time period to delete monitoring data

max_wait : int, optional

seconds to wait for successful resolution
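
Examples

A sketch of deleting monitoring data for a model over a time window; the IDs and dates are placeholders.

from datetime import datetime
from datarobot import Deployment
deployment = Deployment.get(deployment_id='5c939e08962d741e34f609f0')
deployment.delete_monitoring_data(
    model_id='5c0a969859b00004ba52e41b',
    start_time=datetime(2019, 7, 1),
    end_time=datetime(2019, 8, 1),
)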

classmethod from_data(data: Union[Dict[str, Any], List[Dict[str, Any]]]) → T

Instantiate an object of this class using a dict.

Parameters:
data : dict

Correctly snake_cased keys and their values.

classmethod from_server_data(data: Union[Dict[str, Any], List[Dict[str, Any]]], keep_attrs: Optional[Iterable[str]] = None) → T

Instantiate an object of this class using the data directly from the server, meaning that the keys may have the wrong camel casing

Parameters:
data : dict

The directly translated dict of JSON from the server. No casing fixes have taken place

keep_attrs : iterable

List, set or tuple of the dotted namespace notations for attributes to keep within the object structure even if their values are None

open_in_browser() → None

Opens the class' relevant web location in a browser. If a default browser is not available, the URL is logged.

Note: If text-mode browsers are used, the calling process will block until the user exits the browser.

class datarobot.models.deployment.DeploymentListFilters(role: Optional[str] = None, service_health: Optional[List[str]] = None, model_health: Optional[List[str]] = None, accuracy_health: Optional[List[str]] = None, execution_environment_type: Optional[List[str]] = None, importance: Optional[List[str]] = None)
class datarobot.models.ServiceStats(period: Optional[Period] = None, metrics: Optional[Metrics] = None, model_id: Optional[str] = None)

Deployment service stats information.

Attributes:
model_id : str

the model used to retrieve service stats metrics

period : dict

the time period used to retrieve service stats metrics

metrics : dict

the service stats metrics

classmethod get(deployment_id: str, model_id: Optional[str] = None, start_time: Optional[datetime.datetime] = None, end_time: Optional[datetime.datetime] = None, execution_time_quantile: Optional[float] = None, response_time_quantile: Optional[float] = None, slow_requests_threshold: Optional[float] = None) → datarobot.models.service_stats.ServiceStats

Retrieve value of service stat metrics over a certain time period.

New in version v2.18.

Parameters:
deployment_id : str

the id of the deployment

model_id : str, optional

the id of the model

start_time : datetime, optional

start of the time period

end_time : datetime, optional

end of the time period

execution_time_quantile : float, optional

quantile for executionTime, defaults to 0.5

response_time_quantile : float, optional

quantile for responseTime, defaults to 0.5

slow_requests_threshold : float, optional

threshold for slowRequests, defaults to 1000

Returns:
service_stats : ServiceStats

the queried service stats metrics

class datarobot.models.ServiceStatsOverTime(buckets: Optional[List[Bucket]] = None, summary: Optional[Bucket] = None, metric: Optional[str] = None, model_id: Optional[str] = None)

Deployment service stats over time information.

Attributes:
model_id : str

the model used to retrieve accuracy metric

metric : str

the service stat metric being retrieved

buckets : dict

how the service stat metric changes over time

summary : dict

summary for the service stat metric

classmethod get(deployment_id: str, metric: Optional[str] = None, model_id: Optional[str] = None, start_time: Optional[datetime.datetime] = None, end_time: Optional[datetime.datetime] = None, bucket_size: Optional[str] = None, quantile: Optional[float] = None, threshold: Optional[int] = None) → datarobot.models.service_stats.ServiceStatsOverTime

Retrieve information about how a service stat metric changes over a certain time period.

New in version v2.18.

Parameters:
deployment_id : str

the id of the deployment

metric : SERVICE_STAT_METRIC, optional

the service stat metric to retrieve

model_id : str, optional

the id of the model

start_time : datetime, optional

start of the time period

end_time : datetime, optional

end of the time period

bucket_size : str, optional

time duration of a bucket, in ISO 8601 time duration format

quantile : float, optional

quantile for ‘executionTime’ or ‘responseTime’, ignored when querying other metrics

threshold : int, optional

threshold for ‘slowQueries’, ignored when querying other metrics

Returns:
service_stats_over_time : ServiceStatsOverTime

the queried service stat over time information

bucket_values

The metric value for all time buckets, keyed by start time of the bucket.

Returns:
bucket_values: OrderedDict
class datarobot.models.TargetDrift(period=None, metric=None, model_id=None, target_name=None, drift_score=None, sample_size=None, baseline_sample_size=None)

Deployment target drift information.

Attributes:
model_id : str

the model used to retrieve target drift metric

period : dict

the time period used to retrieve target drift metric

metric : str

the data drift metric

target_name : str

name of the target

drift_score : float

target drift score

sample_size : int

count of data points for comparison

baseline_sample_size : int

count of data points for baseline

classmethod get(deployment_id: str, model_id: Optional[str] = None, start_time: Optional[datetime.datetime] = None, end_time: Optional[datetime.datetime] = None, metric: Optional[str] = None) → datarobot.models.data_drift.TargetDrift

Retrieve target drift information over a certain time period.

New in version v2.21.

Parameters:
deployment_id : str

the id of the deployment

model_id : str

the id of the model

start_time : datetime

start of the time period

end_time : datetime

end of the time period

metric : str

(New in version v2.22) metric used to calculate the drift score

Returns:
target_drift : TargetDrift

the queried target drift information

Examples

from datarobot import Deployment, TargetDrift
deployment = Deployment.get(deployment_id='5c939e08962d741e34f609f0')
target_drift = TargetDrift.get(deployment.id)
target_drift.period['end']
>>>'2019-08-01 00:00:00+00:00'
target_drift.drift_score
>>>0.03423
target_drift.target_name
>>>'readmitted'
class datarobot.models.FeatureDrift(period=None, metric=None, model_id=None, name=None, drift_score=None, feature_impact=None, sample_size=None, baseline_sample_size=None)

Deployment feature drift information.

Attributes:
model_id : str

the model used to retrieve feature drift metric

period : dict

the time period used to retrieve feature drift metric

metric : str

the data drift metric

name : str

name of the feature

drift_score : float

feature drift score

sample_size : int

count of data points for comparison

baseline_sample_size : int

count of data points for baseline

classmethod list(deployment_id: str, model_id: Optional[str] = None, start_time: Optional[datetime.datetime] = None, end_time: Optional[datetime.datetime] = None, metric: Optional[str] = None) → List[datarobot.models.data_drift.FeatureDrift]

Retrieve drift information for deployment’s features over a certain time period.

New in version v2.21.

Parameters:
deployment_id : str

the id of the deployment

model_id : str

the id of the model

start_time : datetime

start of the time period

end_time : datetime

end of the time period

metric : str

(New in version v2.22) metric used to calculate the drift score

Returns:
feature_drift_data : [FeatureDrift]

the queried feature drift information

Examples

from datarobot import Deployment, FeatureDrift
deployment = Deployment.get(deployment_id='5c939e08962d741e34f609f0')
feature_drift = FeatureDrift.list(deployment.id)[0]
feature_drift.period['end']
>>>'2019-08-01 00:00:00+00:00'
feature_drift.drift_score
>>>0.252
feature_drift.name
>>>'age'
class datarobot.models.Accuracy(period: Optional[Period] = None, metrics: Optional[Dict[str, Metric]] = None, model_id: Optional[str] = None)

Deployment accuracy information.

Attributes:
model_id : str

the model used to retrieve accuracy metrics

period : dict

the time period used to retrieve accuracy metrics

metrics : dict

the accuracy metrics

classmethod get(deployment_id: str, model_id: Optional[str] = None, start_time: Optional[datetime.datetime] = None, end_time: Optional[datetime.datetime] = None, target_classes: Optional[List[str]] = None) → datarobot.models.accuracy.Accuracy

Retrieve values of accuracy metrics over a certain time period.

New in version v2.18.

Parameters:
deployment_id : str

the id of the deployment

model_id : str

the id of the model

start_time : datetime

start of the time period

end_time : datetime

end of the time period

target_classes : list[str], optional

Optional list of target class strings

Returns:
accuracy : Accuracy

the queried accuracy metrics information

Examples

from datarobot import Deployment, Accuracy
deployment = Deployment.get(deployment_id='5c939e08962d741e34f609f0')
accuracy = Accuracy.get(deployment.id)
accuracy.period['end']
>>>'2019-08-01 00:00:00+00:00'
accuracy.metrics['LogLoss']['value']
>>>0.7533
accuracy.metric_values['LogLoss']
>>>0.7533
metric_values

The value for all metrics, keyed by metric name.

Returns:
metric_values: Dict
metric_baselines

The baseline value for all metrics, keyed by metric name.

Returns:
metric_baselines: Dict
percent_changes

The percent change of value over baseline for all metrics, keyed by metric name.

Returns:
percent_changes: Dict
class datarobot.models.AccuracyOverTime(buckets: Optional[List[Bucket]] = None, summary: Optional[Summary] = None, baseline: Optional[Bucket] = None, metric: Optional[str] = None, model_id: Optional[str] = None)

Deployment accuracy over time information.

Attributes:
model_id : str

the model used to retrieve accuracy metric

metric : str

the accuracy metric being retrieved

buckets : dict

how the accuracy metric changes over time

summary : dict

summary for the accuracy metric

baseline : dict

baseline for the accuracy metric

classmethod get(deployment_id: str, metric: Optional[datarobot.enums.ACCURACY_METRIC] = None, model_id: Optional[str] = None, start_time: Optional[datetime.datetime] = None, end_time: Optional[datetime.datetime] = None, bucket_size: Optional[str] = None, target_classes: Optional[List[str]] = None) → datarobot.models.accuracy.AccuracyOverTime

Retrieve information about how an accuracy metric changes over a certain time period.

New in version v2.18.

Parameters:
deployment_id : str

the id of the deployment

metric : ACCURACY_METRIC

the accuracy metric to retrieve

model_id : str

the id of the model

start_time : datetime

start of the time period

end_time : datetime

end of the time period

bucket_size : str

time duration of a bucket, in ISO 8601 time duration format

target_classes : list[str], optional

Optional list of target class strings

Returns:
accuracy_over_time : AccuracyOverTime

the queried accuracy metric over time information

Examples

from datarobot import Deployment, AccuracyOverTime
from datarobot.enums import ACCURACY_METRIC
deployment = Deployment.get(deployment_id='5c939e08962d741e34f609f0')
accuracy_over_time = AccuracyOverTime.get(deployment.id, metric=ACCURACY_METRIC.LOGLOSS)
accuracy_over_time.metric
>>>'LogLoss'
accuracy_over_time.metric_values
>>>{datetime.datetime(2019, 8, 1): 0.73, datetime.datetime(2019, 8, 2): 0.55}
classmethod get_as_dataframe(deployment_id: str, metrics: Optional[List[datarobot.enums.ACCURACY_METRIC]] = None, model_id: Optional[str] = None, start_time: Optional[datetime.datetime] = None, end_time: Optional[datetime.datetime] = None, bucket_size: Optional[str] = None) → pandas.core.frame.DataFrame

Retrieve information about how a list of accuracy metrics change over a certain time period as pandas DataFrame.

In the returned DataFrame, the columns correspond to the metrics being retrieved; the rows are labeled with the start time of each bucket.

Parameters:
deployment_id : str

the id of the deployment

metrics : [ACCURACY_METRIC]

the accuracy metrics to retrieve

model_id : str

the id of the model

start_time : datetime

start of the time period

end_time : datetime

end of the time period

bucket_size : str

time duration of a bucket, in ISO 8601 time duration format

Returns:
accuracy_over_time: pd.DataFrame
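
Examples

A minimal sketch; the deployment ID is a placeholder, only the LogLoss metric is requested, and a weekly ISO 8601 bucket size is assumed.

from datarobot import AccuracyOverTime
from datarobot.enums import ACCURACY_METRIC
df = AccuracyOverTime.get_as_dataframe(
    '5c939e08962d741e34f609f0',
    metrics=[ACCURACY_METRIC.LOGLOSS],
    bucket_size='P7D',
)
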
bucket_values

The metric value for all time buckets, keyed by start time of the bucket.

Returns:
bucket_values: Dict
bucket_sample_sizes

The sample size for all time buckets, keyed by start time of the bucket.

Returns:
bucket_sample_sizes: Dict

External Scores and Insights

class datarobot.ExternalScores(project_id: str, scores: List[Score], model_id: Optional[str] = None, dataset_id: Optional[str] = None, actual_value_column: Optional[str] = None)

Metric scores on a prediction dataset with a target column, or with an actual value column in the unsupervised case. Contains project metrics for supervised projects and a special set of classification metrics for unsupervised projects.

New in version v2.21.

Examples

List all scores for a dataset

import datarobot as dr
scores = dr.ExternalScores.list(project_id, dataset_id=dataset_id)
Attributes:
project_id: str

id of the project the model belongs to

model_id: str

id of the model

dataset_id: str

id of the prediction dataset with target or actual value column for unsupervised case

actual_value_column: str, optional

For unsupervised projects only. Actual value column which was used to calculate the classification metrics and insights on the prediction dataset.

scores: list of dicts in a form of {‘label’: metric_name, ‘value’: score}

Scores on the dataset.

classmethod create(project_id: str, model_id: str, dataset_id: str, actual_value_column: Optional[str] = None) → Job

Compute external dataset insights for the specified model.

Parameters:
project_id : str

id of the project the model belongs to

model_id : str

id of the model for which insights are requested

dataset_id : str

id of the dataset for which insights are requested

actual_value_column : str, optional

actual values column label, for unsupervised projects only

Returns:
job : Job

an instance of the created async job
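
Examples

A sketch of requesting external scores and waiting for the job to finish; project_id, model_id, and dataset_id are assumed to reference existing entities.

import datarobot as dr
job = dr.ExternalScores.create(project_id, model_id, dataset_id)
job.wait_for_completion()
scores = dr.ExternalScores.get(project_id, model_id, dataset_id)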

classmethod list(project_id: str, model_id: Optional[str] = None, dataset_id: Optional[str] = None, offset: int = 0, limit: int = 100) → List[datarobot.models.external_dataset_scores_insights.external_scores.ExternalScores]

Fetch external scores list for the project and optionally for model and dataset.

Parameters:
project_id: str

id of the project

model_id: str, optional

if specified, only scores for this model will be retrieved

dataset_id: str, optional

if specified, only scores for this dataset will be retrieved

offset: int, optional

this many results will be skipped, default: 0

limit: int, optional

at most this many results are returned, default: 100, max 1000. To return all results, specify 0

Returns:
A list of ExternalScores objects
classmethod get(project_id: str, model_id: str, dataset_id: str) → datarobot.models.external_dataset_scores_insights.external_scores.ExternalScores

Retrieve external scores for the project, model and dataset.

Parameters:
project_id: str

id of the project

model_id: str

id of the model

dataset_id: str

id of the dataset

Returns:
External Scores object
class datarobot.ExternalLiftChart(dataset_id: str, bins: List[Bin])

Lift chart for the model and prediction dataset with target or actual value column in unsupervised case.

New in version v2.21.

LiftChartBin is a dict containing the following:

  • actual (float) Sum of actual target values in bin
  • predicted (float) Sum of predicted target values in bin
  • bin_weight (float) The weight of the bin. For weighted projects, it is the sum of the weights of the rows in the bin. For unweighted projects, it is the number of rows in the bin.
Attributes:
dataset_id: str

id of the prediction dataset with target or actual value column for unsupervised case

bins: list of dict

List of dicts with schema described as LiftChartBin above.

classmethod list(project_id: str, model_id: str, dataset_id: Optional[str] = None, offset: int = 0, limit: int = 100) → List[datarobot.models.external_dataset_scores_insights.external_lift_chart.ExternalLiftChart]

Retrieve list of the lift charts for the model.

Parameters:
project_id: str

id of the project

model_id: str

if specified, only lift chart for this model will be retrieved

dataset_id: str, optional

if specified, only lift chart for this dataset will be retrieved

offset: int, optional

this many results will be skipped, default: 0

limit: int, optional

at most this many results are returned, default: 100, max 1000. To return all results, specify 0

Returns:
A list of ExternalLiftChart objects
classmethod get(project_id: str, model_id: str, dataset_id: str) → datarobot.models.external_dataset_scores_insights.external_lift_chart.ExternalLiftChart

Retrieve lift chart for the model and prediction dataset.

Parameters:
project_id: str

project id

model_id: str

model id

dataset_id: str

prediction dataset id with target or actual value column for unsupervised case

Returns:
ExternalLiftChart object
class datarobot.ExternalRocCurve(dataset_id: str, roc_points: List[EstimatedMetric], negative_class_predictions: List[float], positive_class_predictions: List[float])

ROC curve data for the model and prediction dataset with target or actual value column in unsupervised case.

New in version v2.21.

Attributes:
dataset_id: str

id of the prediction dataset with target or actual value column for unsupervised case

roc_points: list of dict

List of precalculated metrics associated with thresholds for ROC curve.

negative_class_predictions: list of float

List of predictions from example for negative class

positive_class_predictions: list of float

List of predictions from example for positive class

classmethod list(project_id: str, model_id: str, dataset_id: Optional[str] = None, offset: int = 0, limit: int = 100) → List[datarobot.models.external_dataset_scores_insights.external_roc_curve.ExternalRocCurve]

Retrieve list of the roc curves for the model.

Parameters:
project_id: str

id of the project

model_id: str

if specified, only ROC curves for this model will be retrieved

dataset_id: str, optional

if specified, only ROC curves for this dataset will be retrieved

offset: int, optional

this many results will be skipped, default: 0

limit: int, optional

at most this many results are returned, default: 100, max 1000. To return all results, specify 0

Returns:
A list of ExternalRocCurve objects
classmethod get(project_id: str, model_id: str, dataset_id: str) → datarobot.models.external_dataset_scores_insights.external_roc_curve.ExternalRocCurve

Retrieve ROC curve chart for the model and prediction dataset.

Parameters:
project_id: str

project id

model_id: str

model id

dataset_id: str

prediction dataset id with target or actual value column for unsupervised case

Returns:
ExternalRocCurve object

Feature

class datarobot.models.Feature(id, project_id=None, name=None, feature_type=None, importance=None, low_information=None, unique_count=None, na_count=None, date_format=None, min=None, max=None, mean=None, median=None, std_dev=None, time_series_eligible=None, time_series_eligibility_reason=None, time_step=None, time_unit=None, target_leakage=None, feature_lineage_id=None, key_summary=None, multilabel_insights=None)

A feature from a project’s dataset

These are features either included in the originally uploaded dataset or added to it via feature transformations. In time series projects, these will be distinct from the ModelingFeatures created during partitioning; otherwise, they will correspond to the same features. For more information about input and modeling features, see the time series documentation.

The min, max, mean, median, and std_dev attributes provide information about the distribution of the feature in the EDA sample data. For non-numeric features or features created prior to these summary statistics becoming available, they will be None. For features where the summary statistics are available, they will be in a format compatible with the data type, i.e. date type features will have their summary statistics expressed as ISO-8601 formatted date strings.

Attributes:
id : int

the id for the feature - note that name is used to reference the feature instead of id

project_id : str

the id of the project the feature belongs to

name : str

the name of the feature

feature_type : str

the type of the feature, e.g. ‘Categorical’, ‘Text’

importance : float or None

numeric measure of the strength of relationship between the feature and target (independent of any model or other features); may be None for non-modeling features such as partition columns

low_information : bool

whether a feature is considered too uninformative for modeling (e.g. because it has too few values)

unique_count : int

number of unique values

na_count : int or None

number of missing values

date_format : str or None

For Date features, the date format string for how this feature was interpreted, compatible with https://docs.python.org/2/library/time.html#time.strftime . For other feature types, None.

min : str, int, float, or None

The minimum value of the source data in the EDA sample

max : str, int, float, or None

The maximum value of the source data in the EDA sample

mean : str, int, or float

The arithmetic mean of the source data in the EDA sample

median : str, int, float, or None

The median of the source data in the EDA sample

std_dev : str, int, float, or None

The standard deviation of the source data in the EDA sample

time_series_eligible : bool

Whether this feature can be used as the datetime partition column in a time series project.

time_series_eligibility_reason : str

Why the feature is ineligible for the datetime partition column in a time series project, or ‘suitable’ when it is eligible.

time_step : int or None

For time series eligible features, a positive integer determining the interval at which windows can be specified. If used as the datetime partition column on a time series project, the feature derivation and forecast windows must start and end at an integer multiple of this value. None for features that are not time series eligible.

time_unit : str or None

For time series eligible features, the time unit covered by a single time step, e.g. ‘HOUR’, or None for features that are not time series eligible.

target_leakage : str

Whether a feature is considered to have target leakage or not. A value of ‘SKIPPED_DETECTION’ indicates that target leakage detection was not run on the feature. ‘FALSE’ indicates no leakage, ‘MODERATE’ indicates a moderate risk of target leakage, and ‘HIGH_RISK’ indicates a high risk of target leakage

feature_lineage_id : str

id of a lineage for automatically discovered features or derived time series features.

key_summary: list of dict

Statistics for the top 50 keys (truncated to 103 characters) of a Summarized Categorical column, for example:

{'key': 'DataRobot', 'summary': {'min': 0, 'max': 29815.0, 'stdDev': 6498.029, 'mean': 1490.75, 'median': 0.0, 'pctRows': 5.0}}

where:

key : string or None

name of the key

summary : dict

statistics of the key: min (minimum value of the key), max (maximum value of the key), mean (mean value of the key), median (median value of the key), stdDev (standard deviation of the key), and pctRows (percentage occurrence of the key in the EDA sample of the feature).

multilabel_insights_key : str or None

For multicategorical columns this will contain a key for multilabel insights. The key is unique for a project, feature and EDA stage combination. This will be the key for the most recent, finished EDA stage.

classmethod get(project_id, feature_name)

Retrieve a single feature

Parameters:
project_id : str

The ID of the project the feature is associated with.

feature_name : str

The name of the feature to retrieve

Returns:
feature : Feature

The queried instance
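
Examples

An illustrative sketch; the project ID and feature name are placeholders.

from datarobot import Feature
feature = Feature.get('5c939e08962d741e34f609f0', 'age')
feature.feature_type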

get_multiseries_properties(multiseries_id_columns, max_wait=600)

Retrieve time series properties for a potential multiseries datetime partition column

Multiseries time series projects use multiseries id columns to model multiple distinct series within a single project. This function returns the time series properties (time step and time unit) of this column if it were used as a datetime partition column with the specified multiseries id columns, running multiseries detection automatically if it had not previously been successfully run.

Parameters:
multiseries_id_columns : list of str

the name(s) of the multiseries id columns to use with this datetime partition column. Currently only one multiseries id column is supported.

max_wait : int, optional

if a multiseries detection task is run, the maximum amount of time to wait for it to complete before giving up

Returns:
properties : dict

A dict with three keys:

  • time_series_eligible : bool, whether the column can be used as a partition column
  • time_unit : str or null, the inferred time unit if used as a partition column
  • time_step : int or null, the inferred time step if used as a partition column
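
Examples

A sketch assuming a candidate datetime partition column named 'sale_date' and a hypothetical multiseries ID column named 'store_id'; the returned values shown are illustrative.

from datarobot import Feature
feature = Feature.get(project_id, 'sale_date')
properties = feature.get_multiseries_properties(['store_id'])
properties
>>>{'time_series_eligible': True, 'time_unit': 'DAY', 'time_step': 1}
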
get_cross_series_properties(datetime_partition_column, cross_series_group_by_columns, max_wait=600)

Retrieve cross-series properties for multiseries ID column.

This function returns the cross-series properties (eligibility as a group-by column) of this column if it were used with the specified datetime partition column and with the current multiseries id column, running cross-series group-by validation automatically if it had not previously been successfully run.

Parameters:
datetime_partition_column : datetime partition column
cross_series_group_by_columns : list of str

the name(s) of the columns to use with this multiseries ID column. Currently only one cross-series group-by column is supported.

max_wait : int, optional

if a multiseries detection task is run, the maximum amount of time to wait for it to complete before giving up

Returns:
properties : dict

A dict with three keys:

  • name : str, column name
  • eligibility : str, reason for column eligibility
  • isEligible : bool, is column eligible as cross-series group-by
get_multicategorical_histogram()

Retrieve multicategorical histogram for this feature

New in version v2.24.

Returns:
datarobot.models.MulticategoricalHistogram
Raises:
datarobot.errors.InvalidUsageError

if this method is called on an unsuitable feature

ValueError

if no multilabel_insights_key is present for this feature

get_pairwise_correlations()

Retrieve pairwise label correlation for multicategorical features

New in version v2.24.

Returns:
datarobot.models.PairwiseCorrelations
Raises:
datarobot.errors.InvalidUsageError

if this method is called on an unsuitable feature

ValueError

if no multilabel_insights_key is present for this feature

get_pairwise_joint_probabilities()

Retrieve pairwise label joint probabilities for multicategorical features

New in version v2.24.

Returns:
datarobot.models.PairwiseJointProbabilities
Raises:
datarobot.errors.InvalidUsageError

if this method is called on an unsuitable feature

ValueError

if no multilabel_insights_key is present for this feature

get_pairwise_conditional_probabilities()

Retrieve pairwise label conditional probabilities for multicategorical features

New in version v2.24.

Returns:
datarobot.models.PairwiseConditionalProbabilities
Raises:
datarobot.errors.InvalidUsageError

if this method is called on an unsuitable feature

ValueError

if no multilabel_insights_key is present for this feature

classmethod from_data(data: Union[Dict[str, Any], List[Dict[str, Any]]]) → T

Instantiate an object of this class using a dict.

Parameters:
data : dict

Correctly snake_cased keys and their values.

classmethod from_server_data(data: Union[Dict[str, Any], List[Dict[str, Any]]], keep_attrs: Optional[Iterable[str]] = None) → T

Instantiate an object of this class using the data directly from the server, meaning that the keys may have the wrong camel casing

Parameters:
data : dict

The directly translated dict of JSON from the server. No casing fixes have taken place

keep_attrs : iterable

List, set or tuple of the dotted namespace notations for attributes to keep within the object structure even if their values are None

get_histogram(bin_limit=None)

Retrieve a feature histogram

Parameters:
bin_limit : int or None

Desired max number of histogram bins. If omitted, the endpoint will use 60 by default.

Returns:
featureHistogram : FeatureHistogram

The requested histogram with the desired number of bins

class datarobot.models.ModelingFeature(project_id=None, name=None, feature_type=None, importance=None, low_information=None, unique_count=None, na_count=None, date_format=None, min=None, max=None, mean=None, median=None, std_dev=None, parent_feature_names=None, key_summary=None, is_restored_after_reduction=None)

A feature used for modeling

In time series projects, a new set of modeling features is created after setting the partitioning options. These features are automatically derived from those in the project’s dataset and are the features used for modeling. Modeling features are only accessible once the target and partitioning options have been set. In projects that don’t use time series modeling, once the target has been set, ModelingFeatures and Features will behave the same.

For more information about input and modeling features, see the time series documentation.

As with the Feature object, the min, max, mean, median, and std_dev attributes provide information about the distribution of the feature in the EDA sample data. For non-numeric features, they will be None. For features where the summary statistics are available, they will be in a format compatible with the data type, i.e. date type features will have their summary statistics expressed as ISO-8601 formatted date strings.

Attributes:
project_id : str

the id of the project the feature belongs to

name : str

the name of the feature

feature_type : str

the type of the feature, e.g. ‘Categorical’, ‘Text’

importance : float or None

numeric measure of the strength of relationship between the feature and target (independent of any model or other features); may be None for non-modeling features such as partition columns

low_information : bool

whether a feature is considered too uninformative for modeling (e.g. because it has too few values)

unique_count : int

number of unique values

na_count : int or None

number of missing values

date_format : str or None

For Date features, the date format string for how this feature was interpreted, compatible with https://docs.python.org/2/library/time.html#time.strftime . For other feature types, None.

min : str, int, float, or None

The minimum value of the source data in the EDA sample

max : str, int, float, or None

The maximum value of the source data in the EDA sample

mean : str, int, or float

The arithmetic mean of the source data in the EDA sample

median : str, int, float, or None

The median of the source data in the EDA sample

std_dev : str, int, float, or None

The standard deviation of the source data in the EDA sample

parent_feature_names : list of str

A list of the names of input features used to derive this modeling feature. In cases where the input features and modeling features are the same, this will simply contain the feature’s name. Note that if a derived feature was used to create this modeling feature, the values here will not necessarily correspond to the features that must be supplied at prediction time.

key_summary: list of dict

Statistics for the top 50 keys (truncated to 103 characters) of a Summarized Categorical column, for example:

{'key': 'DataRobot', 'summary': {'min': 0, 'max': 29815.0, 'stdDev': 6498.029, 'mean': 1490.75, 'median': 0.0, 'pctRows': 5.0}}

where:

key : string or None

name of the key

summary : dict

statistics of the key: min (minimum value of the key), max (maximum value of the key), mean (mean value of the key), median (median value of the key), stdDev (standard deviation of the key), and pctRows (percentage occurrence of the key in the EDA sample of the feature).

classmethod get(project_id, feature_name)

Retrieve a single modeling feature

Parameters:
project_id : str

The ID of the project the feature is associated with.

feature_name : str

The name of the feature to retrieve

Returns:
feature : ModelingFeature

The requested feature

class datarobot.models.DatasetFeature(id_, dataset_id=None, dataset_version_id=None, name=None, feature_type=None, low_information=None, unique_count=None, na_count=None, date_format=None, min_=None, max_=None, mean=None, median=None, std_dev=None, time_series_eligible=None, time_series_eligibility_reason=None, time_step=None, time_unit=None, target_leakage=None, target_leakage_reason=None)

A feature from a project’s dataset

These are features either included in the originally uploaded dataset or added to it via feature transformations.

The min, max, mean, median, and std_dev attributes provide information about the distribution of the feature in the EDA sample data. For non-numeric features or features created prior to these summary statistics becoming available, they will be None. For features where the summary statistics are available, they will be in a format compatible with the data type, i.e. date type features will have their summary statistics expressed as ISO-8601 formatted date strings.

Attributes:
id : int

the id for the feature - note that name is used to reference the feature instead of id

dataset_id : str

the id of the dataset the feature belongs to

dataset_version_id : str

the id of the dataset version the feature belongs to

name : str

the name of the feature

feature_type : str, optional

the type of the feature, e.g. ‘Categorical’, ‘Text’

low_information : bool, optional

whether a feature is considered too uninformative for modeling (e.g. because it has too few values)

unique_count : int, optional

number of unique values

na_count : int, optional

number of missing values

date_format : str, optional

For Date features, the date format string for how this feature was interpreted, compatible with https://docs.python.org/2/library/time.html#time.strftime . For other feature types, None.

min : str, int, float, optional

The minimum value of the source data in the EDA sample

max : str, int, float, optional

The maximum value of the source data in the EDA sample

mean : str, int, float, optional

The arithmetic mean of the source data in the EDA sample

median : str, int, float, optional

The median of the source data in the EDA sample

std_dev : str, int, float, optional

The standard deviation of the source data in the EDA sample

time_series_eligible : bool, optional

Whether this feature can be used as the datetime partition column in a time series project.

time_series_eligibility_reason : str, optional

Why the feature is ineligible for the datetime partition column in a time series project, or ‘suitable’ when it is eligible.

time_step : int, optional

For time series eligible features, a positive integer determining the interval at which windows can be specified. If used as the datetime partition column on a time series project, the feature derivation and forecast windows must start and end at an integer multiple of this value. None for features that are not time series eligible.

time_unit : str, optional

For time series eligible features, the time unit covered by a single time step, e.g. ‘HOUR’, or None for features that are not time series eligible.

target_leakage : str, optional

Whether a feature is considered to have target leakage or not. A value of ‘SKIPPED_DETECTION’ indicates that target leakage detection was not run on the feature. ‘FALSE’ indicates no leakage, ‘MODERATE’ indicates a moderate risk of target leakage, and ‘HIGH_RISK’ indicates a high risk of target leakage

target_leakage_reason: string, optional

The descriptive text explaining the reason for target leakage, if any.

get_histogram(bin_limit=None)

Retrieve a feature histogram

Parameters:
bin_limit : int or None

Desired maximum number of histogram bins. If omitted, the endpoint defaults to 60.

Returns:
featureHistogram : DatasetFeatureHistogram

The requested histogram with the desired number of bins
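
For instance, the histogram might be requested like this (a minimal sketch; it assumes dataset_feature is a DatasetFeature instance obtained elsewhere, for example from the dataset's feature listing):

# request a histogram with at most 30 bins for an existing DatasetFeature
histogram = dataset_feature.get_histogram(bin_limit=30)
histogram.plot  # list of histogram bins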

class datarobot.models.DatasetFeatureHistogram(plot)
classmethod get(dataset_id, feature_name, bin_limit=None, key_name=None)

Retrieve a single feature histogram

Parameters:
dataset_id : str

The ID of the Dataset the feature is associated with.

feature_name : str

The name of the feature to retrieve

bin_limit : int or None

Desired maximum number of histogram bins. If omitted, the endpoint defaults to 60.

key_name: string or None

(Only required for summarized categorical features) The name of the key, from the top 50 keys, for which the plot should be retrieved

Returns:
featureHistogram : FeatureHistogram

The queried instance with plot attribute in it.

class datarobot.models.FeatureHistogram(plot)
classmethod get(project_id, feature_name, bin_limit=None, key_name=None)

Retrieve a single feature histogram

Parameters:
project_id : str

The ID of the project the feature is associated with.

feature_name : str

The name of the feature to retrieve

bin_limit : int or None

Desired maximum number of histogram bins. If omitted, the endpoint defaults to 60.

key_name: string or None

(Only required for summarized categorical features) The name of the key, from the top 50 keys, for which the plot should be retrieved

Returns:
featureHistogram : FeatureHistogram

The queried instance with plot attribute in it.
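
For example (a minimal sketch; project_id and the feature name are placeholders):

import datarobot as dr

# retrieve a project feature histogram with at most 30 bins
histogram = dr.models.FeatureHistogram.get(project_id, 'annual_income', bin_limit=30)
histogram.plot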

class datarobot.models.InteractionFeature(rows, source_columns, bars, bubbles)

Interaction feature data

New in version v2.21.

Attributes:
rows: int

Total number of rows

source_columns: list(str)

names of two categorical features which were combined into this one

bars: list(dict)

dictionaries representing frequencies of each independent value from the source columns

bubbles: list(dict)

dictionaries representing frequencies of each combined value in the interaction feature.

classmethod get(project_id, feature_name)

Retrieve a single Interaction feature

Parameters:
project_id : str

The id of the project the feature belongs to

feature_name : str

The name of the Interaction feature to retrieve

Returns:
feature : InteractionFeature

The queried instance
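
For example (a minimal sketch; project_id and the interaction feature name are placeholders):

import datarobot as dr

# retrieve interaction data for a combined categorical feature
interaction = dr.models.InteractionFeature.get(project_id, 'gender + marital_status')
interaction.rows     # total number of rows
interaction.bubbles  # frequencies of each combined value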

class datarobot.models.MulticategoricalHistogram(feature_name, histogram)

Histogram for Multicategorical feature.

New in version v2.24.

Notes

HistogramValues contains:

  • values.[].label : string - Label name
  • values.[].plot : list - Histogram for label
  • values.[].plot.[].label_relevance : int - Label relevance value
  • values.[].plot.[].row_count : int - Row count where label has given relevance
  • values.[].plot.[].row_pct : float - Percentage of rows where label has given relevance
Attributes:
feature_name : str

Name of the feature

values : list(dict)

List of Histogram values with a schema described as HistogramValues

classmethod get(multilabel_insights_key)

Retrieves multicategorical histogram

You might find it more convenient to use Feature.get_multicategorical_histogram instead.

Parameters:
multilabel_insights_key: string

Key for multilabel insights, unique for a project, feature and EDA stage combination. The multilabel_insights_key can be retrieved via Feature.multilabel_insights_key.

Returns:
MulticategoricalHistogram

The multicategorical histogram for multilabel_insights_key

to_dataframe()

Convenience method to get all the information from this multicategorical_histogram instance in form of a pandas.DataFrame.

Returns:
pandas.DataFrame

Histogram information as a multicategorical_histogram. The dataframe will contain these columns: feature_name, label, label_relevance, row_count and row_pct
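
Putting this together with the Feature method mentioned above, the histogram can be retrieved and converted to a DataFrame like this (a minimal sketch; project_id and the feature name are placeholders):

import datarobot as dr

# retrieve the multicategorical histogram via the parent Feature
feature = dr.Feature.get(project_id, 'genres')
histogram = feature.get_multicategorical_histogram()
df = histogram.to_dataframe()
df.columns  # feature_name, label, label_relevance, row_count, row_pct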

class datarobot.models.PairwiseCorrelations(*args, **kwargs)

Correlation of label pairs for multicategorical feature.

New in version v2.24.

Notes

CorrelationValues contain:

  • values.[].label_configuration : list of length 2 - Configuration of the label pair
  • values.[].label_configuration.[].label : str – Label name
  • values.[].statistic_value : float – Statistic value
Attributes:
feature_name : str

Name of the feature

values : list(dict)

List of correlation values with a schema described as CorrelationValues

statistic_dataframe : pandas.DataFrame

Correlation values for all label pairs as a DataFrame

classmethod get(multilabel_insights_key)

Retrieves pairwise correlations

You might find it more convenient to use Feature.get_pairwise_correlations instead.

Parameters:
multilabel_insights_key: string

Key for multilabel insights, unique for a project, feature and EDA stage combination. The multilabel_insights_key can be retrieved via Feature.multilabel_insights_key.

Returns:
PairwiseCorrelations

The pairwise label correlations

as_dataframe()

The pairwise label correlations as a (num_labels x num_labels) DataFrame.

Returns:
pandas.DataFrame

The pairwise label correlations. Index and column names allow the interpretation of the values.
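
For example (a minimal sketch; project_id, the feature name, and the label names are placeholders):

import datarobot as dr

# retrieve pairwise label correlations via the parent Feature
feature = dr.Feature.get(project_id, 'genres')
correlations = feature.get_pairwise_correlations()
corr_df = correlations.as_dataframe()
corr_df.loc['Drama', 'Comedy']  # correlation of the two labels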

class datarobot.models.PairwiseJointProbabilities(*args, **kwargs)

Joint probabilities of label pairs for multicategorical feature.

New in version v2.24.

Notes

ProbabilityValues contain:

  • values.[].label_configuration : list of length 2 - Configuration of the label pair
  • values.[].label_configuration.[].relevance : int – 0 for absence of the labels, 1 for the presence of labels
  • values.[].label_configuration.[].label : str – Label name
  • values.[].statistic_value : float – Statistic value
Attributes:
feature_name : str

Name of the feature

values : list(dict)

List of joint probability values with a schema described as ProbabilityValues

statistic_dataframes : dict(pandas.DataFrame)

Joint Probability values as DataFrames for different relevance combinations.

E.g. The probability P(A=0,B=1) can be retrieved via: pairwise_joint_probabilities.statistic_dataframes[(0,1)].loc['A', 'B']

classmethod get(multilabel_insights_key)

Retrieves pairwise joint probabilities

You might find it more convenient to use Feature.get_pairwise_joint_probabilities instead.

Parameters:
multilabel_insights_key: string

Key for multilabel insights, unique for a project, feature and EDA stage combination. The multilabel_insights_key can be retrieved via Feature.multilabel_insights_key.

Returns:
PairwiseJointProbabilities

The pairwise joint probabilities

as_dataframe(relevance_configuration)

Joint probabilities of label pairs as a (num_labels x num_labels) DataFrame.

Parameters:
relevance_configuration: tuple of length 2

Valid options are (0, 0), (0, 1), (1, 0) and (1, 1). Values of 0 indicate absence of labels and 1 indicates presence of labels. The first value describes the presence for the labels in axis=0 and the second value describes the presence for the labels in axis=1.

For example the matrix values for a relevance configuration of (0, 1) describe the probabilities of absent labels in the index axis and present labels in the column axis.

E.g. The probability P(A=0,B=1) can be retrieved via: pairwise_joint_probabilities.as_dataframe((0,1)).loc['A', 'B']

Returns:
pandas.DataFrame

The joint probabilities for the requested relevance_configuration. Index and column names allow the interpretation of the values.
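
For example (a minimal sketch; project_id, the feature name, and the label names 'A' and 'B' are placeholders):

import datarobot as dr

# retrieve joint probabilities via the parent Feature
feature = dr.Feature.get(project_id, 'genres')
joint_probabilities = feature.get_pairwise_joint_probabilities()
# probability P(A=0, B=1)
joint_probabilities.as_dataframe((0, 1)).loc['A', 'B']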

class datarobot.models.PairwiseConditionalProbabilities(*args, **kwargs)

Conditional probabilities of label pairs for multicategorical feature.

New in version v2.24.

Notes

ProbabilityValues contain:

  • values.[].label_configuration : list of length 2 - Configuration of the label pair
  • values.[].label_configuration.[].relevance : int – 0 for absence of the labels, 1 for the presence of labels
  • values.[].label_configuration.[].label : str – Label name
  • values.[].statistic_value : float – Statistic value
Attributes:
feature_name : str

Name of the feature

values : list(dict)

List of conditional probability values with a schema described as ProbabilityValues

statistic_dataframes : dict(pandas.DataFrame)

Conditional Probability values as DataFrames for different relevance combinations. The label names in the columns are the events, on which we condition. The label names in the index are the events whose conditional probability given the indexes is in the dataframe.

E.g. The probability P(A=0|B=1) can be retrieved via: pairwise_conditional_probabilities.statistic_dataframes[(0,1)].loc['A', 'B']

classmethod get(multilabel_insights_key)

Retrieves pairwise conditional probabilities

You might find it more convenient to use Feature.get_pairwise_conditional_probabilities instead.

Parameters:
multilabel_insights_key: string

Key for multilabel insights, unique for a project, feature and EDA stage combination. The multilabel_insights_key can be retrieved via Feature.multilabel_insights_key.

Returns:
PairwiseConditionalProbabilities

The pairwise conditional probabilities

as_dataframe(relevance_configuration)

Conditional probabilities of label pairs as a (num_labels x num_labels) DataFrame. The label names in the columns are the events, on which we condition. The label names in the index are the events whose conditional probability given the indexes is in the dataframe.

E.g. The probability P(A=0|B=1) can be retrieved via: pairwise_conditional_probabilities.as_dataframe((0, 1)).loc['A', 'B']

Parameters:
relevance_configuration: tuple of length 2

Valid options are (0, 0), (0, 1), (1, 0) and (1, 1). Values of 0 indicate absence of labels and 1 indicates presence of labels. The first value describes the presence for the labels in axis=0 and the second value describes the presence for the labels in axis=1.

For example the matrix values for a relevance configuration of (0, 1) describe the probabilities of absent labels in the index axis given the presence of labels in the column axis.

Returns:
pandas.DataFrame

The conditional probabilities for the requested relevance_configuration. Index and column names allow the interpretation of the values.

Feature Association

class datarobot.models.FeatureAssociationMatrix(strengths: Optional[List[Strength]] = None, features: Optional[List[Feature]] = None, project_id: Optional[str] = None)

Feature association statistics for a project.

Note

Projects created prior to v2.17 are not supported by this feature.

Examples

import datarobot as dr

# retrieve feature association matrix
feature_association_matrix = dr.FeatureAssociationMatrix.get(project_id)
feature_association_matrix.strengths
feature_association_matrix.features

# retrieve feature association matrix for a metric, association type or a feature list
feature_association_matrix = dr.FeatureAssociationMatrix.get(
    project_id,
    metric=dr.enums.FEATURE_ASSOCIATION_METRIC.SPEARMAN,
    association_type=dr.enums.FEATURE_ASSOCIATION_TYPE.CORRELATION,
    featurelist_id=featurelist_id,
)
Attributes:
project_id : str

Id of the associated project.

strengths : list of dict

Pairwise statistics for the available features as structured below.

features : list of dict

Metadata for each feature and where it goes in the matrix.

classmethod get(project_id: str, metric: Optional[str] = None, association_type: Optional[str] = None, featurelist_id: Optional[str] = None) → datarobot.models.feature_association_matrix.feature_association_matrix.FeatureAssociationMatrix

Get feature association statistics.

Parameters:
project_id : str

Id of the project that contains the requested associations.

metric : enums.FEATURE_ASSOCIATION_METRIC

The name of a metric to get pairwise data for. Since ‘v2.19’ this is optional and defaults to enums.FEATURE_ASSOCIATION_METRIC.MUTUAL_INFO.

association_type : enums.FEATURE_ASSOCIATION_TYPE

The type of dependence for the data. Since ‘v2.19’ this is optional and defaults to enums.FEATURE_ASSOCIATION_TYPE.ASSOCIATION.

featurelist_id : str or None

Optional, the feature list to lookup FAM data for. By default, depending on the type of the project “Informative Features” or “Timeseries Informative Features” list will be used. (New in version v2.19)

Returns:
FeatureAssociationMatrix

Feature association pairwise metric strength data, feature clustering data, and ordering data for Feature Association Matrix visualization.

Feature Association Matrix Details

class datarobot.models.FeatureAssociationMatrixDetails(project_id: Optional[str] = None, chart_type: Optional[str] = None, values: Optional[List[Tuple[Any, Any, float]]] = None, features: Optional[List[str]] = None, types: Optional[List[str]] = None, featurelist_id: Optional[str] = None)

Plotting details for a pair of passed features present in the feature association matrix.

Note

Projects created prior to v2.17 are not supported by this feature.

Attributes:
project_id : str

Id of the project that contains the requested associations.

chart_type : str

Which type of plotting the pair of features gets in the UI. e.g. ‘HORIZONTAL_BOX’, ‘VERTICAL_BOX’, ‘SCATTER’ or ‘CONTINGENCY’

values : list

The data triplets for pairwise plotting, e.g. {“values”: [[460.0, 428.5, 0.001], [1679.3, 259.0, 0.001], …]}. The first entry of each list is a value of feature1, the second is a value of feature2, and the third is the relative frequency of the pair of datapoints in the sample.

features : list

A list of the requested features, [feature1, feature2]

types : list

The type of feature1 and feature2. Possible values: “CATEGORICAL”, “NUMERIC”

featurelist_id : str

Id of the feature list to lookup FAM details for.

classmethod get(project_id: str, feature1: str, feature2: str, featurelist_id: Optional[str] = None) → datarobot.models.feature_association_matrix.feature_association_matrix_details.FeatureAssociationMatrixDetails

Get a sample of the actual values used to measure the association between a pair of features

New in version v2.17.

Parameters:
project_id : str

Id of the project of interest.

feature1 : str

Feature name for the first feature of interest.

feature2 : str

Feature name for the second feature of interest.

featurelist_id : str

Optional, the feature list to lookup FAM data for. By default, depending on the type of the project “Informative Features” or “Timeseries Informative Features” list will be used.

Returns:
FeatureAssociationMatrixDetails

The feature association plotting data for the provided pair of features.
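
For example (a minimal sketch; project_id and the feature names are placeholders):

import datarobot as dr

# retrieve plotting details for a pair of features
details = dr.models.FeatureAssociationMatrixDetails.get(
    project_id, feature1='annual_income', feature2='loan_purpose'
)
details.chart_type  # e.g. 'HORIZONTAL_BOX'
details.values[:3]  # sample of the plotting triplets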

Feature Association Featurelists

class datarobot.models.FeatureAssociationFeaturelists(project_id: Optional[str] = None, featurelists: Optional[List[FeatureListType]] = None)

Featurelists with feature association matrix availability flags for a project.

Attributes:
project_id : str

Id of the project that contains the requested associations.

featurelists : list of dict

The featurelists with the featurelist_id, title and the has_fam flag.

classmethod get(project_id: str) → datarobot.models.feature_association_matrix.feature_association_featurelists.FeatureAssociationFeaturelists

Get featurelists with feature association status for each.

Parameters:
project_id : str

Id of the project of interest.

Returns:
FeatureAssociationFeaturelists

Featurelists with feature association status for each.
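
For example (a minimal sketch; project_id is a placeholder):

import datarobot as dr

# list featurelists and whether a feature association matrix exists for each
fam_featurelists = dr.models.FeatureAssociationFeaturelists.get(project_id)
for featurelist in fam_featurelists.featurelists:
    print(featurelist['featurelist_id'], featurelist['title'], featurelist['has_fam'])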

Feature Discovery

Relationships Configuration

class datarobot.models.RelationshipsConfiguration(id, dataset_definitions=None, relationships=None, feature_discovery_mode=None, feature_discovery_settings=None)

A Relationships configuration specifies a set of secondary datasets as well as the relationships among them. It is used to configure Feature Discovery for a project to generate features automatically from these datasets.

Attributes:
id : string

Id of the created relationships configuration

dataset_definitions: list

Each element is a dataset_definition for a dataset.

relationships: list

Each element is a relationship between two datasets

feature_discovery_mode: str

Mode of feature discovery. Supported values are ‘default’ and ‘manual’

feature_discovery_settings: list

List of feature discovery settings used to customize the feature discovery process

The `dataset_definitions` structure is
identifier: string

Alias of the dataset (used directly as part of the generated feature names)

catalog_id: str, or None

Identifier of the catalog item

catalog_version_id: str

Identifier of the catalog item version

primary_temporal_key: string, optional

Name of the column indicating time of record creation

feature_list_id: string, optional

Identifier of the feature list. This decides which columns in the dataset are used for feature generation

snapshot_policy: str

Policy to use when creating a project or making predictions. Must be one of the following values: ‘specified’ (use the specific snapshot specified by catalogVersionId), ‘latest’ (use the latest snapshot from the same catalog item), or ‘dynamic’ (get data from the source; only applicable for JDBC datasets).

feature_lists: list

List of feature list info

data_source: dict

Data source info if the dataset is from data source

data_sources: list

List of data source details for JDBC datasets

is_deleted: bool, optional

Whether the dataset is deleted or not

The `data source info` structure is
data_store_id: str

Id of the data store.

data_store_name : str

User-friendly name of the data store.

url : str

Url used to connect to the data store.

dbtable : str

Name of table from the data store.

schema: str

Schema definition of the table from the data store

catalog: str

Catalog name of the data source.

The `feature list info` structure is
id : str

Id of the featurelist

name : str

Name of the featurelist

features : list of str

Names of all the Features in the featurelist

project_id : str

Project the featurelist belongs to

creation_date : datetime.datetime

When the featurelist was created

user_created : bool

Whether the featurelist was created by a user or by DataRobot automation

created_by: str

Name of user who created it

description : str

Description of the featurelist. Can be updated by the user and may be supplied by default for DataRobot-created featurelists.

dataset_id: str

Dataset which is associated with the feature list

dataset_version_id: str or None

Version of the dataset which is associated with feature list. Only relevant for Informative features

The `relationships` schema is
dataset1_identifier: str or None

Identifier of the first dataset in this relationship. This is specified in the identifier field of dataset_definition structure. If None, then the relationship is with the primary dataset.

dataset2_identifier: str

Identifier of the second dataset in this relationship. This is specified in the identifier field of dataset_definition schema.

dataset1_keys: list of str (max length: 10 min length: 1)

Column(s) from the first dataset which are used to join to the second dataset

dataset2_keys: list of str (max length: 10 min length: 1)

Column(s) from the second dataset that are used to join to the first dataset

time_unit: str, or None

Time unit of the feature derivation window. Supported values are MILLISECOND, SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, QUARTER, YEAR. If present, the feature engineering Graph will perform time-aware joins.

feature_derivation_window_start: int, or None

How many time_units of each dataset’s primary temporal key into the past relative to the datetimePartitionColumn the feature derivation window should begin. Will be a negative integer, if present. If present, the feature engineering Graph will perform time-aware joins.

feature_derivation_window_end: int, or None

How many timeUnits of each dataset’s record primary temporal key into the past relative to the datetimePartitionColumn the feature derivation window should end. Will be a non-positive integer, if present. If present, the feature engineering Graph will perform time-aware joins.

feature_derivation_window_time_unit: int or None

Time unit of the feature derivation window. Supported values are MILLISECOND, SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, QUARTER, YEAR. If present, time-aware joins will be used. Only applicable when dataset1Identifier is not provided.

feature_derivation_windows: list of dict, or None

List of feature derivation windows settings. If present, time-aware joins will be used. Only allowed when feature_derivation_window_start, feature_derivation_window_end and feature_derivation_window_time_unit are not provided.

prediction_point_rounding: int, or None

Closest value of prediction_point_rounding_time_unit to round the prediction point into the past when applying the feature derivation window. Will be a positive integer, if present. Only applicable when dataset1_identifier is not provided.

prediction_point_rounding_time_unit: str, or None

Time unit of the prediction point rounding. Supported values are MILLISECOND, SECOND, MINUTE, HOUR, DAY, WEEK, MONTH, QUARTER, YEAR. Only applicable when dataset1_identifier is not provided.

The `feature_derivation_windows` is a list of dictionary with schema:
start: int

How many time_units of each dataset’s primary temporal key into the past relative to the datetimePartitionColumn the feature derivation window should begin.

end: int

How many timeUnits of each dataset’s record primary temporal key into the past relative to the datetimePartitionColumn the feature derivation window should end.

unit: string

Time unit of the feature derivation window. One of datarobot.enums.AllowedTimeUnitsSAFER.

The `feature_discovery_settings` structure is:
name: str

Name of the feature discovery setting

value: bool

Value of the feature discovery setting

To see the list of possible settings, create a RelationshipsConfiguration without specifying settings and check its `feature_discovery_settings` attribute, which is a list of possible settings with their default values.
classmethod create(dataset_definitions, relationships, feature_discovery_settings=None)

Create a Relationships Configuration

Parameters:
dataset_definitions: list of dataset definitions

Each element is a datarobot.helpers.feature_discovery.DatasetDefinition

relationships: list of relationships

Each element is a datarobot.helpers.feature_discovery.Relationship

feature_discovery_settings : list of feature discovery settings, optional

Each element is a dictionary or a datarobot.helpers.feature_discovery.FeatureDiscoverySetting. If not provided, default settings will be used.

Returns:
relationships_configuration: RelationshipsConfiguration

Created relationships configuration

Examples

dataset_definition = dr.DatasetDefinition(
    identifier='profile',
    catalog_id='5fd06b4af24c641b68e4d88f',
    catalog_version_id='5fd06b4af24c641b68e4d88f'
)
relationship = dr.Relationship(
    dataset2_identifier='profile',
    dataset1_keys=['CustomerID'],
    dataset2_keys=['CustomerID'],
    feature_derivation_window_start=-14,
    feature_derivation_window_end=-1,
    feature_derivation_window_time_unit='DAY',
    prediction_point_rounding=1,
    prediction_point_rounding_time_unit='DAY'
)
dataset_definitions = [dataset_definition]
relationships = [relationship]
relationship_config = dr.RelationshipsConfiguration.create(
    dataset_definitions=dataset_definitions,
    relationships=relationships
)
>>> relationship_config.id
'5c88a37770fc42a2fcc62759'
get()

Retrieve the Relationships configuration for a given id

Returns:
relationships_configuration: RelationshipsConfiguration

The requested relationships configuration

Raises:
ClientError

Raised if an invalid relationships config id is provided.

Examples

relationships_config = dr.RelationshipsConfiguration(valid_config_id)
result = relationships_config.get()
>>> result.id
'5c88a37770fc42a2fcc62759'
replace(dataset_definitions, relationships, feature_discovery_settings=None)

Update a Relationships Configuration that is not used in a feature discovery project

Parameters:
dataset_definitions: list of dataset definition

Each element is a datarobot.helpers.feature_discovery.DatasetDefinition

relationships: list of relationships

Each element is a datarobot.helpers.feature_discovery.Relationship

feature_discovery_settings : list of feature discovery settings, optional

Each element is a dictionary or a datarobot.helpers.feature_discovery.FeatureDiscoverySetting. If not provided, default settings will be used.

Returns:
relationships_configuration: RelationshipsConfiguration

the updated relationships configuration
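
For example, an unused configuration can be updated in place like this (a minimal sketch; it reuses the dataset_definitions and relationships lists from the create() example above, and config_id is a placeholder):

relationships_config = dr.RelationshipsConfiguration(config_id)
updated_config = relationships_config.replace(
    dataset_definitions=dataset_definitions,
    relationships=relationships,
)
updated_config.relationships  # reflects the new definition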

delete()

Delete the Relationships configuration

Raises:
ClientError

Raised if an invalid relationships config id is provided.

Examples

# Deleting with a valid id
relationships_config = dr.RelationshipsConfiguration(valid_config_id)
status_code = relationships_config.delete()
status_code
>>> 204
relationships_config.get()
>>> ClientError: Relationships Configuration not found

Dataset Definition

class datarobot.helpers.feature_discovery.DatasetDefinition(identifier: str, catalog_id: Optional[str], catalog_version_id: str, snapshot_policy: str = 'latest', feature_list_id: Optional[str] = None, primary_temporal_key: Optional[str] = None)

Dataset definition for the Feature Discovery

New in version v2.25.

Examples

import datarobot as dr
dataset_definition = dr.DatasetDefinition(
    identifier='profile',
    catalog_id='5ec4aec1f072bc028e3471ae',
    catalog_version_id='5ec4aec2f072bc028e3471b1',
)

dataset_definition = dr.DatasetDefinition(
    identifier='transaction',
    catalog_id='5ec4aec1f072bc028e3471ae',
    catalog_version_id='5ec4aec2f072bc028e3471b1',
    primary_temporal_key='Date'
)
Attributes:
identifier: string

Alias of the dataset (used directly as part of the generated feature names)

catalog_id: string, optional

Identifier of the catalog item

catalog_version_id: string

Identifier of the catalog item version

primary_temporal_key: string, optional

Name of the column indicating time of record creation

feature_list_id: string, optional

Identifier of the feature list. This decides which columns in the dataset are used for feature generation

snapshot_policy: string, optional

Policy to use when creating a project or making predictions. If omitted, the endpoint defaults to ‘latest’. Must be one of the following values: ‘specified’ (use the specific snapshot specified by catalogVersionId), ‘latest’ (use the latest snapshot from the same catalog item), or ‘dynamic’ (get data from the source; only applicable for JDBC datasets).

Relationship

class datarobot.helpers.feature_discovery.Relationship(dataset2_identifier: str, dataset1_keys: List[str], dataset2_keys: List[str], dataset1_identifier: Optional[str] = None, feature_derivation_window_start: Optional[int] = None, feature_derivation_window_end: Optional[int] = None, feature_derivation_window_time_unit: Optional[int] = None, feature_derivation_windows: Optional[List[Dict[str, Union[int, str]]]] = None, prediction_point_rounding: Optional[int] = None, prediction_point_rounding_time_unit: Optional[str] = None)

Relationship between dataset defined in DatasetDefinition

New in version v2.25.

Examples

import datarobot as dr
relationship = dr.Relationship(
    dataset1_identifier='profile',
    dataset2_identifier='transaction',
    dataset1_keys=['CustomerID'],
    dataset2_keys=['CustomerID']
)

relationship = dr.Relationship(
    dataset2_identifier='profile',
    dataset1_keys=['CustomerID'],
    dataset2_keys=['CustomerID'],
    feature_derivation_window_start=-14,
    feature_derivation_window_end=-1,
    feature_derivation_window_time_unit='DAY',
    prediction_point_rounding=1,
    prediction_point_rounding_time_unit='DAY'
)
Attributes:
dataset1_identifier: string, optional

Identifier of the first dataset in this relationship. This is specified in the identifier field of dataset_definition structure. If None, then the relationship is with the primary dataset.

dataset2_identifier: string

Identifier of the second dataset in this relationship. This is specified in the identifier field of dataset_definition schema.

dataset1_keys: list of string (max length: 10 min length: 1)

Column(s) from the first dataset which are used to join to the second dataset

dataset2_keys: list of string (max length: 10 min length: 1)

Column(s) from the second dataset that are used to join to the first dataset

feature_derivation_window_start: int, or None

How many time_units of each dataset’s primary temporal key into the past relative to the datetimePartitionColumn the feature derivation window should begin. Will be a negative integer, if present. If present, the feature engineering Graph will perform time-aware joins.

feature_derivation_window_end: int, optional

How many timeUnits of each dataset’s record primary temporal key into the past relative to the datetimePartitionColumn the feature derivation window should end. Will be a non-positive integer, if present. If present, the feature engineering Graph will perform time-aware joins.

feature_derivation_window_time_unit: int, optional

Time unit of the feature derivation window. One of datarobot.enums.AllowedTimeUnitsSAFER. If present, time-aware joins will be used. Only applicable when dataset1_identifier is not provided.

feature_derivation_windows: list of dict, or None

List of feature derivation windows settings. If present, time-aware joins will be used. Only allowed when feature_derivation_window_start, feature_derivation_window_end and feature_derivation_window_time_unit are not provided.

prediction_point_rounding: int, optional

Closest value of prediction_point_rounding_time_unit to round the prediction point into the past when applying the feature derivation window. Will be a positive integer, if present. Only applicable when dataset1_identifier is not provided.

prediction_point_rounding_time_unit: string, optional

Time unit of the prediction point rounding. One of datarobot.enums.AllowedTimeUnitsSAFER. Only applicable when dataset1_identifier is not provided.

The `feature_derivation_windows` is a list of dictionary with schema:
start: int

How many time_units of each dataset’s primary temporal key into the past relative to the datetimePartitionColumn the feature derivation window should begin.

end: int

How many timeUnits of each dataset’s record primary temporal key into the past relative to the datetimePartitionColumn the feature derivation window should end.

unit: string

Time unit of the feature derivation window. One of datarobot.enums.AllowedTimeUnitsSAFER.

Feature Lineage

class datarobot.models.FeatureLineage(steps=None)

Lineage of an automatically engineered feature.

Attributes:
steps: list

list of steps which were applied to build the feature.

`steps` structure is:
id : int

step id starting with 0.

step_type: str

one of the data/action/json/generatedData.

name: str

name of the step.

description: str

description of the step.

parents: list[int]

references to other steps id.

is_time_aware: bool

Indicator of the step being time aware. Mandatory only for action and join steps. An action step provides additional information about the feature derivation window in the timeInfo field.

catalog_id: str

id of the catalog for a data step.

catalog_version_id: str

id of the catalog version for a data step.

group_by: list[str]

list of columns which this action step aggregated by.

columns: list

Names of the columns involved in the feature generation. Available only for data steps.

time_info: dict

description of the feature derivation window which was applied to this action step.

join_info: list[dict]

join step details.

`columns` structure is
data_type: str

the type of the feature, e.g. ‘Categorical’, ‘Text’

is_input: bool

indicates features which provided data to transform in this lineage.

name: str

feature name.

is_cutoff: bool

indicates a cutoff column.

`time_info` structure is:
latest: dict

end of the feature derivation window applied.

duration: dict

size of the feature derivation window applied.

`latest` and `duration` structure is:
time_unit: str

time unit name like ‘MINUTE’, ‘DAY’, ‘MONTH’ etc.

duration: int

value/size of this duration object.

`join_info` structure is:
join_type: str

kind of join, left/right.

left_table: dict

information about a dataset which was considered as left.

right_table: str

information about a dataset which was considered as right.

`left_table` and `right_table` structure is:
columns: list[str]

list of columns which datasets were joined by.

datasteps: list[int]

list of data steps id which brought the columns into the current step dataset.

classmethod get(project_id, id)

Retrieve a single FeatureLineage.

Parameters:
project_id : str

The id of the project the feature belongs to

id : str

id of a feature lineage to retrieve

Returns:
lineage : FeatureLineage

The queried instance
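
For example (a minimal sketch; project_id is a placeholder, and it assumes the lineage id was obtained elsewhere, e.g. from a feature generated by Feature Discovery):

import datarobot as dr

# retrieve the lineage of a derived feature and walk its steps
lineage = dr.models.FeatureLineage.get(project_id, lineage_id)
for step in lineage.steps:
    print(step['id'], step['step_type'], step['name'])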

Secondary Dataset Configurations

class datarobot.models.SecondaryDatasetConfigurations(id: str, project_id: str, config: Optional[List[DatasetConfiguration]] = None, secondary_datasets: Optional[List[SecondaryDataset]] = None, name: Optional[str] = None, creator_full_name: Optional[str] = None, creator_user_id: Optional[str] = None, created: Optional[datetime] = None, featurelist_id: Optional[str] = None, credential_ids: Optional[StoredCredentials] = None, is_default: Optional[bool] = None, project_version: Optional[str] = None)

Create secondary dataset configurations for a given project

New in version v2.20.

Attributes:
id : str

Id of this secondary dataset configuration

project_id : str

Id of the associated project.

config: list of DatasetConfiguration (Deprecated in version v2.23)

List of secondary dataset configurations

secondary_datasets: list of SecondaryDataset (new in v2.23)

List of secondary datasets (secondaryDataset)

name: str

Verbose name of the SecondaryDatasetConfig. null if it wasn’t specified.

created: datetime.datetime

DR-formatted datetime. null for legacy (before DR 6.0) db records.

creator_user_id: str

Id of the user who created this config.

creator_full_name: str

Full name or email of the user who created this config.

featurelist_id: str, optional

Id of the feature list. null if it wasn’t specified.

credential_ids: list of DatasetsCredentials, optional

credentials used by the secondary datasets if the datasets used in the configuration are from datasource

is_default: bool, optional

Boolean flag indicating whether this is the default config created during the feature discovery aim step

project_version: str, optional

Version of the project when it was created (release version)

classmethod create(project_id: str, secondary_datasets: List[datarobot.helpers.feature_discovery.SecondaryDataset], name: str, featurelist_id: Optional[str] = None) → datarobot.models.secondary_dataset.SecondaryDatasetConfigurations

Create secondary dataset configurations

New in version v2.20.

Parameters:
project_id : str

id of the associated project.

secondary_datasets: list of SecondaryDataset (New in version v2.23)

list of secondary datasets used by the configuration each element is a datarobot.helpers.feature_discovery.SecondaryDataset

name: str (New in version v2.23)

Name of the secondary datasets configuration

featurelist_id: str, or None (New in version v2.23)

Id of the featurelist

Returns:
an instance of SecondaryDatasetConfigurations
Raises:
ClientError

raised if incorrect configuration parameters are provided

Examples

profile_secondary_dataset = dr.SecondaryDataset(
    identifier='profile',
    catalog_id='5ec4aec1f072bc028e3471ae',
    catalog_version_id='5ec4aec2f072bc028e3471b1',
    snapshot_policy='latest'
)

transaction_secondary_dataset = dr.SecondaryDataset(
    identifier='transaction',
    catalog_id='5ec4aec268f0f30289a03901',
    catalog_version_id='5ec4aec268f0f30289a03900',
    snapshot_policy='latest'
)

secondary_datasets = [profile_secondary_dataset, transaction_secondary_dataset]
new_secondary_dataset_config = dr.SecondaryDatasetConfigurations.create(
    project_id=project.id,
    name='My config',
    secondary_datasets=secondary_datasets
)

>>> new_secondary_dataset_config.id
'5fd1e86c589238a4e635e93d'
delete() → None

Removes the Secondary datasets configuration

New in version v2.21.

Raises:
ClientError

Raised if an invalid or already deleted secondary dataset config id is provided

Examples

# Deleting with a valid secondary_dataset_config id
secondary_dataset_config = dr.SecondaryDatasetConfigurations(id=some_config_id)
secondary_dataset_config.delete()
get() → datarobot.models.secondary_dataset.SecondaryDatasetConfigurations

Retrieve a single secondary dataset configuration for a given id

New in version v2.21.

Returns:
secondary_dataset_configurations : SecondaryDatasetConfigurations

The requested secondary dataset configurations

Examples

config_id = '5fd1e86c589238a4e635e93d'
secondary_dataset_config = dr.SecondaryDatasetConfigurations(id=config_id).get()
>>> secondary_dataset_config
{
     'created': datetime.datetime(2020, 12, 9, 6, 16, 22, tzinfo=tzutc()),
     'creator_full_name': u'[email protected]',
     'creator_user_id': u'asdf4af1gf4bdsd2fba1de0a',
     'credential_ids': None,
     'featurelist_id': None,
     'id': u'5fd1e86c589238a4e635e93d',
     'is_default': True,
     'name': u'My config',
     'project_id': u'5fd06afce2456ec1e9d20457',
     'project_version': None,
     'secondary_datasets': [
            {
                'snapshot_policy': u'latest',
                'identifier': u'profile',
                'catalog_version_id': u'5fd06b4af24c641b68e4d88f',
                'catalog_id': u'5fd06b4af24c641b68e4d88e'
            },
            {
                'snapshot_policy': u'dynamic',
                'identifier': u'transaction',
                'catalog_version_id': u'5fd1e86c589238a4e635e98e',
                'catalog_id': u'5fd1e86c589238a4e635e98d'
            }
     ]
}
classmethod list(project_id: str, featurelist_id: Optional[str] = None, limit: Optional[int] = None, offset: Optional[int] = None) → List[datarobot.models.secondary_dataset.SecondaryDatasetConfigurations]

Returns list of secondary dataset configurations.

New in version v2.23.

Parameters:
project_id: str

The Id of the project

featurelist_id: str, optional

Id of the feature list to filter the secondary datasets configurations

Returns:
secondary_dataset_configurations : list of SecondaryDatasetConfigurations

The requested list of secondary dataset configurations for a given project

Examples

pid = '5fd06afce2456ec1e9d20457'
secondary_dataset_configs = dr.SecondaryDatasetConfigurations.list(pid)
>>> secondary_dataset_configs[0]
    {
         'created': datetime.datetime(2020, 12, 9, 6, 16, 22, tzinfo=tzutc()),
         'creator_full_name': u'[email protected]',
         'creator_user_id': u'asdf4af1gf4bdsd2fba1de0a',
         'credential_ids': None,
         'featurelist_id': None,
         'id': u'5fd1e86c589238a4e635e93d',
         'is_default': True,
         'name': u'My config',
         'project_id': u'5fd06afce2456ec1e9d20457',
         'project_version': None,
         'secondary_datasets': [
                {
                    'snapshot_policy': u'latest',
                    'identifier': u'profile',
                    'catalog_version_id': u'5fd06b4af24c641b68e4d88f',
                    'catalog_id': u'5fd06b4af24c641b68e4d88e'
                },
                {
                    'snapshot_policy': u'dynamic',
                    'identifier': u'transaction',
                    'catalog_version_id': u'5fd1e86c589238a4e635e98e',
                    'catalog_id': u'5fd1e86c589238a4e635e98d'
                }
         ]
    }

Secondary Dataset

class datarobot.helpers.feature_discovery.SecondaryDataset(identifier: str, catalog_id: str, catalog_version_id: str, snapshot_policy: str = 'latest')

A secondary dataset to be used for feature discovery

New in version v2.25.

Examples

import datarobot as dr
dataset_definition = dr.SecondaryDataset(
    identifier='profile',
    catalog_id='5ec4aec1f072bc028e3471ae',
    catalog_version_id='5ec4aec2f072bc028e3471b1',
)
Attributes:
identifier: string

Alias of the dataset (used directly as part of the generated feature names)

catalog_id: string

Identifier of the catalog item

catalog_version_id: string

Identifier of the catalog item version

snapshot_policy: string, optional

Policy to use when creating a project or making predictions. If omitted, the endpoint defaults to ‘latest’. Must be one of the following values: ‘specified’ (use the specific snapshot specified by catalogVersionId), ‘latest’ (use the latest snapshot from the same catalog item), or ‘dynamic’ (get data from the source; only applicable for JDBC datasets).

Feature Effects

class datarobot.models.FeatureEffects(project_id, model_id, source, feature_effects, backtest_index=None)

Feature Effects provides partial dependence and predicted vs actual values for the top 500 features, ordered by feature impact score.

The partial dependence shows the marginal effect of a feature on the target variable after accounting for the average effects of all other predictive features. It indicates how, holding all other variables except the feature of interest as they were, the value of this feature affects your prediction.

Notes

featureEffects is a dict containing the following:

  • feature_name (string) Name of the feature
  • feature_type (string) dr.enums.FEATURE_TYPE, Feature type either numeric, categorical or datetime
  • feature_impact_score (float) Feature impact score
  • weight_label (string) optional, Weight label if configured for the project else null
  • partial_dependence (List) Partial dependence results
  • predicted_vs_actual (List) optional, Predicted versus actual results, may be omitted if there are insufficient qualified samples
partial_dependence is a dict containing the following:
  • is_capped (bool) Indicates whether the data for computation is capped
  • data (List) partial dependence results in the following format
data is a list of dict containing the following:
  • label (string) Contains label for categorical and numeric features as string
  • dependence (float) Value of partial dependence
predicted_vs_actual is a dict containing the following:
  • is_capped (bool) Indicates whether the data for computation is capped
  • data (List) pred vs actual results in the following format
data is a list of dict containing the following:
  • label (string) Contains the label for categorical features; for numeric features, contains the range or numeric value.
  • bin (List) optional, For numeric features contains labels for left and right bin limits
  • predicted (float) Predicted value
  • actual (float) Actual value. Actual value is null for unsupervised timeseries models
  • row_count (int or float) Number of rows for the label and bin. Type is float if weight or exposure is set for the project.
Attributes:
project_id: string

The project that contains requested model

model_id: string

The model to retrieve Feature Effects for

source: string

The source to retrieve Feature Effects for

feature_effects: list

Feature Effects for every feature

backtest_index: string, required only for DatetimeModels,

The backtest index to retrieve Feature Effects for.

classmethod from_server_data(data, *args, **kwargs)

Instantiate an object of this class using the data directly from the server, meaning that the keys may have the wrong camel casing.

Parameters:
data : dict

The directly translated dict of JSON from the server. No casing fixes have taken place
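
To illustrate the structure described in the notes above, the nested results can be walked like this (a minimal sketch; it assumes feature_effects is a FeatureEffects instance retrieved through the model's Feature Effects API, which is documented elsewhere):

# iterate over per-feature results and their partial dependence points
for effect in feature_effects.feature_effects:
    print(effect['feature_name'], effect['feature_impact_score'])
    for point in effect['partial_dependence']['data']:
        print(point['label'], point['dependence'])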

class datarobot.models.FeatureEffectMetadata(status, sources)

Feature Effect Metadata for model, contains status and available model sources.

Notes

source is the expected parameter to retrieve Feature Effects. One of the provided sources must be used.

class datarobot.models.FeatureEffectMetadataDatetime(data)

Feature Effect Metadata for datetime model, contains list of feature effect metadata per backtest.

Notes

feature effect metadata per backtest contains:
  • status : string.
  • backtest_index : string.
  • sources : list(string).

source is the expected parameter to retrieve Feature Effects. One of the provided sources must be used.

backtest_index is the expected parameter to submit a compute request and retrieve Feature Effects. One of the provided backtest indexes must be used.

Attributes:
data : list[FeatureEffectMetadataDatetimePerBacktest]

List of feature effect metadata per backtest

class datarobot.models.FeatureEffectMetadataDatetimePerBacktest(ff_metadata_datetime_per_backtest)

Convert dictionary into feature effect metadata per backtest which contains backtest_index, status and sources.

Feature Fit

class datarobot.models.FeatureFit(project_id, model_id, source, feature_fit, backtest_index=None)

Feature Fit provides partial dependence and predicted vs actual values for the top 500 features, ordered by feature importance score.

The partial dependence shows the marginal effect of a feature on the target variable after accounting for the average effects of all other predictive features. It indicates how, holding all other variables except the feature of interest as they were, the value of this feature affects your prediction.

Notes

featureFit is a dict containing the following:

  • feature_name (string) Name of the feature
  • feature_type (string) dr.enums.FEATURE_TYPE, Feature type either numeric, categorical or datetime
  • feature_importance_score (float) Feature importance score
  • weight_label (string) optional, Weight label if configured for the project else null
  • partial_dependence (List) Partial dependence results
  • predicted_vs_actual (List) optional, Predicted versus actual results, may be omitted if there are insufficient qualified samples
partial_dependence is a dict containing the following:
  • is_capped (bool) Indicates whether the data for computation is capped
  • data (List) partial dependence results in the following format
data is a list of dict containing the following:
  • label (string) Contains label for categorical and numeric features as string
  • dependence (float) Value of partial dependence
predicted_vs_actual is a dict containing the following:
  • is_capped (bool) Indicates whether the data for computation is capped
  • data (List) pred vs actual results in the following format
data is a list of dict containing the following:
  • label (string) Contains the label for categorical features; for numeric features, contains the range or numeric value.
  • bin (List) optional, For numeric features contains labels for left and right bin limits
  • predicted (float) Predicted value
  • actual (float) Actual value. Actual value is null for unsupervised timeseries models
  • row_count (int or float) Number of rows for the label and bin. Type is float if weight or exposure is set for the project.
Attributes:
project_id: string

The project that contains requested model

model_id: string

The model to retrieve Feature Fit for

source: string

The source to retrieve Feature Fit for

feature_fit: list

Feature Fit data for every feature

backtest_index: string, required only for DatetimeModels,

The backtest index to retrieve Feature Fit for.

classmethod from_server_data(data, *args, **kwargs)

Instantiate an object of this class using the data directly from the server, meaning that the keys may have the wrong camel casing.

Parameters:
data : dict

The directly translated dict of JSON from the server. No casing fixes have taken place

class datarobot.models.FeatureFitMetadata(status, sources)

Feature Fit Metadata for model, contains status and available model sources.

Notes

source is the expected parameter to retrieve Feature Fit. One of the provided sources must be used.

class datarobot.models.FeatureFitMetadataDatetime(data)

Feature Fit Metadata for datetime model, contains list of feature fit metadata per backtest.

Notes

feature fit metadata per backtest contains:

  • status : string.
  • backtest_index : string.
  • sources : list(string).

source is the expected parameter to retrieve Feature Fit. One of the provided sources must be used.

backtest_index is the expected parameter to submit a compute request and retrieve Feature Fit. One of the provided backtest indexes must be used.

Attributes:
data : list[FeatureFitMetadataDatetimePerBacktest]

List of feature fit metadata per backtest

class datarobot.models.FeatureFitMetadataDatetimePerBacktest(ff_metadata_datetime_per_backtest)

Convert dictionary into feature fit metadata per backtest which contains backtest_index, status and sources.

Feature List

class datarobot.DatasetFeaturelist(id: Optional[str] = None, name: Optional[str] = None, features: Optional[List[str]] = None, dataset_id: Optional[str] = None, dataset_version_id: Optional[str] = None, creation_date: Optional[datetime.datetime] = None, created_by: Optional[str] = None, user_created: Optional[bool] = None, description: Optional[str] = None)

A set of features attached to a dataset in the AI Catalog

Attributes:
id : str

the id of the dataset featurelist

dataset_id : str

the id of the dataset the featurelist belongs to

dataset_version_id: str, optional

the version id of the dataset this featurelist belongs to

name : str

the name of the dataset featurelist

features : list of str

a list of the names of features included in this dataset featurelist

creation_date : datetime.datetime

when the featurelist was created

created_by : str

the user name of the user who created this featurelist

user_created : bool

whether the featurelist was created by a user or by DataRobot automation

description : str, optional

the description of the featurelist. Only present on DataRobot-created featurelists.

classmethod get(dataset_id: str, featurelist_id: str) → TDatasetFeaturelist

Retrieve a dataset featurelist

Parameters:
dataset_id : str

the id of the dataset the featurelist belongs to

featurelist_id : str

the id of the dataset featurelist to retrieve

Returns:
featurelist : DatasetFeaturelist

the specified featurelist

delete() → None

Delete a dataset featurelist

Featurelists configured into the dataset as a default featurelist cannot be deleted.

update(name: Optional[str] = None) → None

Update the name of an existing featurelist

Note that only user-created featurelists can be renamed, and that names must not conflict with names used by other featurelists.

Parameters:
name : str, optional

the new name for the featurelist
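
For example (a minimal sketch; dataset_id and featurelist_id are placeholders):

import datarobot as dr

# retrieve, rename, and finally delete a dataset featurelist
featurelist = dr.DatasetFeaturelist.get(dataset_id, featurelist_id)
featurelist.update(name='renamed featurelist')
featurelist.delete()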

class datarobot.models.Featurelist(id: Optional[str] = None, name: Optional[str] = None, features: Optional[List[str]] = None, project_id: Optional[str] = None, created: Optional[datetime.datetime] = None, is_user_created: Optional[bool] = None, num_models: Optional[int] = None, description: Optional[str] = None)

A set of features used in modeling

Attributes:
id : str

the id of the featurelist

name : str

the name of the featurelist

features : list of str

the names of all the Features in the featurelist

project_id : str

the project the featurelist belongs to

created : datetime.datetime

(New in version v2.13) when the featurelist was created

is_user_created : bool

(New in version v2.13) whether the featurelist was created by a user or by DataRobot automation

num_models : int

(New in version v2.13) the number of models currently using this featurelist. A model is considered to use a featurelist if it is used to train the model or as a monotonic constraint featurelist, or if the model is a blender with at least one component model using the featurelist.

description : str

(New in version v2.13) the description of the featurelist. Can be updated by the user and may be supplied by default for DataRobot-created featurelists.

classmethod from_data(data: ServerDataDictType) → TFeaturelist

Overrides the parent method to ensure description is always populated

Parameters:
data : dict

the data from the server, having gone through processing

classmethod get(project_id: str, featurelist_id: str) → TFeaturelist

Retrieve a known feature list

Parameters:
project_id : str

The id of the project the featurelist is associated with

featurelist_id : str

The ID of the featurelist to retrieve

Returns:
featurelist : Featurelist

The queried instance

Raises:
ValueError

Raised if the passed project_id parameter value is of an unsupported type

delete(dry_run: bool = False, delete_dependencies: bool = False) → DeleteFeatureListResult

Delete a featurelist, and any models and jobs using it

All models using a featurelist, whether as the training featurelist or as a monotonic constraint featurelist, will also be deleted when the deletion is executed and any queued or running jobs using it will be cancelled. Similarly, predictions made on these models will also be deleted. All the entities that are to be deleted with a featurelist are described as “dependencies” of it. To preview the results of deleting a featurelist, call delete with dry_run=True

When deleting a featurelist with dependencies, users must specify delete_dependencies=True to confirm they want to delete the featurelist and all its dependencies. Without that option, only featurelists with no dependencies may be successfully deleted and others will error.

Featurelists configured into the project as a default featurelist or as a default monotonic constraint featurelist cannot be deleted.

Featurelists used in a model deployment cannot be deleted until the model deployment is deleted.

Parameters:
dry_run : bool, optional

specify True to preview the result of deleting the featurelist, instead of actually deleting it.

delete_dependencies : bool, optional

specify True to successfully delete featurelists with dependencies; if left False by default, featurelists without dependencies can be successfully deleted and those with dependencies will error upon attempting to delete them.

Returns:
result : dict
A dictionary describing the result of deleting the featurelist, with the following keys
  • dry_run : bool, whether the deletion was a dry run or an actual deletion
  • can_delete : bool, whether the featurelist can actually be deleted
  • deletion_blocked_reason : str, why the featurelist can’t be deleted (if it can’t)
  • num_affected_models : int, the number of models using this featurelist
  • num_affected_jobs : int, the number of jobs using this featurelist
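
For example, a deletion can be previewed and then executed like this (a minimal sketch; project_id and featurelist_id are placeholders):

import datarobot as dr

featurelist = dr.models.Featurelist.get(project_id, featurelist_id)

# preview what the deletion would remove
result = featurelist.delete(dry_run=True)
result['num_affected_models'], result['num_affected_jobs']

# perform the deletion, confirming removal of dependencies
featurelist.delete(delete_dependencies=True)
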
classmethod from_server_data(data: Union[Dict[str, Any], List[Dict[str, Any]]], keep_attrs: Optional[Iterable[str]] = None) → T

Instantiate an object of this class using the data directly from the server, meaning that the keys may have the wrong camel casing

Parameters:
data : dict

The directly translated dict of JSON from the server. No casing fixes have taken place

keep_attrs : iterable

List, set or tuple of the dotted namespace notations for attributes to keep within the object structure even if their values are None

update(name: Optional[str] = None, description: Optional[str] = None) → None

Update the name or description of an existing featurelist

Note that only user-created featurelists can be renamed, and that names must not conflict with names used by other featurelists.

Parameters:
name : str, optional

the new name for the featurelist

description : str, optional

the new description for the featurelist
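
As a brief sketch of renaming a user-created featurelist (the IDs and new values below are placeholders):

from datarobot.models import Featurelist

flist = Featurelist.get('project-id', 'featurelist-id')  # placeholder ids
flist.update(name='informative features v2',
             description='Same features, minus two leaky columns')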

class datarobot.models.ModelingFeaturelist(id: Optional[str] = None, name: Optional[str] = None, features: Optional[List[str]] = None, project_id: Optional[str] = None, created: Optional[datetime.datetime] = None, is_user_created: Optional[bool] = None, num_models: Optional[int] = None, description: Optional[str] = None)

A set of features that can be used to build a model

In time series projects, a new set of modeling features is created after setting the partitioning options. These features are automatically derived from those in the project’s dataset and are the features used for modeling. Modeling features are only accessible once the target and partitioning options have been set. In projects that don’t use time series modeling, once the target has been set, ModelingFeaturelists and Featurelists will behave the same.

For more information about input and modeling features, see the time series documentation.

Attributes:
id : str

the id of the modeling featurelist

project_id : str

the id of the project the modeling featurelist belongs to

name : str

the name of the modeling featurelist

features : list of str

a list of the names of features included in this modeling featurelist

created : datetime.datetime

(New in version v2.13) when the featurelist was created

is_user_created : bool

(New in version v2.13) whether the featurelist was created by a user or by DataRobot automation

num_models : int

(New in version v2.13) the number of models currently using this featurelist. A model is considered to use a featurelist if it is used to train the model or as a monotonic constraint featurelist, or if the model is a blender with at least one component model using the featurelist.

description : str

(New in version v2.13) the description of the featurelist. Can be updated by the user and may be supplied by default for DataRobot-created featurelists.

classmethod get(project_id: str, featurelist_id: str) → TModelingFeaturelist

Retrieve a modeling featurelist

Modeling featurelists can only be retrieved once the target and partitioning options have been set.

Parameters:
project_id : str

the id of the project the modeling featurelist belongs to

featurelist_id : str

the id of the modeling featurelist to retrieve

Returns:
featurelist : ModelingFeaturelist

the specified featurelist

delete(dry_run: bool = False, delete_dependencies: bool = False) → DeleteFeatureListResult

Delete a featurelist, and any models and jobs using it

All models using a featurelist, whether as the training featurelist or as a monotonic constraint featurelist, will also be deleted when the deletion is executed and any queued or running jobs using it will be cancelled. Similarly, predictions made on these models will also be deleted. All the entities that are to be deleted with a featurelist are described as “dependencies” of it. To preview the results of deleting a featurelist, call delete with dry_run=True

When deleting a featurelist with dependencies, users must specify delete_dependencies=True to confirm they want to delete the featurelist and all its dependencies. Without that option, only featurelists with no dependencies may be successfully deleted and others will error.

Featurelists configured into the project as a default featurelist or as a default monotonic constraint featurelist cannot be deleted.

Featurelists used in a model deployment cannot be deleted until the model deployment is deleted.

Parameters:
dry_run : bool, optional

specify True to preview the result of deleting the featurelist, instead of actually deleting it.

delete_dependencies : bool, optional

specify True to delete featurelists that have dependencies; if left at the default of False, only featurelists without dependencies can be deleted, and attempting to delete one with dependencies will error.

Returns:
result : dict
A dictionary describing the result of deleting the featurelist, with the following keys
  • dry_run : bool, whether the deletion was a dry run or an actual deletion
  • can_delete : bool, whether the featurelist can actually be deleted
  • deletion_blocked_reason : str, why the featurelist can’t be deleted (if it can’t)
  • num_affected_models : int, the number of models using this featurelist
  • num_affected_jobs : int, the number of jobs using this featurelist
update(name: Optional[str] = None, description: Optional[str] = None) → None

Update the name or description of an existing featurelist

Note that only user-created featurelists can be renamed, and that names must not conflict with names used by other featurelists.

Parameters:
name : str, optional

the new name for the featurelist

description : str, optional

the new description for the featurelist

Restoring Discarded Features

class datarobot.models.restore_discarded_features.DiscardedFeaturesInfo(total_restore_limit: int, remaining_restore_limit: int, count: int, features: List[str])

An object containing information about time series features that were discarded during the time series feature generation process. These features can be restored to the project; once restored, they are included in All Time Series Features and can be used to create new feature lists.

New in version v2.27.

Attributes:
total_restore_limit : int

The total limit indicating how many features can be restored in this project.

remaining_restore_limit : int

The remaining available number of the features which can be restored in this project.

features : list of strings

Discarded features which can be restored.

count : int

Discarded features count.

classmethod restore(project_id: str, features_to_restore: List[str], max_wait: int = 600) → datarobot.models.restore_discarded_features.FeatureRestorationStatus

Restore features that were discarded during the time series feature generation process back to the project. After restoration, the features are included in All Time Series Features.

New in version v2.27.

Parameters:
project_id: string
features_to_restore: list of strings

List of the feature names to restore

max_wait: int, optional

max time to wait for features to be restored. Defaults to 10 min

Returns:
status: FeatureRestorationStatus

information about features which were restored and which were not.

classmethod retrieve(project_id: str) → datarobot.models.restore_discarded_features.DiscardedFeaturesInfo

Retrieve the discarded features information for a given project.

New in version v2.27.

Parameters:
project_id: string
Returns:
info: DiscardedFeaturesInfo

information about the features that were discarded during the feature generation process, along with the limits on how many features can be restored.
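
For illustration, a minimal sketch of inspecting and restoring discarded time series features; the project ID is a placeholder and the restore request is capped by the remaining limit described above.

from datarobot.models.restore_discarded_features import DiscardedFeaturesInfo

info = DiscardedFeaturesInfo.retrieve('project-id')  # placeholder id
if info.count and info.remaining_restore_limit:
    # restore no more features than the project still allows
    to_restore = info.features[:info.remaining_restore_limit]
    status = DiscardedFeaturesInfo.restore('project-id', to_restore, max_wait=600)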

Job

class datarobot.models.Job(data: Dict[str, Any], completed_resource_url: Optional[str] = None)

Tracks asynchronous work being done within a project

Attributes:
id : int

the id of the job

project_id : str

the id of the project the job belongs to

status : str

the status of the job - will be one of datarobot.enums.QUEUE_STATUS

job_type : str

what kind of work the job is doing - will be one of datarobot.enums.JOB_TYPE

is_blocked : bool

if true, the job is blocked (cannot be executed) until its dependencies are resolved

classmethod get(project_id: str, job_id: str) → datarobot.models.job.Job

Fetches one job.

Parameters:
project_id : str

The identifier of the project in which the job resides

job_id : str

The job id

Returns:
job : Job

The job

Raises:
AsyncFailureError

Querying this resource gave a status code other than 200 or 303

cancel()

Cancel this job. If this job has not finished running, it will be removed and canceled.

get_result(params=None)
Parameters:
params : dict or None

Query parameters to be added to request to get results.

For featureEffects and featureFit jobs, use the source parameter to specify the source; if omitted, it defaults to `training`.

Returns:
result : object
Return type depends on the job type:
  • for model jobs, a Model is returned
  • for predict jobs, a pandas.DataFrame (with predictions) is returned
  • for featureImpact jobs, a list of dicts by default (see with_metadata parameter of the FeatureImpactJob class and its get() method).
  • for primeRulesets jobs, a list of Rulesets
  • for primeModel jobs, a PrimeModel
  • for primeDownloadValidation jobs, a PrimeFile
  • for predictionExplanationInitialization jobs, a PredictionExplanationsInitialization
  • for predictionExplanations jobs, a PredictionExplanations
  • for featureEffects, a FeatureEffects
  • for featureFit, a FeatureFit
Raises:
JobNotFinished

If the job is not finished, the result is not available.

AsyncProcessUnsuccessfulError

If the job errored or was aborted

get_result_when_complete(max_wait=600, params=None)
Parameters:
max_wait : int, optional

How long to wait for the job to finish.

params : dict, optional

Query parameters to be added to request.

Returns:
result: object

Return type is the same as would be returned by Job.get_result.

Raises:
AsyncTimeoutError

If the job does not finish in time

AsyncProcessUnsuccessfulError

If the job errored or was aborted

refresh()

Update this object with the latest job data from the server.

wait_for_completion(max_wait: int = 600) → None

Waits for job to complete.

Parameters:
max_wait : int, optional

How long to wait for the job to finish.
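
As a minimal sketch of polling a queued job and collecting its result once it finishes (the IDs are placeholders; the result type depends on the job type, as described under get_result):

from datarobot.models import Job

job = Job.get('project-id', 'job-id')  # placeholder ids
print(job.job_type, job.status)
result = job.get_result_when_complete(max_wait=600)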

class datarobot.models.TrainingPredictionsJob(data, model_id, data_subset, **kwargs)
classmethod get(project_id, job_id, model_id=None, data_subset=None)

Fetches one training predictions job.

The resulting TrainingPredictions object will be annotated with model_id and data_subset.

Parameters:
project_id : str

The identifier of the project in which the job resides

job_id : str

The job id

model_id : str

The identifier of the model used for computing training predictions

data_subset : dr.enums.DATA_SUBSET, optional

Data subset used for computing training predictions

Returns:
job : TrainingPredictionsJob

The job

refresh()

Update this object with the latest job data from the server.

cancel()

Cancel this job. If this job has not finished running, it will be removed and canceled.

get_result(params=None)
Parameters:
params : dict or None

Query parameters to be added to request to get results.

For featureEffects and featureFit jobs, use the source parameter to specify the source; if omitted, it defaults to `training`.

Returns:
result : object
Return type depends on the job type:
  • for model jobs, a Model is returned
  • for predict jobs, a pandas.DataFrame (with predictions) is returned
  • for featureImpact jobs, a list of dicts by default (see with_metadata parameter of the FeatureImpactJob class and its get() method).
  • for primeRulesets jobs, a list of Rulesets
  • for primeModel jobs, a PrimeModel
  • for primeDownloadValidation jobs, a PrimeFile
  • for predictionExplanationInitialization jobs, a PredictionExplanationsInitialization
  • for predictionExplanations jobs, a PredictionExplanations
  • for featureEffects, a FeatureEffects
  • for featureFit, a FeatureFit
Raises:
JobNotFinished

If the job is not finished, the result is not available.

AsyncProcessUnsuccessfulError

If the job errored or was aborted

get_result_when_complete(max_wait=600, params=None)
Parameters:
max_wait : int, optional

How long to wait for the job to finish.

params : dict, optional

Query parameters to be added to request.

Returns:
result: object

Return type is the same as would be returned by Job.get_result.

Raises:
AsyncTimeoutError

If the job does not finish in time

AsyncProcessUnsuccessfulError

If the job errored or was aborted

wait_for_completion(max_wait: int = 600) → None

Waits for job to complete.

Parameters:
max_wait : int, optional

How long to wait for the job to finish.
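
For illustration, a sketch of fetching a training predictions job and waiting for its result; the IDs are placeholders and the data subset is just an example value.

from datarobot.enums import DATA_SUBSET
from datarobot.models import TrainingPredictionsJob

job = TrainingPredictionsJob.get('project-id', 'job-id',  # placeholder ids
                                 model_id='model-id',
                                 data_subset=DATA_SUBSET.ALL)
training_predictions = job.get_result_when_complete(max_wait=600)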

class datarobot.models.ShapMatrixJob(data: Dict[str, Any], model_id: Optional[str] = None, dataset_id: Optional[str] = None, **kwargs)
classmethod get(project_id: str, job_id: str, model_id: Optional[str] = None, dataset_id: Optional[str] = None) → datarobot.models.shap_matrix_job.ShapMatrixJob

Fetches one SHAP matrix job.

Parameters:
project_id : str

The identifier of the project in which the job resides

job_id : str

The job identifier

model_id : str

The identifier of the model used for computing prediction explanations

dataset_id : str

The identifier of the dataset against which prediction explanations should be computed

Returns:
job : ShapMatrixJob

The job

Raises:
AsyncFailureError

Querying this resource gave a status code other than 200 or 303

refresh() → None

Update this object with the latest job data from the server.

cancel()

Cancel this job. If this job has not finished running, it will be removed and canceled.

get_result(params=None)
Parameters:
params : dict or None

Query parameters to be added to request to get results.

For featureEffects and featureFit jobs, use the source parameter to specify the source; if omitted, it defaults to `training`.

Returns:
result : object
Return type depends on the job type:
  • for model jobs, a Model is returned
  • for predict jobs, a pandas.DataFrame (with predictions) is returned
  • for featureImpact jobs, a list of dicts by default (see with_metadata parameter of the FeatureImpactJob class and its get() method).
  • for primeRulesets jobs, a list of Rulesets
  • for primeModel jobs, a PrimeModel
  • for primeDownloadValidation jobs, a PrimeFile
  • for predictionExplanationInitialization jobs, a PredictionExplanationsInitialization
  • for predictionExplanations jobs, a PredictionExplanations
  • for featureEffects, a FeatureEffects
  • for featureFit, a FeatureFit
Raises:
JobNotFinished

If the job is not finished, the result is not available.

AsyncProcessUnsuccessfulError

If the job errored or was aborted

get_result_when_complete(max_wait=600, params=None)
Parameters:
max_wait : int, optional

How long to wait for the job to finish.

params : dict, optional

Query parameters to be added to request.

Returns:
result: object

Return type is the same as would be returned by Job.get_result.

Raises:
AsyncTimeoutError

If the job does not finish in time

AsyncProcessUnsuccessfulError

If the job errored or was aborted

wait_for_completion(max_wait: int = 600) → None

Waits for job to complete.

Parameters:
max_wait : int, optional

How long to wait for the job to finish.

class datarobot.models.FeatureImpactJob(data, completed_resource_url=None, with_metadata=False)

Custom Feature Impact job to handle different return value structures.

The original implementation returned just the data; the new one also includes some metadata.

In general, we aim to keep the number of Job classes low by using the job_type attribute to control any specific formatting; however, in this case, where we needed to support a new representation with the _same_ job_type, customizing the behavior of _make_result_from_location allowed us to achieve our ends without complicating the _make_result_from_json method.

classmethod get(project_id, job_id, with_metadata=False)

Fetches one job.

Parameters:
project_id : str

The identifier of the project in which the job resides

job_id : str

The job id

with_metadata : bool

To make this job return the metadata (i.e. the full object of the completed resource) set the with_metadata flag to True.

Returns:
job : Job

The job

Raises:
AsyncFailureError

Querying this resource gave a status code other than 200 or 303
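
A brief sketch of re-fetching a feature impact job with metadata enabled; the IDs are placeholders, and the initial job is assumed to come from Model.request_feature_impact (documented later in this section).

import datarobot as dr
from datarobot.models import FeatureImpactJob

model = dr.Model.get('project-id', 'model-id')  # placeholder ids
plain_job = model.request_feature_impact()      # returns a generic Job
fi_job = FeatureImpactJob.get('project-id', plain_job.id, with_metadata=True)
impact_with_metadata = fi_job.get_result_when_complete()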

cancel()

Cancel this job. If this job has not finished running, it will be removed and canceled.

get_result(params=None)
Parameters:
params : dict or None

Query parameters to be added to request to get results.

For featureEffects and featureFit jobs, use the source parameter to specify the source; if omitted, it defaults to `training`.

Returns:
result : object
Return type depends on the job type:
  • for model jobs, a Model is returned
  • for predict jobs, a pandas.DataFrame (with predictions) is returned
  • for featureImpact jobs, a list of dicts by default (see with_metadata parameter of the FeatureImpactJob class and its get() method).
  • for primeRulesets jobs, a list of Rulesets
  • for primeModel jobs, a PrimeModel
  • for primeDownloadValidation jobs, a PrimeFile
  • for predictionExplanationInitialization jobs, a PredictionExplanationsInitialization
  • for predictionExplanations jobs, a PredictionExplanations
  • for featureEffects, a FeatureEffects
  • for featureFit, a FeatureFit
Raises:
JobNotFinished

If the job is not finished, the result is not available.

AsyncProcessUnsuccessfulError

If the job errored or was aborted

get_result_when_complete(max_wait=600, params=None)
Parameters:
max_wait : int, optional

How long to wait for the job to finish.

params : dict, optional

Query parameters to be added to request.

Returns:
result: object

Return type is the same as would be returned by Job.get_result.

Raises:
AsyncTimeoutError

If the job does not finish in time

AsyncProcessUnsuccessfulError

If the job errored or was aborted

refresh()

Update this object with the latest job data from the server.

wait_for_completion(max_wait: int = 600) → None

Waits for job to complete.

Parameters:
max_wait : int, optional

How long to wait for the job to finish.

Lift Chart

class datarobot.models.lift_chart.LiftChart(source, bins, source_model_id, target_class)

Lift chart data for model.

Notes

LiftChartBin is a dict containing the following:

  • actual (float) Sum of actual target values in bin
  • predicted (float) Sum of predicted target values in bin
  • bin_weight (float) The weight of the bin. For weighted projects, it is the sum of the weights of the rows in the bin. For unweighted projects, it is the number of rows in the bin.
Attributes:
source : str

Lift chart data source. Can be ‘validation’, ‘crossValidation’ or ‘holdout’.

bins : list of dict

List of dicts with schema described as LiftChartBin above.

source_model_id : str

ID of the model this lift chart represents; in some cases, insights from the parent of a frozen model may be used

target_class : str, optional

For multiclass lift - target class for this lift chart data.
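
For illustration, a minimal sketch of retrieving validation lift chart data via Model.get_lift_chart (documented later in this section) and reading a few bins; the IDs are placeholders.

import datarobot as dr

model = dr.Model.get('project-id', 'model-id')    # placeholder ids
lift = model.get_lift_chart(source='validation')
for bin_ in lift.bins[:3]:
    print(bin_['actual'], bin_['predicted'], bin_['bin_weight'])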

Missing Values Report

class datarobot.models.missing_report.MissingValuesReport(missing_values_report: List[MissingReportPerFeatureDict])

Missing values report for a model; contains a list of reports per feature, sorted by missing count in descending order.

Notes

Report per feature contains:

  • feature : feature name.
  • type : feature type – ‘Numeric’ or ‘Categorical’.
  • missing_count : missing values count in training data.
  • missing_percentage : missing values percentage in training data.
  • tasks : list of information for each task that was applied to the feature.

task information contains:

  • id : the number of the task in the blueprint diagram.
  • name : task name.
  • descriptions : human readable aggregated information about how the task handles missing values. The following descriptions may be present: what value is imputed for missing values, whether the feature being missing is treated as a feature by the task, whether missing values are treated as infrequent values, whether infrequent values are treated as missing values, and whether missing values are ignored.
classmethod get(project_id: str, model_id: str) → datarobot.models.missing_report.MissingValuesReport

Retrieve a missing report.

Parameters:
project_id : str

The project’s id.

model_id : str

The model’s id.

Returns:
MissingValuesReport

The queried missing report.

Models

Model

class datarobot.models.Model(id=None, processes=None, featurelist_name=None, featurelist_id=None, project_id=None, sample_pct=None, training_row_count=None, training_duration=None, training_start_date=None, training_end_date=None, model_type=None, model_category=None, is_frozen=None, is_n_clusters_dynamically_determined=None, blueprint_id=None, metrics=None, project=None, monotonic_increasing_featurelist_id=None, monotonic_decreasing_featurelist_id=None, n_clusters=None, has_empty_clusters=None, supports_monotonic_constraints=None, is_starred=None, prediction_threshold=None, prediction_threshold_read_only=None, model_number=None, parent_model_id=None, use_project_settings=None, supports_composable_ml=None)

A model trained on a project’s dataset capable of making predictions

All durations are specified with a duration string such as those returned by the partitioning_methods.construct_duration_string helper method. Please see datetime partitioned project documentation for more information on duration strings.

Attributes:
id : str

the id of the model

project_id : str

the id of the project the model belongs to

processes : list of str

the processes used by the model

featurelist_name : str

the name of the featurelist used by the model

featurelist_id : str

the id of the featurelist used by the model

sample_pct : float or None

the percentage of the project dataset used in training the model. If the project uses datetime partitioning, the sample_pct will be None. See training_row_count, training_duration, and training_start_date and training_end_date instead.

training_row_count : int or None

the number of rows of the project dataset used in training the model. In a datetime partitioned project, if specified, defines the number of rows used to train the model and evaluate backtest scores; if unspecified, either training_duration or training_start_date and training_end_date was used to determine that instead.

training_duration : str or None

only present for models in datetime partitioned projects. If specified, a duration string specifying the duration spanned by the data used to train the model and evaluate backtest scores.

training_start_date : datetime or None

only present for frozen models in datetime partitioned projects. If specified, the start date of the data used to train the model.

training_end_date : datetime or None

only present for frozen models in datetime partitioned projects. If specified, the end date of the data used to train the model.

model_type : str

what model this is, e.g. ‘Nystroem Kernel SVM Regressor’

model_category : str

what kind of model this is - ‘prime’ for DataRobot Prime models, ‘blend’ for blender models, and ‘model’ for other models

is_frozen : bool

whether this model is a frozen model

is_n_clusters_dynamically_determined : bool

(New in version v2.27) optional, whether this model determines the number of clusters dynamically

blueprint_id : str

the id of the blueprint used in this model

metrics : dict

a mapping from each metric to the model’s scores for that metric

monotonic_increasing_featurelist_id : str

optional, the id of the featurelist that defines the set of features with a monotonically increasing relationship to the target. If None, no such constraints are enforced.

monotonic_decreasing_featurelist_id : str

optional, the id of the featurelist that defines the set of features with a monotonically decreasing relationship to the target. If None, no such constraints are enforced.

n_clusters : int

(New in version v2.27) optional, number of data clusters discovered by model

has_empty_clusters: bool

(New in version v2.27) optional, whether the clustering model produces empty clusters.

supports_monotonic_constraints : bool

optional, whether this model supports enforcing monotonic constraints

is_starred : bool

whether this model is marked as starred

prediction_threshold : float

for binary classification projects, the threshold used for predictions

prediction_threshold_read_only : bool

indicates whether modification of the prediction threshold is forbidden. Threshold modification is forbidden once a model has had a deployment created or predictions made via the dedicated prediction API.

model_number : integer

model number assigned to a model

parent_model_id : str or None

(New in version v2.20) the id of the model that tuning parameters are derived from

use_project_settings : bool or None

(New in version v2.20) Only present for models in datetime-partitioned projects. If True, indicates that the custom backtest partitioning settings specified by the user were used to train the model and evaluate backtest scores.

supports_composable_ml : bool or None

(New in version v2.26) whether this model is supported in the Composable ML.

classmethod get(project: str, model_id: str) → datarobot.models.model.Model

Retrieve a specific model.

Parameters:
project : str

The project’s id.

model_id : str

The model_id of the leaderboard item to retrieve.

Returns:
model : Model

The queried instance.

Raises:
ValueError

passed project parameter value is of not supported type

get_features_used() → List[str]

Query the server to determine which features were used.

Note that the data returned by this method is possibly different than the names of the features in the featurelist used by this model. This method will return the raw features that must be supplied in order for predictions to be generated on a new set of data. The featurelist, in contrast, would also include the names of derived features.

Returns:
features : list of str

The names of the features used in the model.

get_supported_capabilities()

Retrieves a summary of the capabilities supported by a model.

New in version v2.14.

Returns:
supportsBlending: bool

whether the model supports blending

supportsMonotonicConstraints: bool

whether the model supports monotonic constraints

hasWordCloud: bool

whether the model has word cloud data available

eligibleForPrime: bool

whether the model is eligible for Prime

hasParameters: bool

whether the model has parameters that can be retrieved

supportsCodeGeneration: bool

(New in version v2.18) whether the model supports code generation

supportsShap: bool

(New in version v2.18) True if the model supports the Shapley (SHAP) package, i.e., Shapley-based feature importance

supportsEarlyStopping: bool

(New in version v2.22) True if this is an early stopping tree-based model and number of trained iterations can be retrieved.

get_num_iterations_trained()

Retrieves the number of estimators trained by early-stopping tree-based models.

New in version v2.22.

Returns:
projectId: str

id of project containing the model

modelId: str

id of the model

data: array

list of numEstimatorsItem objects, one for each modeling stage.

numEstimatorsItem will be of the form:
stage: str

indicates the modeling stage (for multi-stage models); None for single-stage models

numIterations: int

the number of estimators or iterations trained by the model

delete() → None

Delete a model from the project’s leaderboard.

get_uri() → str
Returns:
url : str

Permanent static hyperlink to this model at leaderboard.

open_model_browser() → None

Opens model at project leaderboard in web browser. Note: If text-mode browsers are used, the calling process will block until the user exits the browser.

train(sample_pct: Optional[float] = None, featurelist_id: Optional[str] = None, scoring_type: Optional[str] = None, training_row_count: Optional[int] = None, monotonic_increasing_featurelist_id: Union[str, object, None] = <object object>, monotonic_decreasing_featurelist_id: Union[str, object, None] = <object object>) → str

Train the blueprint used in model on a particular featurelist or amount of data.

This method creates a new training job for worker and appends it to the end of the queue for this project. After the job has finished you can get the newly trained model by retrieving it from the project leaderboard, or by retrieving the result of the job.

Either sample_pct or training_row_count can be used to specify the amount of data to use, but not both. If neither are specified, a default of the maximum amount of data that can safely be used to train any blueprint without going into the validation data will be selected.

In smart-sampled projects, sample_pct and training_row_count are assumed to be in terms of rows of the minority class.

Note

For datetime partitioned projects, see train_datetime instead.

Parameters:
sample_pct : float, optional

The amount of data to use for training, as a percentage of the project dataset from 0 to 100.

featurelist_id : str, optional

The identifier of the featurelist to use. If not defined, the featurelist of this model is used.

scoring_type : str, optional

Either validation or crossValidation (also dr.SCORING_TYPE.validation or dr.SCORING_TYPE.cross_validation). validation is available for every partitioning type, and indicates that the default model validation should be used for the project. If the project uses a form of cross-validation partitioning, crossValidation can also be used to indicate that all of the available training/validation combinations should be used to evaluate the model.

training_row_count : int, optional

The number of rows to use to train the requested model.

monotonic_increasing_featurelist_id : str

(new in version 2.11) optional, the id of the featurelist that defines the set of features with a monotonically increasing relationship to the target. Passing None disables increasing monotonicity constraint. Default (dr.enums.MONOTONICITY_FEATURELIST_DEFAULT) is the one specified by the blueprint.

monotonic_decreasing_featurelist_id : str

(new in version 2.11) optional, the id of the featurelist that defines the set of features with a monotonically decreasing relationship to the target. Passing None disables decreasing monotonicity constraint. Default (dr.enums.MONOTONICITY_FEATURELIST_DEFAULT) is the one specified by the blueprint.

Returns:
model_job_id : str

id of created job, can be used as parameter to ModelJob.get method or wait_for_async_model_creation function

Examples

project = Project.get('project-id')
model = Model.get('project-id', 'model-id')
model_job_id = model.train(training_row_count=project.max_train_rows)

train_datetime(featurelist_id: Optional[str] = None, training_row_count: Optional[int] = None, training_duration: Optional[str] = None, time_window_sample_pct: Optional[int] = None, monotonic_increasing_featurelist_id: Optional[Union[str, object]] = <object object>, monotonic_decreasing_featurelist_id: Optional[Union[str, object]] = <object object>, use_project_settings: bool = False, sampling_method: Optional[str] = None) → ModelJob

Trains this model on a different featurelist or sample size.

Requires that this model is part of a datetime partitioned project; otherwise, an error will occur.

All durations should be specified with a duration string such as those returned by the partitioning_methods.construct_duration_string helper method. Please see datetime partitioned project documentation for more information on duration strings.

Parameters:
featurelist_id : str, optional

the featurelist to use to train the model. If not specified, the featurelist of this model is used.

training_row_count : int, optional

the number of rows of data that should be used to train the model. If specified, neither training_duration nor use_project_settings may be specified.

training_duration : str, optional

a duration string specifying what time range the data used to train the model should span. If specified, neither training_row_count nor use_project_settings may be specified.

use_project_settings : bool, optional

(New in version v2.20) defaults to False. If True, indicates that the custom backtest partitioning settings specified by the user will be used to train the model and evaluate backtest scores. If specified, neither training_row_count nor training_duration may be specified.

time_window_sample_pct : int, optional

may only be specified when the requested model is a time window (e.g. duration or start and end dates). An integer between 1 and 99 indicating the percentage to sample by within the window. The points kept are determined by a random uniform sample. If specified, training_duration must also be specified; otherwise, an error will occur.

sampling_method : str, optional

(New in version v2.23) defines the way training data is selected. Can be either random or latest. In combination with training_row_count defines how rows are selected from backtest (latest by default). When training data is defined using time range (training_duration or use_project_settings) this setting changes the way time_window_sample_pct is applied (random by default). Applicable to OTV projects only.

monotonic_increasing_featurelist_id : str, optional

(New in version v2.18) optional, the id of the featurelist that defines the set of features with a monotonically increasing relationship to the target. Passing None disables increasing monotonicity constraint. Default (dr.enums.MONOTONICITY_FEATURELIST_DEFAULT) is the one specified by the blueprint.

monotonic_decreasing_featurelist_id : str, optional

(New in version v2.18) optional, the id of the featurelist that defines the set of features with a monotonically decreasing relationship to the target. Passing None disables decreasing monotonicity constraint. Default (dr.enums.MONOTONICITY_FEATURELIST_DEFAULT) is the one specified by the blueprint.

Returns:
job : ModelJob

the created job to build the model
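
A minimal sketch of training this blueprint on a three-month window in a datetime partitioned project; the IDs are placeholders and 'P0Y3M0D' is only an example duration string.

import datarobot as dr

model = dr.Model.get('project-id', 'model-id')            # placeholder ids
job = model.train_datetime(training_duration='P0Y3M0D')   # example duration string
new_model = job.get_result_when_complete(max_wait=3600)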

retrain(sample_pct: Optional[float] = None, featurelist_id: Optional[str] = None, training_row_count: Optional[int] = None, n_clusters: Optional[int] = None) → ModelJob

Submit a job to the queue to retrain this model on a different sample size or featurelist.

Parameters:
sample_pct: float, optional

The sample size, as a percentage (1 to 100), to use in training. If this parameter is used then training_row_count should not be given.

featurelist_id : str, optional

The featurelist id

training_row_count : int, optional

The number of rows used to train the model. If this parameter is used, then sample_pct should not be given.

n_clusters: int, optional

(new in version 2.27) number of clusters to use in an unsupervised clustering model. This parameter is used only for unsupervised clustering models that do not determine the number of clusters automatically.

Returns:
job : ModelJob

The created job that is retraining the model
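
As a sketch, retraining the model on a different featurelist and row count; the IDs and row count are placeholders.

import datarobot as dr

model = dr.Model.get('project-id', 'model-id')              # placeholder ids
job = model.retrain(featurelist_id='other-featurelist-id',  # placeholder id
                    training_row_count=5000)
retrained_model = job.get_result_when_complete(max_wait=3600)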

request_predictions(dataset_id: Optional[str] = None, dataset: Optional[Dataset] = None, dataframe: Optional[pd.DataFrame] = None, file_path: Optional[str] = None, file: Optional[IOBase] = None, include_prediction_intervals: Optional[bool] = None, prediction_intervals_size: Optional[int] = None, forecast_point: Optional[datetime] = None, predictions_start_date: Optional[datetime] = None, predictions_end_date: Optional[datetime] = None, actual_value_column: Optional[str] = None, explanation_algorithm: Optional[str] = None, max_explanations: Optional[int] = None, max_ngram_explanations: Optional[Union[int, str]] = None) → PredictJob

Requests predictions against a previously uploaded dataset.

Parameters:
dataset_id : string, optional

The ID of the dataset to make predictions against (as uploaded from Project.upload_dataset)

dataset : Dataset, optional

The dataset to make predictions against (as uploaded from Project.upload_dataset)

dataframe : pd.DataFrame, optional

(New in v3.0) The dataframe to make predictions against

file_path : str, optional

(New in v3.0) Path to file to make predictions against

file : IOBase, optional

(New in v3.0) File to make predictions against

include_prediction_intervals : bool, optional

(New in v2.16) For time series projects only. Specifies whether prediction intervals should be calculated for this request. Defaults to True if prediction_intervals_size is specified, otherwise defaults to False.

prediction_intervals_size : int, optional

(New in v2.16) For time series projects only. Represents the percentile to use for the size of the prediction intervals. Defaults to 80 if include_prediction_intervals is True. Prediction intervals size must be between 1 and 100 (inclusive).

forecast_point : datetime.datetime or None, optional

(New in version v2.20) For time series projects only. This is the default point relative to which predictions will be generated, based on the forecast window of the project. See the time series prediction documentation for more information.

predictions_start_date : datetime.datetime or None, optional

(New in version v2.20) For time series projects only. The start date for bulk predictions. Note that this parameter is for generating historical predictions using the training data. This parameter should be provided in conjunction with predictions_end_date. Can’t be provided with the forecast_point parameter.

predictions_end_date : datetime.datetime or None, optional

(New in version v2.20) For time series projects only. The end date for bulk predictions, exclusive. Note that this parameter is for generating historical predictions using the training data. This parameter should be provided in conjunction with predictions_start_date. Can’t be provided with the forecast_point parameter.

actual_value_column : string, optional

(New in version v2.21) For time series unsupervised projects only. Actual value column can be used to calculate the classification metrics and insights on the prediction dataset. Can’t be provided with the forecast_point parameter.

explanation_algorithm : str, optional

(New in version v2.21) If set to ‘shap’, the response will include prediction explanations based on the SHAP explainer (SHapley Additive exPlanations). Defaults to null (no prediction explanations).

max_explanations : int, optional

(New in version v2.21) Specifies the maximum number of explanation values that should be returned for each row, ordered by absolute value, greatest to least. If null, no limit. In the case of ‘shap’: if the number of features is greater than the limit, the sum of remaining values will also be returned as shapRemainingTotal. Defaults to null. Cannot be set if explanation_algorithm is omitted.

max_ngram_explanations : int or str, optional

(New in version v2.29) Specifies the maximum number of text explanation values that should be returned. If set to all, text explanations will be computed and all ngram explanations will be returned. If set to a non-zero positive integer, text explanations will be computed and that number of ngram explanations, sorted in descending order, will be returned. By default, text explanations are not computed.

Returns:
job : PredictJob

The job computing the predictions
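
For illustration, a minimal sketch of scoring an uploaded dataset; the IDs and file path are placeholders, and the dataset is assumed to come from Project.upload_dataset as noted above.

import datarobot as dr

project = dr.Project.get('project-id')              # placeholder id
model = dr.Model.get('project-id', 'model-id')      # placeholder ids
dataset = project.upload_dataset('./to_score.csv')  # placeholder path
predict_job = model.request_predictions(dataset_id=dataset.id)
predictions = predict_job.get_result_when_complete()  # pandas.DataFrame of predictions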

get_feature_impact(with_metadata: bool = False)

Retrieve the computed Feature Impact results, a measure of the relevance of each feature in the model.

Feature Impact is computed for each column by creating new data with that column randomly permuted (but the others left unchanged), and seeing how the error metric score for the predictions is affected. The ‘impactUnnormalized’ is how much worse the error metric score is when making predictions on this modified data. The ‘impactNormalized’ is normalized so that the largest value is 1. In both cases, larger values indicate more important features.

If a feature is a redundant feature, i.e. once other features are considered it doesn’t contribute much in addition, the ‘redundantWith’ value is the name of feature that has the highest correlation with this feature. Note that redundancy detection is only available for jobs run after the addition of this feature. When retrieving data that predates this functionality, a NoRedundancyImpactAvailable warning will be used.

Elsewhere this technique is sometimes called ‘Permutation Importance’.

Requires that Feature Impact has already been computed with request_feature_impact.

Parameters:
with_metadata : bool

The flag indicating if the result should include the metadata as well.

Returns:
list or dict

The feature impact data response depends on the with_metadata parameter. The response is either a dict with metadata and a list with actual data or just a list with that data.

Each list item is a dict with the keys featureName, impactNormalized, impactUnnormalized, redundantWith, and count.

For dict response available keys are:

  • featureImpacts - Feature Impact data as a dictionary. Each item is a dict with
    keys: featureName, impactNormalized, and impactUnnormalized, and redundantWith.
  • shapBased - A boolean that indicates whether Feature Impact was calculated using
    Shapley values.
  • ranRedundancyDetection - A boolean that indicates whether redundant feature
    identification was run while calculating this Feature Impact.
  • rowCount - An integer or None that indicates the number of rows that was used to
    calculate Feature Impact. For the Feature Impact calculated with the default logic, without specifying the rowCount, we return None here.
  • count - An integer with the number of features under the featureImpacts.
Raises:
ClientError (404)

If the feature impacts have not been computed.

get_multiclass_feature_impact()

For multiclass it’s possible to calculate feature impact separately for each target class. The method for calculation is exactly the same, calculated in one-vs-all style for each target class.

Requires that Feature Impact has already been computed with request_feature_impact.

Returns:
feature_impacts : list of dict

The feature impact data. Each item is a dict with the keys ‘featureImpacts’ (list), ‘class’ (str). Each item in ‘featureImpacts’ is a dict with the keys ‘featureName’, ‘impactNormalized’, and ‘impactUnnormalized’, and ‘redundantWith’.

Raises:
ClientError (404)

If the multiclass feature impacts have not been computed.

request_feature_impact(row_count: Optional[int] = None, with_metadata: bool = False)

Request feature impacts to be computed for the model.

See get_feature_impact for more information on the result of the job.

Parameters:
row_count : int

The sample size (specified in rows) to use for Feature Impact computation. This is not supported for unsupervised, multiclass (which has a separate method), or time series projects.

Returns:
job : Job

A Job representing the feature impact computation. To get the completed feature impact data, use job.get_result or job.get_result_when_complete.

Raises:
JobAlreadyRequested (422)

If the feature impacts have already been requested.

request_external_test(dataset_id: str, actual_value_column: Optional[str] = None)

Request external test to compute scores and insights on an external test dataset

Parameters:
dataset_id : string

The dataset to make predictions against (as uploaded from Project.upload_dataset)

actual_value_column : string, optional

(New in version v2.21) For time series unsupervised projects only. Actual value column can be used to calculate the classification metrics and insights on the prediction dataset. Can’t be provided with the forecast_point parameter.

Returns:
job : Job

a Job representing external dataset insights computation

get_or_request_feature_impact(max_wait: int = 600, **kwargs)

Retrieve feature impact for the model, requesting a job if it hasn’t been run previously

Parameters:
max_wait : int, optional

The maximum time to wait for a requested feature impact job to complete before erroring

**kwargs

Arbitrary keyword arguments passed to request_feature_impact.

Returns:
feature_impacts : list or dict

The feature impact data. See get_feature_impact for the exact schema.
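
A brief sketch of the request-or-retrieve pattern, assuming the default (list) response shape described under get_feature_impact; the IDs are placeholders.

import datarobot as dr

model = dr.Model.get('project-id', 'model-id')  # placeholder ids
feature_impact = model.get_or_request_feature_impact(max_wait=600)
top = sorted(feature_impact, key=lambda fi: fi['impactNormalized'], reverse=True)[:5]
for fi in top:
    print(fi['featureName'], round(fi['impactNormalized'], 3))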

get_feature_effect_metadata()

Retrieve Feature Effects metadata. Response contains status and available model sources.

  • Feature Effects for the training partition is always available, with the exception of older projects that only supported Feature Effects for validation.
  • When a model is trained into validation or holdout without stacked predictions (i.e., no out-of-sample predictions in those partitions), Feature Effects is not available for validation or holdout.
  • Feature Effects for holdout is not available when holdout was not unlocked for the project.

Use source to retrieve Feature Effects, selecting one of the provided sources.

Returns:
feature_effect_metadata: FeatureEffectMetadata

get_feature_fit_metadata()

Retrieve Feature Fit metadata. Response contains status and available model sources.

  • Feature Fit for the training partition is always available, with the exception of older projects that only supported Feature Fit for validation.
  • When a model is trained into validation or holdout without stacked predictions (i.e., no out-of-sample predictions in those partitions), Feature Fit is not available for validation or holdout.
  • Feature Fit for holdout is not available when there is no holdout configured for the project.

Use source to retrieve Feature Fit, selecting one of the provided sources.

Returns:
feature_fit_metadata: FeatureFitMetadata

request_feature_effect(row_count: Optional[int] = None)

Request feature effects to be computed for the model.

See get_feature_effect for more information on the result of the job.

Parameters:
row_count : int

(New in version v2.21) The sample size to use for Feature Impact computation. Minimum is 10 rows. Maximum is 100000 rows or the training sample size of the model, whichever is less.

Returns:
job : Job

A Job representing the feature effect computation. To get the completed feature effect data, use job.get_result or job.get_result_when_complete.

Raises:
JobAlreadyRequested (422)

If the feature effect has already been requested.

request_feature_effects_multiclass(row_count: Optional[int] = None, top_n_features: Optional[int] = None, features=None)

Request Feature Effects computation for the multiclass model.

See get_feature_effect for more information on the result of the job.

Parameters:
row_count : int

The number of rows from dataset to use for Feature Impact calculation.

top_n_features : int or None

Number of top features (ranked by feature impact) used to calculate Feature Effects.

features : list or None

The list of features used to calculate Feature Effects.

Returns:
job : Job

A Job representing Feature Effect computation. To get the completed Feature Effect data, use job.get_result or job.get_result_when_complete.

get_feature_effect(source: str)

Retrieve Feature Effects for the model.

Feature Effects provides partial dependence and predicted vs actual values for top-500 features ordered by feature impact score.

The partial dependence shows marginal effect of a feature on the target variable after accounting for the average effects of all other predictive features. It indicates how, holding all other variables except the feature of interest as they were, the value of this feature affects your prediction.

Requires that Feature Effects has already been computed with request_feature_effect.

See get_feature_effect_metadata for retrieving information about the available sources.

Parameters:
source : string

The source Feature Effects are retrieved for.

Returns:
feature_effects : FeatureEffects

The feature effects data.

Raises:
ClientError (404)

If the feature effects have not been computed or source is not a valid value.

get_feature_effects_multiclass(source: str = 'training', class_: Optional[str] = None)

Retrieve Feature Effects for the multiclass model.

Feature Effects provide partial dependence and predicted vs actual values for top-500 features ordered by feature impact score.

The partial dependence shows marginal effect of a feature on the target variable after accounting for the average effects of all other predictive features. It indicates how, holding all other variables except the feature of interest as they were, the value of this feature affects your prediction.

Requires that Feature Effects has already been computed with request_feature_effect.

See get_feature_effect_metadata for retrieving information about the available sources.

Parameters:
source : str

The source Feature Effects are retrieved for.

class_ : str or None

The class name Feature Effects are retrieved for.

Returns:
list

The list of multiclass feature effects.

Raises:
ClientError (404)

If Feature Effects have not been computed or source is not a valid value.

get_or_request_feature_effects_multiclass(source, top_n_features=None, features=None, row_count=None, class_=None, max_wait=600)

Retrieve Feature Effects for the multiclass model, requesting a job if it hasn’t been run previously.

Parameters:
source : string

The source Feature Effects are retrieved for.

class_ : str or None

The class name Feature Effects are retrieved for.

row_count : int

The number of rows from dataset to use for Feature Impact calculation.

top_n_features : int or None

Number of top features (ranked by Feature Impact) used to calculate Feature Effects.

features : list or None

The list of features used to calculate Feature Effects.

max_wait : int, optional

The maximum time to wait for a requested Feature Effects job to complete before erroring.

Returns:
feature_effects : list of FeatureEffectsMulticlass

The list of multiclass feature effects data.

get_or_request_feature_effect(source: str, max_wait: int = 600, row_count: Optional[int] = None)

Retrieve feature effect for the model, requesting a job if it hasn’t been run previously

See get_feature_effect_metadata for retrieving information about the available sources.

Parameters:
max_wait : int, optional

The maximum time to wait for a requested feature effect job to complete before erroring

row_count : int, optional

(New in version v2.21) The sample size to use for Feature Impact computation. Minimum is 10 rows. Maximum is 100000 rows or the training sample size of the model, whichever is less.

source : string

The source Feature Effects are retrieved for.

Returns:
feature_effects : FeatureEffects

The feature effects data.
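
For illustration, a sketch that checks which sources are available before requesting Feature Effects; the IDs are placeholders and the metadata object is assumed to expose the available sources, as described under get_feature_effect_metadata.

import datarobot as dr

model = dr.Model.get('project-id', 'model-id')  # placeholder ids
metadata = model.get_feature_effect_metadata()
source = metadata.sources[0]                    # pick any available source
feature_effects = model.get_or_request_feature_effect(source=source, max_wait=600)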

request_feature_fit()

Request feature fit to be computed for the model.

See get_feature_fit for more information on the result of the job.

Returns:
job : Job

A Job representing the feature fit computation. To get the completed feature fit data, use job.get_result or job.get_result_when_complete.

Raises:
JobAlreadyRequested (422)

If the feature fit has already been requested.

get_feature_fit(source: str)

Retrieve Feature Fit for the model.

Feature Fit provides partial dependence and predicted vs actual values for top-500 features ordered by feature importance score.

The partial dependence shows marginal effect of a feature on the target variable after accounting for the average effects of all other predictive features. It indicates how, holding all other variables except the feature of interest as they were, the value of this feature affects your prediction.

Requires that Feature Fit has already been computed with request_feature_fit.

See get_feature_fit_metadata for retrieving information about the available sources.

Parameters:
source : string

The source Feature Fit is retrieved for. One of the values in FeatureFitMetadata.sources.

Returns:
feature_fit : FeatureFit

The feature fit data.

Raises:
ClientError (404)

If the feature fit has not been computed or source is not a valid value.

get_or_request_feature_fit(source: str, max_wait: int = 600)

Retrieve feature fit for the model, requesting a job if it hasn’t been run previously

See get_feature_fit_metadata for retrieving information about the available sources.

Parameters:
max_wait : int, optional

The maximum time to wait for a requested feature fit job to complete before erroring

source : string

The source Feature Fit is retrieved for. One of the values in FeatureFitMetadata.sources.

Returns:
feature_fit : FeatureFit

The feature fit data.

get_prime_eligibility()

Check if this model can be approximated with DataRobot Prime

Returns:
prime_eligibility : dict

a dict indicating whether a model can be approximated with DataRobot Prime (key can_make_prime) and why it may be ineligible (key message)

request_approximation()

Request an approximation of this model using DataRobot Prime

This will create several rulesets that could be used to approximate this model. After comparing their scores and rule counts, the code used in the approximation can be downloaded and run locally.

Returns:
job : Job

the job generating the rulesets

get_rulesets() → List[datarobot.models.ruleset.Ruleset]

List the rulesets approximating this model generated by DataRobot Prime

If this model hasn’t been approximated yet, will return an empty list. Note that these are rulesets approximating this model, not rulesets used to construct this model.

Returns:
rulesets : list of Ruleset
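
A minimal sketch of approximating a model with DataRobot Prime, assuming the eligibility dict exposes the can_make_prime key described above; the IDs are placeholders.

import datarobot as dr

model = dr.Model.get('project-id', 'model-id')  # placeholder ids
if model.get_prime_eligibility().get('can_make_prime'):
    job = model.request_approximation()
    rulesets = job.get_result_when_complete()   # list of Ruleset
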
download_export(filepath: str) → None

Download an exportable model file for use in an on-premise DataRobot standalone prediction environment.

This function can only be used if model export is enabled, and will only be useful if you have an on-premise environment in which to import it.

Parameters:
filepath : str

The path at which to save the exported model file.

request_transferable_export(prediction_intervals_size: Optional[int] = None) → Job

Request generation of an exportable model file for use in an on-premise DataRobot standalone prediction environment.

This function can only be used if model export is enabled, and will only be useful if you have an on-premise environment in which to import it.

This function does not download the exported file. Use download_export for that.

Parameters:
prediction_intervals_size : int, optional

(New in v2.19) For time series projects only. Represents the percentile to use for the size of the prediction intervals. Prediction intervals size must be between 1 and 100 (inclusive).

Returns:
Job

Examples

model = datarobot.Model.get('project-id', 'model-id')
job = model.request_transferable_export()
job.wait_for_completion()
model.download_export('my_exported_model.drmodel')

# Client must be configured to use standalone prediction server for import:
datarobot.Client(token='my-token-at-standalone-server',
                 endpoint='standalone-server-url/api/v2')

imported_model = datarobot.ImportedModel.create('my_exported_model.drmodel')
request_frozen_model(sample_pct: Optional[float] = None, training_row_count: Optional[int] = None) → ModelJob

Train a new frozen model with parameters from this model

Note

This method only works if the project the model belongs to is not datetime partitioned. If it is, use request_frozen_datetime_model instead.

Frozen models use the same tuning parameters as their parent model instead of independently optimizing them to allow efficiently retraining models on larger amounts of the training data.

Parameters:
sample_pct : float

optional, the percentage of the dataset to use with the model. If not provided, will use the value from this model.

training_row_count : int

(New in version v2.9) optional, the integer number of rows of the dataset to use with the model. Only one of sample_pct and training_row_count should be specified.

Returns:
model_job : ModelJob

the modeling job training a frozen model

request_frozen_datetime_model(training_row_count: Optional[int] = None, training_duration: Optional[str] = None, training_start_date: Optional[datetime] = None, training_end_date: Optional[datetime] = None, time_window_sample_pct: Optional[int] = None, sampling_method: Optional[str] = None) → ModelJob

Train a new frozen model with parameters from this model.

Requires that this model belongs to a datetime partitioned project. If it does not, an error will occur when submitting the job.

Frozen models use the same tuning parameters as their parent model instead of independently optimizing them to allow efficiently retraining models on larger amounts of the training data.

In addition to training_row_count and training_duration, frozen datetime models may be trained on an exact date range. Only one of training_row_count, training_duration, or training_start_date and training_end_date should be specified.

Models specified using training_start_date and training_end_date are the only ones that can be trained into the holdout data (once the holdout is unlocked).

All durations should be specified with a duration string such as those returned by the partitioning_methods.construct_duration_string helper method. Please see datetime partitioned project documentation for more information on duration strings.

Parameters:
training_row_count : int, optional

the number of rows of data that should be used to train the model. If specified, training_duration may not be specified.

training_duration : str, optional

a duration string specifying what time range the data used to train the model should span. If specified, training_row_count may not be specified.

training_start_date : datetime.datetime, optional

the start date of the data to train to model on. Only rows occurring at or after this datetime will be used. If training_start_date is specified, training_end_date must also be specified.

training_end_date : datetime.datetime, optional

the end date of the data to train the model on. Only rows occurring strictly before this datetime will be used. If training_end_date is specified, training_start_date must also be specified.

time_window_sample_pct : int, optional

may only be specified when the requested model is a time window (e.g. duration or start and end dates). An integer between 1 and 99 indicating the percentage to sample by within the window. The points kept are determined by a random uniform sample. If specified, training_duration must also be specified; otherwise an error will occur.

sampling_method : str, optional

(New in version v2.23) defines the way training data is selected. Can be either random or latest. In combination with training_row_count defines how rows are selected from backtest (latest by default). When training data is defined using time range (training_duration or use_project_settings) this setting changes the way time_window_sample_pct is applied (random by default). Applicable to OTV projects only.

Returns:
model_job : ModelJob

the modeling job training a frozen model
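
For example, a frozen model spanning a three-month training window might be requested as sketched below; the duration helper is the construct_duration_string method referenced above, and model is assumed to come from a datetime partitioned project.

from datarobot.helpers.partitioning_methods import construct_duration_string

# request a frozen copy trained on the most recent three months of data
duration = construct_duration_string(months=3)
job = model.request_frozen_datetime_model(training_duration=duration)
frozen_model = job.get_result_when_complete()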

get_parameters()

Retrieve model parameters.

Returns:
ModelParameters

Model parameters for this model.

get_lift_chart(source, fallback_to_parent_insights=False)

Retrieve the model Lift chart for the specified source.

Parameters:
source : str

Lift chart data source. Check datarobot.enums.CHART_DATA_SOURCE for possible values. (New in version v2.23) For time series and OTV models, also accepts values backtest_2, backtest_3, …, up to the number of backtests in the model.

fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return lift chart data for this model’s parent if the lift chart is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return insight data from this model’s parent.

Returns:
LiftChart

Model lift chart data

Raises:
ClientError

If the insight is not available for this model
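
A minimal sketch of fetching the validation Lift chart; it assumes the bins attribute documented for LiftChart, and the IDs are placeholders.

import datarobot as dr

model = dr.Model.get('project-id', 'model-id')
lift_chart = model.get_lift_chart(dr.enums.CHART_DATA_SOURCE.VALIDATION)
# each bin summarizes actual vs. predicted values for a slice of sorted rows
for bin_data in lift_chart.bins[:3]:
    print(bin_data['actual'], bin_data['predicted'], bin_data['bin_weight'])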

get_all_lift_charts(fallback_to_parent_insights=False)

Retrieve a list of all Lift charts available for the model.

Parameters:
fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return lift chart data for this model’s parent for any source that is not available for this model and if this model has a defined parent model. If omitted or False, or this model has no parent, this will not attempt to retrieve any data from this model’s parent.

Returns:
list of LiftChart

Data for all available model lift charts.

Raises:
ClientError

If the insight is not available for this model

get_multiclass_lift_chart(source, fallback_to_parent_insights=False)

Retrieve model Lift chart for the specified source.

Parameters:
source : str

Lift chart data source. Check datarobot.enums.CHART_DATA_SOURCE for possible values.

fallback_to_parent_insights : bool

Optional, if True, this will return lift chart data for this model’s parent if the lift chart is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return insight data from this model’s parent.

Returns:
list of LiftChart

Model lift chart data for each saved target class

Raises:
ClientError

If the insight is not available for this model

get_all_multiclass_lift_charts(fallback_to_parent_insights=False)

Retrieve a list of all Lift charts available for the model.

Parameters:
fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return lift chart data for this model’s parent for any source that is not available for this model and if this model has a defined parent model. If omitted or False, or this model has no parent, this will not attempt to retrieve any data from this model’s parent.

Returns:
list of LiftChart

Data for all available model lift charts.

Raises:
ClientError

If the insight is not available for this model

get_multilabel_lift_charts(source, fallback_to_parent_insights=False)

Retrieve model Lift charts for the specified source.

New in version v2.24.

Parameters:
source : str

Lift chart data source. Check datarobot.enums.CHART_DATA_SOURCE for possible values.

fallback_to_parent_insights : bool

Optional, if True, this will return lift chart data for this model’s parent if the lift chart is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return insight data from this model’s parent.

Returns:
list of LiftChart

Model lift chart data for each saved target class

Raises:
ClientError

If the insight is not available for this model

get_residuals_chart(source, fallback_to_parent_insights=False)

Retrieve model residuals chart for the specified source.

Parameters:
source : str

Residuals chart data source. Check datarobot.enums.CHART_DATA_SOURCE for possible values.

fallback_to_parent_insights : bool

Optional, if True, this will return residuals chart data for this model’s parent if the residuals chart is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return residuals data from this model’s parent.

Returns:
ResidualsChart

Model residuals chart data

Raises:
ClientError

If the insight is not available for this model

get_all_residuals_charts(fallback_to_parent_insights=False)

Retrieve a list of all residuals charts available for the model.

Parameters:
fallback_to_parent_insights : bool

Optional, if True, this will return residuals chart data for this model’s parent for any source that is not available for this model and if this model has a defined parent model. If omitted or False, or this model has no parent, this will not attempt to retrieve any data from this model’s parent.

Returns:
list of ResidualsChart

Data for all available model residuals charts.

get_pareto_front()

Retrieve the Pareto Front for a Eureqa model.

This method is only supported for Eureqa models.

Returns:
ParetoFront

Model ParetoFront data

get_confusion_chart(source, fallback_to_parent_insights=False)

Retrieve the model’s confusion matrix for the specified source.

Parameters:
source : str

Confusion chart source. Check datarobot.enums.CHART_DATA_SOURCE for possible values.

fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return confusion chart data for this model’s parent if the confusion chart is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return insight data from this model’s parent.

Returns:
ConfusionChart

Model ConfusionChart data

Raises:
ClientError

If the insight is not available for this model

get_all_confusion_charts(fallback_to_parent_insights=False)

Retrieve a list of all confusion matrices available for the model.

Parameters:
fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return confusion chart data for this model’s parent for any source that is not available for this model and if this model has a defined parent model. If omitted or False, or this model has no parent, this will not attempt to retrieve any data from this model’s parent.

Returns:
list of ConfusionChart

Data for all available confusion charts for model.

get_roc_curve(source, fallback_to_parent_insights=False)

Retrieve the ROC curve for a binary model for the specified source.

Parameters:
source : str

ROC curve data source. Check datarobot.enums.CHART_DATA_SOURCE for possible values. (New in version v2.23) For time series and OTV models, also accepts values backtest_2, backtest_3, …, up to the number of backtests in the model.

fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return ROC curve data for this model’s parent if the ROC curve is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return data from this model’s parent.

Returns:
RocCurve

Model ROC curve data

Raises:
ClientError

If the insight is not available for this model

(New in version v3.0) TypeError

If the underlying project type is multilabel
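
A short sketch of retrieving the validation ROC curve for a binary model; it assumes the roc_points attribute exposed by RocCurve, and the IDs are placeholders.

import datarobot as dr

model = dr.Model.get('project-id', 'model-id')
roc = model.get_roc_curve(dr.enums.CHART_DATA_SOURCE.VALIDATION)
# roc_points holds one entry per candidate threshold
print(len(roc.roc_points))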

get_all_roc_curves(fallback_to_parent_insights=False)

Retrieve a list of all ROC curves available for the model.

Parameters:
fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return ROC curve data for this model’s parent for any source that is not available for this model and if this model has a defined parent model. If omitted or False, or this model has no parent, this will not attempt to retrieve any data from this model’s parent.

Returns:
list of RocCurve

Data for all available model ROC curves.

get_labelwise_roc_curves(source, fallback_to_parent_insights=False)

Retrieve a list of LabelwiseRocCurve instances for a multilabel model for the given source and all labels.

New in version v2.24.

Parameters:
source : str

ROC curve data source. Check datarobot.enums.CHART_DATA_SOURCE for possible values.

fallback_to_parent_insights : bool

Optional, if True, this will return ROC curve data for this model’s parent if the ROC curve is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return data from this model’s parent.

Returns:
list of LabelwiseRocCurve

Labelwise ROC Curve instances for source and all labels

Raises:
ClientError

If the insight is not available for this model

(New in version v3.0) TypeError

If the underlying project type is binary

get_word_cloud(exclude_stop_words=False)

Retrieve word cloud data for the model.

Parameters:
exclude_stop_words : bool, optional

Set to True if you want stopwords filtered out of response.

Returns:
WordCloud

Word cloud data for the model.
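
A minimal sketch for text-based models; the most_frequent helper on WordCloud is an assumption here, and the IDs are placeholders.

import datarobot as dr

model = dr.Model.get('project-id', 'model-id')
word_cloud = model.get_word_cloud(exclude_stop_words=True)
# inspect the most frequent ngrams kept after stopword filtering
print(word_cloud.most_frequent(top_n=5))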

download_scoring_code(file_name, source_code=False)

Download the Scoring Code JAR.

Parameters:
file_name : str

File path where scoring code will be saved.

source_code : bool, optional

Set to True to download source code archive. It will not be executable.

get_model_blueprint_documents()

Get documentation for tasks used in this model.

Returns:
list of BlueprintTaskDocument

All documents available for the model.

get_model_blueprint_chart()

Retrieve a diagram that can be used to understand data flow in the blueprint.

Returns:
ModelBlueprintChart

The queried model blueprint chart.

get_missing_report_info()

Retrieve a report on missing training data that can be used to understand missing values treatment in the model. The report consists of missing values resolutions for numeric or categorical features that were part of building the model.

Returns:
An iterable of MissingReportPerFeature

The queried model missing report, sorted by missing count (DESCENDING order).

get_frozen_child_models()

Retrieve the IDs for all models that are frozen from this model.

Returns:
A list of Models
request_training_predictions(data_subset, explanation_algorithm=None, max_explanations=None)

Start a job to build training predictions

Parameters:
data_subset : str

data set definition to build predictions on. Choices are:

  • dr.enums.DATA_SUBSET.ALL or string all for all data available. Not valid for
    models in datetime partitioned projects
  • dr.enums.DATA_SUBSET.VALIDATION_AND_HOLDOUT or string validationAndHoldout for
    all data except training set. Not valid for models in datetime partitioned projects
  • dr.enums.DATA_SUBSET.HOLDOUT or string holdout for holdout data set only
  • dr.enums.DATA_SUBSET.ALL_BACKTESTS or string allBacktests for downloading
    the predictions for all backtest validation folds. Requires the model to have successfully scored all backtests. Datetime partitioned projects only.
explanation_algorithm : dr.enums.EXPLANATIONS_ALGORITHM

(New in v2.21) Optional. If set to dr.enums.EXPLANATIONS_ALGORITHM.SHAP, the response will include prediction explanations based on the SHAP explainer (SHapley Additive exPlanations). Defaults to None (no prediction explanations).

max_explanations : int

(New in v2.21) Optional. Specifies the maximum number of explanation values that should be returned for each row, ordered by absolute value, greatest to least. In the case of dr.enums.EXPLANATIONS_ALGORITHM.SHAP: If not set, explanations are returned for all features. If the number of features is greater than the max_explanations, the sum of remaining values will also be returned as shap_remaining_total. Max 100. Defaults to null for datasets narrower than 100 columns, defaults to 100 for datasets wider than 100 columns. Is ignored if explanation_algorithm is not set.

Returns:
Job

an instance of created async job
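
A sketch of requesting holdout training predictions and loading them as a DataFrame; the get_all_as_dataframe call on the resulting TrainingPredictions object is assumed, and the IDs are placeholders.

import datarobot as dr

model = dr.Model.get('project-id', 'model-id')
job = model.request_training_predictions(dr.enums.DATA_SUBSET.HOLDOUT)
training_predictions = job.get_result_when_complete()
predictions_df = training_predictions.get_all_as_dataframe()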

cross_validate()

Run cross validation on the model.

Note

To perform Cross Validation on a new model with new parameters, use train instead.

Returns:
ModelJob

The created job to build the model

get_cross_validation_scores(partition=None, metric=None)

Return a dictionary, keyed by metric, showing cross validation scores per partition.

Cross Validation should already have been performed using cross_validate or train.

Note

Models that computed cross validation before this feature was added will need to be deleted and retrained before this method can be used.

Parameters:
partition : float

optional, the id of the partition (1, 2, 3.0, 4.0, etc.) to filter results by; can be a positive whole number given as an integer or float value. 0 corresponds to the validation partition.

metric: unicode

optional, the name of the metric to filter the resulting cross validation scores by

Returns:
cross_validation_scores: dict

A dictionary keyed by metric showing cross validation scores per partition.
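
As a sketch, cross validation can be run and its scores inspected as follows; the 'AUC' metric and the IDs are placeholders, and 'AUC' assumes a binary classification project.

import datarobot as dr

model = dr.Model.get('project-id', 'model-id')
cv_job = model.cross_validate()
cv_job.wait_for_completion()
# scores are keyed by metric, then by partition
scores = model.get_cross_validation_scores(metric='AUC')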

advanced_tune(params, description: Optional[str] = None) → ModelJob

Generate a new model with the specified advanced-tuning parameters

As of v2.17, all models other than blenders, open source, prime, baseline and user-created support Advanced Tuning.

Parameters:
params : dict

Mapping of parameter ID to parameter value. The list of valid parameter IDs for a model can be found by calling get_advanced_tuning_parameters(). This endpoint does not need to include values for all parameters. If a parameter is omitted, its current_value will be used.

description : str

Human-readable string describing the newly advanced-tuned model

Returns:
ModelJob

The created job to build the model
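
A sketch of the tuning round trip: list the available parameters and resubmit one of them. For illustration this reuses current_value; in practice you would pick a new value permitted by that parameter's constraints. The IDs are placeholders.

import datarobot as dr

model = dr.Model.get('project-id', 'model-id')
params_info = model.get_advanced_tuning_parameters()
param = params_info['tuning_parameters'][0]
job = model.advanced_tune(
    {param['parameter_id']: param['current_value']},
    description='example advanced-tuning run',
)
tuned_model = job.get_result_when_complete()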

get_advanced_tuning_parameters() → AdvancedTuningParamsType

Get the advanced-tuning parameters available for this model.

As of v2.17, all models other than blenders, open source, prime, baseline and user-created support Advanced Tuning.

Returns:
dict

A dictionary describing the advanced-tuning parameters for the current model. There are two top-level keys, tuning_description and tuning_parameters.

tuning_description is an optional value. If not None, it is the user-specified description of this set of tuning parameters.

tuning_parameters is a list of dicts, each with the following keys

  • parameter_name : (str) name of the parameter (unique per task, see below)
  • parameter_id : (str) opaque ID string uniquely identifying parameter
  • default_value : (*) default value of the parameter for the blueprint
  • current_value : (*) value of the parameter that was used for this model
  • task_name : (str) name of the task that this parameter belongs to
  • constraints: (dict) see the notes below
  • vertex_id: (str) ID of vertex that this parameter belongs to

Notes

The type of default_value and current_value is defined by the constraints structure. It will be a string or numeric Python type.

constraints is a dict with at least one, possibly more, of the following keys. The presence of a key indicates that the parameter may take on the specified type. (If a key is absent, this means that the parameter may not take on the specified type.) If a key on constraints is present, its value will be a dict containing all of the fields described below for that key.

"constraints": {
    "select": {
        "values": [<list(basestring or number) : possible values>]
    },
    "ascii": {},
    "unicode": {},
    "int": {
        "min": <int : minimum valid value>,
        "max": <int : maximum valid value>,
        "supports_grid_search": <bool : True if Grid Search may be
                                        requested for this param>
    },
    "float": {
        "min": <float : minimum valid value>,
        "max": <float : maximum valid value>,
        "supports_grid_search": <bool : True if Grid Search may be
                                        requested for this param>
    },
    "intList": {
        "min_length": <int : minimum valid length>,
        "max_length": <int : maximum valid length>
        "min_val": <int : minimum valid value>,
        "max_val": <int : maximum valid value>
        "supports_grid_search": <bool : True if Grid Search may be
                                        requested for this param>
    },
    "floatList": {
        "min_length": <int : minimum valid length>,
        "max_length": <int : maximum valid length>
        "min_val": <float : minimum valid value>,
        "max_val": <float : maximum valid value>
        "supports_grid_search": <bool : True if Grid Search may be
                                        requested for this param>
    }
}

The keys have meaning as follows:

  • select: Rather than specifying a specific data type, if present, it indicates that the parameter is permitted to take on any of the specified values. Listed values may be of any string or real (non-complex) numeric type.
  • ascii: The parameter may be a unicode object that encodes simple ASCII characters. (A-Z, a-z, 0-9, whitespace, and certain common symbols.) In addition to listed constraints, ASCII keys currently may not contain either newlines or semicolons.
  • unicode: The parameter may be any Python unicode object.
  • int: The value may be an object of type int within the specified range (inclusive). Please note that the value will be passed around using the JSON format, and some JSON parsers have undefined behavior with integers outside of the range [-(2**53)+1, (2**53)-1].
  • float: The value may be an object of type float within the specified range (inclusive).
  • intList, floatList: The value may be a list of int or float objects, respectively, following constraints as specified respectively by the int and float types (above).

Many parameters only specify one key under constraints. If a parameter specifies multiple keys, the parameter may take on any value permitted by any key.

start_advanced_tuning_session()

Start an Advanced Tuning session. Returns an object that helps set up arguments for an Advanced Tuning model execution.

As of v2.17, all models other than blenders, open source, prime, baseline and user-created support Advanced Tuning.

Returns:
AdvancedTuningSession

Session for setting up and running Advanced Tuning on a model
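
A sketch of the session workflow, assuming the get_task_names, get_parameter_names, set_parameter, and run helpers on AdvancedTuningSession; the value 42 is an arbitrary placeholder, and model is a model object retrieved as in the earlier examples.

session = model.start_advanced_tuning_session()
task = session.get_task_names()[0]
parameter = session.get_parameter_names(task)[0]
# set one parameter to a new value, then submit the tuning job
session.set_parameter(task_name=task, parameter_name=parameter, value=42)
job = session.run()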

star_model() → None

Mark the model as starred.

Model stars propagate to the web application and the API, and can be used to filter when listing models.

unstar_model() → None

Unmark the model as starred.

Model stars propagate to the web application and the API, and can be used to filter when listing models.

set_prediction_threshold(threshold)

Set a custom prediction threshold for the model.

May not be used once prediction_threshold_read_only is True for this model.

Parameters:
threshold : float

only used for binary classification projects. The threshold to use when deciding between the positive and negative classes when making predictions. Should be between 0.0 and 1.0 (inclusive).
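
For example, assuming the threshold is still modifiable:

if not model.prediction_threshold_read_only:
    model.set_prediction_threshold(0.6)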

download_training_artifact(file_name)

Retrieve trained artifact(s) from a model containing one or more custom tasks.

Artifact(s) will be downloaded to the specified local filepath.

Parameters:
file_name : str

File path where trained model artifact(s) will be saved.

request_fairness_insights(fairness_metrics_set=None)

Request fairness insights to be computed for the model.

Parameters:
fairness_metrics_set : str, optional

Can be one of <datarobot.enums.FairnessMetricsSet>. The fairness metric used to calculate the fairness scores.

Returns:
status_id : str

A statusId of the computation request.

get_fairness_insights(fairness_metrics_set=None, offset=0, limit=100)

Retrieve a list of Per Class Bias insights for the model.

Parameters:
fairness_metrics_set : str, optional

Can be one of <datarobot.enums.FairnessMetricsSet>. The fairness metric used to calculate the fairness scores.

offset : int, optional

Number of items to skip.

limit : int, optional

Number of items to return.

Returns:
json
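
A sketch of the request/retrieve pair; the fairness insights become available only after the computation tracked by the returned statusId completes.

status_id = model.request_fairness_insights()
# ... wait for the computation tracked by status_id to finish ...
insights = model.get_fairness_insights(offset=0, limit=50)
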
request_data_disparity_insights(feature, compared_class_names)

Request data disparity insights to be computed for the model.

Parameters:
feature : str

Bias and Fairness protected feature name.

compared_class_names : list(str)

List of two classes to compare

Returns:
status_id : str

A statusId of the computation request.

get_data_disparity_insights(feature, class_name1, class_name2)

Retrieve a list of Cross Class Data Disparity insights for the model.

Parameters:
feature : str

Bias and Fairness protected feature name.

class_name1 : str

One of the compared classes

class_name2 : str

Another compared class

Returns:
json
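
A sketch of the request/retrieve pair; 'gender' and the class names are placeholder values for a Bias and Fairness protected feature, and model is a model object retrieved as in the earlier examples.

status_id = model.request_data_disparity_insights(
    'gender', compared_class_names=['Female', 'Male']
)
# ... wait for the computation tracked by status_id to finish ...
insights = model.get_data_disparity_insights('gender', 'Female', 'Male')
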
request_cross_class_accuracy_scores()

Request Cross Class Accuracy scores to be computed for the model.

Returns:
status_id : str

A statusId of the computation request.

get_cross_class_accuracy_scores()

Retrieves a list of Cross Class Accuracy scores for the model.

Returns:
json
classmethod from_data(data: Union[Dict[str, Any], List[Dict[str, Any]]]) → T

Instantiate an object of this class using a dict.

Parameters:
data : dict

Correctly snake_cased keys and their values.

open_in_browser() → None

Opens the class’ relevant web browser location. If the default browser is not available, the URL is logged.

Note: If text-mode browsers are used, the calling process will block until the user exits the browser.

PrimeModel

class datarobot.models.PrimeModel(id=None, processes=None, featurelist_name=None, featurelist_id=None, project_id=None, sample_pct=None, training_row_count=None, training_duration=None, training_start_date=None, training_end_date=None, model_type=None, model_category=None, is_frozen=None, blueprint_id=None, metrics=None, parent_model_id=None, ruleset_id=None, rule_count=None, score=None, monotonic_increasing_featurelist_id=None, monotonic_decreasing_featurelist_id=None, supports_monotonic_constraints=None, is_starred=None, prediction_threshold=None, prediction_threshold_read_only=None, model_number=None, supports_composable_ml=None)

Represents a DataRobot Prime model approximating a parent model with downloadable code.

All durations are specified with a duration string such as those returned by the partitioning_methods.construct_duration_string helper method. Please see datetime partitioned project documentation for more information on duration strings.

Attributes:
id : str

the id of the model

project_id : str

the id of the project the model belongs to

processes : list of str

the processes used by the model

featurelist_name : str

the name of the featurelist used by the model

featurelist_id : str

the id of the featurelist used by the model

sample_pct : float

the percentage of the project dataset used in training the model

training_row_count : int or None

the number of rows of the project dataset used in training the model. In a datetime partitioned project, if specified, defines the number of rows used to train the model and evaluate backtest scores; if unspecified, either training_duration or training_start_date and training_end_date was used to determine that instead.

training_duration : str or None

only present for models in datetime partitioned projects. If specified, a duration string specifying the duration spanned by the data used to train the model and evaluate backtest scores.

training_start_date : datetime or None

only present for frozen models in datetime partitioned projects. If specified, the start date of the data used to train the model.

training_end_date : datetime or None

only present for frozen models in datetime partitioned projects. If specified, the end date of the data used to train the model.

model_type : str

what model this is, e.g. ‘DataRobot Prime’

model_category : str

what kind of model this is - always ‘prime’ for DataRobot Prime models

is_frozen : bool

whether this model is a frozen model

blueprint_id : str

the id of the blueprint used in this model

metrics : dict

a mapping from each metric to the model’s scores for that metric

ruleset : Ruleset

the ruleset used in the Prime model

parent_model_id : str

the id of the model that this Prime model approximates

monotonic_increasing_featurelist_id : str

optional, the id of the featurelist that defines the set of features with a monotonically increasing relationship to the target. If None, no such constraints are enforced.

monotonic_decreasing_featurelist_id : str

optional, the id of the featurelist that defines the set of features with a monotonically decreasing relationship to the target. If None, no such constraints are enforced.

supports_monotonic_constraints : bool

optional, whether this model supports enforcing monotonic constraints

is_starred : bool

whether this model is marked as starred

prediction_threshold : float

for binary classification projects, the threshold used for predictions

prediction_threshold_read_only : bool

indicates whether modification of the prediction threshold is forbidden. Threshold modification is forbidden once a model has had a deployment created or predictions made via the dedicated prediction API.

supports_composable_ml : bool or None

(New in version v2.26) whether this model is supported in the Composable ML.

classmethod get(project_id, model_id)

Retrieve a specific prime model.

Parameters:
project_id : str

The id of the project the prime model belongs to

model_id : str

The model_id of the prime model to retrieve.

Returns:
model : PrimeModel

The queried instance.

request_download_validation(language)

Prep and validate the downloadable code for the ruleset associated with this model.

Parameters:
language : str

the language the code should be downloaded in - see datarobot.enums.PRIME_LANGUAGE for available languages

Returns:
job : Job

A job tracking the code preparation and validation
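
A sketch of retrieving a Prime model and validating its downloadable code; the IDs are placeholders.

import datarobot as dr

prime_model = dr.models.PrimeModel.get('project-id', 'prime-model-id')
validation_job = prime_model.request_download_validation(dr.enums.PRIME_LANGUAGE.PYTHON)
validation_job.wait_for_completion()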

advanced_tune(params, description: Optional[str] = None) → ModelJob

Generate a new model with the specified advanced-tuning parameters

As of v2.17, all models other than blenders, open source, prime, baseline and user-created support Advanced Tuning.

Parameters:
params : dict

Mapping of parameter ID to parameter value. The list of valid parameter IDs for a model can be found by calling get_advanced_tuning_parameters(). This endpoint does not need to include values for all parameters. If a parameter is omitted, its current_value will be used.

description : str

Human-readable string describing the newly advanced-tuned model

Returns:
ModelJob

The created job to build the model

cross_validate()

Run cross validation on the model.

Note

To perform Cross Validation on a new model with new parameters, use train instead.

Returns:
ModelJob

The created job to build the model

delete() → None

Delete a model from the project’s leaderboard.

download_export(filepath: str) → None

Download an exportable model file for use in an on-premise DataRobot standalone prediction environment.

This function can only be used if model export is enabled, and will only be useful if you have an on-premise environment in which to import it.

Parameters:
filepath : str

The path at which to save the exported model file.

download_scoring_code(file_name, source_code=False)

Download the Scoring Code JAR.

Parameters:
file_name : str

File path where scoring code will be saved.

source_code : bool, optional

Set to True to download source code archive. It will not be executable.

download_training_artifact(file_name)

Retrieve trained artifact(s) from a model containing one or more custom tasks.

Artifact(s) will be downloaded to the specified local filepath.

Parameters:
file_name : str

File path where trained model artifact(s) will be saved.

classmethod from_data(data: Union[Dict[str, Any], List[Dict[str, Any]]]) → T

Instantiate an object of this class using a dict.

Parameters:
data : dict

Correctly snake_cased keys and their values.

get_advanced_tuning_parameters() → AdvancedTuningParamsType

Get the advanced-tuning parameters available for this model.

As of v2.17, all models other than blenders, open source, prime, baseline and user-created support Advanced Tuning.

Returns:
dict

A dictionary describing the advanced-tuning parameters for the current model. There are two top-level keys, tuning_description and tuning_parameters.

tuning_description is an optional value. If not None, it is the user-specified description of this set of tuning parameters.

tuning_parameters is a list of dicts, each with the following keys

  • parameter_name : (str) name of the parameter (unique per task, see below)
  • parameter_id : (str) opaque ID string uniquely identifying parameter
  • default_value : (*) default value of the parameter for the blueprint
  • current_value : (*) value of the parameter that was used for this model
  • task_name : (str) name of the task that this parameter belongs to
  • constraints: (dict) see the notes below
  • vertex_id: (str) ID of vertex that this parameter belongs to

Notes

The type of default_value and current_value is defined by the constraints structure. It will be a string or numeric Python type.

constraints is a dict with at least one, possibly more, of the following keys. The presence of a key indicates that the parameter may take on the specified type. (If a key is absent, this means that the parameter may not take on the specified type.) If a key on constraints is present, its value will be a dict containing all of the fields described below for that key.

"constraints": {
    "select": {
        "values": [<list(basestring or number) : possible values>]
    },
    "ascii": {},
    "unicode": {},
    "int": {
        "min": <int : minimum valid value>,
        "max": <int : maximum valid value>,
        "supports_grid_search": <bool : True if Grid Search may be
                                        requested for this param>
    },
    "float": {
        "min": <float : minimum valid value>,
        "max": <float : maximum valid value>,
        "supports_grid_search": <bool : True if Grid Search may be
                                        requested for this param>
    },
    "intList": {
        "min_length": <int : minimum valid length>,
        "max_length": <int : maximum valid length>
        "min_val": <int : minimum valid value>,
        "max_val": <int : maximum valid value>
        "supports_grid_search": <bool : True if Grid Search may be
                                        requested for this param>
    },
    "floatList": {
        "min_length": <int : minimum valid length>,
        "max_length": <int : maximum valid length>
        "min_val": <float : minimum valid value>,
        "max_val": <float : maximum valid value>
        "supports_grid_search": <bool : True if Grid Search may be
                                        requested for this param>
    }
}

The keys have meaning as follows:

  • select: Rather than specifying a specific data type, if present, it indicates that the parameter is permitted to take on any of the specified values. Listed values may be of any string or real (non-complex) numeric type.
  • ascii: The parameter may be a unicode object that encodes simple ASCII characters. (A-Z, a-z, 0-9, whitespace, and certain common symbols.) In addition to listed constraints, ASCII keys currently may not contain either newlines or semicolons.
  • unicode: The parameter may be any Python unicode object.
  • int: The value may be an object of type int within the specified range (inclusive). Please note that the value will be passed around using the JSON format, and some JSON parsers have undefined behavior with integers outside of the range [-(2**53)+1, (2**53)-1].
  • float: The value may be an object of type float within the specified range (inclusive).
  • intList, floatList: The value may be a list of int or float objects, respectively, following constraints as specified respectively by the int and float types (above).

Many parameters only specify one key under constraints. If a parameter specifies multiple keys, the parameter may take on any value permitted by any key.

get_all_confusion_charts(fallback_to_parent_insights=False)

Retrieve a list of all confusion matrices available for the model.

Parameters:
fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return confusion chart data for this model’s parent for any source that is not available for this model and if this model has a defined parent model. If omitted or False, or this model has no parent, this will not attempt to retrieve any data from this model’s parent.

Returns:
list of ConfusionChart

Data for all available confusion charts for model.

get_all_lift_charts(fallback_to_parent_insights=False)

Retrieve a list of all Lift charts available for the model.

Parameters:
fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return lift chart data for this model’s parent for any source that is not available for this model and if this model has a defined parent model. If omitted or False, or this model has no parent, this will not attempt to retrieve any data from this model’s parent.

Returns:
list of LiftChart

Data for all available model lift charts.

Raises:
ClientError

If the insight is not available for this model

get_all_multiclass_lift_charts(fallback_to_parent_insights=False)

Retrieve a list of all Lift charts available for the model.

Parameters:
fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return lift chart data for this model’s parent for any source that is not available for this model and if this model has a defined parent model. If omitted or False, or this model has no parent, this will not attempt to retrieve any data from this model’s parent.

Returns:
list of LiftChart

Data for all available model lift charts.

Raises:
ClientError

If the insight is not available for this model

get_all_residuals_charts(fallback_to_parent_insights=False)

Retrieve a list of all residuals charts available for the model.

Parameters:
fallback_to_parent_insights : bool

Optional, if True, this will return residuals chart data for this model’s parent for any source that is not available for this model and if this model has a defined parent model. If omitted or False, or this model has no parent, this will not attempt to retrieve any data from this model’s parent.

Returns:
list of ResidualsChart

Data for all available model residuals charts.

get_all_roc_curves(fallback_to_parent_insights=False)

Retrieve a list of all ROC curves available for the model.

Parameters:
fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return ROC curve data for this model’s parent for any source that is not available for this model and if this model has a defined parent model. If omitted or False, or this model has no parent, this will not attempt to retrieve any data from this model’s parent.

Returns:
list of RocCurve

Data for all available model ROC curves.

get_confusion_chart(source, fallback_to_parent_insights=False)

Retrieve the model’s confusion matrix for the specified source.

Parameters:
source : str

Confusion chart source. Check datarobot.enums.CHART_DATA_SOURCE for possible values.

fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return confusion chart data for this model’s parent if the confusion chart is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return insight data from this model’s parent.

Returns:
ConfusionChart

Model ConfusionChart data

Raises:
ClientError

If the insight is not available for this model

get_cross_class_accuracy_scores()

Retrieves a list of Cross Class Accuracy scores for the model.

Returns:
json
get_cross_validation_scores(partition=None, metric=None)

Return a dictionary, keyed by metric, showing cross validation scores per partition.

Cross Validation should already have been performed using cross_validate or train.

Note

Models that computed cross validation before this feature was added will need to be deleted and retrained before this method can be used.

Parameters:
partition : float

optional, the id of the partition (1, 2, 3.0, 4.0, etc.) to filter results by; can be a positive whole number given as an integer or float value. 0 corresponds to the validation partition.

metric: unicode

optional, the name of the metric to filter the resulting cross validation scores by

Returns:
cross_validation_scores: dict

A dictionary keyed by metric showing cross validation scores per partition.

get_data_disparity_insights(feature, class_name1, class_name2)

Retrieve a list of Cross Class Data Disparity insights for the model.

Parameters:
feature : str

Bias and Fairness protected feature name.

class_name1 : str

One of the compared classes

class_name2 : str

Another compared class

Returns:
json
get_fairness_insights(fairness_metrics_set=None, offset=0, limit=100)

Retrieve a list of Per Class Bias insights for the model.

Parameters:
fairness_metrics_set : str, optional

Can be one of <datarobot.enums.FairnessMetricsSet>. The fairness metric used to calculate the fairness scores.

offset : int, optional

Number of items to skip.

limit : int, optional

Number of items to return.

Returns:
json
get_feature_effect(source: str)

Retrieve Feature Effects for the model.

Feature Effects provides partial dependence and predicted vs actual values for top-500 features ordered by feature impact score.

The partial dependence shows marginal effect of a feature on the target variable after accounting for the average effects of all other predictive features. It indicates how, holding all other variables except the feature of interest as they were, the value of this feature affects your prediction.

Requires that Feature Effects has already been computed with request_feature_effect.

See get_feature_effect_metadata for retrieving information on the available sources.

Parameters:
source : string

The source Feature Effects are retrieved for.

Returns:
feature_effects : FeatureEffects

The feature effects data.

Raises:
ClientError (404)

If the feature effects have not been computed or source is not a valid value.

get_feature_effect_metadata()

Retrieve Feature Effects metadata. Response contains status and available model sources.

  • Feature Fit for the training partition is always available, with the exception of older projects that only supported Feature Fit for validation.
  • When a model is trained into validation or holdout without stacked predictions (i.e., no out-of-sample predictions in those partitions), Feature Effects is not available for validation or holdout.
  • Feature Effects for holdout is not available when holdout was not unlocked for the project.

Use source to retrieve Feature Effects, selecting one of the provided sources.

Returns:
feature_effect_metadata: FeatureEffectMetadata
get_feature_effects_multiclass(source: str = 'training', class_: Optional[str] = None)

Retrieve Feature Effects for the multiclass model.

Feature Effects provide partial dependence and predicted vs actual values for top-500 features ordered by feature impact score.

The partial dependence shows marginal effect of a feature on the target variable after accounting for the average effects of all other predictive features. It indicates how, holding all other variables except the feature of interest as they were, the value of this feature affects your prediction.

Requires that Feature Effects has already been computed with request_feature_effect.

See get_feature_effect_metadata for retrieving information on the available sources.

Parameters:
source : str

The source Feature Effects are retrieved for.

class_ : str or None

The class name Feature Effects are retrieved for.

Returns:
list

The list of multiclass feature effects.

Raises:
ClientError (404)

If Feature Effects have not been computed or source is not a valid value.

get_feature_fit(source: str)

Retrieve Feature Fit for the model.

Feature Fit provides partial dependence and predicted vs actual values for top-500 features ordered by feature importance score.

The partial dependence shows marginal effect of a feature on the target variable after accounting for the average effects of all other predictive features. It indicates how, holding all other variables except the feature of interest as they were, the value of this feature affects your prediction.

Requires that Feature Fit has already been computed with request_feature_effect.

See get_feature_fit_metadata for retrieving information on the available sources.

Parameters:
source : string

The source the Feature Fit is retrieved for. One value of FeatureFitMetadata.sources.

Returns:
feature_fit : FeatureFit

The feature fit data.

Raises:
ClientError (404)

If the Feature Fit has not been computed or source is not a valid value.

get_feature_fit_metadata()

Retrieve Feature Fit metadata. Response contains status and available model sources.

  • Feature Fit for the training partition is always available, with the exception of older projects that only supported Feature Fit for validation.
  • When a model is trained into validation or holdout without stacked predictions (i.e., no out-of-sample predictions in those partitions), Feature Fit is not available for validation or holdout.
  • Feature Fit for holdout is not available when no holdout was configured for the project.

Use source to retrieve Feature Fit, selecting one of the provided sources.

Returns:
feature_fit_metadata: FeatureFitMetadata
get_feature_impact(with_metadata: bool = False)

Retrieve the computed Feature Impact results, a measure of the relevance of each feature in the model.

Feature Impact is computed for each column by creating new data with that column randomly permuted (but the others left unchanged), and seeing how the error metric score for the predictions is affected. The ‘impactUnnormalized’ is how much worse the error metric score is when making predictions on this modified data. The ‘impactNormalized’ is normalized so that the largest value is 1. In both cases, larger values indicate more important features.

If a feature is a redundant feature, i.e. once other features are considered it doesn’t contribute much in addition, the ‘redundantWith’ value is the name of the feature that has the highest correlation with this feature. Note that redundancy detection is only available for jobs run after the addition of this feature. When retrieving data that predates this functionality, a NoRedundancyImpactAvailable warning will be issued.

Elsewhere this technique is sometimes called ‘Permutation Importance’.

Requires that Feature Impact has already been computed with request_feature_impact.

Parameters:
with_metadata : bool

The flag indicating if the result should include the metadata as well.

Returns:
list or dict

The feature impact data response depends on the with_metadata parameter. The response is either a dict with metadata and a list with actual data or just a list with that data.

Each list item is a dict with the keys featureName, impactNormalized, impactUnnormalized, redundantWith, and count.

For dict response available keys are:

  • featureImpacts - Feature Impact data as a list of dicts. Each item is a dict with the
    keys featureName, impactNormalized, impactUnnormalized, and redundantWith.
  • shapBased - A boolean that indicates whether Feature Impact was calculated using
    Shapley values.
  • ranRedundancyDetection - A boolean that indicates whether redundant feature
    identification was run while calculating this Feature Impact.
  • rowCount - An integer or None that indicates the number of rows that was used to
    calculate Feature Impact. For the Feature Impact calculated with the default logic, without specifying the rowCount, we return None here.
  • count - An integer with the number of features under the featureImpacts.
Raises:
ClientError (404)

If the feature impacts have not been computed.
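
A short sketch using the metadata form of the response described above; model is a model object retrieved as in the earlier examples, and the keys mirror the documented schema.

impacts = model.get_feature_impact(with_metadata=True)
print(impacts['ranRedundancyDetection'])
# rank features by normalized impact
top_features = sorted(
    impacts['featureImpacts'], key=lambda f: f['impactNormalized'], reverse=True
)[:5]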

get_features_used() → List[str]

Query the server to determine which features were used.

Note that the data returned by this method is possibly different than the names of the features in the featurelist used by this model. This method will return the raw features that must be supplied in order for predictions to be generated on a new set of data. The featurelist, in contrast, would also include the names of derived features.

Returns:
features : list of str

The names of the features used in the model.

get_frozen_child_models()

Retrieve the IDs for all models that are frozen from this model.

Returns:
A list of Models
get_labelwise_roc_curves(source, fallback_to_parent_insights=False)

Retrieve a list of LabelwiseRocCurve instances for a multilabel model for the given source and all labels.

New in version v2.24.

Parameters:
source : str

ROC curve data source. Check datarobot.enums.CHART_DATA_SOURCE for possible values.

fallback_to_parent_insights : bool

Optional, if True, this will return ROC curve data for this model’s parent if the ROC curve is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return data from this model’s parent.

Returns:
list of LabelwiseRocCurve

Labelwise ROC Curve instances for source and all labels

Raises:
ClientError

If the insight is not available for this model

(New in version v3.0) TypeError

If the underlying project type is binary

get_leaderboard_ui_permalink() → str

Returns:
url : str

Permanent static hyperlink to this model at leaderboard.

get_lift_chart(source, fallback_to_parent_insights=False)

Retrieve the model Lift chart for the specified source.

Parameters:
source : str

Lift chart data source. Check datarobot.enums.CHART_DATA_SOURCE for possible values. (New in version v2.23) For time series and OTV models, also accepts values backtest_2, backtest_3, …, up to the number of backtests in the model.

fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return lift chart data for this model’s parent if the lift chart is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return insight data from this model’s parent.

Returns:
LiftChart

Model lift chart data

Raises:
ClientError

If the insight is not available for this model

get_missing_report_info()

Retrieve a report on missing training data that can be used to understand missing values treatment in the model. The report consists of missing values resolutions for numeric or categorical features that were part of building the model.

Returns:
An iterable of MissingReportPerFeature

The queried model missing report, sorted by missing count (DESCENDING order).

get_model_blueprint_chart()

Retrieve a diagram that can be used to understand data flow in the blueprint.

Returns:
ModelBlueprintChart

The queried model blueprint chart.

get_model_blueprint_documents()

Get documentation for tasks used in this model.

Returns:
list of BlueprintTaskDocument

All documents available for the model.

get_multiclass_feature_impact()

For multiclass it’s possible to calculate feature impact separately for each target class. The method for calculation is exactly the same, calculated in one-vs-all style for each target class.

Requires that Feature Impact has already been computed with request_feature_impact.

Returns:
feature_impacts : list of dict

The feature impact data. Each item is a dict with the keys ‘featureImpacts’ (list) and ‘class’ (str). Each item in ‘featureImpacts’ is a dict with the keys ‘featureName’, ‘impactNormalized’, ‘impactUnnormalized’, and ‘redundantWith’.

Raises:
ClientError (404)

If the multiclass feature impacts have not been computed.

get_multiclass_lift_chart(source, fallback_to_parent_insights=False)

Retrieve model Lift chart for the specified source.

Parameters:
source : str

Lift chart data source. Check datarobot.enums.CHART_DATA_SOURCE for possible values.

fallback_to_parent_insights : bool

Optional, if True, this will return lift chart data for this model’s parent if the lift chart is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return insight data from this model’s parent.

Returns:
list of LiftChart

Model lift chart data for each saved target class

Raises:
ClientError

If the insight is not available for this model

get_multilabel_lift_charts(source, fallback_to_parent_insights=False)

Retrieve model Lift charts for the specified source.

New in version v2.24.

Parameters:
source : str

Lift chart data source. Check datarobot.enums.CHART_DATA_SOURCE for possible values.

fallback_to_parent_insights : bool

Optional, if True, this will return lift chart data for this model’s parent if the lift chart is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return insight data from this model’s parent.

Returns:
list of LiftChart

Model lift chart data for each saved target class

Raises:
ClientError

If the insight is not available for this model

get_num_iterations_trained()

Retrieves the number of estimators trained by early-stopping tree-based models.

New in version v2.22.

Returns:
projectId: str

id of project containing the model

modelId: str

id of the model

data: array

list of numEstimatorsItem objects, one for each modeling stage.

numEstimatorsItem will be of the form:

stage: str

indicates the modeling stage (for multi-stage models); None for single-stage models

numIterations: int

the number of estimators or iterations trained by the model
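
A sketch, assuming the dict-like return structure described above; model is an early-stopping tree-based model retrieved as in the earlier examples.

info = model.get_num_iterations_trained()
for item in info['data']:
    print(item['stage'], item['numIterations'])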

get_or_request_feature_effect(source: str, max_wait: int = 600, row_count: Optional[int] = None)

Retrieve feature effect for the model, requesting a job if it hasn’t been run previously

See get_feature_effect_metadata for retrieving information of source.

Parameters:
max_wait : int, optional

The maximum time to wait for a requested feature effect job to complete before erroring

row_count : int, optional

(New in version v2.21) The sample size to use for Feature Impact computation. Minimum is 10 rows. Maximum is 100000 rows or the training sample size of the model, whichever is less.

source : string

The source Feature Effects are retrieved for.

Returns:
feature_effects : FeatureEffects

The feature effects data.
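
For example, to fetch (or compute, if needed) Feature Effects on the training partition:

# model is a model object retrieved as in the earlier examples
feature_effects = model.get_or_request_feature_effect(source='training')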

get_or_request_feature_effects_multiclass(source, top_n_features=None, features=None, row_count=None, class_=None, max_wait=600)

Retrieve Feature Effects for the multiclass model, requesting a job if it hasn’t been run previously.

Parameters:
source : string

The source Feature Effects are retrieved for.

class_ : str or None

The class name Feature Effects are retrieved for.

row_count : int

The number of rows from dataset to use for Feature Impact calculation.

top_n_features : int or None

Number of top features (ranked by Feature Impact) used to calculate Feature Effects.

features : list or None

The list of features used to calculate Feature Effects.

max_wait : int, optional

The maximum time to wait for a requested Feature Effects job to complete before erroring.

Returns:
feature_effects : list of FeatureEffectsMulticlass

The list of multiclass feature effects data.

get_or_request_feature_fit(source: str, max_wait: int = 600)

Retrieve feature fit for the model, requesting a job if it hasn’t been run previously

See get_feature_fit_metadata for retrieving information of source.

Parameters:
max_wait : int, optional

The maximum time to wait for a requested feature fit job to complete before erroring

source : string

The source the Feature Fit is retrieved for. One value of FeatureFitMetadata.sources.

Returns:
feature_fit : FeatureFit

The feature fit data.

get_or_request_feature_impact(max_wait: int = 600, **kwargs)

Retrieve feature impact for the model, requesting a job if it hasn’t been run previously

Parameters:
max_wait : int, optional

The maximum time to wait for a requested feature impact job to complete before erroring

**kwargs

Arbitrary keyword arguments passed to request_feature_impact.

Returns:
feature_impacts : list or dict

The feature impact data. See get_feature_impact for the exact schema.

get_parameters()

Retrieve model parameters.

Returns:
ModelParameters

Model parameters for this model.

get_pareto_front()

Retrieve the Pareto Front for a Eureqa model.

This method is only supported for Eureqa models.

Returns:
ParetoFront

Model ParetoFront data

get_prime_eligibility()

Check if this model can be approximated with DataRobot Prime

Returns:
prime_eligibility : dict

a dict indicating whether a model can be approximated with DataRobot Prime (key can_make_prime) and why it may be ineligible (key message)

get_residuals_chart(source, fallback_to_parent_insights=False)

Retrieve model residuals chart for the specified source.

Parameters:
source : str

Residuals chart data source. Check datarobot.enums.CHART_DATA_SOURCE for possible values.

fallback_to_parent_insights : bool

Optional, if True, this will return residuals chart data for this model’s parent if the residuals chart is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return residuals data from this model’s parent.

Returns:
ResidualsChart

Model residuals chart data

Raises:
ClientError

If the insight is not available for this model

get_roc_curve(source, fallback_to_parent_insights=False)

Retrieve the ROC curve for a binary model for the specified source.

Parameters:
source : str

ROC curve data source. Check datarobot.enums.CHART_DATA_SOURCE for possible values. (New in version v2.23) For time series and OTV models, also accepts values backtest_2, backtest_3, …, up to the number of backtests in the model.

fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return ROC curve data for this model’s parent if the ROC curve is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return data from this model’s parent.

Returns:
RocCurve

Model ROC curve data

Raises:
ClientError

If the insight is not available for this model

(New in version v3.0) TypeError

If the underlying project type is multilabel

get_rulesets() → List[datarobot.models.ruleset.Ruleset]

List the rulesets approximating this model generated by DataRobot Prime

If this model hasn’t been approximated yet, will return an empty list. Note that these are rulesets approximating this model, not rulesets used to construct this model.

Returns:
rulesets : list of Ruleset
get_supported_capabilities()

Retrieves a summary of the capabilities supported by a model.

New in version v2.14.

Returns:
supportsBlending: bool

whether the model supports blending

supportsMonotonicConstraints: bool

whether the model supports monotonic constraints

hasWordCloud: bool

whether the model has word cloud data available

eligibleForPrime: bool

whether the model is eligible for Prime

hasParameters: bool

whether the model has parameters that can be retrieved

supportsCodeGeneration: bool

(New in version v2.18) whether the model supports code generation

supportsShap: bool

(New in version v2.18) True if the model supports the Shapley package, i.e. Shapley-based feature importance

supportsEarlyStopping: bool

(New in version v2.22) True if this is an early stopping tree-based model and number of trained iterations can be retrieved.
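
A sketch, assuming the capabilities are returned as a dict keyed by the field names listed above; model and the file name are placeholders.

capabilities = model.get_supported_capabilities()
if capabilities['supportsCodeGeneration']:
    model.download_scoring_code('scoring_code.jar')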

get_uri() → str
Returns:
url : str

Permanent static hyperlink to this model at leaderboard.

get_word_cloud(exclude_stop_words=False)

Retrieve word cloud data for the model.

Parameters:
exclude_stop_words : bool, optional

Set to True if you want stopwords filtered out of the response.

Returns:
WordCloud

Word cloud data for the model.

open_in_browser() → None

Opens the class’ relevant web browser location. If the default browser is not available, the URL is logged.

Note: If text-mode browsers are used, the calling process will block until the user exits the browser.

open_model_browser() → None

Opens model at project leaderboard in web browser. Note: If text-mode browsers are used, the calling process will block until the user exits the browser.

request_cross_class_accuracy_scores()

Request Cross Class Accuracy scores to be computed for the model.

Returns:
status_id : str

A statusId of computation request.

request_data_disparity_insights(feature, compared_class_names)

Request data disparity insights to be computed for the model.

Parameters:
feature : str

Bias and Fairness protected feature name.

compared_class_names : list(str)

List of two classes to compare

Returns:
status_id : str

A statusId of computation request.

request_external_test(dataset_id: str, actual_value_column: Optional[str] = None)

Request external test to compute scores and insights on an external test dataset

Parameters:
dataset_id : string

The dataset to make predictions against (as uploaded from Project.upload_dataset)

actual_value_column : string, optional

(New in version v2.21) For time series unsupervised projects only. Actual value column can be used to calculate the classification metrics and insights on the prediction dataset. Can’t be provided with the forecast_point parameter.

Returns:
job : Job

a Job representing external dataset insights computation
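
A minimal sketch of scoring an external test dataset, assuming the dataset is first uploaded via Project.upload_dataset (IDs and the file path are placeholders):

import datarobot as dr

project = dr.Project.get('project-id')
model = dr.Model.get('project-id', 'model-id')
external_dataset = project.upload_dataset('./external_test.csv')
job = model.request_external_test(external_dataset.id)
job.wait_for_completion()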

request_fairness_insights(fairness_metrics_set=None)

Request fairness insights to be computed for the model.

Parameters:
fairness_metrics_set : str, optional

Can be one of datarobot.enums.FairnessMetricsSet. The fairness metric used to calculate the fairness scores.

Returns:
status_id : str

A statusId of computation request.

request_feature_effect(row_count: Optional[int] = None)

Request feature effects to be computed for the model.

See get_feature_effect for more information on the result of the job.

Parameters:
row_count : int

(New in version v2.21) The sample size to use for Feature Impact computation. Minimum is 10 rows. Maximum is 100000 rows or the training sample size of the model, whichever is less.

Returns:
job : Job

A Job representing the feature effect computation. To get the completed feature effect data, use job.get_result or job.get_result_when_complete.

Raises:
JobAlreadyRequested (422)

If the feature effects have already been requested.

request_feature_effects_multiclass(row_count: Optional[int] = None, top_n_features: Optional[int] = None, features=None)

Request Feature Effects computation for the multiclass model.

See get_feature_effect for more information on the result of the job.

Parameters:
row_count : int

The number of rows from dataset to use for Feature Impact calculation.

top_n_features : int or None

Number of top features (ranked by feature impact) used to calculate Feature Effects.

features : list or None

The list of features used to calculate Feature Effects.

Returns:
job : Job

A Job representing Feature Effect computation. To get the completed Feature Effect data, use job.get_result or job.get_result_when_complete.

request_feature_fit()

Request feature fit to be computed for the model.

See get_feature_effect for more information on the result of the job.

Returns:
job : Job

A Job representing the feature fit computation. To get the completed feature fit data, use job.get_result or job.get_result_when_complete.

Raises:
JobAlreadyRequested (422)

If the feature fit has already been requested.

request_feature_impact(row_count: Optional[int] = None, with_metadata: bool = False)

Request feature impacts to be computed for the model.

See get_feature_impact for more information on the result of the job.

Parameters:
row_count : int

The sample size (specified in rows) to use for Feature Impact computation. This is not supported for unsupervised, multiclass (which has a separate method), and time series projects.

Returns:
job : Job

A Job representing the feature impact computation. To get the completed feature impact data, use job.get_result or job.get_result_when_complete.

Raises:
JobAlreadyRequested (422)

If the feature impacts have already been requested.

request_predictions(dataset_id: Optional[str] = None, dataset: Optional[Dataset] = None, dataframe: Optional[pd.DataFrame] = None, file_path: Optional[str] = None, file: Optional[IOBase] = None, include_prediction_intervals: Optional[bool] = None, prediction_intervals_size: Optional[int] = None, forecast_point: Optional[datetime] = None, predictions_start_date: Optional[datetime] = None, predictions_end_date: Optional[datetime] = None, actual_value_column: Optional[str] = None, explanation_algorithm: Optional[str] = None, max_explanations: Optional[int] = None, max_ngram_explanations: Optional[Union[int, str]] = None) → PredictJob

Requests predictions against a previously uploaded dataset.

Parameters:
dataset_id : string, optional

The ID of the dataset to make predictions against (as uploaded from Project.upload_dataset)

dataset : Dataset, optional

The dataset to make predictions against (as uploaded from Project.upload_dataset)

dataframe : pd.DataFrame, optional

(New in v3.0) The dataframe to make predictions against

file_path : str, optional

(New in v3.0) Path to file to make predictions against

file : IOBase, optional

(New in v3.0) File to make predictions against

include_prediction_intervals : bool, optional

(New in v2.16) For time series projects only. Specifies whether prediction intervals should be calculated for this request. Defaults to True if prediction_intervals_size is specified, otherwise defaults to False.

prediction_intervals_size : int, optional

(New in v2.16) For time series projects only. Represents the percentile to use for the size of the prediction intervals. Defaults to 80 if include_prediction_intervals is True. Prediction intervals size must be between 1 and 100 (inclusive).

forecast_point : datetime.datetime or None, optional

(New in version v2.20) For time series projects only. This is the default point relative to which predictions will be generated, based on the forecast window of the project. See the time series prediction documentation for more information.

predictions_start_date : datetime.datetime or None, optional

(New in version v2.20) For time series projects only. The start date for bulk predictions. Note that this parameter is for generating historical predictions using the training data. This parameter should be provided in conjunction with predictions_end_date. Can’t be provided with the forecast_point parameter.

predictions_end_date : datetime.datetime or None, optional

(New in version v2.20) For time series projects only. The end date for bulk predictions, exclusive. Note that this parameter is for generating historical predictions using the training data. This parameter should be provided in conjunction with predictions_start_date. Can’t be provided with the forecast_point parameter.

actual_value_column : string, optional

(New in version v2.21) For time series unsupervised projects only. Actual value column can be used to calculate the classification metrics and insights on the prediction dataset. Can’t be provided with the forecast_point parameter.

explanation_algorithm : str, optional

(New in version v2.21) If set to ‘shap’, the response will include prediction explanations based on the SHAP explainer (SHapley Additive exPlanations). Defaults to null (no prediction explanations).

max_explanations : int, optional

(New in version v2.21) Specifies the maximum number of explanation values that should be returned for each row, ordered by absolute value, greatest to least. If null, no limit. In the case of ‘shap’: if the number of features is greater than the limit, the sum of remaining values will also be returned as shapRemainingTotal. Defaults to null. Cannot be set if explanation_algorithm is omitted.

max_ngram_explanations : int or str, optional

(New in version v2.29) Specifies the maximum number of text explanation values that should be returned. If set to all, text explanations will be computed and all the ngram explanations will be returned. If set to a non-zero positive integer, text explanations will be computed and that many ngram explanations, sorted in descending order, will be returned. By default, text explanations are not computed.

Returns:
job : PredictJob

The job computing the predictions
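
For example, a minimal sketch of requesting predictions against a freshly uploaded dataset (IDs and the file path are placeholders):

import datarobot as dr

project = dr.Project.get('project-id')
model = dr.Model.get('project-id', 'model-id')
prediction_dataset = project.upload_dataset('./to_predict.csv')
predict_job = model.request_predictions(dataset_id=prediction_dataset.id)
# Blocks until the job finishes and returns the predictions as a pandas DataFrame
predictions = predict_job.get_result_when_complete()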

request_training_predictions(data_subset, explanation_algorithm=None, max_explanations=None)

Start a job to build training predictions

Parameters:
data_subset : str

data set definition to build predictions on. Choices are:

  • dr.enums.DATA_SUBSET.ALL or string all for all data available. Not valid for
    models in datetime partitioned projects
  • dr.enums.DATA_SUBSET.VALIDATION_AND_HOLDOUT or string validationAndHoldout for
    all data except training set. Not valid for models in datetime partitioned projects
  • dr.enums.DATA_SUBSET.HOLDOUT or string holdout for holdout data set only
  • dr.enums.DATA_SUBSET.ALL_BACKTESTS or string allBacktests for downloading
    the predictions for all backtest validation folds. Requires the model to have successfully scored all backtests. Datetime partitioned projects only.
explanation_algorithm : dr.enums.EXPLANATIONS_ALGORITHM

(New in v2.21) Optional. If set to dr.enums.EXPLANATIONS_ALGORITHM.SHAP, the response will include prediction explanations based on the SHAP explainer (SHapley Additive exPlanations). Defaults to None (no prediction explanations).

max_explanations : int

(New in v2.21) Optional. Specifies the maximum number of explanation values that should be returned for each row, ordered by absolute value, greatest to least. In the case of dr.enums.EXPLANATIONS_ALGORITHM.SHAP: If not set, explanations are returned for all features. If the number of features is greater than the max_explanations, the sum of remaining values will also be returned as shap_remaining_total. Max 100. Defaults to null for datasets narrower than 100 columns, defaults to 100 for datasets wider than 100 columns. Is ignored if explanation_algorithm is not set.

Returns:
Job

an instance of the created async job
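
A minimal sketch of computing holdout training predictions (IDs are placeholders):

import datarobot as dr
from datarobot.enums import DATA_SUBSET

model = dr.Model.get('project-id', 'model-id')
job = model.request_training_predictions(DATA_SUBSET.HOLDOUT)
# Blocks until the job finishes, then materializes the result as a DataFrame
training_predictions = job.get_result_when_complete()
df = training_predictions.get_all_as_dataframe()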

request_transferable_export(prediction_intervals_size: Optional[int] = None) → Job

Request generation of an exportable model file for use in an on-premise DataRobot standalone prediction environment.

This function can only be used if model export is enabled, and will only be useful if you have an on-premise environment in which to import it.

This function does not download the exported file. Use download_export for that.

Parameters:
prediction_intervals_size : int, optional

(New in v2.19) For time series projects only. Represents the percentile to use for the size of the prediction intervals. Prediction intervals size must be between 1 and 100 (inclusive).

Returns:
Job

Examples

model = datarobot.Model.get('project-id', 'model-id')
job = model.request_transferable_export()
job.wait_for_completion()
model.download_export('my_exported_model.drmodel')

# Client must be configured to use standalone prediction server for import:
datarobot.Client(token='my-token-at-standalone-server',
                 endpoint='standalone-server-url/api/v2')

imported_model = datarobot.ImportedModel.create('my_exported_model.drmodel')

retrain(sample_pct: Optional[float] = None, featurelist_id: Optional[str] = None, training_row_count: Optional[int] = None, n_clusters: Optional[int] = None) → ModelJob

Submit a job to the queue to retrain this model with a different sample size, featurelist, or number of clusters.

Parameters:
sample_pct: float, optional

The sample size, in percent (1 to 100), to use in training. If this parameter is used, then training_row_count should not be given.

featurelist_id : str, optional

The featurelist id

training_row_count : int, optional

The number of rows used to train the model. If this parameter is used, then sample_pct should not be given.

n_clusters: int, optional

(New in version v2.27) The number of clusters to use in an unsupervised clustering model. This parameter is used only for unsupervised clustering models that do not determine the number of clusters automatically.

Returns:
job : ModelJob

The created job that is retraining the model
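
For example, a minimal sketch of retraining this model on a larger sample (IDs and the sample size are placeholders):

import datarobot as dr

model = dr.Model.get('project-id', 'model-id')
job = model.retrain(sample_pct=80)
retrained_model = job.get_result_when_complete()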

set_prediction_threshold(threshold)

Set a custom prediction threshold for the model.

May not be used once prediction_threshold_read_only is True for this model.

Parameters:
threshold : float

Only used for binary classification projects. The threshold to use when deciding between the positive and negative classes when making predictions. Should be between 0.0 and 1.0 (inclusive).
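
For example (the threshold shown is a placeholder):

import datarobot as dr

model = dr.Model.get('project-id', 'model-id')
if not model.prediction_threshold_read_only:
    model.set_prediction_threshold(0.6)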

star_model() → None

Mark the model as starred.

Model stars propagate to the web application and the API, and can be used to filter when listing models.

start_advanced_tuning_session()

Start an Advanced Tuning session. Returns an object that helps set up arguments for an Advanced Tuning model execution.

As of v2.17, all models other than blenders, open source, prime, baseline and user-created support Advanced Tuning.

Returns:
AdvancedTuningSession

Session for setting up and running Advanced Tuning on a model
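
A minimal sketch of an Advanced Tuning session; the task and parameter names below are hypothetical and should be looked up with get_advanced_tuning_parameters for a real model:

import datarobot as dr

model = dr.Model.get('project-id', 'model-id')
session = model.start_advanced_tuning_session()
# 'eXtreme Gradient Boosted Trees Classifier' / 'learning_rate' are illustrative names only
session.set_parameter(value=0.02,
                      task_name='eXtreme Gradient Boosted Trees Classifier',
                      parameter_name='learning_rate')
model_job = session.run()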

unstar_model() → None

Unmark the model as starred.

Model stars propagate to the web application and the API, and can be used to filter when listing models.

BlenderModel

class datarobot.models.BlenderModel(id=None, processes=None, featurelist_name=None, featurelist_id=None, project_id=None, sample_pct=None, training_row_count=None, training_duration=None, training_start_date=None, training_end_date=None, model_type=None, model_category=None, is_frozen=None, blueprint_id=None, metrics=None, model_ids=None, blender_method=None, monotonic_increasing_featurelist_id=None, monotonic_decreasing_featurelist_id=None, supports_monotonic_constraints=None, is_starred=None, prediction_threshold=None, prediction_threshold_read_only=None, model_number=None, parent_model_id=None, supports_composable_ml=None)

Represents a blender model that combines prediction results from other models.

All durations are specified with a duration string such as those returned by the partitioning_methods.construct_duration_string helper method. Please see datetime partitioned project documentation for more information on duration strings.

Attributes:
id : str

the id of the model

project_id : str

the id of the project the model belongs to

processes : list of str

the processes used by the model

featurelist_name : str

the name of the featurelist used by the model

featurelist_id : str

the id of the featurelist used by the model

sample_pct : float

the percentage of the project dataset used in training the model

training_row_count : int or None

the number of rows of the project dataset used in training the model. In a datetime partitioned project, if specified, defines the number of rows used to train the model and evaluate backtest scores; if unspecified, either training_duration or training_start_date and training_end_date was used to determine that instead.

training_duration : str or None

only present for models in datetime partitioned projects. If specified, a duration string specifying the duration spanned by the data used to train the model and evaluate backtest scores.

training_start_date : datetime or None

only present for frozen models in datetime partitioned projects. If specified, the start date of the data used to train the model.

training_end_date : datetime or None

only present for frozen models in datetime partitioned projects. If specified, the end date of the data used to train the model.

model_type : str

what model this is, e.g. ‘DataRobot Prime’

model_category : str

what kind of model this is - always ‘blend’ for blender models

is_frozen : bool

whether this model is a frozen model

blueprint_id : str

the id of the blueprint used in this model

metrics : dict

a mapping from each metric to the model’s scores for that metric

model_ids : list of str

List of model ids used in blender

blender_method : str

Method used to blend results from underlying models

monotonic_increasing_featurelist_id : str

optional, the id of the featurelist that defines the set of features with a monotonically increasing relationship to the target. If None, no such constraints are enforced.

monotonic_decreasing_featurelist_id : str

optional, the id of the featurelist that defines the set of features with a monotonically decreasing relationship to the target. If None, no such constraints are enforced.

supports_monotonic_constraints : bool

optional, whether this model supports enforcing monotonic constraints

is_starred : bool

whether this model is marked as starred

prediction_threshold : float

for binary classification projects, the threshold used for predictions

prediction_threshold_read_only : bool

indicates whether modification of the prediction threshold is forbidden. Threshold modification is forbidden once a model has had a deployment created or predictions made via the dedicated prediction API.

model_number : integer

model number assigned to a model

parent_model_id : str or None

(New in version v2.20) the id of the model that tuning parameters are derived from

supports_composable_ml : bool or None

(New in version v2.26) whether this model is supported in Composable ML.

classmethod get(project_id, model_id)

Retrieve a specific blender.

Parameters:
project_id : str

The project’s id.

model_id : str

The model_id of the leaderboard item to retrieve.

Returns:
model : BlenderModel

The queried instance.
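
For example (IDs are placeholders):

import datarobot as dr

blender = dr.BlenderModel.get('project-id', 'model-id')
# Inspect how the blend was built and which models feed it
print(blender.blender_method, blender.model_ids)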

advanced_tune(params, description: Optional[str] = None) → ModelJob

Generate a new model with the specified advanced-tuning parameters

As of v2.17, all models other than blenders, open source, prime, baseline and user-created support Advanced Tuning.

Parameters:
params : dict

Mapping of parameter ID to parameter value. The list of valid parameter IDs for a model can be found by calling get_advanced_tuning_parameters(). This endpoint does not need to include values for all parameters. If a parameter is omitted, its current_value will be used.

description : str

Human-readable string describing the newly advanced-tuned model

Returns:
ModelJob

The created job to build the model

cross_validate()

Run cross validation on the model.

Note

To perform Cross Validation on a new model with new parameters, use train instead.

Returns:
ModelJob

The created job to build the model

delete() → None

Delete a model from the project’s leaderboard.

download_export(filepath: str) → None

Download an exportable model file for use in an on-premise DataRobot standalone prediction environment.

This function can only be used if model export is enabled, and will only be useful if you have an on-premise environment in which to import it.

Parameters:
filepath : str

The path at which to save the exported model file.

download_scoring_code(file_name, source_code=False)

Download the Scoring Code JAR.

Parameters:
file_name : str

File path where scoring code will be saved.

source_code : bool, optional

Set to True to download source code archive. It will not be executable.
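
A minimal sketch, assuming Scoring Code is available for this model (IDs and the file path are placeholders):

import datarobot as dr

model = dr.Model.get('project-id', 'model-id')
# Downloads the executable Scoring Code JAR; pass source_code=True for the source archive instead
model.download_scoring_code('model_scoring_code.jar')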

download_training_artifact(file_name)

Retrieve trained artifact(s) from a model containing one or more custom tasks.

Artifact(s) will be downloaded to the specified local filepath.

Parameters:
file_name : str

File path where trained model artifact(s) will be saved.

classmethod from_data(data: Union[Dict[str, Any], List[Dict[str, Any]]]) → T

Instantiate an object of this class using a dict.

Parameters:
data : dict

Correctly snake_cased keys and their values.

get_advanced_tuning_parameters() → AdvancedTuningParamsType

Get the advanced-tuning parameters available for this model.

As of v2.17, all models other than blenders, open source, prime, baseline and user-created support Advanced Tuning.

Returns:
dict

A dictionary describing the advanced-tuning parameters for the current model. There are two top-level keys, tuning_description and tuning_parameters.

tuning_description is an optional value. If not None, it indicates the user-specified description of this set of tuning parameters.

tuning_parameters is a list of dicts, each with the following keys:

  • parameter_name : (str) name of the parameter (unique per task, see below)
  • parameter_id : (str) opaque ID string uniquely identifying parameter
  • default_value : (*) default value of the parameter for the blueprint
  • current_value : (*) value of the parameter that was used for this model
  • task_name : (str) name of the task that this parameter belongs to
  • constraints: (dict) see the notes below
  • vertex_id: (str) ID of vertex that this parameter belongs to

Notes

The type of default_value and current_value is defined by the constraints structure. It will be a string or numeric Python type.

constraints is a dict with at least one, possibly more, of the following keys. The presence of a key indicates that the parameter may take on the specified type. (If a key is absent, this means that the parameter may not take on the specified type.) If a key on constraints is present, its value will be a dict containing all of the fields described below for that key.

"constraints": {
    "select": {
        "values": [<list(basestring or number) : possible values>]
    },
    "ascii": {},
    "unicode": {},
    "int": {
        "min": <int : minimum valid value>,
        "max": <int : maximum valid value>,
        "supports_grid_search": <bool : True if Grid Search may be
                                        requested for this param>
    },
    "float": {
        "min": <float : minimum valid value>,
        "max": <float : maximum valid value>,
        "supports_grid_search": <bool : True if Grid Search may be
                                        requested for this param>
    },
    "intList": {
        "min_length": <int : minimum valid length>,
        "max_length": <int : maximum valid length>
        "min_val": <int : minimum valid value>,
        "max_val": <int : maximum valid value>
        "supports_grid_search": <bool : True if Grid Search may be
                                        requested for this param>
    },
    "floatList": {
        "min_length": <int : minimum valid length>,
        "max_length": <int : maximum valid length>
        "min_val": <float : minimum valid value>,
        "max_val": <float : maximum valid value>
        "supports_grid_search": <bool : True if Grid Search may be
                                        requested for this param>
    }
}

The keys have meaning as follows:

  • select: Rather than specifying a specific data type, if present, it indicates that the parameter is permitted to take on any of the specified values. Listed values may be of any string or real (non-complex) numeric type.
  • ascii: The parameter may be a unicode object that encodes simple ASCII characters. (A-Z, a-z, 0-9, whitespace, and certain common symbols.) In addition to listed constraints, ASCII keys currently may not contain either newlines or semicolons.
  • unicode: The parameter may be any Python unicode object.
  • int: The value may be an object of type int within the specified range (inclusive). Please note that the value will be passed around using the JSON format, and some JSON parsers have undefined behavior with integers outside of the range [-(2**53)+1, (2**53)-1].
  • float: The value may be an object of type float within the specified range (inclusive).
  • intList, floatList: The value may be a list of int or float objects, respectively, following constraints as specified respectively by the int and float types (above).

Many parameters only specify one key under constraints. If a parameter specifies multiple keys, the parameter may take on any value permitted by any key.
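
As an illustration, a minimal sketch that lists the available parameters and then submits a single override through advanced_tune, for models that support Advanced Tuning (blenders do not); the override shown simply reuses a parameter's current value and is purely illustrative:

import datarobot as dr

model = dr.Model.get('project-id', 'model-id')
tuning = model.get_advanced_tuning_parameters()
for param in tuning['tuning_parameters']:
    print(param['parameter_id'], param['parameter_name'], param['current_value'])

# Advanced-tune using one parameter ID from the listing above
first_param = tuning['tuning_parameters'][0]
job = model.advanced_tune({first_param['parameter_id']: first_param['current_value']},
                          description='copy tuned via the API')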

get_all_confusion_charts(fallback_to_parent_insights=False)

Retrieve a list of all confusion matrices available for the model.

Parameters:
fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return confusion chart data for this model’s parent for any source that is not available for this model and if this has a defined parent model. If omitted or False, or this model has no parent, this will not attempt to retrieve any data from this model’s parent.

Returns:
list of ConfusionChart

Data for all available confusion charts for model.

get_all_lift_charts(fallback_to_parent_insights=False)

Retrieve a list of all Lift charts available for the model.

Parameters:
fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return lift chart data for this model’s parent for any source that is not available for this model and if this model has a defined parent model. If omitted or False, or this model has no parent, this will not attempt to retrieve any data from this model’s parent.

Returns:
list of LiftChart

Data for all available model lift charts.

Raises:
ClientError

If the insight is not available for this model

get_all_multiclass_lift_charts(fallback_to_parent_insights=False)

Retrieve a list of all multiclass Lift charts available for the model.

Parameters:
fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return lift chart data for this model’s parent for any source that is not available for this model and if this model has a defined parent model. If omitted or False, or this model has no parent, this will not attempt to retrieve any data from this model’s parent.

Returns:
list of LiftChart

Data for all available model lift charts.

Raises:
ClientError

If the insight is not available for this model

get_all_residuals_charts(fallback_to_parent_insights=False)

Retrieve a list of all residuals charts available for the model.

Parameters:
fallback_to_parent_insights : bool

Optional, if True, this will return residuals chart data for this model’s parent for any source that is not available for this model and if this model has a defined parent model. If omitted or False, or this model has no parent, this will not attempt to retrieve any data from this model’s parent.

Returns:
list of ResidualsChart

Data for all available model residuals charts.

get_all_roc_curves(fallback_to_parent_insights=False)

Retrieve a list of all ROC curves available for the model.

Parameters:
fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return ROC curve data for this model’s parent for any source that is not available for this model and if this model has a defined parent model. If omitted or False, or this model has no parent, this will not attempt to retrieve any data from this model’s parent.

Returns:
list of RocCurve

Data for all available model ROC curves.

get_confusion_chart(source, fallback_to_parent_insights=False)

Retrieve the model’s confusion matrix for the specified source.

Parameters:
source : str

Confusion chart source. Check datarobot.enums.CHART_DATA_SOURCE for possible values.

fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return confusion chart data for this model’s parent if the confusion chart is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return insight data from this model’s parent.

Returns:
ConfusionChart

Model ConfusionChart data

Raises:
ClientError

If the insight is not available for this model

get_cross_class_accuracy_scores()

Retrieves a list of Cross Class Accuracy scores for the model.

Returns:
json

get_cross_validation_scores(partition=None, metric=None)

Return a dictionary, keyed by metric, showing cross validation scores per partition.

Cross Validation should already have been performed using cross_validate or train.

Note

Models that computed cross validation before this feature was added will need to be deleted and retrained before this method can be used.

Parameters:
partition : float

Optional. The id of the partition to filter results by (1, 2, 3.0, 4.0, etc.); may be a positive whole number or a float value. 0 corresponds to the validation partition.

metric: unicode

Optional. Name of the metric to filter the resulting cross validation scores by.

Returns:
cross_validation_scores: dict

A dictionary keyed by metric showing cross validation scores per partition.
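
For example, a minimal sketch of running cross validation and reading the scores back (IDs are placeholders):

import datarobot as dr

model = dr.Model.get('project-id', 'model-id')
cv_job = model.cross_validate()
cv_job.wait_for_completion()
# Keyed by metric, showing cross validation scores per partition (see above)
scores = model.get_cross_validation_scores(metric='AUC')
print(scores)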

get_data_disparity_insights(feature, class_name1, class_name2)

Retrieve a list of Cross Class Data Disparity insights for the model.

Parameters:
feature : str

Bias and Fairness protected feature name.

class_name1 : str

One of the compared classes

class_name2 : str

Another compared class

Returns:
json

get_fairness_insights(fairness_metrics_set=None, offset=0, limit=100)

Retrieve a list of Per Class Bias insights for the model.

Parameters:
fairness_metrics_set : str, optional

Can be one of datarobot.enums.FairnessMetricsSet. The fairness metric used to calculate the fairness scores.

offset : int, optional

Number of items to skip.

limit : int, optional

Number of items to return.

Returns:
json

get_feature_effect(source: str)

Retrieve Feature Effects for the model.

Feature Effects provides partial dependence and predicted vs actual values for top-500 features ordered by feature impact score.

The partial dependence shows marginal effect of a feature on the target variable after accounting for the average effects of all other predictive features. It indicates how, holding all other variables except the feature of interest as they were, the value of this feature affects your prediction.

Requires that Feature Effects has already been computed with request_feature_effect.

See get_feature_effect_metadata for retrieving information on the available sources.

Parameters:
source : string

The source Feature Effects are retrieved for.

Returns:
feature_effects : FeatureEffects

The feature effects data.

Raises:
ClientError (404)

If the feature effects have not been computed or source is not a valid value.
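
A minimal sketch of computing and retrieving Feature Effects for the validation source (IDs are placeholders; the available sources should be confirmed with get_feature_effect_metadata):

import datarobot as dr

model = dr.Model.get('project-id', 'model-id')
# Requests the computation if needed, then waits for and returns the result
feature_effects = model.get_or_request_feature_effect(source='validation')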

get_feature_effect_metadata()

Retrieve Feature Effects metadata. Response contains status and available model sources.

  • Feature Effects for the training partition is always available, with the exception of older projects that only supported Feature Effects for validation.
  • When a model is trained into validation or holdout without stacked predictions (i.e., no out-of-sample predictions in those partitions), Feature Effects is not available for validation or holdout.
  • Feature Effects for holdout is not available when holdout was not unlocked for the project.

Use source to retrieve Feature Effects, selecting one of the provided sources.

Returns:
feature_effect_metadata: FeatureEffectMetadata

get_feature_effects_multiclass(source: str = 'training', class_: Optional[str] = None)

Retrieve Feature Effects for the multiclass model.

Feature Effects provide partial dependence and predicted vs actual values for top-500 features ordered by feature impact score.

The partial dependence shows marginal effect of a feature on the target variable after accounting for the average effects of all other predictive features. It indicates how, holding all other variables except the feature of interest as they were, the value of this feature affects your prediction.

Requires that Feature Effects has already been computed with request_feature_effect.

See get_feature_effect_metadata for retrieving information on the available sources.

Parameters:
source : str

The source Feature Effects are retrieved for.

class_ : str or None

The class name Feature Effects are retrieved for.

Returns:
list

The list of multiclass feature effects.

Raises:
ClientError (404)

If Feature Effects have not been computed or source is not a valid value.

get_feature_fit(source: str)

Retrieve Feature Fit for the model.

Feature Fit provides partial dependence and predicted vs actual values for top-500 features ordered by feature importance score.

The partial dependence shows marginal effect of a feature on the target variable after accounting for the average effects of all other predictive features. It indicates how, holding all other variables except the feature of interest as they were, the value of this feature affects your prediction.

Requires that Feature Fit has already been computed with request_feature_effect.

See get_feature_fit_metadata for retrieving information on the available sources.

Parameters:
source : string

The source Feature Fit is retrieved for. One of the values in FeatureFitMetadata.sources.

Returns:
feature_fit : FeatureFit

The feature fit data.

Raises:
ClientError (404)

If the feature fit has not been computed or source is not a valid value.

get_feature_fit_metadata()

Retrieve Feature Fit metadata. Response contains status and available model sources.

  • Feature Fit for the training partition is always available, with the exception of older projects that only supported Feature Fit for validation.
  • When a model is trained into validation or holdout without stacked predictions (i.e., no out-of-sample predictions in those partitions), Feature Fit is not available for validation or holdout.
  • Feature Fit for holdout is not available when no holdout is configured for the project.

Use source to retrieve Feature Fit, selecting one of the provided sources.

Returns:
feature_fit_metadata: FeatureFitMetadata

get_feature_impact(with_metadata: bool = False)

Retrieve the computed Feature Impact results, a measure of the relevance of each feature in the model.

Feature Impact is computed for each column by creating new data with that column randomly permuted (but the others left unchanged), and seeing how the error metric score for the predictions is affected. The ‘impactUnnormalized’ is how much worse the error metric score is when making predictions on this modified data. The ‘impactNormalized’ is normalized so that the largest value is 1. In both cases, larger values indicate more important features.

If a feature is a redundant feature, i.e. once other features are considered it doesn’t contribute much in addition, the ‘redundantWith’ value is the name of the feature that has the highest correlation with this feature. Note that redundancy detection is only available for jobs run after the addition of this feature. When retrieving data that predates this functionality, a NoRedundancyImpactAvailable warning will be issued.

Elsewhere this technique is sometimes called ‘Permutation Importance’.

Requires that Feature Impact has already been computed with request_feature_impact.

Parameters:
with_metadata : bool

The flag indicating if the result should include the metadata as well.

Returns:
list or dict

The feature impact data response depends on the with_metadata parameter. The response is either a dict with metadata and a list with actual data or just a list with that data.

Each list item is a dict with the keys featureName, impactNormalized, impactUnnormalized, redundantWith, and count.

For dict response available keys are:

  • featureImpacts - Feature Impact data as a dictionary. Each item is a dict with
    the keys: featureName, impactNormalized, impactUnnormalized, and redundantWith.
  • shapBased - A boolean that indicates whether Feature Impact was calculated using
    Shapley values.
  • ranRedundancyDetection - A boolean that indicates whether redundant feature
    identification was run while calculating this Feature Impact.
  • rowCount - An integer or None indicating the number of rows used to calculate
    Feature Impact. For Feature Impact calculated with the default logic, without specifying rowCount, None is returned here.
  • count - An integer with the number of features under featureImpacts.

Raises:
ClientError (404)

If the feature impacts have not been computed.

get_features_used() → List[str]

Query the server to determine which features were used.

Note that the data returned by this method is possibly different than the names of the features in the featurelist used by this model. This method will return the raw features that must be supplied in order for predictions to be generated on a new set of data. The featurelist, in contrast, would also include the names of derived features.

Returns:
features : list of str

The names of the features used in the model.

get_frozen_child_models()

Retrieve the IDs for all models that are frozen from this model.

Returns:
A list of Models

get_labelwise_roc_curves(source, fallback_to_parent_insights=False)

Retrieve a list of LabelwiseRocCurve instances for a multilabel model, for the given source and all labels.

New in version v2.24.

Parameters:
source : str

ROC curve data source. Check datarobot.enums.CHART_DATA_SOURCE for possible values.

fallback_to_parent_insights : bool

Optional, if True, this will return ROC curve data for this model’s parent if the ROC curve is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return data from this model’s parent.

Returns:
list of LabelwiseRocCurve

Labelwise ROC Curve instances for source and all labels

Raises:
ClientError

If the insight is not available for this model

(New in version v3.0) TypeError

If the underlying project type is binary

get_leaderboard_ui_permalink() → str

Returns:
url : str

Permanent static hyperlink to this model at leaderboard.

get_lift_chart(source, fallback_to_parent_insights=False)

Retrieve the model Lift chart for the specified source.

Parameters:
source : str

Lift chart data source. Check datarobot.enums.CHART_DATA_SOURCE for possible values. (New in version v2.23) For time series and OTV models, also accepts values backtest_2, backtest_3, …, up to the number of backtests in the model.

fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return lift chart data for this model’s parent if the lift chart is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return insight data from this model’s parent.

Returns:
LiftChart

Model lift chart data

Raises:
ClientError

If the insight is not available for this model

get_missing_report_info()

Retrieve a report on missing training data that can be used to understand missing values treatment in the model. The report consists of missing value resolutions for the numeric or categorical features that were part of building the model.

Returns:
An iterable of MissingReportPerFeature

The queried model missing report, sorted by missing count (DESCENDING order).

get_model_blueprint_chart()

Retrieve a diagram that can be used to understand data flow in the blueprint.

Returns:
ModelBlueprintChart

The queried model blueprint chart.

get_model_blueprint_documents()

Get documentation for tasks used in this model.

Returns:
list of BlueprintTaskDocument

All documents available for the model.

get_multiclass_feature_impact()

For multiclass models it is possible to calculate feature impact separately for each target class. The method of calculation is exactly the same, applied in one-vs-all style for each target class.

Requires that Feature Impact has already been computed with request_feature_impact.

Returns:
feature_impacts : list of dict

The feature impact data. Each item is a dict with the keys ‘featureImpacts’ (list), ‘class’ (str). Each item in ‘featureImpacts’ is a dict with the keys ‘featureName’, ‘impactNormalized’, and ‘impactUnnormalized’, and ‘redundantWith’.

Raises:
ClientError (404)

If the multiclass feature impacts have not been computed.

get_multiclass_lift_chart(source, fallback_to_parent_insights=False)

Retrieve model Lift chart for the specified source.

Parameters:
source : str

Lift chart data source. Check datarobot.enums.CHART_DATA_SOURCE for possible values.

fallback_to_parent_insights : bool

Optional, if True, this will return lift chart data for this model’s parent if the lift chart is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return insight data from this model’s parent.

Returns:
list of LiftChart

Model lift chart data for each saved target class

Raises:
ClientError

If the insight is not available for this model

get_multilabel_lift_charts(source, fallback_to_parent_insights=False)

Retrieve model Lift charts for the specified source.

New in version v2.24.

Parameters:
source : str

Lift chart data source. Check datarobot.enums.CHART_DATA_SOURCE for possible values.

fallback_to_parent_insights : bool

Optional, if True, this will return lift chart data for this model’s parent if the lift chart is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return insight data from this model’s parent.

Returns:
list of LiftChart

Model lift chart data for each saved target class

Raises:
ClientError

If the insight is not available for this model

get_num_iterations_trained()

Retrieves the number of estimators trained by early-stopping tree-based models.

New in version v2.22.

Returns:
projectId: str

id of project containing the model

modelId: str

id of the model

data: array

list of numEstimatorsItem objects, one for each modeling stage.

numEstimatorsItem will be of the form:

stage: str

indicates the modeling stage (for multi-stage models); None for single-stage models

numIterations: int

the number of estimators or iterations trained by the model

get_or_request_feature_effect(source: str, max_wait: int = 600, row_count: Optional[int] = None)

Retrieve feature effect for the model, requesting a job if it hasn’t been run previously

See get_feature_effect_metadata for retrieving information on the available sources.

Parameters:
max_wait : int, optional

The maximum time to wait for a requested feature effect job to complete before erroring

row_count : int, optional

(New in version v2.21) The sample size to use for Feature Impact computation. Minimum is 10 rows. Maximum is 100000 rows or the training sample size of the model, whichever is less.

source : string

The source Feature Effects are retrieved for.

Returns:
feature_effects : FeatureEffects

The feature effects data.

get_or_request_feature_effects_multiclass(source, top_n_features=None, features=None, row_count=None, class_=None, max_wait=600)

Retrieve Feature Effects for the multiclass model, requesting a job if it hasn’t been run previously.

Parameters:
source : string

The source Feature Effects are retrieved for.

class_ : str or None

The class name Feature Effects are retrieved for.

row_count : int

The number of rows from dataset to use for Feature Impact calculation.

top_n_features : int or None

Number of top features (ranked by Feature Impact) used to calculate Feature Effects.

features : list or None

The list of features used to calculate Feature Effects.

max_wait : int, optional

The maximum time to wait for a requested Feature Effects job to complete before erroring.

Returns:
feature_effects : list of FeatureEffectsMulticlass

The list of multiclass feature effects data.

get_or_request_feature_fit(source: str, max_wait: int = 600)

Retrieve feature fit for the model, requesting a job if it hasn’t been run previously

See get_feature_fit_metadata for retrieving information on the available sources.

Parameters:
max_wait : int, optional

The maximum time to wait for a requested feature fit job to complete before erroring

source : string

The source Feature Fit is retrieved for. One of the values in FeatureFitMetadata.sources.

Returns:
feature_effects : FeatureFit

The feature fit data.

get_or_request_feature_impact(max_wait: int = 600, **kwargs)

Retrieve feature impact for the model, requesting a job if it hasn’t been run previously

Parameters:
max_wait : int, optional

The maximum time to wait for a requested feature impact job to complete before erroring

**kwargs

Arbitrary keyword arguments passed to request_feature_impact.

Returns:
feature_impacts : list or dict

The feature impact data. See get_feature_impact for the exact schema.

get_parameters()

Retrieve model parameters.

Returns:
ModelParameters

Model parameters for this model.

get_pareto_front()

Retrieve the Pareto Front for a Eureqa model.

This method is only supported for Eureqa models.

Returns:
ParetoFront

Model ParetoFront data

get_prime_eligibility()

Check if this model can be approximated with DataRobot Prime

Returns:
prime_eligibility : dict

a dict indicating whether a model can be approximated with DataRobot Prime (key can_make_prime) and why it may be ineligible (key message)

get_residuals_chart(source, fallback_to_parent_insights=False)

Retrieve model residuals chart for the specified source.

Parameters:
source : str

Residuals chart data source. Check datarobot.enums.CHART_DATA_SOURCE for possible values.

fallback_to_parent_insights : bool

Optional, if True, this will return residuals chart data for this model’s parent if the residuals chart is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return residuals data from this model’s parent.

Returns:
ResidualsChart

Model residuals chart data

Raises:
ClientError

If the insight is not available for this model

get_roc_curve(source, fallback_to_parent_insights=False)

Retrieve the ROC curve for a binary model for the specified source.

Parameters:
source : str

ROC curve data source. Check datarobot.enums.CHART_DATA_SOURCE for possible values. (New in version v2.23) For time series and OTV models, also accepts values backtest_2, backtest_3, …, up to the number of backtests in the model.

fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return ROC curve data for this model’s parent if the ROC curve is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return data from this model’s parent.

Returns:
RocCurve

Model ROC curve data

Raises:
ClientError

If the insight is not available for this model

(New in version v3.0) TypeError

If the underlying project type is multilabel

get_rulesets() → List[datarobot.models.ruleset.Ruleset]

List the rulesets approximating this model generated by DataRobot Prime

If this model hasn’t been approximated yet, will return an empty list. Note that these are rulesets approximating this model, not rulesets used to construct this model.

Returns:
rulesets : list of Ruleset

get_supported_capabilities()

Retrieves a summary of the capabilities supported by a model.

New in version v2.14.

Returns:
supportsBlending: bool

whether the model supports blending

supportsMonotonicConstraints: bool

whether the model supports monotonic constraints

hasWordCloud: bool

whether the model has word cloud data available

eligibleForPrime: bool

whether the model is eligible for Prime

hasParameters: bool

whether the model has parameters that can be retrieved

supportsCodeGeneration: bool

(New in version v2.18) whether the model supports code generation

supportsShap: bool

(New in version v2.18) True if the model supports the Shapley package, i.e. Shapley-based feature importance.

supportsEarlyStopping: bool

(New in version v2.22) True if this is an early stopping tree-based model and number of trained iterations can be retrieved.

get_uri() → str
Returns:
url : str

Permanent static hyperlink to this model at leaderboard.

get_word_cloud(exclude_stop_words=False)

Retrieve word cloud data for the model.

Parameters:
exclude_stop_words : bool, optional

Set to True if you want stopwords filtered out of the response.

Returns:
WordCloud

Word cloud data for the model.

open_in_browser() → None

Opens the class’ relevant web browser location. If the default browser is not available, the URL is logged.

Note: If text-mode browsers are used, the calling process will block until the user exits the browser.

open_model_browser() → None

Opens model at project leaderboard in web browser. Note: If text-mode browsers are used, the calling process will block until the user exits the browser.

request_approximation()

Request an approximation of this model using DataRobot Prime

This will create several rulesets that could be used to approximate this model. After comparing their scores and rule counts, the code used in the approximation can be downloaded and run locally.

Returns:
job : Job

the job generating the rulesets

request_cross_class_accuracy_scores()

Request Cross Class Accuracy scores to be computed for the model.

Returns:
status_id : str

A statusId of computation request.

request_data_disparity_insights(feature, compared_class_names)

Request data disparity insights to be computed for the model.

Parameters:
feature : str

Bias and Fairness protected feature name.

compared_class_names : list(str)

List of two classes to compare

Returns:
status_id : str

A statusId of computation request.

request_external_test(dataset_id: str, actual_value_column: Optional[str] = None)

Request external test to compute scores and insights on an external test dataset

Parameters:
dataset_id : string

The dataset to make predictions against (as uploaded from Project.upload_dataset)

actual_value_column : string, optional

(New in version v2.21) For time series unsupervised projects only. Actual value column can be used to calculate the classification metrics and insights on the prediction dataset. Can’t be provided with the forecast_point parameter.

Returns:
job : Job

a Job representing external dataset insights computation

request_fairness_insights(fairness_metrics_set=None)

Request fairness insights to be computed for the model.

Parameters:
fairness_metrics_set : str, optional

Can be one of datarobot.enums.FairnessMetricsSet. The fairness metric used to calculate the fairness scores.

Returns:
status_id : str

A statusId of computation request.

request_feature_effect(row_count: Optional[int] = None)

Request feature effects to be computed for the model.

See get_feature_effect for more information on the result of the job.

Parameters:
row_count : int

(New in version v2.21) The sample size to use for Feature Impact computation. Minimum is 10 rows. Maximum is 100000 rows or the training sample size of the model, whichever is less.

Returns:
job : Job

A Job representing the feature effect computation. To get the completed feature effect data, use job.get_result or job.get_result_when_complete.

Raises:
JobAlreadyRequested (422)

If the feature effects have already been requested.

request_feature_effects_multiclass(row_count: Optional[int] = None, top_n_features: Optional[int] = None, features=None)

Request Feature Effects computation for the multiclass model.

See get_feature_effect for more information on the result of the job.

Parameters:
row_count : int

The number of rows from dataset to use for Feature Impact calculation.

top_n_features : int or None

Number of top features (ranked by feature impact) used to calculate Feature Effects.

features : list or None

The list of features used to calculate Feature Effects.

Returns:
job : Job

A Job representing Feature Effect computation. To get the completed Feature Effect data, use job.get_result or job.get_result_when_complete.

request_feature_fit()

Request feature fit to be computed for the model.

See get_feature_effect for more information on the result of the job.

Returns:
job : Job

A Job representing the feature fit computation. To get the completed feature fit data, use job.get_result or job.get_result_when_complete.

Raises:
JobAlreadyRequested (422)

If the feature fit has already been requested.

request_feature_impact(row_count: Optional[int] = None, with_metadata: bool = False)

Request feature impacts to be computed for the model.

See get_feature_impact for more information on the result of the job.

Parameters:
row_count : int

The sample size (specified in rows) to use for Feature Impact computation. This is not supported for unsupervised, multiclass (which has a separate method), and time series projects.

Returns:
job : Job

A Job representing the feature impact computation. To get the completed feature impact data, use job.get_result or job.get_result_when_complete.

Raises:
JobAlreadyRequested (422)

If the feature impacts have already been requested.

request_frozen_datetime_model(training_row_count: Optional[int] = None, training_duration: Optional[str] = None, training_start_date: Optional[datetime] = None, training_end_date: Optional[datetime] = None, time_window_sample_pct: Optional[int] = None, sampling_method: Optional[str] = None) → ModelJob

Train a new frozen model with parameters from this model.

Requires that this model belongs to a datetime partitioned project. If it does not, an error will occur when submitting the job.

Frozen models use the same tuning parameters as their parent model instead of independently optimizing them to allow efficiently retraining models on larger amounts of the training data.

In addition to training_row_count and training_duration, frozen datetime models may be trained on an exact date range. Only one of training_row_count, training_duration, or training_start_date and training_end_date should be specified.

Models specified using training_start_date and training_end_date are the only ones that can be trained into the holdout data (once the holdout is unlocked).

All durations should be specified with a duration string such as those returned by the partitioning_methods.construct_duration_string helper method. Please see datetime partitioned project documentation for more information on duration strings.

Parameters:
training_row_count : int, optional

the number of rows of data that should be used to train the model. If specified, training_duration may not be specified.

training_duration : str, optional

a duration string specifying what time range the data used to train the model should span. If specified, training_row_count may not be specified.

training_start_date : datetime.datetime, optional

the start date of the data to train the model on. Only rows occurring at or after this datetime will be used. If training_start_date is specified, training_end_date must also be specified.

training_end_date : datetime.datetime, optional

the end date of the data to train the model on. Only rows occurring strictly before this datetime will be used. If training_end_date is specified, training_start_date must also be specified.

time_window_sample_pct : int, optional

may only be specified when the requested model is a time window (e.g. duration or start and end dates). An integer between 1 and 99 indicating the percentage to sample by within the window. The points kept are determined by a random uniform sample. If specified, training_duration must also be specified; otherwise an error will occur.

sampling_method : str, optional

(New in version v2.23) defines the way training data is selected. Can be either random or latest. In combination with training_row_count, defines how rows are selected from the backtest (latest by default). When training data is defined using a time range (training_duration or use_project_settings), this setting changes the way time_window_sample_pct is applied (random by default). Applicable to OTV projects only.

Returns:
model_job : ModelJob

the modeling job training a frozen model

request_frozen_model(sample_pct: Optional[float] = None, training_row_count: Optional[int] = None) → ModelJob

Train a new frozen model with parameters from this model

Note

This method only works if project the model belongs to is not datetime partitioned. If it is, use request_frozen_datetime_model instead.

Frozen models use the same tuning parameters as their parent model instead of independently optimizing them to allow efficiently retraining models on larger amounts of the training data.

Parameters:
sample_pct : float

optional, the percentage of the dataset to use with the model. If not provided, will use the value from this model.

training_row_count : int

(New in version v2.9) optional, the integer number of rows of the dataset to use with the model. Only one of sample_pct and training_row_count should be specified.

Returns:
model_job : ModelJob

the modeling job training a frozen model
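
Examples

A minimal sketch with placeholder IDs, requesting a frozen copy of this model at a smaller sample size:

model = datarobot.Model.get('project-id', 'model-id')
model_job = model.request_frozen_model(sample_pct=50)
frozen_model = model_job.get_result_when_complete()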

request_predictions(dataset_id: Optional[str] = None, dataset: Optional[Dataset] = None, dataframe: Optional[pd.DataFrame] = None, file_path: Optional[str] = None, file: Optional[IOBase] = None, include_prediction_intervals: Optional[bool] = None, prediction_intervals_size: Optional[int] = None, forecast_point: Optional[datetime] = None, predictions_start_date: Optional[datetime] = None, predictions_end_date: Optional[datetime] = None, actual_value_column: Optional[str] = None, explanation_algorithm: Optional[str] = None, max_explanations: Optional[int] = None, max_ngram_explanations: Optional[Union[int, str]] = None) → PredictJob

Requests predictions against a previously uploaded dataset.

Parameters:
dataset_id : string, optional

The ID of the dataset to make predictions against (as uploaded from Project.upload_dataset)

dataset : Dataset, optional

The dataset to make predictions against (as uploaded from Project.upload_dataset)

dataframe : pd.DataFrame, optional

(New in v3.0) The dataframe to make predictions against

file_path : str, optional

(New in v3.0) Path to file to make predictions against

file : IOBase, optional

(New in v3.0) File to make predictions against

include_prediction_intervals : bool, optional

(New in v2.16) For time series projects only. Specifies whether prediction intervals should be calculated for this request. Defaults to True if prediction_intervals_size is specified, otherwise defaults to False.

prediction_intervals_size : int, optional

(New in v2.16) For time series projects only. Represents the percentile to use for the size of the prediction intervals. Defaults to 80 if include_prediction_intervals is True. Prediction intervals size must be between 1 and 100 (inclusive).

forecast_point : datetime.datetime or None, optional

(New in version v2.20) For time series projects only. This is the default point relative to which predictions will be generated, based on the forecast window of the project. See the time series prediction documentation for more information.

predictions_start_date : datetime.datetime or None, optional

(New in version v2.20) For time series projects only. The start date for bulk predictions. Note that this parameter is for generating historical predictions using the training data. This parameter should be provided in conjunction with predictions_end_date. Can’t be provided with the forecast_point parameter.

predictions_end_date : datetime.datetime or None, optional

(New in version v2.20) For time series projects only. The end date for bulk predictions, exclusive. Note that this parameter is for generating historical predictions using the training data. This parameter should be provided in conjunction with predictions_start_date. Can’t be provided with the forecast_point parameter.

actual_value_column : string, optional

(New in version v2.21) For time series unsupervised projects only. Actual value column can be used to calculate the classification metrics and insights on the prediction dataset. Can’t be provided with the forecast_point parameter.

explanation_algorithm : str, optional

(New in version v2.21) If set to 'shap', the response will include prediction explanations based on the SHAP explainer (SHapley Additive exPlanations). Defaults to null (no prediction explanations).

max_explanations : int, optional

(New in version v2.21) Specifies the maximum number of explanation values that should be returned for each row, ordered by absolute value, greatest to least. If null, no limit. In the case of 'shap': if the number of features is greater than the limit, the sum of remaining values will also be returned as shapRemainingTotal. Defaults to null. Cannot be set if explanation_algorithm is omitted.

max_ngram_explanations : int or str, optional

(New in version v2.29) Specifies the maximum number of text explanation values that should be returned. If set to all, text explanations will be computed and all the ngram explanations will be returned. If set to a non-zero positive integer, text explanations will be computed and that many ngram explanations, sorted in descending order, will be returned. By default, text explanations are not computed.

Returns:
job : PredictJob

The job computing the predictions
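
Examples

A hedged sketch of a typical flow with placeholder IDs and file names: upload a scoring dataset to the project, then request predictions with SHAP explanations:

project = datarobot.Project.get('project-id')
model = datarobot.Model.get('project-id', 'model-id')
prediction_dataset = project.upload_dataset('./data_to_predict.csv')
predict_job = model.request_predictions(
    dataset_id=prediction_dataset.id,
    explanation_algorithm='shap',
    max_explanations=10,
)
predictions = predict_job.get_result_when_complete()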

request_training_predictions(data_subset, explanation_algorithm=None, max_explanations=None)

Start a job to build training predictions

Parameters:
data_subset : str

data set definition to build predictions on. Choices are:

  • dr.enums.DATA_SUBSET.ALL or string all for all data available. Not valid for
    models in datetime partitioned projects
  • dr.enums.DATA_SUBSET.VALIDATION_AND_HOLDOUT or string validationAndHoldout for
    all data except training set. Not valid for models in datetime partitioned projects
  • dr.enums.DATA_SUBSET.HOLDOUT or string holdout for holdout data set only
  • dr.enums.DATA_SUBSET.ALL_BACKTESTS or string allBacktests for downloading
    the predictions for all backtest validation folds. Requires the model to have successfully scored all backtests. Datetime partitioned projects only.
explanation_algorithm : dr.enums.EXPLANATIONS_ALGORITHM

(New in v2.21) Optional. If set to dr.enums.EXPLANATIONS_ALGORITHM.SHAP, the response will include prediction explanations based on the SHAP explainer (SHapley Additive exPlanations). Defaults to None (no prediction explanations).

max_explanations : int

(New in v2.21) Optional. Specifies the maximum number of explanation values that should be returned for each row, ordered by absolute value, greatest to least. In the case of dr.enums.EXPLANATIONS_ALGORITHM.SHAP: If not set, explanations are returned for all features. If the number of features is greater than the max_explanations, the sum of remaining values will also be returned as shap_remaining_total. Max 100. Defaults to null for datasets narrower than 100 columns, defaults to 100 for datasets wider than 100 columns. Is ignored if explanation_algorithm is not set.

Returns:
Job

an instance of the created async job
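
Examples

A minimal sketch with placeholder IDs, requesting training predictions on the holdout; the final line assumes the returned TrainingPredictions object exposes get_all_as_dataframe:

import datarobot as dr

model = dr.Model.get('project-id', 'model-id')
job = model.request_training_predictions(dr.enums.DATA_SUBSET.HOLDOUT)
training_predictions = job.get_result_when_complete()
df = training_predictions.get_all_as_dataframe()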

request_transferable_export(prediction_intervals_size: Optional[int] = None) → Job

Request generation of an exportable model file for use in an on-premise DataRobot standalone prediction environment.

This function can only be used if model export is enabled, and will only be useful if you have an on-premise environment in which to import it.

This function does not download the exported file. Use download_export for that.

Parameters:
prediction_intervals_size : int, optional

(New in v2.19) For time series projects only. Represents the percentile to use for the size of the prediction intervals. Prediction intervals size must be between 1 and 100 (inclusive).

Returns:
Job

Examples

model = datarobot.Model.get('project-id', 'model-id')
job = model.request_transferable_export()
job.wait_for_completion()
model.download_export('my_exported_model.drmodel')

# Client must be configured to use standalone prediction server for import:
datarobot.Client(token='my-token-at-standalone-server',
                 endpoint='standalone-server-url/api/v2')

imported_model = datarobot.ImportedModel.create('my_exported_model.drmodel')
retrain(sample_pct: Optional[float] = None, featurelist_id: Optional[str] = None, training_row_count: Optional[int] = None, n_clusters: Optional[int] = None) → ModelJob

Submit a job to the queue to retrain this model on a different sample size or featurelist.

Parameters:
sample_pct: float, optional

The sample size in percent (1 to 100) to use in training. If this parameter is used then training_row_count should not be given.

featurelist_id : str, optional

The featurelist id

training_row_count : int, optional

The number of rows used to train the model. If this parameter is used, then sample_pct should not be given.

n_clusters: int, optional

(new in version 2.27) number of clusters to use in an unsupervised clustering model. This parameter is used only for unsupervised clustering models that do not determine the number of clusters automatically.

Returns:
job : ModelJob

The created job that is retraining the model
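
Examples

A minimal sketch with placeholder IDs and an illustrative row count:

model = datarobot.Model.get('project-id', 'model-id')
job = model.retrain(training_row_count=10000)
retrained_model = job.get_result_when_complete()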

set_prediction_threshold(threshold)

Set a custom prediction threshold for the model.

May not be used once prediction_threshold_read_only is True for this model.

Parameters:
threshold : float

only used for binary classification projects. The threshold to use when deciding between the positive and negative classes when making predictions. Should be between 0.0 and 1.0 (inclusive).
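
Examples

A minimal sketch for a binary classification model; the IDs and the threshold value are placeholders:

model = datarobot.Model.get('project-id', 'model-id')
if not model.prediction_threshold_read_only:
    model.set_prediction_threshold(0.6)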

star_model() → None

Mark the model as starred.

Model stars propagate to the web application and the API, and can be used to filter when listing models.

start_advanced_tuning_session()

Start an Advanced Tuning session. Returns an object that helps set up arguments for an Advanced Tuning model execution.

As of v2.17, all models other than blenders, open source, prime, baseline and user-created support Advanced Tuning.

Returns:
AdvancedTuningSession

Session for setting up and running Advanced Tuning on a model
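
Examples

A hedged sketch of a typical session; the parameter name and value below are illustrative and depend on the model's blueprint:

model = datarobot.Model.get('project-id', 'model-id')
tune = model.start_advanced_tuning_session()
task_names = tune.get_task_names()
# set a tuning parameter for one of the model's tasks, then submit the run
tune.set_parameter(task_name=task_names[0], parameter_name='colsample_bytree', value=0.8)
model_job = tune.run()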

train(sample_pct: Optional[float] = None, featurelist_id: Optional[str] = None, scoring_type: Optional[str] = None, training_row_count: Optional[int] = None, monotonic_increasing_featurelist_id: Union[str, object, None] = <object object>, monotonic_decreasing_featurelist_id: Union[str, object, None] = <object object>) → str

Train the blueprint used in model on a particular featurelist or amount of data.

This method creates a new training job for worker and appends it to the end of the queue for this project. After the job has finished you can get the newly trained model by retrieving it from the project leaderboard, or by retrieving the result of the job.

Either sample_pct or training_row_count can be used to specify the amount of data to use, but not both. If neither are specified, a default of the maximum amount of data that can safely be used to train any blueprint without going into the validation data will be selected.

In smart-sampled projects, sample_pct and training_row_count are assumed to be in terms of rows of the minority class.

Note

For datetime partitioned projects, see train_datetime instead.

Parameters:
sample_pct : float, optional

The amount of data to use for training, as a percentage of the project dataset from 0 to 100.

featurelist_id : str, optional

The identifier of the featurelist to use. If not defined, the featurelist of this model is used.

scoring_type : str, optional

Either validation or crossValidation (also dr.SCORING_TYPE.validation or dr.SCORING_TYPE.cross_validation). validation is available for every partitioning type, and indicates that the default model validation should be used for the project. If the project uses a form of cross-validation partitioning, crossValidation can also be used to indicate that all of the available training/validation combinations should be used to evaluate the model.

training_row_count : int, optional

The number of rows to use to train the requested model.

monotonic_increasing_featurelist_id : str

(new in version 2.11) optional, the id of the featurelist that defines the set of features with a monotonically increasing relationship to the target. Passing None disables increasing monotonicity constraint. Default (dr.enums.MONOTONICITY_FEATURELIST_DEFAULT) is the one specified by the blueprint.

monotonic_decreasing_featurelist_id : str

(new in version 2.11) optional, the id of the featurelist that defines the set of features with a monotonically decreasing relationship to the target. Passing None disables decreasing monotonicity constraint. Default (dr.enums.MONOTONICITY_FEATURELIST_DEFAULT) is the one specified by the blueprint.

Returns:
model_job_id : str

id of created job, can be used as parameter to ModelJob.get method or wait_for_async_model_creation function

Examples

project = Project.get('project-id')
model = Model.get('project-id', 'model-id')
model_job_id = model.train(training_row_count=project.max_train_rows)
train_datetime(featurelist_id: Optional[str] = None, training_row_count: Optional[int] = None, training_duration: Optional[str] = None, time_window_sample_pct: Optional[int] = None, monotonic_increasing_featurelist_id: Optional[Union[str, object]] = <object object>, monotonic_decreasing_featurelist_id: Optional[Union[str, object]] = <object object>, use_project_settings: bool = False, sampling_method: Optional[str] = None) → ModelJob

Trains this model on a different featurelist or sample size.

Requires that this model is part of a datetime partitioned project; otherwise, an error will occur.

All durations should be specified with a duration string such as those returned by the partitioning_methods.construct_duration_string helper method. Please see datetime partitioned project documentation for more information on duration strings.

Parameters:
featurelist_id : str, optional

the featurelist to use to train the model. If not specified, the featurelist of this model is used.

training_row_count : int, optional

the number of rows of data that should be used to train the model. If specified, neither training_duration nor use_project_settings may be specified.

training_duration : str, optional

a duration string specifying what time range the data used to train the model should span. If specified, neither training_row_count nor use_project_settings may be specified.

use_project_settings : bool, optional

(New in version v2.20) defaults to False. If True, indicates that the custom backtest partitioning settings specified by the user will be used to train the model and evaluate backtest scores. If specified, neither training_row_count nor training_duration may be specified.

time_window_sample_pct : int, optional

may only be specified when the requested model is a time window (e.g. duration or start and end dates). An integer between 1 and 99 indicating the percentage to sample by within the window. The points kept are determined by a random uniform sample. If specified, training_duration must also be specified; otherwise an error will occur.

sampling_method : str, optional

(New in version v2.23) defines the way training data is selected. Can be either random or latest. In combination with training_row_count defines how rows are selected from backtest (latest by default). When training data is defined using time range (training_duration or use_project_settings) this setting changes the way time_window_sample_pct is applied (random by default). Applicable to OTV projects only.

monotonic_increasing_featurelist_id : str, optional

(New in version v2.18) optional, the id of the featurelist that defines the set of features with a monotonically increasing relationship to the target. Passing None disables increasing monotonicity constraint. Default (dr.enums.MONOTONICITY_FEATURELIST_DEFAULT) is the one specified by the blueprint.

monotonic_decreasing_featurelist_id : str, optional

(New in version v2.18) optional, the id of the featurelist that defines the set of features with a monotonically decreasing relationship to the target. Passing None disables decreasing monotonicity constraint. Default (dr.enums.MONOTONICITY_FEATURELIST_DEFAULT) is the one specified by the blueprint.

Returns:
job : ModelJob

the created job to build the model
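
Examples

A minimal sketch with placeholder IDs, retraining this blueprint using the project's backtest partitioning settings:

model = datarobot.Model.get('project-id', 'model-id')
model_job = model.train_datetime(use_project_settings=True)
new_model = model_job.get_result_when_complete()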

unstar_model() → None

Unmark the model as starred.

Model stars propagate to the web application and the API, and can be used to filter when listing models.

DatetimeModel

class datarobot.models.DatetimeModel(id=None, processes=None, featurelist_name=None, featurelist_id=None, project_id=None, sample_pct=None, training_row_count=None, training_duration=None, training_start_date=None, training_end_date=None, time_window_sample_pct=None, sampling_method=None, model_type=None, model_category=None, is_frozen=None, blueprint_id=None, metrics=None, training_info=None, holdout_score=None, holdout_status=None, data_selection_method=None, backtests=None, monotonic_increasing_featurelist_id=None, monotonic_decreasing_featurelist_id=None, supports_monotonic_constraints=None, is_starred=None, prediction_threshold=None, prediction_threshold_read_only=None, effective_feature_derivation_window_start=None, effective_feature_derivation_window_end=None, forecast_window_start=None, forecast_window_end=None, windows_basis_unit=None, model_number=None, parent_model_id=None, use_project_settings=None, supports_composable_ml=None)

Represents a model from a datetime partitioned project

All durations are specified with a duration string such as those returned by the partitioning_methods.construct_duration_string helper method. Please see datetime partitioned project documentation for more information on duration strings.

Note that only one of training_row_count, training_duration, or training_start_date and training_end_date will be specified, depending on the data_selection_method of the model. Whichever method was selected determines the amount of data used to train on when making predictions and scoring the backtests and the holdout.

Attributes:
id : str

the id of the model

project_id : str

the id of the project the model belongs to

processes : list of str

the processes used by the model

featurelist_name : str

the name of the featurelist used by the model

featurelist_id : str

the id of the featurelist used by the model

sample_pct : float

the percentage of the project dataset used in training the model

training_row_count : int or None

If specified, an int specifying the number of rows used to train the model and evaluate backtest scores.

training_duration : str or None

If specified, a duration string specifying the duration spanned by the data used to train the model and evaluate backtest scores.

training_start_date : datetime or None

only present for frozen models in datetime partitioned projects. If specified, the start date of the data used to train the model.

training_end_date : datetime or None

only present for frozen models in datetime partitioned projects. If specified, the end date of the data used to train the model.

time_window_sample_pct : int or None

An integer between 1 and 99 indicating the percentage of sampling within the training window. The points kept are determined by a random uniform sample. If not specified, no sampling was done.

sampling_method : str or None

(New in v2.23) indicates the way training data has been selected (either how rows have been selected within backtest or how time_window_sample_pct has been applied).

model_type : str

what model this is, e.g. ‘Nystroem Kernel SVM Regressor’

model_category : str

what kind of model this is - ‘prime’ for DataRobot Prime models, ‘blend’ for blender models, and ‘model’ for other models

is_frozen : bool

whether this model is a frozen model

blueprint_id : str

the id of the blueprint used in this model

metrics : dict

a mapping from each metric to the model's scores for that metric. The keys in metrics are the different metrics used to evaluate the model, and the values are the results. The dictionaries inside of metrics will have the following keys:

  • validation - the score for a single backtest
  • crossValidation - always None
  • backtesting - the average score for all backtests if all are available and computed, or None otherwise
  • backtestingScores - a list of scores for all backtests, where the score is None if that backtest does not have a score available
  • holdout - the score for the holdout, or None if the holdout is locked or the score is unavailable

backtests : list of dict

describes what data was used to fit each backtest, the score for the project metric, and why the backtest score is unavailable if it is not provided.

data_selection_method : str

which of training_row_count, training_duration, or training_start_date and training_end_date were used to determine the data used to fit the model. One of ‘rowCount’, ‘duration’, or ‘selectedDateRange’.

training_info : dict

describes which data was used to train on when scoring the holdout and making predictions. training_info will have the following keys: holdout_training_start_date, holdout_training_duration, holdout_training_row_count, holdout_training_end_date, prediction_training_start_date, prediction_training_duration, prediction_training_row_count, prediction_training_end_date. Start and end dates will be datetimes, durations will be duration strings, and rows will be integers.

holdout_score : float or None

the score against the holdout, if available and the holdout is unlocked, according to the project metric.

holdout_status : string or None

the status of the holdout score, e.g. “COMPLETED”, “HOLDOUT_BOUNDARIES_EXCEEDED”. Unavailable if the holdout fold was disabled in the partitioning configuration.

monotonic_increasing_featurelist_id : str

optional, the id of the featurelist that defines the set of features with a monotonically increasing relationship to the target. If None, no such constraints are enforced.

monotonic_decreasing_featurelist_id : str

optional, the id of the featurelist that defines the set of features with a monotonically decreasing relationship to the target. If None, no such constraints are enforced.

supports_monotonic_constraints : bool

optional, whether this model supports enforcing monotonic constraints

is_starred : bool

whether this model marked as starred

prediction_threshold : float

for binary classification projects, the threshold used for predictions

prediction_threshold_read_only : bool

indicates whether modification of the prediction threshold is forbidden. Threshold modification is forbidden once a model has had a deployment created or predictions made via the dedicated prediction API.

effective_feature_derivation_window_start : int or None

(New in v2.16) For time series projects only. How many units of the windows_basis_unit into the past relative to the forecast point the user needs to provide history for at prediction time. This can differ from the feature_derivation_window_start set on the project due to the differencing method and period selected, or if the model is a time series native model such as ARIMA. Will be a negative integer in time series projects and None otherwise.

effective_feature_derivation_window_end : int or None

(New in v2.16) For time series projects only. How many units of the windows_basis_unit into the past relative to the forecast point the feature derivation window should end. Will be a non-positive integer in time series projects and None otherwise.

forecast_window_start : int or None

(New in v2.16) For time series projects only. How many units of the windows_basis_unit into the future relative to the forecast point the forecast window should start. Note that this field will be the same as what is shown in the project settings. Will be a non-negative integer in time series projects and None otherwise.

forecast_window_end : int or None

(New in v2.16) For time series projects only. How many units of the windows_basis_unit into the future relative to the forecast point the forecast window should end. Note that this field will be the same as what is shown in the project settings. Will be a non-negative integer in time series projects and None otherwise.

windows_basis_unit : str or None

(New in v2.16) For time series projects only. Indicates which unit is the basis for the feature derivation window and the forecast window. Note that this field will be the same as what is shown in the project settings. In time series projects, will be either the detected time unit or “ROW”, and None otherwise.

model_number : integer

model number assigned to a model

parent_model_id : str or None

(New in version v2.20) the id of the model that tuning parameters are derived from

use_project_settings : bool or None

(New in version v2.20) If True, indicates that the custom backtest partitioning settings specified by the user were used to train the model and evaluate backtest scores.

supports_composable_ml : bool or None

(New in version v2.26) whether this model is supported in Composable ML.

classmethod get(project, model_id)

Retrieve a specific datetime model.

If the project does not use datetime partitioning, a ClientError will occur.

Parameters:
project : str

the id of the project the model belongs to

model_id : str

the id of the model to retrieve

Returns:
model : DatetimeModel

the model

score_backtests()

Compute the scores for all available backtests.

Some backtests may be unavailable if the model is trained into their validation data.

Returns:
job : Job

a job tracking the backtest computation. When it is complete, all available backtests will have scores computed.
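
Examples

A minimal sketch with placeholder IDs: score the remaining backtests, then re-fetch the model to see the updated scores:

import datarobot as dr

model = dr.DatetimeModel.get('project-id', 'model-id')
job = model.score_backtests()
job.wait_for_completion()
# re-retrieve the model so metrics reflect the newly computed backtest scores
model = dr.DatetimeModel.get('project-id', 'model-id')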

cross_validate() → NoReturn

Inherited from Model. DatetimeModels cannot request cross validation scores; use backtests instead.

get_cross_validation_scores(partition=None, metric=None) → NoReturn

Inherited from Model. DatetimeModels cannot request cross validation scores; use backtests instead.

request_training_predictions(data_subset, *args, **kwargs)

Start a job that builds training predictions.

Parameters:
data_subset : str

data set definition to build predictions on. Choices are:

  • dr.enums.DATA_SUBSET.HOLDOUT for holdout data set only
  • dr.enums.DATA_SUBSET.ALL_BACKTESTS for downloading the predictions for all
    backtest validation folds. Requires the model to have successfully scored all backtests.
Returns:
Job

an instance of the created async job

get_series_accuracy_as_dataframe(offset=0, limit=100, metric=None, multiseries_value=None, order_by=None, reverse=False)

Retrieve series accuracy results for the specified model as a pandas.DataFrame.

Parameters:
offset : int, optional

The number of results to skip. Defaults to 0 if not specified.

limit : int, optional

The maximum number of results to return. Defaults to 100 if not specified.

metric : str, optional

The name of the metric to retrieve scores for. If omitted, the default project metric will be used.

multiseries_value : str, optional

If specified, only the series containing the given value in one of the series ID columns will be returned.

order_by : str, optional

Used for sorting the series. Attribute must be one of datarobot.enums.SERIES_ACCURACY_ORDER_BY.

reverse : bool, optional

Used for sorting the series. If True, will sort the series in descending order by the attribute specified by order_by.

Returns:
data

A pandas.DataFrame with the Series Accuracy for the specified model.

download_series_accuracy_as_csv(filename, encoding='utf-8', offset=0, limit=100, metric=None, multiseries_value=None, order_by=None, reverse=False)

Save series accuracy results for the specified model in a CSV file.

Parameters:
filename : str or file object

The path or file object to save the data to.

encoding : str, optional

A string representing the encoding to use in the output csv file. Defaults to ‘utf-8’.

offset : int, optional

The number of results to skip. Defaults to 0 if not specified.

limit : int, optional

The maximum number of results to return. Defaults to 100 if not specified.

metric : str, optional

The name of the metric to retrieve scores for. If omitted, the default project metric will be used.

multiseries_value : str, optional

If specified, only the series containing the given value in one of the series ID columns will be returned.

order_by : str, optional

Used for sorting the series. Attribute must be one of datarobot.enums.SERIES_ACCURACY_ORDER_BY.

reverse : bool, optional

Used for sorting the series. If True, will sort the series in descending order by the attribute specified by order_by.

compute_series_accuracy(compute_all_series=False)

Compute series accuracy for the model.

Parameters:
compute_all_series : bool, optional

Calculate accuracy for all series, or only the first 1000.

Returns:
Job

an instance of the created async job
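
Examples

A minimal sketch with placeholder IDs, computing series accuracy and then reading it back as a DataFrame:

import datarobot as dr

model = dr.DatetimeModel.get('project-id', 'model-id')
job = model.compute_series_accuracy()
job.wait_for_completion()
series_accuracy_df = model.get_series_accuracy_as_dataframe(limit=100)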

retrain(time_window_sample_pct=None, featurelist_id=None, training_row_count=None, training_duration=None, training_start_date=None, training_end_date=None, sampling_method=None)

Retrain an existing datetime model using a new training period for the model’s training set (with optional time window sampling) or a different feature list.

All durations should be specified with a duration string such as those returned by the partitioning_methods.construct_duration_string helper method. Please see datetime partitioned project documentation for more information on duration strings.

Parameters:
featurelist_id : str, optional

The ID of the featurelist to use.

training_row_count : int, optional

The number of rows to train the model on. If this parameter is used then sample_pct cannot be specified.

time_window_sample_pct : int, optional

An int between 1 and 99 indicating the percentage of sampling within the time window. The points kept are determined by a random uniform sample. If specified, training_row_count must not be specified and either training_duration or training_start_date and training_end_date must be specified.

training_duration : str, optional

A duration string representing the training duration for the submitted model. If specified then training_row_count, training_start_date, and training_end_date cannot be specified.

training_start_date : str, optional

A datetime string representing the start date of the data to use for training this model. If specified, training_end_date must also be specified, and training_duration cannot be specified. The value must be before the training_end_date value.

training_end_date : str, optional

A datetime string representing the end date of the data to use for training this model. If specified, training_start_date must also be specified, and training_duration cannot be specified. The value must be after the training_start_date value.

sampling_method : str, optional

(New in version v2.23) defines the way training data is selected. Can be either random or latest. In combination with training_row_count defines how rows are selected from backtest (latest by default). When training data is defined using time range (training_duration or use_project_settings) this setting changes the way time_window_sample_pct is applied (random by default). Applicable to OTV projects only.

Returns:
job : ModelJob

The created job that is retraining the model
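
Examples

A minimal sketch with placeholder IDs; the datetime strings are illustrative and must fall within the project's data:

import datarobot as dr

model = dr.DatetimeModel.get('project-id', 'model-id')
job = model.retrain(
    training_start_date='2020-01-01T00:00:00Z',
    training_end_date='2021-01-01T00:00:00Z',
)
retrained_model = job.get_result_when_complete()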

get_feature_effect_metadata()

Retrieve Feature Effect metadata for each backtest. Response contains status and available sources for each backtest of the model.

  • Each backtest is available for training and validation
  • If holdout is configured for the project it has holdout as backtestIndex. It has training and holdout sources available.

Start/stop models contain a single response item with startstop value for backtestIndex.

  • Feature Effect of training is always available (except for older projects, which support only Feature Effect for validation).
  • When a model is trained into validation or holdout without stacked prediction (e.g. no out-of-sample prediction in validation or holdout), Feature Effect is not available for validation or holdout.
  • Feature Effect for holdout is not available when there is no holdout configured for the project.

source is a required parameter when retrieving Feature Effect; one of the provided sources should be used.

backtestIndex is a required parameter when submitting a compute request and retrieving Feature Effect; one of the provided backtest indexes should be used.

Returns:
feature_effect_metadata: FeatureEffectMetadataDatetime
get_feature_fit_metadata()

Retrieve Feature Fit metadata for each backtest. Response contains status and available sources for each backtest of the model.

  • Each backtest is available for training and validation
  • If holdout is configured for the project it has holdout as backtestIndex. It has training and holdout sources available.

Start/stop models contain a single response item with startstop value for backtestIndex.

  • Feature Fit of training is always available (except for older projects, which support only Feature Fit for validation).
  • When a model is trained into validation or holdout without stacked prediction (e.g. no out-of-sample prediction in validation or holdout), Feature Fit is not available for validation or holdout.
  • Feature Fit for holdout is not available when there is no holdout configured for the project.

source is a required parameter when retrieving Feature Fit; one of the provided sources should be used.

backtestIndex is a required parameter when submitting a compute request and retrieving Feature Fit; one of the provided backtest indexes should be used.

Returns:
feature_effect_metadata: FeatureFitMetadataDatetime
request_feature_effect(backtest_index)

Request feature effects to be computed for the model.

See get_feature_effect for more information on the result of the job.

See get_feature_effect_metadata for retrieving information of backtest_index.

Parameters:
backtest_index: string, FeatureEffectMetadataDatetime.backtest_index.

The backtest index to retrieve Feature Effects for.

Returns:
job : Job

A Job representing the feature effect computation. To get the completed feature effect data, use job.get_result or job.get_result_when_complete.

Raises:
JobAlreadyRequested (422)

If the feature effect has already been requested.

get_feature_effect(source, backtest_index)

Retrieve Feature Effects for the model.

Feature Effects provides partial dependence and predicted vs actual values for top-500 features ordered by feature impact score.

The partial dependence shows marginal effect of a feature on the target variable after accounting for the average effects of all other predictive features. It indicates how, holding all other variables except the feature of interest as they were, the value of this feature affects your prediction.

Requires that Feature Effects has already been computed with request_feature_effect.

See get_feature_effect_metadata for retrieving information of source, backtest_index.

Parameters:
source: string

The source Feature Effects are retrieved for. One value of FeatureEffectMetadataDatetime.sources; use get_feature_effect_metadata to retrieve the available sources for Feature Effects.

backtest_index: string, FeatureEffectMetadataDatetime.backtest_index.

The backtest index to retrieve Feature Effects for.

Returns:
feature_effects: FeatureEffects

The feature effects data.

Raises:
ClientError (404)

If the feature effects have not been computed or source is not a valid value.

get_or_request_feature_effect(source, backtest_index, max_wait=600)

Retrieve feature effect for the model, requesting a job if it hasn’t been run previously

See get_feature_effect_metadata for retrieving information of source, backtest_index.

Parameters:
max_wait : int, optional

The maximum time to wait for a requested feature effect job to complete before erroring

source : string

The source Feature Effects are retrieved for. One value of FeatureEffectMetadataDatetime.sources; use get_feature_effect_metadata to retrieve the available sources for Feature Effects.

backtest_index: string, FeatureEffectMetadataDatetime.backtest_index.

The backtest index to retrieve Feature Effects for.

Returns:
feature_effects : FeatureEffects

The feature effects data.
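
Examples

A minimal sketch with placeholder IDs; the source and backtest index are illustrative and should come from get_feature_effect_metadata:

import datarobot as dr

model = dr.DatetimeModel.get('project-id', 'model-id')
fe_metadata = model.get_feature_effect_metadata()  # inspect for available sources and backtest indexes
feature_effects = model.get_or_request_feature_effect(source='training', backtest_index='0')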

request_feature_effects_multiclass(backtest_index, row_count=None, top_n_features=None, features=None)

Request feature effects to be computed for the multiclass datetime model.

See get_feature_effect for more information on the result of the job.

Parameters:
backtest_index : str

The backtest index to use for Feature Effects calculation.

row_count : int

The number of rows from dataset to use for Feature Impact calculation.

top_n_features : int or None

Number of top features (ranked by Feature Impact) used to calculate Feature Effects.

features : list or None

The list of features to use to calculate Feature Effects.

Returns:
job : Job

A Job representing Feature Effects computation. To get the completed Feature Effect data, use job.get_result or job.get_result_when_complete.

get_feature_effects_multiclass(backtest_index, source='training', class_=None)

Retrieve Feature Effects for the multiclass datetime model.

Feature Effects provides partial dependence and predicted vs actual values for top-500 features ordered by feature impact score.

The partial dependence shows marginal effect of a feature on the target variable after accounting for the average effects of all other predictive features. It indicates how, holding all other variables except the feature of interest as they were, the value of this feature affects your prediction.

Requires that Feature Effects has already been computed with request_feature_effect.

See get_feature_effect_metadata for retrieving information on the available sources.

Parameters:
backtest_index : str

The backtest index to retrieve Feature Effects for.

source : str

The source Feature Effects are retrieved for.

class_ : str or None

The class name Feature Effects are retrieved for.

Returns:
list

The list of multiclass Feature Effects.

Raises:
ClientError (404)

If the Feature Effects have not been computed or source is not a valid value.

get_or_request_feature_effects_multiclass(backtest_index, source, top_n_features=None, features=None, row_count=None, class_=None, max_wait=600)

Retrieve Feature Effects for a datetime multiclass model, and request a job if it hasn’t been run previously.

Parameters:
backtest_index : str

The backtest index to retrieve Feature Effects for.

source : string

The source from which Feature Effects are retrieved.

class_ : str or None

The class name Feature Effects are retrieved for.

row_count : int

The number of rows used from the dataset for Feature Impact calculation.

top_n_features : int or None

Number of top features (ranked by feature impact) used to calculate Feature Effects.

features : list or None

The list of features used to calculate Feature Effects.

max_wait : int, optional

The maximum time to wait for a requested feature effect job to complete before erroring.

Returns:
feature_effects : list of FeatureEffectsMulticlass

The list of multiclass feature effects data.

request_feature_fit(backtest_index)

Request feature fit to be computed for the model.

See get_feature_fit for more information on the result of the job.

See get_feature_fit_metadata for retrieving information of backtest_index.

Parameters:
backtest_index: string, FeatureFitMetadataDatetime.backtest_index.

The backtest index to retrieve Feature Fit for.

Returns:
job : Job

A Job representing the feature fit computation. To get the completed feature fit data, use job.get_result or job.get_result_when_complete.

Raises:
JobAlreadyRequested (422)

If the feature fit has already been requested.

get_feature_fit(source, backtest_index)

Retrieve Feature Fit for the model.

Feature Fit provides partial dependence and predicted vs actual values for top-500 features ordered by feature impact score.

The partial dependence shows marginal effect of a feature on the target variable after accounting for the average effects of all other predictive features. It indicates how, holding all other variables except the feature of interest as they were, the value of this feature affects your prediction.

Requires that Feature Fit has already been computed with request_feature_fit.

See get_feature_fit_metadata for retrieving information of source, backtest_index.

Parameters:
source: string

The source Feature Fit is retrieved for. One value of FeatureFitMetadataDatetime.sources; use get_feature_fit_metadata to retrieve the available sources for Feature Fit.

backtest_index: string, FeatureFitMetadataDatetime.backtest_index.

The backtest index to retrieve Feature Fit for.

Returns:
feature_fit: FeatureFit

The feature fit data.

Raises:
ClientError (404)

If the feature fit has not been computed or source is not a valid value.

get_or_request_feature_fit(source, backtest_index, max_wait=600)

Retrieve feature fit for the model, requesting a job if it hasn’t been run previously

See get_feature_fit_metadata for retrieving information of source, backtest_index.

Parameters:
max_wait : int, optional

The maximum time to wait for a requested feature fit job to complete before erroring

source : string

The source Feature Fit is retrieved for. One value of FeatureFitMetadataDatetime.sources; use get_feature_fit_metadata to retrieve the available sources for Feature Fit.

backtest_index: string, FeatureFitMetadataDatetime.backtest_index.

The backtest index to retrieve Feature Fit for.

Returns:
feature_fit : FeatureFit

The feature fit data.

calculate_prediction_intervals(prediction_intervals_size: int) → Job

Calculate prediction intervals for this DatetimeModel for the specified size.

New in version v2.19.

Parameters:
prediction_intervals_size : int

The prediction interval’s size to calculate for this model. See the prediction intervals documentation for more information.

Returns:
job : Job

a Job tracking the prediction intervals computation

get_calculated_prediction_intervals(offset=None, limit=None)

Retrieve a list of already-calculated prediction intervals for this model

New in version v2.19.

Parameters:
offset : int, optional

If provided, this many results will be skipped

limit : int, optional

If provided, at most this many results will be returned. If not provided, will return at most 100 results.

Returns:
list[int]

A descending-ordered list of already-calculated prediction interval sizes
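
Examples

A minimal sketch with placeholder IDs, calculating 80% prediction intervals and confirming they are available:

import datarobot as dr

model = dr.DatetimeModel.get('project-id', 'model-id')
job = model.calculate_prediction_intervals(prediction_intervals_size=80)
job.wait_for_completion()
available_sizes = model.get_calculated_prediction_intervals()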

compute_datetime_trend_plots(backtest=0, source='validation', forecast_distance_start=None, forecast_distance_end=None)

Computes datetime trend plots (Accuracy over Time, Forecast vs Actual, Anomaly over Time) for this model

New in version v2.25.

Parameters:
backtest : int or string, optional

Compute plots for a specific backtest (use the backtest index starting from zero). To compute plots for holdout, use dr.enums.DATA_SUBSET.HOLDOUT

source : string, optional

The source of the data for the backtest/holdout. Attribute must be one of dr.enums.SOURCE_TYPE

forecast_distance_start : int, optional:

The start of forecast distance range (forecast window) to compute. If not specified, the first forecast distance for this project will be used. Only for time series supervised models

forecast_distance_end : int, optional:

The end of forecast distance range (forecast window) to compute. If not specified, the last forecast distance for this project will be used. Only for time series supervised models

Returns:
job : Job

a Job tracking the datetime trend plots computation

Notes

  • Forecast distance specifies the number of time steps between the predicted point and the origin point.
  • For multiseries models, only the first 1000 series (in alphabetical order) and an average plot for them will be computed.
  • Maximum 100 forecast distances can be requested for calculation in time series supervised projects.
get_accuracy_over_time_plots_metadata(forecast_distance=None)

Retrieve Accuracy over Time plots metadata for this model.

New in version v2.25.

Parameters:
forecast_distance : int, optional

Forecast distance to retrieve the metadata for. If not specified, the first forecast distance for this project will be used. Only available for time series projects.

Returns:
metadata : AccuracyOverTimePlotsMetadata

a AccuracyOverTimePlotsMetadata representing Accuracy over Time plots metadata

get_accuracy_over_time_plot(backtest=0, source='validation', forecast_distance=None, series_id=None, resolution=None, max_bin_size=None, start_date=None, end_date=None, max_wait=600)

Retrieve Accuracy over Time plots for this model.

New in version v2.25.

Parameters:
backtest : int or string, optional

Retrieve plots for a specific backtest (use the backtest index starting from zero). To retrieve plots for holdout, use dr.enums.DATA_SUBSET.HOLDOUT

source : string, optional

The source of the data for the backtest/holdout. Attribute must be one of dr.enums.SOURCE_TYPE

forecast_distance : int, optional

Forecast distance to retrieve the plots for. If not specified, the first forecast distance for this project will be used. Only available for time series projects.

series_id : string, optional

The name of the series to retrieve for multiseries projects. If not provided an average plot for the first 1000 series will be retrieved.

resolution : string, optional

Specifies at which resolution the data should be binned. If not provided, an optimal resolution will be used to build chart data with a number of bins <= max_bin_size. One of dr.enums.DATETIME_TREND_PLOTS_RESOLUTION.

max_bin_size : int, optional

An int between 1 and 1000, which specifies the maximum number of bins for the retrieval. Default is 500.

start_date : datetime.datetime, optional

The start of the date range to return. If not specified, start date for requested plot will be used.

end_date : datetime.datetime, optional

The end of the date range to return. If not specified, end date for requested plot will be used.

max_wait : int or None, optional

The maximum time to wait for a compute job to complete before retrieving the plots. Default is dr.enums.DEFAULT_MAX_WAIT. If 0 or None, the plots would be retrieved without attempting the computation.

Returns:
plot : AccuracyOverTimePlot

a AccuracyOverTimePlot representing Accuracy over Time plot

Examples

import datarobot as dr
import pandas as pd
model = dr.DatetimeModel(project_id=project_id, id=model_id)
plot = model.get_accuracy_over_time_plot()
df = pd.DataFrame.from_dict(plot.bins)
figure = df.plot("start_date", ["actual", "predicted"]).get_figure()
figure.savefig("accuracy_over_time.png")
get_accuracy_over_time_plot_preview(backtest=0, source='validation', forecast_distance=None, series_id=None, max_wait=600)

Retrieve Accuracy over Time preview plots for this model.

New in version v2.25.

Parameters:
backtest : int or string, optional

Retrieve plots for a specific backtest (use the backtest index starting from zero). To retrieve plots for holdout, use dr.enums.DATA_SUBSET.HOLDOUT

source : string, optional

The source of the data for the backtest/holdout. Attribute must be one of dr.enums.SOURCE_TYPE

forecast_distance : int, optional

Forecast distance to retrieve the plots for. If not specified, the first forecast distance for this project will be used. Only available for time series projects.

series_id : string, optional

The name of the series to retrieve for multiseries projects. If not provided an average plot for the first 1000 series will be retrieved.

max_wait : int or None, optional

The maximum time to wait for a compute job to complete before retrieving the plots. Default is dr.enums.DEFAULT_MAX_WAIT. If 0 or None, the plots would be retrieved without attempting the computation.

Returns:
plot : AccuracyOverTimePlotPreview

a AccuracyOverTimePlotPreview representing Accuracy over Time plot preview

Examples

import datarobot as dr
import pandas as pd
model = dr.DatetimeModel(project_id=project_id, id=model_id)
plot = model.get_accuracy_over_time_plot_preview()
df = pd.DataFrame.from_dict(plot.bins)
figure = df.plot("start_date", ["actual", "predicted"]).get_figure()
figure.savefig("accuracy_over_time_preview.png")
get_forecast_vs_actual_plots_metadata()

Retrieve Forecast vs Actual plots metadata for this model.

New in version v2.25.

Returns:
metadata : ForecastVsActualPlotsMetadata

a ForecastVsActualPlotsMetadata representing Forecast vs Actual plots metadata

get_forecast_vs_actual_plot(backtest=0, source='validation', forecast_distance_start=None, forecast_distance_end=None, series_id=None, resolution=None, max_bin_size=None, start_date=None, end_date=None, max_wait=600)

Retrieve Forecast vs Actual plots for this model.

New in version v2.25.

Parameters:
backtest : int or string, optional

Retrieve plots for a specific backtest (use the backtest index starting from zero). To retrieve plots for holdout, use dr.enums.DATA_SUBSET.HOLDOUT

source : string, optional

The source of the data for the backtest/holdout. Attribute must be one of dr.enums.SOURCE_TYPE

forecast_distance_start : int, optional:

The start of forecast distance range (forecast window) to retrieve. If not specified, the first forecast distance for this project will be used.

forecast_distance_end : int, optional:

The end of forecast distance range (forecast window) to retrieve. If not specified, the last forecast distance for this project will be used.

series_id : string, optional

The name of the series to retrieve for multiseries projects. If not provided an average plot for the first 1000 series will be retrieved.

resolution : string, optional

Specifies at which resolution the data should be binned. If not provided, an optimal resolution will be used to build chart data with a number of bins <= max_bin_size. One of dr.enums.DATETIME_TREND_PLOTS_RESOLUTION.

max_bin_size : int, optional

An int between 1 and 1000, which specifies the maximum number of bins for the retrieval. Default is 500.

start_date : datetime.datetime, optional

The start of the date range to return. If not specified, start date for requested plot will be used.

end_date : datetime.datetime, optional

The end of the date range to return. If not specified, end date for requested plot will be used.

max_wait : int or None, optional

The maximum time to wait for a compute job to complete before retrieving the plots. Default is dr.enums.DEFAULT_MAX_WAIT. If 0 or None, the plots would be retrieved without attempting the computation.

Returns:
plot : ForecastVsActualPlot

a ForecastVsActualPlot representing Forecast vs Actual plot

Examples

import datarobot as dr
import pandas as pd
import matplotlib.pyplot as plt

model = dr.DatetimeModel(project_id=project_id, id=model_id)
plot = model.get_forecast_vs_actual_plot()
df = pd.DataFrame.from_dict(plot.bins)

# As an example, get the forecasts for the 10th point
forecast_point_index = 10
# Pad the forecasts for plotting. The forecasts length must match the df length
forecasts = [None] * forecast_point_index + df.forecasts[forecast_point_index]
forecasts = forecasts + [None] * (len(df) - len(forecasts))

plt.plot(df.start_date, df.actual, label="Actual")
plt.plot(df.start_date, forecasts, label="Forecast")
forecast_point = df.start_date[forecast_point_index]
plt.title("Forecast vs Actual (Forecast Point {})".format(forecast_point))
plt.legend()
plt.savefig("forecast_vs_actual.png")
get_forecast_vs_actual_plot_preview(backtest=0, source='validation', series_id=None, max_wait=600)

Retrieve Forecast vs Actual preview plots for this model.

New in version v2.25.

Parameters:
backtest : int or string, optional

Retrieve plots for a specific backtest (use the backtest index starting from zero). To retrieve plots for holdout, use dr.enums.DATA_SUBSET.HOLDOUT

source : string, optional

The source of the data for the backtest/holdout. Attribute must be one of dr.enums.SOURCE_TYPE

series_id : string, optional

The name of the series to retrieve for multiseries projects. If not provided an average plot for the first 1000 series will be retrieved.

max_wait : int or None, optional

The maximum time to wait for a compute job to complete before retrieving the plots. Default is dr.enums.DEFAULT_MAX_WAIT. If 0 or None, the plots would be retrieved without attempting the computation.

Returns:
plot : ForecastVsActualPlotPreview

a ForecastVsActualPlotPreview representing Forecast vs Actual plot preview

Examples

import datarobot as dr
import pandas as pd
model = dr.DatetimeModel(project_id=project_id, id=model_id)
plot = model.get_forecast_vs_actual_plot_preview()
df = pd.DataFrame.from_dict(plot.bins)
figure = df.plot("start_date", ["actual", "predicted"]).get_figure()
figure.savefig("forecast_vs_actual_preview.png")
get_anomaly_over_time_plots_metadata()

Retrieve Anomaly over Time plots metadata for this model.

New in version v2.25.

Returns:
metadata : AnomalyOverTimePlotsMetadata

a AnomalyOverTimePlotsMetadata representing Anomaly over Time plots metadata

get_anomaly_over_time_plot(backtest=0, source='validation', series_id=None, resolution=None, max_bin_size=None, start_date=None, end_date=None, max_wait=600)

Retrieve Anomaly over Time plots for this model.

New in version v2.25.

Parameters:
backtest : int or string, optional

Retrieve plots for a specific backtest (use the backtest index starting from zero). To retrieve plots for holdout, use dr.enums.DATA_SUBSET.HOLDOUT

source : string, optional

The source of the data for the backtest/holdout. Attribute must be one of dr.enums.SOURCE_TYPE

series_id : string, optional

The name of the series to retrieve for multiseries projects. If not provided an average plot for the first 1000 series will be retrieved.

resolution : string, optional

Specifies at which resolution the data should be binned. If not provided, an optimal resolution will be used to build chart data with a number of bins <= max_bin_size. One of dr.enums.DATETIME_TREND_PLOTS_RESOLUTION.

max_bin_size : int, optional

An int between 1 and 1000, which specifies the maximum number of bins for the retrieval. Default is 500.

start_date : datetime.datetime, optional

The start of the date range to return. If not specified, start date for requested plot will be used.

end_date : datetime.datetime, optional

The end of the date range to return. If not specified, end date for requested plot will be used.

max_wait : int or None, optional

The maximum time to wait for a compute job to complete before retrieving the plots. Default is dr.enums.DEFAULT_MAX_WAIT. If 0 or None, the plots would be retrieved without attempting the computation.

Returns:
plot : AnomalyOverTimePlot

a AnomalyOverTimePlot representing Anomaly over Time plot

Examples

import datarobot as dr
import pandas as pd
model = dr.DatetimeModel(project_id=project_id, id=model_id)
plot = model.get_anomaly_over_time_plot()
df = pd.DataFrame.from_dict(plot.bins)
figure = df.plot("start_date", "predicted").get_figure()
figure.savefig("anomaly_over_time.png")
get_anomaly_over_time_plot_preview(prediction_threshold=0.5, backtest=0, source='validation', series_id=None, max_wait=600)

Retrieve Anomaly over Time preview plots for this model.

New in version v2.25.

Parameters:
prediction_threshold: float, optional

Only bins with predictions exceeding this threshold will be returned in the response.

backtest : int or string, optional

Retrieve plots for a specific backtest (use the backtest index starting from zero). To retrieve plots for holdout, use dr.enums.DATA_SUBSET.HOLDOUT

source : string, optional

The source of the data for the backtest/holdout. Attribute must be one of dr.enums.SOURCE_TYPE

series_id : string, optional

The name of the series to retrieve for multiseries projects. If not provided an average plot for the first 1000 series will be retrieved.

max_wait : int or None, optional

The maximum time to wait for a compute job to complete before retrieving the plots. Default is dr.enums.DEFAULT_MAX_WAIT. If 0 or None, the plots would be retrieved without attempting the computation.

Returns:
plot : AnomalyOverTimePlotPreview

a AnomalyOverTimePlotPreview representing Anomaly over Time plot preview

Examples

import datarobot as dr
import pandas as pd
import matplotlib.pyplot as plt

model = dr.DatetimeModel(project_id=project_id, id=model_id)
plot = model.get_anomaly_over_time_plot_preview(prediction_threshold=0.01)
df = pd.DataFrame.from_dict(plot.bins)
x = pd.date_range(
    plot.start_date, plot.end_date, freq=df.end_date[0] - df.start_date[0]
)
plt.plot(x, [0] * len(x), label="Date range")
plt.plot(df.start_date, [0] * len(df.start_date), "ro", label="Anomaly")
plt.yticks([])
plt.legend()
plt.savefig("anomaly_over_time_preview.png")
initialize_anomaly_assessment(backtest, source, series_id=None)

Initialize the anomaly assessment insight and calculate Shapley explanations for the most anomalous points in the subset. The insight is available for anomaly detection models in time series unsupervised projects which also support calculation of Shapley values.

Parameters:
backtest: int starting with 0 or “holdout”

The backtest to compute insight for.

source: “training” or “validation”

The source to compute insight for.

series_id: string

Required for multiseries projects. The series id to compute the insight for. For example, if there is a series column containing cities, a series id to pass would be “Boston”.

Returns:
AnomalyAssessmentRecord
get_anomaly_assessment_records(backtest=None, source=None, series_id=None, limit=100, offset=0, with_data_only=False)

Retrieve computed Anomaly Assessment records for this model. Model must be an anomaly detection model in time series unsupervised project which also supports calculation of Shapley values.

Records can be filtered by the data backtest, source and series_id. The results can be limited.

New in version v2.25.

Parameters:
backtest: int starting with 0 or “holdout”

The backtest of the data to filter records by.

source: “training” or “validation”

The source of the data to filter records by.

series_id: string

The series id to filter records by.

limit: int, optional
offset: int, optional
with_data_only: bool, optional

Whether to return only records with preview and explanations available. False by default.

Returns:
records : list of AnomalyAssessmentRecord

a list of AnomalyAssessmentRecord objects representing the computed Anomaly Assessment records
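
Examples

A minimal sketch, assuming an anomaly detection model in a time series unsupervised project, with project_id and model_id defined as in the examples above:

import datarobot as dr

model = dr.DatetimeModel(project_id=project_id, id=model_id)
# Compute the insight for backtest 0 on the validation data, then list the stored records.
record = model.initialize_anomaly_assessment(backtest=0, source="validation")
records = model.get_anomaly_assessment_records(backtest=0, source="validation", with_data_only=True)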

get_feature_impact(with_metadata=False, backtest=None)

Retrieve the computed Feature Impact results, a measure of the relevance of each feature in the model.

Feature Impact is computed for each column by creating new data with that column randomly permuted (but the others left unchanged), and seeing how the error metric score for the predictions is affected. The ‘impactUnnormalized’ is how much worse the error metric score is when making predictions on this modified data. The ‘impactNormalized’ is normalized so that the largest value is 1. In both cases, larger values indicate more important features.

If a feature is redundant, i.e. once other features are considered it doesn’t contribute much in addition, the ‘redundantWith’ value is the name of the feature that has the highest correlation with this feature. Note that redundancy detection is only available for jobs run after the addition of this feature. When retrieving data that predates this functionality, a NoRedundancyImpactAvailable warning will be used.

Elsewhere this technique is sometimes called ‘Permutation Importance’.

Requires that Feature Impact has already been computed with request_feature_impact.

Parameters:
with_metadata : bool

The flag indicating if the result should include the metadata as well.

backtest : int or string

The index of the backtest, or the string ‘holdout’ for the holdout. This is supported only for DatetimeModels.

Returns:
list or dict

The feature impact data response depends on the with_metadata parameter. The response is either a dict with metadata and a list with actual data or just a list with that data.

Each list item is a dict with the keys featureName, impactNormalized, impactUnnormalized, redundantWith, and count.

For dict response available keys are:

  • featureImpacts - Feature Impact data as a dictionary. Each item is a dict with
    keys: featureName, impactNormalized, and impactUnnormalized, and redundantWith.
  • shapBased - A boolean that indicates whether Feature Impact was calculated using
    Shapley values.
  • ranRedundancyDetection - A boolean that indicates whether redundant feature
    identification was run while calculating this Feature Impact.
  • rowCount - An integer or None that indicates the number of rows that was used to
    calculate Feature Impact. For the Feature Impact calculated with the default logic, without specifying the rowCount, we return None here.
  • count - An integer with the number of features under the featureImpacts.
Raises:
ClientError (404)

If the feature impacts have not been computed.

request_feature_impact(row_count=None, with_metadata=False, backtest=None)

Request feature impacts to be computed for the model.

See get_feature_impact for more information on the result of the job.

Parameters:
row_count : int

The sample size (specified in rows) to use for Feature Impact computation. This is not supported for unsupervised, multiclass (which has a separate method), or time series projects.

backtest : int or string

The index of the backtest, or the string ‘holdout’ for the holdout. This is supported only for DatetimeModels.

Returns:
job : Job

A Job representing the feature impact computation. To get the completed feature impact data, use job.get_result or job.get_result_when_complete.

Raises:
JobAlreadyRequested (422)

If the feature impacts have already been requested.

get_or_request_feature_impact(max_wait=600, row_count=None, with_metadata=False, backtest=None)

Retrieve feature impact for the model, requesting a job if it hasn’t been run previously

Parameters:
max_wait : int, optional

The maximum time to wait for a requested feature impact job to complete before erroring

**kwargs

Arbitrary keyword arguments passed to request_feature_impact.

Returns:
feature_impacts : list or dict

The feature impact data. See get_feature_impact for the exact schema.
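
Examples

A minimal sketch, assuming project_id and model_id refer to an existing project and model:

import datarobot as dr

model = dr.Model.get(project_id, model_id)
# Compute Feature Impact if it has not been computed yet, then inspect the scores.
feature_impacts = model.get_or_request_feature_impact(max_wait=600)
for impact in feature_impacts:
    print(impact["featureName"], impact["impactNormalized"])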

advanced_tune(params, description: Optional[str] = None) → ModelJob

Generate a new model with the specified advanced-tuning parameters

As of v2.17, all models other than blenders, open source, prime, baseline and user-created support Advanced Tuning.

Parameters:
params : dict

Mapping of parameter ID to parameter value. The list of valid parameter IDs for a model can be found by calling get_advanced_tuning_parameters(). This endpoint does not need to include values for all parameters. If a parameter is omitted, its current_value will be used.

description : str

Human-readable string describing the newly advanced-tuned model

Returns:
ModelJob

The created job to build the model

delete() → None

Delete a model from the project’s leaderboard.

download_export(filepath: str) → None

Download an exportable model file for use in an on-premise DataRobot standalone prediction environment.

This function can only be used if model export is enabled, and will only be useful if you have an on-premise environment in which to import it.

Parameters:
filepath : str

The path at which to save the exported model file.

download_scoring_code(file_name, source_code=False)

Download the Scoring Code JAR.

Parameters:
file_name : str

File path where scoring code will be saved.

source_code : bool, optional

Set to True to download source code archive. It will not be executable.

download_training_artifact(file_name)

Retrieve trained artifact(s) from a model containing one or more custom tasks.

Artifact(s) will be downloaded to the specified local filepath.

Parameters:
file_name : str

File path where trained model artifact(s) will be saved.

classmethod from_data(data: Union[Dict[str, Any], List[Dict[str, Any]]]) → T

Instantiate an object of this class using a dict.

Parameters:
data : dict

Correctly snake_cased keys and their values.

get_advanced_tuning_parameters() → AdvancedTuningParamsType

Get the advanced-tuning parameters available for this model.

As of v2.17, all models other than blenders, open source, prime, baseline and user-created support Advanced Tuning.

Returns:
dict

A dictionary describing the advanced-tuning parameters for the current model. There are two top-level keys, tuning_description and tuning_parameters.

tuning_description is an optional value. If not None, it indicates the user-specified description of this set of tuning parameters.

tuning_parameters is a list of dicts, each with the following keys:

  • parameter_name : (str) name of the parameter (unique per task, see below)
  • parameter_id : (str) opaque ID string uniquely identifying parameter
  • default_value : (*) default value of the parameter for the blueprint
  • current_value : (*) value of the parameter that was used for this model
  • task_name : (str) name of the task that this parameter belongs to
  • constraints: (dict) see the notes below
  • vertex_id: (str) ID of vertex that this parameter belongs to

Notes

The type of default_value and current_value is defined by the constraints structure. It will be a string or numeric Python type.

constraints is a dict with at least one, possibly more, of the following keys. The presence of a key indicates that the parameter may take on the specified type. (If a key is absent, this means that the parameter may not take on the specified type.) If a key on constraints is present, its value will be a dict containing all of the fields described below for that key.

"constraints": {
    "select": {
        "values": [<list(basestring or number) : possible values>]
    },
    "ascii": {},
    "unicode": {},
    "int": {
        "min": <int : minimum valid value>,
        "max": <int : maximum valid value>,
        "supports_grid_search": <bool : True if Grid Search may be
                                        requested for this param>
    },
    "float": {
        "min": <float : minimum valid value>,
        "max": <float : maximum valid value>,
        "supports_grid_search": <bool : True if Grid Search may be
                                        requested for this param>
    },
    "intList": {
        "min_length": <int : minimum valid length>,
        "max_length": <int : maximum valid length>
        "min_val": <int : minimum valid value>,
        "max_val": <int : maximum valid value>
        "supports_grid_search": <bool : True if Grid Search may be
                                        requested for this param>
    },
    "floatList": {
        "min_length": <int : minimum valid length>,
        "max_length": <int : maximum valid length>
        "min_val": <float : minimum valid value>,
        "max_val": <float : maximum valid value>
        "supports_grid_search": <bool : True if Grid Search may be
                                        requested for this param>
    }
}

The keys have meaning as follows:

  • select: Rather than specifying a specific data type, if present, it indicates that the parameter is permitted to take on any of the specified values. Listed values may be of any string or real (non-complex) numeric type.
  • ascii: The parameter may be a unicode object that encodes simple ASCII characters. (A-Z, a-z, 0-9, whitespace, and certain common symbols.) In addition to listed constraints, ASCII keys currently may not contain either newlines or semicolons.
  • unicode: The parameter may be any Python unicode object.
  • int: The value may be an object of type int within the specified range (inclusive). Please note that the value will be passed around using the JSON format, and some JSON parsers have undefined behavior with integers outside of the range [-(2**53)+1, (2**53)-1].
  • float: The value may be an object of type float within the specified range (inclusive).
  • intList, floatList: The value may be a list of int or float objects, respectively, following constraints as specified respectively by the int and float types (above).

Many parameters only specify one key under constraints. If a parameter specifies multiple keys, the parameter may take on any value permitted by any key.
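
Examples

A minimal sketch of combining get_advanced_tuning_parameters with advanced_tune; the chosen parameter and value are illustrative, and project_id and model_id are assumed to exist:

import datarobot as dr

model = dr.Model.get(project_id, model_id)
tuning_info = model.get_advanced_tuning_parameters()
# Resubmit the first listed parameter with its default value; any omitted
# parameters keep their current_value.
param = tuning_info["tuning_parameters"][0]
job = model.advanced_tune({param["parameter_id"]: param["default_value"]}, description="retuned copy")
tuned_model = job.get_result_when_complete()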

get_all_confusion_charts(fallback_to_parent_insights=False)

Retrieve a list of all confusion matrices available for the model.

Parameters:
fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return confusion chart data for this model’s parent for any source that is not available for this model and if this model has a defined parent model. If omitted or False, or this model has no parent, this will not attempt to retrieve any data from this model’s parent.

Returns:
list of ConfusionChart

Data for all available confusion charts for model.

get_all_lift_charts(fallback_to_parent_insights=False)

Retrieve a list of all Lift charts available for the model.

Parameters:
fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return lift chart data for this model’s parent for any source that is not available for this model and if this model has a defined parent model. If omitted or False, or this model has no parent, this will not attempt to retrieve any data from this model’s parent.

Returns:
list of LiftChart

Data for all available model lift charts.

Raises:
ClientError

If the insight is not available for this model

get_all_multiclass_lift_charts(fallback_to_parent_insights=False)

Retrieve a list of all Lift charts available for the model.

Parameters:
fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return lift chart data for this model’s parent for any source that is not available for this model and if this model has a defined parent model. If omitted or False, or this model has no parent, this will not attempt to retrieve any data from this model’s parent.

Returns:
list of LiftChart

Data for all available model lift charts.

Raises:
ClientError

If the insight is not available for this model

get_all_residuals_charts(fallback_to_parent_insights=False)

Retrieve a list of all residuals charts available for the model.

Parameters:
fallback_to_parent_insights : bool

Optional, if True, this will return residuals chart data for this model’s parent for any source that is not available for this model and if this model has a defined parent model. If omitted or False, or this model has no parent, this will not attempt to retrieve any data from this model’s parent.

Returns:
list of ResidualsChart

Data for all available model residuals charts.

get_all_roc_curves(fallback_to_parent_insights=False)

Retrieve a list of all ROC curves available for the model.

Parameters:
fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return ROC curve data for this model’s parent for any source that is not available for this model and if this model has a defined parent model. If omitted or False, or this model has no parent, this will not attempt to retrieve any data from this model’s parent.

Returns:
list of RocCurve

Data for all available model ROC curves.

get_confusion_chart(source, fallback_to_parent_insights=False)

Retrieve the model’s confusion matrix for the specified source.

Parameters:
source : str

Confusion chart source. Check datarobot.enums.CHART_DATA_SOURCE for possible values.

fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return confusion chart data for this model’s parent if the confusion chart is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return insight data from this model’s parent.

Returns:
ConfusionChart

Model ConfusionChart data

Raises:
ClientError

If the insight is not available for this model

get_cross_class_accuracy_scores()

Retrieves a list of Cross Class Accuracy scores for the model.

Returns:
json
get_data_disparity_insights(feature, class_name1, class_name2)

Retrieve a list of Cross Class Data Disparity insights for the model.

Parameters:
feature : str

Bias and Fairness protected feature name.

class_name1 : str

One of the compared classes

class_name2 : str

Another compared class

Returns:
json
get_fairness_insights(fairness_metrics_set=None, offset=0, limit=100)

Retrieve a list of Per Class Bias insights for the model.

Parameters:
fairness_metrics_set : str, optional

Can be one of <datarobot.enums.FairnessMetricsSet>. The fairness metric used to calculate the fairness scores.

offset : int, optional

Number of items to skip.

limit : int, optional

Number of items to return.

Returns:
json
get_features_used() → List[str]

Query the server to determine which features were used.

Note that the data returned by this method is possibly different than the names of the features in the featurelist used by this model. This method will return the raw features that must be supplied in order for predictions to be generated on a new set of data. The featurelist, in contrast, would also include the names of derived features.

Returns:
features : list of str

The names of the features used in the model.

get_frozen_child_models()

Retrieve the IDs for all models that are frozen from this model.

Returns:
A list of Models
get_labelwise_roc_curves(source, fallback_to_parent_insights=False)

Retrieve a list of LabelwiseRocCurve instances for a multilabel model for the given source and all labels.

New in version v2.24.

Parameters:
source : str

ROC curve data source. Check datarobot.enums.CHART_DATA_SOURCE for possible values.

fallback_to_parent_insights : bool

Optional, if True, this will return ROC curve data for this model’s parent if the ROC curve is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return data from this model’s parent.

Returns:
list of LabelwiseRocCurve

Labelwise ROC Curve instances for source and all labels

Raises:
ClientError

If the insight is not available for this model

(New in version v3.0) TypeError

If the underlying project type is binary

get_leaderboard_link() → str

Retrieve a link to this model in the leaderboard.

Returns:
url : str

Permanent static hyperlink to this model at leaderboard.

get_lift_chart(source, fallback_to_parent_insights=False)

Retrieve the model Lift chart for the specified source.

Parameters:
source : str

Lift chart data source. Check datarobot.enums.CHART_DATA_SOURCE for possible values. (New in version v2.23) For time series and OTV models, also accepts values backtest_2, backtest_3, …, up to the number of backtests in the model.

fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return lift chart data for this model’s parent if the lift chart is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return insight data from this model’s parent.

Returns:
LiftChart

Model lift chart data

Raises:
ClientError

If the insight is not available for this model
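
Examples

A minimal sketch, assuming project_id and model_id exist; "validation" is one of the values listed in datarobot.enums.CHART_DATA_SOURCE:

import datarobot as dr

model = dr.Model.get(project_id, model_id)
# Fall back to the parent model's insight if this model does not have its own lift chart.
lift_chart = model.get_lift_chart(source="validation", fallback_to_parent_insights=True)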

get_missing_report_info()

Retrieve a report on missing training data that can be used to understand missing values treatment in the model. The report consists of missing values resolutions for numeric or categorical features that were part of building the model.

Returns:
An iterable of MissingReportPerFeature

The queried model missing report, sorted by missing count (DESCENDING order).

get_model_blueprint_chart()

Retrieve a diagram that can be used to understand data flow in the blueprint.

Returns:
ModelBlueprintChart

The queried model blueprint chart.

get_model_blueprint_documents()

Get documentation for tasks used in this model.

Returns:
list of BlueprintTaskDocument

All documents available for the model.

get_multiclass_feature_impact()

For multiclass it’s possible to calculate feature impact separately for each target class. The method for calculation is exactly the same, calculated in one-vs-all style for each target class.

Requires that Feature Impact has already been computed with request_feature_impact.

Returns:
feature_impacts : list of dict

The feature impact data. Each item is a dict with the keys ‘featureImpacts’ (list), ‘class’ (str). Each item in ‘featureImpacts’ is a dict with the keys ‘featureName’, ‘impactNormalized’, ‘impactUnnormalized’, and ‘redundantWith’.

Raises:
ClientError (404)

If the multiclass feature impacts have not been computed.

get_multiclass_lift_chart(source, fallback_to_parent_insights=False)

Retrieve model Lift chart for the specified source.

Parameters:
source : str

Lift chart data source. Check datarobot.enums.CHART_DATA_SOURCE for possible values.

fallback_to_parent_insights : bool

Optional, if True, this will return lift chart data for this model’s parent if the lift chart is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return insight data from this model’s parent.

Returns:
list of LiftChart

Model lift chart data for each saved target class

Raises:
ClientError

If the insight is not available for this model

get_multilabel_lift_charts(source, fallback_to_parent_insights=False)

Retrieve model Lift charts for the specified source.

New in version v2.24.

Parameters:
source : str

Lift chart data source. Check datarobot.enums.CHART_DATA_SOURCE for possible values.

fallback_to_parent_insights : bool

Optional, if True, this will return lift chart data for this model’s parent if the lift chart is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return insight data from this model’s parent.

Returns:
list of LiftChart

Model lift chart data for each saved target class

Raises:
ClientError

If the insight is not available for this model

get_num_iterations_trained()

Retrieves the number of estimators trained by early-stopping tree-based models.

New in version v2.22.

Returns:
projectId: str

id of project containing the model

modelId: str

id of the model

data: array

list of numEstimatorsItem objects, one for each modeling stage.

numEstimatorsItem will be of the form:
stage: str

indicates the modeling stage (for multi-stage models); None for single-stage models

numIterations: int

the number of estimators or iterations trained by the model

get_parameters()

Retrieve model parameters.

Returns:
ModelParameters

Model parameters for this model.

get_pareto_front()

Retrieve the Pareto Front for a Eureqa model.

This method is only supported for Eureqa models.

Returns:
ParetoFront

Model ParetoFront data

get_prime_eligibility()

Check if this model can be approximated with DataRobot Prime

Returns:
prime_eligibility : dict

a dict indicating whether a model can be approximated with DataRobot Prime (key can_make_prime) and why it may be ineligible (key message)

get_residuals_chart(source, fallback_to_parent_insights=False)

Retrieve model residuals chart for the specified source.

Parameters:
source : str

Residuals chart data source. Check datarobot.enums.CHART_DATA_SOURCE for possible values.

fallback_to_parent_insights : bool

Optional, if True, this will return residuals chart data for this model’s parent if the residuals chart is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return residuals data from this model’s parent.

Returns:
ResidualsChart

Model residuals chart data

Raises:
ClientError

If the insight is not available for this model

get_roc_curve(source, fallback_to_parent_insights=False)

Retrieve the ROC curve for a binary model for the specified source.

Parameters:
source : str

ROC curve data source. Check datarobot.enums.CHART_DATA_SOURCE for possible values. (New in version v2.23) For time series and OTV models, also accepts values backtest_2, backtest_3, …, up to the number of backtests in the model.

fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return ROC curve data for this model’s parent if the ROC curve is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return data from this model’s parent.

Returns:
RocCurve

Model ROC curve data

Raises:
ClientError

If the insight is not available for this model

(New in version v3.0) TypeError

If the underlying project type is multilabel
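
Examples

A minimal sketch for a binary classification model, assuming project_id and model_id exist:

import datarobot as dr

model = dr.Model.get(project_id, model_id)
roc = model.get_roc_curve(source="validation")
# roc.roc_points holds the per-threshold statistics used to draw the curve.
print(len(roc.roc_points))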

get_rulesets() → List[datarobot.models.ruleset.Ruleset]

List the rulesets approximating this model generated by DataRobot Prime

If this model hasn’t been approximated yet, will return an empty list. Note that these are rulesets approximating this model, not rulesets used to construct this model.

Returns:
rulesets : list of Ruleset
get_supported_capabilities()

Retrieves a summary of the capabilities supported by a model.

New in version v2.14.

Returns:
supportsBlending: bool

whether the model supports blending

supportsMonotonicConstraints: bool

whether the model supports monotonic constraints

hasWordCloud: bool

whether the model has word cloud data available

eligibleForPrime: bool

whether the model is eligible for Prime

hasParameters: bool

whether the model has parameters that can be retrieved

supportsCodeGeneration: bool

(New in version v2.18) whether the model supports code generation

supportsShap: bool

(New in version v2.18) True if the model supports the Shapley package, i.e. Shapley-based feature importance

supportsEarlyStopping: bool

(New in version v2.22) True if this is an early stopping tree-based model and number of trained iterations can be retrieved.

get_uri() → str
Returns:
url : str

Permanent static hyperlink to this model at leaderboard.

get_word_cloud(exclude_stop_words=False)

Retrieve word cloud data for the model.

Parameters:
exclude_stop_words : bool, optional

Set to True if you want stopwords filtered out of the response.

Returns:
WordCloud

Word cloud data for the model.

open_in_browser() → None

Opens the class’ relevant web browser location. If the default browser is not available, the URL is logged.

Note: If text-mode browsers are used, the calling process will block until the user exits the browser.

open_model_browser() → None

Opens model at project leaderboard in web browser. Note: If text-mode browsers are used, the calling process will block until the user exits the browser.

request_approximation()

Request an approximation of this model using DataRobot Prime

This will create several rulesets that could be used to approximate this model. After comparing their scores and rule counts, the code used in the approximation can be downloaded and run locally.

Returns:
job : Job

the job generating the rulesets

request_cross_class_accuracy_scores()

Request data disparity insights to be computed for the model.

Returns:
status_id : str

A statusId of computation request.

request_data_disparity_insights(feature, compared_class_names)

Request data disparity insights to be computed for the model.

Parameters:
feature : str

Bias and Fairness protected feature name.

compared_class_names : list(str)

List of two classes to compare

Returns:
status_id : str

A statusId of computation request.

request_external_test(dataset_id: str, actual_value_column: Optional[str] = None)

Request external test to compute scores and insights on an external test dataset

Parameters:
dataset_id : string

The ID of the dataset to make predictions against (as uploaded from Project.upload_dataset)

actual_value_column : string, optional

(New in version v2.21) For time series unsupervised projects only. Actual value column can be used to calculate the classification metrics and insights on the prediction dataset. Can’t be provided with the forecast_point parameter.

Returns:
job : Job

a Job representing external dataset insights computation

request_fairness_insights(fairness_metrics_set=None)

Request fairness insights to be computed for the model.

Parameters:
fairness_metrics_set : str, optional

Can be one of <datarobot.enums.FairnessMetricsSet>. The fairness metric used to calculate the fairness scores.

Returns:
status_id : str

A statusId of computation request.

request_frozen_datetime_model(training_row_count: Optional[int] = None, training_duration: Optional[str] = None, training_start_date: Optional[datetime] = None, training_end_date: Optional[datetime] = None, time_window_sample_pct: Optional[int] = None, sampling_method: Optional[str] = None) → ModelJob

Train a new frozen model with parameters from this model.

Requires that this model belongs to a datetime partitioned project. If it does not, an error will occur when submitting the job.

Frozen models use the same tuning parameters as their parent model instead of independently optimizing them to allow efficiently retraining models on larger amounts of the training data.

In addition to training_row_count and training_duration, frozen datetime models may be trained on an exact date range. Only one of training_row_count, training_duration, or training_start_date and training_end_date should be specified.

Models specified using training_start_date and training_end_date are the only ones that can be trained into the holdout data (once the holdout is unlocked).

All durations should be specified with a duration string such as those returned by the partitioning_methods.construct_duration_string helper method. Please see datetime partitioned project documentation for more information on duration strings.

Parameters:
training_row_count : int, optional

the number of rows of data that should be used to train the model. If specified, training_duration may not be specified.

training_duration : str, optional

a duration string specifying what time range the data used to train the model should span. If specified, training_row_count may not be specified.

training_start_date : datetime.datetime, optional

the start date of the data to train to model on. Only rows occurring at or after this datetime will be used. If training_start_date is specified, training_end_date must also be specified.

training_end_date : datetime.datetime, optional

the end date of the data to train the model on. Only rows occurring strictly before this datetime will be used. If training_end_date is specified, training_start_date must also be specified.

time_window_sample_pct : int, optional

may only be specified when the requested model is a time window (e.g. duration or start and end dates). An integer between 1 and 99 indicating the percentage to sample by within the window. The points kept are determined by a random uniform sample. If specified, training_duration must also be specified; otherwise, an error will occur.

sampling_method : str, optional

(New in version v2.23) defines the way training data is selected. Can be either random or latest. In combination with training_row_count defines how rows are selected from backtest (latest by default). When training data is defined using time range (training_duration or use_project_settings) this setting changes the way time_window_sample_pct is applied (random by default). Applicable to OTV projects only.

Returns:
model_job : ModelJob

the modeling job training a frozen model
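
Examples

A minimal sketch, assuming a datetime partitioned project; 'P1Y' is an illustrative duration string such as those produced by partitioning_methods.construct_duration_string:

import datarobot as dr

model = dr.DatetimeModel(project_id=project_id, id=model_id)
# Train a frozen copy of this model on one year of data.
model_job = model.request_frozen_datetime_model(training_duration="P1Y")
frozen_model = model_job.get_result_when_complete()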

request_predictions(dataset_id: Optional[str] = None, dataset: Optional[Dataset] = None, dataframe: Optional[pd.DataFrame] = None, file_path: Optional[str] = None, file: Optional[IOBase] = None, include_prediction_intervals: Optional[bool] = None, prediction_intervals_size: Optional[int] = None, forecast_point: Optional[datetime] = None, predictions_start_date: Optional[datetime] = None, predictions_end_date: Optional[datetime] = None, actual_value_column: Optional[str] = None, explanation_algorithm: Optional[str] = None, max_explanations: Optional[int] = None, max_ngram_explanations: Optional[Union[int, str]] = None) → PredictJob

Requests predictions against a previously uploaded dataset.

Parameters:
dataset_id : string, optional

The ID of the dataset to make predictions against (as uploaded from Project.upload_dataset)

dataset : Dataset, optional

The dataset to make predictions against (as uploaded from Project.upload_dataset)

dataframe : pd.DataFrame, optional

(New in v3.0) The dataframe to make predictions against

file_path : str, optional

(New in v3.0) Path to file to make predictions against

file : IOBase, optional

(New in v3.0) File to make predictions against

include_prediction_intervals : bool, optional

(New in v2.16) For time series projects only. Specifies whether prediction intervals should be calculated for this request. Defaults to True if prediction_intervals_size is specified, otherwise defaults to False.

prediction_intervals_size : int, optional

(New in v2.16) For time series projects only. Represents the percentile to use for the size of the prediction intervals. Defaults to 80 if include_prediction_intervals is True. Prediction intervals size must be between 1 and 100 (inclusive).

forecast_point : datetime.datetime or None, optional

(New in version v2.20) For time series projects only. This is the default point relative to which predictions will be generated, based on the forecast window of the project. See the time series prediction documentation for more information.

predictions_start_date : datetime.datetime or None, optional

(New in version v2.20) For time series projects only. The start date for bulk predictions. Note that this parameter is for generating historical predictions using the training data. This parameter should be provided in conjunction with predictions_end_date. Can’t be provided with the forecast_point parameter.

predictions_end_date : datetime.datetime or None, optional

(New in version v2.20) For time series projects only. The end date for bulk predictions, exclusive. Note that this parameter is for generating historical predictions using the training data. This parameter should be provided in conjunction with predictions_start_date. Can’t be provided with the forecast_point parameter.

actual_value_column : string, optional

(New in version v2.21) For time series unsupervised projects only. Actual value column can be used to calculate the classification metrics and insights on the prediction dataset. Can’t be provided with the forecast_point parameter.

explanation_algorithm : str, optional

(New in version v2.21) If set to ‘shap’, the response will include prediction explanations based on the SHAP explainer (SHapley Additive exPlanations). Defaults to null (no prediction explanations).

max_explanations : int, optional

(New in version v2.21) Specifies the maximum number of explanation values that should be returned for each row, ordered by absolute value, greatest to least. If null, no limit. In the case of ‘shap’: if the number of features is greater than the limit, the sum of remaining values will also be returned as shapRemainingTotal. Defaults to null. Cannot be set if explanation_algorithm is omitted.

max_ngram_explanations : int or str, optional

(New in version v2.29) Specifies the maximum number of text explanation values that should be returned. If set to all, text explanations will be computed and all the ngram explanations will be returned. If set to a non-zero positive integer, text explanations will be computed and that many ngram explanations, sorted in descending order, will be returned. By default, text explanations are not computed.

Returns:
job : PredictJob

The job computing the predictions
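
Examples

A minimal sketch, assuming project_id and model_id exist and "to_predict.csv" is a local scoring dataset (an illustrative file name):

import datarobot as dr

project = dr.Project.get(project_id)
model = dr.Model.get(project_id, model_id)
# Upload the dataset, request predictions against it, and wait for the result.
prediction_dataset = project.upload_dataset("to_predict.csv")
predict_job = model.request_predictions(dataset_id=prediction_dataset.id)
predictions = predict_job.get_result_when_complete()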

request_transferable_export(prediction_intervals_size: Optional[int] = None) → Job

Request generation of an exportable model file for use in an on-premise DataRobot standalone prediction environment.

This function can only be used if model export is enabled, and will only be useful if you have an on-premise environment in which to import it.

This function does not download the exported file. Use download_export for that.

Parameters:
prediction_intervals_size : int, optional

(New in v2.19) For time series projects only. Represents the percentile to use for the size of the prediction intervals. Prediction intervals size must be between 1 and 100 (inclusive).

Returns:
Job

Examples

model = datarobot.Model.get('project-id', 'model-id')
job = model.request_transferable_export()
job.wait_for_completion()
model.download_export('my_exported_model.drmodel')

# Client must be configured to use standalone prediction server for import:
datarobot.Client(token='my-token-at-standalone-server',
                 endpoint='standalone-server-url/api/v2')

imported_model = datarobot.ImportedModel.create('my_exported_model.drmodel')
set_prediction_threshold(threshold)

Set a custom prediction threshold for the model.

May not be used once prediction_threshold_read_only is True for this model.

Parameters:
threshold : float

Only used for binary classification projects. The threshold to use when deciding between the positive and negative classes when making predictions. Should be between 0.0 and 1.0 (inclusive).

star_model() → None

Mark the model as starred.

Model stars propagate to the web application and the API, and can be used to filter when listing models.

start_advanced_tuning_session()

Start an Advanced Tuning session. Returns an object that helps set up arguments for an Advanced Tuning model execution.

As of v2.17, all models other than blenders, open source, prime, baseline and user-created support Advanced Tuning.

Returns:
AdvancedTuningSession

Session for setting up and running Advanced Tuning on a model

train_datetime(featurelist_id: Optional[str] = None, training_row_count: Optional[int] = None, training_duration: Optional[str] = None, time_window_sample_pct: Optional[int] = None, monotonic_increasing_featurelist_id: Optional[Union[str, object]] = <object object>, monotonic_decreasing_featurelist_id: Optional[Union[str, object]] = <object object>, use_project_settings: bool = False, sampling_method: Optional[str] = None) → ModelJob

Trains this model on a different featurelist or sample size.

Requires that this model is part of a datetime partitioned project; otherwise, an error will occur.

All durations should be specified with a duration string such as those returned by the partitioning_methods.construct_duration_string helper method. Please see datetime partitioned project documentation for more information on duration strings.

Parameters:
featurelist_id : str, optional

the featurelist to use to train the model. If not specified, the featurelist of this model is used.

training_row_count : int, optional

the number of rows of data that should be used to train the model. If specified, neither training_duration nor use_project_settings may be specified.

training_duration : str, optional

a duration string specifying what time range the data used to train the model should span. If specified, neither training_row_count nor use_project_settings may be specified.

use_project_settings : bool, optional

(New in version v2.20) defaults to False. If True, indicates that the custom backtest partitioning settings specified by the user will be used to train the model and evaluate backtest scores. If specified, neither training_row_count nor training_duration may be specified.

time_window_sample_pct : int, optional

may only be specified when the requested model is a time window (e.g. duration or start and end dates). An integer between 1 and 99 indicating the percentage to sample by within the window. The points kept are determined by a random uniform sample. If specified, training_duration must also be specified; otherwise, an error will occur.

sampling_method : str, optional

(New in version v2.23) defines the way training data is selected. Can be either random or latest. In combination with training_row_count defines how rows are selected from backtest (latest by default). When training data is defined using time range (training_duration or use_project_settings) this setting changes the way time_window_sample_pct is applied (random by default). Applicable to OTV projects only.

monotonic_increasing_featurelist_id : str, optional

(New in version v2.18) optional, the id of the featurelist that defines the set of features with a monotonically increasing relationship to the target. Passing None disables increasing monotonicity constraint. Default (dr.enums.MONOTONICITY_FEATURELIST_DEFAULT) is the one specified by the blueprint.

monotonic_decreasing_featurelist_id : str, optional

(New in version v2.18) optional, the id of the featurelist that defines the set of features with a monotonically decreasing relationship to the target. Passing None disables decreasing monotonicity constraint. Default (dr.enums.MONOTONICITY_FEATURELIST_DEFAULT) is the one specified by the blueprint.

Returns:
job : ModelJob

the created job to build the model
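
Examples

A minimal sketch, assuming a datetime partitioned project with project_id and model_id defined:

import datarobot as dr

model = dr.DatetimeModel(project_id=project_id, id=model_id)
# Retrain using the project's backtest partitioning settings rather than a fixed
# row count or duration.
model_job = model.train_datetime(use_project_settings=True)
retrained_model = model_job.get_result_when_complete()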

unstar_model() → None

Unmark the model as starred.

Model stars propagate to the web application and the API, and can be used to filter when listing models.

Frozen Model

class datarobot.models.FrozenModel(id=None, processes=None, featurelist_name=None, featurelist_id=None, project_id=None, sample_pct=None, training_row_count=None, training_duration=None, training_start_date=None, training_end_date=None, model_type=None, model_category=None, is_frozen=None, blueprint_id=None, metrics=None, parent_model_id=None, monotonic_increasing_featurelist_id=None, monotonic_decreasing_featurelist_id=None, supports_monotonic_constraints=None, is_starred=None, prediction_threshold=None, prediction_threshold_read_only=None, model_number=None, supports_composable_ml=None)

Represents a model tuned with parameters which are derived from another model

All durations are specified with a duration string such as those returned by the partitioning_methods.construct_duration_string helper method. Please see datetime partitioned project documentation for more information on duration strings.

Attributes:
id : str

the id of the model

project_id : str

the id of the project the model belongs to

processes : list of str

the processes used by the model

featurelist_name : str

the name of the featurelist used by the model

featurelist_id : str

the id of the featurelist used by the model

sample_pct : float

the percentage of the project dataset used in training the model

training_row_count : int or None

the number of rows of the project dataset used in training the model. In a datetime partitioned project, if specified, defines the number of rows used to train the model and evaluate backtest scores; if unspecified, either training_duration or training_start_date and training_end_date was used to determine that instead.

training_duration : str or None

only present for models in datetime partitioned projects. If specified, a duration string specifying the duration spanned by the data used to train the model and evaluate backtest scores.

training_start_date : datetime or None

only present for frozen models in datetime partitioned projects. If specified, the start date of the data used to train the model.

training_end_date : datetime or None

only present for frozen models in datetime partitioned projects. If specified, the end date of the data used to train the model.

model_type : str

what model this is, e.g. ‘Nystroem Kernel SVM Regressor’

model_category : str

what kind of model this is - ‘prime’ for DataRobot Prime models, ‘blend’ for blender models, and ‘model’ for other models

is_frozen : bool

whether this model is a frozen model

parent_model_id : str

the id of the model that tuning parameters are derived from

blueprint_id : str

the id of the blueprint used in this model

metrics : dict

a mapping from each metric to the model’s scores for that metric

monotonic_increasing_featurelist_id : str

optional, the id of the featurelist that defines the set of features with a monotonically increasing relationship to the target. If None, no such constraints are enforced.

monotonic_decreasing_featurelist_id : str

optional, the id of the featurelist that defines the set of features with a monotonically decreasing relationship to the target. If None, no such constraints are enforced.

supports_monotonic_constraints : bool

optional, whether this model supports enforcing monotonic constraints

is_starred : bool

whether this model is marked as starred

prediction_threshold : float

for binary classification projects, the threshold used for predictions

prediction_threshold_read_only : bool

indicates whether modification of the prediction threshold is forbidden. Threshold modification is forbidden once a model has had a deployment created or predictions made via the dedicated prediction API.

model_number : integer

model number assigned to a model

supports_composable_ml : bool or None

(New in version v2.26) whether this model is supported in Composable ML.

classmethod get(project_id, model_id)

Retrieve a specific frozen model.

Parameters:
project_id : str

The project’s id.

model_id : str

The model_id of the leaderboard item to retrieve.

Returns:
model : FrozenModel

The queried instance.
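
Examples

A minimal sketch, assuming frozen_model_id refers to a frozen model on the project's leaderboard:

import datarobot as dr

frozen_model = dr.models.FrozenModel.get(project_id, frozen_model_id)
# A frozen model reuses the tuning parameters of its parent model.
print(frozen_model.parent_model_id, frozen_model.training_duration)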

Imported Model

Note

Imported Models are used in Stand Alone Scoring Engines. If you are not an administrator of such an engine, they are not relevant to you.

class datarobot.models.ImportedModel(id: str, imported_at: Optional[datetime.datetime] = None, model_id: Optional[str] = None, target: Optional[str] = None, featurelist_name: Optional[str] = None, dataset_name: Optional[str] = None, model_name: Optional[str] = None, project_id: Optional[str] = None, note: Optional[str] = None, origin_url: Optional[str] = None, imported_by_username: Optional[str] = None, project_name: Optional[str] = None, created_by_username: Optional[str] = None, created_by_id: Optional[str] = None, imported_by_id: Optional[str] = None, display_name: Optional[str] = None)

Represents an imported model available for making predictions. These are only relevant for administrators of on-premise Stand Alone Scoring Engines.

ImportedModels are trained in one DataRobot application, exported as a .drmodel file, and then imported for use in a Stand Alone Scoring Engine.

Attributes:
id : str

id of the import

model_name : str

model type describing the model generated by DataRobot

display_name : str

manually specified human-readable name of the imported model

note : str

manually added note about this imported model

imported_at : datetime

the time the model was imported

imported_by_username : str

username of the user who imported the model

imported_by_id : str

id of the user who imported the model

origin_url : str

URL of the application the model was exported from

model_id : str

original id of the model prior to export

featurelist_name : str

name of the featurelist used to train the model

project_id : str

id of the project the model belonged to prior to export

project_name : str

name of the project the model belonged to prior to export

target : str

the target of the project the model belonged to prior to export

dataset_name : str

filename of the dataset used to create the project the model belonged to

created_by_username : str

username of the user who created the model prior to export

created_by_id : str

id of the user who created the model prior to export

classmethod create(path: str, max_wait: int = 600) → datarobot.models.imported_model.ImportedModel

Import a previously exported model for predictions.

Parameters:
path : str

The path to the exported model file

max_wait : int, optional

Time in seconds after which model import is considered unsuccessful

classmethod get(import_id: str) → datarobot.models.imported_model.ImportedModel

Retrieve imported model info

Parameters:
import_id : str

The ID of the imported model.

Returns:
imported_model : ImportedModel

The ImportedModel instance

classmethod list(limit: Optional[int] = None, offset: Optional[int] = None) → List[datarobot.models.imported_model.ImportedModel]

List the imported models.

Parameters:
limit : int

The number of records to return. The server will use a (possibly finite) default if not specified.

offset : int

The number of records to skip.

Returns:
imported_models : list[ImportedModel]
update(display_name: Optional[str] = None, note: Optional[str] = None) → None

Update the display name or note for an imported model. The ImportedModel object is updated in place.

Parameters:
display_name : str

The new display name.

note : str

The new note.

delete() → None

Delete this imported model.
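
Examples

A minimal sketch for an administrator of a Stand Alone Scoring Engine; the display name and note are illustrative:

import datarobot as dr

# List imported models and annotate the most relevant one.
imported_models = dr.models.ImportedModel.list(limit=10)
imported_model = imported_models[0]
imported_model.update(display_name="Churn model (imported)", note="Imported from the production cluster")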

RatingTableModel

class datarobot.models.RatingTableModel(id=None, processes=None, featurelist_name=None, featurelist_id=None, project_id=None, sample_pct=None, training_row_count=None, training_duration=None, training_start_date=None, training_end_date=None, model_type=None, model_category=None, is_frozen=None, blueprint_id=None, metrics=None, rating_table_id=None, monotonic_increasing_featurelist_id=None, monotonic_decreasing_featurelist_id=None, supports_monotonic_constraints=None, is_starred=None, prediction_threshold=None, prediction_threshold_read_only=None, model_number=None, supports_composable_ml=None)

A model that has a rating table.

All durations are specified with a duration string such as those returned by the partitioning_methods.construct_duration_string helper method. Please see datetime partitioned project documentation for more information on duration strings.

Attributes:
id : str

the id of the model

project_id : str

the id of the project the model belongs to

processes : list of str

the processes used by the model

featurelist_name : str

the name of the featurelist used by the model

featurelist_id : str

the id of the featurelist used by the model

sample_pct : float or None

the percentage of the project dataset used in training the model. If the project uses datetime partitioning, the sample_pct will be None. See training_row_count, training_duration, and training_start_date and training_end_date instead.

training_row_count : int or None

the number of rows of the project dataset used in training the model. In a datetime partitioned project, if specified, defines the number of rows used to train the model and evaluate backtest scores; if unspecified, either training_duration or training_start_date and training_end_date was used to determine that instead.

training_duration : str or None

only present for models in datetime partitioned projects. If specified, a duration string specifying the duration spanned by the data used to train the model and evaluate backtest scores.

training_start_date : datetime or None

only present for frozen models in datetime partitioned projects. If specified, the start date of the data used to train the model.

training_end_date : datetime or None

only present for frozen models in datetime partitioned projects. If specified, the end date of the data used to train the model.

model_type : str

what model this is, e.g. ‘Nystroem Kernel SVM Regressor’

model_category : str

what kind of model this is - ‘prime’ for DataRobot Prime models, ‘blend’ for blender models, and ‘model’ for other models

is_frozen : bool

whether this model is a frozen model

blueprint_id : str

the id of the blueprint used in this model

metrics : dict

a mapping from each metric to the model’s scores for that metric

rating_table_id : str

the id of the rating table that belongs to this model

monotonic_increasing_featurelist_id : str

optional, the id of the featurelist that defines the set of features with a monotonically increasing relationship to the target. If None, no such constraints are enforced.

monotonic_decreasing_featurelist_id : str

optional, the id of the featurelist that defines the set of features with a monotonically decreasing relationship to the target. If None, no such constraints are enforced.

supports_monotonic_constraints : bool

optional, whether this model supports enforcing monotonic constraints

is_starred : bool

whether this model is marked as starred

prediction_threshold : float

for binary classification projects, the threshold used for predictions

prediction_threshold_read_only : bool

indicates whether modification of the prediction threshold is forbidden. Threshold modification is forbidden once a model has had a deployment created or predictions made via the dedicated prediction API.

model_number : integer

model number assigned to a model

supports_composable_ml : bool or None

(New in version v2.26) whether this model is supported in Composable ML.

classmethod get(project_id, model_id)

Retrieve a specific rating table model

If the project does not have a rating table, a ClientError will occur.

Parameters:
project_id : str

the id of the project the model belongs to

model_id : str

the id of the model to retrieve

Returns:
model : RatingTableModel

the model

classmethod create_from_rating_table(project_id: str, rating_table_id: str) → Job

Creates a new model from a validated rating table record. The RatingTable must not be associated with an existing model.

Parameters:
project_id : str

the id of the project the rating table belongs to

rating_table_id : str

the id of the rating table to create this model from

Returns:
job: Job

an instance of created async job

Raises:
ClientError (422)

Raised if creating model from a RatingTable that failed validation

JobAlreadyRequested

Raised if creating model from a RatingTable that is already associated with a RatingTableModel
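
Examples

A minimal sketch, assuming rating_table_id refers to a validated rating table that is not yet associated with a model:

import datarobot as dr

job = dr.models.RatingTableModel.create_from_rating_table(project_id, rating_table_id)
rating_table_model = job.get_result_when_complete()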

advanced_tune(params, description: Optional[str] = None) → ModelJob

Generate a new model with the specified advanced-tuning parameters

As of v2.17, all models other than blenders, open source, prime, baseline and user-created support Advanced Tuning.

Parameters:
params : dict

Mapping of parameter ID to parameter value. The list of valid parameter IDs for a model can be found by calling get_advanced_tuning_parameters(). This endpoint does not need to include values for all parameters. If a parameter is omitted, its current_value will be used.

description : str

Human-readable string describing the newly advanced-tuned model

Returns:
ModelJob

The created job to build the model

cross_validate()

Run cross validation on the model.

Note

To perform Cross Validation on a new model with new parameters, use train instead.

Returns:
ModelJob

The created job to build the model

delete() → None

Delete a model from the project’s leaderboard.

download_export(filepath: str) → None

Download an exportable model file for use in an on-premise DataRobot standalone prediction environment.

This function can only be used if model export is enabled, and will only be useful if you have an on-premise environment in which to import it.

Parameters:
filepath : str

The path at which to save the exported model file.

download_scoring_code(file_name, source_code=False)

Download the Scoring Code JAR.

Parameters:
file_name : str

File path where scoring code will be saved.

source_code : bool, optional

Set to True to download source code archive. It will not be executable.

download_training_artifact(file_name)

Retrieve trained artifact(s) from a model containing one or more custom tasks.

Artifact(s) will be downloaded to the specified local filepath.

Parameters:
file_name : str

File path where trained model artifact(s) will be saved.

classmethod from_data(data: Union[Dict[str, Any], List[Dict[str, Any]]]) → T

Instantiate an object of this class using a dict.

Parameters:
data : dict

Correctly snake_cased keys and their values.

get_advanced_tuning_parameters() → AdvancedTuningParamsType

Get the advanced-tuning parameters available for this model.

As of v2.17, all models other than blenders, open source, prime, baseline and user-created support Advanced Tuning.

Returns:
dict

A dictionary describing the advanced-tuning parameters for the current model. There are two top-level keys, tuning_description and tuning_parameters.

tuning_description is an optional value. If not None, it indicates the user-specified description of this set of tuning parameters.

tuning_parameters is a list of dicts, each with the following keys:

  • parameter_name : (str) name of the parameter (unique per task, see below)
  • parameter_id : (str) opaque ID string uniquely identifying parameter
  • default_value : (*) default value of the parameter for the blueprint
  • current_value : (*) value of the parameter that was used for this model
  • task_name : (str) name of the task that this parameter belongs to
  • constraints: (dict) see the notes below
  • vertex_id: (str) ID of vertex that this parameter belongs to

Notes

The type of default_value and current_value is defined by the constraints structure. It will be a string or numeric Python type.

constraints is a dict with at least one, possibly more, of the following keys. The presence of a key indicates that the parameter may take on the specified type. (If a key is absent, this means that the parameter may not take on the specified type.) If a key on constraints is present, its value will be a dict containing all of the fields described below for that key.

"constraints": {
    "select": {
        "values": [<list(basestring or number) : possible values>]
    },
    "ascii": {},
    "unicode": {},
    "int": {
        "min": <int : minimum valid value>,
        "max": <int : maximum valid value>,
        "supports_grid_search": <bool : True if Grid Search may be
                                        requested for this param>
    },
    "float": {
        "min": <float : minimum valid value>,
        "max": <float : maximum valid value>,
        "supports_grid_search": <bool : True if Grid Search may be
                                        requested for this param>
    },
    "intList": {
        "min_length": <int : minimum valid length>,
        "max_length": <int : maximum valid length>
        "min_val": <int : minimum valid value>,
        "max_val": <int : maximum valid value>
        "supports_grid_search": <bool : True if Grid Search may be
                                        requested for this param>
    },
    "floatList": {
        "min_length": <int : minimum valid length>,
        "max_length": <int : maximum valid length>
        "min_val": <float : minimum valid value>,
        "max_val": <float : maximum valid value>
        "supports_grid_search": <bool : True if Grid Search may be
                                        requested for this param>
    }
}

The keys have meaning as follows:

  • select: Rather than specifying a specific data type, if present, it indicates that the parameter is permitted to take on any of the specified values. Listed values may be of any string or real (non-complex) numeric type.
  • ascii: The parameter may be a unicode object that encodes simple ASCII characters. (A-Z, a-z, 0-9, whitespace, and certain common symbols.) In addition to listed constraints, ASCII keys currently may not contain either newlines or semicolons.
  • unicode: The parameter may be any Python unicode object.
  • int: The value may be an object of type int within the specified range (inclusive). Please note that the value will be passed around using the JSON format, and some JSON parsers have undefined behavior with integers outside of the range [-(2**53)+1, (2**53)-1].
  • float: The value may be an object of type float within the specified range (inclusive).
  • intList, floatList: The value may be a list of int or float objects, respectively, following constraints as specified respectively by the int and float types (above).

Many parameters only specify one key under constraints. If a parameter specifies multiple keys, the parameter may take on any value permitted by any key.
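
As an illustration, a minimal sketch of inspecting the constraints structure returned for each parameter (the project and model IDs are placeholders):

import datarobot as dr

model = dr.Model.get('project-id', 'model-id')
params = model.get_advanced_tuning_parameters()
for param in params['tuning_parameters']:
    constraints = param['constraints']
    if 'int' in constraints:
        print(param['task_name'], param['parameter_name'],
              'int in', (constraints['int']['min'], constraints['int']['max']))
    elif 'select' in constraints:
        print(param['task_name'], param['parameter_name'],
              'one of', constraints['select']['values'])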

get_all_confusion_charts(fallback_to_parent_insights=False)

Retrieve a list of all confusion matrices available for the model.

Parameters:
fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return confusion chart data for this model’s parent for any source that is not available for this model and if this model has a defined parent model. If omitted or False, or this model has no parent, this will not attempt to retrieve any data from this model’s parent.

Returns:
list of ConfusionChart

Data for all available confusion charts for model.

get_all_lift_charts(fallback_to_parent_insights=False)

Retrieve a list of all Lift charts available for the model.

Parameters:
fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return lift chart data for this model’s parent for any source that is not available for this model and if this model has a defined parent model. If omitted or False, or this model has no parent, this will not attempt to retrieve any data from this model’s parent.

Returns:
list of LiftChart

Data for all available model lift charts.

Raises:
ClientError

If the insight is not available for this model

get_all_multiclass_lift_charts(fallback_to_parent_insights=False)

Retrieve a list of all multiclass Lift charts available for the model.

Parameters:
fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return lift chart data for this model’s parent for any source that is not available for this model and if this model has a defined parent model. If omitted or False, or this model has no parent, this will not attempt to retrieve any data from this model’s parent.

Returns:
list of LiftChart

Data for all available model lift charts.

Raises:
ClientError

If the insight is not available for this model

get_all_residuals_charts(fallback_to_parent_insights=False)

Retrieve a list of all residuals charts available for the model.

Parameters:
fallback_to_parent_insights : bool

Optional, if True, this will return residuals chart data for this model’s parent for any source that is not available for this model and if this model has a defined parent model. If omitted or False, or this model has no parent, this will not attempt to retrieve any data from this model’s parent.

Returns:
list of ResidualsChart

Data for all available model residuals charts.

get_all_roc_curves(fallback_to_parent_insights=False)

Retrieve a list of all ROC curves available for the model.

Parameters:
fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return ROC curve data for this model’s parent for any source that is not available for this model and if this model has a defined parent model. If omitted or False, or this model has no parent, this will not attempt to retrieve any data from this model’s parent.

Returns:
list of RocCurve

Data for all available model ROC curves.

get_confusion_chart(source, fallback_to_parent_insights=False)

Retrieve the model’s confusion matrix for the specified source.

Parameters:
source : str

Confusion chart source. Check datarobot.enums.CHART_DATA_SOURCE for possible values.

fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return confusion chart data for this model’s parent if the confusion chart is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return insight data from this model’s parent.

Returns:
ConfusionChart

Model ConfusionChart data

Raises:
ClientError

If the insight is not available for this model

get_cross_class_accuracy_scores()

Retrieves a list of Cross Class Accuracy scores for the model.

Returns:
json
get_cross_validation_scores(partition=None, metric=None)

Return a dictionary, keyed by metric, showing cross validation scores per partition.

Cross Validation should already have been performed using cross_validate or train.

Note

Models that computed cross validation before this feature was added will need to be deleted and retrained before this method can be used.

Parameters:
partition : float

Optional. The ID of the partition to filter results by (1, 2, 3.0, 4.0, etc.); may be a positive whole number or float value. 0 corresponds to the validation partition.

metric: unicode

Optional. The name of the metric to filter the resulting cross validation scores by.

Returns:
cross_validation_scores: dict

A dictionary keyed by metric showing cross validation scores per partition.
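
A minimal sketch, assuming cross validation has already been run for this model (the IDs and the metric name are placeholders):

import datarobot as dr

model = dr.Model.get('project-id', 'model-id')
# A dict keyed by metric, with cross validation scores per partition
scores = model.get_cross_validation_scores(metric='RMSE')
print(scores)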

get_data_disparity_insights(feature, class_name1, class_name2)

Retrieve a list of Cross Class Data Disparity insights for the model.

Parameters:
feature : str

Bias and Fairness protected feature name.

class_name1 : str

One of the compared classes

class_name2 : str

Another compared class

Returns:
json
get_fairness_insights(fairness_metrics_set=None, offset=0, limit=100)

Retrieve a list of Per Class Bias insights for the model.

Parameters:
fairness_metrics_set : str, optional

Can be one of <datarobot.enums.FairnessMetricsSet>. The fairness metric used to calculate the fairness scores.

offset : int, optional

Number of items to skip.

limit : int, optional

Number of items to return.

Returns:
json
get_feature_effect(source: str)

Retrieve Feature Effects for the model.

Feature Effects provides partial dependence and predicted vs actual values for top-500 features ordered by feature impact score.

The partial dependence shows marginal effect of a feature on the target variable after accounting for the average effects of all other predictive features. It indicates how, holding all other variables except the feature of interest as they were, the value of this feature affects your prediction.

Requires that Feature Effects has already been computed with request_feature_effect.

See get_feature_effect_metadata for retrieving information on the available sources.

Parameters:
source : string

The source Feature Effects are retrieved for.

Returns:
feature_effects : FeatureEffects

The feature effects data.

Raises:
ClientError (404)

If the feature effects have not been computed or source is not a valid value.
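
A minimal sketch combining request_feature_effect and get_feature_effect (the IDs are placeholders; see get_feature_effect_metadata for the sources actually available):

import datarobot as dr

model = dr.Model.get('project-id', 'model-id')
# Compute Feature Effects first, then retrieve the results for one source
job = model.request_feature_effect()
job.wait_for_completion()
feature_effects = model.get_feature_effect(source='training')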

get_feature_effect_metadata()

Retrieve Feature Effects metadata. Response contains status and available model sources.

  • Feature Effects for the training partition is always available, with the exception of older projects that only supported Feature Effects for validation.
  • When a model is trained into validation or holdout without stacked predictions (i.e., no out-of-sample predictions in those partitions), Feature Effects is not available for validation or holdout.
  • Feature Effects for holdout is not available when holdout was not unlocked for the project.

Use source to retrieve Feature Effects, selecting one of the provided sources.

Returns:
feature_effect_metadata: FeatureEffectMetadata
get_feature_effects_multiclass(source: str = 'training', class_: Optional[str] = None)

Retrieve Feature Effects for the multiclass model.

Feature Effects provide partial dependence and predicted vs actual values for top-500 features ordered by feature impact score.

The partial dependence shows marginal effect of a feature on the target variable after accounting for the average effects of all other predictive features. It indicates how, holding all other variables except the feature of interest as they were, the value of this feature affects your prediction.

Requires that Feature Effects has already been computed with request_feature_effect.

See get_feature_effect_metadata for retrieving information on the available sources.

Parameters:
source : str

The source Feature Effects are retrieved for.

class_ : str or None

The class name Feature Effects are retrieved for.

Returns:
list

The list of multiclass feature effects.

Raises:
ClientError (404)

If Feature Effects have not been computed or source is not a valid value.

get_feature_fit(source: str)

Retrieve Feature Fit for the model.

Feature Fit provides partial dependence and predicted vs actual values for top-500 features ordered by feature importance score.

The partial dependence shows marginal effect of a feature on the target variable after accounting for the average effects of all other predictive features. It indicates how, holding all other variables except the feature of interest as they were, the value of this feature affects your prediction.

Requires that Feature Fit has already been computed with request_feature_effect.

See get_feature_fit_metadata for retrieving information on the available sources.

Parameters:
source : string

The source Feature Fit is retrieved for. One value of [FeatureFitMetadata.sources].

Returns:
feature_fit : FeatureFit

The feature fit data.

Raises:
ClientError (404)

If the feature fit has not been computed or source is not a valid value.

get_feature_fit_metadata()

Retrieve Feature Fit metadata. Response contains status and available model sources.

  • Feature Fit for the training partition is always available, with the exception of older projects that only supported Feature Fit for validation.
  • When a model is trained into validation or holdout without stacked predictions (i.e., no out-of-sample predictions in those partitions), Feature Fit is not available for validation or holdout.
  • Feature Fit for holdout is not available when there is no holdout configured for the project.

Use source to retrieve Feature Fit, selecting one of the provided sources.

Returns:
feature_fit_metadata: FeatureFitMetadata
get_feature_impact(with_metadata: bool = False)

Retrieve the computed Feature Impact results, a measure of the relevance of each feature in the model.

Feature Impact is computed for each column by creating new data with that column randomly permuted (but the others left unchanged), and seeing how the error metric score for the predictions is affected. The ‘impactUnnormalized’ is how much worse the error metric score is when making predictions on this modified data. The ‘impactNormalized’ is normalized so that the largest value is 1. In both cases, larger values indicate more important features.

If a feature is a redundant feature, i.e. once other features are considered it doesn’t contribute much in addition, the ‘redundantWith’ value is the name of the feature that has the highest correlation with this feature. Note that redundancy detection is only available for jobs run after the addition of this feature. When retrieving data that predates this functionality, a NoRedundancyImpactAvailable warning will be issued.

Elsewhere this technique is sometimes called ‘Permutation Importance’.

Requires that Feature Impact has already been computed with request_feature_impact.

Parameters:
with_metadata : bool

The flag indicating if the result should include the metadata as well.

Returns:
list or dict

The feature impact data response depends on the with_metadata parameter. The response is either a dict with metadata and a list with actual data or just a list with that data.

Each list item is a dict with the keys featureName, impactNormalized, impactUnnormalized, redundantWith, and count.

For dict response available keys are:

  • featureImpacts - Feature Impact data, a list of per-feature entries. Each item
    is a dict with the keys featureName, impactNormalized, impactUnnormalized, and redundantWith.
  • shapBased - A boolean that indicates whether Feature Impact was calculated using
    Shapley values.
  • ranRedundancyDetection - A boolean that indicates whether redundant feature
    identification was run while calculating this Feature Impact.
  • rowCount - An integer or None that indicates the number of rows that was used to
    calculate Feature Impact. For the Feature Impact calculated with the default logic, without specifying the rowCount, we return None here.
  • count - An integer with the number of features under the featureImpacts.
Raises:
ClientError (404)

If the feature impacts have not been computed.
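
A minimal sketch using get_or_request_feature_impact so the result is returned whether or not Feature Impact was previously computed (the IDs are placeholders):

import datarobot as dr

model = dr.Model.get('project-id', 'model-id')
feature_impacts = model.get_or_request_feature_impact(max_wait=600)
# Without with_metadata, the result is a list of dicts as described above
for item in sorted(feature_impacts, key=lambda fi: fi['impactNormalized'], reverse=True):
    print(item['featureName'], item['impactNormalized'], item['redundantWith'])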

get_features_used() → List[str]

Query the server to determine which features were used.

Note that the data returned by this method is possibly different than the names of the features in the featurelist used by this model. This method will return the raw features that must be supplied in order for predictions to be generated on a new set of data. The featurelist, in contrast, would also include the names of derived features.

Returns:
features : list of str

The names of the features used in the model.

get_frozen_child_models()

Retrieve the IDs for all models that are frozen from this model.

Returns:
A list of Models
get_labelwise_roc_curves(source, fallback_to_parent_insights=False)

Retrieve a list of LabelwiseRocCurve instances for a multilabel model for the given source and all labels.

New in version v2.24.

Parameters:
source : str

ROC curve data source. Check datarobot.enums.CHART_DATA_SOURCE for possible values.

fallback_to_parent_insights : bool

Optional, if True, this will return ROC curve data for this model’s parent if the ROC curve is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return data from this model’s parent.

Returns:
list of LabelwiseRocCurve

Labelwise ROC Curve instances for source and all labels

Raises:
ClientError

If the insight is not available for this model

(New in version v3.0) TypeError

If the underlying project type is binary

get_leaderboard_ui_permalink()

Returns:
url : str

Permanent static hyperlink to this model at leaderboard.

get_lift_chart(source, fallback_to_parent_insights=False)

Retrieve the model Lift chart for the specified source.

Parameters:
source : str

Lift chart data source. Check datarobot.enums.CHART_DATA_SOURCE for possible values. (New in version v2.23) For time series and OTV models, also accepts values backtest_2, backtest_3, …, up to the number of backtests in the model.

fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return lift chart data for this model’s parent if the lift chart is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return insight data from this model’s parent.

Returns:
LiftChart

Model lift chart data

Raises:
ClientError

If the insight is not available for this model

get_missing_report_info()

Retrieve a report on missing training data that can be used to understand missing values treatment in the model. The report consists of missing values resolutions for the numeric or categorical features that were part of building the model.

Returns:
An iterable of MissingReportPerFeature

The queried model missing report, sorted by missing count (DESCENDING order).

get_model_blueprint_chart()

Retrieve a diagram that can be used to understand data flow in the blueprint.

Returns:
ModelBlueprintChart

The queried model blueprint chart.

get_model_blueprint_documents()

Get documentation for tasks used in this model.

Returns:
list of BlueprintTaskDocument

All documents available for the model.

get_multiclass_feature_impact()

For multiclass it’s possible to calculate feature impact separately for each target class. The method for calculation is exactly the same, calculated in one-vs-all style for each target class.

Requires that Feature Impact has already been computed with request_feature_impact.

Returns:
feature_impacts : list of dict

The feature impact data. Each item is a dict with the keys ‘featureImpacts’ (list), ‘class’ (str). Each item in ‘featureImpacts’ is a dict with the keys ‘featureName’, ‘impactNormalized’, and ‘impactUnnormalized’, and ‘redundantWith’.

Raises:
ClientError (404)

If the multiclass feature impacts have not been computed.

get_multiclass_lift_chart(source, fallback_to_parent_insights=False)

Retrieve model Lift chart for the specified source.

Parameters:
source : str

Lift chart data source. Check datarobot.enums.CHART_DATA_SOURCE for possible values.

fallback_to_parent_insights : bool

Optional, if True, this will return lift chart data for this model’s parent if the lift chart is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return insight data from this model’s parent.

Returns:
list of LiftChart

Model lift chart data for each saved target class

Raises:
ClientError

If the insight is not available for this model

get_multilabel_lift_charts(source, fallback_to_parent_insights=False)

Retrieve model Lift charts for the specified source.

New in version v2.24.

Parameters:
source : str

Lift chart data source. Check datarobot.enums.CHART_DATA_SOURCE for possible values.

fallback_to_parent_insights : bool

Optional, if True, this will return lift chart data for this model’s parent if the lift chart is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return insight data from this model’s parent.

Returns:
list of LiftChart

Model lift chart data for each saved target class

Raises:
ClientError

If the insight is not available for this model

get_num_iterations_trained()

Retrieves the number of estimators trained by early-stopping tree-based models.

New in version v2.22.

Returns:
projectId: str

id of project containing the model

modelId: str

id of the model

data: array

list of numEstimatorsItem objects, one for each modeling stage.

numEstimatorsItem will be of the form:
stage: str

indicates the modeling stage (for multi-stage models); None for single-stage models

numIterations: int

the number of estimators or iterations trained by the model

get_or_request_feature_effect(source: str, max_wait: int = 600, row_count: Optional[int] = None)

Retrieve feature effect for the model, requesting a job if it hasn’t been run previously

See get_feature_effect_metadata for retrieving information on the available sources.

Parameters:
max_wait : int, optional

The maximum time to wait for a requested feature effect job to complete before erroring

row_count : int, optional

(New in version v2.21) The sample size to use for Feature Impact computation. Minimum is 10 rows. Maximum is 100000 rows or the training sample size of the model, whichever is less.

source : string

The source Feature Effects are retrieved for.

Returns:
feature_effects : FeatureEffects

The feature effects data.

get_or_request_feature_effects_multiclass(source, top_n_features=None, features=None, row_count=None, class_=None, max_wait=600)

Retrieve Feature Effects for the multiclass model, requesting a job if it hasn’t been run previously.

Parameters:
source : string

The source Feature Effects are retrieved for.

class_ : str or None

The class name Feature Effects are retrieved for.

row_count : int

The number of rows from dataset to use for Feature Impact calculation.

top_n_features : int or None

Number of top features (ranked by Feature Impact) used to calculate Feature Effects.

features : list or None

The list of features used to calculate Feature Effects.

max_wait : int, optional

The maximum time to wait for a requested Feature Effects job to complete before erroring.

Returns:
feature_effects : list of FeatureEffectsMulticlass

The list of multiclass feature effects data.

get_or_request_feature_fit(source: str, max_wait: int = 600)

Retrieve feature fit for the model, requesting a job if it hasn’t been run previously

See get_feature_fit_metadata for retrieving information on the available sources.

Parameters:
max_wait : int, optional

The maximum time to wait for a requested feature fit job to complete before erroring

source : string

The source Feature Fit is retrieved for. One value of [FeatureFitMetadata.sources].

Returns:
feature_fit : FeatureFit

The feature fit data.

get_or_request_feature_impact(max_wait: int = 600, **kwargs)

Retrieve feature impact for the model, requesting a job if it hasn’t been run previously

Parameters:
max_wait : int, optional

The maximum time to wait for a requested feature impact job to complete before erroring

**kwargs

Arbitrary keyword arguments passed to request_feature_impact.

Returns:
feature_impacts : list or dict

The feature impact data. See get_feature_impact for the exact schema.

get_parameters()

Retrieve model parameters.

Returns:
ModelParameters

Model parameters for this model.

get_pareto_front()

Retrieve the Pareto Front for a Eureqa model.

This method is only supported for Eureqa models.

Returns:
ParetoFront

Model ParetoFront data

get_prime_eligibility()

Check if this model can be approximated with DataRobot Prime

Returns:
prime_eligibility : dict

a dict indicating whether a model can be approximated with DataRobot Prime (key can_make_prime) and why it may be ineligible (key message)

get_residuals_chart(source, fallback_to_parent_insights=False)

Retrieve model residuals chart for the specified source.

Parameters:
source : str

Residuals chart data source. Check datarobot.enums.CHART_DATA_SOURCE for possible values.

fallback_to_parent_insights : bool

Optional, if True, this will return residuals chart data for this model’s parent if the residuals chart is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return residuals data from this model’s parent.

Returns:
ResidualsChart

Model residuals chart data

Raises:
ClientError

If the insight is not available for this model

get_roc_curve(source, fallback_to_parent_insights=False)

Retrieve the ROC curve for a binary model for the specified source.

Parameters:
source : str

ROC curve data source. Check datarobot.enums.CHART_DATA_SOURCE for possible values. (New in version v2.23) For time series and OTV models, also accepts values backtest_2, backtest_3, …, up to the number of backtests in the model.

fallback_to_parent_insights : bool

(New in version v2.14) Optional, if True, this will return ROC curve data for this model’s parent if the ROC curve is not available for this model and the model has a defined parent model. If omitted or False, or there is no parent model, will not attempt to return data from this model’s parent.

Returns:
RocCurve

Model ROC curve data

Raises:
ClientError

If the insight is not available for this model

(New in version v3.0) TypeError

If the underlying project type is multilabel
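
A minimal sketch for a binary classification model (the IDs are placeholders; ‘validation’ is one of the datarobot.enums.CHART_DATA_SOURCE values):

import datarobot as dr

model = dr.Model.get('project-id', 'model-id')
roc = model.get_roc_curve('validation')
# Fall back to the parent model's insight when this model has a defined parent
lift = model.get_lift_chart('validation', fallback_to_parent_insights=True)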

get_rulesets() → List[datarobot.models.ruleset.Ruleset]

List the rulesets, generated by DataRobot Prime, that approximate this model

If this model hasn’t been approximated yet, will return an empty list. Note that these are rulesets approximating this model, not rulesets used to construct this model.

Returns:
rulesets : list of Ruleset
get_supported_capabilities()

Retrieves a summary of the capabilities supported by a model.

New in version v2.14.

Returns:
supportsBlending: bool

whether the model supports blending

supportsMonotonicConstraints: bool

whether the model supports monotonic constraints

hasWordCloud: bool

whether the model has word cloud data available

eligibleForPrime: bool

whether the model is eligible for Prime

hasParameters: bool

whether the model has parameters that can be retrieved

supportsCodeGeneration: bool

(New in version v2.18) whether the model supports code generation

supportsShap: bool

(New in version v2.18) True if the model supports the Shapley package, i.e. Shapley-based feature importance.

supportsEarlyStopping: bool

(New in version v2.22) True if this is an early stopping tree-based model and number of trained iterations can be retrieved.

get_uri() → str
Returns:
url : str

Permanent static hyperlink to this model at leaderboard.

get_word_cloud(exclude_stop_words=False)

Retrieve word cloud data for the model.

Parameters:
exclude_stop_words : bool, optional

Set to True if you want stopwords filtered out of response.

Returns:
WordCloud

Word cloud data for the model.
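
A minimal sketch, assuming the capability summary returned by get_supported_capabilities is a dict with the keys listed above (the IDs are placeholders):

import datarobot as dr

model = dr.Model.get('project-id', 'model-id')
capabilities = model.get_supported_capabilities()
if capabilities.get('hasWordCloud'):
    word_cloud = model.get_word_cloud(exclude_stop_words=True)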

open_in_browser() → None

Opens class’ relevant web browser location. If default browser is not available the URL is logged.

Note: If text-mode browsers are used, the calling process will block until the user exits the browser.

open_model_browser() → None

Opens model at project leaderboard in web browser. Note: If text-mode browsers are used, the calling process will block until the user exits the browser.

request_approximation()

Request an approximation of this model using DataRobot Prime

This will create several rulesets that could be used to approximate this model. After comparing their scores and rule counts, the code used in the approximation can be downloaded and run locally.

Returns:
job : Job

the job generating the rulesets

request_cross_class_accuracy_scores()

Request Cross Class Accuracy scores to be computed for the model.

Returns:
status_id : str

A statusId of computation request.

request_data_disparity_insights(feature, compared_class_names)

Request data disparity insights to be computed for the model.

Parameters:
feature : str

Bias and Fairness protected feature name.

compared_class_names : list(str)

List of two classes to compare

Returns:
status_id : str

A statusId of computation request.

request_external_test(dataset_id: str, actual_value_column: Optional[str] = None)

Request external test to compute scores and insights on an external test dataset

Parameters:
dataset_id : string

The dataset to make predictions against (as uploaded from Project.upload_dataset)

actual_value_column : string, optional

(New in version v2.21) For time series unsupervised projects only. Actual value column can be used to calculate the classification metrics and insights on the prediction dataset. Can’t be provided with the forecast_point parameter.

Returns:
job : Job

a Job representing external dataset insights computation

request_fairness_insights(fairness_metrics_set=None)

Request fairness insights to be computed for the model.

Parameters:
fairness_metrics_set : str, optional

Can be one of <datarobot.enums.FairnessMetricsSet>. The fairness metric used to calculate the fairness scores.

Returns:
status_id : str

A statusId of computation request.

request_feature_effect(row_count: Optional[int] = None)

Request feature effects to be computed for the model.

See get_feature_effect for more information on the result of the job.

Parameters:
row_count : int

(New in version v2.21) The sample size to use for Feature Impact computation. Minimum is 10 rows. Maximum is 100000 rows or the training sample size of the model, whichever is less.

Returns:
job : Job

A Job representing the feature effect computation. To get the completed feature effect data, use job.get_result or job.get_result_when_complete.

Raises:
JobAlreadyRequested (422)

If the feature effect has already been requested.

request_feature_effects_multiclass(row_count: Optional[int] = None, top_n_features: Optional[int] = None, features=None)

Request Feature Effects computation for the multiclass model.

See get_feature_effect for more information on the result of the job.

Parameters:
row_count : int

The number of rows from dataset to use for Feature Impact calculation.

top_n_features : int or None

Number of top features (ranked by feature impact) used to calculate Feature Effects.

features : list or None

The list of features used to calculate Feature Effects.

Returns:
job : Job

A Job representing Feature Effect computation. To get the completed Feature Effect data, use job.get_result or job.get_result_when_complete.

request_feature_fit()

Request feature fit to be computed for the model.

See get_feature_fit for more information on the result of the job.

Returns:
job : Job

A Job representing the feature fit computation. To get the completed feature fit data, use job.get_result or job.get_result_when_complete.

Raises:
JobAlreadyRequested (422)

If the feature fit has already been requested.

request_feature_impact(row_count: Optional[int] = None, with_metadata: bool = False)

Request feature impacts to be computed for the model.

See get_feature_impact for more information on the result of the job.

Parameters:
row_count : int

The sample size (specified in rows) to use for Feature Impact computation. This is not supported for unsupervised, multi-class (that has a separate method) and time series projects.

Returns:
job : Job

A Job representing the feature impact computation. To get the completed feature impact data, use job.get_result or job.get_result_when_complete.

Raises:
JobAlreadyRequested (422)

If the feature impacts have already been requested.

request_frozen_datetime_model(training_row_count: Optional[int] = None, training_duration: Optional[str] = None, training_start_date: Optional[datetime] = None, training_end_date: Optional[datetime] = None, time_window_sample_pct: Optional[int] = None, sampling_method: Optional[str] = None) → ModelJob

Train a new frozen model with parameters from this model.

Requires that this model belongs to a datetime partitioned project. If it does not, an error will occur when submitting the job.

Frozen models use the same tuning parameters as their parent model instead of independently optimizing them to allow efficiently retraining models on larger amounts of the training data.

In addition to training_row_count and training_duration, frozen datetime models may be trained on an exact date range. Only one of training_row_count, training_duration, or training_start_date and training_end_date should be specified.

Models specified using training_start_date and training_end_date are the only ones that can be trained into the holdout data (once the holdout is unlocked).

All durations should be specified with a duration string such as those returned by the partitioning_methods.construct_duration_string helper method. Please see datetime partitioned project documentation for more information on duration strings.

Parameters:
training_row_count : int, optional

the number of rows of data that should be used to train the model. If specified, training_duration may not be specified.

training_duration : str, optional

a duration string specifying what time range the data used to train the model should span. If specified, training_row_count may not be specified.

training_start_date : datetime.datetime, optional

the start date of the data to train to model on. Only rows occurring at or after this datetime will be used. If training_start_date is specified, training_end_date must also be specified.

training_end_date : datetime.datetime, optional

the end date of the data to train the model on. Only rows occurring strictly before this datetime will be used. If training_end_date is specified, training_start_date must also be specified.

time_window_sample_pct : int, optional

may only be specified when the requested model is trained on a time window (e.g. a duration or start and end dates). An integer between 1 and 99 indicating the percentage to sample by within the window. The points kept are determined by a random uniform sample. If specified, training_duration must also be specified; otherwise an error will occur.

sampling_method : str, optional

(New in version v2.23) defines the way training data is selected. Can be either random or latest. In combination with training_row_count defines how rows are selected from backtest (latest by default). When training data is defined using time range (training_duration or use_project_settings) this setting changes the way time_window_sample_pct is applied (random by default). Applicable to OTV projects only.

Returns:
model_job : ModelJob

the modeling job training a frozen model

request_frozen_model(sample_pct: Optional[float] = None, training_row_count: Optional[int] = None) → ModelJob

Train a new frozen model with parameters from this model

Note

This method only works if project the model belongs to is not datetime partitioned. If it is, use request_frozen_datetime_model instead.

Frozen models use the same tuning parameters as their parent model instead of independently optimizing them to allow efficiently retraining models on larger amounts of the training data.

Parameters:
sample_pct : float

optional, the percentage of the dataset to use with the model. If not provided, will use the value from this model.

training_row_count : int

(New in version v2.9) optional, the integer number of rows of the dataset to use with the model. Only one of sample_pct and training_row_count should be specified.

Returns:
model_job : ModelJob

the modeling job training a frozen model
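
A minimal sketch of training a frozen copy of this model on more rows (the IDs and the row count are placeholders):

import datarobot as dr

model = dr.Model.get('project-id', 'model-id')
model_job = model.request_frozen_model(training_row_count=50000)
frozen_model = model_job.get_result_when_complete(max_wait=3600)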

request_predictions(dataset_id: Optional[str] = None, dataset: Optional[Dataset] = None, dataframe: Optional[pd.DataFrame] = None, file_path: Optional[str] = None, file: Optional[IOBase] = None, include_prediction_intervals: Optional[bool] = None, prediction_intervals_size: Optional[int] = None, forecast_point: Optional[datetime] = None, predictions_start_date: Optional[datetime] = None, predictions_end_date: Optional[datetime] = None, actual_value_column: Optional[str] = None, explanation_algorithm: Optional[str] = None, max_explanations: Optional[int] = None, max_ngram_explanations: Optional[Union[int, str]] = None) → PredictJob

Requests predictions against a previously uploaded dataset.

Parameters:
dataset_id : string, optional

The ID of the dataset to make predictions against (as uploaded from Project.upload_dataset)

dataset : Dataset, optional

The dataset to make predictions against (as uploaded from Project.upload_dataset)

dataframe : pd.DataFrame, optional

(New in v3.0) The dataframe to make predictions against

file_path : str, optional

(New in v3.0) Path to file to make predictions against

file : IOBase, optional

(New in v3.0) File to make predictions against

include_prediction_intervals : bool, optional

(New in v2.16) For time series projects only. Specifies whether prediction intervals should be calculated for this request. Defaults to True if prediction_intervals_size is specified, otherwise defaults to False.

prediction_intervals_size : int, optional

(New in v2.16) For time series projects only. Represents the percentile to use for the size of the prediction intervals. Defaults to 80 if include_prediction_intervals is True. Prediction intervals size must be between 1 and 100 (inclusive).

forecast_point : datetime.datetime or None, optional

(New in version v2.20) For time series projects only. This is the default point relative to which predictions will be generated, based on the forecast window of the project. See the time series prediction documentation for more information.

predictions_start_date : datetime.datetime or None, optional

(New in version v2.20) For time series projects only. The start date for bulk predictions. Note that this parameter is for generating historical predictions using the training data. This parameter should be provided in conjunction with predictions_end_date. Can’t be provided with the forecast_point parameter.

predictions_end_date : datetime.datetime or None, optional

(New in version v2.20) For time series projects only. The end date for bulk predictions, exclusive. Note that this parameter is for generating historical predictions using the training data. This parameter should be provided in conjunction with predictions_start_date. Can’t be provided with the forecast_point parameter.

actual_value_column : string, optional

(New in version v2.21) For time series unsupervised projects only. Actual value column can be used to calculate the classification metrics and insights on the prediction dataset. Can’t be provided with the forecast_point parameter.

explanation_algorithm : str, optional

(New in version v2.21) If set to ‘shap’, the response will include prediction explanations based on the SHAP explainer (SHapley Additive exPlanations). Defaults to null (no prediction explanations).

max_explanations : int, optional

(New in version v2.21) Specifies the maximum number of explanation values that should be returned for each row, ordered by absolute value, greatest to least. If null, no limit. In the case of ‘shap’: if the number of features is greater than the limit, the sum of remaining values will also be returned as shapRemainingTotal. Defaults to null. Cannot be set if explanation_algorithm is omitted.

max_ngram_explanations : int or str, optional

(New in version v2.29) Specifies the maximum number of text explanation values that should be returned. If set to all, text explanations will be computed and all the ngram explanations will be returned. If set to a positive integer, text explanations will be computed and that many ngram explanations, sorted in descending order, will be returned. By default, text explanations are not computed.

Returns:
job : PredictJob

The job computing the predictions
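
A minimal sketch assuming the prediction data is uploaded with Project.upload_dataset and that the returned dataset object exposes an id attribute (the IDs and file path are placeholders):

import datarobot as dr

project = dr.Project.get('project-id')
model = dr.Model.get('project-id', 'model-id')
dataset = project.upload_dataset('./data_to_predict.csv')
predict_job = model.request_predictions(dataset_id=dataset.id)
predictions = predict_job.get_result_when_complete()  # a pandas.DataFrame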

request_training_predictions(data_subset, explanation_algorithm=None, max_explanations=None)

Start a job to build training predictions

Parameters:
data_subset : str

data set definition to build predictions on. Choices are:

  • dr.enums.DATA_SUBSET.ALL or string all for all data available. Not valid for
    models in datetime partitioned projects
  • dr.enums.DATA_SUBSET.VALIDATION_AND_HOLDOUT or string validationAndHoldout for
    all data except training set. Not valid for models in datetime partitioned projects
  • dr.enums.DATA_SUBSET.HOLDOUT or string holdout for holdout data set only
  • dr.enums.DATA_SUBSET.ALL_BACKTESTS or string allBacktests for downloading
    the predictions for all backtest validation folds. Requires the model to have successfully scored all backtests. Datetime partitioned projects only.
explanation_algorithm : dr.enums.EXPLANATIONS_ALGORITHM

(New in v2.21) Optional. If set to dr.enums.EXPLANATIONS_ALGORITHM.SHAP, the response will include prediction explanations based on the SHAP explainer (SHapley Additive exPlanations). Defaults to None (no prediction explanations).

max_explanations : int

(New in v2.21) Optional. Specifies the maximum number of explanation values that should be returned for each row, ordered by absolute value, greatest to least. In the case of dr.enums.EXPLANATIONS_ALGORITHM.SHAP: If not set, explanations are returned for all features. If the number of features is greater than the max_explanations, the sum of remaining values will also be returned as shap_remaining_total. Max 100. Defaults to null for datasets narrower than 100 columns, defaults to 100 for datasets wider than 100 columns. Is ignored if explanation_algorithm is not set.

Returns:
Job

an instance of the created async Job
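
A minimal sketch requesting training predictions on the holdout partition (the IDs are placeholders):

import datarobot as dr

model = dr.Model.get('project-id', 'model-id')
job = model.request_training_predictions(dr.enums.DATA_SUBSET.HOLDOUT)
training_predictions = job.get_result_when_complete(max_wait=600)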

request_transferable_export(prediction_intervals_size: Optional[int] = None) → Job

Request generation of an exportable model file for use in an on-premise DataRobot standalone prediction environment.

This function can only be used if model export is enabled, and will only be useful if you have an on-premise environment in which to import it.

This function does not download the exported file. Use download_export for that.

Parameters:
prediction_intervals_size : int, optional

(New in v2.19) For time series projects only. Represents the percentile to use for the size of the prediction intervals. Prediction intervals size must be between 1 and 100 (inclusive).

Returns:
Job

Examples

model = datarobot.Model.get('project-id', 'model-id')
job = model.request_transferable_export()
job.wait_for_completion()
model.download_export('my_exported_model.drmodel')

# Client must be configured to use standalone prediction server for import:
datarobot.Client(token='my-token-at-standalone-server',
                 endpoint='standalone-server-url/api/v2')

imported_model = datarobot.ImportedModel.create('my_exported_model.drmodel')
retrain(sample_pct: Optional[float] = None, featurelist_id: Optional[str] = None, training_row_count: Optional[int] = None, n_clusters: Optional[int] = None) → ModelJob

Submit a job to the queue to retrain this model on a different sample size, featurelist, or number of clusters.

Parameters:
sample_pct: float, optional

The sample size, as a percentage (1 to 100) of the data, to use in training. If this parameter is used then training_row_count should not be given.

featurelist_id : str, optional

The featurelist id

training_row_count : int, optional

The number of rows used to train the model. If this parameter is used, then sample_pct should not be given.

n_clusters: int, optional

(new in version 2.27) number of clusters to use in an unsupervised clustering model. This parameter is used only for unsupervised clustering models that do not determine the number of clusters automatically.

Returns:
job : ModelJob

The created job that is retraining the model
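
A minimal sketch of retraining this model’s blueprint on a larger row count (the IDs and the row count are placeholders):

import datarobot as dr

model = dr.Model.get('project-id', 'model-id')
job = model.retrain(training_row_count=100000)
retrained_model = job.get_result_when_complete(max_wait=3600)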

set_prediction_threshold(threshold)

Set a custom prediction threshold for the model.

May not be used once prediction_threshold_read_only is True for this model.

Parameters:
threshold : float

Only used for binary classification projects. The threshold to use when deciding between the positive and negative classes when making predictions. Should be between 0.0 and 1.0 (inclusive).

star_model() → None

Mark the model as starred.

Model stars propagate to the web application and the API, and can be used to filter when listing models.

start_advanced_tuning_session()

Start an Advanced Tuning session. Returns an object that helps set up arguments for an Advanced Tuning model execution.

As of v2.17, all models other than blenders, open source, prime, baseline and user-created support Advanced Tuning.

Returns:
AdvancedTuningSession

Session for setting up and running Advanced Tuning on a model

train(sample_pct: Optional[float] = None, featurelist_id: Optional[str] = None, scoring_type: Optional[str] = None, training_row_count: Optional[int] = None, monotonic_increasing_featurelist_id: Union[str, object, None] = <object object>, monotonic_decreasing_featurelist_id: Union[str, object, None] = <object object>) → str

Train the blueprint used in model on a particular featurelist or amount of data.

This method creates a new training job for worker and appends it to the end of the queue for this project. After the job has finished you can get the newly trained model by retrieving it from the project leaderboard, or by retrieving the result of the job.

Either sample_pct or training_row_count can be used to specify the amount of data to use, but not both. If neither are specified, a default of the maximum amount of data that can safely be used to train any blueprint without going into the validation data will be selected.

In smart-sampled projects, sample_pct and training_row_count are assumed to be in terms of rows of the minority class.

Note

For datetime partitioned projects, see train_datetime instead.

Parameters:
sample_pct : float, optional

The amount of data to use for training, as a percentage of the project dataset from 0 to 100.

featurelist_id : str, optional

The identifier of the featurelist to use. If not defined, the featurelist of this model is used.

scoring_type : str, optional

Either validation or crossValidation (also dr.SCORING_TYPE.validation or dr.SCORING_TYPE.cross_validation). validation is available for every partitioning type, and indicates that the default model validation should be used for the project. If the project uses a form of cross-validation partitioning, crossValidation can also be used to indicate that all of the available training/validation combinations should be used to evaluate the model.

training_row_count : int, optional

The number of rows to use to train the requested model.

monotonic_increasing_featurelist_id : str

(new in version 2.11) optional, the id of the featurelist that defines the set of features with a monotonically increasing relationship to the target. Passing None disables increasing monotonicity constraint. Default (dr.enums.MONOTONICITY_FEATURELIST_DEFAULT) is the one specified by the blueprint.

monotonic_decreasing_featurelist_id : str

(new in version 2.11) optional, the id of the featurelist that defines the set of features with a monotonically decreasing relationship to the target. Passing None disables decreasing monotonicity constraint. Default (dr.enums.MONOTONICITY_FEATURELIST_DEFAULT) is the one specified by the blueprint.

Returns:
model_job_id : str

id of created job, can be used as parameter to ModelJob.get method or wait_for_async_model_creation function

Examples

project = Project.get('project-id')
model = Model.get('project-id', 'model-id')
model_job_id = model.train(training_row_count=project.max_train_rows)
train_datetime(featurelist_id: Optional[str] = None, training_row_count: Optional[int] = None, training_duration: Optional[str] = None, time_window_sample_pct: Optional[int] = None, monotonic_increasing_featurelist_id: Optional[Union[str, object]] = <object object>, monotonic_decreasing_featurelist_id: Optional[Union[str, object]] = <object object>, use_project_settings: bool = False, sampling_method: Optional[str] = None) → ModelJob

Trains this model on a different featurelist or sample size.

Requires that this model is part of a datetime partitioned project; otherwise, an error will occur.

All durations should be specified with a duration string such as those returned by the partitioning_methods.construct_duration_string helper method. Please see datetime partitioned project documentation for more information on duration strings.

Parameters:
featurelist_id : str, optional

the featurelist to use to train the model. If not specified, the featurelist of this model is used.

training_row_count : int, optional

the number of rows of data that should be used to train the model. If specified, neither training_duration nor use_project_settings may be specified.

training_duration : str, optional

a duration string specifying what time range the data used to train the model should span. If specified, neither training_row_count nor use_project_settings may be specified.

use_project_settings : bool, optional

(New in version v2.20) defaults to False. If True, indicates that the custom backtest partitioning settings specified by the user will be used to train the model and evaluate backtest scores. If specified, neither training_row_count nor training_duration may be specified.

time_window_sample_pct : int, optional

may only be specified when the requested model is trained on a time window (e.g. a duration or start and end dates). An integer between 1 and 99 indicating the percentage to sample by within the window. The points kept are determined by a random uniform sample. If specified, training_duration must also be specified; otherwise an error will occur.

sampling_method : str, optional

(New in version v2.23) defines the way training data is selected. Can be either random or latest. In combination with training_row_count defines how rows are selected from backtest (latest by default). When training data is defined using time range (training_duration or use_project_settings) this setting changes the way time_window_sample_pct is applied (random by default). Applicable to OTV projects only.

monotonic_increasing_featurelist_id : str, optional

(New in version v2.18) optional, the id of the featurelist that defines the set of features with a monotonically increasing relationship to the target. Passing None disables increasing monotonicity constraint. Default (dr.enums.MONOTONICITY_FEATURELIST_DEFAULT) is the one specified by the blueprint.

monotonic_decreasing_featurelist_id : str, optional

(New in version v2.18) optional, the id of the featurelist that defines the set of features with a monotonically decreasing relationship to the target. Passing None disables decreasing monotonicity constraint. Default (dr.enums.MONOTONICITY_FEATURELIST_DEFAULT) is the one specified by the blueprint.

Returns:
job : ModelJob

the created job to build the model
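
A minimal sketch for a model in a datetime partitioned project, assuming construct_duration_string is importable from datarobot.helpers.partitioning_methods (the IDs and window settings are placeholders):

import datarobot as dr
from datarobot.helpers.partitioning_methods import construct_duration_string

model = dr.Model.get('project-id', 'model-id')
# Train on a one-year window, sampling 50% of rows at random within it
model_job = model.train_datetime(
    training_duration=construct_duration_string(years=1),
    time_window_sample_pct=50,
    sampling_method='random',
)
new_model = model_job.get_result_when_complete(max_wait=3600)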

unstar_model() → None

Unmark the model as starred.

Model stars propagate to the web application and the API, and can be used to filter when listing models.

Combined Model

See API reference for Combined Model in Segmented Modeling API Reference

Advanced Tuning

class datarobot.models.advanced_tuning.AdvancedTuningSession(model: Model)

A session enabling users to configure and run advanced tuning for a model.

Every model contains a set of one or more tasks. Every task contains a set of zero or more parameters. This class allows tuning the values of each parameter on each task of a model, before running that model.

This session is client-side only and is not persistent. Only the final model, constructed when run is called, is persisted on the DataRobot server.

Attributes:
description : str

Description for the new advance-tuned model. Defaults to the same description as the base model.

get_task_names() → List[str]

Get the list of task names that are available for this model

Returns:
list(str)

List of task names

get_parameter_names(task_name: str) → List[str]

Get the list of parameter names available for a specific task

Returns:
list(str)

List of parameter names

set_parameter(value: Union[int, float, str, List[str]], task_name: Optional[str] = None, parameter_name: Optional[str] = None, parameter_id: Optional[str] = None) → None

Set the value of a parameter to be used

The caller must supply enough of the optional arguments to this function to uniquely identify the parameter that is being set. For example, a less-common parameter name such as ‘building_block__complementary_error_function’ might only be used once (if at all) by a single task in a model. In which case it may be sufficient to simply specify ‘parameter_name’. But a more-common name such as ‘random_seed’ might be used by several of the model’s tasks, and it may be necessary to also specify ‘task_name’ to clarify which task’s random seed is to be set. This function only affects client-side state. It will not check that the new parameter value(s) are valid.

Parameters:
task_name : str

Name of the task whose parameter needs to be set

parameter_name : str

Name of the parameter to set

parameter_id : str

ID of the parameter to set

value : int, float, list, or str

New value for the parameter, with legal values determined by the parameter being set

Raises:
NoParametersFoundException

if no matching parameters are found.

NonUniqueParametersException

if multiple parameters matched the specified filtering criteria

get_parameters() → AdvancedTuningParamsType

Returns the set of parameters available to this model

The returned parameters have one additional key, “value”, reflecting any new values that have been set in this AdvancedTuningSession. When the session is run, “value” will be used, or if it is unset, “current_value”.

Returns:
parameters : dict

“Parameters” dictionary, same as specified on Model.get_advanced_tuning_parameters.

An additional field is added per parameter to the ‘tuning_parameters’ list in the dictionary:
value : int, float, list, or str

The current value of the parameter. None if none has been specified.

run() → ModelJob

Submit this model for Advanced Tuning.

Returns:
datarobot.models.modeljob.ModelJob

The created job to build the model
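
A minimal sketch of a complete tuning session; the task and parameter names below are hypothetical placeholders that would come from get_task_names and get_parameter_names (the IDs are placeholders too):

import datarobot as dr

model = dr.Model.get('project-id', 'model-id')
tune = model.start_advanced_tuning_session()
tune.description = 'Lower learning rate'
print(tune.get_task_names())
# 'Gradient Boosted Trees Regressor' and 'learning_rate' are illustrative names only
tune.set_parameter(task_name='Gradient Boosted Trees Regressor',
                   parameter_name='learning_rate', value=0.05)
model_job = tune.run()
tuned_model = model_job.get_result_when_complete(max_wait=3600)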

ModelJob

datarobot.models.modeljob.wait_for_async_model_creation(project_id, model_job_id, max_wait=600)

Given a project ID and a ModelJob ID, poll the status of the process responsible for model creation until the model is created.

Parameters:
project_id : str

The identifier of the project

model_job_id : str

The identifier of the ModelJob

max_wait : int, optional

Time in seconds after which model creation is considered unsuccessful

Returns:
model : Model

Newly created model

Raises:
AsyncModelCreationError

Raised if status of fetched ModelJob object is error

AsyncTimeoutError

Model wasn’t created in time, specified by max_wait parameter
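
A minimal sketch pairing a training request with this helper (the IDs are placeholders):

import datarobot as dr
from datarobot.models.modeljob import wait_for_async_model_creation

model = dr.Model.get('project-id', 'model-id')
model_job_id = model.train(training_row_count=5000)
new_model = wait_for_async_model_creation(
    project_id='project-id', model_job_id=model_job_id, max_wait=3600)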

class datarobot.models.ModelJob(data, completed_resource_url=None)

Tracks asynchronous work being done within a project

Attributes:
id : int

the id of the job

project_id : str

the id of the project the job belongs to

status : str

the status of the job - will be one of datarobot.enums.QUEUE_STATUS

job_type : str

what kind of work the job is doing - will be ‘model’ for modeling jobs

is_blocked : bool

if true, the job is blocked (cannot be executed) until its dependencies are resolved

sample_pct : float

the percentage of the project’s dataset used in this modeling job

model_type : str

the model this job builds (e.g. ‘Nystroem Kernel SVM Regressor’)

processes : list of str

the processes used by the model

featurelist_id : str

the id of the featurelist used in this modeling job

blueprint : Blueprint

the blueprint used in this modeling job

classmethod from_job(job)

Transforms a generic Job into a ModelJob

Parameters:
job: Job

A generic job representing a ModelJob

Returns:
model_job: ModelJob

A fully populated ModelJob with all the details of the job

Raises:
ValueError:

If the generic Job was not a model job, e.g. job_type != JOB_TYPE.MODEL

classmethod get(project_id, model_job_id)

Fetches one ModelJob. If the job finished, raises PendingJobFinished exception.

Parameters:
project_id : str

The identifier of the project the model belongs to

model_job_id : str

The identifier of the model_job

Returns:
model_job : ModelJob

The pending ModelJob

Raises:
PendingJobFinished

If the job being queried already finished, and the server is re-routing to the finished model.

AsyncFailureError

Querying this resource gave a status code other than 200 or 303

classmethod get_model(project_id, model_job_id)

Fetches a finished model from the job used to create it.

Parameters:
project_id : str

The identifier of the project the model belongs to

model_job_id : str

The identifier of the model_job

Returns:
model : Model

The finished model

Raises:
JobNotFinished

If the job has not finished yet

AsyncFailureError

Querying the model_job in question gave a status code other than 200 or 303

cancel()

Cancel this job. If this job has not finished running, it will be removed and canceled.

get_result(params=None)
Parameters:
params : dict or None

Query parameters to be added to request to get results.

For featureEffects and featureFit jobs, the source param defines the source; if omitted, the default is training.
Returns:
result : object
Return type depends on the job type:
  • for model jobs, a Model is returned
  • for predict jobs, a pandas.DataFrame (with predictions) is returned
  • for featureImpact jobs, a list of dicts by default (see with_metadata parameter of the FeatureImpactJob class and its get() method).
  • for primeRulesets jobs, a list of Rulesets
  • for primeModel jobs, a PrimeModel
  • for primeDownloadValidation jobs, a PrimeFile
  • for predictionExplanationInitialization jobs, a PredictionExplanationsInitialization
  • for predictionExplanations jobs, a PredictionExplanations
  • for featureEffects, a FeatureEffects
  • for featureFit, a FeatureFit
Raises:
JobNotFinished

If the job is not finished, the result is not available.

AsyncProcessUnsuccessfulError

If the job errored or was aborted

get_result_when_complete(max_wait=600, params=None)
Parameters:
max_wait : int, optional

How long to wait for the job to finish.

params : dict, optional

Query parameters to be added to request.

Returns:
result: object

Return type is the same as would be returned by Job.get_result.

Raises:
AsyncTimeoutError

If the job does not finish in time

AsyncProcessUnsuccessfulError

If the job errored or was aborted

refresh()

Update this object with the latest job data from the server.

wait_for_completion(max_wait: int = 600) → None

Waits for job to complete.

Parameters:
max_wait : int, optional

How long to wait for the job to finish.
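
A minimal usage sketch (the project and job ids below are placeholders): poll a ModelJob until it finishes and then retrieve the resulting model. It assumes the PendingJobFinished exception referenced above is importable from datarobot.errors.

import datarobot as dr
from datarobot.errors import PendingJobFinished

try:
    # Poll the queue entry; raises PendingJobFinished if the job already completed.
    model_job = dr.ModelJob.get("<project-id>", "<model-job-id>")
    model = model_job.get_result_when_complete(max_wait=600)
except PendingJobFinished:
    # The job already finished, so fetch the finished model directly.
    model = dr.ModelJob.get_model("<project-id>", "<model-job-id>")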

Pareto Front

class datarobot.models.pareto_front.ParetoFront(project_id, error_metric, hyperparameters, target_type, solutions)

Pareto front data for a Eureqa model.

The pareto front reflects the tradeoffs between error and complexity for a particular model. The solutions reflect possible Eureqa models at different levels of complexity. By default, only one solution will have a corresponding model, but models can be created for each solution.

Attributes:
project_id : str

the ID of the project the model belongs to

error_metric : str

Eureqa error-metric identifier used to compute error metrics for this search. Note that Eureqa error metrics do NOT correspond 1:1 with DataRobot error metrics – the available metrics are not the same, and are computed from a subset of the training data rather than from the validation data.

hyperparameters : dict

Hyperparameters used by this run of the Eureqa blueprint

target_type : str

Indicates what kind of modeling is being done in this project: either ‘Regression’, ‘Binary’ (binary classification), or ‘Multiclass’ (multiclass classification).

solutions : list(Solution)

Solutions that Eureqa has found to model this data. Some solutions will have greater accuracy. Others will have slightly less accuracy but will use simpler expressions.

classmethod from_server_data(data, keep_attrs=None)

Instantiate an object of this class using the data directly from the server, meaning that the keys may have the wrong camel casing

Parameters:
data : dict

The directly translated dict of JSON from the server. No casing fixes have taken place

keep_attrs : list

List of the dotted namespace notations for attributes to keep within the object structure even if their values are None

class datarobot.models.pareto_front.Solution(eureqa_solution_id, complexity, error, expression, expression_annotated, best_model, project_id)

Eureqa Solution.

A solution represents a possible Eureqa model; however not all solutions have models associated with them. It must have a model created before it can be used to make predictions, etc.

Attributes:
eureqa_solution_id: str

ID of this Solution

complexity: int

Complexity score for this solution. Complexity score is a function of the mathematical operators used in the current solution. The Complexity calculation can be tuned via model hyperparameters.

error: float or None

Error for the current solution, as computed by Eureqa using the ‘error_metric’ error metric. It will be None if the model refitted an existing solution.

expression: str

Eureqa model equation string.

expression_annotated: str

Eureqa model equation string with variable names tagged for easy identification.

best_model: bool

True if the model is determined to be the best

create_model()

Add this solution to the leaderboard, if it is not already present.
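
A brief sketch of working with solutions, assuming a ParetoFront instance named pareto_front has already been retrieved for a Eureqa model (the retrieval call itself depends on your client version and is not shown here). The complexity threshold is illustrative only.

# Add simpler solutions to the leaderboard; create_model is a no-op if the
# solution is already present.
for solution in pareto_front.solutions:
    if not solution.best_model and solution.complexity <= 5:
        solution.create_model()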

Partitioning

class datarobot.RandomCV(holdout_pct, reps, seed=0)

A partition in which observations are randomly assigned to cross-validation groups and the holdout set.

Parameters:
holdout_pct : int

the desired percentage of dataset to assign to holdout set

reps : int

number of cross validation folds to use

seed : int

a seed to use for randomization

class datarobot.StratifiedCV(holdout_pct, reps, seed=0)

A partition in which observations are randomly assigned to cross-validation groups and the holdout set, preserving in each group the same ratio of positive to negative cases as in the original data.

Parameters:
holdout_pct : int

the desired percentage of dataset to assign to holdout set

reps : int

number of cross validation folds to use

seed : int

a seed to use for randomization

class datarobot.GroupCV(holdout_pct, reps, partition_key_cols, seed=0)

A partition in which one column is specified, and rows sharing a common value for that column are guaranteed to stay together in the partitioning into cross-validation groups and the holdout set.

Parameters:
holdout_pct : int

the desired percentage of dataset to assign to holdout set

reps : int

number of cross validation folds to use

partition_key_cols : list

a list containing a single string, where the string is the name of the column whose values should remain together in partitioning

seed : int

a seed to use for randomization

class datarobot.UserCV(user_partition_col, cv_holdout_level, seed=0)

A partition where the cross-validation folds and the holdout set are specified by the user.

Parameters:
user_partition_col : string

the name of the column containing the partition assignments

cv_holdout_level

the value of the partition column indicating a row is part of the holdout set

seed : int

a seed to use for randomization

class datarobot.RandomTVH(holdout_pct, validation_pct, seed=0)

Specifies a partitioning method in which rows are randomly assigned to training, validation, and holdout.

Parameters:
holdout_pct : int

the desired percentage of dataset to assign to holdout set

validation_pct : int

the desired percentage of dataset to assign to validation set

seed : int

a seed to use for randomization

class datarobot.UserTVH(user_partition_col, training_level, validation_level, holdout_level, seed=0)

Specifies a partitioning method in which rows are assigned by the user to training, validation, and holdout sets.

Parameters:
user_partition_col : string

the name of the column containing the partition assignments

training_level

the value of the partition column indicating a row is part of the training set

validation_level

the value of the partition column indicating a row is part of the validation set

holdout_level

the value of the partition column indicating a row is part of the holdout set (use None if you want no holdout set)

seed : int

a seed to use for randomization

class datarobot.StratifiedTVH(holdout_pct, validation_pct, seed=0)

A partition in which observations are randomly assigned to train, validation, and holdout sets, preserving in each group the same ratio of positive to negative cases as in the original data.

Parameters:
holdout_pct : int

the desired percentage of dataset to assign to holdout set

validation_pct : int

the desired percentage of dataset to assign to validation set

seed : int

a seed to use for randomization

class datarobot.GroupTVH(holdout_pct, validation_pct, partition_key_cols, seed=0)

A partition in which one column is specified, and rows sharing a common value for that column are guaranteed to stay together in the partitioning into the training, validation, and holdout sets.

Parameters:
holdout_pct : int

the desired percentage of dataset to assign to holdout set

validation_pct : int

the desired percentage of dataset to assign to validation set

partition_key_cols : list

a list containing a single string, where the string is the name of the column whose values should remain together in partitioning

seed : int

a seed to use for randomization
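
A minimal sketch of choosing a partitioning method and passing it when setting the target; the project id and target column name are placeholders.

import datarobot as dr

project = dr.Project.get("<project-id>")

# 20% holdout, 5 cross-validation folds, stratified by the target, fixed seed.
partitioning = dr.StratifiedCV(holdout_pct=20, reps=5, seed=42)

project.analyze_and_model(target="<target-column>", partitioning_method=partitioning)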

class datarobot.DatetimePartitioningSpecification(datetime_partition_column, autopilot_data_selection_method=None, validation_duration=None, holdout_start_date=None, holdout_duration=None, disable_holdout=None, gap_duration=None, number_of_backtests=None, backtests=None, use_time_series=False, default_to_known_in_advance=False, default_to_do_not_derive=False, feature_derivation_window_start=None, feature_derivation_window_end=None, feature_settings=None, forecast_window_start=None, forecast_window_end=None, windows_basis_unit=None, treat_as_exponential=None, differencing_method=None, periodicities=None, multiseries_id_columns=None, use_cross_series_features=None, aggregation_type=None, cross_series_group_by_columns=None, calendar_id=None, holdout_end_date=None, unsupervised_mode=False, model_splits=None, allow_partial_history_time_series_predictions=False)

Uniquely defines a DatetimePartitioning for some project

Includes only the attributes of DatetimePartitioning that are directly controllable by users, not those determined by the DataRobot application based on the project dataset and the user-controlled settings.

This is the specification that should be passed to Project.analyze_and_model via the partitioning_method parameter. To see the full partitioning based on the project dataset, use DatetimePartitioning.generate.

All durations should be specified with a duration string such as those returned by the partitioning_methods.construct_duration_string helper method. Please see datetime partitioned project documentation for more information on duration strings.

Note that either (holdout_start_date, holdout_duration) or (holdout_start_date, holdout_end_date) can be used to specify holdout partitioning settings.

Attributes:
datetime_partition_column : str

the name of the column whose values as dates are used to assign a row to a particular partition

autopilot_data_selection_method : str

one of datarobot.enums.DATETIME_AUTOPILOT_DATA_SELECTION_METHOD. Whether models created by the autopilot should use “rowCount” or “duration” as their data_selection_method.

validation_duration : str or None

the default validation_duration for the backtests

holdout_start_date : datetime.datetime or None

The start date of holdout scoring data. If holdout_start_date is specified, either holdout_duration or holdout_end_date must also be specified. If disable_holdout is set to True, holdout_start_date, holdout_duration, and holdout_end_date may not be specified.

holdout_duration : str or None

The duration of the holdout scoring data. If holdout_duration is specified, holdout_start_date must also be specified. If disable_holdout is set to True, holdout_duration, holdout_start_date, and holdout_end_date may not be specified.

holdout_end_date : datetime.datetime or None

The end date of holdout scoring data. If holdout_end_date is specified, holdout_start_date must also be specified. If disable_holdout is set to True, holdout_end_date, holdout_start_date, and holdout_duration may not be specified.

disable_holdout : bool or None

(New in version v2.8) Whether to suppress allocating a holdout fold. If set to True, holdout_start_date, holdout_duration, and holdout_end_date may not be specified.

gap_duration : str or None

The duration of the gap between training and holdout scoring data

number_of_backtests : int or None

the number of backtests to use

backtests : list of BacktestSpecification

the exact specification of backtests to use. The indices of the specified backtests should range from 0 to number_of_backtests - 1. If any backtest is left unspecified, a default configuration will be chosen.

use_time_series : bool

(New in version v2.8) Whether to create a time series project (if True) or an OTV project which uses datetime partitioning (if False). The default behavior is to create an OTV project.

default_to_known_in_advance : bool

(New in version v2.11) Optional, default False. Used for time series projects only. Sets whether all features default to being treated as known in advance. Known in advance features are expected to be known for dates in the future when making predictions, e.g., “is this a holiday?”. Individual features can be set to a value different than the default using the feature_settings parameter.

default_to_do_not_derive : bool

(New in v2.17) Optional, default False. Used for time series projects only. Sets whether all features default to being treated as do-not-derive features, excluding them from feature derivation. Individual features can be set to a value different than the default by using the feature_settings parameter.

feature_derivation_window_start : int or None

(New in version v2.8) Only used for time series projects. Offset into the past to define how far back relative to the forecast point the feature derivation window should start. Expressed in terms of the windows_basis_unit and should be a negative value or zero.

feature_derivation_window_end : int or None

(New in version v2.8) Only used for time series projects. Offset into the past to define how far back relative to the forecast point the feature derivation window should end. Expressed in terms of the windows_basis_unit and should be a negative value or zero.

feature_settings : list of FeatureSettings

(New in version v2.9) Optional, a list specifying per feature settings, can be left unspecified.

forecast_window_start : int or None

(New in version v2.8) Only used for time series projects. Offset into the future to define how far forward relative to the forecast point the forecast window should start. Expressed in terms of the windows_basis_unit.

forecast_window_end : int or None

(New in version v2.8) Only used for time series projects. Offset into the future to define how far forward relative to the forecast point the forecast window should end. Expressed in terms of the windows_basis_unit.

windows_basis_unit : string, optional

(New in version v2.14) Only used for time series projects. Indicates which unit is a basis for feature derivation window and forecast window. Valid options are detected time unit (one of the datarobot.enums.TIME_UNITS) or “ROW”. If omitted, the default value is the detected time unit.

treat_as_exponential : string, optional

(New in version v2.9) defaults to “auto”. Used to specify whether to treat data as exponential trend and apply transformations like log-transform. Use values from the datarobot.enums.TREAT_AS_EXPONENTIAL enum.

differencing_method : string, optional

(New in version v2.9) defaults to “auto”. Used to specify which differencing method to apply in case the data is stationary. Use values from the datarobot.enums.DIFFERENCING_METHOD enum.

periodicities : list of Periodicity, optional

(New in version v2.9) a list of datarobot.Periodicity. Periodicities units should be “ROW”, if the windows_basis_unit is “ROW”.

multiseries_id_columns : list of str or null

(New in version v2.11) a list of the names of multiseries id columns to define series within the training data. Currently only one multiseries id column is supported.

use_cross_series_features : bool

(New in version v2.14) Whether to use cross series features.

aggregation_type : str, optional

(New in version v2.14) The aggregation type to apply when creating cross series features. Optional, must be one of “total” or “average”.

cross_series_group_by_columns : list of str, optional

(New in version v2.15) List of columns (currently of length 1). Optional setting that indicates how to further split series into related groups. For example, if every series is sales of an individual product, the series group-by could be the product category with values like “men’s clothing”, “sports equipment”, etc. Can only be used in a multiseries project with use_cross_series_features set to True.

calendar_id : str, optional

(New in version v2.15) The id of the CalendarFile to use with this project.

unsupervised_mode: bool, optional

(New in version v2.20) defaults to False, indicates whether partitioning should be constructed for the unsupervised project.

model_splits: int, optional

(New in version v2.21) Sets the cap on the number of jobs per model used when building models, to control the number of jobs in the queue. A higher number of model splits will allow for less downsampling, leading to the use of more post-processed data.

allow_partial_history_time_series_predictions: bool, optional

(New in version v2.24) Whether to allow time series models to make predictions using partial historical data.

collect_payload()

Set up the dict that should be sent to the server when setting the target

Returns:
partitioning_spec : dict

prep_payload(project_id, max_wait=600)

Run any necessary validation and prep of the payload, including async operations

Mainly used for the datetime partitioning spec but implemented in general for consistency

update(**kwargs) → None

Update this instance, matching attributes to kwargs

Mainly used for the datetime partitioning spec but implemented in general for consistency
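
As an illustration, a time series specification might be constructed as follows. The column names, window offsets, and backtest count are placeholders, not recommendations, and project is assumed to be a previously retrieved Project.

import datarobot as dr

spec = dr.DatetimePartitioningSpecification(
    datetime_partition_column="date",
    use_time_series=True,
    multiseries_id_columns=["series_id"],
    feature_derivation_window_start=-28,
    feature_derivation_window_end=0,
    forecast_window_start=1,
    forecast_window_end=7,
    number_of_backtests=3,
)

# Pass the specification when setting the target.
project.analyze_and_model(target="<target-column>", partitioning_method=spec)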

class datarobot.BacktestSpecification(index, gap_duration=None, validation_start_date=None, validation_duration=None, validation_end_date=None, primary_training_start_date=None, primary_training_end_date=None)

Uniquely defines a Backtest used in a DatetimePartitioning

Includes only the attributes of a backtest directly controllable by users. The other attributes are assigned by the DataRobot application based on the project dataset and the user-controlled settings.

There are two ways to specify an individual backtest:

Option 1: Use index, gap_duration, validation_start_date, and validation_duration. All durations should be specified with a duration string such as those returned by the partitioning_methods.construct_duration_string helper method.

from datetime import datetime

import datarobot as dr

partitioning_spec = dr.DatetimePartitioningSpecification(
    backtests=[
        # modify the first backtest using option 1
        dr.BacktestSpecification(
            index=0,
            gap_duration=dr.partitioning_methods.construct_duration_string(),
            validation_start_date=datetime(year=2010, month=1, day=1),
            validation_duration=dr.partitioning_methods.construct_duration_string(years=1),
        )
    ],
    # other partitioning settings...
)

Option 2 (New in version v2.20): Use index, primary_training_start_date, primary_training_end_date, validation_start_date, and validation_end_date. In this case, note that setting primary_training_end_date and validation_start_date to the same timestamp will result with no gap being created.

from datetime import datetime

import datarobot as dr

partitioning_spec = dr.DatetimePartitioningSpecification(
    backtests=[
        # modify the first backtest using option 2
        dr.BacktestSpecification(
            index=0,
            primary_training_start_date=datetime(year=2005, month=1, day=1),
            primary_training_end_date=datetime(year=2010, month=1, day=1),
            validation_start_date=datetime(year=2010, month=1, day=1),
            validation_end_date=datetime(year=2011, month=1, day=1),
        )
    ],
    # other partitioning settings...
)

All durations should be specified with a duration string such as those returned by the partitioning_methods.construct_duration_string helper method. Please see datetime partitioned project documentation for more information on duration strings.

Attributes:
index : int

the index of the backtest to update

gap_duration : str

a duration string specifying the desired duration of the gap between training and validation scoring data for the backtest

validation_start_date : datetime.datetime

the desired start date of the validation scoring data for this backtest

validation_duration : str

a duration string specifying the desired duration of the validation scoring data for this backtest

validation_end_date : datetime.datetime

the desired end date of the validation scoring data for this backtest

primary_training_start_date : datetime.datetime

the desired start date of the training partition for this backtest

primary_training_end_date : datetime.datetime

the desired end date of the training partition for this backtest

class datarobot.FeatureSettings(feature_name, known_in_advance=None, do_not_derive=None)

Per feature settings

Attributes:
feature_name : string

name of the feature

known_in_advance : bool

(New in version v2.11) Optional, for time series projects only. Sets whether the feature is known in advance, i.e., values for future dates are known at prediction time. If not specified, the feature uses the value from the default_to_known_in_advance flag.

do_not_derive : bool

(New in v2.17) Optional, for time series projects only. Sets whether the feature is excluded from feature derivation. If not specified, the feature uses the value from the default_to_do_not_derive flag.

collect_payload(use_a_priori=False)
Parameters:
use_a_priori : bool

Switch to using the older a_priori key name instead of known_in_advance. Default: False

Returns:
FeatureSettings dictionary representation
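
A short sketch of combining per-feature settings with a time series specification; the feature and column names are illustrative.

import datarobot as dr

feature_settings = [
    dr.FeatureSettings("holiday", known_in_advance=True),
    dr.FeatureSettings("row_id", do_not_derive=True),
]

spec = dr.DatetimePartitioningSpecification(
    datetime_partition_column="date",
    use_time_series=True,
    feature_settings=feature_settings,
)
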
class datarobot.Periodicity(time_steps, time_unit)

Periodicity configuration

Parameters:
time_steps : int

Time step value

time_unit : string

Time step unit, valid options are values from datarobot.enums.TIME_UNITS

Examples

import datarobot as dr
periodicities = [
    dr.Periodicity(time_steps=10, time_unit=dr.enums.TIME_UNITS.HOUR),
    dr.Periodicity(time_steps=600, time_unit=dr.enums.TIME_UNITS.MINUTE)]
spec = dr.DatetimePartitioningSpecification(
    # ...
    periodicities=periodicities
)

class datarobot.DatetimePartitioning(project_id=None, datetime_partition_column=None, date_format=None, autopilot_data_selection_method=None, validation_duration=None, available_training_start_date=None, available_training_duration=None, available_training_row_count=None, available_training_end_date=None, primary_training_start_date=None, primary_training_duration=None, primary_training_row_count=None, primary_training_end_date=None, gap_start_date=None, gap_duration=None, gap_row_count=None, gap_end_date=None, disable_holdout=None, holdout_start_date=None, holdout_duration=None, holdout_row_count=None, holdout_end_date=None, number_of_backtests=None, backtests=None, total_row_count=None, use_time_series=False, default_to_known_in_advance=False, default_to_do_not_derive=False, feature_derivation_window_start=None, feature_derivation_window_end=None, feature_settings=None, forecast_window_start=None, forecast_window_end=None, windows_basis_unit=None, treat_as_exponential=None, differencing_method=None, periodicities=None, multiseries_id_columns=None, number_of_known_in_advance_features=0, number_of_do_not_derive_features=0, use_cross_series_features=None, aggregation_type=None, cross_series_group_by_columns=None, calendar_id=None, calendar_name=None, model_splits=None, allow_partial_history_time_series_predictions=False)

Full partitioning of a project for datetime partitioning.

To instantiate, use DatetimePartitioning.get(project_id).

Includes both the attributes specified by the user, as well as those determined by the DataRobot application based on the project dataset. In order to use a partitioning to set the target, call to_specification and pass the resulting DatetimePartitioningSpecification to Project.analyze_and_model via the partitioning_method parameter.

The available training data corresponds to all the data available for training, while the primary training data corresponds to the data that can be used to train while ensuring that all backtests are available. If a model is trained with more data than is available in the primary training data, then all backtests may not have scores available.

All durations are specified with a duration string such as those returned by the partitioning_methods.construct_duration_string helper method. Please see datetime partitioned project documentation for more information on duration strings.

Attributes:
project_id : str

the id of the project this partitioning applies to

datetime_partition_column : str

the name of the column whose values as dates are used to assign a row to a particular partition

date_format : str

the format (e.g. “%Y-%m-%d %H:%M:%S”) by which the partition column was interpreted (compatible with strftime)

autopilot_data_selection_method : str

one of datarobot.enums.DATETIME_AUTOPILOT_DATA_SELECTION_METHOD. Whether models created by the autopilot use “rowCount” or “duration” as their data_selection_method.

validation_duration : str or None

the validation duration specified when initializing the partitioning - not directly significant if the backtests have been modified, but used as the default validation_duration for the backtests. Can be absent if this is a time series project with an irregular primary date/time feature.

available_training_start_date : datetime.datetime

The start date of the available training data for scoring the holdout

available_training_duration : str

The duration of the available training data for scoring the holdout

available_training_row_count : int or None

The number of rows in the available training data for scoring the holdout. Only available when retrieving the partitioning after setting the target.

available_training_end_date : datetime.datetime

The end date of the available training data for scoring the holdout

primary_training_start_date : datetime.datetime or None

The start date of primary training data for scoring the holdout. Unavailable when the holdout fold is disabled.

primary_training_duration : str

The duration of the primary training data for scoring the holdout

primary_training_row_count : int or None

The number of rows in the primary training data for scoring the holdout. Only available when retrieving the partitioning after setting the target.

primary_training_end_date : datetime.datetime or None

The end date of the primary training data for scoring the holdout. Unavailable when the holdout fold is disabled.

gap_start_date : datetime.datetime or None

The start date of the gap between training and holdout scoring data. Unavailable when the holdout fold is disabled.

gap_duration : str

The duration of the gap between training and holdout scoring data

gap_row_count : int or None

The number of rows in the gap between training and holdout scoring data. Only available when retrieving the partitioning after setting the target.

gap_end_date : datetime.datetime or None

The end date of the gap between training and holdout scoring data. Unavailable when the holdout fold is disabled.

disable_holdout : bool or None

Whether to suppress allocating a holdout fold. If set to True, holdout_start_date, holdout_duration, and holdout_end_date may not be specified.

holdout_start_date : datetime.datetime or None

The start date of holdout scoring data. Unavailable when the holdout fold is disabled.

holdout_duration : str

The duration of the holdout scoring data

holdout_row_count : int or None

The number of rows in the holdout scoring data. Only available when retrieving the partitioning after setting the target.

holdout_end_date : datetime.datetime or None

The end date of the holdout scoring data. Unavailable when the holdout fold is disabled.

number_of_backtests : int

the number of backtests used.

backtests : list of Backtest

the configured backtests.

total_row_count : int

the number of rows in the project dataset. Only available when retrieving the partitioning after setting the target.

use_time_series : bool

(New in version v2.8) Whether to create a time series project (if True) or an OTV project which uses datetime partitioning (if False). The default behavior is to create an OTV project.

default_to_known_in_advance : bool

(New in version v2.11) Optional, default False. Used for time series projects only. Sets whether all features default to being treated as known in advance. Known in advance features are expected to be known for dates in the future when making predictions, e.g., “is this a holiday?”. Individual features can be set to a value different from the default using the feature_settings parameter.

default_to_do_not_derive : bool

(New in v2.17) Optional, default False. Used for time series projects only. Sets whether all features default to being treated as do-not-derive features, excluding them from feature derivation. Individual features can be set to a value different from the default by using the feature_settings parameter.

feature_derivation_window_start : int or None

(New in version v2.8) Only used for time series projects. Offset into the past to define how far back relative to the forecast point the feature derivation window should start. Expressed in terms of the windows_basis_unit.

feature_derivation_window_end : int or None

(New in version v2.8) Only used for time series projects. Offset into the past to define how far back relative to the forecast point the feature derivation window should end. Expressed in terms of the windows_basis_unit.

feature_settings : list of FeatureSettings

(New in version v2.9) Optional, a list specifying per feature settings, can be left unspecified.

forecast_window_start : int or None

(New in version v2.8) Only used for time series projects. Offset into the future to define how far forward relative to the forecast point the forecast window should start. Expressed in terms of the windows_basis_unit.

forecast_window_end : int or None

(New in version v2.8) Only used for time series projects. Offset into the future to define how far forward relative to the forecast point the forecast window should end. Expressed in terms of the windows_basis_unit.

windows_basis_unit : string, optional

(New in version v2.14) Only used for time series projects. Indicates which unit is a basis for feature derivation window and forecast window. Valid options are the detected time unit (one of the datarobot.enums.TIME_UNITS) or “ROW”. If omitted, the default value is the detected time unit.

treat_as_exponential : string, optional

(New in version v2.9) defaults to “auto”. Used to specify whether to treat data as exponential trend and apply transformations like log-transform. Use values from the datarobot.enums.TREAT_AS_EXPONENTIAL enum.

differencing_method : string, optional

(New in version v2.9) defaults to “auto”. Used to specify which differencing method to apply in case the data is stationary. Use values from the datarobot.enums.DIFFERENCING_METHOD enum.

periodicities : list of Periodicity, optional

(New in version v2.9) a list of datarobot.Periodicity. Periodicities units should be “ROW”, if the windows_basis_unit is “ROW”.

multiseries_id_columns : list of str or null

(New in version v2.11) a list of the names of multiseries id columns to define series within the training data. Currently only one multiseries id column is supported.

number_of_known_in_advance_features : int

(New in version v2.14) Number of features that are marked as known in advance.

number_of_do_not_derive_features : int

(New in v2.17) Number of features that are excluded from derivation.

use_cross_series_features : bool

(New in version v2.14) Whether to use cross series features.

aggregation_type : str, optional

(New in version v2.14) The aggregation type to apply when creating cross series features. Optional, must be one of “total” or “average”.

cross_series_group_by_columns : list of str, optional

(New in version v2.15) List of columns (currently of length 1). Optional setting that indicates how to further split series into related groups. For example, if every series is sales of an individual product, the series group-by could be the product category with values like “men’s clothing”, “sports equipment”, etc. Can only be used in a multiseries project with use_cross_series_features set to True.

calendar_id : str, optional

(New in version v2.15) Only available for time series projects. The id of the CalendarFile to use with this project.

calendar_name : str, optional

(New in version v2.17) Only available for time series projects. The name of the CalendarFile used with this project.

model_splits: int, optional

(New in version v2.21) Sets the cap on the number of jobs per model used when building models, to control the number of jobs in the queue. A higher number of model splits will allow for less downsampling, leading to the use of more post-processed data.

allow_partial_history_time_series_predictions: bool, optional

(New in version v2.24) Whether to allow time series models to make predictions using partial historical data.

classmethod generate(project_id, spec, max_wait=600, target=None)

Preview the full partitioning determined by a DatetimePartitioningSpecification

Based on the project dataset and the partitioning specification, inspect the full partitioning that would be used if the same specification were passed into Project.analyze_and_model.

Parameters:
project_id : str

the id of the project

spec : DatetimePartitioningSpec

the desired partitioning

max_wait : int, optional

For some settings (e.g. generating a partitioning preview for a multiseries project for the first time), an asynchronous task must be run to analyze the dataset. max_wait governs the maximum time (in seconds) to wait before giving up. In all non-multiseries projects, this is unused.

target : str, optional

the name of the target column. For unsupervised projects target may be None. Providing a target will ensure that partitions are correctly optimized for your dataset.

Returns:
DatetimePartitioning :

the full generated partitioning

classmethod get(project_id)

Retrieve the DatetimePartitioning from a project

Only available if the project has already set the target as a datetime project.

Parameters:
project_id : str

the id of the project to retrieve partitioning for

Returns:
DatetimePartitioning : the full partitioning for the project

classmethod generate_optimized(project_id, spec, target, max_wait=600)

Preview the full partitioning determined by a DatetimePartitioningSpecification

Based on the project dataset and the partitioning specification, inspect the full partitioning that would be used if the same specification were passed into Project.analyze_and_model.

Parameters:
project_id : str

the id of the project

spec : DatetimePartitioningSpec

the desired partitioning

target : str

the name of the target column. For unsupervised projects target may be None.

max_wait : int, optional

Governs the maximum time (in seconds) to wait before giving up.

Returns:
DatetimePartitioning :

the full generated partitioning

classmethod get_optimized(project_id, datetime_partitioning_id)

Retrieve an Optimized DatetimePartitioning from a project for the specified datetime_partitioning_id. A datetime_partitioning_id is created by using the generate_optimized function.

Parameters:
project_id : str

the id of the project to retrieve partitioning for

datetime_partitioning_id : ObjectId

the ObjectId associated with the project to retrieve from mongo

Returns:
DatetimePartitioning : the full partitioning for the project

classmethod feature_log_list(project_id, offset=None, limit=None)

Retrieve the feature derivation log content and log length for a time series project.

The Time Series Feature Log provides details about the feature generation process for a time series project. It includes information about which features are generated and their priority, as well as the detected properties of the time series data such as whether the series is stationary, and periodicities detected.

This route is only supported for time series projects that have finished partitioning.

The feature derivation log will include information about:

  • Detected stationarity of the series:
    e.g. ‘Series detected as non-stationary’
  • Detected presence of multiplicative trend in the series:
    e.g. ‘Multiplicative trend detected’
  • Detected periodicities in the series:
    e.g. ‘Detected periodicities: 7 day’
  • Maximum number of features to be generated:
    e.g. ‘Maximum number of feature to be generated is 1440’
  • Window sizes used in rolling statistics / lag extractors:
    e.g. ‘The window sizes chosen to be: 2 months
    (because the time step is 1 month and Feature Derivation Window is 2 months)’
  • Features that are specified as known in advance:
    e.g. ‘Variables treated as apriori: holiday’
  • Details about why certain variables are transformed in the input data:
    e.g. ‘Generating variable “y (log)” from “y” because multiplicative trend
    is detected’
  • Details about features generated as time series features, and their priority:
    e.g. ‘Generating feature “date (actual)” from “date” (priority: 1)’
Parameters:
project_id : str

project id to retrieve a feature derivation log for.

offset : int

optional, defaults to 0; this many results will be skipped.

limit : int

optional, defaults to 100, at most this many results are returned. To specify no limit, use 0. The default may change without notice.

classmethod feature_log_retrieve(project_id)

Retrieve the feature derivation log content and log length for a time series project.

The Time Series Feature Log provides details about the feature generation process for a time series project. It includes information about which features are generated and their priority, as well as the detected properties of the time series data such as whether the series is stationary, and periodicities detected.

This route is only supported for time series projects that have finished partitioning.

The feature derivation log will include information about:

  • Detected stationarity of the series:
    e.g. ‘Series detected as non-stationary’
  • Detected presence of multiplicative trend in the series:
    e.g. ‘Multiplicative trend detected’
  • Detected periodicities in the series:
    e.g. ‘Detected periodicities: 7 day’
  • Maximum number of features to be generated:
    e.g. ‘Maximum number of feature to be generated is 1440’
  • Window sizes used in rolling statistics / lag extractors:
    e.g. ‘The window sizes chosen to be: 2 months
    (because the time step is 1 month and Feature Derivation Window is 2 months)’
  • Features that are specified as known in advance:
    e.g. ‘Variables treated as apriori: holiday’
  • Details about why certain variables are transformed in the input data:
    e.g. ‘Generating variable “y (log)” from “y” because multiplicative trend
    is detected’
  • Details about features generated as time series features, and their priority:
    e.g. ‘Generating feature “date (actual)” from “date” (priority: 1)’
Parameters:
project_id : str

project id to retrieve a feature derivation log for.
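
A sketch of retrieving the feature derivation log; the project id is a placeholder, and paging behavior follows the parameter descriptions above.

import datarobot as dr

# Page through the log entries, 100 at a time.
dr.DatetimePartitioning.feature_log_list("<project-id>", offset=0, limit=100)

# Or retrieve the full log content in one call.
dr.DatetimePartitioning.feature_log_retrieve("<project-id>")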

to_specification(use_holdout_start_end_format=False, use_backtest_start_end_format=False)

Render the DatetimePartitioning as a DatetimePartitioningSpecification

The resulting specification can be used when setting the target, and contains only the attributes directly controllable by users.

Parameters:
use_holdout_start_end_format : bool, optional

Defaults to False. If True, will use holdout_end_date when configuring the holdout partition. If False, will use holdout_duration instead.

use_backtest_start_end_format : bool, optional

Defaults to False. If False, will use a duration-based approach for specifying backtests (gap_duration, validation_start_date, and validation_duration). If True, will use a start/end date approach for specifying backtests (primary_training_start_date, primary_training_end_date, validation_start_date, validation_end_date).

Returns:
DatetimePartitioningSpecification

the specification for this partitioning

to_dataframe()

Render the partitioning settings as a dataframe for convenience of display

Excludes project_id, datetime_partition_column, date_format, autopilot_data_selection_method, validation_duration, and number_of_backtests, as well as the row count information, if present.

Also excludes the time series specific parameters for use_time_series, default_to_known_in_advance, default_to_do_not_derive, and defining the feature derivation and forecast windows.
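
A short sketch tying these methods together; the project id is a placeholder, and spec is assumed to be a DatetimePartitioningSpecification built as shown earlier.

import datarobot as dr

# Preview the full partitioning that the specification would produce.
full_partitioning = dr.DatetimePartitioning.generate("<project-id>", spec)
print(full_partitioning.to_dataframe())

# After the target is set, retrieve the project's partitioning and convert it
# back to a user-controllable specification.
partitioning = dr.DatetimePartitioning.get("<project-id>")
spec = partitioning.to_specification()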

class datarobot.helpers.partitioning_methods.Backtest(index=None, available_training_start_date=None, available_training_duration=None, available_training_row_count=None, available_training_end_date=None, primary_training_start_date=None, primary_training_duration=None, primary_training_row_count=None, primary_training_end_date=None, gap_start_date=None, gap_duration=None, gap_row_count=None, gap_end_date=None, validation_start_date=None, validation_duration=None, validation_row_count=None, validation_end_date=None, total_row_count=None)

A backtest used to evaluate models trained in a datetime partitioned project

When setting up a datetime partitioning project, backtests are specified by a BacktestSpecification.

The available training data corresponds to all the data available for training, while the primary training data corresponds to the data that can be used to train while ensuring that all backtests are available. If a model is trained with more data than is available in the primary training data, then all backtests may not have scores available.

All durations are specified with a duration string such as those returned by the partitioning_methods.construct_duration_string helper method. Please see datetime partitioned project documentation for more information on duration strings.

Attributes:
index : int

the index of the backtest

available_training_start_date : datetime.datetime

the start date of the available training data for this backtest

available_training_duration : str

the duration of available training data for this backtest

available_training_row_count : int or None

the number of rows of available training data for this backtest. Only available when retrieving from a project where the target is set.

available_training_end_date : datetime.datetime

the end date of the available training data for this backtest

primary_training_start_date : datetime.datetime

the start date of the primary training data for this backtest

primary_training_duration : str

the duration of the primary training data for this backtest

primary_training_row_count : int or None

the number of rows of primary training data for this backtest. Only available when retrieving from a project where the target is set.

primary_training_end_date : datetime.datetime

the end date of the primary training data for this backtest

gap_start_date : datetime.datetime

the start date of the gap between training and validation scoring data for this backtest

gap_duration : str

the duration of the gap between training and validation scoring data for this backtest

gap_row_count : int or None

the number of rows in the gap between training and validation scoring data for this backtest. Only available when retrieving from a project where the target is set.

gap_end_date : datetime.datetime

the end date of the gap between training and validation scoring data for this backtest

validation_start_date : datetime.datetime

the start date of the validation scoring data for this backtest

validation_duration : str

the duration of the validation scoring data for this backtest

validation_row_count : int or None

the number of rows of validation scoring data for this backtest. Only available when retrieving from a project where the target is set.

validation_end_date : datetime.datetime

the end date of the validation scoring data for this backtest

total_row_count : int or None

the number of rows in this backtest. Only available when retrieving from a project where the target is set.

to_specification(use_start_end_format=False)

Render this backtest as a BacktestSpecification.

The resulting specification includes only the attributes users can directly control, not those indirectly determined by the project dataset.

Parameters:
use_start_end_format : bool

Default False. If False, will use a duration-based approach for specifying backtests (gap_duration, validation_start_date, and validation_duration). If True, will use a start/end date approach for specifying backtests (primary_training_start_date, primary_training_end_date, validation_start_date, validation_end_date).

Returns:
BacktestSpecification

the specification for this backtest

to_dataframe()

Render this backtest as a dataframe for convenience of display

Returns:
backtest_partitioning : pandas.Dataframe

the backtest attributes, formatted into a dataframe
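
For example, individual backtests can be inspected from a retrieved partitioning (a sketch; the project id is a placeholder):

import datarobot as dr

partitioning = dr.DatetimePartitioning.get("<project-id>")
for backtest in partitioning.backtests:
    # Display each backtest and re-derive its user-controllable settings.
    print(backtest.to_dataframe())
    backtest_spec = backtest.to_specification()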

datarobot.helpers.partitioning_methods.construct_duration_string(years=0, months=0, days=0, hours=0, minutes=0, seconds=0)

Construct a valid string representing a duration in accordance with ISO8601

A duration of six months, 3 days, and 12 hours could be represented as P6M3DT12H.

Parameters:
years : int

the number of years in the duration

months : int

the number of months in the duration

days : int

the number of days in the duration

hours : int

the number of hours in the duration

minutes : int

the number of minutes in the duration

seconds : int

the number of seconds in the duration

Returns:
duration_string: str

The duration string, specified compatibly with ISO8601
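
For example, the duration mentioned above can be constructed as follows:

from datarobot.helpers.partitioning_methods import construct_duration_string

# Six months, 3 days, and 12 hours -> "P6M3DT12H"
duration = construct_duration_string(months=6, days=3, hours=12)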

PayoffMatrix

class datarobot.models.PayoffMatrix(project_id: str, id: str, name: Optional[str] = None, true_positive_value: Optional[float] = None, true_negative_value: Optional[float] = None, false_positive_value: Optional[float] = None, false_negative_value: Optional[float] = None)

Represents a Payoff Matrix, a costs/benefit scenario used for creating a profit curve.

Examples

import datarobot as dr

# create a payoff matrix
payoff_matrix = dr.PayoffMatrix.create(
    project_id,
    name,
    true_positive_value=100,
    true_negative_value=10,
    false_positive_value=0,
    false_negative_value=-10,
)

# list available payoff matrices
payoff_matrices = dr.PayoffMatrix.list(project_id)
payoff_matrix = payoff_matrices[0]

Attributes:
project_id : str

id of the project with which the payoff matrix is associated.

id : str

id of the payoff matrix.

name : str

User-supplied label for the payoff matrix.

true_positive_value : float

Cost or benefit of a true positive classification

true_negative_value : float

Cost or benefit of a true negative classification

false_positive_value : float

Cost or benefit of a false positive classification

false_negative_value : float

Cost or benefit of a false negative classification

classmethod create(project_id: str, name: str, true_positive_value: Optional[float] = 1, true_negative_value: Optional[float] = 1, false_positive_value: Optional[float] = -1, false_negative_value: Optional[float] = -1) → datarobot.models.payoff_matrix.PayoffMatrix

Create a payoff matrix associated with a specific project.

Parameters:
project_id : str

id of the project with which the payoff matrix will be associated

name : str

User-supplied label for the payoff matrix

true_positive_value : float, optional

Cost or benefit of a true positive classification; defaults to 1

true_negative_value : float, optional

Cost or benefit of a true negative classification; defaults to 1

false_positive_value : float, optional

Cost or benefit of a false positive classification; defaults to -1

false_negative_value : float, optional

Cost or benefit of a false negative classification; defaults to -1

Returns:
payoff_matrix : PayoffMatrix

The newly created payoff matrix

classmethod list(project_id: str) → List[datarobot.models.payoff_matrix.PayoffMatrix]

Fetch all the payoff matrices for a project.

Parameters:
project_id : str

id of the project

Returns:
List of PayoffMatrix

A list of PayoffMatrix objects

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

classmethod get(project_id: str, id: str) → datarobot.models.payoff_matrix.PayoffMatrix

Retrieve a specified payoff matrix.

Parameters:
project_id : str

id of the project the model belongs to

id : str

id of the payoff matrix

Returns:
PayoffMatrix object representing the specified payoff matrix
Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

classmethod update(project_id: str, id: str, name: str, true_positive_value: float, true_negative_value: float, false_positive_value: float, false_negative_value: float) → datarobot.models.payoff_matrix.PayoffMatrix

Update (replace) a payoff matrix. Note that all data fields are required.

Parameters:
project_id : str

id of the project to which the payoff matrix belongs

id : str

id of the payoff matrix

name : str

User-supplied label for the payoff matrix

true_positive_value : float

True positive payoff value to use for the profit curve

true_negative_value : float

True negative payoff value to use for the profit curve

false_positive_value : float

False positive payoff value to use for the profit curve

false_negative_value : float

False negative payoff value to use for the profit curve

Returns:
payoff_matrix

PayoffMatrix with updated values

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

classmethod delete(project_id: str, id: str) → requests.models.Response

Delete a specified payoff matrix.

Parameters:
project_id : str

id of the project the model belongs to

id : str

id of the payoff matrix

Returns:
response : requests.Response

Empty response (204)

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status
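
A brief sketch of updating and then deleting a payoff matrix; the ids and payoff values are placeholders, and update requires all payoff values.

import datarobot as dr

payoff_matrix = dr.PayoffMatrix.update(
    "<project-id>",
    "<payoff-matrix-id>",
    name="updated payoffs",
    true_positive_value=120,
    true_negative_value=10,
    false_positive_value=-5,
    false_negative_value=-20,
)

dr.PayoffMatrix.delete("<project-id>", "<payoff-matrix-id>")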

classmethod from_data(data: Union[Dict[str, Any], List[Dict[str, Any]]]) → T

Instantiate an object of this class using a dict.

Parameters:
data : dict

Correctly snake_cased keys and their values.

classmethod from_server_data(data: Union[Dict[str, Any], List[Dict[str, Any]]], keep_attrs: Optional[Iterable[str]] = None) → T

Instantiate an object of this class using the data directly from the server, meaning that the keys may have the wrong camel casing

Parameters:
data : dict

The directly translated dict of JSON from the server. No casing fixes have taken place

keep_attrs : iterable

List, set or tuple of the dotted namespace notations for attributes to keep within the object structure even if their values are None

PredictJob

datarobot.models.predict_job.wait_for_async_predictions(project_id, predict_job_id, max_wait=600)

Given a project id and a PredictJob id, poll the status of the process responsible for generating predictions until it is finished.

Parameters:
project_id : str

The identifier of the project

predict_job_id : str

The identifier of the PredictJob

max_wait : int, optional

Time in seconds after which prediction generation is considered unsuccessful

Returns:
predictions : pandas.DataFrame

Generated predictions.

Raises:
AsyncPredictionsGenerationError

Raised if status of fetched PredictJob object is error

AsyncTimeoutError

Predictions weren’t generated within the time specified by the max_wait parameter
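
A minimal sketch (ids are placeholders): wait for a prediction job to finish and collect the resulting dataframe.

from datarobot.models.predict_job import wait_for_async_predictions

predictions = wait_for_async_predictions(
    "<project-id>", "<predict-job-id>", max_wait=600
)
print(predictions.head())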

class datarobot.models.PredictJob(data, completed_resource_url=None)

Tracks asynchronous work being done within a project

Attributes:
id : int

the id of the job

project_id : str

the id of the project the job belongs to

status : str

the status of the job - will be one of datarobot.enums.QUEUE_STATUS

job_type : str

what kind of work the job is doing - will be ‘predict’ for predict jobs

is_blocked : bool

if true, the job is blocked (cannot be executed) until its dependencies are resolved

message : str

a message about the state of the job, typically explaining why an error occurred

classmethod from_job(job)

Transforms a generic Job into a PredictJob

Parameters:
job: Job

A generic job representing a PredictJob

Returns:
predict_job: PredictJob

A fully populated PredictJob with all the details of the job

Raises:
ValueError:

If the generic Job was not a predict job, e.g. job_type != JOB_TYPE.PREDICT

classmethod get(project_id, predict_job_id)

Fetches one PredictJob. If the job finished, raises PendingJobFinished exception.

Parameters:
project_id : str

The identifier of the project containing the model on which predictions were started

predict_job_id : str

The identifier of the predict_job

Returns:
predict_job : PredictJob

The pending PredictJob

Raises:
PendingJobFinished

If the job being queried already finished, and the server is re-routing to the finished predictions.

AsyncFailureError

Querying this resource gave a status code other than 200 or 303

classmethod get_predictions(project_id, predict_job_id, class_prefix='class_')

Fetches finished predictions from the job used to generate them.

Note

The prediction API for classifications now returns an additional prediction_values dictionary that is converted into a series of class_prefixed columns in the final dataframe. For example, <label> = 1.0 is converted to ‘class_1.0’. If you are on an older version of the client (prior to v2.8), you must update to v2.8 to correctly pivot this data.

Parameters:
project_id : str

The identifier of the project to which the model used for prediction generation belongs

predict_job_id : str

The identifier of the predict_job

class_prefix : str

The prefix to append to labels in the final dataframe (e.g., apple -> class_apple)

Returns:
predictions : pandas.DataFrame

Generated predictions

Raises:
JobNotFinished

If the job has not finished yet

AsyncFailureError

Querying the predict_job in question gave a status code other than 200 or 303

cancel()

Cancel this job. If this job has not finished running, it will be removed and canceled.

get_result(params=None)
Parameters:
params : dict or None

Query parameters to be added to request to get results.

For featureEffects and featureFit, the source parameter is required to define the source; otherwise the default is `training`.
Returns:
result : object
Return type depends on the job type:
  • for model jobs, a Model is returned
  • for predict jobs, a pandas.DataFrame (with predictions) is returned
  • for featureImpact jobs, a list of dicts by default (see with_metadata parameter of the FeatureImpactJob class and its get() method).
  • for primeRulesets jobs, a list of Rulesets
  • for primeModel jobs, a PrimeModel
  • for primeDownloadValidation jobs, a PrimeFile
  • for predictionExplanationInitialization jobs, a PredictionExplanationsInitialization
  • for predictionExplanations jobs, a PredictionExplanations
  • for featureEffects, a FeatureEffects
  • for featureFit, a FeatureFit
Raises:
JobNotFinished

If the job is not finished, the result is not available.

AsyncProcessUnsuccessfulError

If the job errored or was aborted

get_result_when_complete(max_wait=600, params=None)
Parameters:
max_wait : int, optional

How long to wait for the job to finish.

params : dict, optional

Query parameters to be added to request.

Returns:
result: object

Return type is the same as would be returned by Job.get_result.

Raises:
AsyncTimeoutError

If the job does not finish in time

AsyncProcessUnsuccessfulError

If the job errored or was aborted

refresh()

Update this object with the latest job data from the server.

wait_for_completion(max_wait: int = 600) → None

Waits for job to complete.

Parameters:
max_wait : int, optional

How long to wait for the job to finish.
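
A usage sketch mirroring the ModelJob example above; the ids are placeholders, and the PendingJobFinished exception is assumed importable from datarobot.errors.

import datarobot as dr
from datarobot.errors import PendingJobFinished

try:
    # Poll the queue entry; raises PendingJobFinished if the job already completed.
    predict_job = dr.PredictJob.get("<project-id>", "<predict-job-id>")
    predict_job.wait_for_completion(max_wait=600)
except PendingJobFinished:
    pass  # the predictions are already available

predictions = dr.PredictJob.get_predictions("<project-id>", "<predict-job-id>")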

Prediction Dataset

class datarobot.models.PredictionDataset(project_id: str, id: str, name: str, created: str, num_rows: int, num_columns: int, forecast_point: Optional[datetime] = None, predictions_start_date: Optional[datetime] = None, predictions_end_date: Optional[datetime] = None, relax_known_in_advance_features_check: Optional[bool] = None, data_quality_warnings: Optional[DataQualityWarning] = None, forecast_point_range: Optional[List[datetime]] = None, data_start_date: Optional[datetime] = None, data_end_date: Optional[datetime] = None, max_forecast_date: Optional[datetime] = None, actual_value_column: Optional[str] = None, detected_actual_value_columns: Optional[List[DetectedActualValueColumn]] = None, contains_target_values: Optional[bool] = None, secondary_datasets_config_id: Optional[str] = None)

A dataset uploaded to make predictions

Typically created via project.upload_dataset

Attributes:
id : str

the id of the dataset

project_id : str

the id of the project the dataset belongs to

created : str

the time the dataset was created

name : str

the name of the dataset

num_rows : int

the number of rows in the dataset

num_columns : int

the number of columns in the dataset

forecast_point : datetime.datetime or None

For time series projects only. This is the default point relative to which predictions will be generated, based on the forecast window of the project. See the time series predictions documentation for more information.

predictions_start_date : datetime.datetime or None, optional

For time series projects only. The start date for bulk predictions. Note that this parameter is for generating historical predictions using the training data. This parameter should be provided in conjunction with predictions_end_date. Can’t be provided with the forecast_point parameter.

predictions_end_date : datetime.datetime or None, optional

For time series projects only. The end date for bulk predictions, exclusive. Note that this parameter is for generating historical predictions using the training data. This parameter should be provided in conjunction with predictions_start_date. Can’t be provided with the forecast_point parameter.

relax_known_in_advance_features_check : bool, optional

(New in version v2.15) For time series projects only. If True, missing values in the known in advance features are allowed in the forecast window at the prediction time. If omitted or False, missing values are not allowed.

data_quality_warnings : dict, optional

(New in version v2.15) A dictionary that contains available warnings about potential problems in this prediction dataset. Available warnings include:

has_kia_missing_values_in_forecast_window : bool

Applicable for time series projects. If True, known-in-advance features have missing values in the forecast window, which may decrease prediction accuracy.

insufficient_rows_for_evaluating_models : bool

Applicable for datasets used as external test sets. If True, there are not enough rows in the dataset to calculate insights.

single_class_actual_value_column : bool

Applicable for datasets used as external test sets. If True, the actual value column has only one class, so insights such as the ROC curve cannot be calculated. Only applies to binary classification or unsupervised projects.

forecast_point_range : list[datetime.datetime] or None, optional

(New in version v2.20) For time series projects only. Specifies the range of dates available for use as a forecast point.

data_start_date : datetime.datetime or None, optional

(New in version v2.20) For time series projects only. The minimum primary date of this prediction dataset.

data_end_date : datetime.datetime or None, optional

(New in version v2.20) For time series projects only. The maximum primary date of this prediction dataset.

max_forecast_date : datetime.datetime or None, optional

(New in version v2.20) For time series projects only. The maximum forecast date of this prediction dataset.

actual_value_column : string, optional

(New in version v2.21) Only available for unsupervised projects, when the dataset was uploaded with an actual value column specified. The name of the column used to calculate the classification metrics and insights.

detected_actual_value_columns : list of dict, optional

(New in version v2.21) For unsupervised projects only. A list of detected actual value columns, with the name and missing-value count for each column.

contains_target_values : bool, optional

(New in version v2.21) Only for supervised projects. If True, dataset contains target values and can be used to calculate the classification metrics and insights.

secondary_datasets_config_id: string or None, optional

(New in version v2.23) The ID of the alternative secondary dataset configuration to use during prediction for a Feature Discovery project.

classmethod get(project_id: str, dataset_id: str) → datarobot.models.prediction_dataset.PredictionDataset

Retrieve information about a dataset uploaded for predictions

Parameters:
project_id:

the id of the project to query

dataset_id:

the id of the dataset to retrieve

Returns:
dataset: PredictionDataset

A dataset uploaded to make predictions

delete() → None

Delete a dataset uploaded for predictions

Will also delete predictions made using this dataset and cancel any predict jobs using this dataset.
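
For illustration, a minimal sketch of the upload / retrieve / delete cycle (project_id and the file path are placeholders):

import datarobot as dr

project = dr.Project.get(project_id)

# Upload a file to score; returns a PredictionDataset
dataset = project.upload_dataset('./data_to_predict.csv')

# Retrieve the same dataset later by id
dataset = dr.PredictionDataset.get(project.id, dataset.id)

# Delete it (along with any predictions made with it) when no longer needed
dataset.delete()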

Prediction Explanations

class datarobot.PredictionExplanationsInitialization(project_id, model_id, prediction_explanations_sample=None)

Represents a prediction explanations initialization of a model.

Attributes:
project_id : str

id of the project the model belongs to

model_id : str

id of the model the prediction explanations initialization is for

prediction_explanations_sample : list of dict

a small sample of prediction explanations that could be generated for the model

classmethod get(project_id, model_id)

Retrieve the prediction explanations initialization for a model.

Prediction explanations initializations are a prerequisite for computing prediction explanations, and include a sample of what the computed prediction explanations for a prediction dataset would look like.

Parameters:
project_id : str

id of the project the model belongs to

model_id : str

id of the model the prediction explanations initialization is for

Returns:
prediction_explanations_initialization : PredictionExplanationsInitialization

The queried instance.

Raises:
ClientError (404)

If the project or model does not exist or the initialization has not been computed.

classmethod create(project_id, model_id)

Create a prediction explanations initialization for the specified model.

Parameters:
project_id : str

id of the project the model belongs to

model_id : str

id of the model for which initialization is requested

Returns:
job : Job

an instance of created async job

delete()

Delete this prediction explanations initialization.

class datarobot.PredictionExplanations(id, project_id, model_id, dataset_id, max_explanations, num_columns, finish_time, prediction_explanations_location, threshold_low=None, threshold_high=None, class_names=None, num_top_classes=None)

Represents prediction explanations metadata and provides access to computation results.

Examples

prediction_explanations = dr.PredictionExplanations.get(project_id, explanations_id)
for row in prediction_explanations.get_rows():
    print(row)  # row is an instance of PredictionExplanationsRow
Attributes:
id : str

id of the record and prediction explanations computation result

project_id : str

id of the project the model belongs to

model_id : str

id of the model the prediction explanations are for

dataset_id : str

id of the prediction dataset prediction explanations were computed for

max_explanations : int

maximum number of prediction explanations to supply per row of the dataset

threshold_low : float

the lower threshold, below which a prediction must score in order for prediction explanations to be computed for a row in the dataset

threshold_high : float

the high threshold, above which a prediction must score in order for prediction explanations to be computed for a row in the dataset

num_columns : int

the number of columns prediction explanations were computed for

finish_time : float

timestamp referencing when computation for these prediction explanations finished

prediction_explanations_location : str

where to retrieve the prediction explanations

classmethod get(project_id, prediction_explanations_id)

Retrieve a specific prediction explanations metadata.

Parameters:
project_id : str

id of the project the explanations belong to

prediction_explanations_id : str

id of the prediction explanations

Returns:
prediction_explanations : PredictionExplanations

The queried instance.

classmethod create(project_id, model_id, dataset_id, max_explanations=None, threshold_low=None, threshold_high=None, mode=None)

Create prediction explanations for the specified dataset.

In order to create PredictionExplanations for a particular model and dataset, you must first:

  • Compute feature impact for the model via datarobot.Model.get_feature_impact()
  • Compute a PredictionExplanationsInitialization for the model via datarobot.PredictionExplanationsInitialization.create(project_id, model_id)
  • Compute predictions for the model and dataset via datarobot.Model.request_predictions(dataset_id)

threshold_high and threshold_low are optional filters applied to speed up computation. When at least one is specified, only the selected outlier rows will have prediction explanations computed. Rows are considered to be outliers if their predicted value (in case of regression projects) or probability of being the positive class (in case of classification projects) is less than threshold_low or greater than threshold_high. If neither is specified, prediction explanations will be computed for all rows.
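
For illustration, a minimal sketch of this prerequisite chain (project_id, model_id, and dataset_id are placeholders):

import datarobot as dr

model = dr.Model.get(project_id, model_id)

# 1. Compute feature impact for the model
model.request_feature_impact().wait_for_completion()

# 2. Compute a prediction explanations initialization for the model
dr.PredictionExplanationsInitialization.create(project_id, model_id).wait_for_completion()

# 3. Compute predictions for the model and the uploaded prediction dataset
model.request_predictions(dataset_id).wait_for_completion()

# Now request the prediction explanations themselves
job = dr.PredictionExplanations.create(project_id, model_id, dataset_id,
                                       max_explanations=5)
prediction_explanations = job.get_result_when_complete()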

Parameters:
project_id : str

id of the project the model belongs to

model_id : str

id of the model for which prediction explanations are requested

dataset_id : str

id of the prediction dataset for which prediction explanations are requested

threshold_low : float, optional

the lower threshold, below which a prediction must score in order for prediction explanations to be computed for a row in the dataset. If neither threshold_high nor threshold_low is specified, prediction explanations will be computed for all rows.

threshold_high : float, optional

the high threshold, above which a prediction must score in order for prediction explanations to be computed. If neither threshold_high nor threshold_low is specified, prediction explanations will be computed for all rows.

max_explanations : int, optional

the maximum number of prediction explanations to supply per row of the dataset, default: 3.

mode : PredictionExplanationsMode, optional

Mode of calculation for multiclass models. If not specified, the server default is to explain only the predicted class, identical to passing TopPredictionsMode(1).

Returns:
job: Job

an instance of created async job

classmethod list(project_id, model_id=None, limit=None, offset=None)

List of prediction explanations metadata for a specified project.

Parameters:
project_id : str

id of the project to list prediction explanations for

model_id : str, optional

if specified, only prediction explanations computed for this model will be returned

limit : int or None

at most this many results are returned, default: no limit

offset : int or None

this many results will be skipped, default: 0

Returns:
prediction_explanations : list[PredictionExplanations]

get_rows(batch_size=None, exclude_adjusted_predictions=True)

Retrieve prediction explanations rows.

Parameters:
batch_size : int or None, optional

maximum number of prediction explanations rows to retrieve per request

exclude_adjusted_predictions : bool

Optional, defaults to True. Set to False to include adjusted predictions, which will differ from the predictions on some projects, e.g. those with an exposure column specified.

Yields:
prediction_explanations_row : PredictionExplanationsRow

Represents prediction explanations computed for a prediction row.

is_multiclass()

Whether these prediction explanations are for a multiclass project. Multiclass XEMP always has one and only one of the class_names and num_top_classes parameters set.

get_number_of_explained_classes()

The number of classes that will be explained for each row.

get_all_as_dataframe(exclude_adjusted_predictions=True)

Retrieve all prediction explanations rows and return them as a pandas.DataFrame.

Returned dataframe has the following structure:

  • row_id : row id from prediction dataset
  • prediction : the output of the model for this row
  • adjusted_prediction : adjusted prediction values (only appears for projects that utilize prediction adjustments, e.g. projects with an exposure column)
  • class_0_label : a class level from the target (only appears for classification projects)
  • class_0_probability : the probability that the target is this class (only appears for classification projects)
  • class_1_label : a class level from the target (only appears for classification projects)
  • class_1_probability : the probability that the target is this class (only appears for classification projects)
  • explanation_0_feature : the name of the feature contributing to the prediction for this explanation
  • explanation_0_feature_value : the value the feature took on
  • explanation_0_label : the output being driven by this explanation. For regression projects, this is the name of the target feature. For classification projects, this is the class label whose probability increasing would correspond to a positive strength.
  • explanation_0_qualitative_strength : a human-readable description of how strongly the feature affected the prediction (e.g. ‘+++’, ‘–’, ‘+’) for this explanation
  • explanation_0_strength : the amount this feature’s value affected the prediction
  • explanation_N_feature : the name of the feature contributing to the prediction for this explanation
  • explanation_N_feature_value : the value the feature took on
  • explanation_N_label : the output being driven by this explanation. For regression projects, this is the name of the target feature. For classification projects, this is the class label whose probability increasing would correspond to a positive strength.
  • explanation_N_qualitative_strength : a human-readable description of how strongly the feature affected the prediction (e.g. ‘+++’, ‘–’, ‘+’) for this explanation
  • explanation_N_strength : the amount this feature’s value affected the prediction

For classification projects, the server does not guarantee any ordering on the prediction values; however, within this function the values are sorted so that class_X corresponds to the same class from row to row.
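
For illustration, a short sketch of working with the returned dataframe (project_id and explanations_id are placeholders):

import datarobot as dr

prediction_explanations = dr.PredictionExplanations.get(project_id, explanations_id)
df = prediction_explanations.get_all_as_dataframe()

# Inspect the strongest explanation for the first row
print(df.loc[0, ['row_id', 'prediction',
                 'explanation_0_feature', 'explanation_0_strength']])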

Parameters:
exclude_adjusted_predictions : bool

Optional, defaults to True. Set this to False to include adjusted prediction values in the returned dataframe.

Returns:
dataframe: pandas.DataFrame

download_to_csv(filename, encoding='utf-8', exclude_adjusted_predictions=True)

Save prediction explanations rows into CSV file.

Parameters:
filename : str or file object

path or file object to save prediction explanations rows

encoding : string, optional

A string representing the encoding to use in the output file, defaults to ‘utf-8’

exclude_adjusted_predictions : bool

Optional, defaults to True. Set to False to include adjusted predictions, which will differ from the predictions on some projects, e.g. those with an exposure column specified.

get_prediction_explanations_page(limit=None, offset=None, exclude_adjusted_predictions=True)

Get prediction explanations.

If you don’t want to use the generator interface, you can access paginated prediction explanations directly.

Parameters:
limit : int or None

the number of records to return, the server will use a (possibly finite) default if not specified

offset : int or None

the number of records to skip, default 0

exclude_adjusted_predictions : bool

Optional, defaults to True. Set to False to include adjusted predictions, which will differ from the predictions on some projects, e.g. those with an exposure column specified.

Returns:
prediction_explanations : PredictionExplanationsPage

delete()

Delete these prediction explanations.

class datarobot.models.prediction_explanations.PredictionExplanationsRow(row_id, prediction, prediction_values, prediction_explanations=None, adjusted_prediction=None, adjusted_prediction_values=None)

Represents prediction explanations computed for a prediction row.

Notes

PredictionValue contains:

  • label : describes what this model output corresponds to. For regression projects, it is the name of the target feature. For classification projects, it is a level from the target feature.
  • value : the output of the prediction. For regression projects, it is the predicted value of the target. For classification projects, it is the predicted probability the row belongs to the class identified by the label.

PredictionExplanation contains:

  • label : describes what output was driven by this explanation. For regression projects, it is the name of the target feature. For classification projects, it is the class whose probability increasing would correspond to a positive strength of this prediction explanation.
  • feature : the name of the feature contributing to the prediction
  • feature_value : the value the feature took on for this row
  • strength : the amount this feature’s value affected the prediction
  • qualitative_strength : a human-readable description of how strongly the feature affected the prediction (e.g. ‘+++’, ‘–’, ‘+’)
Attributes:
row_id : int

which row this PredictionExplanationsRow describes

prediction : float

the output of the model for this row

adjusted_prediction : float or None

adjusted prediction value for projects that provide this information, None otherwise

prediction_values : list

an array of dictionaries with a schema described as PredictionValue

adjusted_prediction_values : list

same as prediction_values but for adjusted predictions

prediction_explanations : list

an array of dictionaries with a schema described as PredictionExplanation

class datarobot.models.prediction_explanations.PredictionExplanationsPage(id, count=None, previous=None, next=None, data=None, prediction_explanations_record_location=None, adjustment_method=None)

Represents a batch of prediction explanations received by one request.

Attributes:
id : str

id of the prediction explanations computation result

data : list[dict]

list of raw prediction explanations; each row corresponds to a row of the prediction dataset

count : int

total number of rows computed

previous_page : str

where to retrieve previous page of prediction explanations, None if current page is the first

next_page : str

where to retrieve next page of prediction explanations, None if current page is the last

prediction_explanations_record_location : str

where to retrieve the prediction explanations metadata

adjustment_method : str

Adjustment method that was applied to predictions, or ‘N/A’ if no adjustments were done.

classmethod get(project_id, prediction_explanations_id, limit=None, offset=0, exclude_adjusted_predictions=True)

Retrieve prediction explanations.

Parameters:
project_id : str

id of the project the model belongs to

prediction_explanations_id : str

id of the prediction explanations

limit : int or None

the number of records to return; the server will use a (possibly finite) default if not specified

offset : int or None

the number of records to skip, default 0

exclude_adjusted_predictions : bool

Optional, defaults to True. Set to False to include adjusted predictions, which will differ from the predictions on some projects, e.g. those with an exposure column specified.

Returns:
prediction_explanations : PredictionExplanationsPage

The queried instance.

class datarobot.models.ShapMatrix(project_id: str, id: str, model_id: Optional[str] = None, dataset_id: Optional[str] = None)

Represents SHAP based prediction explanations and provides access to score values.

Examples

import datarobot as dr

# request SHAP matrix calculation
shap_matrix_job = dr.ShapMatrix.create(project_id, model_id, dataset_id)
shap_matrix = shap_matrix_job.get_result_when_complete()

# list available SHAP matrices
shap_matrices = dr.ShapMatrix.list(project_id)
shap_matrix = shap_matrices[0]

# get SHAP matrix as dataframe
shap_matrix_values = shap_matrix.get_as_dataframe()
Attributes:
project_id : str

id of the project the model belongs to

shap_matrix_id : str

id of the generated SHAP matrix

model_id : str

id of the model used to compute the SHAP matrix

dataset_id : str

id of the prediction dataset SHAP values were computed for

classmethod create(project_id: str, model_id: str, dataset_id: str) → ShapMatrixJob

Calculate SHAP based prediction explanations against a previously uploaded dataset.

Parameters:
project_id : str

id of the project the model belongs to

model_id : str

id of the model for which prediction explanations are requested

dataset_id : str

id of the prediction dataset for which prediction explanations are requested (as uploaded from Project.upload_dataset)

Returns:
job : ShapMatrixJob

The job computing the SHAP based prediction explanations

Raises:
ClientError

If the server responded with a 4xx status. Possible reasons: the project, model, or dataset doesn’t exist; the user is not allowed access; or the model doesn’t support SHAP based prediction explanations.

ServerError

If the server responded with 5xx status

classmethod list(project_id: str) → List[datarobot.models.shap_matrix.ShapMatrix]

Fetch all the computed SHAP prediction explanations for a project.

Parameters:
project_id : str

id of the project

Returns:
List of ShapMatrix

A list of ShapMatrix objects

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

classmethod get(project_id: str, id: str) → datarobot.models.shap_matrix.ShapMatrix

Retrieve the specific SHAP matrix.

Parameters:
project_id : str

id of the project the model belongs to

id : str

id of the SHAP matrix

Returns:
ShapMatrix object representing the specified record

get_as_dataframe(read_timeout: int = 60) → pandas.core.frame.DataFrame

Retrieve SHAP matrix values as dataframe.

Parameters:
read_timeout : int, optional (default 60)

(New in version 2.29) Wait this many seconds for the server to respond.

Returns:
dataframe : pandas.DataFrame

A dataframe with SHAP scores

Raises:
datarobot.errors.ClientError

if the server responded with 4xx status.

datarobot.errors.ServerError

if the server responded with 5xx status.

class datarobot.models.ClassListMode(class_names)

Calculate prediction explanations for the specified classes in each row.

Attributes:
class_names : list

List of class names that will be explained for each dataset row.

get_api_parameters(batch_route=False)

Get parameters passed in corresponding API call

Parameters:
batch_route : bool

Batch routes describe prediction calls with all possible parameters, so explanation parameters carry a prefix to distinguish them from the others.

Returns:
dict

class datarobot.models.TopPredictionsMode(num_top_classes)

Calculate prediction explanations for the number of top predicted classes in each row.

Attributes:
num_top_classes : int

Number of top predicted classes [1..10] that will be explained for each dataset row.

get_api_parameters(batch_route=False)

Get parameters passed in corresponding API call

Parameters:
batch_route : bool

Batch routes describe prediction calls with all possible parameters, so explanation parameters carry a prefix to distinguish them from the others.

Returns:
dict
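
For illustration, a minimal sketch of requesting multiclass explanations with either mode (project_id, model_id, dataset_id, and the class names are placeholders):

import datarobot as dr
from datarobot.models import ClassListMode, TopPredictionsMode

# Explain the two most probable classes for every row
job = dr.PredictionExplanations.create(project_id, model_id, dataset_id,
                                       mode=TopPredictionsMode(2))

# Or explain a fixed set of classes of interest
job = dr.PredictionExplanations.create(project_id, model_id, dataset_id,
                                       mode=ClassListMode(['class A', 'class B']))
explanations = job.get_result_when_complete()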

Predictions

class datarobot.models.Predictions(project_id: str, prediction_id: str, model_id: Optional[str] = None, dataset_id: Optional[str] = None, includes_prediction_intervals: Optional[bool] = None, prediction_intervals_size: Optional[int] = None, forecast_point: Optional[datetime] = None, predictions_start_date: Optional[datetime] = None, predictions_end_date: Optional[datetime] = None, actual_value_column: Optional[str] = None, explanation_algorithm: Optional[str] = None, max_explanations: Optional[int] = None, shap_warnings: Optional[ShapWarnings] = None)

Represents predictions metadata and provides access to prediction results.

Examples

List all predictions for a project

import datarobot as dr

# Fetch all predictions for a project
all_predictions = dr.Predictions.list(project_id)

# Inspect all calculated predictions
for predictions in all_predictions:
    print(predictions)  # repr includes project_id, model_id, and dataset_id

Retrieve predictions by id

import datarobot as dr

# Getting predictions by id
predictions = dr.Predictions.get(project_id, prediction_id)

# Dump actual predictions
df = predictions.get_all_as_dataframe()
print(df)
Attributes:
project_id : str

id of the project the model belongs to

model_id : str

id of the model

prediction_id : str

id of generated predictions

includes_prediction_intervals : bool, optional

(New in v2.16) For time series projects only. Indicates if prediction intervals will be part of the response. Defaults to False.

prediction_intervals_size : int, optional

(New in v2.16) For time series projects only. Indicates the percentile used for prediction intervals calculation. Will be present only if includes_prediction_intervals is True.

forecast_point : datetime.datetime, optional

(New in v2.20) For time series projects only. This is the default point relative to which predictions will be generated, based on the forecast window of the project. See the time series prediction documentation for more information.

predictions_start_date : datetime.datetime or None, optional

(New in v2.20) For time series projects only. The start date for bulk predictions. Note that this parameter is for generating historical predictions using the training data. This parameter should be provided in conjunction with predictions_end_date. Can’t be provided with the forecast_point parameter.

predictions_end_date : datetime.datetime or None, optional

(New in v2.20) For time series projects only. The end date for bulk predictions, exclusive. Note that this parameter is for generating historical predictions using the training data. This parameter should be provided in conjunction with predictions_start_date. Can’t be provided with the forecast_point parameter.

actual_value_column : string, optional

(New in version v2.21) For time series unsupervised projects only. Actual value column which was used to calculate the classification metrics and insights on the prediction dataset. Can’t be provided with the forecast_point parameter.

explanation_algorithm : datarobot.enums.EXPLANATIONS_ALGORITHM, optional

(New in version v2.21) If set to ‘shap’, the response will include prediction explanations based on the SHAP explainer (SHapley Additive exPlanations). Defaults to null (no prediction explanations).

max_explanations : int, optional

(New in version v2.21) The maximum number of explanation values that should be returned for each row, ordered by absolute value, greatest to least. If null, no limit. In the case of ‘shap’: if the number of features is greater than the limit, the sum of remaining values will also be returned as shapRemainingTotal. Defaults to null. Cannot be set if explanation_algorithm is omitted.

shap_warnings : dict, optional

(New in version v2.21) Will be present if explanation_algorithm was set to datarobot.enums.EXPLANATIONS_ALGORITHM.SHAP and there were additivity failures during SHAP values calculation.

classmethod list(project_id: str, model_id: Optional[str] = None, dataset_id: Optional[str] = None) → List[datarobot.models.predictions.Predictions]

Fetch all the computed predictions metadata for a project.

Parameters:
project_id : str

id of the project

model_id : str, optional

if specified, only predictions metadata for this model will be retrieved

dataset_id : str, optional

if specified, only predictions metadata for this dataset will be retrieved

Returns:
A list of Predictions objects

classmethod get(project_id: str, prediction_id: str) → datarobot.models.predictions.Predictions

Retrieve the specific predictions metadata

Parameters:
project_id : str

id of the project the model belongs to

prediction_id : str

id of the prediction set

Returns:
Predictions object representing the specified predictions

get_all_as_dataframe(class_prefix: str = 'class_', serializer: str = 'json') → pandas.core.frame.DataFrame

Retrieve all prediction rows and return them as a pandas.DataFrame.

Parameters:
class_prefix : str, optional

The prefix to prepend to class labels in the final dataframe. Default is class_ (e.g., apple -> class_apple)

serializer : str, optional

Serializer to use for the download. Options: json (default) or csv.

Returns:
dataframe: pandas.DataFrame
Raises:
datarobot.errors.ClientError

if the server responded with 4xx status.

datarobot.errors.ServerError

if the server responded with 5xx status.

download_to_csv(filename: Union[str, bool], encoding: str = 'utf-8', serializer: str = 'json') → None

Save prediction rows into CSV file.

Parameters:
filename : str or file object

path or file object to save prediction rows

encoding : string, optional

A string representing the encoding to use in the output file, defaults to ‘utf-8’

serializer : str, optional

Serializer to use for the download. Options: json (default) or csv.
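
For illustration, a short sketch of saving computed predictions locally (project_id and prediction_id are placeholders; the output path is arbitrary):

import datarobot as dr

predictions = dr.Predictions.get(project_id, prediction_id)

# Write all prediction rows to a local CSV file using the csv serializer
predictions.download_to_csv('predictions.csv', serializer='csv')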

PredictionServer

class datarobot.PredictionServer(id: Optional[str] = None, url: Optional[str] = None, datarobot_key: Optional[str] = None)

A prediction server can be used to make predictions.

Attributes:
id : str, optional

The id of the prediction server.

url : str

The url of the prediction server.

datarobot_key : str, optional

The Datarobot-Key HTTP header used in requests to this prediction server. Note that the datarobot.models.Deployment instance has a default_prediction_server property in which this value appears under a “kebab-cased” key rather than a “snake_cased” one.

classmethod list() → List[datarobot.models.prediction_server.PredictionServer]

Returns a list of prediction servers a user can use to make predictions.

New in version v2.17.

Returns:
prediction_servers : list of PredictionServer instances

Contains a list of prediction servers that can be used to make predictions.

Examples

prediction_servers = PredictionServer.list()
prediction_servers
>>> [PredictionServer('https://example.com')]

PrimeFile

class datarobot.models.PrimeFile(id: Optional[str] = None, project_id: Optional[str] = None, parent_model_id: Optional[str] = None, model_id: Optional[str] = None, ruleset_id: Optional[int] = None, language: Optional[str] = None, is_valid: Optional[bool] = None)

Represents an executable file available for download of the code for a DataRobot Prime model

Attributes:
id : str

the id of the PrimeFile

project_id : str

the id of the project this PrimeFile belongs to

parent_model_id : str

the model being approximated by this PrimeFile

model_id : str

the prime model this file represents

ruleset_id : int

the ruleset being used in this PrimeFile

language : str

the language of the code in this file - see enums.LANGUAGE for possibilities

is_valid : bool

whether the code passed basic validation

download(filepath: str) → None

Download the code and save it to a file

Parameters:
filepath: string

the location to save the file to
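
For illustration, a minimal sketch of downloading generated Prime code, assuming the project exposes its Prime files via Project.get_prime_files (project_id and the output path are placeholders):

import datarobot as dr

project = dr.Project.get(project_id)

# Pick the first Prime file that passed validation and save its code locally
prime_file = next(f for f in project.get_prime_files() if f.is_valid)
prime_file.download('./prime_model_code.py')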

Project

class datarobot.models.Project(id=None, project_name=None, mode=None, target=None, target_type=None, holdout_unlocked=None, metric=None, stage=None, partition=None, positive_class=None, created=None, advanced_options=None, max_train_pct=None, max_train_rows=None, file_name=None, credentials=None, feature_engineering_prediction_point=None, unsupervised_mode=None, use_feature_discovery=None, relationships_configuration_id=None, project_description=None, query_generator_id=None, segmentation=None, partitioning_method=None, catalog_id=None, catalog_version_id=None)

A project built from a particular training dataset

Attributes:
id : str

the id of the project

project_name : str

the name of the project

project_description : str

an optional description for the project

mode : int

The current autopilot mode. 0: Full Autopilot. 2: Manual Mode. 4: Comprehensive Autopilot. null: Mode not set.

target : str

the name of the selected target feature

target_type : str

Indicates what kind of modeling is being done in this project. Options are: ‘Regression’, ‘Binary’ (Binary classification), ‘Multiclass’ (Multiclass classification), ‘Multilabel’ (Multilabel classification)

holdout_unlocked : bool

whether the holdout has been unlocked

metric : str

the selected project metric (e.g. LogLoss)

stage : str

the stage the project has reached - one of datarobot.enums.PROJECT_STAGE

partition : dict

information about the selected partitioning options

positive_class : str

for binary classification projects, the selected positive class; otherwise, None

created : datetime

the time the project was created

advanced_options : AdvancedOptions

information on the advanced options that were selected for the project settings, e.g. a weights column or a cap of the runtime of models that can advance autopilot stages

max_train_pct : float

the maximum percentage of the project dataset that can be used without going into the validation data or being too large to submit any blueprint for training

max_train_rows : int

the maximum number of rows that can be trained on without going into the validation data or being too large to submit any blueprint for training

file_name : str

the name of the file uploaded for the project dataset

credentials : list, optional

a list of credentials for the datasets used in relationship configuration (previously graphs).

feature_engineering_prediction_point : str, optional

for Feature Discovery projects that are not time-aware, this parameter identifies the column in the primary dataset which should be used as the prediction point.

unsupervised_mode : bool, optional

(New in version v2.20) defaults to False, indicates whether this is an unsupervised project.

relationships_configuration_id : str, optional

(New in version v2.21) id of the relationships configuration to use

query_generator_id: str, optional

(New in version v2.27) id of the query generator applied for time series data prep

segmentation : dict, optional

information on the segmentation options for segmented project

partitioning_method : PartitioningMethod, optional

(New in version v3.0) The partitioning class for this project. This attribute should only be used with newly-created projects and before calling Project.analyze_and_model(). After the project has been aimed, see Project.partition for actual partitioning options.

catalog_id : str

(New in version v3.0) ID of the dataset used during creation of the project.

catalog_version_id : str

(New in version v3.0) The object ID of the catalog_version which the project’s dataset belongs to.

set_options(options: Optional[datarobot.helpers.AdvancedOptions] = None, **kwargs) → None

Update the advanced options of this project.

Accepts either an AdvancedOptions object or individual keyword arguments. This is an in-place update.

Raises:
ValueError

Raised if the options parameter is not an AdvancedOptions instance, if a keyword argument is not a valid AdvancedOptions attribute, or if both an AdvancedOptions instance and keyword arguments are passed together.

get_options() → datarobot.helpers.AdvancedOptions

Return the stored advanced options for this project.

Returns:
AdvancedOptions
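
For illustration, a minimal sketch of storing and reading back advanced options (project_id is a placeholder; the specific options are arbitrary):

import datarobot as dr

project = dr.Project.get(project_id)

# Store options from an AdvancedOptions instance ...
project.set_options(dr.AdvancedOptions(smart_downsampled=True, seed=42))

# ... or update individual options via keyword arguments
project.set_options(seed=1234)

# Read back what is currently stored
options = project.get_options()
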
classmethod get(project_id: str) → TProject

Gets information about a project.

Parameters:
project_id : str

The identifier of the project you want to load.

Returns:
project : Project

The queried project

Examples

import datarobot as dr
p = dr.Project.get(project_id='54e639a18bd88f08078ca831')
p.id
>>>'54e639a18bd88f08078ca831'
p.project_name
>>>'Some project name'
classmethod create(sourcedata, project_name='Untitled Project', max_wait=600, read_timeout=600, dataset_filename=None)

Creates a project with provided data.

Project creation is an asynchronous process, which means that after the initial request we will keep polling the status of the async process responsible for project creation until it finishes. For SDK users this only means that this method might raise exceptions related to its asynchronous nature.

Parameters:
sourcedata : basestring, file, pathlib.Path or pandas.DataFrame

Dataset to use for the project. If a string, it can be a path to a local file, a URL to a publicly available file, or raw file content. If using a file, the filename must consist of ASCII characters only.

project_name : str, unicode, optional

The name to assign to the empty project.

max_wait : int, optional

Time in seconds after which project creation is considered unsuccessful

read_timeout: int

The maximum number of seconds to wait for the server to respond indicating that the initial upload is complete

dataset_filename : string or None, optional

(New in version v2.14) File name to use for dataset. Ignored for url and file path sources.

Returns:
project : Project

Instance with initialized data.

Raises:
InputNotUnderstoodError

Raised if sourcedata isn’t one of supported types.

AsyncFailureError

Polling for status of async process resulted in response with unsupported status code. Beginning in version 2.1, this will be ProjectAsyncFailureError, a subclass of AsyncFailureError

AsyncProcessUnsuccessfulError

Raised if project creation was unsuccessful

AsyncTimeoutError

Raised if project creation took more time than specified by the max_wait parameter

Examples

p = Project.create('/home/datasets/somedataset.csv',
                   project_name="New API project")
p.id
>>> '5921731dkqshda8yd28h'
p.project_name
>>> 'New API project'
classmethod encrypted_string(plaintext: str) → str

Sends a string to DataRobot to be encrypted

This is used for passwords that DataRobot uses to access external data sources

Parameters:
plaintext : str

The string to encrypt

Returns:
ciphertext : str

The encrypted string

classmethod create_from_hdfs(url: str, port: Optional[int] = None, project_name: Optional[str] = None, max_wait: int = 600)

Create a project from a datasource on a WebHDFS server.

Parameters:
url : str

The location of the WebHDFS file, both server and full path. Per the DataRobot specification, must begin with hdfs://, e.g. hdfs:///tmp/10kDiabetes.csv

port : int, optional

The port to use. If not specified, will default to the server default (50070)

project_name : str, optional

A name to give to the project

max_wait : int

The maximum number of seconds to wait before giving up.

Returns:
Project

Examples

p = Project.create_from_hdfs('hdfs:///tmp/somedataset.csv',
                             project_name="New API project")
p.id
>>> '5921731dkqshda8yd28h'
p.project_name
>>> 'New API project'
classmethod create_from_data_source(data_source_id: str, username: Optional[str] = None, password: Optional[str] = None, credential_id: Optional[str] = None, use_kerberos: Optional[bool] = None, credential_data: Optional[dict[str, Any]] = None, project_name: Optional[str] = None, max_wait: int = 600) → TProject

Create a project from a data source, identified by data_source_id.

Parameters:
data_source_id : str

the identifier of the data source.

username : str, optional

The username for database authentication. If supplied password must also be supplied.

password : str, optional

The password for database authentication. The password is encrypted at server side and never saved / stored. If supplied username must also be supplied.

credential_id: str, optional

The ID of the set of credentials to use instead of user and password. Note that with this change, username and password will become optional.

use_kerberos: bool, optional

Server default is False. If true, use kerberos authentication for database authentication.

credential_data: dict, optional

The credentials to authenticate with the database, to use instead of user/password or credential ID.

project_name : str, optional

optional, a name to give to the project.

max_wait : int

optional, the maximum number of seconds to wait before giving up.

Returns:
Project
Raises:
InvalidUsageError

Raised if either username or password is passed without the other.

classmethod create_from_dataset(dataset_id: str, dataset_version_id: Optional[str] = None, project_name: Optional[str] = None, user: Optional[str] = None, password: Optional[str] = None, credential_id: Optional[str] = None, use_kerberos: Optional[bool] = None, credential_data: Optional[Dict[str, str]] = None) → TProject

Create a Project from a datarobot.models.Dataset

Parameters:
dataset_id: string

The ID of the dataset entry to use for the project’s Dataset

dataset_version_id: string, optional

The ID of the dataset version to use for the project dataset. If not specified, the latest version associated with dataset_id is used.

project_name: string, optional

The name of the project to be created. If not specified, will be “Untitled Project” for database connections, otherwise the project name will be based on the file used.

user: string, optional

The username for database authentication.

password: string, optional

The password (in cleartext) for database authentication. The password is encrypted on the server side within the scope of the HTTP request and is never saved or stored

credential_id: string, optional

The ID of the set of credentials to use instead of user and password.

use_kerberos: bool, optional

Server default is False. If true, use kerberos authentication for database authentication.

credential_data: dict, optional

The credentials to authenticate with the database, to use instead of user/password or credential ID.

Returns:
Project
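
For illustration, a minimal sketch of creating a project from an AI Catalog dataset (dataset_id is a placeholder for an existing catalog entry):

import datarobot as dr

project = dr.Project.create_from_dataset(dataset_id,
                                         project_name='Project from catalog dataset')
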
classmethod from_async(async_location: str, max_wait: int = 600) → TProject

Given a temporary async status location, poll for no more than max_wait seconds until the async process (project creation or setting the target, for example) finishes successfully, then return the ready project.

Parameters:
async_location : str

The URL for the temporary async status resource. This is returned as a header in the response to a request that initiates an async process

max_wait : int

The maximum number of seconds to wait before giving up.

Returns:
project : Project

The project, now ready

Raises:
ProjectAsyncFailureError

If the server returned an unexpected response while polling for the asynchronous operation to resolve

AsyncProcessUnsuccessfulError

If the final result of the asynchronous operation was a failure

AsyncTimeoutError

If the asynchronous operation did not resolve within the time specified

classmethod start(sourcedata: Union[str, pandas.core.frame.DataFrame], target: Optional[str] = None, project_name: str = 'Untitled Project', worker_count: Optional[int] = None, metric: Optional[str] = None, autopilot_on: bool = True, blueprint_threshold: Optional[int] = None, response_cap: Optional[float] = None, partitioning_method: Optional[datarobot.helpers.partitioning_methods.PartitioningMethod] = None, positive_class: Union[str, float, int, None] = None, target_type: Optional[str] = None, unsupervised_mode: bool = False, blend_best_models: Optional[bool] = None, prepare_model_for_deployment: Optional[bool] = None, consider_blenders_in_recommendation: Optional[bool] = None, scoring_code_only: Optional[bool] = None, min_secondary_validation_model_count: Optional[int] = None, shap_only_mode: Optional[bool] = None, relationships_configuration_id: Optional[str] = None, autopilot_with_feature_discovery: Optional[bool] = None, feature_discovery_supervised_feature_reduction: Optional[bool] = None, unsupervised_type: Optional[datarobot.enums.UnsupervisedTypeEnum] = None, autopilot_cluster_list: Optional[List[int]] = None, bias_mitigation_feature_name: Optional[str] = None, bias_mitigation_technique: Optional[str] = None, include_bias_mitigation_feature_as_predictor_variable: Optional[bool] = None) → TProject

Chain together project creation, file upload, and target selection.

Note

While this function provides a simple means to get started, it does not expose all possible parameters. For advanced usage, using create, set_advanced_options and analyze_and_model directly is recommended.

Parameters:
sourcedata : str or pandas.DataFrame

The path to the file to upload. Can be either a path to a local file or a publicly accessible URL (starting with http://, https://, file://, or s3://). If the source is a DataFrame, it will be serialized to a temporary buffer. If using a file, the filename must consist of ASCII characters only.

target : str, optional

The name of the target column in the uploaded file. Should not be provided if unsupervised_mode is True.

project_name : str

The project name.

Returns:
project : Project

The newly created and initialized project.

Other Parameters:
 
worker_count : int, optional

The number of workers that you want to allocate to this project.

metric : str, optional

The name of metric to use.

autopilot_on : boolean, default True

Whether or not to begin modeling automatically.

blueprint_threshold : int, optional

Number of hours the model is permitted to run. Minimum 1

response_cap : float, optional

Quantile of the response distribution to use for response capping. Must be in the range 0.5 .. 1.0

partitioning_method : PartitioningMethod object, optional

Instance of one of the Partition Classes defined in datarobot.helpers.partitioning_methods. As an alternative, use Project.set_partitioning_method or Project.set_datetime_partitioning to set the partitioning for the project.

positive_class : str, float, or int; optional

Specifies a level of the target column that should be treated as the positive class for binary classification. May only be specified for binary classification targets.

target_type : str, optional

Override the automatically selected target_type. An example usage would be setting target_type=’Multiclass’ when you want to perform a multiclass classification task on a numeric column that has low cardinality. You can use the TARGET_TYPE enum.

unsupervised_mode : boolean, default False

Specifies whether to create an unsupervised project.

blend_best_models: bool, optional

blend best models during Autopilot run

scoring_code_only: bool, optional

Keep only models that can be converted to scorable java code during Autopilot run.

shap_only_mode: bool, optional

Keep only models that support SHAP values during Autopilot run. Use SHAP-based insights wherever possible. Defaults to False.

prepare_model_for_deployment: bool, optional

Prepare model for deployment during Autopilot run. The preparation includes creating reduced feature list models, retraining best model on higher sample size, computing insights and assigning “RECOMMENDED FOR DEPLOYMENT” label.

consider_blenders_in_recommendation: bool, optional

Include blenders when selecting a model to prepare for deployment in an Autopilot Run. Defaults to False.

min_secondary_validation_model_count: int, optional

Compute “All backtest” scores (datetime models) or cross validation scores for the specified number of highest ranking models on the Leaderboard, if over the Autopilot default.

relationships_configuration_id : str, optional

(New in version v2.23) id of the relationships configuration to use

autopilot_with_feature_discovery: bool, optional.

(New in version v2.23) If true, autopilot will run on a feature list that includes features found via search for interactions.

feature_discovery_supervised_feature_reduction: bool, optional

(New in version v2.23) Run supervised feature reduction for feature discovery projects.

unsupervised_type : UnsupervisedTypeEnum, optional

(New in version v2.27) Specifies whether an unsupervised project is anomaly detection or clustering.

autopilot_cluster_list : list(int), optional

(New in version v2.27) Specifies the list of clusters to build for each model during Autopilot. Specifying multiple values in a list will build models with each number of clusters for the Leaderboard.

bias_mitigation_feature_name : str, optional

The feature from protected features that will be used in a bias mitigation task to mitigate bias

bias_mitigation_technique : str, optional

One of datarobot.enums.BiasMitigationTechnique. Options: ‘preprocessingReweighing’, ‘postProcessingRejectionOptionBasedClassification’. The technique by which bias will be mitigated, which determines which bias mitigation task is inserted into blueprints.

include_bias_mitigation_feature_as_predictor_variable : bool, optional

Whether the mitigation feature should also be used as an input to the model, just like any other categorical feature used for training, i.e. whether the model should “train on” this feature in addition to using it for bias mitigation

Raises:
AsyncFailureError

Polling for status of async process resulted in response with unsupported status code

AsyncProcessUnsuccessfulError

Raised if project creation or target setting was unsuccessful

AsyncTimeoutError

Raised if project creation or target setting timed out

Examples

Project.start("./tests/fixtures/file.csv",
              "a_target",
              project_name="test_name",
              worker_count=4,
              metric="a_metric")

This is an example of using a URL to specify the datasource:

Project.start("https://example.com/data/file.csv",
              "a_target",
              project_name="test_name",
              worker_count=4,
              metric="a_metric")
classmethod list(search_params: Optional[Dict[str, str]] = None) → List[datarobot.models.project.Project]

Returns the projects associated with this account.

Parameters:
search_params : dict, optional.

If not None, the returned projects are filtered by lookup. Currently you can query projects by:

  • project_name
Returns:
projects : list of Project instances

Contains a list of projects associated with this user account.

Raises:
TypeError

Raised if search_params parameter is provided, but is not of supported type.

Examples

List all projects

p_list = Project.list()
p_list
>>> [Project('Project One'), Project('Two')]

Search for projects by name

Project.list(search_params={'project_name': 'red'})
>>> [Project('Predtime'), Project('Fred Project')]

refresh() → None

Fetches the latest state of the project, and updates this object with that information. This is an in place update, not a new object.

Returns:
self : Project

the now-updated project

delete() → None

Removes this project from your account.

analyze_and_model(target=None, mode='quick', metric=None, worker_count=None, positive_class=None, partitioning_method=None, featurelist_id=None, advanced_options=None, max_wait=600, target_type=None, credentials=None, feature_engineering_prediction_point=None, unsupervised_mode=False, relationships_configuration_id=None, class_mapping_aggregation_settings=None, segmentation_task_id=None, unsupervised_type=None, autopilot_cluster_list=None)

Set target variable of an existing project and begin the autopilot process or send data to DataRobot for feature analysis only if manual mode is specified.

Any options saved using set_options will be used if nothing is passed to advanced_options. However, saved options will be ignored if advanced_options are passed.

Target setting is an asynchronous process, which means that after the initial request we will keep polling the status of the async process responsible for target setting until it finishes. For SDK users this only means that this method might raise exceptions related to its asynchronous nature.

When execution returns to the caller, the autopilot process will already have commenced (again, unless manual mode is specified).

Parameters:
target : str, optional

The name of the target column in the uploaded file. Should not be provided if unsupervised_mode is True.

mode : str, optional

You can use AUTOPILOT_MODE enum to choose between

  • AUTOPILOT_MODE.FULL_AUTO
  • AUTOPILOT_MODE.MANUAL
  • AUTOPILOT_MODE.QUICK
  • AUTOPILOT_MODE.COMPREHENSIVE: Runs all blueprints in the repository (warning: this may be extremely slow).

If unspecified, QUICK is used. If the MANUAL value is used, the model creation process will need to be started by executing the start_autopilot function with the desired featurelist. It will start immediately otherwise.

metric : str, optional

Name of the metric to use for evaluating models. You can query the metrics available for the target by way of Project.get_metrics. If none is specified, then the default recommended by DataRobot is used.

worker_count : int, optional

The number of concurrent workers to request for this project. If None, then the default is used. (New in version v2.14) Setting this to -1 will request the maximum number available to your account.

partitioning_method : PartitioningMethod object, optional

Instance of one of the Partition Classes defined in datarobot.helpers.partitioning_methods. As an alternative, use Project.set_partitioning_method or Project.set_datetime_partitioning to set the partitioning for the project.

positive_class : str, float, or int; optional

Specifies a level of the target column that should be treated as the positive class for binary classification. May only be specified for binary classification targets.

featurelist_id : str, optional

Specifies which feature list to use.

advanced_options : AdvancedOptions, optional

Used to set advanced options of project creation. Will override any options saved using set_options.

max_wait : int, optional

Time in seconds after which target setting is considered unsuccessful.

target_type : str, optional

Override the automatically selected target_type. An example usage would be setting target_type=’Multiclass’ when you want to perform a multiclass classification task on a numeric column that has low cardinality. You can use the TARGET_TYPE enum.

credentials: list, optional,

a list of credentials for the datasets used in relationship configuration (previously graphs).

feature_engineering_prediction_point : str, optional

additional aim parameter.

unsupervised_mode : boolean, default False

(New in version v2.20) Specifies whether to create an unsupervised project. If True, target may not be provided.

relationships_configuration_id : str, optional

(New in version v2.21) ID of the relationships configuration to use.

segmentation_task_id : str or SegmentationTask, optional

(New in version v2.28) The segmentation task that should be used to split the project for segmented modeling.

unsupervised_type : UnsupervisedTypeEnum, optional

(New in version v2.27) Specifies whether an unsupervised project is anomaly detection or clustering.

autopilot_cluster_list : list(int), optional

(New in version v2.27) Specifies the list of clusters to build for each model during Autopilot. Specifying multiple values in a list will build models with each number of clusters for the Leaderboard.

Returns:
project : Project

The instance with updated attributes.

Raises:
AsyncFailureError

Polling for status of async process resulted in response with unsupported status code

AsyncProcessUnsuccessfulError

Raised if target setting was unsuccessful

AsyncTimeoutError

Raised if target setting took more time than specified by the max_wait parameter

TypeError

Raised if advanced_options, partitioning_method or target_type is provided, but is not of supported type

See also

datarobot.models.Project.start
combines project creation, file upload, and target selection. Provides fewer options, but is useful for getting started quickly.
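
For illustration, a minimal sketch of the create / analyze_and_model flow (the file path and target name are placeholders):

import datarobot as dr
from datarobot.enums import AUTOPILOT_MODE

project = dr.Project.create('./data/train.csv', project_name='My project')

# Choose the target and start quick Autopilot with an explicit metric
project.analyze_and_model(target='is_bad',
                          mode=AUTOPILOT_MODE.QUICK,
                          metric='LogLoss',
                          worker_count=-1)

# Block until Autopilot has finished
project.wait_for_autopilot()
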
set_target(target=None, mode='quick', metric=None, worker_count=None, positive_class=None, partitioning_method=None, featurelist_id=None, advanced_options=None, max_wait=600, target_type=None, credentials=None, feature_engineering_prediction_point=None, unsupervised_mode=False, relationships_configuration_id=None, class_mapping_aggregation_settings=None, segmentation_task_id=None, unsupervised_type=None, autopilot_cluster_list=None)

Set target variable of an existing project and begin the Autopilot process (unless manual mode is specified).

Target setting is an asynchronous process, which means that after the initial request DataRobot keeps polling the status of an async process responsible for target setting until it finishes. For SDK users, this method might raise exceptions related to its asynchronous nature.

When execution returns to the caller, the Autopilot process will already have commenced (again, unless manual mode is specified).

Parameters:
target : str, optional

The name of the target column in the uploaded file. Should not be provided if unsupervised_mode is True.

mode : str, optional

You can use AUTOPILOT_MODE enum to choose between

  • AUTOPILOT_MODE.FULL_AUTO
  • AUTOPILOT_MODE.MANUAL
  • AUTOPILOT_MODE.QUICK
  • AUTOPILOT_MODE.COMPREHENSIVE: Runs all blueprints in the repository (warning: this may be extremely slow).

If unspecified, QUICK mode is used. If the MANUAL value is used, the model creation process needs to be started by executing the start_autopilot function with the desired feature list. It will start immediately otherwise.

metric : str, optional

Name of the metric to use for evaluating models. You can query the metrics available for the target by way of Project.get_metrics. If none is specified, then the default recommended by DataRobot is used.

worker_count : int, optional

The number of concurrent workers to request for this project. If None, then the default is used. (New in version v2.14) Setting this to -1 will request the maximum number available to your account.

partitioning_method : PartitioningMethod object, optional

Instance of one of the Partition Classes defined in datarobot.helpers.partitioning_methods. As an alternative, use Project.set_partitioning_method or Project.set_datetime_partitioning to set the partitioning for the project.

positive_class : str, float, or int; optional

Specifies a level of the target column that should be treated as the positive class for binary classification. May only be specified for binary classification targets.

featurelist_id : str, optional

Specifies which feature list to use.

advanced_options : AdvancedOptions, optional

Used to set advanced options of project creation.

max_wait : int, optional

Time in seconds after which target setting is considered unsuccessful.

target_type : str, optional

Override the automatically selected target_type. An example usage would be setting target_type=’Multiclass’ when you want to perform a multiclass classification task on a numeric column that has low cardinality. You can use the TARGET_TYPE enum.

credentials: list, optional,

A list of credentials for the datasets used in relationship configuration (previously graphs).

feature_engineering_prediction_point : str, optional

Additional aim parameter.

unsupervised_mode : boolean, default False

(New in version v2.20) Specifies whether to create an unsupervised project. If True, target may not be provided.

relationships_configuration_id : str, optional

(New in version v2.21) ID of the relationships configuration to use.

class_mapping_aggregation_settings : ClassMappingAggregationSettings, optional

Instance of datarobot.helpers.ClassMappingAggregationSettings

segmentation_task_id : str or SegmentationTask, optional

(New in version v2.28) The segmentation task that should be used to split the project for segmented modeling.

unsupervised_type : UnsupervisedTypeEnum, optional

(New in version v2.27) Specifies whether an unsupervised project is anomaly detection or clustering.

autopilot_cluster_list : list(int), optional

(New in version v2.27) Specifies the list of clusters to build for each model during Autopilot. Specifying multiple values in a list will build models with each number of clusters for the Leaderboard.

Returns:
project : Project

The instance with updated attributes.

Raises:
AsyncFailureError

Polling for status of async process resulted in response with unsupported status code.

AsyncProcessUnsuccessfulError

Raised if target setting was unsuccessful.

AsyncTimeoutError

Raised if target setting took more time than specified by the max_wait parameter.

TypeError

Raised if advanced_options, partitioning_method or target_type is provided, but is not of supported type.

See also

datarobot.models.Project.start
Combines project creation, file upload, and target selection. Provides fewer options, but is useful for getting started quickly.
datarobot.models.Project.analyze_and_model
The method that will replace set_target after it is removed.
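
Examples

A minimal sketch of setting the target on an existing project; the project ID and target column name are placeholder values.

from datarobot.enums import AUTOPILOT_MODE

project = Project.get('pid')
project.set_target(
    target='is_bad',         # hypothetical target column
    mode=AUTOPILOT_MODE.QUICK,
    worker_count=-1,         # request the maximum workers available to the account
)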
get_models(order_by: Union[str, List[str], None] = None, search_params: Optional[Dict[str, Any]] = None, with_metric: Optional[str] = None) → List[Optional[datarobot.models.model.Model]]

List all completed, successful models in the leaderboard for the given project.

Parameters:
order_by : str or list of strings, optional

If not None, the returned models are ordered by this attribute. If None, models are ordered by the default project metric.

Allowed attributes to sort by are:

  • metric
  • sample_pct

If the sort attribute is preceded by a hyphen, models will be sorted in descending order, otherwise in ascending order.

Multiple sort attributes can be included as a comma-delimited string or in a list, e.g. order_by='sample_pct,-metric' or order_by=['sample_pct', '-metric'].

Sorting by metric orders models by their validation score on the project metric.

search_params : dict, optional.

If not None, the returned models are filtered by lookup. Currently you can query models by:

  • name
  • sample_pct
  • is_starred
with_metric : str, optional.

If not None, the returned models will only have scores for this metric. Otherwise all the metrics are returned.

Returns:
models : a list of Model instances.

All of the models that have been trained in this project.

Raises:
TypeError

Raised if order_by or search_params parameter is provided, but is not of supported type.

Examples

Project.get('pid').get_models(order_by=['-sample_pct',
                              'metric'])

# Getting models that contain "Ridge" in name
# and with sample_pct more than 64
Project.get('pid').get_models(
    search_params={
        'sample_pct__gt': 64,
        'name': "Ridge"
    })

# Filtering models based on 'starred' flag:
Project.get('pid').get_models(search_params={'is_starred': True})
recommended_model() → Optional[datarobot.models.model.Model]

Returns the default recommended model, or None if there is no default recommended model.

Returns:
recommended_model : Model or None

The default recommended model.
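
Examples

A short sketch that guards against the case where no recommendation exists; the project ID is a placeholder.

model = Project.get('pid').recommended_model()
if model is not None:
    print(model.id, model.model_type)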

get_top_model(metric: Optional[str] = None) → datarobot.models.model.Model

Obtain the top ranked model for a given metric. If no metric is passed in, the project’s default metric is used. Models that display a score of N/A in the UI are not included in the ranking (see https://docs.datarobot.com/en/docs/modeling/reference/model-detail/leaderboard-ref.html#na-scores).

Parameters:
metric : str, optional

Metric to sort models

Returns:
model : Model

The top model

Raises:
ValueError

Raised if the project is unsupervised. Raised if the project has no target set. Raised if no metric was passed or the project has no metric. Raised if the metric passed is not used by the models on the leaderboard.

Examples

from datarobot.models.project import Project

project = Project.get("<MY_PROJECT_ID>")
top_model = project.get_top_model()
get_datetime_models() → List[datarobot.models.model.DatetimeModel]

List all models in the project as DatetimeModels

Requires the project to be datetime partitioned. If it is not, a ClientError will occur.

Returns:
models : list of DatetimeModel

the datetime models

get_prime_models() → List[datarobot.models.model.PrimeModel]

List all DataRobot Prime models for the project. Prime models were created to approximate a parent model, and have downloadable code.

Returns:
models : list of PrimeModel
get_prime_files(parent_model_id=None, model_id=None)

List all downloadable code files from DataRobot Prime for the project

Parameters:
parent_model_id : str, optional

Filter for only those prime files approximating this parent model

model_id : str, optional

Filter for only those prime files with code for this prime model

Returns:
files: list of PrimeFile
get_dataset() → Optional[Dataset]

Retrieve the dataset used to create a project.

Returns:
Dataset

Dataset used for creation of project or None if no catalog_id present.

Examples

from datarobot.models.project import Project

project = Project.get("<MY_PROJECT_ID>")
dataset = project.get_dataset()
get_datasets() → List[datarobot.models.prediction_dataset.PredictionDataset]

List all the datasets that have been uploaded for predictions

Returns:
datasets : list of PredictionDataset instances
upload_dataset(sourcedata, max_wait=600, read_timeout=600, forecast_point=None, predictions_start_date=None, predictions_end_date=None, dataset_filename=None, relax_known_in_advance_features_check=None, credentials=None, actual_value_column=None, secondary_datasets_config_id=None)

Upload a new dataset to make predictions against

Parameters:
sourcedata : str, file or pandas.DataFrame

Data to be used for predictions. If string, can be either a path to a local file, a publicly accessible URL (starting with http://, https://, file://), or raw file content. If using a file on disk, the filename must consist of ASCII characters only.

max_wait : int, optional

The maximum number of seconds to wait for the uploaded dataset to be processed before raising an error.

read_timeout : int, optional

The maximum number of seconds to wait for the server to respond indicating that the initial upload is complete

forecast_point : datetime.datetime or None, optional

(New in version v2.8) May only be specified for time series projects, otherwise the upload will be rejected. The time in the dataset relative to which predictions should be generated in a time series project. See the Time Series documentation for more information. If not provided, will default to using the latest forecast point in the dataset.

predictions_start_date : datetime.datetime or None, optional

(New in version v2.11) May only be specified for time series projects. The start date for bulk predictions. Note that this parameter is for generating historical predictions using the training data. This parameter should be provided in conjunction with predictions_end_date. Cannot be provided with the forecast_point parameter.

predictions_end_date : datetime.datetime or None, optional

(New in version v2.11) May only be specified for time series projects. The end date for bulk predictions, exclusive. Note that this parameter is for generating historical predictions using the training data. This parameter should be provided in conjunction with predictions_start_date. Cannot be provided with the forecast_point parameter.

actual_value_column : string, optional

(New in version v2.21) Actual value column name, valid for the prediction files if the project is unsupervised and the dataset is considered as bulk predictions dataset. Cannot be provided with the forecast_point parameter.

dataset_filename : string or None, optional

(New in version v2.14) File name to use for the dataset. Ignored for url and file path sources.

relax_known_in_advance_features_check : bool, optional

(New in version v2.15) For time series projects only. If True, missing values in the known in advance features are allowed in the forecast window at the prediction time. If omitted or False, missing values are not allowed.

credentials : list, optional

A list of credentials for the datasets used in a Feature Discovery project.

secondary_datasets_config_id: string or None, optional

(New in version v2.23) The Id of the alternative secondary dataset config to use during prediction for Feature discovery project.

Returns:
dataset : PredictionDataset

The newly uploaded dataset.

Raises:
InputNotUnderstoodError

Raised if sourcedata isn’t one of supported types.

AsyncFailureError

Raised if polling for the status of an async process resulted in a response with an unsupported status code.

AsyncProcessUnsuccessfulError

Raised if project creation was unsuccessful (i.e. the server reported an error in uploading the dataset).

AsyncTimeoutError

Raised if processing the uploaded dataset took more time than specified by the max_wait parameter.

ValueError

Raised if forecast_point or predictions_start_date and predictions_end_date are provided, but are not of the supported type.
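
Examples

A hedged sketch of uploading a scoring dataset from a local file or an in-memory DataFrame; the file path and project ID are placeholders.

import pandas as pd

project = Project.get('pid')

# From a file on disk
dataset = project.upload_dataset('./data_to_predict.csv')

# From a pandas DataFrame, giving the dataset an explicit name
df = pd.read_csv('./data_to_predict.csv')
dataset = project.upload_dataset(df, dataset_filename='data_to_predict.csv')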

upload_dataset_from_data_source(data_source_id, username, password, max_wait=600, forecast_point=None, relax_known_in_advance_features_check=None, credentials=None, predictions_start_date=None, predictions_end_date=None, actual_value_column=None, secondary_datasets_config_id=None)

Upload a new dataset from a data source to make predictions against

Parameters:
data_source_id : str

The identifier of the data source.

username : str

The username for database authentication.

password : str

The password for database authentication. The password is encrypted at server side and never saved / stored.

max_wait : int, optional

Optional, the maximum number of seconds to wait before giving up.

forecast_point : datetime.datetime or None, optional

(New in version v2.8) For time series projects only. This is the default point relative to which predictions will be generated, based on the forecast window of the project. See the time series prediction documentation for more information.

relax_known_in_advance_features_check : bool, optional

(New in version v2.15) For time series projects only. If True, missing values in the known in advance features are allowed in the forecast window at the prediction time. If omitted or False, missing values are not allowed.

credentials : list, optional

A list of credentials for the datasets used in a Feature Discovery project.

predictions_start_date : datetime.datetime or None, optional

(New in version v2.20) For time series projects only. The start date for bulk predictions. Note that this parameter is for generating historical predictions using the training data. This parameter should be provided in conjunction with predictions_end_date. Can’t be provided with the forecast_point parameter.

predictions_end_date : datetime.datetime or None, optional

(New in version v2.20) For time series projects only. The end date for bulk predictions, exclusive. Note that this parameter is for generating historical predictions using the training data. This parameter should be provided in conjunction with predictions_start_date. Can’t be provided with the forecast_point parameter.

actual_value_column : string, optional

(New in version v2.21) Actual value column name, valid for the prediction files if the project is unsupervised and the dataset is considered as bulk predictions dataset. Cannot be provided with the forecast_point parameter.

secondary_datasets_config_id: string or None, optional

(New in version v2.23) The Id of the alternative secondary dataset config to use during prediction for Feature discovery project.

Returns:
dataset : PredictionDataset

the newly uploaded dataset

upload_dataset_from_catalog(dataset_id, dataset_version_id=None, max_wait=600, forecast_point=None, relax_known_in_advance_features_check=None, credentials=None, predictions_start_date=None, predictions_end_date=None, actual_value_column=None, secondary_datasets_config_id=None)

Upload a new dataset from a catalog dataset to make predictions against

Parameters:
dataset_id : str

The identifier of the dataset.

dataset_version_id : str, optional

The version id of the dataset to use.

max_wait : int, optional

Optional, the maximum number of seconds to wait before giving up.

forecast_point : datetime.datetime or None, optional

For time series projects only. This is the default point relative to which predictions will be generated, based on the forecast window of the project. See the time series prediction documentation for more information.

relax_known_in_advance_features_check : bool, optional

For time series projects only. If True, missing values in the known in advance features are allowed in the forecast window at the prediction time. If omitted or False, missing values are not allowed.

credentials: list, optional

A list of credentials for the datasets used in Feature discovery project.

predictions_start_date : datetime.datetime or None, optional

For time series projects only. The start date for bulk predictions. Note that this parameter is for generating historical predictions using the training data. This parameter should be provided in conjunction with predictions_end_date. Can’t be provided with the forecast_point parameter.

predictions_end_date : datetime.datetime or None, optional

For time series projects only. The end date for bulk predictions, exclusive. Note that this parameter is for generating historical predictions using the training data. This parameter should be provided in conjunction with predictions_start_date. Can’t be provided with the forecast_point parameter.

actual_value_column : string, optional

Actual value column name, valid for the prediction files if the project is unsupervised and the dataset is considered as bulk predictions dataset. Cannot be provided with the forecast_point parameter.

secondary_datasets_config_id: string or None, optional

The Id of the alternative secondary dataset config to use during prediction for Feature discovery project.

Returns:
dataset : PredictionDataset

the newly uploaded dataset

get_blueprints()

List all blueprints recommended for a project.

Returns:
menu : list of Blueprint instances

All the blueprints recommended by DataRobot for a project

get_features() → List[datarobot.models.feature.Feature]

List all features for this project

Returns:
list of Feature

all features for this project

get_modeling_features(batch_size: Optional[int] = None) → List[datarobot.models.feature.ModelingFeature]

List all modeling features for this project

Only available once the target and partitioning settings have been set. For more information on the distinction between input and modeling features, see the time series documentation.

Parameters:
batch_size : int, optional

The number of features to retrieve in a single API call. If specified, the client may make multiple calls to retrieve the full list of features. If not specified, an appropriate default will be chosen by the server.

Returns:
list of ModelingFeature

All modeling features in this project

get_featurelists() → List[datarobot.models.featurelist.Featurelist]

List all featurelists created for this project

Returns:
list of Featurelist

All featurelists created for this project

get_associations(assoc_type, metric, featurelist_id=None)

Get the association statistics and metadata for a project’s informative features

New in version v2.17.

Parameters:
assoc_type : string or None

The type of association, must be either ‘association’ or ‘correlation’

metric : string or None

The specified association metric, belongs under either association or correlation umbrella

featurelist_id : string or None

The desired featurelist for which to get association statistics (New in version v2.19)

Returns:
association_data : dict

Pairwise metric strength data, feature clustering data, and ordering data for Feature Association Matrix visualization
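
Examples

A brief sketch of fetching data for the Feature Association Matrix; 'cramersV' is assumed here to be one of the available association metrics in your installation.

assoc = Project.get('pid').get_associations(assoc_type='association', metric='cramersV')
print(assoc.keys())  # pairwise strengths, feature clustering, and ordering data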

get_association_featurelists()

List featurelists and get feature association status for each

New in version v2.19.

Returns:
feature_lists : dict

Dict with ‘featurelists’ as key, with list of featurelists as values

get_association_matrix_details(feature1: str, feature2: str)

Get a sample of the actual values used to measure the association between a pair of features

New in version v2.17.

Parameters:
feature1 : str

Feature name for the first feature of interest

feature2 : str

Feature name for the second feature of interest

Returns:
dict

This data has four keys: chart_type, features, values, and types

chart_type : str

Type of plotting the pair of features gets in the UI. e.g. ‘HORIZONTAL_BOX’, ‘VERTICAL_BOX’, ‘SCATTER’ or ‘CONTINGENCY’

values : list

A list of triplet lists, e.g. {“values”: [[460.0, 428.5, 0.001], [1679.3, 259.0, 0.001], …]}. The first entry of each triplet is a value of feature1, the second is a value of feature2, and the third is the relative frequency of the pair of datapoints in the sample.

features : list of str

A list of the passed features, [feature1, feature2]

types : list of str

A list of the passed features’ types inferred by DataRobot. e.g. [‘NUMERIC’, ‘CATEGORICAL’]
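
Examples

A small sketch of inspecting the sample behind one cell of the association matrix; the feature names are placeholders.

details = Project.get('pid').get_association_matrix_details('number_diagnoses', 'payer_code')
print(details['chart_type'], details['types'])
for value1, value2, relative_frequency in details['values'][:5]:
    print(value1, value2, relative_frequency)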

get_modeling_featurelists(batch_size: Optional[int] = None) → List[datarobot.models.featurelist.ModelingFeaturelist]

List all modeling featurelists created for this project

Modeling featurelists can only be created after the target and partitioning options have been set for a project. In time series projects, these are the featurelists that can be used for modeling; in other projects, they behave the same as regular featurelists.

See the time series documentation for more information.

Parameters:
batch_size : int, optional

The number of featurelists to retrieve in a single API call. If specified, the client may make multiple calls to retrieve the full list of features. If not specified, an appropriate default will be chosen by the server.

Returns:
list of ModelingFeaturelist

all modeling featurelists in this project

get_discarded_features() → datarobot.models.restore_discarded_features.DiscardedFeaturesInfo

Retrieve features discarded during feature generation. Applicable to time series projects. Can be called at the modeling stage.

Returns:
discarded_features_info: DiscardedFeaturesInfo
restore_discarded_features(features: List[str], max_wait: int = 600) → datarobot.models.restore_discarded_features.FeatureRestorationStatus

Restore features discarded during feature generation. Applicable to time series projects. Can be called at the modeling stage.

Returns:
status: FeatureRestorationStatus

information about features requested to be restored.
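
Examples

A hedged sketch for a time series project: list the features discarded during feature generation, then request that one of them be restored. The feature name is a placeholder and the attribute used to read the discarded names is assumed.

project = Project.get('pid')

info = project.get_discarded_features()
print(info.features)  # assumed attribute holding the discarded feature names

status = project.restore_discarded_features(['sales (7 day mean)'], max_wait=600)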

create_type_transform_feature(name: str, parent_name: str, variable_type: str, replacement: Union[str, float, None] = None, date_extraction: Optional[str] = None, max_wait: int = 600) → datarobot.models.feature.Feature

Create a new feature by transforming the type of an existing feature in the project

Note that only the following transformations are supported:

  1. Text to categorical or numeric
  2. Categorical to text or numeric
  3. Numeric to categorical
  4. Date to categorical or numeric

Note

Special considerations when casting numeric to categorical

There are two parameters which can be used for variableType to convert numeric data to categorical levels. These differ in the assumptions they make about the input data, and are very important when considering the data that will be used to make predictions. The assumptions that each makes are:

  • categorical : The data in the column is all integral, and there are no missing values. If either of these conditions do not hold in the training set, the transformation will be rejected. During predictions, if any of the values in the parent column are missing, the predictions will error.
  • categoricalInt : (New in v2.6) All of the data in the column should be considered categorical in its string form when cast to an int by truncation. For example, the value 3 will be cast as the string 3 and the value 3.14 will also be cast as the string 3. Further, the value -3.6 will become the string -3. Missing values will still be recognized as missing.

For convenience these are represented in the enum VARIABLE_TYPE_TRANSFORM with the names CATEGORICAL and CATEGORICAL_INT.

Parameters:
name : str

The name to give to the new feature

parent_name : str

The name of the feature to transform

variable_type : str

The type the new column should have. See the values within datarobot.enums.VARIABLE_TYPE_TRANSFORM.

replacement : str or float, optional

The value that missing or unconvertable data should have

date_extraction : str, optional

Must be specified when parent_name is a date column (and left None otherwise). Specifies which value from a date should be extracted. See the list of values in datarobot.enums.DATE_EXTRACTION

max_wait : int, optional

The maximum amount of time to wait for DataRobot to finish processing the new column. This process can take more time with more data to process. If this operation times out, an AsyncTimeoutError will occur. DataRobot continues the processing and the new column may successfully be constructed.

Returns:
Feature

The data of the new Feature

Raises:
AsyncFailureError

If any of the responses from the server are unexpected

AsyncProcessUnsuccessfulError

If the job being waited for has failed or has been cancelled

AsyncTimeoutError

If the resource did not resolve in time
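
Examples

A brief sketch of casting an existing numeric column to categorical levels; the feature names are placeholders, and the enum members come from datarobot.enums.VARIABLE_TYPE_TRANSFORM as described above.

from datarobot.enums import VARIABLE_TYPE_TRANSFORM

new_feature = Project.get('pid').create_type_transform_feature(
    name='zip_code (categorical)',   # hypothetical new feature name
    parent_name='zip_code',          # hypothetical existing numeric feature
    variable_type=VARIABLE_TYPE_TRANSFORM.CATEGORICAL_INT,
)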

get_featurelist_by_name(name: str) → Optional[datarobot.models.featurelist.Featurelist]

Retrieve a featurelist of the project by name.

Parameters:
name : str, optional

The name of the Project’s featurelist to get.

Returns:
Featurelist

The featurelist found by name, or None if no featurelist with that name exists.

Examples

project = Project.get('5223deadbeefdeadbeef0101')
featurelist = project.get_featurelist_by_name("Raw Features")
create_featurelist(name: Optional[str] = None, features: Optional[List[str]] = None, starting_featurelist: Optional[datarobot.models.featurelist.Featurelist] = None, starting_featurelist_id: Optional[str] = None, starting_featurelist_name: Optional[str] = None, features_to_include: Optional[List[str]] = None, features_to_exclude: Optional[List[str]] = None) → datarobot.models.featurelist.Featurelist

Creates a new featurelist

Parameters:
name : str, optional

The name to give to this new featurelist. Names must be unique, so an error will be returned from the server if this name has already been used in this project. We dynamically create a name if none is provided.

features : list of str, optional

The names of the features. Each feature must exist in the project already.

starting_featurelist : Featurelist, optional

The featurelist to use as the basis when creating a new featurelist. starting_featurelist.features will be read to get the list of features that we will manipulate.

starting_featurelist_id : str, optional

The featurelist ID used instead of passing an object instance.

starting_featurelist_name : str, optional

The name of a featurelist, e.g. “Informative Features”, used to look up the featurelist via the API and fetch its features.

features_to_include : list of str, optional

The list of feature names to include in the new featurelist. Throws an error if an item in this list is not in the featurelist that was passed or retrieved from the API. If nothing is passed, all features from the starting featurelist are included.

features_to_exclude : list of str, optional

The list of feature names to exclude from the new featurelist. Throws an error if an item in this list is not in the featurelist that was passed, or if a feature appears in both this list and features_to_include. The method cannot use both parameters at the same time.

Returns:
Featurelist

newly created featurelist

Raises:
DuplicateFeaturesError

Raised if features variable contains duplicate features

InvalidUsageError

Raised if the method is called with incompatible arguments

Examples

project = Project.get('5223deadbeefdeadbeef0101')
flists = project.get_featurelists()

# Create a new featurelist using a subset of features from an
# existing featurelist
flist = flists[0]
features = flist.features[::2]  # Half of the features

new_flist = project.create_featurelist(
    name='Feature Subset',
    features=features,
)
project = Project.get('5223deadbeefdeadbeef0101')

# Create a new featurelist using a subset of features from an
# existing featurelist by using features_to_exclude param

new_flist = project.create_featurelist(
    name='Feature Subset of Existing Featurelist',
    starting_featurelist_name="Informative Features",
    features_to_exclude=["metformin", "weight", "age"],
)
create_modeling_featurelist(name: str, features: List[str], skip_datetime_partition_column: bool = False) → datarobot.models.featurelist.ModelingFeaturelist

Create a new modeling featurelist

Modeling featurelists can only be created after the target and partitioning options have been set for a project. In time series projects, these are the featurelists that can be used for modeling; in other projects, they behave the same as regular featurelists.

See the time series documentation for more information.

Parameters:
name : str

the name of the modeling featurelist to create. Names must be unique within the project, or the server will return an error.

features : list of str

the names of the features to include in the modeling featurelist. Each feature must be a modeling feature.

skip_datetime_partition_column: boolean, optional

False by default. If True, the featurelist will not contain the datetime partition column. Use this to create monotonic feature lists in time series projects. This setting makes no difference for non-time-series projects. Monotonic featurelists cannot be used for modeling.

Returns:
featurelist : ModelingFeaturelist

the newly created featurelist

Examples

project = Project.get('1234deadbeeffeeddead4321')
modeling_features = project.get_modeling_features()
selected_features = [feat.name for feat in modeling_features][:5]  # select first five
new_flist = project.create_modeling_featurelist('Model This', selected_features)
get_metrics(feature_name: str)

Get the metrics recommended for modeling on the given feature.

Parameters:
feature_name : str

The name of the feature to query regarding which metrics are recommended for modeling.

Returns:
feature_name: str

The name of the feature that was looked up

available_metrics: list of str

An array of strings representing the appropriate metrics. If the feature cannot be selected as the target, then this array will be empty.

metric_details: list of dict

The list of metricDetails objects

metric_name: str

Name of the metric

supports_timeseries: boolean

This metric is valid for timeseries

supports_multiclass: boolean

This metric is valid for multiclass classification

supports_binary: boolean

This metric is valid for binary classification

supports_regression: boolean

This metric is valid for regression

ascending: boolean

Should the metric be sorted in ascending order
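
Examples

A short sketch of querying the metrics recommended for a candidate target column; the column name is a placeholder and the dictionary keys follow the return description above.

metrics_info = Project.get('pid').get_metrics('is_bad')
print(metrics_info['available_metrics'])
for detail in metrics_info['metric_details']:
    print(detail['metric_name'], detail['ascending'])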

get_status()

Query the server for project status.

Returns:
status : dict

Contains:

  • autopilot_done : a boolean.
  • stage : a short string indicating which stage the project is in.
  • stage_description : a description of what stage means.

Examples

{"autopilot_done": False,
 "stage": "modeling",
 "stage_description": "Ready for modeling"}
pause_autopilot() → bool

Pause autopilot, which stops processing the next jobs in the queue.

Returns:
paused : boolean

Whether the command was acknowledged

unpause_autopilot() → bool

Unpause autopilot, which restarts processing the next jobs in the queue.

Returns:
unpaused : boolean

Whether the command was acknowledged.

start_autopilot(featurelist_id: str, mode: datarobot.enums.Enum = 'quick', blend_best_models: bool = True, scoring_code_only: bool = False, prepare_model_for_deployment: bool = True, consider_blenders_in_recommendation: bool = False, run_leakage_removed_feature_list: bool = True, autopilot_cluster_list: Optional[List[int]] = None) → None

Start Autopilot on provided featurelist with the specified Autopilot settings, halting the current Autopilot run.

Only one Autopilot can run at a time, so any ongoing Autopilot on a different featurelist will be halted. Modeling jobs already in the queue are not affected, but the halted Autopilot will not add new jobs to the queue.

Parameters:
featurelist_id : str

Identifier of featurelist that should be used for autopilot

mode : str, optional

The Autopilot mode to run. You can use AUTOPILOT_MODE enum to choose between

  • AUTOPILOT_MODE.FULL_AUTO
  • AUTOPILOT_MODE.QUICK
  • AUTOPILOT_MODE.COMPREHENSIVE

If unspecified, AUTOPILOT_MODE.QUICK is used.

blend_best_models : bool, optional

Blend best models during the Autopilot run. This option is not supported in SHAP-only mode.

scoring_code_only : bool, optional

Keep only models that can be converted to scorable java code during Autopilot run.

prepare_model_for_deployment : bool, optional

Prepare model for deployment during Autopilot run. The preparation includes creating reduced feature list models, retraining best model on higher sample size, computing insights and assigning “RECOMMENDED FOR DEPLOYMENT” label.

consider_blenders_in_recommendation : bool, optional

Include blenders when selecting a model to prepare for deployment in an Autopilot Run. This option is not supported in SHAP-only mode or for multilabel projects.

run_leakage_removed_feature_list : bool, optional

Run Autopilot on Leakage Removed feature list (if exists).

autopilot_cluster_list : list of int, optional

(New in v2.27) A list of integers, where each value will be used as the number of clusters in Autopilot model(s) for unsupervised clustering projects. Cannot be specified unless project unsupervisedMode is true and unsupervisedType is set to ‘clustering’.

Raises:
AppPlatformError

Raised if the project’s target was not selected or the Autopilot settings are invalid for the project.
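
Examples

A hedged sketch of restarting Autopilot on a specific featurelist, looked up by name; the featurelist name is a placeholder.

from datarobot.enums import AUTOPILOT_MODE

project = Project.get('pid')
featurelist = project.get_featurelist_by_name('Informative Features')
project.start_autopilot(
    featurelist_id=featurelist.id,
    mode=AUTOPILOT_MODE.QUICK,
    blend_best_models=False,
)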

train(trainable, sample_pct=None, featurelist_id=None, source_project_id=None, scoring_type=None, training_row_count=None, monotonic_increasing_featurelist_id=<object object>, monotonic_decreasing_featurelist_id=<object object>, n_clusters=None)

Submit a job to the queue to train a model.

Either sample_pct or training_row_count can be used to specify the amount of data to use, but not both. If neither are specified, a default of the maximum amount of data that can safely be used to train any blueprint without going into the validation data will be selected.

In smart-sampled projects, sample_pct and training_row_count are assumed to be in terms of rows of the minority class.

Note

If the project uses datetime partitioning, use Project.train_datetime instead.

Parameters:
trainable : str or Blueprint

For str, this is assumed to be a blueprint_id. If no source_project_id is provided, the project_id will be assumed to be the project that this instance represents.

Otherwise, for a Blueprint, it contains the blueprint_id and source_project_id that we want to use. featurelist_id will assume the default for this project if not provided, and sample_pct will default to using the maximum training value allowed for this project’s partition setup. source_project_id will be ignored if a Blueprint instance is used for this parameter

sample_pct : float, optional

The amount of data to use for training, as a percentage of the project dataset from 0 to 100.

featurelist_id : str, optional

The identifier of the featurelist to use. If not defined, the default for this project is used.

source_project_id : str, optional

Which project created this blueprint_id. If None, it defaults to looking in this project. Note that you must have read permissions in this project.

scoring_type : str, optional

Either validation or crossValidation (also dr.SCORING_TYPE.validation or dr.SCORING_TYPE.cross_validation). validation is available for every partitioning type, and indicates that the default model validation should be used for the project. If the project uses a form of cross-validation partitioning, crossValidation can also be used to indicate that all of the available training/validation combinations should be used to evaluate the model.

training_row_count : int, optional

The number of rows to use to train the requested model.

monotonic_increasing_featurelist_id : str, optional

(new in version 2.11) the id of the featurelist that defines the set of features with a monotonically increasing relationship to the target. Passing None disables increasing monotonicity constraint. Default (dr.enums.MONOTONICITY_FEATURELIST_DEFAULT) is the one specified by the blueprint.

monotonic_decreasing_featurelist_id : str, optional

(new in version 2.11) the id of the featurelist that defines the set of features with a monotonically decreasing relationship to the target. Passing None disables decreasing monotonicity constraint. Default (dr.enums.MONOTONICITY_FEATURELIST_DEFAULT) is the one specified by the blueprint.

n_clusters: int, optional

(new in version 2.27) Number of clusters to use in an unsupervised clustering model. This parameter is used only for unsupervised clustering models that don’t automatically determine the number of clusters.

Returns:
model_job_id : str

id of created job, can be used as parameter to ModelJob.get method or wait_for_async_model_creation function

Examples

Use a Blueprint instance:

blueprint = project.get_blueprints()[0]
model_job_id = project.train(blueprint, training_row_count=project.max_train_rows)

Use a blueprint_id, which is a string. In the first case, it is assumed that the blueprint was created by this project. If you are using a blueprint used by another project, you will need to pass the id of that other project as well.

blueprint_id = 'e1c7fc29ba2e612a72272324b8a842af'
project.train(blueprint_id, training_row_count=project.max_train_rows)

another_project.train(blueprint_id, source_project_id=project.id)

You can also easily use this interface to train a new model using the data from an existing model:

model = project.get_models()[0]
model_job_id = project.train(model.blueprint.id,
                             sample_pct=100)
train_datetime(blueprint_id, featurelist_id=None, training_row_count=None, training_duration=None, source_project_id=None, monotonic_increasing_featurelist_id=<object object>, monotonic_decreasing_featurelist_id=<object object>, use_project_settings=False, sampling_method=None)

Create a new model in a datetime partitioned project

If the project is not datetime partitioned, an error will occur.

All durations should be specified with a duration string such as those returned by the partitioning_methods.construct_duration_string helper method. Please see datetime partitioned project documentation for more information on duration strings.

Parameters:
blueprint_id : str

the blueprint to use to train the model

featurelist_id : str, optional

the featurelist to use to train the model. If not specified, the project default will be used.

training_row_count : int, optional

the number of rows of data that should be used to train the model. If specified, neither training_duration nor use_project_settings may be specified.

training_duration : str, optional

a duration string specifying what time range the data used to train the model should span. If specified, neither training_row_count nor use_project_settings may be specified.

sampling_method : str, optional

(New in version v2.23) defines the way training data is selected. Can be either random or latest. In combination with training_row_count defines how rows are selected from backtest (latest by default). When training data is defined using time range (training_duration or use_project_settings) this setting changes the way time_window_sample_pct is applied (random by default). Applicable to OTV projects only.

use_project_settings : bool, optional

(New in version v2.20) defaults to False. If True, indicates that the custom backtest partitioning settings specified by the user will be used to train the model and evaluate backtest scores. If specified, neither training_row_count nor training_duration may be specified.

source_project_id : str, optional

the id of the project this blueprint comes from, if not this project. If left unspecified, the blueprint must belong to this project.

monotonic_increasing_featurelist_id : str, optional

(New in version v2.18) optional, the id of the featurelist that defines the set of features with a monotonically increasing relationship to the target. Passing None disables increasing monotonicity constraint. Default (dr.enums.MONOTONICITY_FEATURELIST_DEFAULT) is the one specified by the blueprint.

monotonic_decreasing_featurelist_id : str, optional

(New in version v2.18) optional, the id of the featurelist that defines the set of features with a monotonically decreasing relationship to the target. Passing None disables decreasing monotonicity constraint. Default (dr.enums.MONOTONICITY_FEATURELIST_DEFAULT) is the one specified by the blueprint.

Returns:
job : ModelJob

the created job to build the model

blend(model_ids: List[str], blender_method: str) → datarobot.models.modeljob.ModelJob

Submit a job for creating blender model. Upon success, the new job will be added to the end of the queue.

Parameters:
model_ids : list of str

List of model ids that will be used to create blender. These models should have completed validation stage without errors, and can’t be blenders or DataRobot Prime

blender_method : str

Chosen blend method, one from datarobot.enums.BLENDER_METHOD. If this is a time series project, only methods in datarobot.enums.TS_BLENDER_METHOD are allowed.

Returns:
model_job : ModelJob

New ModelJob instance for the blender creation job in queue.

See also

datarobot.models.Project.check_blendable
to confirm if models can be blended
check_blendable(model_ids: List[str], blender_method: str) → datarobot.helpers.eligibility_result.EligibilityResult

Check if the specified models can be successfully blended

Parameters:
model_ids : list of str

List of model ids that will be used to create blender. These models should have completed validation stage without errors, and can’t be blenders or DataRobot Prime

blender_method : str

Chosen blend method, one from datarobot.enums.BLENDER_METHOD. If this is a time series project, only methods in datarobot.enums.TS_BLENDER_METHOD are allowed.

Returns:
EligibilityResult
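
Examples

A sketch of checking blend eligibility for the top three models before submitting a blender job; BLENDER_METHOD.AVERAGE is assumed here to be a member of datarobot.enums.BLENDER_METHOD.

from datarobot.enums import BLENDER_METHOD

project = Project.get('pid')
model_ids = [model.id for model in project.get_models()[:3]]

eligibility = project.check_blendable(model_ids, BLENDER_METHOD.AVERAGE)
print(eligibility)  # inspect the EligibilityResult before blending

blend_job = project.blend(model_ids, BLENDER_METHOD.AVERAGE)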
start_prepare_model_for_deployment(model_id: str) → None

Prepare a specific model for deployment.

The requested model will be trained on the maximum autopilot size then go through the recommendation stages. For datetime partitioned projects, this includes the feature impact stage, retraining on a reduced feature list, and retraining the best of the reduced feature list model and the max autopilot original model on recent data. For non-datetime partitioned projects, this includes the feature impact stage, retraining on a reduced feature list, retraining the best of the reduced feature list model and the max autopilot original model up to the holdout size, then retraining the up-to-the holdout model on the full dataset.

Parameters:
model_id : str

The model to prepare for deployment.

get_all_jobs(status: Optional[datarobot.enums.Enum] = None) → List[datarobot.models.job.Job]

Get a list of jobs

This will give Jobs representing any type of job, including modeling or predict jobs.

Parameters:
status : QUEUE_STATUS enum, optional

If called with QUEUE_STATUS.INPROGRESS, will return the jobs that are currently running.

If called with QUEUE_STATUS.QUEUE, will return the jobs that are waiting to be run.

If called with QUEUE_STATUS.ERROR, will return the jobs that have errored.

If no value is provided, will return all jobs currently running or waiting to be run.

Returns:
jobs : list

Each is an instance of Job

get_blenders() → List[datarobot.models.model.BlenderModel]

Get a list of blender models.

Returns:
list of BlenderModel

list of all blender models in project.

get_frozen_models() → List[datarobot.models.model.FrozenModel]

Get a list of frozen models

Returns:
list of FrozenModel

list of all frozen models in project.

get_combined_models() → List[datarobot.models.model.CombinedModel]

Get a list of models in segmented project.

Returns:
list of CombinedModel

list of all combined models in segmented project.

get_active_combined_model() → datarobot.models.model.CombinedModel

Retrieve currently active combined model in segmented project.

Returns:
CombinedModel

currently active combined model in segmented project.

get_segments_models(combined_model_id: Optional[str] = None) → List[Dict[str, Any]]

Retrieve a list of all models belonging to the segments/child projects of the segmented project.

Parameters:
combined_model_id : str, optional

Id of the combined model to get segments for. If there is only a single combined model it can be retrieved automatically, but this must be specified when there are > 1 combined models.

Returns:
segments_models : list(dict)

A list of dictionaries containing all of the segments/child projects, each with a list of their models ordered by metric from best to worst.

get_model_jobs(status: Optional[datarobot.enums.Enum] = None) → List[datarobot.models.modeljob.ModelJob]

Get a list of modeling jobs

Parameters:
status : QUEUE_STATUS enum, optional

If called with QUEUE_STATUS.INPROGRESS, will return the modeling jobs that are currently running.

If called with QUEUE_STATUS.QUEUE, will return the modeling jobs that are waiting to be run.

If called with QUEUE_STATUS.ERROR, will return the modeling jobs that have errored.

If no value is provided, will return all modeling jobs currently running or waiting to be run.

Returns:
jobs : list

Each is an instance of ModelJob

get_predict_jobs(status: Optional[datarobot.enums.Enum] = None) → List[datarobot.models.predict_job.PredictJob]

Get a list of prediction jobs

Parameters:
status : QUEUE_STATUS enum, optional

If called with QUEUE_STATUS.INPROGRESS, will return the prediction jobs that are currently running.

If called with QUEUE_STATUS.QUEUE, will return the prediction jobs that are waiting to be run.

If called with QUEUE_STATUS.ERROR, will return the prediction jobs that have errored.

If called without a status, will return all prediction jobs currently running or waiting to be run.

Returns:
jobs : list

Each is an instance of PredictJob

wait_for_autopilot(check_interval: Union[float, int] = 20.0, timeout: Union[float, int, None] = 86400, verbosity: Union[int, datarobot.enums.Enum] = 1) → None

Blocks until autopilot is finished. This will raise an exception if the autopilot mode is changed from AUTOPILOT_MODE.FULL_AUTO.

It makes API calls to sync the project state with the server and to look at which jobs are enqueued.

Parameters:
check_interval : float or int

The maximum time (in seconds) to wait between checks for whether autopilot is finished

timeout : float or int or None

After this long (in seconds), we give up. If None, never timeout.

verbosity:

This should be VERBOSITY_LEVEL.SILENT or VERBOSITY_LEVEL.VERBOSE. For VERBOSITY_LEVEL.SILENT, nothing will be displayed about progress. For VERBOSITY_LEVEL.VERBOSE, the number of jobs in progress or queued is shown. Note that new jobs are added to the queue along the way.

Raises:
AsyncTimeoutError

If autopilot does not finish in the amount of time specified

RuntimeError

If a condition is detected that indicates that autopilot will not complete on its own
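
Examples

A minimal sketch that blocks until Autopilot completes, using a longer polling interval, an explicit timeout, and silent output; VERBOSITY_LEVEL is assumed to live in datarobot.enums.

from datarobot.enums import VERBOSITY_LEVEL

Project.get('pid').wait_for_autopilot(
    check_interval=30.0,
    timeout=3600,
    verbosity=VERBOSITY_LEVEL.SILENT,
)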

rename(project_name: str) → None

Update the name of the project.

Parameters:
project_name : str

The new name

set_project_description(project_description: str) → None

Set or Update the project description.

Parameters:
project_description : str

The new description for this project.

unlock_holdout() → None

Unlock the holdout for this project.

This will cause subsequent queries of the models of this project to contain the metric values for the holdout set, if it exists.

Take care, as this cannot be undone. Remember that best practice is to select a model before analyzing the model performance on the holdout set.

set_worker_count(worker_count: int) → None

Sets the number of workers allocated to this project.

Note that this value is limited to the number allowed by your account. Lowering the number will not stop currently running jobs, but will cause the queue to wait for the appropriate number of jobs to finish before attempting to run more jobs.

Parameters:
worker_count : int

The number of concurrent workers to request from the pool of workers. (New in version v2.14) Setting this to -1 will update the number of workers to the maximum available to your account.

set_advanced_options(advanced_options: datarobot.helpers.AdvancedOptions = None, **kwargs) → None

Update the advanced options of this project.

Note

project options will not be stored at the database level, so the options set via this method will only be attached to a project instance for the lifetime of a client session (if you quit your session and reopen a new one before running autopilot, the advanced options will be lost).

Accepts either an AdvancedOptions object to replace all advanced options, or individual keyword arguments. This is an in-place update, not a new object. The options set will only remain for the life of this project instance within a given session.

Parameters:
advanced_options : AdvancedOptions, optional

AdvancedOptions instance as an alternative to passing individual parameters.

weights : string, optional

The name of a column indicating the weight of each row

response_cap : float in [0.5, 1), optional

Quantile of the response distribution to use for response capping.

blueprint_threshold : int, optional

Number of hours models are permitted to run before being excluded from later autopilot stages Minimum 1

seed : int, optional

a seed to use for randomization

smart_downsampled : bool, optional

whether to use smart downsampling to throw away excess rows of the majority class. Only applicable to classification and zero-boosted regression projects.

majority_downsampling_rate : float, optional

The percentage between 0 and 100 of the majority rows that should be kept. Specify only if using smart downsampling. May not cause the majority class to become smaller than the minority class.

offset : list of str, optional

(New in version v2.6) the list of the names of the columns containing the offset of each row

exposure : string, optional

(New in version v2.6) the name of a column containing the exposure of each row

accuracy_optimized_mb : bool, optional

(New in version v2.6) Include additional, longer-running models that will be run by the autopilot and available to run manually.

events_count : string, optional

(New in version v2.8) the name of a column specifying events count.

monotonic_increasing_featurelist_id : string, optional

(new in version 2.11) the id of the featurelist that defines the set of features with a monotonically increasing relationship to the target. If None, no such constraints are enforced. When specified, this will set a default for the project that can be overridden at model submission time if desired.

monotonic_decreasing_featurelist_id : string, optional

(new in version 2.11) the id of the featurelist that defines the set of features with a monotonically decreasing relationship to the target. If None, no such constraints are enforced. When specified, this will set a default for the project that can be overridden at model submission time if desired.

only_include_monotonic_blueprints : bool, optional

(new in version 2.11) when true, only blueprints that support enforcing monotonic constraints will be available in the project or selected for the autopilot.

allowed_pairwise_interaction_groups : list of tuple, optional

(New in version v2.19) For GA2M models - specify groups of columns for which pairwise interactions will be allowed. E.g. if set to [(A, B, C), (C, D)] then GA2M models will allow interactions between columns AxB, BxC, AxC, CxD. All others (AxD, BxD) will not be considered.

blend_best_models: bool, optional

(New in version v2.19) blend best models during Autopilot run

scoring_code_only: bool, optional

(New in version v2.19) Keep only models that can be converted to scorable java code during Autopilot run

shap_only_mode: bool, optional

(New in version v2.21) Keep only models that support SHAP values during Autopilot run. Use SHAP-based insights wherever possible. Defaults to False.

prepare_model_for_deployment: bool, optional

(New in version v2.19) Prepare model for deployment during Autopilot run. The preparation includes creating reduced feature list models, retraining best model on higher sample size, computing insights and assigning “RECOMMENDED FOR DEPLOYMENT” label.

consider_blenders_in_recommendation: bool, optional

(New in version 2.22.0) Include blenders when selecting a model to prepare for deployment in an Autopilot Run. Defaults to False.

min_secondary_validation_model_count: int, optional

(New in version v2.19) Compute “All backtest” scores (datetime models) or cross validation scores for the specified number of highest ranking models on the Leaderboard, if over the Autopilot default.

autopilot_data_sampling_method: str, optional

(New in version v2.23) one of datarobot.enums.DATETIME_AUTOPILOT_DATA_SAMPLING_METHOD. Applicable for OTV projects only, defines if autopilot uses “random” or “latest” sampling when iteratively building models on various training samples. Defaults to “random” for duration-based projects and to “latest” for row-based projects.

run_leakage_removed_feature_list: bool, optional

(New in version v2.23) Run Autopilot on Leakage Removed feature list (if exists).

autopilot_with_feature_discovery: bool, optional.

(New in version v2.23) If true, autopilot will run on a feature list that includes features found via search for interactions.

feature_discovery_supervised_feature_reduction: bool, optional

(New in version v2.23) Run supervised feature reduction for feature discovery projects.

exponentially_weighted_moving_alpha: float, optional

(New in version v2.26) defaults to None, value between 0 and 1 (inclusive), indicates alpha parameter used in exponentially weighted moving average within feature derivation window.

external_time_series_baseline_dataset_id: str, optional.

(New in version v2.26) If provided, will generate metrics scaled by external model predictions metric for time series projects. The external predictions catalog must be validated before autopilot starts, see Project.validate_external_time_series_baseline and external baseline predictions documentation for further explanation.

use_supervised_feature_reduction: bool, optional (default True)

Time Series only. When true, during feature generation DataRobot runs a supervised algorithm to retain only qualifying features. Setting to false can severely impact autopilot duration, especially for datasets with many features.

primary_location_column: str, optional.

The name of primary location column.

protected_features: list of str, optional.

(New in version v2.24) A list of project features to mark as protected for Bias and Fairness testing calculations. Max number of protected features allowed is 10.

preferable_target_value: str, optional.

(New in version v2.24) A target value that should be treated as a favorable outcome for the prediction. For example, if we want to check gender discrimination for giving a loan and our target is named is_bad, then the positive outcome for the prediction would be No, which means that the loan is good and that’s what we treat as a favorable result for the loaner.

fairness_metrics_set: str, optional.

(New in version v2.24) Metric to use for calculating fairness. Can be one of proportionalParity, equalParity, predictionBalance, trueFavorableAndUnfavorableRateParity or favorableAndUnfavorablePredictiveValueParity. Used and required only if Bias & Fairness in AutoML feature is enabled.

fairness_threshold: str, optional.

(New in version v2.24) Threshold value for the fairness metric. Can be in the range [0.0, 1.0]. If the relative (i.e. normalized) fairness score is below the threshold, the user will see a visual indication.

bias_mitigation_feature_name : str, optional

The feature from protected features that will be used in a bias mitigation task to mitigate bias

bias_mitigation_technique : str, optional

One of datarobot.enums.BiasMitigationTechnique. Options:

  • ‘preprocessingReweighing’
  • ‘postProcessingRejectionOptionBasedClassification’

The technique by which we’ll mitigate bias, which will inform which bias mitigation task we insert into blueprints.

include_bias_mitigation_feature_as_predictor_variable : bool, optional

Whether we should also use the mitigation feature as an input to the modeler, just like any other categorical used for training, i.e. whether the model should “train on” this feature in addition to using it for bias mitigation.
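
Examples

A brief sketch of updating a few advanced options in place, either with individual keyword arguments or with an AdvancedOptions instance; as noted above, these options persist only for this project instance within the current session.

from datarobot.helpers import AdvancedOptions

project = Project.get('pid')
project.set_advanced_options(seed=42, smart_downsampled=True, majority_downsampling_rate=50.0)

# Equivalently, pass a whole AdvancedOptions object
options = AdvancedOptions(seed=42, blend_best_models=False)
project.set_advanced_options(advanced_options=options)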

list_advanced_options() → Dict[str, Any]

View the advanced options that have been set on a project instance. Includes those that haven’t been set (with value of None).

Returns:
dict of advanced options and their values
set_partitioning_method(cv_method: Optional[str] = None, validation_type: Optional[str] = None, seed: int = 0, reps: Optional[int] = None, user_partition_col: Optional[str] = None, training_level: Union[str, int, None] = None, validation_level: Union[str, int, None] = None, holdout_level: Union[str, int, None] = None, cv_holdout_level: Union[str, int, None] = None, validation_pct: Optional[int] = None, holdout_pct: Optional[int] = None, partition_key_cols: Optional[List[str]] = None, partitioning_method: Optional[datarobot.helpers.partitioning_methods.PartitioningMethod] = None)

Configures the partitioning method for this project.

If this project does not already have a partitioning method set, creates a new configuration based on provided args.

If the partitioning_method arg is set, that configuration will instead be used.

Note

This is an inplace update, not a new object. The options set will only remain for the life of this project instance within a given session. You must still call set_target to make this change permanent for the project. Calling refresh without first calling set_target will invalidate this configuration. Similarly, calling get to retrieve a second copy of the project will not include this configuration.

New in version v3.0.

Parameters:
cv_method: str

The partitioning method used. Supported values can be found in datarobot.enums.CV_METHOD.

validation_type: str

May be “CV” (K-fold cross-validation) or “TVH” (Training, validation, and holdout).

seed : int

A seed to use for randomization.

reps : int

Number of cross validation folds to use.

user_partition_col : str

The name of the column containing the partition assignments.

training_level : Union[str,int]

The value of the partition column indicating a row is part of the training set.

validation_level : Union[str,int]

The value of the partition column indicating a row is part of the validation set.

holdout_level : Union[str,int]

The value of the partition column indicating a row is part of the holdout set (use None if you want no holdout set).

cv_holdout_level: Union[str,int]

The value of the partition column indicating a row is part of the holdout set.

validation_pct : int

The desired percentage of dataset to assign to validation set.

holdout_pct : int

The desired percentage of dataset to assign to holdout set.

partition_key_cols : list

A list containing a single string, where the string is the name of the column whose values should remain together in partitioning.

partitioning_method : PartitioningMethod, optional

An instance of datarobot.helpers.partitioning_methods.PartitioningMethod that will be used instead of creating a new instance from the other args.

Returns:
project : Project

The instance with updated attributes.

Raises:
TypeError

If cv_method or validation_type are not set and partitioning_method is not set.

InvalidUsageError

If invoked after project.set_target or project.start, or if invoked with the wrong combination of args for a given partitioning method.
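
Illustrative sketch of configuring 5-fold cross-validation and then persisting it by setting the target (the project id and target column are placeholders, and the CV_METHOD value is an assumed example):

import datarobot as dr

project = dr.Project.get('my-project-id')  # placeholder project id

# Configure 5-fold CV with a 20% holdout; the change only becomes permanent
# once set_target is called on this same project instance.
project.set_partitioning_method(
    cv_method=dr.enums.CV_METHOD.RANDOM,  # or another value from datarobot.enums.CV_METHOD
    validation_type='CV',
    reps=5,
    holdout_pct=20,
    seed=42,
)
project.set_target(target='is_churn')  # placeholder target column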

get_uri() → str
Returns:
url : str

Permanent static hyperlink to a project leaderboard.

open_leaderboard_browser() → None

Opens project leaderboard in web browser. Note: If text-mode browsers are used, the calling process will block until the user exits the browser.

get_rating_table_models() → List[datarobot.models.model.RatingTableModel]

Get a list of models with a rating table

Returns:
list of RatingTableModel

list of all models with a rating table in project.

get_rating_tables() → List[datarobot.models.rating_table.RatingTable]

Get a list of rating tables

Returns:
list of RatingTable

list of rating tables in project.

get_access_list() → List[datarobot.models.sharing.SharingAccess]

Retrieve users who have access to this project and their access levels

New in version v2.15.

Returns:
list of SharingAccess
share(access_list: List[datarobot.models.sharing.SharingAccess], send_notification: Optional[bool] = None, include_feature_discovery_entities: Optional[bool] = None) → None

Modify the ability of users to access this project

New in version v2.15.

Parameters:
access_list : list of SharingAccess

the modifications to make.

send_notification : boolean, default None

(New in version v2.21) optional, whether or not an email notification should be sent; defaults to None

include_feature_discovery_entities : boolean, default None

(New in version v2.21) optional (default: None), whether or not to share all the related entities i.e., datasets for a project with Feature Discovery enabled

Raises:
datarobot.ClientError :

if you do not have permission to share this project, if the user you’re sharing with doesn’t exist, if the same user appears multiple times in the access_list, or if these changes would leave the project without an owner

Examples

Transfer access to the project from old_user@datarobot.com to new_user@datarobot.com

import datarobot as dr

new_access = dr.SharingAccess("new_user@datarobot.com",
                              dr.enums.SHARING_ROLE.OWNER, can_share=True)
access_list = [dr.SharingAccess("old_user@datarobot.com", None), new_access]

dr.Project.get('my-project-id').share(access_list)
batch_features_type_transform(parent_names: List[str], variable_type: str, prefix: Optional[str] = None, suffix: Optional[str] = None, max_wait: int = 600) → List[datarobot.models.feature.Feature]

Create new features by transforming the type of existing ones.

New in version v2.17.

Note

The following transformations are only supported in batch mode:

  1. Text to categorical or numeric
  2. Categorical to text or numeric
  3. Numeric to categorical

See here for special considerations when casting numeric to categorical. Date to categorical or numeric transformations are not currently supported for batch mode but can be performed individually using create_type_transform_feature.

Parameters:
parent_names : list[str]

The list of variable names to be transformed.

variable_type : str

The type new columns should have. Can be one of ‘categorical’, ‘categoricalInt’, ‘numeric’, and ‘text’ - supported values can be found in datarobot.enums.VARIABLE_TYPE_TRANSFORM.

prefix : str, optional

Note

Either prefix, suffix, or both must be provided.

The string that will preface all feature names. At least one of prefix and suffix must be specified.

suffix : str, optional

Note

Either prefix, suffix, or both must be provided.

The string that will be appended to the end of all feature names. At least one of prefix and suffix must be specified.

max_wait : int, optional

The maximum amount of time to wait for DataRobot to finish processing the new column. This process can take more time with more data to process. If this operation times out, an AsyncTimeoutError will occur. DataRobot continues the processing and the new column may successfully be constructed.

Returns:
list of Features

all features for this project after transformation.

Raises:
TypeError

If parent_names is not a list.

ValueError

If value of variable_type is not from datarobot.enums.VARIABLE_TYPE_TRANSFORM.

AsyncFailureError

If any of the responses from the server are unexpected.

AsyncProcessUnsuccessfulError

If the job being waited for has failed or has been cancelled.

AsyncTimeoutError

If the resource did not resolve in time.
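
Illustrative example (the project id and feature names are placeholders):

import datarobot as dr

project = dr.Project.get('my-project-id')  # placeholder project id

# Create categorical copies of two numeric columns, prefixed with 'cat_'
new_features = project.batch_features_type_transform(
    parent_names=['store_id', 'zip_code'],  # placeholder feature names
    variable_type='categorical',            # see datarobot.enums.VARIABLE_TYPE_TRANSFORM
    prefix='cat_',
)
print([feature.name for feature in new_features])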

clone_project(new_project_name: Optional[str] = None, max_wait: int = 600) → datarobot.models.project.Project

Create a fresh (post-EDA1) copy of this project that is ready for setting targets and modeling options.

Parameters:
new_project_name : str, optional

The desired name of the new project. If omitted, the API will default to ‘Copy of <original project>’

max_wait : int, optional

Time in seconds after which project creation is considered unsuccessful

Returns:
datarobot.models.Project
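
Illustrative example (the project id and new project name are placeholders):

import datarobot as dr

project = dr.Project.get('my-project-id')  # placeholder project id
copy = project.clone_project(new_project_name='Copy for experimentation')
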
create_interaction_feature(name: str, features: List[str], separator: str, max_wait: int = 600) → datarobot.models.feature.InteractionFeature

Create a new interaction feature by combining two categorical ones.

New in version v2.21.

Parameters:
name : str

The name of final Interaction Feature

features : list(str)

List of two categorical feature names

separator : str

The character used to join the two data values, one of these ` + - / | & . _ , `

max_wait : int, optional

Time in seconds after which project creation is considered unsuccessful.

Returns:
datarobot.models.InteractionFeature

The data of the new Interaction feature

Raises:
ClientError

If the requested Interaction feature cannot be created. Possible reasons include:

  • one of the features either does not exist or is of an unsupported type
  • a feature with the requested name already exists
  • an invalid separator character was submitted.
AsyncFailureError

If any of the responses from the server are unexpected

AsyncProcessUnsuccessfulError

If the job being waited for has failed or has been cancelled

AsyncTimeoutError

If the resource did not resolve in time
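
Illustrative example (the project id and feature names are placeholders):

import datarobot as dr

project = dr.Project.get('my-project-id')  # placeholder project id

# Combine two categorical features into a single interaction feature
interaction = project.create_interaction_feature(
    name='make_model',
    features=['make', 'model'],  # placeholder categorical feature names
    separator='_',
)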

get_relationships_configuration() → datarobot.models.relationships_configuration.RelationshipsConfiguration

Get the relationships configuration for a given project

New in version v2.21.

Returns:
relationships_configuration: RelationshipsConfiguration

relationships configuration applied to project

download_feature_discovery_dataset(file_name: str, pred_dataset_id: Optional[str] = None) → None

Download the Feature Discovery training or prediction dataset.

Parameters:
file_name : str

File path where dataset will be saved.

pred_dataset_id : str, optional

ID of the prediction dataset

download_feature_discovery_recipe_sqls(file_name: str, model_id: Optional[str] = None, max_wait: int = 600) → None

Export and download Feature Discovery recipe SQL statements.

New in version v2.25.

Parameters:
file_name : str

File path where dataset will be saved.

model_id : str, optional

ID of the model to export SQL for. If specified, only the SQL to generate features used by that model will be exported. If not specified, SQL to generate all features will be exported.

max_wait : int, optional

Time in seconds after which export is considered unsuccessful.

Raises:
ClientError

If the requested SQL cannot be exported. A possible reason is that the feature is not available to the user.

AsyncFailureError

If any of the responses from the server are unexpected.

AsyncProcessUnsuccessfulError

If the job being waited for has failed or has been cancelled.

AsyncTimeoutError

If the resource did not resolve in time.
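
Illustrative example (the project id, model id, and file names are placeholders):

import datarobot as dr

project = dr.Project.get('my-project-id')  # placeholder Feature Discovery project

# Save the Feature Discovery training dataset locally
project.download_feature_discovery_dataset('feature_discovery_training.csv')

# Export the SQL used to generate only the features used by a specific model
project.download_feature_discovery_recipe_sqls(
    'feature_discovery_recipe.sql',
    model_id='my-model-id',  # placeholder; omit to export SQL for all features
)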

validate_external_time_series_baseline(catalog_version_id: str, target: str, datetime_partitioning: datarobot.helpers.partitioning_methods.DatetimePartitioning, max_wait: int = 600) → datarobot.models.external_baseline_validation.ExternalBaselineValidationInfo

Validate external baseline prediction catalog.

The forecast window settings and the validation and holdout durations specified in the datetime specification must be consistent with the project settings, as these parameters are used to check whether the specified catalog version id has been validated or not. See the external baseline predictions documentation for example usage.

Parameters:
catalog_version_id: str

Id of the catalog version for validating external baseline predictions.

target: str

The name of the target column.

datetime_partitioning: DatetimePartitioning object

Instance of the DatetimePartitioning defined in datarobot.helpers.partitioning_methods.

Attributes of the object used to check the validation are:

  • datetime_partition_column
  • forecast_window_start
  • forecast_window_end
  • holdout_start_date
  • holdout_end_date
  • backtests
  • multiseries_id_columns

If the above attributes are different from the project settings, the catalog version will not pass the validation check in the autopilot.

max_wait: int, optional

The maximum number of seconds to wait for the catalog version to be validated before raising an error.

Returns:
external_baseline_validation_info: ExternalBaselineValidationInfo

Validation result of the specified catalog version.

Raises:
AsyncTimeoutError

Raised if the catalog version validation took more time than specified by the max_wait parameter.

download_multicategorical_data_format_errors(file_name: str) → None

Download multicategorical data format errors to a CSV file. If any format errors were detected in potentially multicategorical features, the resulting file will contain at most 10 entries. The CSV file contains the feature name, the dataset index in which the error was detected, the row value, and the type of error detected. If there were no errors, or none of the features were potentially multicategorical, the CSV file will be empty, containing only the header.

Parameters:
file_name : str

File path where CSV file will be saved.

get_multiseries_names() → List[Optional[str]]

For a multiseries time series project, returns all distinct entries in the multiseries column. For a non-time series project, returns an empty list.

Returns:
multiseries_names: List[str]

List of all distinct entries in the multiseries column

restart_segment(segment: str)

Restart single segment in a segmented project.

New in version v2.28.

Segment restart is allowed only for segments that haven’t reached the modeling phase. Restarting permanently removes the previous project and triggers setup of a new one for the particular segment.

Parameters:
segment : str

Segment to restart

get_bias_mitigated_models(parent_model_id: Optional[str] = None, offset: Optional[int] = 0, limit: Optional[int] = 100) → List[Dict[str, Any]]

List the child models with bias mitigation applied

New in version v2.29.

Parameters:
parent_model_id : str, optional

Filter by parent models

offset : int, optional

Number of items to skip.

limit : int, optional

Number of items to return.

Returns:
models : list of dict
apply_bias_mitigation(bias_mitigation_parent_leaderboard_id: str, bias_mitigation_feature_name: str, bias_mitigation_technique: str, include_bias_mitigation_feature_as_predictor_variable: bool) → datarobot.models.modeljob.ModelJob

Apply bias mitigation to an existing model by training a version of that model but with bias mitigation applied. An error will be returned if the model does not support bias mitigation with the technique requested.

New in version v2.29.

Parameters:
bias_mitigation_parent_leaderboard_id : str

The leaderboard id of the model to apply bias mitigation to

bias_mitigation_feature_name : str

The name of the protected feature that will be used in a bias mitigation task to attempt to mitigate bias

bias_mitigation_technique : str, optional

One of datarobot.enums.BiasMitigationTechnique. Options: ‘preprocessingReweighing’, ‘postProcessingRejectionOptionBasedClassification’. The technique used to mitigate bias, which determines which bias mitigation task is inserted into blueprints.

include_bias_mitigation_feature_as_predictor_variable : bool

Whether we should also use the mitigation feature as an input to the modeler, just like any other categorical feature used for training, i.e. whether we want the model to “train on” this feature in addition to using it for bias mitigation.

Returns:
ModelJob

the job of the model with bias mitigation applied that was just submitted for training
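
Illustrative example (the project id, leaderboard id, and feature name are placeholders):

import datarobot as dr

project = dr.Project.get('my-project-id')  # placeholder project id

# Retrain an existing model with preprocessing reweighing applied
model_job = project.apply_bias_mitigation(
    bias_mitigation_parent_leaderboard_id='leaderboard-model-id',  # placeholder
    bias_mitigation_feature_name='gender',                         # placeholder protected feature
    bias_mitigation_technique='preprocessingReweighing',
    include_bias_mitigation_feature_as_predictor_variable=False,
)
mitigated_model = model_job.get_result_when_complete()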

request_bias_mitigation_feature_info(bias_mitigation_feature_name: str) → datarobot.models.model.BiasMitigationFeatureInfo

Request a compute job for bias mitigation feature info for a given feature, which will include:

  • whether there are any rare classes
  • whether there are any combinations of the target values and the feature values that never occur in the same row
  • whether the feature has a high number of missing values

Note that this feature check is dependent on the current target selected for the project.

New in version v2.29.

Parameters:
bias_mitigation_feature_name : str

The name of the protected feature that will be used in a bias mitigation task to attempt to mitigate bias

Returns:
BiasMitigationFeatureInfo

Bias mitigation feature info model for the requested feature

get_bias_mitigation_feature_info(bias_mitigation_feature_name: str) → datarobot.models.model.BiasMitigationFeatureInfo

Get the computed bias mitigation feature info for a given feature, which will include:

  • whether there are any rare classes
  • whether there are any combinations of the target values and the feature values that never occur in the same row
  • whether the feature has a high number of missing values

Note that this feature check is dependent on the current target selected for the project. If this info has not already been computed, this will raise a 404 error.

New in version v2.29.

Parameters:
bias_mitigation_feature_name : str

The name of the protected feature that will be used in a bias mitigation task to attempt to mitigate bias

Returns:
BiasMitigationFeatureInfo

Bias mitigation feature info model for the requested feature
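
Illustrative example (the project id and feature name are placeholders):

import datarobot as dr

project = dr.Project.get('my-project-id')  # placeholder project id

# Compute the bias mitigation feature info for a protected feature
info = project.request_bias_mitigation_feature_info('gender')  # placeholder feature name

# Later, retrieve the already-computed info without triggering a new computation
info = project.get_bias_mitigation_feature_info('gender')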

classmethod from_data(data: Union[Dict[str, Any], List[Dict[str, Any]]]) → T

Instantiate an object of this class using a dict.

Parameters:
data : dict

Correctly snake_cased keys and their values.

classmethod from_server_data(data: Union[Dict[str, Any], List[Dict[str, Any]]], keep_attrs: Optional[Iterable[str]] = None) → T

Instantiate an object of this class using the data directly from the server, meaning that the keys may have the wrong camel casing

Parameters:
data : dict

The directly translated dict of JSON from the server. No casing fixes have taken place

keep_attrs : iterable

List, set or tuple of the dotted namespace notations for attributes to keep within the object structure even if their values are None

open_in_browser() → None

Opens the class’ relevant web browser location. If a default browser is not available, the URL is logged.

Note: If text-mode browsers are used, the calling process will block until the user exits the browser.

set_datetime_partitioning(datetime_partition_spec: Optional[datarobot.helpers.partitioning_methods.DatetimePartitioningSpecification] = None, **kwargs) → datarobot.helpers.partitioning_methods.DatetimePartitioning

Set the datetime partitioning method for a time series project by passing in either a DatetimePartitioningSpecification instance or any individual attributes of that class. Updates self.partitioning_method if already set previously (does not replace it).

This is an alternative to passing a specification to Project.analyze_and_model via the partitioning_method parameter. To see the full partitioning based on the project dataset, use DatetimePartitioning.generate.

New in version v3.0.

Parameters:
datetime_partition_spec : DatetimePartitioningSpecification, optional

The customizable aspects of datetime partitioning for a time series project. An alternative to passing individual settings (attributes of the DatetimePartitioningSpecification class).

Returns:
DatetimePartitioning

Full partitioning including user-specified attributes as well as those determined by DataRobot based on the dataset.
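
Illustrative example (the project id and column name are placeholders):

import datarobot as dr

project = dr.Project.get('my-project-id')  # placeholder time series project

spec = dr.DatetimePartitioningSpecification(
    datetime_partition_column='date',  # placeholder date column
    use_time_series=True,
)
full_partitioning = project.set_datetime_partitioning(datetime_partition_spec=spec)

# Inspect the settings that will be used once analyze_and_model is called
project.list_datetime_partition_spec()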

list_datetime_partition_spec() → Optional[datarobot.helpers.partitioning_methods.DatetimePartitioningSpecification]

List datetime partitioning settings.

This method makes an API call to retrieve settings from the DB if project is in the modeling stage, i.e. if analyze_and_model (autopilot) has already been called.

If analyze_and_model has not yet been called, this method will instead simply print settings from project.partitioning_method.

New in version v3.0.

Returns:
DatetimePartitioningSpecification or None
class datarobot.helpers.eligibility_result.EligibilityResult(supported: bool, reason: str = '', context: str = '')

Represents whether a particular operation is supported

For instance, a function to check whether a set of models can be blended can return an EligibilityResult specifying whether or not blending is supported and why it may not be supported.

Attributes:
supported : bool

whether the operation this result represents is supported

reason : str

why the operation is or is not supported

context : str

what operation isn’t supported

Rating Table

class datarobot.models.RatingTable(id: str, rating_table_name: str, original_filename: str, project_id: str, parent_model_id: str, model_id: Optional[str] = None, model_job_id: Optional[str] = None, validation_job_id: Optional[str] = None, validation_error: Optional[str] = None)

Interface to modify and download rating tables.

Attributes:
id : str

The id of the rating table.

project_id : str

The id of the project this rating table belongs to.

rating_table_name : str

The name of the rating table.

original_filename : str

The name of the file used to create the rating table.

parent_model_id : str

The model id of the model the rating table was validated against.

model_id : str

The model id of the model that was created from the rating table. Can be None if a model has not been created from the rating table.

model_job_id : str

The id of the job to create a model from this rating table. Can be None if a model has not been created from the rating table.

validation_job_id : str

The id of the created job to validate the rating table. Can be None if the rating table has not been validated.

validation_error : str

Contains a description of any errors caused during validation.

classmethod get(project_id: str, rating_table_id: str) → datarobot.models.rating_table.RatingTable

Retrieve a single rating table

Parameters:
project_id : str

The ID of the project the rating table is associated with.

rating_table_id : str

The ID of the rating table

Returns:
rating_table : RatingTable

The queried instance

classmethod create(project_id: str, parent_model_id: str, filename: str, rating_table_name: str = 'Uploaded Rating Table') → Job

Uploads and validates a new rating table CSV

Parameters:
project_id : str

id of the project the rating table belongs to

parent_model_id : str

id of the model this rating table should be validated against

filename : str

The path of the CSV file containing the modified rating table.

rating_table_name : str, optional

A human friendly name for the new rating table. The string may be truncated and a suffix may be added to maintain unique names of all rating tables.

Returns:
job: Job

an instance of created async job

Raises:
InputNotUnderstoodError

Raised if filename isn’t one of supported types.

ClientError (400)

Raised if parent_model_id is invalid.

download(filepath: str) → None

Download a csv file containing the contents of this rating table

Parameters:
filepath : str

The path at which to save the rating table file.

rename(rating_table_name: str) → None

Renames a rating table to a different name.

Parameters:
rating_table_name : str

The new name to rename the rating table to.

create_model() → Job

Creates a new model from this rating table record. This rating table must not already be associated with a model and must be valid.

Returns:
job: Job

an instance of created async job

Raises:
ClientError (422)

Raised if creating model from a RatingTable that failed validation

JobAlreadyRequested

Raised if creating model from a RatingTable that is already associated with a RatingTableModel
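
Illustrative end-to-end sketch (the project id, file name, and rating table name are placeholders):

import datarobot as dr

project = dr.Project.get('my-project-id')  # placeholder project id

# Download an existing rating table, edit it offline, then upload the edited CSV
rating_table = project.get_rating_tables()[0]
rating_table.download('rating_table.csv')

job = dr.RatingTable.create(
    project.id,
    rating_table.parent_model_id,
    'rating_table.csv',  # the edited CSV
    rating_table_name='Edited rating table',
)
new_table = job.get_result_when_complete()

# Train a model from the validated rating table
model_job = new_table.create_model()
new_model = model_job.get_result_when_complete()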

ROC Curve

class datarobot.models.roc_curve.RocCurve(source: str, roc_points: List[EstimatedMetric], negative_class_predictions: List[float], positive_class_predictions: List[float], source_model_id: str)

ROC curve data for model.

Attributes:
source : str

ROC curve data source. Can be ‘validation’, ‘crossValidation’ or ‘holdout’.

roc_points : list of dict

List of precalculated metrics associated with thresholds for ROC curve.

negative_class_predictions : list of float

List of predictions from example for negative class

positive_class_predictions : list of float

List of predictions from example for positive class

source_model_id : str

ID of the model this ROC curve represents; in some cases, insights from the parent of a frozen model may be used

class datarobot.models.roc_curve.LabelwiseRocCurve(source: str, roc_points: List[EstimatedMetric], negative_class_predictions: List[float], positive_class_predictions: List[float], source_model_id: str, label: str, kolmogorov_smirnov_metric: float, auc: float)

Labelwise ROC curve data for one label and one source.

Attributes:
source : str

ROC curve data source. Can be ‘validation’, ‘crossValidation’ or ‘holdout’.

roc_points : list of dict

List of precalculated metrics associated with thresholds for ROC curve.

negative_class_predictions : list of float

List of predictions from example for negative class

positive_class_predictions : list of float

List of predictions from example for positive class

source_model_id : str

ID of the model this ROC curve represents; in some cases, insights from the parent of a frozen model may be used

label : str

Label name for this ROC curve

kolmogorov_smirnov_metric : float

Kolmogorov-Smirnov metric value for label

auc : float

AUC metric value for label

Ruleset

class datarobot.models.Ruleset(project_id: str, parent_model_id: str, ruleset_id: int, rule_count: int, score: float, model_id: Optional[str] = None)

Represents an approximation of a model with DataRobot Prime

Attributes:
id : str

the id of the ruleset

rule_count : int

the number of rules used to approximate the model

score : float

the validation score of the approximation

project_id : str

the project the approximation belongs to

parent_model_id : str

the model being approximated

model_id : str or None

the model using this ruleset (if it exists). Will be None if no such model has been trained.

request_model() → Job

Request training for a model using this ruleset

Training a model using a ruleset is a necessary prerequisite for being able to download the code for a ruleset.

Returns:
job: Job

the job fitting the new Prime model

Segmented Modeling

API Reference for entities used in Segmented Modeling. See dedicated User Guide for examples.

class datarobot.CombinedModel(id: Optional[str] = None, project_id: Optional[str] = None, segmentation_task_id: Optional[str] = None, is_active_combined_model: bool = False)

A model from a segmented project. A combination of the ordinary models in the child segment projects.

Attributes:
id : str

the id of the model

project_id : str

the id of the project the model belongs to

segmentation_task_id : str

the id of a segmentation task used in this model

is_active_combined_model : bool

flag indicating if this is the active combined model in segmented project

classmethod get(project_id: str, combined_model_id: str) → datarobot.models.model.CombinedModel

Retrieve combined model

Parameters:
project_id : str

The project’s id.

combined_model_id : str

Id of the combined model.

Returns:
CombinedModel

The queried combined model.

classmethod set_segment_champion(project_id: str, model_id: str, clone: bool = False) → str

Update a segment champion in a combined model by setting the model_id that belongs to the child project_id as the champion.

Parameters:
project_id : str

The project id for the child model that contains the model id.

model_id : str

Id of the model to mark as the champion

clone : bool

(New in version v2.29) optional, defaults to False. Defines whether the combined model should be cloned prior to setting the champion (if True, the champion will be set on the new combined model).

Returns:
combined_model_id : str

Id of the combined model that was updated

get_segments_info() → List[datarobot.models.segmentation.SegmentInfo]

Retrieve Combined Model segments info

Returns:
list[SegmentInfo]

List of segments

get_segments_as_dataframe(encoding: str = 'utf-8') → pandas.core.frame.DataFrame

Retrieve Combined Model segments as a DataFrame.

Parameters:
encoding : str, optional

A string representing the encoding to use in the output csv file. Defaults to ‘utf-8’.

Returns:
DataFrame

Combined model segments

get_segments_as_csv(filename: str, encoding: str = 'utf-8') → None

Save the Combined Model segments to a CSV file.

Parameters:
filename : str or file object

The path or file object to save the data to.

encoding : str, optional

A string representing the encoding to use in the output csv file. Defaults to ‘utf-8’.
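
Illustrative example (the project id and combined model id are placeholders):

import datarobot as dr

combined = dr.CombinedModel.get('parent-project-id', 'combined-model-id')

# Inspect the state of each segment
for segment in combined.get_segments_info():
    print(segment.segment, segment.project_stage, segment.autopilot_done)

# Or pull the same information as a pandas DataFrame
segments_df = combined.get_segments_as_dataframe()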

train(sample_pct: Optional[float] = None, featurelist_id: Optional[str] = None, scoring_type: Optional[str] = None, training_row_count: Optional[int] = None, monotonic_increasing_featurelist_id: Union[str, object, None] = <object object>, monotonic_decreasing_featurelist_id: Union[str, object, None] = <object object>) → NoReturn

Inherited from Model - CombinedModels cannot be retrained directly

train_datetime(featurelist_id: Optional[str] = None, training_row_count: Optional[int] = None, training_duration: Optional[str] = None, time_window_sample_pct: Optional[int] = None, monotonic_increasing_featurelist_id: Union[str, object, None] = <object object>, monotonic_decreasing_featurelist_id: Union[str, object, None] = <object object>, use_project_settings: bool = False, sampling_method: Optional[str] = None) → NoReturn

Inherited from Model - CombinedModels cannot be retrained directly

retrain(sample_pct: Optional[float] = None, featurelist_id: Optional[str] = None, training_row_count: Optional[int] = None, n_clusters: Optional[int] = None) → NoReturn

Inherited from Model - CombinedModels cannot be retrained directly

request_frozen_model(sample_pct: Optional[float] = None, training_row_count: Optional[int] = None) → NoReturn

Inherited from Model - CombinedModels cannot be retrained as frozen

request_frozen_datetime_model(training_row_count: Optional[int] = None, training_duration: Optional[str] = None, training_start_date: Optional[datetime.datetime] = None, training_end_date: Optional[datetime.datetime] = None, time_window_sample_pct: Optional[int] = None, sampling_method: Optional[str] = None) → NoReturn

Inherited from Model - CombinedModels cannot be retrained as frozen

cross_validate() → NoReturn

Inherited from Model - CombinedModels cannot request cross validation

class datarobot.SegmentationTask(id: str, project_id: str, name: str, type: str, created: str, segments_count: int, segments: List[str], metadata: Dict[str, bool], data: SegmentationData)

A Segmentation Task is used for segmenting an existing project into multiple child projects. Each child project (or segment) will be a separate autopilot run. Currently only user defined segmentation is supported.

Example for creating a new SegmentationTask for Time Series segmentation with a user defined id column:

from datarobot import SegmentationTask

# Create the SegmentationTask
segmentation_task_results = SegmentationTask.create(
    project_id=project.id,
    target=target,
    use_time_series=True,
    datetime_partition_column=datetime_partition_column,
    multiseries_id_columns=[multiseries_id_column],
    user_defined_segment_id_columns=[user_defined_segment_id_column]
)

# Retrieve the completed SegmentationTask object from the job results
segmentation_task = segmentation_task_results['completedJobs'][0]
Attributes:
id : ObjectId

The id of the segmentation task.

project_id : ObjectId

The associated id of the parent project.

type : str

What type of job the segmentation task is associated with, e.g. auto_ml or auto_ts.

created : datetime

The date this segmentation task was created.

segments_count : int

The number of segments the segmentation task generated.

segments : list of strings

The segment names that the segmentation task generated.

metadata : dict

List of features that help to identify the parameters used by the segmentation task.

data : dict

Optional parameters that are associated with enabled metadata for the segmentation task.

classmethod from_data(data: ServerDataDictType) → SegmentationTask

Instantiate an object of this class using a dict.

Parameters:
data : dict

Correctly snake_cased keys and their values.

collect_payload() → Dict[str, str]

Convert the record to a dictionary

classmethod create(project_id: str, target: str, use_time_series: bool = False, datetime_partition_column: Optional[str] = None, multiseries_id_columns: Optional[List[str]] = None, user_defined_segment_id_columns: Optional[List[str]] = None, max_wait: int = 600) → SegmentationTaskCreatedResponse

Creates segmentation tasks for the project based on the defined parameters.

Parameters:
project_id : str

The associated id of the parent project.

target : str

The column that represents the target in the dataset.

use_time_series : bool

Whether AutoTS or AutoML segmentations should be generated.

datetime_partition_column : str or null

Required for Time Series. The name of the column whose values as dates are used to assign a row to a particular partition.

multiseries_id_columns : list of str or null

Required for Time Series. A list of the names of multiseries id columns to define series within the training data. Currently only one multiseries id column is supported.

user_defined_segment_id_columns : list of str or null

Required when using a column for segmentation. A list of the segment id columns to use to define what columns are used to manually segment data. Currently only one user defined segment id column is supported.

max_wait : integer

The number of seconds to wait

Returns:
segmentation_tasks : dict

Dictionary containing the numberOfJobs, completedJobs, and failedJobs. completedJobs is a list of SegmentationTask objects, while failedJobs is a list of dictionaries indicating problems with submitted tasks.

classmethod list(project_id: str) → List[datarobot.models.segmentation.SegmentationTask]

List all of the segmentation tasks that have been created for a specific project_id.

Parameters:
project_id : str

The id of the parent project

Returns:
segmentation_tasks : list of SegmentationTask

List of instances with initialized data.

classmethod get(project_id: str, segmentation_task_id: str) → datarobot.models.segmentation.SegmentationTask

Retrieve information for a single segmentation task associated with a project_id.

Parameters:
project_id : str

The id of the parent project

segmentation_task_id : str

The id of the segmentation task

Returns:
segmentation_task : SegmentationTask

Instance with initialized data.
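
Illustrative example (the project id is a placeholder):

import datarobot as dr

# List every segmentation task created for a project and re-fetch one of them
tasks = dr.SegmentationTask.list('my-project-id')
task = dr.SegmentationTask.get('my-project-id', tasks[0].id)
print(task.segments_count, task.segments)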

class datarobot.SegmentInfo(project_id: str, segment: str, project_stage: str, project_status_error: str, autopilot_done: bool, model_count: Optional[int] = None, model_id: Optional[str] = None)

A SegmentInfo is an object containing information about the combined model segments

Attributes:
project_id : str

The associated id of the child project.

segment : str

the name of the segment

project_stage : str

A description of the current stage of the project

project_status_error : str

Project status error message.

autopilot_done : bool

Is autopilot done for the project.

model_count : int

Count of trained models in project.

model_id : str

ID of segment champion model.

classmethod list(project_id: str, model_id: str) → List[datarobot.models.segmentation.SegmentInfo]

List all of the segments that have been created for a specific project_id.

Parameters:
project_id : str

The id of the parent project

Returns:
segments : list of datarobot.models.segmentation.SegmentInfo

List of instances with initialized data.

SHAP

class datarobot.models.ShapImpact(count: int, shap_impacts: List[ShapImpactType], row_count: Optional[int] = None)

Represents SHAP impact score for a feature in a model.

New in version v2.21.

Notes

SHAP impact score for a feature has the following structure:

  • feature_name : (str) the feature name in dataset
  • impact_normalized : (float) normalized impact score value (largest value is 1)
  • impact_unnormalized : (float) raw impact score value
Attributes:
count : int

the number of SHAP impact objects returned

row_count: int or None

the sample size (specified in rows) to use for SHAP impact computation

shap_impacts : list

a list which contains SHAP impact scores for top 1000 features used by a model

classmethod create(project_id: str, model_id: str, row_count: Optional[int] = None) → Job

Create SHAP impact for the specified model.

Parameters:
project_id : str

id of the project the model belongs to

model_id : str

id of the model to calculate shap impact for

row_count : int

the sample size (specified in rows) to use for SHAP impact computation

Returns:
job : Job

an instance of created async job

classmethod get(project_id: str, model_id: str) → datarobot.models.shap_impact.ShapImpact

Retrieve SHAP impact scores for features in a model.

Parameters:
project_id : str

id of the project the model belongs to

model_id : str

id of the model the SHAP impact is for

Returns:
shap_impact : ShapImpact

The queried instance.

Raises:
ClientError (404)

If the project or model does not exist or the SHAP impact has not been computed.
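
Illustrative sketch (the project id and model id are placeholders; the loop assumes the shap_impacts entries are dicts with the keys listed in the Notes above, and that the returned Job exposes wait_for_completion):

import datarobot as dr

# Request SHAP impact for a model, wait for the job, then fetch the scores
job = dr.ShapImpact.create('my-project-id', 'my-model-id')
job.wait_for_completion()

shap_impact = dr.ShapImpact.get('my-project-id', 'my-model-id')
for impact in shap_impact.shap_impacts:
    print(impact['feature_name'], impact['impact_normalized'])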

SharingAccess

class datarobot.SharingAccess(username: str, role: str, can_share: Optional[bool] = None, can_use_data: Optional[bool] = None, user_id: Optional[str] = None)

Represents metadata about whom an entity (e.g. a data store) has been shared with

New in version v2.14.

Currently DataStores, DataSources, Datasets, Projects (new in version v2.15) and CalendarFiles (new in version 2.15) can be shared.

This class can represent either access that has already been granted, or be used to grant access to additional users.

Attributes:
username : str

a particular user

role : str or None

if a string, represents a particular level of access and should be one of datarobot.enums.SHARING_ROLE. For more information on the specific access levels, see the sharing documentation. If None, can be passed to a share function to revoke access for a specific user.

can_share : bool or None

if a bool, indicates whether this user is permitted to further share. When False, the user has access to the entity, but can only revoke their own access and cannot modify any user’s access role. When True, the user can share with any other user at an access role up to their own. May be None if the SharingAccess was not retrieved from the DataRobot server but is intended to be passed into a share function; this will be equivalent to passing True.

can_use_data : bool or None

if a bool, indicates whether this user should be able to view, download and process data (use it to create projects, predictions, etc). For OWNER, can_use_data is always True. If role is empty, can_use_data is ignored.

user_id : str or None

the id of the user

Training Predictions

class datarobot.models.training_predictions.TrainingPredictionsIterator(client, path, limit=None)

Lazily fetches training predictions from DataRobot API in chunks of specified size and then iterates rows from responses as named tuples. Each row represents a training prediction computed for a dataset’s row. Each named tuple has the following structure:

Notes

Each PredictionValue dict contains these keys:

label
describes what this model output corresponds to. For regression projects, it is the name of the target feature. For classification and multiclass projects, it is a label from the target feature.
value
the output of the prediction. For regression projects, it is the predicted value of the target. For classification and multiclass projects, it is the predicted probability that the row belongs to the class identified by the label.

Each PredictionExplanations dictionary contains these keys:

label : string
describes what output was driven by this prediction explanation. For regression projects, it is the name of the target feature. For classification projects, it is the class whose probability increasing would correspond to a positive strength of this prediction explanation.
feature : string
the name of the feature contributing to the prediction
feature_value : object
the value the feature took on for this row. The type corresponds to the feature (boolean, integer, number, string)
strength : float
algorithm-specific explanation value attributed to feature in this row

ShapMetadata dictionary contains these keys:

shap_remaining_total : float
The total of SHAP values for features beyond the max_explanations. This can be identically 0 in all rows, if max_explanations is greater than the number of features and thus all features are returned.
shap_base_value : float
the model’s average prediction over the training data. SHAP values are deviations from the base value.
warnings : dict or None
SHAP values calculation warnings (e.g. additivity check failures in XGBoost models). Schema described as ShapWarnings.

ShapWarnings dictionary contains these keys:

mismatch_row_count : int
the count of rows for which additivity check failed
max_normalized_mismatch : float
the maximal relative normalized mismatch value

Examples

import datarobot as dr

# Fetch existing training predictions by their id
training_predictions = dr.TrainingPredictions.get(project_id, prediction_id)

# Iterate over predictions
for row in training_predictions.iterate_rows():
    print(row.row_id, row.prediction)
Attributes:
row_id : int

id of the record in original dataset for which training prediction is calculated

partition_id : str or float

id of the data partition that the row belongs to. “0.0” corresponds to the validation partition or backtest 1.

prediction : float

the model’s prediction for this data row

prediction_values : list of dictionaries

an array of dictionaries with a schema described as PredictionValue

timestamp : str or None

(New in version v2.11) an ISO string representing the time of the prediction in time series project; may be None for non-time series projects

forecast_point : str or None

(New in version v2.11) an ISO string representing the point in time used as a basis to generate the predictions in time series project; may be None for non-time series projects

forecast_distance : str or None

(New in version v2.11) how many time steps are between the forecast point and the timestamp in time series project; None for non-time series projects

series_id : str or None

(New in version v2.11) the id of the series in a multiseries project; may be NaN for single series projects; None for non-time series projects

prediction_explanations : list of dict or None

(New in version v2.21) The prediction explanations for each feature. The total elements in the array are bounded by max_explanations and feature count. Only present if prediction explanations were requested. Schema described as PredictionExplanations.

shap_metadata : dict or None

(New in version v2.21) The additional information necessary to understand SHAP based prediction explanations. Only present if explanation_algorithm was set to datarobot.enums.EXPLANATIONS_ALGORITHM.SHAP in the compute request. Schema described as ShapMetadata.

class datarobot.models.training_predictions.TrainingPredictions(project_id, prediction_id, model_id=None, data_subset=None, explanation_algorithm=None, max_explanations=None, shap_warnings=None)

Represents training predictions metadata and provides access to prediction results.

Notes

Each element in shap_warnings has the following schema:

partition_name : str
the partition used for the prediction record.
value : object
the warnings related to this partition.

The objects in value are:

mismatch_row_count : int
the count of rows for which additivity check failed.
max_normalized_mismatch : float
the maximal relative normalized mismatch value.

Examples

Compute training predictions for a model on the whole dataset

import datarobot as dr

# Request calculation of training predictions
training_predictions_job = model.request_training_predictions(dr.enums.DATA_SUBSET.ALL)
training_predictions = training_predictions_job.get_result_when_complete()
print('Training predictions {} are ready'.format(training_predictions.prediction_id))

# Iterate over actual predictions
for row in training_predictions.iterate_rows():
    print(row.row_id, row.partition_id, row.prediction)

List all training predictions for a project

import datarobot as dr

# Fetch all training predictions for a project
all_training_predictions = dr.TrainingPredictions.list(project_id)

# Inspect all calculated training predictions
for training_predictions in all_training_predictions:
    print(
        'Prediction {} is made for data subset "{}"'.format(
            training_predictions.prediction_id,
            training_predictions.data_subset,
        )
    )

Retrieve training predictions by id

import datarobot as dr

# Getting training predictions by id
training_predictions = dr.TrainingPredictions.get(project_id, prediction_id)

# Iterate over actual predictions
for row in training_predictions.iterate_rows():
    print(row.row_id, row.partition_id, row.prediction)
Attributes:
project_id : str

id of the project the model belongs to

model_id : str

id of the model

prediction_id : str

id of generated predictions

data_subset : datarobot.enums.DATA_SUBSET

data set definition used to build predictions. Choices are:

  • datarobot.enums.DATA_SUBSET.ALL
    for all data available. Not valid for models in datetime partitioned projects.
  • datarobot.enums.DATA_SUBSET.VALIDATION_AND_HOLDOUT
    for all data except training set. Not valid for models in datetime partitioned projects.
  • datarobot.enums.DATA_SUBSET.HOLDOUT
    for holdout data set only.
  • datarobot.enums.DATA_SUBSET.ALL_BACKTESTS
    for downloading the predictions for all backtest validation folds. Requires the model to have successfully scored all backtests. Datetime partitioned projects only.
explanation_algorithm : datarobot.enums.EXPLANATIONS_ALGORITHM

(New in version v2.21) Optional. If set to shap, the response will include prediction explanations based on the SHAP explainer (SHapley Additive exPlanations). Defaults to null (no prediction explanations).

max_explanations : int

(New in version v2.21) The number of top contributors that are included in prediction explanations. Max 100. Defaults to null for datasets narrower than 100 columns, defaults to 100 for datasets wider than 100 columns.

shap_warnings : list

(New in version v2.21) Will be present if explanation_algorithm was set to datarobot.enums.EXPLANATIONS_ALGORITHM.SHAP and there were additivity failures during SHAP values calculation.

classmethod list(project_id)

Fetch all the computed training predictions for a project.

Parameters:
project_id : str

id of the project

Returns:
A list of TrainingPredictions objects
classmethod get(project_id, prediction_id)

Retrieve training predictions on a specified data set.

Parameters:
project_id : str

id of the project the model belongs to

prediction_id : str

id of the prediction set

Returns:
A TrainingPredictions object ready to operate on the specified predictions
iterate_rows(batch_size=None)

Retrieve training prediction rows as an iterator.

Parameters:
batch_size : int, optional

maximum number of training prediction rows to fetch per request

Returns:
iterator : TrainingPredictionsIterator

an iterator which yields named tuples representing training prediction rows

get_all_as_dataframe(class_prefix='class_', serializer='json')

Retrieve all training prediction rows and return them as a pandas.DataFrame.

Returned dataframe has the following structure:
  • row_id : row id from the original dataset
  • prediction : the model’s prediction for this row
  • class_<label> : the probability that the target is this class (only appears for classification and multiclass projects)
  • timestamp : the time of the prediction (only appears for out of time validation or time series projects)
  • forecast_point : the point in time used as a basis to generate the predictions (only appears for time series projects)
  • forecast_distance : how many time steps are between timestamp and forecast_point (only appears for time series projects)
  • series_id : the id of the series in a multiseries project or None for a single series project (only appears for time series projects)
Parameters:
class_prefix : str, optional

The prefix to prepend to class labels in the final dataframe. Default is class_ (e.g., apple -> class_apple)

serializer : str, optional

Serializer to use for the download. Options: json (default) or csv.

Returns:
dataframe: pandas.DataFrame
download_to_csv(filename, encoding='utf-8', serializer='json')

Save training prediction rows into CSV file.

Parameters:
filename : str or file object

path or file object to save training prediction rows

encoding : string, optional

A string representing the encoding to use in the output file, defaults to ‘utf-8’

serializer : str, optional

Serializer to use for the download. Options: json (default) or csv.
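
Illustrative example (assumes project_id and prediction_id already identify computed training predictions; the file name is a placeholder):

import datarobot as dr

# Fetch previously computed training predictions and save them to disk
training_predictions = dr.TrainingPredictions.get(project_id, prediction_id)
training_predictions.download_to_csv('training_predictions.csv', serializer='csv')

# Or work with them in memory as a pandas DataFrame
df = training_predictions.get_all_as_dataframe()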

User Blueprints

class datarobot.UserBlueprint(blender: bool, blueprint_id: str, diagram: str, features: List[str], features_text: str, icons: List[int], insights: str, model_type: str, supported_target_types, user_blueprint_id: str, user_id: str, is_time_series: bool = False, reference_model: bool = False, shap_support: bool = False, supports_gpu: bool = False, blueprint=None, custom_task_version_metadata=None, hex_column_name_lookup: Optional[List[UserBlueprintsHexColumnNameLookupEntryDict]] = None, project_id: Optional[str] = None, vertex_context: Optional[List[VertexContextItem]] = None, blueprint_context: Optional[VertexContextItemMessages] = None, **kwargs)

A representation of a blueprint which may be modified by the user, saved to a user’s AI Catalog, trained on projects, and shared with others.

It is recommended to install the python library called datarobot_bp_workshop, available via pip, for the best experience when building blueprints.

Please refer to http://blueprint-workshop.datarobot.com for tutorials, examples, and other documentation.

Parameters:
blender: bool

Whether the blueprint is a blender.

blueprint_id: string

The deterministic id of the blueprint, based on its content.

custom_task_version_metadata: list(list(string)), Optional

An association of custom entity ids and task ids.

diagram: string

The diagram used by the UI to display the blueprint.

features: list(string)

A list of the names of tasks used in the blueprint.

features_text: string

A description of the blueprint via the names of tasks used.

hex_column_name_lookup: list(UserBlueprintsHexColumnNameLookupEntry), Optional

A lookup between hex values and data column names used in the blueprint.

icons: list(int)

The icon(s) associated with the blueprint.

insights: string

An indication of the insights generated by the blueprint.

is_time_series: bool (Default=False)

Whether the blueprint contains time-series tasks.

model_type: string

The generated or provided title of the blueprint.

project_id: string, Optional

The id of the project the blueprint was originally created with, if applicable.

reference_model: bool (Default=False)

Whether the blueprint is a reference model.

shap_support: bool (Default=False)

Whether the blueprint supports shapley additive explanations.

supported_target_types: list(enum(‘binary’, ‘multiclass’, ‘multilabel’, ‘nonnegative’,
‘regression’, ‘unsupervised’, ‘unsupervisedclustering’))

The list of supported targets of the current blueprint.

supports_gpu: bool (Default=False)

Whether the blueprint supports execution on the GPU.

user_blueprint_id: string

The unique id associated with the user blueprint.

user_id: string

The id of the user who owns the blueprint.

blueprint: list(dict) or list(UserBlueprintTask), Optional

The representation of a directed acyclic graph defining a pipeline of data through tasks and a final estimator.

vertex_context: list(VertexContextItem), Optional

Info about, warnings about, and errors with a specific vertex in the blueprint.

blueprint_context: VertexContextItemMessages

Warnings and errors which may describe or summarize warnings or errors in the blueprint’s vertices

classmethod list(limit: int = 100, offset: int = 0, project_id: Optional[str] = None) → List[datarobot.models.user_blueprints.models.UserBlueprint]

Fetch a list of the user blueprints the current user created

Parameters:
limit: int (Default=100)

The max number of results to return.

offset: int (Default=0)

The number of results to skip (for pagination).

project_id: string, Optional

The id of the project, used to filter for original project_id.

Returns:
list(UserBlueprint)
Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

classmethod get(user_blueprint_id: str, project_id: Optional[str] = None) → datarobot.models.user_blueprints.models.UserBlueprint

Retrieve a user blueprint

Parameters:
user_blueprint_id: string

Used to identify a specific user-owned blueprint.

project_id: string (optional, default is None)

String representation of ObjectId for a given project. Used to validate selected columns in the user blueprint.

Returns:
UserBlueprint
Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

classmethod create(blueprint, model_type: Optional[str] = None, project_id: Optional[str] = None, save_to_catalog: bool = True) → datarobot.models.user_blueprints.models.UserBlueprint

Create a user blueprint

Parameters:
blueprint: list(dict) or list(UserBlueprintTask)

A list of tasks in the form of dictionaries which define a blueprint.

model_type: string, Optional

The title to give to the blueprint.

project_id: string, Optional

The project associated with the blueprint. Necessary in the event of project specific tasks, such as column selection tasks.

save_to_catalog: bool, (Default=True)

Whether the blueprint being created should be saved to the catalog.

Returns:
UserBlueprint
Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

classmethod create_from_custom_task_version_id(custom_task_version_id: str, save_to_catalog: bool = True, description: Optional[str] = None) → datarobot.models.user_blueprints.models.UserBlueprint

Create a user blueprint with a single custom task version

Parameters:
custom_task_version_id: string

Id of custom task version from which the user blueprint is created

save_to_catalog: bool, (Default=True)

Whether the blueprint being created should be saved to the catalog

description: string (Default=None)

The description for the user blueprint that will be created from the custom task version.

Returns:
UserBlueprint
Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

classmethod clone_project_blueprint(blueprint_id: str, project_id: str, model_type: Optional[str] = None, save_to_catalog: bool = True) → datarobot.models.user_blueprints.models.UserBlueprint

Clone a blueprint from a project.

Parameters:
blueprint_id: string

The id associated with the blueprint to create the user blueprint from.

model_type: string, Optional

The title to give to the blueprint.

project_id: string

The id of the project which the blueprint to copy comes from.

save_to_catalog: bool, (Default=True)

Whether the blueprint being created should be saved to the catalog.

Returns:
UserBlueprint
Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

classmethod clone_user_blueprint(user_blueprint_id: str, model_type: Optional[str] = None, project_id: Optional[str] = None, save_to_catalog: bool = True) → datarobot.models.user_blueprints.models.UserBlueprint

Clone a user blueprint.

Parameters:
model_type: string, Optional

The title to give to the blueprint.

project_id: string, Optional

String representation of ObjectId for a given project. Used to validate selected columns in the user blueprint.

user_blueprint_id: string

The id of the existing user blueprint to copy.

save_to_catalog: bool, (Default=True)

Whether the blueprint being created should be saved to the catalog.

Returns:
UserBlueprint
Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

classmethod update(blueprint, user_blueprint_id: str, model_type: Optional[str] = None, project_id: Optional[str] = None, include_project_id_if_none: bool = False) → datarobot.models.user_blueprints.models.UserBlueprint

Update a user blueprint

Parameters:
blueprint: list(dict) or list(UserBlueprintTask)

A list of tasks in the form of dictionaries which define a blueprint. If None, will not be passed.

model_type: string, Optional

The title to give to the blueprint. If None, will not be passed.

project_id: string, Optional

The project associated with the blueprint. Necessary in the event of project specific tasks, such as column selection tasks. If None, will not be passed. To explicitly pass None, pass True to include_project_id_if_none (useful if unlinking a blueprint from a project)

user_blueprint_id: string

Used to identify a specific user-owned blueprint.

include_project_id_if_none: bool (Default=False)

Allows project_id to be passed as None, instead of ignored. If set to False, will not pass project_id in the API request if it is set to None. If True, the project id will be passed even if it is set to None.

Returns:
UserBlueprint
Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

classmethod delete(user_blueprint_id: str) → requests.models.Response

Delete a user blueprint, specified by the userBlueprintId.

Parameters:
user_blueprint_id: string

Used to identify a specific user-owned blueprint.

Returns:
requests.models.Response
Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

classmethod get_input_types() → datarobot.models.user_blueprints.models.UserBlueprintAvailableInput

Retrieve the input types which can be used with User Blueprints.

Returns:
UserBlueprintAvailableInput
Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

classmethod add_to_project(project_id: str, user_blueprint_ids: Union[str, List[str]]) → datarobot.models.user_blueprints.models.UserBlueprintAddToProjectMenu

Add a list of user blueprints, specified by id, to a project’s repository (specified by project id).

Parameters:
project_id: string

The projectId of the project for the repository to add the specified user blueprints to.

user_blueprint_ids: list(string) or string

The ids of the user blueprints to add to the specified project’s repository.

Returns:
UserBlueprintAddToProjectMenu
Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status
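
Illustrative example (the blueprint id and project ids are placeholders):

import datarobot as dr

# Copy a leaderboard blueprint into the AI Catalog as a user blueprint,
# then make it available in another project's repository
user_bp = dr.UserBlueprint.clone_project_blueprint(
    blueprint_id='blueprint-id',
    project_id='source-project-id',
)
dr.UserBlueprint.add_to_project(
    project_id='target-project-id',
    user_blueprint_ids=[user_bp.user_blueprint_id],
)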

classmethod get_available_tasks(project_id: Optional[str] = None, user_blueprint_id: Optional[str] = None) → datarobot.models.user_blueprints.models.UserBlueprintAvailableTasks

Retrieve the available tasks, organized into categories, which can be used to create or modify User Blueprints.

Parameters:
project_id: string, Optional
user_blueprint_id: string, Optional
Returns:
UserBlueprintAvailableTasks
Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

classmethod validate_task_parameters(output_method: str, task_code: str, task_parameters, project_id: Optional[str] = None) → datarobot.models.user_blueprints.models.UserBlueprintValidateTaskParameters

Validate that each value assigned to the specified task parameters is valid.

Parameters:
output_method: enum(‘P’, ‘Pm’, ‘S’, ‘Sm’, ‘T’, ‘TS’)

The method representing how the task will output data.

task_code: string

The task code identifying the task whose parameter values are to be validated.

task_parameters: list(UserBlueprintTaskParameterValidationRequestParamItem)

A list of task parameters and proposed values to be validated.

project_id: string (optional, default is None)

The projectId representing the project where this user blueprint is edited.

Returns:
UserBlueprintValidateTaskParameters
Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status

classmethod list_shared_roles(user_blueprint_id: str, limit: int = 100, offset: int = 0, id: Optional[str] = None, name: Optional[str] = None, share_recipient_type: Optional[str] = None) → List[datarobot.models.user_blueprints.models.UserBlueprintSharedRolesResponseValidator]

Get a list of users, groups, and organizations that have access to this user blueprint.

Parameters:
id: str, Optional

Only return the access control information for an organization, group, or user with this ID.

limit: int (Default=100)

At most this many results are returned.

name: string, Optional

Only return the access control information for an organization, group, or user with this name.

offset: int (Default=0)

This many results will be skipped.

share_recipient_type: enum(‘user’, ‘group’, ‘organization’), Optional

Describes the recipient type, either user, group, or organization.

user_blueprint_id: str

Used to identify a specific user-owned blueprint.

Returns:
list(UserBlueprintSharedRolesResponseValidator)
Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status
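
For example, paging through the users who have access to a blueprint, 20 entries at a time (the blueprint ID is a hypothetical placeholder; a configured datarobot client is assumed):

    from datarobot.models.user_blueprints.models import UserBlueprint

    roles = UserBlueprint.list_shared_roles(
        user_blueprint_id="5f3d0c2e1c075a0e9b5a1234",  # hypothetical ID
        share_recipient_type="user",
        limit=20,
        offset=0,
    )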

classmethod validate_blueprint(blueprint, project_id: Optional[str] = None) → List[datarobot.models.user_blueprints.models.VertexContextItem]

Validate a user blueprint and return information about the inputs expected and outputs provided by each task.

Parameters:
blueprint: list(dict) or list(UserBlueprintTask)

The representation of a directed acyclic graph defining a pipeline of data through tasks and a final estimator.

project_id: string (optional, default is None)

The projectId representing the project where this user blueprint is edited.

Returns:
list(VertexContextItem)
Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status
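
A minimal sketch of validating an existing blueprint's task graph. It assumes the task list is exposed on the retrieved object as .blueprint and that UserBlueprint.get is available; the IDs are hypothetical placeholders:

    from datarobot.models.user_blueprints.models import UserBlueprint

    user_blueprint = UserBlueprint.get(user_blueprint_id="5f3d0c2e1c075a0e9b5a1234")
    vertex_context = UserBlueprint.validate_blueprint(
        blueprint=user_blueprint.blueprint,
        project_id="5f3d0c2e1c075a0e9b5a0000",
    )
    for item in vertex_context:
        # Each VertexContextItem describes the inputs expected and the outputs
        # provided by one task in the pipeline.
        print(item)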

classmethod update_shared_roles(user_blueprint_id: str, roles: List[Union[datarobot.models.user_blueprints.models.GrantAccessControlWithUsernameValidator, datarobot.models.user_blueprints.models.GrantAccessControlWithIdValidator]]) → requests.models.Response

Share a user blueprint with a user, group, or organization

Parameters:
user_blueprint_id: str

Used to identify a specific user-owned blueprint.

roles: list(or(GrantAccessControlWithUsernameValidator, GrantAccessControlWithIdValidator))

Array of GrantAccessControl objects, up to a maximum of 100 objects.

Returns:
requests.models.Response
Raises:
datarobot.errors.ClientError

if the server responded with 4xx status

datarobot.errors.ServerError

if the server responded with 5xx status
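
A sketch of granting another user access to a blueprint. The role name ("CONSUMER") and the validator's keyword arguments shown here are assumptions; consult the GrantAccessControlWithUsernameValidator reference for the exact fields. The blueprint ID and username are hypothetical placeholders:

    from datarobot.models.user_blueprints.models import (
        GrantAccessControlWithUsernameValidator,
        UserBlueprint,
    )

    # Share the blueprint with one user; up to 100 role grants may be sent at once.
    UserBlueprint.update_shared_roles(
        user_blueprint_id="5f3d0c2e1c075a0e9b5a1234",  # hypothetical ID
        roles=[
            GrantAccessControlWithUsernameValidator(
                username="colleague@example.com",  # hypothetical username
                role="CONSUMER",                   # assumed role name
            )
        ],
    )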

classmethod search_catalog(search: Optional[str] = None, tag: Optional[str] = None, limit: int = 100, offset: int = 0, owner_user_id: Optional[str] = None, owner_username: Optional[str] = None, order_by: str = '-created') → datarobot.models.user_blueprints.models.UserBlueprintCatalogSearch

Fetch a list of the user blueprint catalog entries the current user has access to based on an optional search term, tags, owner user info, or sort order.

Parameters:
search: string, Optional.

A value to search for in the catalog entry’s name, description, tags, column names, categories, and latest error. The search is case insensitive. If no value is provided for this parameter, or if the empty string is used, or if the string contains only whitespace, no filtering will be done. Partial matching is performed on the name and description fields, while all other fields only match if the search term matches the whole value exactly.

tag: string, Optional.

If provided, the results will be filtered to include only items with the specified tag.

limit: int, Optional (default: 100).

At most this many results are returned. To specify no limit, use 0. The default may change and a maximum limit may be imposed without notice.

offset: int, Optional (default: 0).

This many results will be skipped.

owner_user_id: string, Optional.

Filter results to those owned by one or more owners identified by UID.

owner_username: string, Optional.

Filter results to those owned by one or more owners identified by username.

order_by: string, Optional. Defaults to ‘-created’.

Sort order applied to the catalog list. Valid options are “catalogName”, “originalName”, “description”, “created”, and “relevance”. For all options other than relevance, you may prefix the attribute name with a dash to sort in descending order, e.g. order_by='-catalogName'.
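
For example, a minimal sketch of searching the catalog for user blueprint entries mentioning "churn", newest first (a configured datarobot client is assumed):

    from datarobot.models.user_blueprints.models import UserBlueprint

    results = UserBlueprint.search_catalog(
        search="churn",
        order_by="-created",
        limit=20,
    )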

VisualAI

class datarobot.models.visualai.Image(**kwargs)

An image stored in a project’s dataset.

Attributes:
id: str

Image ID for this image.

image_type: str

Image media type. Accessing this may require a server request and an associated delay in returning.

image_bytes: [octet]

Raw octets of this image. Accessing this may require a server request and an associated delay in returning.

height: int

Height of the image in pixels (72 pixels per inch).

width: int

Width of the image in pixels (72 pixels per inch).

classmethod get(project_id, image_id)

Get a single image object from project.

Parameters:
project_id: str

Project that contains the images.

image_id: str

ID of image to load from the project.
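
A minimal sketch of fetching a single image and writing its raw bytes to disk (the IDs are hypothetical placeholders; a configured datarobot client is assumed):

    from datarobot.models.visualai import Image

    image = Image.get(
        project_id="5f3d0c2e1c075a0e9b5a0000",
        image_id="5f3d0c2e1c075a0e9b5a9999",
    )
    print(image.width, image.height, image.image_type)

    # Accessing image_bytes may trigger an extra server request.
    with open("image.bin", "wb") as f:
        f.write(bytes(image.image_bytes))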

class datarobot.models.visualai.SampleImage(**kwargs)

A sample image in a project’s dataset.

If Project.stage is datarobot.enums.PROJECT_STAGE.EDA2, then the target_* attributes of this class will have values; otherwise they will all be None.

Attributes:
image: Image

Image object.

target_value: str

Value associated with the feature_name.

classmethod list(project_id, feature_name, target_value=None, offset=None, limit=None)

Get sample images from a project.

Parameters:
project_id: str

Project that contains the images.

feature_name: str

Name of feature column that contains images.

target_value: str

Target value to filter images.

offset: int

Number of images to be skipped.

limit: int

Number of images to be returned.
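
For example, fetching the first ten sample images for an image column (the project ID and the column name "image" are hypothetical placeholders; a configured datarobot client is assumed):

    from datarobot.models.visualai import SampleImage

    samples = SampleImage.list(
        project_id="5f3d0c2e1c075a0e9b5a0000",
        feature_name="image",
        limit=10,
    )
    for sample in samples:
        # target_value is populated only once the project has reached EDA2.
        print(sample.image.id, sample.target_value)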

class datarobot.models.visualai.DuplicateImage(**kwargs)

An image that was duplicated in the project dataset.

Attributes:
image: Image

Image object.

count: int

Number of times the image was duplicated.

classmethod list(project_id, feature_name, offset=None, limit=None)

Get all duplicate images in a project.

Parameters:
project_id: str

Project that contains the images.

feature_name: str

Name of feature column that contains images.

offset: int

Number of images to be skipped.

limit: int

Number of images to be returned.

class datarobot.models.visualai.ImageEmbedding(**kwargs)

Vector representation of an image in an embedding space.

A vector in an embedding space allows linear computations to be carried out between images, for example computing the Euclidean distance between two images.

Attributes:
image: Image

Image object used to create this map.

feature_name: str

Name of the feature column this embedding is associated with.

position_x: int

X coordinate of the image in the embedding space.

position_y: int

Y coordinate of the image in the embedding space.

actual_target_value: object

Actual target value of the dataset row.

classmethod compute(project_id, model_id)

Start creation of image embeddings for the model.

Parameters:
project_id: str

Project to start creation in.

model_id: str

Project’s model to start creation in.

Returns:
str

URL to check for image embeddings progress.

Raises:
datarobot.errors.ClientError

Server rejected creation due to client error. Most likely cause is bad project_id or model_id.

classmethod models(project_id)

List the models in a project.

Parameters:
project_id: str

Project that contains the models.

Returns:
list( tuple(model_id, feature_name) )

List of model and feature name pairs.

classmethod list(project_id, model_id, feature_name)

Return a list of ImageEmbedding objects.

Parameters:
project_id: str

Project that contains the images.

model_id: str

Model that contains the images.

feature_name: str

Name of feature column that contains images.
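
A minimal sketch of the embedding workflow: request computation, then list the embeddings once computation has finished (the IDs are hypothetical placeholders; a configured datarobot client is assumed):

    from datarobot.models.visualai import ImageEmbedding

    project_id = "5f3d0c2e1c075a0e9b5a0000"  # hypothetical ID
    model_id = "5f3d0c2e1c075a0e9b5a7777"    # hypothetical ID

    # The returned URL can be polled for progress; results are only listable
    # once computation has completed.
    status_url = ImageEmbedding.compute(project_id, model_id)

    # Later: discover which models have embeddings and fetch their coordinates.
    for emb_model_id, feature_name in ImageEmbedding.models(project_id):
        for emb in ImageEmbedding.list(project_id, emb_model_id, feature_name):
            print(emb.image.id, emb.position_x, emb.position_y)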

class datarobot.models.visualai.ImageActivationMap(**kwargs)

Mark areas of image with weight of impact on training.

This is a technique to display how various areas of the image were used in training, and their effect on predictions. Larger values in activation_values indicate a larger impact.

Attributes:
image: Image

Image object used to create this map.

overlay_image: Image

Image object composited with activation heat map.

feature_name: str

Name of the feature column that contains the value this map is based on.

height: int

Height of the original image in pixels.

width: int

Width of the original image in pixels.

actual_target_value: object

Actual target value of the dataset row.

predicted_target_value: object

Predicted target value of the dataset row that contains this image.

activation_values: [ [ int ] ]

A row-column matrix that contains the activation strengths for image regions. Values are integers in the range [0, 255].

classmethod compute(project_id, model_id)

Start creation of an activation map for the given model.

Parameters:
project_id: str

Project to start creation in.

model_id: str

Project’s model to start creation in.

Returns:
str

URL to check for activation map computation progress.

Raises:
datarobot.errors.ClientError

Server rejected creation due to client error. Most likely cause is bad project_id or model_id.

classmethod models(project_id)

List the models in a project.

Parameters:
project_id: str

Project that contains the models.

Returns:
list( tuple(model_id, feature_name) )

List of model and feature name pairs.

classmethod list(project_id, model_id, feature_name, offset=None, limit=None)

Return a list of ImageActivationMap objects.

Parameters:
project_id: str

Project that contains the images.

model_id: str

Model that contains the images.

feature_name: str

Name of feature column that contains images.

offset: int

Number of images to be skipped.

limit: int

Number of images to be returned.
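
A minimal sketch of the activation map workflow, mirroring the embedding workflow above (the IDs are hypothetical placeholders; a configured datarobot client is assumed):

    from datarobot.models.visualai import ImageActivationMap

    project_id = "5f3d0c2e1c075a0e9b5a0000"  # hypothetical ID
    model_id = "5f3d0c2e1c075a0e9b5a7777"    # hypothetical ID

    # Start activation map computation, then (once it has finished) list the
    # maps produced for the model's image feature.
    ImageActivationMap.compute(project_id, model_id)

    for map_model_id, feature_name in ImageActivationMap.models(project_id):
        if map_model_id == model_id:
            for amap in ImageActivationMap.list(project_id, model_id, feature_name, limit=5):
                print(amap.image.id, amap.predicted_target_value)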

class datarobot.models.visualai.ImageAugmentationOptions(id, name, project_id, min_transformation_probability, current_transformation_probability, max_transformation_probability, min_number_of_new_images, current_number_of_new_images, max_number_of_new_images, transformations=None)

A list of all supported image augmentation transformations for a project, including information about the minimum, maximum, and default values for each transformation.

Attributes:
name: string

The name of the augmentation list

project_id: string

The project containing the image data to be augmented

min_transformation_probability: float

The minimum allowed value for transformation probability.

current_transformation_probability: float

Default setting for probability that each transformation will be applied to an image.

max_transformation_probability: float

The maximum allowed value for transformation probability.

min_number_of_new_images: int

The minimum allowed number of new rows to add for each existing row

current_number_of_new_images: int

The default number of new rows to add for each existing row

max_number_of_new_images: int

The maximum allowed number of new rows to add for each existing row

transformations: array

List of transformations to possibly apply to each image

classmethod get(project_id)

Returns a list of all supported transformations for the given project

Parameters:
project_id: string

The ID of the project for which to return the list of supported transformations.

Returns:
ImageAugmentationOptions

A list containing all the supported transformations for the project.

class datarobot.models.visualai.ImageAugmentationList(id, name, project_id, feature_name=None, in_use=False, initial_list=False, transformation_probability=0.0, number_of_new_images=1, transformations=None, samples_id=None)

A List of Image Augmentation Transformations

Attributes:
name: string

The name of the augmentation list

project_id: string

The project containing the image data to be augmented

feature_name: string (optional)

name of the feature that the augmentation list is associated with

in_use: boolean

Whether this is the list that will be passed to every blueprint during blueprint generation before autopilot

initial_list: boolean

True if this is the list to be used during training to produce augmentations

transformation_probability: float

Probability that each transformation will be applied to an image. The value should be between 0.01 and 1.0.

number_of_new_images: int

Number of new rows to add for each existing row

transformations: array

List of transformations to possibly apply to each image

samples_id: str

ID of the last image augmentation sample generated for this image augmentation list.

classmethod create(name, project_id, feature_name=None, in_use=False, initial_list=False, transformation_probability=0.0, number_of_new_images=1, transformations=None, samples_id=None)

Create a new image augmentation list.

retrieve_samples()

Lists the already computed image augmentation samples for this image augmentation list. Samples are returned only if they have already been computed; this method does not initiate computation.

Returns:
List of class ImageAugmentationSample

compute_samples(max_wait=600)

Initializes computation of image augmentation samples for this image augmentation list and retrieves them once ready. If samples existed prior to this method call, fresh samples are computed and the latest version is returned.

Returns:
List of class ImageAugmentationSample
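
A minimal sketch of inspecting the allowed augmentation settings, creating an augmentation list, and computing preview samples for it (the project ID and the column name "image" are hypothetical placeholders; a configured datarobot client is assumed):

    from datarobot.models.visualai import ImageAugmentationList, ImageAugmentationOptions

    project_id = "5f3d0c2e1c075a0e9b5a0000"  # hypothetical ID

    options = ImageAugmentationOptions.get(project_id)
    print(options.min_transformation_probability, options.max_transformation_probability)

    aug_list = ImageAugmentationList.create(
        name="my augmentations",
        project_id=project_id,
        feature_name="image",
        transformation_probability=0.3,  # must fall within the range printed above
        number_of_new_images=1,
    )
    samples = aug_list.compute_samples(max_wait=600)
    for sample in samples:
        print(sample.image_id, sample.original_image_id)
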
class datarobot.models.visualai.ImageAugmentationSample(image_id, project_id, height, width, original_image_id=None, sample_id=None)

A preview of the type of images that augmentations will create during training.

Attributes:
sample_id : ObjectId

The id of the augmentation sample, used to group related images together

image_id : ObjectId

A reference to the Image which can be used to retrieve the image binary

project_id : ObjectId

A reference to the project containing the image

original_image_id : ObjectId

A reference to the original image that generated this image, in the case of an augmented image. If this is None, the image is an original image

height : int

Image height in pixels

width : int

Image width in pixels

classmethod compute(augmentation_list, number_of_rows=5)

Start creation of image augmentation samples.

Parameters:
number_of_rows: int

The number of rows from the original dataset to use as input data for the augmentation samples. Defaults to 5.

augmentation_list: ImageAugmentationList

An Image Augmentation list that specifies the transformations to apply to each image during augmentation.

Returns:
str

URL to check for image augmentation samples generation progress.

Raises:
datarobot.errors.ClientError

Server rejected creation due to client error. The most likely cause is an invalid augmentation_list.

classmethod list(sample_id=None, auglist_id=None)

Return a list of ImageAugmentationSample objects.

If both sample_id and auglist_id are specified, auglist_id will take precedence.

Parameters:
sample_id: str

Unique ID for the set of sample images

auglist_id: str

ID for augmentation list to retrieve samples for

Returns:
List of class ImageAugmentationSample
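
For example, retrieving previously generated augmentation samples either by the sample set ID or by the augmentation list ID (both IDs are hypothetical placeholders; a configured datarobot client is assumed):

    from datarobot.models.visualai import ImageAugmentationSample

    by_sample_set = ImageAugmentationSample.list(sample_id="5f3d0c2e1c075a0e9b5a5555")
    by_aug_list = ImageAugmentationSample.list(auglist_id="5f3d0c2e1c075a0e9b5a8888")

    for sample in by_aug_list:
        # original_image_id is None for rows that show an unmodified original image.
        print(sample.image_id, sample.original_image_id, sample.width, sample.height)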

Word Cloud

class datarobot.models.word_cloud.WordCloud(ngrams: List[WordCloudNgram])

Word cloud data for the model.

Notes

WordCloudNgram is a dict containing the following:

  • ngram (str) Word or ngram value.
  • coefficient (float) Value from the [-1.0, 1.0] range, describing the effect of this ngram on the target. A large negative value means a strong effect toward the negative class in classification and a smaller target value in regression models; a large positive value means a strong effect toward the positive class and a bigger target value, respectively.
  • count (int) Number of rows in the training sample where this ngram appears.
  • frequency (float) Value from the (0.0, 1.0] range, the frequency of this ngram relative to the most frequent ngram.
  • is_stopword (bool) True for ngrams that DataRobot evaluates as stopwords.
  • class (str or None) For classification, the value of the target class for the corresponding word or ngram. For regression, None.
Attributes:
ngrams : list of dicts

List of dicts with schema described as WordCloudNgram above.

most_frequent(top_n: Optional[int] = 5) → List[WordCloudNgram]

Return most frequent ngrams in the word cloud.

Parameters:
top_n : int

Number of ngrams to return

Returns:
list of dict

Up to top_n most frequent ngrams in the word cloud. If top_n is greater than the total number of ngrams in the word cloud, all ngrams are returned, sorted by frequency in descending order.

most_important(top_n: Optional[int] = 5) → List[WordCloudNgram]

Return most important ngrams in the word cloud.

Parameters:
top_n : int

Number of ngrams to return

Returns:
list of dict

Up to top_n most important ngrams in the word cloud. If top_n is greater than the total number of ngrams in the word cloud, all ngrams are returned, sorted by absolute coefficient value in descending order.

ngrams_per_class() → Dict[Optional[str], List[WordCloudNgram]]

Split ngrams per target class values. Useful for multiclass models.

Returns:
dict

Dictionary in the format of (class label) -> (list of ngrams for that class)
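
For illustration, a minimal local sketch that builds a word cloud from ngram dicts following the schema described above and queries it; the ngram values are made up, and in practice the word cloud would be retrieved from a trained model rather than constructed by hand:

    from datarobot.models.word_cloud import WordCloud

    ngrams = [
        {"ngram": "the", "coefficient": 0.02, "count": 800, "frequency": 1.0,
         "is_stopword": True, "class": "complaint"},
        {"ngram": "refund", "coefficient": 0.83, "count": 120, "frequency": 0.15,
         "is_stopword": False, "class": "complaint"},
        {"ngram": "thanks", "coefficient": -0.61, "count": 95, "frequency": 0.12,
         "is_stopword": False, "class": "praise"},
    ]
    word_cloud = WordCloud(ngrams=ngrams)

    word_cloud.most_frequent(top_n=2)   # sorted by frequency, descending
    word_cloud.most_important(top_n=2)  # sorted by absolute coefficient, descending
    word_cloud.ngrams_per_class()       # {"complaint": [...], "praise": [...]}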