New features

Configuration changes

  • Updated black version to 23.1.0.

  • Removed the dependency on package mock, since it is part of the standard library.

Documentation changes

  • Added usage of external_llm_context_size in llm_settings in genai_example.rst.

  • Updated doc string for llm_settings to include attribute external_llm_context_size for external LLMs.

  • Updated genai_example.rst to link to DataRobot doc pages for external vector database and external LLM deployment creation.

  • Removed incorrect can_share parameters in the Use Case sharing example.

API changes

  • Added number_of_clusters parameter to Project.get_model_records to filter models by number of clusters in unsupervised clustering projects.

  • Removed the unsupported NETWORK_EGRESS_POLICY.DR_API_ACCESS value for custom models. This value was used by a feature that never reached GA and is not supported in the current API.

  • Implemented support for dr-connector-v1 in DataStore <datarobot.models.DataStore> and DataSource <datarobot.models.DataSource>.

  • Added a new parameter name to DataStore.list for searching data stores by name.

  • Added a new parameter entity_type to the compute and create methods of the classes ShapMatrix <datarobot.insights.shap_matrix.ShapMatrix>, ShapImpact <datarobot.insights.shap_impact.ShapImpact>, and ShapPreview <datarobot.insights.shap_preview.ShapPreview>. Insights can be computed for custom models if the parameter entity_type="customModel" is passed. See also the User Guide: SHAP insights overview.

  • Removed the ImportedModel object, since it was the API for the standalone scoring engine (SSE), which is no longer part of DataRobot.


Experimental changes


New features



  • Updated the validation logic of RelationshipsConfiguration to work with native database connections.

API changes

Deprecation summary

Configuration changes

Documentation changes

Experimental changes


New features

API changes

  • Parameter Overrides: Users can now override most of the previously set configuration values directly through parameters when initializing the Client. Exceptions: for security and consistency reasons, the endpoint and token values must be initialized from a single source (client params, environment, or config file) and cannot be overridden individually. The new configuration priority is as follows:

    1. Client params

    2. The Client config_path param

    3. Environment variables

    4. The default YAML config file at ~/.config/datarobot/drconfig.yaml
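The priority order above amounts to a first-match lookup across the configuration sources. The following is an illustrative sketch only, not the SDK's implementation; resolve_setting and the source dictionaries are hypothetical names, and the sketch does not model the endpoint/token single-source exception:

```python
def resolve_setting(name, client_params, config_file, env_vars, defaults):
    """Return the first non-None value, honoring the documented priority:
    client params > config file passed via config_path > environment
    variables > the default YAML config file."""
    for source in (client_params, config_file, env_vars, defaults):
        value = source.get(name)
        if value is not None:
            return value
    return None

print(resolve_setting(
    "ssl_verify",
    client_params={"ssl_verify": False},
    config_file={"ssl_verify": True},
    env_vars={},
    defaults={"ssl_verify": True},
))  # -> False (client params win)
```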

  • DATAROBOT_API_CONSUMER_TRACKING_ENABLED now always defaults to True.

  • Added Databricks personal access token and service principal (also shared credentials via secure config) authentication for uploading datasets from Databricks or creating a project from Databricks data.

  • Added secure config support for AWS long term credentials.

  • Implemented support for dr-database-v1 in DataStore <datarobot.models.DataStore>, DataSource <datarobot.models.DataSource>, and DataDriver <datarobot.models.DataDriver>. Added enum classes to support the changes.

  • You can retrieve the canonical URI for a Use Case using UseCase.get_uri.

  • You can open a Use Case in a browser using UseCase.open_in_browser.


Deprecation summary

Documentation changes

  • Updated genai_example.rst to use the latest GenAI features and methods introduced in the API client.

Experimental changes


  • Fixed how the async URL is built in Model.get_or_request_feature_impact.

  • Fixed setting ssl_verify via environment variables.

  • Resolved a problem related to tilde-based paths in the Client's config_path attribute.

  • Changed the force_size default of ImageOptions so that, by default, the same transformations are applied as when image archive datasets are uploaded to DataRobot.


New features


  • The payload property subset was renamed to source in Model.request_feature_effect.

  • Fixed an issue where Context.trace_context was not being set from environment variables or DR config files.

  • Project.refresh no longer sets Project.advanced_options to a dictionary.

  • Fixed Dataset.modify to clarify behavior of when to preserve or clear categories.

  • Fixed an issue with enums in f-strings resulting in the enum class and property being printed instead of the enum property’s value in Python 3.11 environments.
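The underlying Python 3.11 behavior change can be reproduced with a plain str-mixin Enum: 3.11 made Enum.__format__ delegate to __str__, so f-strings render "ClassName.MEMBER" rather than the value unless __str__ is overridden. A minimal sketch of the fix pattern (AutopilotMode and _SafeEnum are illustrative names, not the SDK's classes):

```python
from enum import Enum

class _SafeEnum(str, Enum):
    # Delegating __str__ to str keeps f-string output stable across
    # Python versions, since Enum.__format__ uses __str__ in 3.11+.
    __str__ = str.__str__

class AutopilotMode(_SafeEnum):
    QUICK = "quick"

print(f"{AutopilotMode.QUICK}")  # -> quick (on 3.10 and 3.11+ alike)
```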

Deprecation summary

  • Project.refresh will no longer set Project.advanced_options to a dictionary after version 3.5 is released.

    All interactions with Project.advanced_options should be expected to be through the AdvancedOptions class.

Experimental changes


New features



  • Fixed incompatibilities with Pandas 2.0 in DatetimePartitioning.to_dataframe.

  • Fixed a crash when using non-latin-1 characters in a pandas DataFrame used as prediction data in BatchPredictionJob.score.
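The gist of this class of fix is to serialize prediction data as UTF-8 rather than latin-1. A minimal standalone sketch of that idea, using only the standard library; the function name is illustrative and unrelated to the SDK's internals:

```python
import csv
import io

def rows_to_csv_bytes(header, rows):
    """Serialize rows to CSV bytes using UTF-8 so characters outside
    latin-1 (e.g. CJK text) survive the round trip."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)
    writer.writerows(rows)
    return buf.getvalue().encode("utf-8")

payload = rows_to_csv_bytes(["text"], [["日本語"]])
print("日本語" in payload.decode("utf-8"))  # -> True
```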

  • Fixed an issue where failed authentication when invoking datarobot.client.Client() raised a misleading error about client-server compatibility.

  • Fixed incompatibilities with Pandas 2.0 in AccuracyOverTime.get_as_dataframe. The method will now throw a ValueError if an empty list is passed to the parameter metrics.

API changes

  • Added parameter unsupervised_type to the class DatetimePartitioning.

  • The sliced insight API endpoint GET: api/v2/insights/<insight_name>/ returns a paginated response. This means that it returns an empty response if no insights data is found, unlike GET: api/v2/projects/<pid>/models/<lid>/<insight_name>/, which returns 404 NOT FOUND in this case. To maintain backwards-compatibility, all methods that retrieve insights data raise 404 NOT FOUND if the insights API returns an empty response.
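The backwards-compatibility behavior described above can be sketched as a small translation layer that turns an empty paginated response into the legacy 404 behavior. NotFoundError and insights_or_404 are hypothetical names, not the client's actual internals:

```python
class NotFoundError(Exception):
    """Stands in for the client's 404 error type."""

def insights_or_404(response_json):
    """Return the insights data list, or raise the legacy 404-style error
    when the paginated endpoint returns an empty response."""
    data = response_json.get("data", [])
    if not data:
        raise NotFoundError("404 NOT FOUND: no insights data")
    return data

print(insights_or_404({"data": [{"id": "abc"}]}))  # -> [{'id': 'abc'}]
```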

Deprecation summary

Configuration changes

  • Pinned the dependency on package urllib3 to less than version 2.0.0.

Deprecation summary

  • Deprecated parameter user_agent_suffix in datarobot.Client. user_agent_suffix will be removed in v3.4. Please use trace_context instead.

Documentation changes

  • Fixed in-line documentation of DataRobotClientConfig.

  • Fixed documentation around client configuration from environment variables or config file.

Experimental changes


Configuration changes

  • Removed the dependency on package contextlib2, since the datarobot package now requires Python 3.7+.

  • Updated the typing-extensions constraint to include versions from 4.3.0 up to (but not including) 5.0.0.




  • Added a format key to Batch Prediction intake and output settings for S3, GCP, and Azure.

API changes

Deprecation summary

  • Deprecated method Project.create_from_hdfs.

  • Deprecated method DatetimePartitioning.generate.

  • Deprecated parameter in_use from ImageAugmentationList.create as DataRobot will take care of it automatically.

  • Deprecated property Deployment.capabilities from Deployment.

  • ImageAugmentationSample.compute was removed in v3.1. You can get the same information with the method ImageAugmentationList.compute_samples.

  • sample_id parameter removed from ImageAugmentationSample.list. Please use auglist_id instead.

Documentation changes

  • Updated the documentation to suggest that setting use_backtest_start_end_format of DatetimePartitioning.to_specification to True will mirror the same behavior as the Web UI.

  • Updated the documentation to suggest that setting use_start_end_format of Backtest.to_specification to True will mirror the same behavior as the Web UI.



  • Fixed an issue affecting backwards compatibility in datarobot.models.DatetimeModel, where an unexpected keyword from the DataRobot API would break class deserialization.



  • Restored Model.get_leaderboard_ui_permalink and Model.open_model_browser. These methods were accidentally removed instead of deprecated.

  • Fixed an issue with ipykernel < 6.0.0, which does not persist contextvars across cells.

Deprecation summary

  • Deprecated method Model.get_leaderboard_ui_permalink. Please use Model.get_uri instead.

  • Deprecated method Model.open_model_browser. Please use Model.open_in_browser instead.



  • Added typing-extensions as a required dependency for the DataRobot Python SDK.


New features



  • Dataset.list no longer throws errors when listing datasets with no owner.

  • Fixed an issue with the creation of BatchPredictionJobDefinitions containing a schedule.

  • Fixed error handling in datarobot.helpers.partitioning_methods.get_class.

  • Fixed issue with portions of the payload not using camelCasing in Project.upload_dataset_from_catalog.

API changes

  • The Python client now outputs a DataRobotProjectDeprecationWarning when you attempt to access certain resources (projects, models, deployments, etc.) that are deprecated or disabled as a result of the DataRobot platform’s migration to Python 3.

  • The Python client now raises a TypeError when you try to retrieve a labelwise ROC on a binary model or a binary ROC on a multilabel model.

  • The method Dataset.create_from_data_source now raises InvalidUsageError if username and password are not passed as a pair together.

Deprecation summary

Configuration changes

  • Added a context manager client_configuration that can be used to change the connection configuration temporarily, for use in asynchronous or multithreaded code.

  • Upgraded the Pillow library to version 9.2.0. Users installing DataRobot with the “images” extra (pip install datarobot[images]) should note that this is a required library.

Experimental changes


New features


  • Added support for specifying custom endpoint URLs for S3 access in batch predictions:

    See: endpoint_url parameter.

  • Added guide on working with binary data

  • Added multithreading support to binary data helper functions.

  • Aligned the image defaults of the binary data helpers with the application's image preprocessing.

  • Added the following accuracy metrics, retrievable for a deployment via Deployment monitoring: TPR, PPV, F1, and MCC.


  • Holdout start date, end date, and duration are no longer included in the datetime partitioning payload when holdout is disabled.

  • Removed ICE Plot capabilities from Feature Fit.

  • Handled an undefined calendar_name in CalendarFile.create_calendar_from_dataset.

  • Raised a ValueError for submitted calendar names that are not strings.

API changes

  • The version field was removed from the ImportedModel object.

Deprecation summary

  • Reason Codes objects, deprecated in version 2.13, have been removed. Please use Prediction Explanations instead.

Configuration changes

  • The upper version constraint on pandas has been removed.

Documentation changes

  • Fixed a minor typo in the example for Dataset.create_from_data_source.

  • Updated the documentation to suggest that feature_derivation_window_end of the datarobot.DatetimePartitioningSpecification class should be negative or zero.


New features



API changes

  • Users can now include ICE plot data in the response when requesting Feature Effects or Feature Fit; the relevant methods have been extended accordingly.

Deprecation summary

  • The attrs library has been removed from the package dependencies.

  • ImageAugmentationSample.compute was marked as deprecated and will be removed in v2.30. You can get the same information with the newly introduced method ImageAugmentationList.compute_samples.

  • Deprecated calling ImageAugmentationSample.list using sample_id. Please use auglist_id instead.

  • Deprecated scaleout parameters for projects and models, including scaleout_modeling_mode, scaleout_max_train_pct, and scaleout_max_train_rows.

Configuration changes

  • pandas upper version constraint is updated to include version 1.3.5.

Documentation changes

  • Fixed “from datarobot.enums” import in Unsupervised Clustering example provided in docs.


New features



API changes

Deprecation summary

  • Model.get_all_labelwise_roc_curves has been removed. You can get the same information with multiple calls of Model.get_labelwise_roc_curves, one per data source.

  • Model.get_all_multilabel_lift_charts has been removed. You can get the same information with multiple calls of Model.get_multilabel_lift_charts, one per data source.

Documentation changes

  • This release introduces a new documentation organization. The organization has been modified to better reflect the end-to-end modeling workflow. The new “Tutorials” section has 5 major topics that outline the major components of modeling: Data, Modeling, Predictions, MLOps, and Administration.

  • The Getting Started workflow is now hosted at DataRobot’s API Documentation Home.

  • Added an example of how to set up optimized datetime partitioning for time series projects.


New features



API changes

  • Updated Project.start to use AUTOPILOT_MODE.QUICK when the autopilot_on parameter is set to True. This brings it in line with Project.set_target.

  • Updated Project.start_autopilot to accept the following new GA parameters that are already in the public API: consider_blenders_in_recommendation and run_leakage_removed_feature_list.

Deprecation summary

Configuration changes

  • Now requires dependency on package scikit-learn rather than sklearn. Note: This dependency is only used in example code. See this scikit-learn issue for more information.

  • Now permits dependency on package attrs to be less than version 21. This fixes compatibility with apache-airflow.

  • Allowed setting an Authorization: <type> <token> header for OAuth2 Bearer tokens.

Documentation changes

  • Update the documentation with respect to the permission that controls AI Catalog dataset snapshot behavior.


New features



  • Removed deprecation warnings when using the client with the latest versions of urllib3.

  • FeatureAssociationMatrix.get now uses the correct query parameter name when featurelist_id is specified.

  • Handled scalar values in shapBaseValue when converting a predictions response to a data frame.

  • Ensured that if a configured endpoint ends in a trailing slash, the resulting full URL does not contain double slashes in the path.
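Normalizing slashes when joining an endpoint with a path is a small, testable operation. A minimal sketch of the idea (build_url is an illustrative name, not the client's function):

```python
def build_url(endpoint, path):
    # Strip a trailing slash from the endpoint and a leading slash from
    # the path so the joined URL never contains "//" in its path part.
    return endpoint.rstrip("/") + "/" + path.lstrip("/")

print(build_url("https://app.example.com/api/v2/", "projects/"))
# -> https://app.example.com/api/v2/projects/
```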

  • Model.request_frozen_datetime_model now implements correct validation of the input parameter training_start_date.

API changes


New features



API changes

Deprecation summary

Documentation changes


New features


  • Running Autopilot on the Leakage Removed feature list can now be specified in AdvancedOptions. By default, Autopilot will always run on the Informative Features - Leakage Removed feature list if it exists. If the parameter run_leakage_removed_feature_list is set to False, then Autopilot will run on Informative Features or an available custom feature list.

  • The methods Project.upload_dataset and Project.upload_dataset_from_data_source support a new optional parameter secondary_datasets_config_id for Feature Discovery projects.


API changes

Deprecation summary

Documentation changes


New features



  • Handled null values in predictionExplanationMetadata["shapRemainingTotal"] when converting a predictions response to a data frame.

  • Handled null values in customModel["latestVersion"].

  • Removed an extra status column from BatchPredictionJob, as it caused issues with newer versions of Trafaret validation.

  • Made predicted_vs_actual optional in Feature Effects data, because a feature may have insufficient qualified samples.

  • Made jdbc_url optional in Data Store data, because some data stores will not have it.

  • The method Project.get_datetime_models now correctly returns all DatetimeModel objects for the project, instead of just the first 100.

  • Fixed a documentation error related to snake_case vs. camelCase in the JDBC settings payload.

  • Made the trafaret validator for datasets use syntax that works properly with a wider range of trafaret versions.

  • Handled extra keys in CustomModelTests and CustomModelVersions.

  • ImageEmbedding and ImageActivationMap now support regression projects.

API changes

  • The default value for the mode parameter in Project.set_target has been changed from AUTOPILOT_MODE.FULL_AUTO to AUTOPILOT_MODE.QUICK.

Documentation changes

  • Added links to classes with duration parameters such as validation_duration and holdout_duration to provide duration string examples to users.

  • The models documentation has been revised to include section on how to train a new model and how to run cross-validation or backtesting for a model.


New features



  • Fixed an issue with input validation in the Batch Prediction module.

  • Fixed an issue where parent_model_id was not visible for all frozen models.

  • Fixed an issue where Batch Prediction jobs using output types other than local_file failed when calling .wait_for_completion().

  • Fixed a race condition in the Batch Prediction file scoring logic.

API changes

  • Three new fields were added to the Dataset object. This reflects the updated fields in the public API routes at api/v2/datasets/. The added fields are:

    • processing_state: The current ingestion process state of the dataset.

    • row_count: The number of rows in the dataset.

    • size: The size of the dataset as a CSV in bytes.

Deprecation summary

  • datarobot.enums.VARIABLE_TYPE_TRANSFORM.CATEGORICAL is deprecated for the following methods and will be removed in v2.22:
    • Project.batch_features_type_transform

    • Project.create_type_transform_feature


New features

  • There is a new Dataset object that implements some of the public API routes at api/v2/datasets/. This also adds two new feature classes and a details class.


  • It is now possible to create an alternative configuration for the secondary dataset, which can be used during prediction.

  • You can now filter the deployments returned by the Deployment.list command. You can do this by passing an instance of the DeploymentListFilters class to the filters keyword argument. The currently supported filters are:

    • role

    • service_health

    • model_health

    • accuracy_health

    • execution_environment_type

    • materiality

  • A new workflow is available for making predictions in time series projects. To that end, PredictionDataset objects now contain the following new fields:

    • forecast_point_range: The start and end date of the range of dates available for use as the forecast point, detected based on the uploaded prediction dataset

    • data_start_date: A datestring representing the minimum primary date of the prediction dataset

    • data_end_date: A datestring representing the maximum primary date of the prediction dataset

    • max_forecast_date: A datestring representing the maximum forecast date of this prediction dataset

    Additionally, users no longer need to specify a forecast_point or predictions_start_date and predictions_end_date when uploading datasets for predictions in time series projects. More information can be found in the time series predictions documentation.

  • Per-class lift chart data is now available for multiclass models using Model.get_multiclass_lift_chart.

  • Unsupervised projects can now be created using the Project.start and Project.set_target methods by providing unsupervised_mode=True, provided that the user has access to unsupervised machine learning functionality. Contact support for more information.

  • A new boolean attribute unsupervised_mode was added to datarobot.DatetimePartitioningSpecification. When it is set to True, datetime partitioning for unsupervised time series projects will be constructed for nowcasting: forecast_window_start=forecast_window_end=0.

  • Users can now configure the start and end of the training partition as well as the end of the validation partition for backtests in a datetime-partitioned project. More information and example usage can be found in the backtesting documentation.


  • Updated the user agent header to show which Python version is in use.

  • Model.get_frozen_child_models can be used to retrieve models that are frozen from a given model.

  • Added datarobot.enums.TS_BLENDER_METHOD to make it clearer which blender methods are allowed for use in time series projects.


  • Fixed an issue where uploaded CSVs would lose quotes during serialization, causing problems when columns containing line terminators were loaded into a dataframe.

  • Project.get_association_featurelists now uses the correct endpoint name, but the old one will continue to work.

  • PredictionServer now supports the on-premise format of the API response.


New features


  • Added documentation to Project.get_metrics to detail the new ascending field that indicates how a metric should be sorted.

  • Retraining of a model is processed asynchronously and returns a ModelJob immediately.

  • Blender models can be retrained on a different set of data or a different feature list.

  • Word cloud ngrams now have a variable field representing the source of the ngram.

  • Method WordCloud.ngrams_per_class can be used to split ngrams for better usability in multiclass projects.

  • The method Project.set_target supports new optional parameters featureEngineeringGraphs and credentials.

  • The methods Project.upload_dataset and Project.upload_dataset_from_data_source support a new optional parameter credentials.

  • Series accuracy retrieval methods (DatetimeModel.get_series_accuracy_as_dataframe and DatetimeModel.download_series_accuracy_as_csv) for multiseries time series projects now support additional parameters for specifying what data to retrieve, including:

    • metric: Which metric to retrieve scores for

    • multiseries_value: Only returns series with a matching multiseries ID

    • order_by: An attribute by which to sort the results


API changes

  • The datarobot package is now no longer a namespace package.

  • datarobot.enums.BLENDER_METHOD.FORECAST_DISTANCE is removed (deprecated in 2.18.0).

Documentation changes

  • Updated Residuals charts documentation to reflect that the data rows include row numbers from the source dataset for projects created in DataRobot 5.3 and newer.


New features

  • Residuals charts can now be retrieved for non-time-aware regression models.

  • Deployment monitoring can now be used to retrieve service stats, service health, accuracy info, permissions, and feature lists for deployments.

  • Time series projects now support the Average by Forecast Distance blender, configured with more than one Forecast Distance. The blender blends the selected models, selecting the best three models based on the backtesting score for each Forecast Distance and averaging their predictions. The new blender method FORECAST_DISTANCE_AVG has been added to datarobot.enums.BLENDER_METHOD.

  • Deployment.submit_actuals can now be used to submit data about actual results from a deployed model, which can be used to calculate accuracy metrics.


  • Monotonic constraints are now supported for OTV projects. To that end, the parameters monotonic_increasing_featurelist_id and monotonic_decreasing_featurelist_id can be specified in calls to Model.train_datetime or Project.train_datetime.

  • When retrieving information about features, information about summarized categorical variables is now available in a new keySummary.

  • For Word Clouds in multiclass projects, values of the target class for corresponding word or ngram can now be passed using the new class parameter.

  • Listing deployments using Deployment.list now support sorting and searching the results using the new order_by and search parameters.

  • You can now get the model associated with a model job by getting the model variable on the model job object.

  • The Blueprint class can now retrieve the recommended_featurelist_id, which indicates which feature list is recommended for this blueprint. If the field is not present, then there is no recommended feature list for this blueprint.

  • The Model class now can be used to retrieve the model_number.

  • The method Model.get_supported_capabilities now has an extra field supportsCodeGeneration to explain whether the model supports code generation.

  • Calls to Project.start and Project.upload_dataset now support uploading data via S3 URI and pathlib.Path objects.

  • Errors upon connecting to DataRobot are now clearer when an incorrect API Token is used.

  • The datarobot package is now a namespace package.

Deprecation summary

  • datarobot.enums.BLENDER_METHOD.FORECAST_DISTANCE is deprecated and will be removed in 2.19. Use FORECAST_DISTANCE_ENET instead.

Documentation changes

  • Various typo and wording issues have been addressed.

  • A new notebook showing regression-specific features has been added to the examples.

  • Documentation for Access lists has been added.


New features


  • number_of_do_not_derive_features has been added to the datarobot.DatetimePartitioning class to specify the number of features that are marked as excluded from derivation.

  • Users with PyYAML>=5.1 will no longer receive a warning when using the datarobot package.

  • It is now possible to use files with unicode names for creating projects and prediction jobs.

  • Users can now embed DataRobot-generated content in a ComplianceDocTemplate using keyword tags. See here for more details.

  • The field calendar_name has been added to datarobot.DatetimePartitioning to display the name of the calendar used for a project.

  • Prediction intervals are now supported for start-end retrained models in a time series project.

  • Previously, all backtests had to be run before prediction intervals for a time series project could be requested with predictions. Now, backtests will be computed automatically if needed when prediction intervals are requested.


  • An issue affecting time series project creation for irregularly spaced dates has been fixed.

  • ComplianceDocTemplate now supports empty text blocks in user sections.

  • An issue when using Predictions.get to retrieve predictions metadata has been fixed.

Documentation changes


New features


  • Information on the effective feature derivation window is now available for time series projects to specify the full span of historical data required at prediction time. It may be longer than the feature derivation window of the project depending on the differencing settings used.

    Additionally, more of the project partitioning settings are also available on the DatetimeModel class. The new attributes are:

    • effective_feature_derivation_window_start

    • effective_feature_derivation_window_end

    • forecast_window_start

    • forecast_window_end

    • windows_basis_unit

  • Prediction metadata is now included in the return value of Predictions.get.

Documentation changes

  • Various typo and wording issues have been addressed.

  • The example data that accompanies the Time Series examples has been added to the downloadable zip file in the examples.



  • CalendarFile.get_access_list has been added to the CalendarFile class to return a list of users with access to a calendar file.

  • A role attribute has been added to the CalendarFile class to indicate the access level a current user has to a calendar file. For more information on the specific access levels, see the sharing documentation.


  • Previously, attempting to retrieve the calendar_id of a project without a set target would result in an error. This has been fixed to return None instead.


New features


  • The dataframe returned from datarobot.PredictionExplanations.get_all_as_dataframe() will now keep each class label column class_X consistent from row to row.

  • The client is now more robust to networking issues by default. It will retry on more errors and respects Retry-After headers in HTTP 413, 429, and 503 responses.

  • Added Forecast Distance blender for Time-Series projects configured with more than one Forecast Distance. It blends the selected models creating separate linear models for each Forecast Distance.

  • Project can now be shared with other users.

  • Project.upload_dataset and Project.upload_dataset_from_data_source will return a PredictionDataset with data_quality_warnings if potential problems exist around the uploaded dataset.

  • relax_known_in_advance_features_check has been added to Project.upload_dataset and Project.upload_dataset_from_data_source to allow missing values from the known in advance features in the forecast window at prediction time.

  • cross_series_group_by_columns has been added to datarobot.DatetimePartitioning to allow users the ability to indicate how to further split series into related groups.

  • Information retrieval for ROC Curve has been extended to include fraction_predicted_as_positive, fraction_predicted_as_negative, lift_positive, and lift_negative.


  • Fixed an issue where the client would not be usable if it could not confirm compatibility with the configured server.

API changes

Configuration changes

  • Now requires dependency on package requests to be at least version 2.21.

  • Now requires dependency on package urllib3 to be at least version 1.24.

Documentation changes

  • The advanced model insights notebook was extended to include information on visualizing cumulative gains and lift charts.



  • Fixed an issue where searches of the HTML documentation would sometimes hang indefinitely.

Documentation changes

  • Python 3 is now the primary interpreter used to build the docs (this does not affect the ability to use the package with Python 2).


Documentation changes

  • Documentation for the Model Deployment interface has been removed after the corresponding interface was removed in 2.13.0.


New features

  • The new method Model.get_supported_capabilities retrieves a summary of the capabilities supported by a particular model, such as whether it is eligible for Prime and whether it has word cloud data available.

  • New class for working with model compliance documentation feature of DataRobot: class ComplianceDocumentation

  • New class for working with compliance documentation templates: ComplianceDocTemplate

  • New class FeatureHistogram has been added to retrieve feature histograms for a requested maximum bin count

  • Time series projects now support binary classification targets.

  • Cross series features can now be created within time series multiseries projects using the use_cross_series_features and aggregation_type attributes of the datarobot.DatetimePartitioningSpecification. See the Time Series documentation for more info.


  • Client instantiation now checks the endpoint configuration and provides more informative error messages. It also automatically corrects HTTP to HTTPS if the server responds with a redirect to HTTPS.

  • Project.upload_dataset and Project.create now accept an optional parameter of dataset_filename to specify a file name for the dataset. This is ignored for url and file path sources.

  • New optional parameter fallback_to_parent_insights has been added to Model.get_lift_chart, Model.get_all_lift_charts, Model.get_confusion_chart, Model.get_all_confusion_charts, Model.get_roc_curve, and Model.get_all_roc_curves. When True, a frozen model with missing insights will attempt to retrieve the missing insight data from its parent model.

  • New number_of_known_in_advance_features attribute has been added to the datarobot.DatetimePartitioning class. The attribute specifies number of features that are marked as known in advance.

  • Project.set_worker_count can now update the worker count on a project to the maximum number available to the user.

  • The Recommended Models API can now be used to retrieve model recommendations for datetime partitioned projects.

  • Time series projects can now accept feature derivation and forecast window intervals in terms of a number of rows rather than a fixed time unit. DatetimePartitioningSpecification and Project.set_target support a new optional parameter windowsBasisUnit, either 'ROW' or a detected time unit.

  • Time series projects can now accept feature derivation intervals, forecast windows, forecast points, and prediction start/end dates in milliseconds.

  • DataSources and DataStores can now be shared with other users.

  • Training predictions for datetime partitioned projects now support the new data subset dr.enums.DATA_SUBSET.ALL_BACKTESTS for requesting the predictions for all backtest validation folds.

API changes

  • The model recommendation type “Recommended” (deprecated in version 2.13.0) has been removed.

Documentation changes

  • Example notebooks have been updated:
    • Notebooks now work in Python 2 and Python 3

    • A notebook illustrating time series capability has been added

    • The financial data example has been replaced with an updated introductory example.

  • To supplement the embedded Python notebooks in both the PDF and HTML docs bundles, the notebook files and supporting data can now be downloaded from the HTML docs bundle.

  • Fixed a minor typo in the code sample for get_or_request_feature_impact


New features


  • Python 3.7 is now supported.

  • Feature impact now returns not only the impact score for the features but also whether they were detected to be redundant with other high-impact features.

  • A new is_blocked attribute has been added to the Job class, specifying whether a job is blocked from execution because one or more dependencies are not yet met.

  • The Featurelist object now has new attributes reporting its creation time, whether it was created by a user or by DataRobot, and the number of models using the featurelist, as well as a new description field.

  • Featurelists can now be renamed and have their descriptions updated with Featurelist.update and ModelingFeaturelist.update.

  • Featurelists can now be deleted with Featurelist.delete and ModelingFeaturelist.delete.

  • ModelRecommendation.get now accepts an optional parameter of type datarobot.enums.RECOMMENDED_MODEL_TYPE which can be used to get a specific kind of recommendation.

  • Previously computed predictions can now be listed and retrieved with the Predictions class, without requiring a reference to the original PredictJob.


  • The Model Deployment interface which was previously visible in the client has been removed to allow the interface to mature, although the raw API is available as a “beta” API without full backwards compatibility support.

API changes

  • Added support for retrieving the Pareto Front of a Eureqa model. See ParetoFront.

  • A new recommendation type "Recommended for Deployment" has been added to ModelRecommendation, which is now returned as the default recommended model when available. See Model Recommendation.

Deprecation summary

  • The feature previously referred to as “Reason Codes” has been renamed to “Prediction Explanations”, to provide increased clarity and accessibility. The old ReasonCodes interface has been deprecated and replaced with PredictionExplanations.

  • The recommendation type “Recommended” is deprecated and will no longer be returned in v2.14 of the API.

Documentation changes


New features

  • Some models now have Missing Value reports allowing users with access to uncensored blueprints to retrieve a detailed breakdown of how numeric imputation and categorical converter tasks handled missing values. See the documentation for more information on the report.


New features

  • The new ModelRecommendation class can be used to retrieve the recommended models for a project.

  • A new helper method cross_validate has been added to the Model class. This method can be used to request a model's cross-validation score.

  • Training a model with monotonic constraints is now supported. Training with monotonic constraints allows users to force models to learn monotonic relationships with respect to some features and the target. This helps users create accurate models that comply with regulations (e.g. insurance, banking). Currently, only certain blueprints (e.g. xgboost) support this feature, and it is only supported for regression and binary classification projects.

  • DataRobot now supports “Database Connectivity”, allowing databases to be used as the source of data for projects and prediction datasets. The feature works on top of the JDBC standard, so a variety of databases conforming to that standard are available; a list of databases with tested support for DataRobot is available in the user guide in the web application. See Database Connectivity for details.

  • Added a new feature to retrieve feature logs for time series projects. Check datarobot.DatetimePartitioning.feature_log_list() and datarobot.DatetimePartitioning.feature_log_retrieve() for details.

API changes

Deprecation summary

Configuration changes

  • Retry settings compatible with those offered by urllib3’s Retry interface can now be configured. By default, we will now retry connection errors that prevented requests from arriving at the server.
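The effect can be sketched generically (this helper is illustrative only; the actual client accepts settings compatible with urllib3's Retry interface rather than implementing its own loop):

```python
import time

def with_retries(func, attempts=3, backoff=0.0):
    """Retry func() when a ConnectionError prevents the request from
    reaching the server; re-raise once attempts are exhausted (sketch)."""
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except ConnectionError:
            if attempt == attempts:
                raise
            time.sleep(backoff * attempt)  # simple linear backoff

calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise ConnectionError("server unreachable")
    return "ok"

result = with_retries(flaky)  # succeeds on the third attempt
```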

Documentation changes

  • “Advanced Model Insights” example has been updated to properly handle bin weights when rebinning.


New features

  • New ModelDeployment class can be used to track status and health of models deployed for predictions.


  • The DataRobot API now supports creating three new blender types: Random Forest, TensorFlow, and LightGBM.

  • Multiclass projects now support blender creation for the three new blender types, as well as Average and ENET blenders.

  • Models can be trained by requesting a particular row count using the new training_row_count argument with Project.train, Model.train and Model.request_frozen_model in non-datetime partitioned projects, as an alternative to the previous option of specifying a desired percentage of the project dataset. Specifying model size by row count is recommended when the float precision of sample_pct could be problematic, e.g. when training on a small percentage of the dataset or when training up to partition boundaries.
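The float-precision concern motivating training_row_count can be seen with simple arithmetic (the numbers below are invented for illustration):

```python
total_rows = 9_999_999
desired_rows = 16

# Express the desired size as a percentage with limited precision,
# as a sample_pct value effectively does:
sample_pct = round(100 * desired_rows / total_rows, 4)

# Converting back to a row count no longer recovers the original size:
recovered_rows = round(total_rows * sample_pct / 100)
```

Here recovered_rows comes out as 20 rather than 16, which is why specifying the row count directly is more reliable for very small training sizes.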

  • New attributes max_train_rows, scaleout_max_train_pct, and scaleout_max_train_rows have been added to Project. max_train_rows specifies the equivalent value to the existing max_train_pct as a row count. The scaleout fields can be used to see how far scaleout models can be trained on projects, which for projects taking advantage of scalable ingest may exceed the limits on the data available to non-scaleout blueprints.

  • Individual features can now be marked as a priori or not a priori using the new feature_settings attribute when setting the target or specifying datetime partitioning settings on time series projects. Any features not specified in the feature_settings parameter will be assigned according to the default_to_a_priori value.

  • Three new options have been made available in the datarobot.DatetimePartitioningSpecification class to fine-tune how time-series projects derive modeling features. treat_as_exponential can control whether data is analyzed as an exponential trend and transformations like log-transform are applied. differencing_method can control which differencing method to use for stationary data. periodicities can be used to specify periodicities occurring within the data. All are optional and defaults will be chosen automatically if they are unspecified.

API changes

  • training_row_count is now available on non-datetime models as well as "rowCount"-based datetime models. It reports the number of rows used to train the model (equivalent to sample_pct).

  • Features retrieved from Feature.get now include target_leakage.



  • The documented default connect_timeout will now be correctly set for all configuration mechanisms, so that requests that fail to reach the DataRobot server in a reasonable amount of time will now error instead of hanging indefinitely. If you observe that you have started seeing ConnectTimeout errors, please configure your connect_timeout to a larger value.

  • The version of the trafaret library this package depends on is now pinned to trafaret>=0.7,<1.1, since versions outside that range are known to be incompatible.


New features

  • The DataRobot API supports the creation, training, and predicting of multiclass classification projects. By default, DataRobot handles a dataset with a numeric target column as regression. If your numeric target has fewer than 11 distinct values, you can override this behavior to instead create a multiclass classification project from the data. To do so, use the set_target function, setting target_type='Multiclass'. If DataRobot recognizes your data as categorical and it has fewer than 11 classes, using multiclass will create a project that classifies which label the data belongs to.

  • The DataRobot API now includes Rating Tables. A rating table is an exportable csv representation of a model. Users can influence predictions by modifying them and creating a new model with the modified table. See the documentation for more information on how to use rating tables.

  • scaleout_modeling_mode has been added to the AdvancedOptions class used when setting a project target. It can be used to control whether scaleout models appear in the autopilot and/or available blueprints. Scaleout models are only supported in the Hadoop environment with the corresponding user permission set.

  • A new premium add-on product, Time Series, is now available. New projects can be created as time series projects which automatically derive features from past data and forecast the future. See the time series documentation for more information.

  • The Feature object now returns the EDA summary statistics (i.e., mean, median, minimum, maximum, and standard deviation) for features where this is available (e.g., numeric, date, time, currency, and length features). These summary statistics are returned in the same format as the data they summarize.

  • The DataRobot API now supports the Training Predictions workflow. Training predictions are made by a model for a subset of data from the original dataset. Users can start a job that computes those predictions and then retrieve the results. See the documentation for more information on how to use training predictions.

  • DataRobot now supports retrieving model blueprint charts and model blueprint documentation.

  • With the introduction of Multiclass Classification projects, DataRobot needed a better way to explain the performance of a multiclass model, so a new Confusion Chart was created. The API now supports retrieving and interacting with confusion charts.


  • DatetimePartitioningSpecification now includes the optional disable_holdout flag that can be used to disable the holdout fold when creating a project with datetime partitioning.

  • When retrieving reason codes on a project using an exposure column, predictions that are adjusted for exposure can be retrieved.

  • File URIs can now be used as sourcedata when creating a project or uploading a prediction dataset. The file URI must refer to an allowed location on the server, which is configured as described in the user guide documentation.

  • The advanced options available when setting the target have been extended to include the new parameter ‘events_count’ as a part of the AdvancedOptions object to allow specifying the events count column. See the user guide documentation in the webapp for more information on events count.

  • PredictJob.get_predictions now returns predicted probability for each class in the dataframe.

  • PredictJob.get_predictions now accepts a prefix parameter to prefix the class names returned in the predictions dataframe.

API changes

  • Added a target_type parameter to set_target() and start(), which can be used to override the project default.


Documentation changes

  • Updated link to the publicly hosted documentation.


Documentation changes

  • Online documentation hosting has migrated from PythonHosted to Read The Docs. Minor code changes have been made to support this.


New features

  • Lift chart data for models can be retrieved using the Model.get_lift_chart and Model.get_all_lift_charts methods.

  • ROC curve data for models in classification projects can be retrieved using the Model.get_roc_curve and Model.get_all_roc_curves methods.

  • Semi-automatic autopilot mode is removed.

  • Word cloud data for text processing models can be retrieved using Model.get_word_cloud method.

  • Scoring code JAR file can be downloaded for models supporting code generation.


  • A __repr__ method has been added to the PredictionDataset class to improve readability when using the client interactively.

  • Model.get_parameters now includes an additional key in the derived features it includes, showing the coefficients for individual stages of multistage models (e.g. Frequency-Severity models).

  • When training a DatetimeModel on a window of data, a time_window_sample_pct can be specified to take a uniform random sample of the training data instead of using all data within the window.

  • The DataRobot package now provides an "Extra Requirements" installation option that installs all of the dependencies needed to run the example notebooks.

Documentation changes

  • A new example notebook describing how to visualize some of the newly available model insights including lift charts, ROC curves, and word clouds has been added to the examples section.

  • A new section for Common Issues has been added to Getting Started to help debug issues related to client installation and usage.



  • Fixed a bug with Model.get_parameters raising an exception on some valid parameter values.

Documentation changes

  • Fixed sorting order in Feature Impact example code snippet.


New features

  • A new partitioning method (datetime partitioning) has been added. The recommended workflow is to preview the partitioning by creating a DatetimePartitioningSpecification and passing it into DatetimePartitioning.generate, inspect the results and adjust as needed for the specific project dataset by adjusting the DatetimePartitioningSpecification and re-generating, and then set the target by passing the final DatetimePartitioningSpecification object to the partitioning_method parameter of Project.set_target.
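The preview-then-commit workflow might be sketched as follows (assuming the datarobot package is installed and a datetime column named "date"; the import is deferred so the sketch stays self-contained):

```python
def preview_datetime_partitioning(project, target, datetime_column="date"):
    """Sketch of the recommended preview-then-commit workflow."""
    import datarobot as dr  # deferred import; assumes the package is installed

    spec = dr.DatetimePartitioningSpecification(datetime_column)

    # Preview how DataRobot would partition the data before committing;
    # inspect the result, adjust the specification, and re-generate as needed.
    preview = dr.DatetimePartitioning.generate(project.id, spec)
    print(preview.to_dataframe())

    # Once satisfied, pass the final specification when setting the target.
    return project.set_target(target=target, partitioning_method=spec)
```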

  • When interacting with datetime partitioned projects, DatetimeModel can be used to access more information specific to models in datetime partitioned projects. See the documentation for more information on differences in the modeling workflow for datetime partitioned projects.

  • The advanced options available when setting the target have been extended to include the new parameters ‘offset’ and ‘exposure’ (part of the AdvancedOptions object) to allow specifying offset and exposure columns to apply to predictions generated by models within the project. See the user guide documentation in the webapp for more information on offset and exposure columns.

  • Blueprints can now be retrieved directly by project_id and blueprint_id via Blueprint.get.

  • Blueprint charts can now be retrieved directly by project_id and blueprint_id via BlueprintChart.get. If you already have an instance of Blueprint you can retrieve its chart using Blueprint.get_chart.

  • Model parameters can now be retrieved using ModelParameters.get. If you already have an instance of Model you can retrieve its parameters using Model.get_parameters.

  • Blueprint documentation can now be retrieved using Blueprint.get_documents. It will contain information about the task, its parameters and (when available) links and references to additional sources.

  • The DataRobot API now includes Reason Codes. You can now compute reason codes for prediction datasets. You are able to specify thresholds on which rows to compute reason codes for to speed up computation by skipping rows based on the predictions they generate. See the reason codes documentation for more information.


  • A new parameter has been added to the AdvancedOptions used with Project.set_target. By specifying accuracyOptimizedMb=True when creating AdvancedOptions, longer-running models that may have a high accuracy will be included in the autopilot and made available to run manually.

  • A new option for Project.create_type_transform_feature has been added which explicitly truncates data when casting numerical data as categorical data.

  • Added 2 new blenders for projects that use MAD or Weighted MAD as a metric. The MAE blender uses BFGS optimization to find linear weights for the blender that minimize mean absolute error (compared to the GLM blender, which finds linear weights that minimize RMSE), and the MAEL1 blender uses BFGS optimization to find linear weights that minimize MAE plus an L1 penalty on the coefficients (compared to the ENET blender, which minimizes RMSE plus a combination of the L1 and L2 penalties on the coefficients).


  • Fixed a bug (affecting Python 2 only) with printing any model (including frozen and prime models) whose model_type is not ascii.

  • FrozenModels were unable to correctly use methods inherited from Model. This has been fixed.

  • When calling get_result for a Job, ModelJob, or PredictJob that has errored, AsyncProcessUnsuccessfulError will now be raised instead of JobNotFinished, consistently with the behavior of get_result_when_complete.

Deprecation summary

  • Support for the experimental Recommender Problems projects has been removed. Any code relying on RecommenderSettings or the recommender_settings argument of Project.set_target and Project.start will error.

  • Project.update, deprecated in v2.2.32, has been removed in favor of specific updates: rename, unlock_holdout, set_worker_count.

Documentation changes

  • The link to Configuration from the Quickstart page has been fixed.



  • Fixed a bug (affecting Python 2 only) with printing blueprints whose names are not ascii.

  • Fixed an issue where the weights column (for weighted projects) did not appear in the advanced_options of a Project.


New features

  • Methods to work with blender models have been added. Use Project.blend method to create new blenders, Project.get_blenders to get the list of existing blenders and BlenderModel.get to retrieve a model with blender-specific information.

  • Projects created via the API can now use smart downsampling when setting the target by passing smart_downsampled and majority_downsampling_rate into the AdvancedOptions object used with Project.set_target. The smart sampling options used with an existing project will be available as part of Project.advanced_options.

  • Support for frozen models, which use tuning parameters from a parent model for more efficient training, has been added. Use Model.request_frozen_model to create a new frozen model, Project.get_frozen_models to get the list of existing frozen models and FrozenModel.get to retrieve a particular frozen model.


  • The inferred date format (e.g. “%Y-%m-%d %H:%M:%S”) is now included in the Feature object. For non-date features, it will be None.

  • When specifying the API endpoint in the configuration, the client will now behave correctly for endpoints with and without trailing slashes.


New features

  • The premium add-on product DataRobot Prime has been added. You can now approximate a model on the leaderboard and download executable code for it. See documentation for further details, or talk to your account representative if the feature is not available on your account.

  • (Only relevant for on-premise users with a Standalone Scoring cluster.) Methods (request_transferable_export and download_export) have been added to the Model class for exporting models (which will only work if model export is turned on). There is a new class ImportedModel for managing imported models on a Standalone Scoring cluster.

  • It is now possible to create projects from a WebHDFS, PostgreSQL, Oracle or MySQL data source. For more information see the documentation for the relevant Project classmethods: create_from_hdfs, create_from_postgresql, create_from_oracle and create_from_mysql.

  • Job.wait_for_completion, which waits for a job to complete without returning anything, has been added.


  • The client will now check the API version offered by the server specified in configuration, and give a warning if the client version is newer than the server version. The DataRobot server is always backwards compatible with old clients, but new clients may have functionality that is not implemented on older server versions. This issue mainly affects users with on-premise deployments of DataRobot.


  • Fixed an issue where Model.request_predictions might raise an error when predictions finished very quickly instead of returning the job.

API changes

  • To set the target with quickrun autopilot, call Project.set_target with mode=AUTOPILOT_MODE.QUICK instead of specifying quickrun=True.

Deprecation summary

  • Semi-automatic mode for autopilot has been deprecated and will be removed in 3.0. Use manual or fully automatic instead.

  • Use of the quickrun argument in Project.set_target has been deprecated and will be removed in 3.0. Use mode=AUTOPILOT_MODE.QUICK instead.

Configuration changes

  • It is now possible to control the SSL certificate verification by setting the parameter ssl_verify in the config file.

Documentation changes

  • The “Modeling Airline Delay” example notebook has been updated to work with the new 2.3 enhancements.

  • Documentation for the generic Job class has been added.

  • Class attributes are now documented in the API Reference section of the documentation.

  • The changelog now appears in the documentation.

  • There is a new section dedicated to configuration, which lists all of the configuration options and their meanings.


New features

  • The DataRobot API now includes Feature Impact, an approach to measuring the relevance of each feature that can be applied to any model. The Model class now includes methods request_feature_impact (which creates and returns a feature impact job) and get_feature_impact (which can retrieve completed feature impact results).

  • A new improved workflow for predictions now supports first uploading a dataset via Project.upload_dataset, then requesting predictions via Model.request_predictions. This allows us to better support predictions on larger datasets and non-ascii files.
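The two-step workflow might look like this (a sketch assuming an existing project, a trained model, and a local CSV path):

```python
def score_dataset(project, model, dataset_path):
    """Sketch: upload a dataset once, then request predictions against it."""
    # Step 1: upload the data; a PredictionDataset is returned.
    dataset = project.upload_dataset(dataset_path)

    # Step 2: request predictions from the model and wait for the result.
    predict_job = model.request_predictions(dataset.id)
    return predict_job.get_result_when_complete()
```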

  • Datasets previously uploaded for predictions (represented by the PredictionDataset class) can be listed via Project.get_datasets and retrieved and deleted via PredictionDataset.get and PredictionDataset.delete.

  • You can now create a new feature by re-interpreting the type of an existing feature in a project by using the Project.create_type_transform_feature method.

  • The Job class now includes a get method for retrieving a job and a cancel method for canceling a job.

  • All of the jobs classes (Job, ModelJob, PredictJob) now include the following new methods: refresh (for refreshing the data in the job object), get_result (for getting the completed resource resulting from the job), and get_result_when_complete (which waits until the job is complete and returns the results, or times out).
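The polling contract behind refresh and get_result_when_complete can be illustrated with a stand-in job object (purely illustrative; it is not the client's implementation):

```python
import time

class FakeJob:
    """Stand-in job that completes after a few refresh() calls."""

    def __init__(self, ticks_until_done=2):
        self._ticks = ticks_until_done
        self.status = "inprogress"
        self.result = None

    def refresh(self):
        # In the real client this re-fetches the job state from the server.
        self._ticks -= 1
        if self._ticks <= 0:
            self.status, self.result = "completed", {"score": 0.42}

    def get_result_when_complete(self, max_wait=5, poll_interval=0.0):
        deadline = time.monotonic() + max_wait
        while time.monotonic() < deadline:
            self.refresh()
            if self.status == "completed":
                return self.result
            time.sleep(poll_interval)
        raise TimeoutError("job did not finish within max_wait seconds")

result = FakeJob().get_result_when_complete()
```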

  • A new method Project.refresh can be used to update Project objects with the latest state from the server.

  • A new function datarobot.async.wait_for_async_resolution can be used to poll for the resolution of any generic asynchronous operation on the server.


  • The JOB_TYPE enum now includes FEATURE_IMPACT.

  • The QUEUE_STATUS enum now includes ABORTED and COMPLETED.

  • The Project.create method now has a read_timeout parameter which can be used to keep open the connection to DataRobot while an uploaded file is being processed. For very large files this time can be substantial. Appropriately raising this value can help avoid timeouts when uploading large files.

  • The method Project.wait_for_autopilot has been enhanced to error if the project enters a state where autopilot may not finish. This avoids a situation that existed previously where users could wait indefinitely on a project that was not going to finish. However, users are still responsible for making sure that a project has more than zero workers and that the queue is not paused.

  • Feature.get now supports retrieving features by feature name. (For backwards compatibility, feature IDs are still supported until 3.0.)

  • File paths that have unicode directory names can now be used for creating projects and PredictJobs. The filename itself must still be ascii, but containing directory names can have other encodings.

  • A more specific JobAlreadyRequested exception is now raised when a duplicate model fitting request is refused. Users can explicitly catch this exception if they want it to be ignored.

  • A file_name attribute has been added to the Project class, identifying the file name associated with the original project dataset. Note that if the project was created from a data frame, the file name may not be helpful.

  • The connect timeout for establishing a connection to the server can now be set directly. This can be done in the yaml configuration of the client, or directly in the code. The default timeout has been lowered from 60 seconds to 6 seconds, which will make detecting a bad connection happen much quicker.


  • Fixed a bug (affecting Python 2 only) with printing features and featurelists whose names are not ascii.

API changes

  • Job class hierarchy is rearranged to better express the relationship between these objects. See documentation for datarobot.models.job for details.

  • Featurelist objects now have a project_id attribute to indicate which project they belong to. Directly accessing the project attribute of a Featurelist object is now deprecated.

  • Support for INI-style configuration, which was deprecated in v2.1, has been removed. YAML is the only supported configuration format.

  • The Project.get_jobs method, which was deprecated in v2.1, has been removed. Users should use the Project.get_model_jobs method instead to get the list of model jobs.

Deprecation summary

  • PredictJob.create has been deprecated in favor of the alternate workflow using Model.request_predictions.

  • Feature.converter (used internally for object construction) has been made private.

  • Model.fetch_resource_data has been deprecated and will be removed in 3.0. To fetch a model from its ID, use Model.get.

  • The ability to use Feature.get with feature IDs (rather than names) is deprecated and will be removed in 3.0.

  • Instantiating a Project, Model, Blueprint, Featurelist, or Feature instance from a dict of data is now deprecated. Please use the from_data classmethod of these classes instead. Additionally, instantiating a Model from a tuple or by using the keyword argument data is also deprecated.

  • Use of the attribute Featurelist.project is now deprecated. You can use the project_id attribute of a Featurelist to instantiate a Project instance using Project.get.

  • Use of the attributes Model.project, Model.blueprint, and Model.featurelist are all deprecated now to avoid use of partially instantiated objects. Please use the ids of these objects instead.

  • Using a Project instance as an argument in Featurelist.get is now deprecated. Please use a project_id instead. Similarly, using a Project instance in Model.get is also deprecated, and a project_id should be used in its place.

Configuration changes

  • Previously it was possible (though unintended) that the client configuration could be mixed through environment variables, configuration files, and arguments to datarobot.Client. This logic is now simpler - please see the Getting Started section of the documentation for more information.
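The simplified precedence can be sketched as a resolver that consults one source at a time (the function name and the DATAROBOT_ environment-variable prefix are illustrative assumptions, not the client's actual implementation):

```python
import os

def resolve_setting(name, explicit=None, config=None):
    """Return the first configured value: an explicit argument, then an
    environment variable, then the configuration-file value (sketch)."""
    if explicit is not None:
        return explicit
    env_value = os.environ.get("DATAROBOT_" + name.upper())
    if env_value is not None:
        return env_value
    return (config or {}).get(name)

# An explicit argument wins over a value read from the config file:
endpoint = resolve_setting(
    "endpoint",
    explicit="https://app.example.com/api/v2",
    config={"endpoint": "https://file.example.com/api/v2"},
)
```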



  • Fixed a bug with non-ascii project names using the package with Python 2.

  • Fixed an error that occurred when printing projects that had been constructed from an ID only, or when printing models that had been constructed from a tuple (which impacted printing PredictJobs).

  • Fixed a bug with project creation from non-ascii file names. Project creation from non-ascii file names is not supported, so this now raises a more informative exception. The project name is no longer used as the file name in cases where we do not have a file name, which prevents non-ascii project names from causing problems in those circumstances.

  • Fixed a bug (affecting Python 2 only) with printing projects, features, and featurelists whose names are not ascii.


New features

  • Project.get_features and Feature.get methods have been added for feature retrieval.

  • A generic Job entity has been added for use in retrieving the entire queue at once. Calling Project.get_all_jobs will retrieve all (appropriately filtered) jobs from the queue. Those can be cancelled directly as generic jobs, or transformed into instances of the specific job class using ModelJob.from_job and PredictJob.from_job, which allow all functionality previously available via the ModelJob and PredictJob interfaces.

  • Model.train now supports featurelist_id and scoring_type parameters, similar to Project.train.


  • Deprecation warning filters have been updated. By default, a filter will be added ensuring that usage of deprecated features will display a warning once per new usage location. In order to hide deprecation warnings, a filter like warnings.filterwarnings('ignore', category=DataRobotDeprecationWarning) can be added to a script so no such warnings are shown. Watching for deprecation warnings to avoid reliance on deprecated features is recommended.
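For example, with a stand-in warning class (the stand-in keeps this sketch self-contained; the real DataRobotDeprecationWarning ships with the client):

```python
import warnings

class DataRobotDeprecationWarning(DeprecationWarning):
    """Stand-in for the client's deprecation warning class."""

# Hide the client's deprecation warnings while leaving others visible:
warnings.filterwarnings("ignore", category=DataRobotDeprecationWarning)

with warnings.catch_warnings(record=True) as caught:
    warnings.warn("old API", DataRobotDeprecationWarning)  # filtered out
    warnings.warn("something else", UserWarning)           # still emitted

messages = [str(w.message) for w in caught]
```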

  • If your client is misconfigured and does not specify an endpoint, the cloud production server is no longer used as the default as in many cases this is not the correct default.

  • This changelog is now included in the distributable of the client.


  • Fixed an issue where updating the global client would not affect existing objects with cached clients. Now the global client is used for every API call.

  • An issue where a mistyped filepath intended for a file upload could be treated as raw data has been resolved. An error will now be raised if the raw string content for modeling or predictions appears to be just a single line.

API changes

  • Use of username and password to authenticate is no longer supported - use an API token instead.

  • Usage of the start_time and finish_time parameters in Project.get_models is no longer supported, for either filtering or ordering of models.

  • The default value of the sample_pct parameter of Model.train is now None instead of 100. If the default value is used, models will be trained with all of the available training data based on the project configuration, rather than with the entire dataset (including holdout), as happened with the previous default value of 100.

  • order_by parameter of Project.list which was deprecated in v2.0 has been removed.

  • recommendation_settings parameter of Project.start which was deprecated in v0.2 has been removed.

  • Project.status method which was deprecated in v0.2 has been removed.

  • Project.wait_for_aim_stage method which was deprecated in v0.2 has been removed.

  • Delay, ConstantDelay, NoDelay, ExponentialBackoffDelay, RetryManager classes from retry module which were deprecated in v2.1 were removed.

  • Package renamed to datarobot.

Deprecation summary

  • Project.update deprecated in favor of specific updates: rename, unlock_holdout, set_worker_count.

Documentation changes

  • A new use case involving financial data has been added to the examples directory.

  • Added documentation for the partition methods.



  • In Python 2, using a unicode token to instantiate the client will now work correctly.



  • The minimum required version of trafaret has been upgraded to 0.7.1 to get around an incompatibility between it and setuptools.



  • The minimum required version of the requests_toolbelt package has been changed from 0.4 to 0.6.


New features

  • Default to reading YAML config file from ~/.config/datarobot/drconfig.yaml

  • Allow config_path argument to client

  • wait_for_autopilot method added to Project. This method can be used to block execution until autopilot has finished running on the project.

  • Support for specifying which featurelist to use with initial autopilot in Project.set_target

  • Project.get_predict_jobs method has been added, which looks up all prediction jobs for a project

  • Project.start_autopilot method has been added, which starts autopilot on specified featurelist

  • The schema for PredictJob in DataRobot API v2.1 now includes a message. This attribute has been added to the PredictJob class.

  • PredictJob.cancel now exists to cancel prediction jobs, mirroring ModelJob.cancel

  • Project.from_async is a new classmethod that can be used to wait for an asynchronous resolution during project creation. Most users will not need to know about it, since it is used behind the scenes in Project.create and Project.set_target, but power users who run into periodic connection errors can catch the new ProjectAsyncFailureError and decide whether to resume waiting for the async process to resolve


  • AUTOPILOT_MODE enum now uses string names for autopilot modes instead of numbers

Deprecation summary

  • ConstantDelay, NoDelay, ExponentialBackoffDelay, and RetryManager utils are now deprecated

  • INI-style config files are now deprecated (in favor of YAML config files)

  • Several functions in the utils submodule are now deprecated (they are being moved elsewhere and are not considered part of the public interface)

  • Project.get_jobs has been renamed Project.get_model_jobs for clarity and deprecated

  • Support for the experimental date partitioning has been removed from the DataRobot API, so it is being removed from the client immediately.

API changes

  • In several places where AppPlatformError was previously raised, TypeError, ValueError, or InputNotUnderstoodError is now used instead. With this change, one can safely assume that a caught AppPlatformError indicates an unexpected response from the server.

  • AppPlatformError has gained two new attributes: status_code, the HTTP status code of the unexpected response from the server, and error_code, a DataRobot-defined error code. error_code is not used by any routes in DataRobot API 2.1, but will be in the future. In cases where it is not provided, the instance of AppPlatformError will have error_code set to None.

  • Two new subclasses of AppPlatformError have been introduced, ClientError (for 400-level response status codes) and ServerError (for 500-level response status codes). These will make it easier to build automated tooling that can recover from periodic connection issues while polling.
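
    The split into 400-level and 500-level subclasses makes retry logic straightforward. A stand-in hierarchy mirroring the names described above (not the client's actual code):

    ```python
    class AppPlatformError(Exception):
        """Stand-in: unexpected response from the server."""
        def __init__(self, message, status_code=None, error_code=None):
            super().__init__(message)
            self.status_code = status_code
            self.error_code = error_code

    class ClientError(AppPlatformError):   # 400-level responses
        pass

    class ServerError(AppPlatformError):   # 500-level responses
        pass

    def raise_for_status(status_code):
        """Map an unexpected HTTP status to the appropriate exception."""
        if 400 <= status_code < 500:
            raise ClientError("client error", status_code=status_code)
        if 500 <= status_code < 600:
            raise ServerError("server error", status_code=status_code)

    # A ServerError is typically worth retrying while polling; a ClientError is not.
    try:
        raise_for_status(503)
    except ServerError as exc:
        retryable = True
        seen_status = exc.status_code
    ```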

  • If a ClientError or ServerError occurs during a call to Project.from_async, then a ProjectAsyncFailureError (a subclass of AsyncFailureError) will be raised. That exception will have the status_code of the unexpected response from the server, and the location that was being polled to wait for the asynchronous process to resolve.
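
    Resuming the wait after such a failure follows a simple pattern. The sketch below uses stand-in classes and a hypothetical wait_for_project helper to show the idea; the real Project.from_async signature may differ:

    ```python
    class AsyncFailureError(Exception):
        pass

    class ProjectAsyncFailureError(AsyncFailureError):
        """Stand-in: carries the status code and the polled location."""
        def __init__(self, message, status_code, async_location):
            super().__init__(message)
            self.status_code = status_code
            self.async_location = async_location

    def wait_for_project(poll, location, retries=2):
        """Resume polling when a transient 5xx failure is raised."""
        for attempt in range(retries + 1):
            try:
                return poll(location)
            except ProjectAsyncFailureError as exc:
                # Only retry transient server-side failures.
                if exc.status_code < 500 or attempt == retries:
                    raise
        raise AssertionError("unreachable")

    attempts = []
    def flaky_poll(location):
        attempts.append(location)
        if len(attempts) < 2:
            raise ProjectAsyncFailureError("poll failed", 502, location)
        return "project-id-123"

    result = wait_for_project(flaky_poll, "/status/abc/")
    ```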


New features

  • PredictJob class was added to work with prediction jobs

  • wait_for_async_predictions function added to predict_job module

Deprecation summary

  • The order_by parameter of Project.list is now deprecated.



  • Project.set_target will re-fetch the project data after it succeeds, keeping the client side in sync with the state of the project on the server

  • Project.create_featurelist now throws DuplicateFeaturesError exception if passed list of features contains duplicates
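
    The duplicate check itself is simple. A stand-in for the kind of validation Project.create_featurelist now performs (validate_featurelist is an illustrative name, not the client's internal helper):

    ```python
    class DuplicateFeaturesError(ValueError):
        pass

    def validate_featurelist(features):
        """Raise if the proposed featurelist contains duplicate names."""
        duplicates = {f for f in features if features.count(f) > 1}
        if duplicates:
            raise DuplicateFeaturesError(
                "Duplicated features: %s" % sorted(duplicates))
        return features

    validate_featurelist(["age", "income"])  # OK
    try:
        validate_featurelist(["age", "income", "age"])
    except DuplicateFeaturesError as exc:
        caught = str(exc)
    ```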

  • Project.get_models now supports snake_case arguments to its order_by keyword
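
    Accepting snake_case values for order_by implies a translation to the camelCase field names the API uses. A stand-in conversion (the client's actual mapping may handle more cases):

    ```python
    def snake_to_camel(name):
        """Convert a snake_case sort key to a camelCase API field name."""
        head, *rest = name.split("_")
        return head + "".join(word.capitalize() for word in rest)

    assert snake_to_camel("sample_pct") == "samplePct"
    assert snake_to_camel("metric") == "metric"
    ```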

Deprecation summary

  • Project.wait_for_aim_stage is now deprecated, as the REST Async flow is a more reliable method of determining that project creation has completed successfully

  • Project.status is deprecated in favor of Project.get_status

  • recommendation_settings parameter of Project.start is deprecated in favor of recommender_settings


  • Project.wait_for_aim_stage changed to support Python 3

  • Fixed incorrect value of SCORING_TYPE.cross_validation

  • Models returned by Project.get_models will now be correctly ordered when the order_by keyword is used


  • Pinned versions of required libraries


Official release of v0.2


  • Updated documentation

  • Renamed the name parameter of Project.create and Project.start to project_name

  • Removed Model.predict method

  • wait_for_async_model_creation function added to modeljob module

  • wait_for_async_status_service of Project class renamed to _wait_for_async_status_service

  • Can now use auth_token in config file to configure SDK


  • Fixes a method that pointed to a removed route


  • Added featurelist_id attribute to ModelJob class


  • Removes model attribute from ModelJob class


  • Project creation raises AsyncProjectCreationError if it was unsuccessful

  • Removed Model.list_prime_rulesets and Model.get_prime_ruleset methods

  • Removed Model.predict_batch method

  • Removed Project.create_prime_model method

  • Removed PrimeRuleSet model

  • Adds backwards compatibility bridge for ModelJob async

  • Adds ModelJob.get and ModelJob.get_model


  • Minor bugfixes in wait_for_async_status_service


  • Removes submit_model from Project until server-side implementation is improved

  • Switched training URLs to the new resource-based route at /projects/<project_id>/models/

  • Job renamed to ModelJob, and now uses the modelJobs route

  • Fixes an inconsistency in argument order for train methods


  • wait_for_async_status_service timeout increased from 60s to 600s


  • Project.create will now handle both async/sync project creation


  • All routes pluralized to sync with changes in API

  • Project.get_jobs will request all jobs when no parameter is specified

  • DataFrames returned by the predict method will have pythonic column names

  • Project.get_status created, Project.status now deprecated

  • Project.unlock_holdout created.

  • Added quickrun parameter to Project.set_target

  • Added modelCategory to Model schema

  • Add permalinks feature to Project and Model objects.

  • Project.create_prime_model created


  • Project.set_worker_count fix for compatibility with API change in project update.


  • Add positive class to set_target.

  • Changed attribute names of Project, Model, Job, and Blueprint:
    • features in Model, Job and Blueprint are now processes

    • dataset_id and dataset_name migrated to featurelist_id and featurelist_name.

    • samplepct -> sample_pct

  • Model now has blueprint, project, and featurelist attributes.

  • Minor bugfixes.


  • Minor fixes related to the renamed Job attributes: the features attribute is now processes, and samplepct is now sample_pct.


(May 27, 2015)

  • Minor fixes related to the API migration from under_score names to camelCase.


(May 20, 2015)

  • Removed the Project.upload_file, Project.upload_file_from_url, and Project.attach_file methods. All file-upload logic has been moved into Project.create.


(May 15, 2015)

  • Fixed excessive memory usage when uploading a file. Minor bugfixes.