Prompting

class datarobot.models.genai.chat.Chat

Bases: APIObject

Metadata for a DataRobot GenAI chat.

Variables:
  • id (str) – The chat ID.

  • name (str) – The chat name.

  • llm_blueprint_id (str) – The ID of the LLM blueprint associated with the chat.

  • is_frozen (bool) – Whether the chat is frozen. Prompts cannot be submitted to frozen chats.

  • creation_date (str) – The date when the chat was created.

  • creation_user_id (str) – The ID of the creating user.

  • warning (str or None, optional) – The warning about the contents of the chat.

  • prompts_count (int) – The number of chat prompts in the chat.

classmethod create(name, llm_blueprint)

Creates a new chat.

Parameters:
  • name (str) – The chat name.

  • llm_blueprint (LLMBlueprint or str) – The LLM blueprint associated with the created chat, either LLM blueprint or ID.

Returns:

chat – The created chat.

Return type:

Chat

classmethod get(chat)

Retrieve a single chat.

Parameters:

chat (Chat or str) – The chat you want to retrieve. Accepts chat or chat ID.

Returns:

chat – The requested chat.

Return type:

Chat

classmethod list(llm_blueprint=None, sort=None)

List all chats available to the user. If the LLM blueprint is specified, results are restricted to only those chats associated with the LLM blueprint.

Parameters:
  • llm_blueprint (Optional[Union[LLMBlueprint, str]], optional) – Returns only those chats associated with a particular LLM blueprint, specified by either the entity or the ID.

  • sort (Optional[str]) – The property to sort chats by. Prefix the attribute name with a dash ( - ) to sort responses in descending order (for example, ‘-name’). Supported options are listed in ListChatsSortQueryParams, but the values can differ depending on platform version. The default sort parameter is None, which results in chats returning in order of creation time, descending.

Returns:

chats – Returns a list of chats.

Return type:

list[Chat]
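The dash-prefix `sort` semantics documented above can be mimicked locally. The following is an illustrative stand-in for how the parameter is interpreted, not the client's or server's actual implementation; the dict keys are assumptions standing in for chat attributes:

```python
def sort_chats(chats, sort=None):
    """Sort chat-like dicts the way the ``sort`` parameter is documented:
    ``None`` means creation time descending; a leading dash means
    descending order on the named attribute."""
    if sort is None:
        return sorted(chats, key=lambda c: c["creation_date"], reverse=True)
    descending = sort.startswith("-")
    attr = sort.lstrip("-")
    return sorted(chats, key=lambda c: c[attr], reverse=descending)

chats = [
    {"name": "b", "creation_date": "2024-01-01"},
    {"name": "a", "creation_date": "2024-02-01"},
]
print([c["name"] for c in sort_chats(chats, "-name")])  # ['b', 'a']
```

The same convention applies to the `sort` parameters of `ComparisonChat.list` and `Playground.list`.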

delete()

Delete the single chat.

Return type:

None

update(name)

Update the chat.

Parameters:

name (str) – The new name for the chat.

Returns:

chat – The updated chat.

Return type:

Chat

class datarobot.models.genai.chat_prompt.ChatPrompt

Bases: APIObject

Metadata for a DataRobot GenAI chat prompt.

Variables:
  • id (str) – Chat prompt ID.

  • text (str) – The prompt text.

  • llm_blueprint_id (str) – ID of the LLM blueprint associated with the chat prompt.

  • llm_id (str) – ID of the LLM type. This must be one of the IDs returned by LLMDefinition.list for this user.

  • llm_settings (dict or None) – The LLM settings for the LLM blueprint. The specific keys allowed and the constraints on the values are defined in the response from LLMDefinition.list, but this typically has the following fields. Either:

    - system_prompt – The system prompt that influences the LLM responses.
    - max_completion_length – The maximum number of tokens in the completion.
    - temperature – Controls the variability in the LLM response.
    - top_p – Sets whether the model considers next tokens with top_p probability mass.

    or:

    - system_prompt – The system prompt that influences the LLM responses.
    - validation_id – The ID of the external model LLM validation.
    - external_llm_context_size – The external LLM’s context size, in tokens, for external model-based LLM blueprints.

  • creation_date (str) – The date the chat prompt was created.

  • creation_user_id (str) – ID of the creating user.

  • vector_database_id (str or None) – ID of the vector database associated with the LLM blueprint, if any.

  • vector_database_settings (VectorDatabaseSettings or None) – The settings for the vector database associated with the LLM blueprint, if any.

  • result_metadata (ResultMetadata or None) – Metadata for the result of the chat prompt submission.

  • result_text (str or None) – The result text from the chat prompt submission.

  • confidence_scores (ConfidenceScores or None) – The confidence scores if there is a vector database associated with the chat prompt.

  • citations (list[Citation]) – List of citations from text retrieved from the vector database, if any.

  • execution_status (str) – The execution status of the chat prompt.

  • chat_id (Optional[str]) – ID of the chat associated with the chat prompt.

  • chat_context_id (Optional[str]) – The ID of the chat context for the chat prompt.

  • chat_prompt_ids_included_in_history (Optional[list[str]]) – The IDs of the chat prompts included in the chat history for this chat prompt.

  • metadata_filter (Optional[Dict[str, Any]]) –

    The metadata filter to apply to the vector database.

    Supports:
    - None or empty dict (no filters): considers all documents
    - Multiple field filters (implicit AND): {"a": 1, "b": "b"}
    - Comparison operators: {"field": {"$gt": 5}}
    - Logical operators: {"$and": [...], "$or": [...]}
    - Nested combinations of the above

    Comparison operators:
    - $eq: equal to (string, int, float, bool)
    - $ne: not equal to (string, int, float, bool)
    - $gt: greater than (int, float)
    - $gte: greater than or equal to (int, float)
    - $lt: less than (int, float)
    - $lte: less than or equal to (int, float)
    - $in: value is in a list (string, int, float, bool)
    - $nin: value is not in a list (string, int, float, bool)
    - $contains: string contains a value (string)
    - $not_contains: string does not contain a value (string)
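The filter grammar above can be made concrete with a small, self-contained matcher. This is an illustrative sketch of the documented semantics applied to one document's metadata dict, not DataRobot's server-side implementation:

```python
# Maps each documented comparison operator to a predicate (value, target).
OPS = {
    "$eq": lambda v, t: v == t,
    "$ne": lambda v, t: v != t,
    "$gt": lambda v, t: v > t,
    "$gte": lambda v, t: v >= t,
    "$lt": lambda v, t: v < t,
    "$lte": lambda v, t: v <= t,
    "$in": lambda v, t: v in t,
    "$nin": lambda v, t: v not in t,
    "$contains": lambda v, t: t in v,
    "$not_contains": lambda v, t: t not in v,
}

def matches(metadata, flt):
    """Return True if a document's metadata satisfies the filter."""
    if not flt:                       # None or empty dict: all documents match
        return True
    for key, cond in flt.items():
        if key == "$and":             # every sub-filter must match
            if not all(matches(metadata, f) for f in cond):
                return False
        elif key == "$or":            # at least one sub-filter must match
            if not any(matches(metadata, f) for f in cond):
                return False
        elif isinstance(cond, dict):  # comparison operators on one field
            if not all(OPS[op](metadata.get(key), t) for op, t in cond.items()):
                return False
        elif metadata.get(key) != cond:  # bare value: implicit equality (AND)
            return False
    return True

doc = {"year": 2023, "source": "handbook"}
print(matches(doc, {"year": {"$gte": 2020}, "source": "handbook"}))          # True
print(matches(doc, {"$or": [{"year": {"$lt": 2000}}, {"source": "wiki"}]}))  # False
```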

classmethod create(text, llm_blueprint=None, chat=None, llm=None, llm_settings=None, vector_database=None, vector_database_settings=None, wait_for_completion=False, metadata_filter=None)

Create a new ChatPrompt. This submits the prompt text to the LLM. Either llm_blueprint or chat is required.

Parameters:
  • text (str) – The prompt text.

  • llm_blueprint (LLMBlueprint or str or None, optional) – The LLM blueprint associated with the created chat prompt, either LLMBlueprint or LLM blueprint ID.

  • chat (Chat or str or None, optional) – The chat associated with the created chat prompt, either Chat or chat ID.

  • llm (LLMDefinition, str, or None, optional) – The LLM to use for the chat prompt, either LLMDefinition or LLM ID.

  • llm_settings (dict or None) – LLM settings to use for the chat prompt. The specific keys allowed and the constraints on the values are defined in the response from LLMDefinition.list, but this typically has the following fields. Either:

    - system_prompt – The system prompt that tells the LLM how to behave.
    - max_completion_length – The maximum number of tokens in the completion.
    - temperature – Controls the variability in the LLM response.
    - top_p – Sets whether the model considers next tokens with top_p probability mass.

    or:

    - system_prompt – The system prompt that tells the LLM how to behave.
    - validation_id – The ID of the custom model LLM validation for custom model LLM blueprints.

  • vector_database (VectorDatabase, str, or None, optional) – The vector database to use with this chat prompt submission, either VectorDatabase or vector database ID.

  • vector_database_settings (VectorDatabaseSettings or None, optional) – Settings for the vector database, if any.

  • wait_for_completion (bool) – If set to True, the client waits for the chat prompt job to complete before returning the result (up to 10 minutes, raising a timeout error after that). Otherwise, check the current status by calling ChatPrompt.get with the returned ID.

  • metadata_filter (Optional[Dict[str, Any]]) –

    The metadata filter to apply to the vector database.

    Supports:
    - None or empty dict (no filters): considers all documents
    - Multiple field filters (implicit AND): {"a": 1, "b": "b"}
    - Comparison operators: {"field": {"$gt": 5}}
    - Logical operators: {"$and": [...], "$or": [...]}
    - Nested combinations of the above

    Comparison operators:
    - $eq: equal to (string, int, float, bool)
    - $ne: not equal to (string, int, float, bool)
    - $gt: greater than (int, float)
    - $gte: greater than or equal to (int, float)
    - $lt: less than (int, float)
    - $lte: less than or equal to (int, float)
    - $in: value is in a list (string, int, float, bool)
    - $nin: value is not in a list (string, int, float, bool)
    - $contains: string contains a value (string)
    - $not_contains: string does not contain a value (string)

Returns:

chat_prompt – The created chat prompt.

Return type:

ChatPrompt
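When `wait_for_completion=False`, the documented pattern is to poll `ChatPrompt.get` with the returned ID until the execution status is terminal. A generic polling sketch follows; `fetch_status` is a stand-in for a wrapper around the real `ChatPrompt.get` call, and the terminal status names used here are assumptions for illustration:

```python
import time

def wait_for_prompt(fetch_status, timeout_s=600, interval_s=2.0):
    """Poll an asynchronous prompt submission until it finishes.

    ``fetch_status`` is any callable returning the current execution
    status string (with the real client this would wrap
    ``ChatPrompt.get(prompt_id).execution_status``). The terminal
    status names below are illustrative assumptions.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in ("COMPLETED", "ERROR"):
            return status
        time.sleep(interval_s)
    raise TimeoutError("prompt did not finish in time")

# Simulated status sequence standing in for repeated ChatPrompt.get calls.
statuses = iter(["RUNNING", "RUNNING", "COMPLETED"])
print(wait_for_prompt(lambda: next(statuses), interval_s=0.0))  # COMPLETED
```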

update(custom_metrics=None, feedback_metadata=None)

Update the chat prompt.

Parameters:
  • custom_metrics (Optional[list[MetricMetadata]], optional) – The new custom metrics to add to the chat prompt.

  • feedback_metadata (Optional[FeedbackMetadata], optional) – The new feedback to add to the chat prompt.

Returns:

chat_prompt – The updated chat prompt.

Return type:

ChatPrompt

classmethod get(chat_prompt)

Retrieve a single chat prompt.

Parameters:

chat_prompt (ChatPrompt or str) – The chat prompt you want to retrieve, either ChatPrompt or chat prompt ID.

Returns:

chat_prompt – The requested chat prompt.

Return type:

ChatPrompt

classmethod list(llm_blueprint=None, playground=None, chat=None)

List all chat prompts available to the user. If the llm_blueprint, playground, or chat is specified, the results are restricted to the chat prompts associated with that entity.

Parameters:
  • llm_blueprint (Optional[Union[LLMBlueprint, str]], optional) – The returned chat prompts are filtered to those associated with a specific LLM blueprint if it is specified. Accepts either LLMBlueprint or LLM blueprint ID.

  • playground (Optional[Union[Playground, str]], optional) – The returned chat prompts are filtered to those associated with a specific playground if it is specified. Accepts either Playground or playground ID.

  • chat (Optional[Union[Chat, str]], optional) – The returned chat prompts are filtered to those associated with a specific chat if it is specified. Accepts either Chat or chat ID.

Returns:

chat_prompts – A list of chat prompts available to the user.

Return type:

list[ChatPrompt]

delete()

Delete the single chat prompt.

Return type:

None

create_llm_blueprint(name, description='')

Create a new LLM blueprint from an existing chat prompt.

Parameters:
  • name (str) – LLM blueprint name.

  • description (Optional[str]) – Description of the LLM blueprint, by default “”.

Returns:

llm_blueprint – The created LLM blueprint.

Return type:

LLMBlueprint

class datarobot.models.genai.comparison_chat.ComparisonChat

Bases: APIObject

Metadata for a DataRobot GenAI comparison chat.

Variables:
  • id (str) – The comparison chat ID.

  • name (str) – The comparison chat name.

  • playground_id (str) – The ID of the playground associated with the comparison chat.

  • creation_date (str) – The date when the comparison chat was created.

  • creation_user_id (str) – The ID of the creating user.

classmethod create(name, playground)

Creates a new comparison chat.

Parameters:
  • name (str) – The comparison chat name.

  • playground (Playground or str) – The playground associated with the created comparison chat, either Playground or playground ID.

Returns:

comparison_chat – The created comparison chat.

Return type:

ComparisonChat

classmethod get(comparison_chat)

Retrieve a single comparison chat.

Parameters:

comparison_chat (ComparisonChat or str) – The comparison chat you want to retrieve. Accepts ComparisonChat or comparison chat ID.

Returns:

comparison_chat – The requested comparison chat.

Return type:

ComparisonChat

classmethod list(playground=None, sort=None)

List all comparison chats available to the user. If the playground is specified, results are restricted to only those comparison chats associated with the playground.

Parameters:
  • playground (Optional[Union[Playground, str]], optional) – Returns only those comparison chats associated with a particular playground, specified by either the Playground or the playground ID.

  • sort (Optional[str]) – The property to sort comparison chats by. Prefix the attribute name with a dash ( - ) to sort responses in descending order, (for example, ‘-name’). Supported options are listed in ListComparisonChatsSortQueryParams, but the values can differ depending on platform version. The default sort parameter is None, which results in comparison chats returning in order of creation time, descending.

Returns:

comparison_chats – Returns a list of comparison chats.

Return type:

list[ComparisonChat]

delete()

Delete the single comparison chat.

Return type:

None

update(name)

Update the comparison chat.

Parameters:

name (str) – The new name for the comparison chat.

Returns:

comparison_chat – The updated comparison chat.

Return type:

ComparisonChat

class datarobot.models.genai.comparison_prompt.ComparisonPrompt

Bases: APIObject

Metadata for a DataRobot GenAI comparison prompt.

Variables:
  • id (str) – Comparison prompt ID.

  • text (str) – The prompt text.

  • results (list[ComparisonPromptResult]) – The list of results for individual LLM blueprints that are part of the comparison prompt.

  • creation_date (str) – The date when the comparison prompt was created.

  • creation_user_id (str) – ID of the creating user.

  • comparison_chat_id (str) – The ID of the comparison chat this comparison prompt is associated with.

  • metadata_filter (Optional[Dict[str, Any]]) –

    The metadata filter to apply to the vector database.

    Supports:
    - None or empty dict (no filters): considers all documents
    - Multiple field filters (implicit AND): {"a": 1, "b": "b"}
    - Comparison operators: {"field": {"$gt": 5}}
    - Logical operators: {"$and": [...], "$or": [...]}
    - Nested combinations of the above

    Comparison operators:
    - $eq: equal to (string, int, float, bool)
    - $ne: not equal to (string, int, float, bool)
    - $gt: greater than (int, float)
    - $gte: greater than or equal to (int, float)
    - $lt: less than (int, float)
    - $lte: less than or equal to (int, float)
    - $in: value is in a list (string, int, float, bool)
    - $nin: value is not in a list (string, int, float, bool)
    - $contains: string contains a value (string)
    - $not_contains: string does not contain a value (string)

update(additional_llm_blueprints=None, wait_for_completion=False, feedback_result=None, **kwargs)

Update the comparison prompt.

Parameters:

additional_llm_blueprints (list[LLMBlueprint or str]) – The additional LLM blueprints to which you want to submit the comparison prompt, either LLM blueprints or IDs.

Returns:

comparison_prompt – The updated comparison prompt.

Return type:

ComparisonPrompt

classmethod create(llm_blueprints, text, comparison_chat=None, wait_for_completion=False, metadata_filter=None)

Create a new ComparisonPrompt. This submits the prompt text to the LLM blueprints that are specified.

Parameters:
  • llm_blueprints (list[LLMBlueprint or str]) – The LLM blueprints associated with the created comparison prompt. Accepts LLM blueprints or IDs.

  • text (str) – The prompt text.

  • comparison_chat (Optional[ComparisonChat or str], optional) – The comparison chat to add the comparison prompt to. Accepts ComparisonChat or comparison chat ID.

  • wait_for_completion (bool) – If set to True, the client waits for the comparison prompt job to complete before returning the result (up to 10 minutes, raising a timeout error after that). Otherwise, check the current status by calling ComparisonPrompt.get with the returned ID.

  • metadata_filter (Optional[Dict[str, Any]]) –

    The metadata filter to apply to the vector database.

    Supports:
    - None or empty dict (no filters): considers all documents
    - Multiple field filters (implicit AND): {"a": 1, "b": "b"}
    - Comparison operators: {"field": {"$gt": 5}}
    - Logical operators: {"$and": [...], "$or": [...]}
    - Nested combinations of the above

    Comparison operators:
    - $eq: equal to (string, int, float, bool)
    - $ne: not equal to (string, int, float, bool)
    - $gt: greater than (int, float)
    - $gte: greater than or equal to (int, float)
    - $lt: less than (int, float)
    - $lte: less than or equal to (int, float)
    - $in: value is in a list (string, int, float, bool)
    - $nin: value is not in a list (string, int, float, bool)
    - $contains: string contains a value (string)
    - $not_contains: string does not contain a value (string)

Returns:

comparison_prompt – The created comparison prompt.

Return type:

ComparisonPrompt

classmethod get(comparison_prompt)

Retrieve a single comparison prompt.

Parameters:

comparison_prompt (ComparisonPrompt or str) – The comparison prompt you want to retrieve. Accepts ComparisonPrompt or comparison prompt ID.

Returns:

comparison_prompt – The requested comparison prompt.

Return type:

ComparisonPrompt

classmethod list(llm_blueprints=None, comparison_chat=None)

List all comparison prompts available to the user that are associated with the specified LLM blueprints or with the specified comparison chat.

Parameters:
  • llm_blueprints (Optional[List[Union[LLMBlueprint, str]]], optional) – The returned comparison prompts are only those associated with the specified LLM blueprints. Accepts either LLMBlueprint or LLM blueprint ID.

  • comparison_chat (Optional[Union[ComparisonChat, str]], optional) – The returned comparison prompts are only those associated with the specified comparison chat. Accepts either ComparisonChat or comparison chat ID.

Returns:

comparison_prompts – A list of comparison prompts available to the user that use the specified LLM blueprints.

Return type:

list[ComparisonPrompt]

delete()

Delete the single comparison prompt.

Return type:

None

class datarobot.models.genai.playground.Playground

Bases: APIObject

Metadata for a DataRobot GenAI playground.

Variables:
  • id (str) – Playground ID.

  • name (str) – Playground name.

  • description (str) – Description of the playground.

  • use_case_id (str) – Linked use case ID.

  • creation_date (str) – The date when the playground was created.

  • creation_user_id (str) – ID of the creating user.

  • last_update_date (str) – Date when the playground was most recently updated.

  • last_update_user_id (str) – ID of the user who most recently updated the playground.

  • saved_llm_blueprints_count (int) – Number of saved LLM blueprints in the playground.

  • llm_blueprints_count (int) – Number of LLM blueprints in the playground.

  • user_name (str) – The name of the user who created the playground.

classmethod create(name, description='', use_case=None, copy_insights=None)

Create a new playground.

Parameters:
  • name (str) – Playground name.

  • description (Optional[str]) – Description of the playground, by default “”.

  • use_case (Optional[Union[UseCase, str]], optional) – Use case to link to the created playground.

  • copy_insights (CopyInsightsRequest, optional) – If present, copies insights from the source playground to the created playground.

Returns:

playground – The created playground.

Return type:

Playground

classmethod get(playground_id)

Retrieve a single playground.

Parameters:

playground_id (str) – The ID of the playground you want to retrieve.

Returns:

playground – The requested playground.

Return type:

Playground

classmethod list(use_case=None, search=None, sort=None)

List all playgrounds available to the user. If the use_case is specified or can be inferred from the Context, the results are restricted to the playgrounds associated with that Use Case.

Parameters:
  • use_case (Optional[UseCaseLike], optional) – The returned playgrounds are filtered to those associated with a specific Use Case or Cases if specified or can be inferred from the Context. Accepts either the entity or the ID.

  • search (Optional[str]) – String for filtering playgrounds. Playgrounds that contain the string in name will be returned. If not specified, all playgrounds will be returned.

  • sort (Optional[str]) – The property to sort playgrounds by. Prefix the attribute name with a dash ( - ) to sort responses in descending order (for example, ‘-creationDate’). Supported options are listed in ListPlaygroundsSortQueryParams, but the values can differ depending on platform version. The default sort parameter is None, which results in playgrounds returning in order of creation time, descending.

Returns:

playgrounds – A list of playgrounds available to the user.

Return type:

list[Playground]

update(name=None, description=None)

Update the playground.

Parameters:
  • name (str) – The new name for the playground.

  • description (str) – The new description for the playground.

Returns:

playground – The updated playground.

Return type:

Playground

delete()

Delete the playground.

Return type:

None

class datarobot.enums.PromptType

Bases: StrEnum

Supported LLM blueprint prompting types.

ONE_TIME_PROMPT = 'ONE_TIME_PROMPT'

CHAT_HISTORY_AWARE = 'CHAT_HISTORY_AWARE'

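Because PromptType is a string enum, its members compare equal to their plain string values and can be passed wherever a string is expected. A minimal stand-in (subclassing str and Enum, which is equivalent to StrEnum for this purpose and works on Python versions before 3.11):

```python
from enum import Enum

class PromptType(str, Enum):
    """Stand-in mirroring the documented members; the real definition
    lives in datarobot.enums.PromptType."""
    ONE_TIME_PROMPT = "ONE_TIME_PROMPT"
    CHAT_HISTORY_AWARE = "CHAT_HISTORY_AWARE"

# String-enum members compare equal to plain strings:
print(PromptType.CHAT_HISTORY_AWARE == "CHAT_HISTORY_AWARE")  # True
```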
class datarobot.models.genai.user_limits.UserLimits

Bases: APIObject

Counts for user limits for LLM APIs and vector databases.

classmethod get_vector_database_count()

Get the count of vector databases for the user.

Return type:

APIObject

classmethod get_llm_requests_count()

Get the count of LLM requests made by the user.

Return type:

APIObject

class datarobot.models.genai.chat_prompt.ResultMetadata

Bases: APIObject

Metadata for the result of a chat prompt submission.

Variables:
  • output_token_count (int) – The number of tokens in the output.

  • input_token_count (int) – The number of tokens in the input. This includes the chat history and documents retrieved from a vector database, if any.

  • total_token_count (int) – The total number of tokens processed.

  • estimated_docs_token_count (int) – The estimated number of tokens from the documents retrieved from a vector database, if any.

  • latency_milliseconds (int) – The latency of the chat prompt submission in milliseconds.

  • feedback_result (FeedbackResult) – The lists of user_ids providing positive and negative feedback.

  • metrics (MetricMetadata) – The evaluation metrics for this prompt.

  • final_prompt (Optional[Union[str, dict]], optional) – Representation of the final prompt sent to the LLM.

  • error_message (str or None, optional) – The error message from the LLM response.

  • cost (float or None, optional) – The cost of the chat prompt submission.

class datarobot.models.genai.prompt_trace.PromptTrace

Bases: APIObject

Prompt trace contains aggregated information about a prompt execution.

Variables:
  • timestamp (str) – The timestamp of the trace (ISO 8601 formatted).

  • user (dict) – The user who submitted the prompt.

  • chat_prompt_id (str) – The ID of the chat prompt associated with the trace.

  • use_case_id (str) – The ID of the Use Case the playground is in.

  • comparison_prompt_id (str) – The ID of the comparison prompts associated with the trace.

  • llm_blueprint_id (str) – The ID of the LLM blueprint that the prompt was submitted to.

  • llm_blueprint_name (str) – The name of the LLM blueprint.

  • llm_name (str) – The name of the LLM in the LLM blueprint.

  • llm_vendor (str) – The vendor name of the LLM.

  • llm_license (str) – What type of license the LLM has.

  • llm_settings (dict or None) – The LLM settings for the LLM blueprint. The specific keys allowed and the constraints on the values are defined in the response from LLMDefinition.list, but this typically has the following fields. Either:

    - system_prompt – The system prompt that influences the LLM responses.
    - max_completion_length – The maximum number of tokens in the completion.
    - temperature – Controls the variability in the LLM response.
    - top_p – Sets whether the model considers next tokens with top_p probability mass.

    or:

    - system_prompt – The system prompt that influences the LLM responses.
    - validation_id – The ID of the external model LLM validation.
    - external_llm_context_size – The external LLM’s context size, in tokens, for external model-based LLM blueprints.

  • chat_name (str or None) – The name of the chat associated with the Trace.

  • chat_id (str or None) – The ID of the chat associated with the Trace.

  • vector_database_id (str or None) – ID of the vector database associated with the LLM blueprint, if any.

  • vector_database_settings (VectorDatabaseSettings or None) – The settings for the vector database associated with the LLM blueprint, if any.

  • result_metadata (ResultMetadata or None) – Metadata for the result of the prompt submission.

  • result_text (str or None) – The result text from the prompt submission.

  • confidence_scores (ConfidenceScores or None) – The confidence scores if there is a vector database associated with the prompt.

  • text (str) – The prompt text submitted to the LLM.

  • execution_status (str) – The execution status of the chat prompt.

  • evaluation_dataset_configuration_id (str or None) – The ID of the evaluation dataset configuration associated with the trace.

classmethod list(playground)

List all prompt traces for a playground.

Parameters:

playground (str) – The ID of the playground to list prompt traces for.

Returns:

prompt_traces – List of prompt traces for the playground.

Return type:

list[PromptTrace]
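Once listed, traces can be aggregated locally, for example to compare mean latency across LLM blueprints. The sketch below works over plain dicts shaped like the fields documented above (`llm_blueprint_id`, `result_metadata`, `latency_milliseconds`); with the real client you would read the same attributes from the PromptTrace objects returned by `PromptTrace.list`:

```python
from collections import defaultdict

def mean_latency_by_blueprint(traces):
    """Average result_metadata latency per llm_blueprint_id, skipping
    traces whose submission failed and therefore has no result metadata."""
    sums = defaultdict(lambda: [0, 0])  # blueprint ID -> [total_ms, count]
    for t in traces:
        meta = t.get("result_metadata")
        if meta is None:
            continue
        bucket = sums[t["llm_blueprint_id"]]
        bucket[0] += meta["latency_milliseconds"]
        bucket[1] += 1
    return {bp: total / n for bp, (total, n) in sums.items()}

traces = [
    {"llm_blueprint_id": "bp1", "result_metadata": {"latency_milliseconds": 800}},
    {"llm_blueprint_id": "bp1", "result_metadata": {"latency_milliseconds": 1200}},
    {"llm_blueprint_id": "bp2", "result_metadata": None},
]
print(mean_latency_by_blueprint(traces))  # {'bp1': 1000.0}
```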

classmethod export_to_ai_catalog(playground)

Export prompt traces to AI Catalog as a CSV.

Parameters:

playground (str) – The ID of the playground to export prompt traces for.

Returns:

status_url – The URL where the status of the job can be monitored.

Return type:

str

class datarobot.models.genai.prompt_trace.TraceMetadata

Bases: APIObject

Trace metadata contains information about all the users and chats that are relevant to this playground.

Variables:

users (list[dict]) – The users who submitted the prompt.

classmethod get(playground)

Get trace metadata for a playground.

Parameters:

playground (str) – The ID of the playground to get trace metadata for.

Returns:

trace_metadata – The trace metadata for the playground.

Return type:

TraceMetadata