Integration Service Activities
Last updated May 9, 2024

Generate Chat Completion

Description

Generate a chat response for the provided request using chat completion models.

Project compatibility

Windows | Cross-platform

Configuration

  • Connection ID - The connection established in Integration Service. Access the drop-down menu to choose, add, or manage connections.

  • Model name - The name or ID of the model or deployment to use for the chat completion. Select an option from the available drop-down list, featuring several Azure OpenAI and Google Vertex models.
  • Prompt - The user prompt for the chat completion request. This field supports String type input.
  • PII detection - Whether to detect PII in the input prompt. Boolean value. Default value is False.
    • PII filtering - If set to True, any detected PII/PHI is masked before being sent to the LLM. If set to False, the detected PII is included in the prompt. In both cases, the detected PII is available in the output. Note that masking may impact the quality of the output. This field is displayed if PII detection is set to True.
    • PII language - The language of the prompt input and output to scan for PII. This field is displayed if PII detection is set to True.
    • PII/PHI category - The optional PII/PHI category or categories to analyze for. If not set, all categories are reviewed. This field is displayed if PII detection is set to True.
  • System prompt - The system prompt or context instruction of the chat completion request. This field supports String type input.
  • Context grounding - Insert context into prompt from an existing index (Orchestrator bucket) or from a file. Select one of the available options from the drop-down menu: None, Existing index, File resource.
    • Index - Name of the index to reference. This field is displayed if Context grounding is set to Existing index.
    • File - Click to use variable. This field supports IResource type input. This field is displayed if Context grounding is set to File resource.
    • Number of results - Indicates the number of results to be returned.
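To make the configuration fields above concrete, here is a minimal Python sketch of how they might map onto a generic chat-completions request, including the PII detection and filtering behavior described above. The `mask_pii` helper, the regex category patterns, and the model name are all hypothetical illustrations, not the activity's actual implementation:

```python
import re

def mask_pii(text, patterns):
    """Mask detected PII spans with a placeholder, returning the masked
    text plus the list of detections (mirroring the activity, which
    exposes detected PII in the output even when filtering is on)."""
    detected = []
    for label, pattern in patterns.items():
        for match in re.findall(pattern, text):
            detected.append((label, match))
        text = re.sub(pattern, f"<{label}>", text)
    return text, detected

# Hypothetical category patterns; the real activity supports many more.
PII_PATTERNS = {
    "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "PHONE": r"\b\d{3}-\d{3}-\d{4}\b",
}

def build_request(model, prompt, system_prompt=None,
                  pii_detection=False, pii_filtering=True):
    """Assemble a chat-completions payload from the activity's fields."""
    detected = []
    if pii_detection:
        masked, detected = mask_pii(prompt, PII_PATTERNS)
        if pii_filtering:
            prompt = masked  # only the masked text reaches the LLM
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": prompt})
    return {"model": model, "messages": messages}, detected

request, detected = build_request(
    model="gpt-4o",
    prompt="Email john@example.com about the invoice.",
    system_prompt="You are a helpful assistant.",
    pii_detection=True,
)
print(request["messages"][1]["content"])  # Email <EMAIL> about the invoice.
print(detected)                           # [('EMAIL', 'john@example.com')]
```

Note how, with PII filtering enabled, the masked prompt is what gets sent while the raw detections remain available separately, matching the field descriptions above.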
Manage Properties

Use the Manage Properties wizard to configure or use any of the object's standard or custom fields. You can select fields to add them to the activity canvas. The added standard or custom fields are available in the Properties panel (in Studio Desktop) or under Show additional options (in Studio Web).

Additional options
  • Maximum tokens count - The maximum number of tokens to generate in the completion. The token count of your prompt plus those from the result/completion cannot exceed the value provided for this field. It's best to set this value to be less than the model maximum count so as to have some room for the prompt token count. Default value is 1024. This field supports Int64 type input.
  • Temperature - The value of the creativity factor or sampling temperature to use. Higher values mean the model takes more risks. Try 0.9 for more creative responses or completions, or 0 (also called argmax sampling) for ones with a well-defined or more exact answer. The general recommendation is to alter either this value or the Nucleus Sample value from its default, but not both. Default value is 1.
  • Completion choices count - The number of completion choices to generate for the request. Higher values consume more tokens and therefore incur a higher cost, so keep this in mind when setting the value. Defaults to 1.
  • Stop sequence - Up to four sequences where the API will stop generating further tokens. The returned text does not contain the stop sequence. Default value is null.
  • Presence penalty - Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. Default value is 0.
  • Frequency penalty - Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. Default value is 0.
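The options above carry validity constraints worth spelling out: the prompt's token count plus Maximum tokens count must fit within the model's context window, at most four stop sequences are allowed, and both penalties are bounded to [-2.0, 2.0]. A minimal sketch of that validation, assuming hypothetical context-window sizes (check your model's documentation for the real limits):

```python
# Hypothetical context-window sizes; not taken from this documentation.
MODEL_CONTEXT = {"gpt-4o": 128_000, "gpt-35-turbo": 16_385}

def generation_options(model, prompt_tokens, max_tokens=1024,
                       temperature=1.0, n=1, stop=None,
                       presence_penalty=0.0, frequency_penalty=0.0):
    """Assemble the optional generation parameters, enforcing the
    constraints described in the Additional options section."""
    context = MODEL_CONTEXT[model]
    # Prompt tokens plus completion tokens cannot exceed the model max.
    if prompt_tokens + max_tokens > context:
        raise ValueError(
            f"prompt ({prompt_tokens}) + max_tokens ({max_tokens}) "
            f"exceeds the {context}-token context window")
    if stop and len(stop) > 4:
        raise ValueError("at most four stop sequences are allowed")
    if not -2.0 <= presence_penalty <= 2.0:
        raise ValueError("presence_penalty must be in [-2.0, 2.0]")
    if not -2.0 <= frequency_penalty <= 2.0:
        raise ValueError("frequency_penalty must be in [-2.0, 2.0]")
    return {
        "max_tokens": max_tokens, "temperature": temperature,
        "n": n, "stop": stop,
        "presence_penalty": presence_penalty,
        "frequency_penalty": frequency_penalty,
    }

opts = generation_options("gpt-35-turbo", prompt_tokens=500,
                          temperature=0.9, stop=["\n\n"])
print(opts["max_tokens"])  # 1024 (the default)
```

Leaving headroom below the model maximum, as recommended above, means the prompt can grow without the call failing the token-budget check.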
Output
  • Top generated text - The generated text. Automatically generated output variable.
  • Generate Chat completion - Automatically generated output variable.
