Integration Service Activities
Last updated Apr 23, 2024

Generate Chat Completion

Description

Generate a chat response for the provided request using chat completion models.

Project compatibility

Windows | Cross-platform

Configuration

  • Connection ID - The connection established in Integration Service. Access the drop-down menu to choose, add, or manage connections.

  • Model name - The name or ID of the model or deployment to use for the chat completion. Select a model from the drop-down list, which includes several Azure OpenAI and Google Vertex models.
  • Prompt - The user prompt for the chat completion request. This field supports String type input.
Manage Properties

Use the Manage Properties wizard to configure or use any of the object's standard or custom fields. You can select fields to add them to the activity canvas. The added standard or custom fields are available in the Properties panel (in Studio Desktop) or under Show additional options (in Studio Web).

Additional options
  • System prompt - The system prompt or context instruction of the chat completion request. This field supports String type input.
  • Maximum tokens count - The maximum number of tokens to generate in the completion. The token count of your prompt plus the tokens in the result/completion cannot exceed the value provided for this field. Set this value below the model's maximum token count to leave room for the prompt tokens. Default value is 1024. This field supports Int64 type input.
  • Temperature - The value of the creativity factor or sampling temperature to use. Higher values mean the model takes more risks. Try 0.9 for more creative responses or completions, or 0 (also called argmax sampling) for ones with a well-defined or more exact answer. The general recommendation is to alter either this value or the Nucleus Sample value from its default, but not both. Default value is 1.
  • Completion choices count - The number of completion choices to generate for the request. The higher the value, the more tokens are used, which results in a higher cost; keep this in mind when setting the value of this field. Defaults to 1.
  • Stop sequence - Up to four sequences where the API will stop generating further tokens. The returned text does not contain the stop sequence. Default value is null.
  • Presence penalty - Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. Default value is 0.
  • Frequency penalty - Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. Default value is 0.
Output
  • Top generated text - The generated text. Automatically generated output variable.
  • Generate Chat completion - Automatically generated output variable.
