<ms-einstein-ai:agent-define-prompt-template
doc:name="Agent define prompt template"
doc:id="f1c29c39-eac9-468c-9c46-4109a66303ec"
config-ref="Einstein_AI"
template="#[payload.template]"
instructions="#[payload.instructions]"
dataset="#[payload.dataset]"
/>
Configuring Agent Operations for Einstein AI Connector
Configure the Agent Define Prompt Template Operation
Use the Agent define prompt template operation to define specific prompt templates for use when integrating with the LLM of your choice. This allows you to create natural language prompts that generate responses, extract information, invoke other prompts, or perform any text-based task.
1. Select the operation on the Anypoint Code Builder or Studio canvas.
2. In the General properties tab for the operation, enter these values:
- Template: Enter the prompt template for the operation.
- Instructions: Provide the instructions for the LLM, outlining the goals of the task.
- Dataset: Specify the dataset for the LLM to evaluate using the provided template and instructions.
3. In Additional properties, enter these values:
- Model name: Select the model name. The default is OpenAI GPT 3.5 Turbo.
- Probability: Enter the probability of the model staying accurate. The default is 0.8.
- Locale: Enter the localization information, which can include the default locale, input locale(s), and expected output locale(s). The default is en_US.
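The operation reads its template, instructions, and dataset from the message payload, per the #[payload.…] expressions in the operation's XML. As an illustrative sketch (not a definitive implementation), a flow could build that payload with a Transform Message step before the operation; the literal values below are hypothetical:

```xml
<flow name="agent-define-prompt-template-flow">
    <!-- Build the payload the operation expects. The field names match the
         #[payload.template], #[payload.instructions], and #[payload.dataset]
         expressions in the operation configuration; values are illustrative. -->
    <ee:transform doc:name="Build prompt payload">
        <ee:message>
            <ee:set-payload><![CDATA[%dw 2.0
output application/json
---
{
    template: "Classify the following customer feedback as positive or negative.",
    instructions: "Respond with a JSON object containing a type and a short reply.",
    dataset: "The training last week was a great experience!"
}]]></ee:set-payload>
        </ee:message>
    </ee:transform>
    <ms-einstein-ai:agent-define-prompt-template
        doc:name="Agent define prompt template"
        config-ref="Einstein_AI"
        template="#[payload.template]"
        instructions="#[payload.instructions]"
        dataset="#[payload.dataset]"/>
</flow>
```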
The XML for this operation is shown at the beginning of this page.
Output Configuration
This operation responds with a JSON payload that contains the main LLM response. Metadata such as token usage is returned as attributes, not within the main payload. This metadata is particularly useful for tracking token usage across applications and managing the costs associated with LLM adoption.
This is an example response:
{
"response": "{\n \"type\": \"positive\",\n \"response\": \"Thank you for your positive feedback on the training last week. We are glad to hear that you had a great experience. Have a nice day!\"\n}"
}
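Note that the response field is itself a JSON string (the inner JSON is escaped). In a downstream DataWeave step it can be parsed with the read() function; a minimal sketch:

```dataweave
%dw 2.0
output application/json
---
// Parse the escaped inner JSON carried in the "response" field
read(payload.response, "application/json")
```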
The operation also returns attributes outside the main JSON payload, which include token usage information, for example:
{
  "tokenUsage": {
    "outputCount": 9,
    "totalCount": 18,
    "inputCount": 9
  },
  "additionalAttributes": {}
}
- tokenUsage: Token usage metadata, returned as attributes.
  - outputCount: Number of tokens used to generate the output.
  - totalCount: Total number of tokens used for input and output.
  - inputCount: Number of tokens used to process the input.
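Because token usage arrives as attributes rather than in the payload, it can be captured immediately after the operation. A sketch using a Logger component (the expression assumes the attribute names shown above):

```xml
<!-- Log token usage returned in the operation's attributes.
     totalCount is the sum of inputCount and outputCount. -->
<logger level="INFO" doc:name="Log token usage"
        message="#['Tokens used: ' ++ (attributes.tokenUsage.totalCount as String)]"/>
```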