Configuring Text Generation Operations

Configure the [Agent] Define Prompt Template, [Chat] Answer Prompt, [Chat] Completions, and [Tools] Native Template (Reasoning only) operations.

Configure the Agent Define Prompt Template Operation

The [Agent] Define Prompt Template operation configures specific prompt templates for the LLM of your choice. This operation enables you to define and compose AI functions using plain text, so you can create natural language prompts, generate responses, extract information, invoke other prompts, or perform any text-based task.

Apply the [Agent] Define Prompt Template operation in various scenarios, such as for:

  • Customer Service Agents

    Enhance customer service by providing case summaries, case classifications, summarizing large datasets, and more.

  • Sales Operation Agents

    Assist sales teams in writing sales emails, summarizing cases for specific accounts, assessing the probability of closing deals, and more.

  • Marketing Agents

    Support marketing teams in generating product descriptions, creating newsletters, planning social media campaigns, and more.

To configure the [Agent] Define Prompt Template operation:

  1. Select the operation on the Anypoint Code Builder or Studio canvas.

  2. In the General properties tab for the operation, enter these values:

    • Template

      Enter the prompt template for the operation.

    • Instructions

      Provide the instructions for the inference provider, outlining the goals of the task.

    • Data

      Specify the data for the inference provider to evaluate using the provided template and instructions.

This is the XML for this operation:

<ms-inference:agent-define-prompt-template doc:name="Agent define prompt template" doc:id="5944353c-c784-4268-9f16-c036e5eaf8e3" config-ref="OpenAIConfig" >
	<ms-inference:template ><![CDATA[You are a customer satisfaction agent, who analyses the customer feedback in the dataset. Answer via json output and add a type for the result only with positive or negative as well as the complete answer]]></ms-inference:template>
	<ms-inference:instructions ><![CDATA[If the customer feedback in the dataset is negative, open a service satisfaction case and apologize to the customer. If the customer feedback in the dataset is positive, thank the customer and wish them a nice day. Do not repeat the feedback and be more direct starting the conversation with formal greetings]]></ms-inference:instructions>
	<ms-inference:data ><![CDATA[The training last week was amazing, we learned so much and the trainer was very friendly]]></ms-inference:data>
</ms-inference:agent-define-prompt-template>

Output Configuration

This operation responds with a JSON payload containing the main LLM response. This is an example response:

{
  "response": "{\n  \"type\": \"positive\",\n  \"response\": \"Thank you for your positive feedback on the training last week. We are glad to hear that you had a great experience. Have a nice day!\"\n}"
}
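
Because the template instructs the model to answer in JSON, the response field is itself a JSON string. To query it as structured data, parse it with the DataWeave read function; a minimal sketch, assuming the example payload shown above:

%dw 2.0
output application/json
---
read(payload.response, "application/json")

This yields an object with type and response fields that downstream processors can reference directly.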

The operation also returns attributes that aren't part of the main JSON payload. These attributes include information about token usage, for example:

{
  "tokenUsage": {
      "inputCount": 9
      "outputCount": 9,
      "totalCount": 18,
  },
  "additionalAttributes": {
    "finish_reason": "stop",
    "model": "openai/gpt-4o-mini",
    "id": "agentcmpl-fc4137f6-0b40-4018-936f-f0df6c0b5da1"
  }
}
  • tokenUsage: Token usage metadata returned as attributes

    • inputCount: Number of tokens used to process the input

    • outputCount: Number of tokens used to generate the output

    • totalCount: Total number of tokens used for input and output

  • additionalAttributes: Additional metadata from the LLM provider

    • finish_reason: The finish reason for the LLM response

    • model: The ID of the model used

    • id: The ID of the request
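
To act on this metadata in a flow, read it from the message attributes after the operation completes. A minimal sketch, assuming the attribute structure shown above:

<logger level="INFO" doc:name="Log token usage" message="#['Total tokens used: ' ++ ((attributes.tokenUsage.totalCount default 0) as String)]"/>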

Configure the Chat Answer Prompt Operation

The [Chat] Answer Prompt operation is a simple prompt request operation to the configured LLM. It uses a plain text prompt as input and responds with a plain text answer.

Apply the [Chat] Answer Prompt operation in various scenarios, such as for:

  • Basic Chatbots

    Answer simple user prompts.

  • Customer Service Queries

    Provide direct answers to frequently asked questions.

To configure the [Chat] Answer Prompt operation:

  1. Select the operation on the Anypoint Code Builder or Studio canvas.

  2. In the General properties tab for the operation, enter these values:

    • Prompt

      Enter the prompt as plain text for the operation.

This is the XML for this operation:

<ms-inference:chat-answer-prompt doc:name="Chat answer prompt" doc:id="f513c329-d277-41e6-932c-f5032b3365ac" config-ref="OpenAIConfig" >
	<ms-inference:prompt ><![CDATA[What is the capital of Switzerland?]]></ms-inference:prompt>
</ms-inference:chat-answer-prompt>
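
You can also build the prompt dynamically with a DataWeave expression. A minimal sketch, assuming the inbound payload carries a country field (an illustrative name, not part of the connector):

<ms-inference:chat-answer-prompt doc:name="Chat answer prompt" config-ref="OpenAIConfig" >
	<ms-inference:prompt ><![CDATA[#["What is the capital of " ++ payload.country ++ "?"]]]></ms-inference:prompt>
</ms-inference:chat-answer-prompt>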

Output Configuration

This operation responds with a JSON payload containing the main LLM response. This is an example response:

{
    "response": "The capital of Switzerland is Bern. It's known for its well-preserved medieval old town, which is a UNESCO World Heritage site. Bern became the capital of Switzerland in 1848. The Swiss parliament, the Federal Assembly, is located in Bern."
}

The operation also returns attributes that aren't part of the main JSON payload. These attributes include information about token usage, for example:

{
    "attributes": {
        "tokenUsage": {
            "inputCount": 9,
            "outputCount": 9,
            "totalCount": 18
        },
        "additionalAttributes": {
            "finish_reason": "stop",
            "model": "gpt-4o-mini",
            "id": "chatanswer-gc2425f6-0b70-4018-936f-f0df6c0b5da1"
        }
    }
}
  • tokenUsage: Token usage metadata returned as attributes

    • inputCount: Number of tokens used to process the input

    • outputCount: Number of tokens used to generate the output

    • totalCount: Total number of tokens used for input and output

  • additionalAttributes: Additional metadata from the LLM provider

    • finish_reason: The finish reason for the LLM response

    • model: The ID of the model used

    • id: The ID of the request

Configure the Chat Completions Operation

The [Chat] Completions operation is useful when you want to retain conversation history for a multi-user chat application. MuleSoft Object Store or any database can hold previous user conversations, which you can then provide as messages to the [Chat] Completions operation.

Apply the [Chat] Completions operation in various scenarios, such as for:

  • Customer Support Chats

    Retaining the context of ongoing support conversations.

  • Multi-user Chat Applications

    Maintaining conversation history for different users.

  • Personal Assistants

    Keeping track of user interactions to provide more relevant responses.

To configure the [Chat] Completions operation:

  1. Select the operation on the Anypoint Code Builder or Studio canvas.

  2. In the General properties tab for the operation, enter these values:

    • Messages

      Specify the conversation history as a JSON array.

This is the XML for this operation:

<set-variable variableName="messages" doc:name="Set Variable" doc:id="00231872-9564-4fad-b580-4f745c16ac9d" value='#[%dw 2.0
	output application/json
	---
	[{
		"role": "system",
		"content": "You are a helpful assistant."
	},
	{
		"role": "user",
		"content": "What is the capital of Switzerland?"
	}
	]]'/>
<ms-inference:chat-completions doc:name="Chat completions" doc:id="b2c68037-6af9-4e2a-9297-c57749a38193" config-ref="OpenAIConfig" >
	<ms-inference:messages ><![CDATA[#[vars.messages]]]></ms-inference:messages>
</ms-inference:chat-completions>

You can declare the messages variable separately, as shown here, or build the JSON array inline in the messages tag.
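
The history itself can come from any store. The following is a possible pattern, shown as a sketch rather than a prescribed implementation: it keeps per-user history in Object Store, and it assumes an Object Store named chatHistory plus userId and question fields on the inbound payload (illustrative names, not part of the connector):

<!-- Remember the caller before the payload is replaced by the LLM response -->
<set-variable variableName="userId" value="#[payload.userId]"/>
<!-- Load prior history for this user; default to an empty array on first contact -->
<os:retrieve key="#[vars.userId]" objectStore="chatHistory" target="history">
	<os:default-value>#[[]]</os:default-value>
</os:retrieve>
<!-- Append the new user message to the stored history -->
<set-variable variableName="messages" value="#[vars.history ++ [{ role: 'user', content: payload.question }]]"/>
<ms-inference:chat-completions config-ref="OpenAIConfig">
	<ms-inference:messages><![CDATA[#[vars.messages]]]></ms-inference:messages>
</ms-inference:chat-completions>
<!-- Persist the updated history, including the assistant's reply, for the next turn -->
<os:store key="#[vars.userId]" objectStore="chatHistory">
	<os:value><![CDATA[#[vars.messages ++ [{ role: 'assistant', content: payload.response }]]]]></os:value>
</os:store>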

Output Configuration

This operation responds with a JSON payload containing the main LLM response. This is an example response:

{
   "response": "The capital of Switzerland is **Bern**. 🇨🇭 \n"
}

The operation also returns attributes that aren't part of the main JSON payload. These attributes include information about token usage and additional metadata, for example:

{
    "attributes": {
        "tokenUsage": {
            "inputCount": 22,
            "outputCount": 15,
            "totalCount": 37
        },
        "additionalAttributes": {
            "finish_reason": "stop",
            "model": "gpt-4o-mini",
            "id": "chatcmpl-fc2425f6-0b70-4018-936f-f0df6c0b5da1"
        }
    }
}
  • tokenUsage: Token usage metadata returned as attributes

    • inputCount: Number of tokens used to process the input

    • outputCount: Number of tokens used to generate the output

    • totalCount: Total number of tokens used for input and output

  • additionalAttributes: Additional metadata from the LLM provider

    • finish_reason: The finish reason for the LLM response

    • model: The ID of the model used

    • id: The ID of the request

Configure the Tools Native Template (Reasoning only) Operation

The [Tools] Native Template (Reasoning only) operation is useful for creating autonomous agents that can use external tools whenever a prompt can't be answered directly by the AI model. The operation determines which of the provided tools to use to fulfill the request.

Apply the [Tools] Native Template (Reasoning only) operation in various scenarios, such as for:

  • Automating Routine Tasks

    Create autonomous agents that handle routine tasks by calling appropriate APIs.

  • Customer Support

    Automate responses to common queries by integrating tools that provide necessary information.

  • Inventory Management

    Use tools to check inventory levels or order status based on user prompts.

  • Employee Management

    Retrieve employee information or manage employee-related tasks through API calls.

  • Sales and Marketing

    Access CRM data or manage leads and accounts efficiently using predefined tools.

To configure the [Tools] Native Template (Reasoning only) operation:

  1. Select the operation on the Anypoint Code Builder or Studio canvas.

  2. In the General properties tab for the operation, enter these values:

    • Template

      Enter the template for the operation.

    • Instructions

      Enter the instructions for the operation.

    • Data

Enter the prompt to send to the LLM along with the tools array. The result suggests which tool, or chain of tools, to call to fulfill the user's request.

    • Tools

      Enter the tools for the operation.

This is an example of the tools array in the payload:

{
   "tools":[
      {
         "type":"function",
         "function":{
            "name":"get_current_temperature",
            "description":"Get the current temperature for a specific location",
            "parameters":{
               "type":"object",
               "properties":{
                  "location":{
                     "type":"string",
                     "description":"The city and state, e.g., San Francisco, CA"
                  },
                  "unit":{
                     "type":"string",
                     "enum":[
                        "Celsius",
                        "Fahrenheit"
                     ],
                     "description":"The temperature unit to use. Infer this from the user's location."
                  }
               },
               "required":[
                  "location",
                  "unit"
               ]
            }
         }
      },
      {
         "type":"function",
         "function":{
            "name":"get_rain_probability",
            "description":"Get the probability of rain for a specific location",
            "parameters":{
               "type":"object",
               "properties":{
                  "location":{
                     "type":"string",
                     "description":"The city and state, e.g., San Francisco, CA"
                  }
               },
               "required":[
                  "location"
               ]
            }
         }
      },
      {
         "type":"function",
         "function":{
            "name":"get_delivery_date",
            "description":"Get the delivery date for a customer's order. Call this whenever you need to know the delivery date, for example when a customer asks 'Where is my package'",
            "parameters":{
               "type":"object",
               "properties":{
                  "order_id":{
                     "type":"string",
                     "description":"The customer's order ID."
                  }
               },
               "required":[
                  "order_id"
               ]
            }
         }
      }
   ]
}

This is the XML for this operation:

<ms-inference:tools-native-template doc:name="Tools native template" doc:id="ff267b16-b0f7-4a8d-8dd8-004a8269862b" config-ref="OpenAIConfig">
    <ms-inference:template ><![CDATA[You are a helpful assistant that can use the following tools to answer questions.]]></ms-inference:template>
    <ms-inference:instructions ><![CDATA[Use the tools to answer the question.]]></ms-inference:instructions>
    <ms-inference:data ><![CDATA[#[payload.dataset]]]></ms-inference:data>
    <ms-inference:tools ><![CDATA[#[payload.tools]]]></ms-inference:tools>
</ms-inference:tools-native-template>
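
The data and tools expressions in this example assume the inbound payload carries dataset and tools fields. One way to stage them, shown as a sketch that assumes the tools array above is saved at src/main/resources/tools.json (an illustrative location):

<set-payload doc:name="Stage prompt and tools" value='#[%dw 2.0
	output application/json
	---
	{
		dataset: "When will my order (ID = 220) be delivered?",
		tools: readUrl("classpath://tools.json", "application/json").tools
	}]'/>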

Output Configuration

This operation responds with a JSON payload containing the main LLM response.

Example Response Payload (Tools not used)

If the prompt is a general question that can be answered with public knowledge, such as What is the capital of Switzerland?, the response is:

{
    "response": "The capital of Switzerland is Bern."
}

Example Response Payload (Tools used)

If the prompt requires accessing external data through a tool, such as When will my order (ID = 220) be delivered?, the AI model uses the configured tool to fetch the information, and the response is:

{
    "response": {
        "tools": [
            {
                "function": {
                    "name": "get_delivery_date",
                    "arguments": "{\"order_id\":\"220\"}"
                },
                "id": "call_ylw95cO0kxCt7NELy91cF7bX",
                "type": "function"
            }
        ]
    }
}
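
The arguments field inside each tool call is a JSON string. A minimal DataWeave sketch for turning the calls into name/argument pairs that your flow can route on, assuming the tools-used payload shown above:

%dw 2.0
output application/json
---
payload.response.tools map ((call) -> {
	name: call."function".name,
	args: read(call."function".arguments, "application/json")
})

For the example above, this yields name get_delivery_date with args { "order_id": "220" }.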

The operation also returns attributes that aren't part of the main JSON payload. These attributes include information about token usage, for example:

{
    "attributes": {
        "tokenUsage": {
            "inputCount": 204,
            "outputCount": 16,
            "totalCount": 220
        },
        "additionalAttributes": {
            "finish_reason": "tool_calls",
            "model": "gpt-4o-mini",
            "id": "chatcmpl-AQwXGmrzcO7XG5Ic1Hp0PKIrRoL4O"
        }
    }
}
  • tokenUsage: Token usage metadata returned as attributes

    • inputCount: Number of tokens used to process the input

    • outputCount: Number of tokens used to generate the output

    • totalCount: Total number of tokens used for input and output

  • additionalAttributes: Additional metadata from the LLM provider

    • finish_reason: The finish reason for the LLM response

    • model: The ID of the model used

    • id: The ID of the request
