
MuleSoft Inference Connector Examples

These examples show you how to use the MuleSoft Inference Connector with the OpenAI inference type. To use other inference types, adjust these configurations accordingly.

Prerequisites

Before running these examples, ensure you have:

  • Anypoint Studio 7

  • Mule 4.9.4

  • Java 17

  • A valid API key for your inference provider (OpenAI in these examples)
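The XML examples below reference configuration placeholders such as `${config.apiKey}` and `${openai.llmModel}`. One way to supply them is a properties file wired in with a `configuration-properties` element. The following is a minimal sketch: the property names are taken from the examples, while the file location and model names are illustrative assumptions, not prescribed values.

```
# src/main/resources/config.properties (assumed location; model names are examples only)
config.apiKey=<your OpenAI API key>
config.llmModel=gpt-4o-mini
openai.apiKey=<your OpenAI API key>
openai.llmModel=gpt-4o-mini
openai.imageGenModel=dall-e-3
```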

Example: Text Generation

This Mule app showcases the use of three distinct text generation operations to interact with OpenAI. Each operation is demonstrated in its own dedicated flow.

  • [Agent] Define Prompt Template

    This operation is used to create structured prompts for AI agents. In this example, it’s configured to enable an agent to analyze customer feedback and determine appropriate actions based on sentiment, leveraging the configured OpenAI LLM.

  • [Chat] Answer Prompt

    This operation provides simple, direct question-answering capabilities. The example demonstrates its use by asking the LLM a straightforward factual question: What is the capital of Germany?

  • [Chat] Completions

    This operation facilitates multi-turn, context-aware conversations. The example illustrates this by providing the LLM with an initial conversational context and then continuing the dialogue with the question What is the capital of Switzerland?

<?xml version="1.0" encoding="UTF-8"?>

<mule xmlns:ms-inference="http://www.mulesoft.org/schema/mule/ms-inference" xmlns:http="http://www.mulesoft.org/schema/mule/http"
	  xmlns="http://www.mulesoft.org/schema/mule/core"
	  xmlns:doc="http://www.mulesoft.org/schema/mule/documentation" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd
http://www.mulesoft.org/schema/mule/ms-inference http://www.mulesoft.org/schema/mule/ms-inference/current/mule-ms-inference.xsd">
	<http:listener-config name="HTTP_Listener_config" doc:name="HTTP Listener config" doc:id="1d867172-d19a-4ab1-b194-52c2ad455d7f" >
		<http:listener-connection host="0.0.0.0" port="8081" />
	</http:listener-config>

	<ms-inference:text-generation-config name="OpenAI_Text_generation_config" doc:name="MuleSoft Inference Text generation config" doc:id="3c24c806-bb9f-4512-bc9b-a51cda5c710a" >
		<ms-inference:openai-connection openAIModelName="${config.llmModel}" apiKey="${config.apiKey}" timeout="60000" >
		</ms-inference:openai-connection>
	</ms-inference:text-generation-config>

	<flow name="AgentDefinePromptTemplateFlow" doc:id="1ee32ee4-8c15-4636-9a49-0784f782cb98" >
		<http:listener doc:name="Listener" doc:id="8725c446-5f45-4732-8fca-ea9006aabed6" config-ref="HTTP_Listener_config" path="/prompttemplate"/>
		<set-variable value='#[%dw 2.0
			output application/json
			---
			{
			"template": "You are a customer satisfaction agent, who analyses the customer feedback in the dataset. Answer via json output and add a type for the result only with positive or negative as well as the complete answer",
			"instructions":"If the customer feedback in the dataset is negative, open a service satisfaction case and apologize to the customer. If the customer feedback in the dataset is positive, thank the customer and wish them a nice day. Do not repeat the feedback and be more direct starting the conversation with formal greetings",
			"dataset": "The training last week was amazing, we learned so much and the trainer was very friendly"
		}]' doc:name="Set Variable" doc:id="209a7846-c3e3-4a8a-9c63-6ab02a772ec4" variableName="testPayload"/>
		<ms-inference:agent-define-prompt-template doc:name="[Agent] Define Prompt Template" doc:id="699b48f4-0ac8-46b5-9f3b-418070e7157d" config-ref="OpenAI_Text_generation_config">
			<ms-inference:template ><![CDATA[#[vars.testPayload.template]]]></ms-inference:template>
			<ms-inference:instructions ><![CDATA[#[vars.testPayload.instructions]]]></ms-inference:instructions>
			<ms-inference:data ><![CDATA[#[vars.testPayload.dataset]]]></ms-inference:data>
		</ms-inference:agent-define-prompt-template>
	</flow>

	<flow name="ChatAnswerPromptFlow" doc:id="698ff181-7968-4294-8f08-51387fbf1dfa" >
		<http:listener doc:name="Listener" doc:id="3ae26012-7b44-41ee-8d6d-bdb51ed9e39e" config-ref="HTTP_Listener_config" path="/chatprompt"/>
		<set-variable value='#[%dw 2.0
			output application/json
			---
			{
			    "prompt": "What is the capital of Germany?"
			}]' doc:name="Set Variable" doc:id="774dde86-5241-4151-b7fa-b5d7bf412508" variableName="testPayload"/>
		<ms-inference:chat-answer-prompt doc:name="[CHAT] Answer Prompt" doc:id="53e77f54-95c1-42da-b447-b5703212e9ca" config-ref="OpenAI_Text_generation_config">
			<ms-inference:prompt ><![CDATA[#[vars.testPayload.prompt]]]></ms-inference:prompt>
		</ms-inference:chat-answer-prompt>
	</flow>

	<flow name="ChatCompletionsFlow" doc:id="ae0b651f-950d-40d0-8c98-9ef050779c06" >
		<http:listener doc:name="Listener" doc:id="a2a68fe6-47da-41b0-954f-3cf51e80d6f2" config-ref="HTTP_Listener_config" path="/chatcompletion"/>
		<set-variable value='#[%dw 2.0
			output application/json
			---
			[{
			  "role": "assistant",
			  "content": "You are a helpful assistant."
			},
			{
			  "role": "user",
			  "content": "What is the capital of Switzerland?"
			}
			]]' doc:name="Set Variable" doc:id="860b37bd-9fa9-4584-90d4-2723b2e00cc5" variableName="testPayload"/>
		<ms-inference:chat-completions doc:name="[Chat] Completions" doc:id="f2eedc4f-a87b-4579-92a6-4022db9f3c03" config-ref="OpenAI_Text_generation_config">
			<ms-inference:messages ><![CDATA[#[vars.testPayload]]]></ms-inference:messages>
		</ms-inference:chat-completions>
	</flow>
</mule>
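With the app deployed locally, each flow can be triggered through its HTTP listener. Assuming the default host and port from the listener configuration (0.0.0.0:8081), for example:

```
curl http://localhost:8081/prompttemplate
curl http://localhost:8081/chatprompt
curl http://localhost:8081/chatcompletion
```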

Example: Image Generation

This Mule app showcases the use of the [Image] Generate operation to interact with OpenAI. This operation generates images from a text prompt. In this example, it’s configured to generate an image of a happy penguin dancing on the beach with an ice cream.

<?xml version="1.0" encoding="UTF-8"?>

<mule xmlns:ms-inference="http://www.mulesoft.org/schema/mule/ms-inference" xmlns:http="http://www.mulesoft.org/schema/mule/http"
	  xmlns="http://www.mulesoft.org/schema/mule/core"
	  xmlns:doc="http://www.mulesoft.org/schema/mule/documentation" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd
http://www.mulesoft.org/schema/mule/ms-inference http://www.mulesoft.org/schema/mule/ms-inference/current/mule-ms-inference.xsd">
	<http:listener-config name="HTTP_Listener_config" doc:name="HTTP Listener config" doc:id="1d867172-d19a-4ab1-b194-52c2ad455d7f" >
		<http:listener-connection host="0.0.0.0" port="8081" />
	</http:listener-config>

	<ms-inference:image-generation-config name="OpenAIImageGenConfig" doc:name="MuleSoft Inference Image generation config" doc:id="813d08c8-7e6d-4c9e-8259-b80a84093016" >
		<ms-inference:openai-image-connection apiKey="${openai.apiKey}" openAIModelName="${openai.imageGenModel}" />
	</ms-inference:image-generation-config>

	<flow name="GenerateImageFlow" doc:id="698ff181-7968-4294-8f08-51387fbf1dfa" >
		<http:listener doc:name="Listener" doc:id="3ae26012-7b44-41ee-8d6d-bdb51ed9e39e" config-ref="HTTP_Listener_config" path="/generateimage"/>
		<set-variable value='#[%dw 2.0
		   output application/json
		    ---
			{
				"prompt" : "Generate a picture of a penguin dancing on the beach being happy with an ice cream."
			}]' doc:name="Set Variable" doc:id="55ffb06c-d91b-4419-8517-a2d83802413a" variableName="testPayload"/>
		<ms-inference:generate-image doc:name="[Image] Generate (only Base64)" doc:id="9c849d82-b443-4253-bdbd-1fad73df7a45" config-ref="OpenAIImageGenConfig">
			<ms-inference:prompt ><![CDATA[#[vars.testPayload.prompt]]]></ms-inference:prompt>
		</ms-inference:generate-image>
	</flow>
</mule>
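With the app deployed locally, the flow can be triggered through its HTTP listener, assuming the default port 8081:

```
curl http://localhost:8081/generateimage
```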

Example: Toxicity Detection

This Mule app showcases the use of the [Toxicity] Detection by Text operation to interact with OpenAI. This operation detects harmful content in text. In this example, it’s configured to analyze the text You are fat for harmful content.

<?xml version="1.0" encoding="UTF-8"?>

<mule xmlns:ms-inference="http://www.mulesoft.org/schema/mule/ms-inference" xmlns:http="http://www.mulesoft.org/schema/mule/http"
	  xmlns="http://www.mulesoft.org/schema/mule/core"
	  xmlns:doc="http://www.mulesoft.org/schema/mule/documentation" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd
http://www.mulesoft.org/schema/mule/ms-inference http://www.mulesoft.org/schema/mule/ms-inference/current/mule-ms-inference.xsd">
	<http:listener-config name="HTTP_Listener_config" doc:name="HTTP Listener config" doc:id="1d867172-d19a-4ab1-b194-52c2ad455d7f" >
		<http:listener-connection host="0.0.0.0" port="8081" />
	</http:listener-config>

	<ms-inference:moderation-config name="OpenAIModerationConfig" doc:name="MuleSoft Inference Moderation config" doc:id="c28800c5-75d8-4904-8e98-a6b8d965e054" >
		<ms-inference:openai-moderation-connection apiKey="${openai.apiKey}" openAIModelName="${openai.llmModel}"/>
	</ms-inference:moderation-config>

	<flow name="ToxicityDetectionFlow" doc:id="1ee32ee4-8c15-4636-9a49-0784f782cb98" >
		<http:listener doc:name="Listener" doc:id="8725c446-5f45-4732-8fca-ea9006aabed6" config-ref="HTTP_Listener_config" path="/toxicitydetection"/>
		<set-variable value='#[%dw 2.0
		   output application/json
		    ---
			{
    			"prompt": "You are fat"
   			}]' doc:name="Set Variable" doc:id="7ced3e9a-a3ec-4768-85b1-081849fca0fc" variableName="testPayload"/>
		<ms-inference:toxicity-detection-text doc:name="[Toxicity] Detection by Text" doc:id="cd45c2f9-cb7f-4fe0-8311-2a58cc6c11ac" config-ref="OpenAIModerationConfig">
			<ms-inference:text ><![CDATA[#[vars.testPayload.prompt]]]></ms-inference:text>
		</ms-inference:toxicity-detection-text>
	</flow>
</mule>
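With the app deployed locally, the flow can be triggered through its HTTP listener, assuming the default port 8081:

```
curl http://localhost:8081/toxicitydetection
```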

Example: Tools

This Mule app showcases the use of the [Tools] Native Template (Reasoning only) operation to interact with OpenAI. This operation is used to create autonomous agents that can call external tools whenever a prompt can’t be answered directly by the AI model. The operation returns only the model’s request to invoke the tools provided in the payload; it doesn’t execute them. In this example, it’s configured to determine the current temperature in Zurich in Celsius.

<?xml version="1.0" encoding="UTF-8"?>

<mule xmlns:ms-inference="http://www.mulesoft.org/schema/mule/ms-inference" xmlns:http="http://www.mulesoft.org/schema/mule/http"
	  xmlns="http://www.mulesoft.org/schema/mule/core"
	  xmlns:doc="http://www.mulesoft.org/schema/mule/documentation" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/current/mule-http.xsd
http://www.mulesoft.org/schema/mule/ms-inference http://www.mulesoft.org/schema/mule/ms-inference/current/mule-ms-inference.xsd">
	<http:listener-config name="HTTP_Listener_config" doc:name="HTTP Listener config" doc:id="1d867172-d19a-4ab1-b194-52c2ad455d7f" >
		<http:listener-connection host="0.0.0.0" port="8081" />
	</http:listener-config>

	<ms-inference:text-generation-config name="OpenAI_Text_generation_config" doc:name="MuleSoft Inference Text generation config" doc:id="3c24c806-bb9f-4512-bc9b-a51cda5c710a" >
		<ms-inference:openai-connection openAIModelName="${config.llmModel}" apiKey="${config.apiKey}" timeout="60000" >
		</ms-inference:openai-connection>
	</ms-inference:text-generation-config>

	<flow name="ToolsNativeTemplateFlow" doc:id="1ee32ee4-8c15-4636-9a49-0784f782cb98" >
		<http:listener doc:name="Listener" doc:id="8725c446-5f45-4732-8fca-ea9006aabed6" config-ref="HTTP_Listener_config" path="/toolstemplate"/>
		<set-variable
				value="#[%dw 2.0
            output application/json
            ---
            {
                'template': 'You are a helpful assistant',
                'instructions': 'Answer the request with politeness.',
                'dataset': 'What is the current temperature in Zurich in Celsius?',
                'tools': [
                    {
                        'type': 'function',
                        'function': {
                            'name': 'get_current_temperature',
                            'description': 'Get the current temperature for a specific location',
                            'parameters': {
                                'type': 'object',
                                'properties': {
                                    'location': {
                                        'type': 'string',
                                        'description': 'The city and state, e.g., San Francisco, CA'
                                    },
                                    'unit': {
                                        'type': 'string',
                                        'enum': ['Celsius', 'Fahrenheit'],
                                        'description': 'The temperature unit to use. Infer this from the user\'s location.'
                                    }
                                },
                                'required': ['location', 'unit']
                            }
                        }
                    },
                    {
                        'type': 'function',
                        'function': {
                            'name': 'get_rain_probability',
                            'description': 'Get the probability of rain for a specific location',
                            'parameters': {
                                'type': 'object',
                                'properties': {
                                    'location': {
                                        'type': 'string',
                                        'description': 'The city and state, e.g., San Francisco, CA'
                                    }
                                },
                                'required': ['location']
                            }
                        }
                    },
                    {
                        'type': 'function',
                        'function': {
                            'name': 'get_delivery_date',
                            'description': 'Get the delivery date for a customer\'s order. Call this whenever you need to know the delivery date, for example when a customer asks \'Where is my package\'',
                            'parameters': {
                                'type': 'object',
                                'properties': {
                                    'order_id': {
                                        'type': 'string',
                                        'description': 'The customer\'s order ID.'
                                    }
                                },
                                'required': ['order_id']
                            }
                        }
                    }
                ]
            }]"
				doc:name="Set Variable"
				doc:id="9b80bd71-922a-4739-804b-29ac2ba7ef7a"
				variableName="testPayload" />
		<ms-inference:tools-native-template
				doc:name="[Tools] Native Template (Reasoning only)"
				doc:id="e87a2e5e-b19b-4c2d-a4a0-8e991291de97"
				config-ref="OpenAI_Text_generation_config">
			<ms-inference:template><![CDATA[#[vars.testPayload.template]]]></ms-inference:template>
			<ms-inference:instructions><![CDATA[#[vars.testPayload.instructions]]]></ms-inference:instructions>
			<ms-inference:data><![CDATA[#[vars.testPayload.dataset]]]></ms-inference:data>
			<ms-inference:tools><![CDATA[#[vars.testPayload.tools]]]></ms-inference:tools>
		</ms-inference:tools-native-template>
	</flow>
</mule>
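With the app deployed locally, the flow can be triggered through its HTTP listener, assuming the default port 8081. The response contains the model’s tool-call request (which tool to invoke and with which arguments), not a tool execution result:

```
curl http://localhost:8081/toolstemplate
```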