
MuleSoft Inference Connector 1.0

Anypoint Connector for MuleSoft Inference (MuleSoft Inference Connector) provides access to inference offerings for large language models (LLMs) from multiple providers, including OpenAI, OpenAI Compatible Endpoints, OpenRouter, Heroku AI, Azure AI Foundry, Azure OpenAI, and many others. The connector exposes operations that interface directly with the APIs of these inference providers, enabling seamless integration of AI capabilities into your Mule applications.
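
In a Mule flow, calling an LLM through the connector typically means declaring a global configuration for a provider and then invoking an operation such as Chat Answer Prompt. The sketch below illustrates that shape only; the namespace prefix, element names, and attributes shown here are assumptions for illustration, not the connector's exact schema, so refer to the connector reference for the actual elements.

```xml
<!-- Illustrative sketch only: the ms-inference prefix, element names, and
     attributes are assumptions, not the connector's published schema. -->
<ms-inference:config name="OpenAI_Config"
                     apiKey="${openai.apiKey}"
                     modelName="gpt-4o-mini" />

<flow name="answer-prompt-flow">
    <http:listener config-ref="HTTP_Listener_config" path="/chat" />
    <!-- Sends the incoming prompt to the provider and returns the reply -->
    <ms-inference:chat-answer-prompt config-ref="OpenAI_Config"
                                     prompt="#[payload.prompt]" />
</flow>
```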

For information about compatibility and fixed issues, see the MuleSoft Inference Connector release notes.

Before You Begin

To use this connector, you must be familiar with:

  • Anypoint Connectors

  • Mule runtime engine (Mule)

  • Elements and global elements in a Mule flow

  • How to create a Mule app using Anypoint Code Builder or Anypoint Studio

Before creating an app, you must have:

  • Java 17 (required for compilation and runtime)

  • Apache Maven

  • Credentials to access the MuleSoft Inference Connector target resource

  • Anypoint Platform

  • The latest versions of Anypoint Code Builder or Anypoint Studio
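
If you manage your Mule project with Maven directly, the connector is added as a standard Mule plugin dependency in your pom.xml. The coordinates below are illustrative placeholders; copy the exact groupId, artifactId, and version from the connector's Anypoint Exchange page.

```xml
<!-- Illustrative coordinates: confirm the exact values in Anypoint Exchange -->
<dependency>
    <groupId>com.mulesoft.connectors</groupId>
    <artifactId>mule4-inference-connector</artifactId>
    <version>1.0.0</version>
    <classifier>mule-plugin</classifier>
</dependency>
```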

Key Features

MuleSoft Inference Connector simplifies AI integration into Mule applications with:

  • Seamless Interaction with LLM Inference Providers

    Integrate large language models (LLMs) for tasks such as natural language processing and text generation, including advanced capabilities such as image analysis, image generation, and toxicity detection in both user-provided and LLM-generated text.

  • Optimized Performance

    Deliver high efficiency and performance in enterprise-grade Mule apps, ensuring smooth handling of AI operations.

  • Secure Integration

    Connect corporate applications and data with new AI offerings without compromising your security posture. The connector provides full TLS support for secure Mule apps.

Supported Inference Providers

  • AI21Labs (beta)

  • Anthropic (beta)

  • Azure AI Foundry

  • Azure OpenAI

  • Cerebras (beta)

  • Cohere (beta)

  • Databricks (beta)

  • DeepInfra (beta)

  • DeepSeek (beta)

  • Docker Models (beta)

  • Fireworks (beta)

  • GitHub Models (beta)

  • GPT4ALL (beta)

  • Groq AI (beta)

  • Heroku AI

  • Hugging Face (beta) (hf-inference only)

  • LM Studio (beta)

  • Mistral AI (beta)

  • NVIDIA NIM (beta)

  • Ollama (beta)

  • OpenAI

  • OpenAI Compatible Endpoints

  • OpenRouter

  • Perplexity (beta)

  • Portkey (beta)

  • Together.ai (beta)

  • Vertex AI Express (beta)

  • XAI (beta)

  • Xinference (beta)

  • ZHIPU AI (beta)

Supported Moderation Providers

  • Mistral AI (beta)

  • OpenAI

Supported Vision Model Providers

  • Anthropic (beta)

  • Azure AI Foundry

  • GitHub Models (beta)

  • Groq AI (beta)

  • Hugging Face (beta) (hf-inference only)

  • Mistral AI (beta)

  • Ollama (beta)

  • OpenAI

  • OpenRouter

  • Portkey (beta)

  • Vertex AI Express (beta)

  • XAI (beta)

Supported Image Generation Providers

  • Heroku AI

  • Hugging Face (beta) (hf-inference only)

  • OpenAI

  • Stability AI (beta)

  • XAI (beta)

To keep pace with the rapidly evolving AI landscape, certain providers are marked as beta. These are early-stage integrations that may change based on stability, demand, or provider updates. You can explore them, but be aware that support might be limited and subject to change.

Supported Operations by Offering

Not every provider supports every operation. The following table shows operation support across all providers.

| LLM Provider | [Chat] Answer Prompt | [Chat] Completions | [Agent] Define Prompt Template | [Tools] Native Template (Reasoning only)* | [Image] Read by (Url or Base64)* | [Image] Generate (only Base64)* | [Toxicity] Detection by Text* |
|---|---|---|---|---|---|---|---|
| AI21Labs | Yes | Yes | Yes | Yes | No | No | No |
| Anthropic | Yes | Yes | Yes | Yes | Yes | No | No |
| Azure AI Foundry | Yes | Yes | Yes | Yes | Yes | No | No |
| Azure OpenAI | Yes | Yes | Yes | Yes | No | No | No |
| Cerebras | Yes | Yes | Yes | Yes | No | No | No |
| Cohere | Yes | Yes | Yes | Yes | No | No | No |
| Databricks | Yes | Yes | Yes | Yes | No | No | No |
| DeepInfra | Yes | Yes | Yes | Yes | No | No | No |
| DeepSeek | Yes | Yes | Yes | Yes | No | No | No |
| Docker Models | Yes | Yes | Yes | Yes | No | No | No |
| Fireworks | Yes | Yes | Yes | Yes | No | No | No |
| GitHub Models | Yes | Yes | Yes | Yes | Yes | No | No |
| GPT4ALL | Yes | Yes | Yes | Yes | No | No | No |
| Groq AI | Yes | Yes | Yes | Yes | Yes | No | No |
| Heroku AI | Yes | Yes | Yes | Yes | No | Yes | No |
| Hugging Face | Yes | Yes | Yes | Yes | Yes | Yes | No |
| LM Studio | Yes | Yes | Yes | Yes | No | No | No |
| Mistral AI | Yes | Yes | Yes | Yes | Yes | No | Yes |
| NVIDIA NIM | Yes | Yes | Yes | Yes | No | No | No |
| Ollama | Yes | Yes | Yes | Yes | No | No | No |
| OpenAI | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| OpenAI Compatible Endpoints | Yes | Yes | Yes | Yes | No | No | No |
| OpenRouter | Yes | Yes | Yes | Yes | Yes | No | No |
| Perplexity | Yes | Yes | Yes | No | No | No | No |
| Portkey | Yes | Yes | Yes | Yes | Yes | No | No |
| Stability AI | No | No | No | No | No | Yes | No |
| Together.ai | Yes | Yes | Yes | Yes | No | No | No |
| Vertex AI Express | Yes | Yes | Yes | No | Yes | No | No |
| XAI | Yes | Yes | Yes | Yes | Yes | Yes | No |
| Xinference | Yes | Yes | Yes | Yes | No | No | No |
| Zhipu AI | Yes | Yes | Yes | Yes | No | No | No |

* An asterisk indicates that support depends on the specific model used.

Next Step

After you complete the prerequisites, you are ready to create an app and configure the connector using Anypoint Studio or Anypoint Code Builder.
