
MuleSoft Inference Connector 1.0 Reference

MuleSoft Inference Connector provides operations to interface directly with the API of various inference providers, enabling seamless integration of AI capabilities into your Mule applications.
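A flow references one of the configurations described below by name. As a rough sketch of how this fits together (the namespace prefix, element, and attribute names here are illustrative placeholders, not taken from this reference — consult the connector's XML schema in Anypoint Studio for the exact names):

```xml
<!-- Illustrative sketch only: element and attribute names are placeholders. -->
<ms-inference:text-generation-config name="Text_Generation_Config">
  <!-- API key supplied from a secure property rather than hard-coded -->
  <ms-inference:openai-connection openAiModelName="gpt-4o-mini"
                                  apiKey="${openai.apiKey}"
                                  timeout="60"
                                  timeoutUnit="SECONDS"/>
</ms-inference:text-generation-config>

<flow name="chat-flow">
  <http:listener config-ref="HTTP_Listener_config" path="/chat"/>
  <!-- Hypothetical operation name; see the Supported Operations sections -->
  <ms-inference:chat-completions config-ref="Text_Generation_Config"/>
</flow>
```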

Configurations


Image Generation Config

Parameters

| Name | Type | Description | Default Value | Required |
|------|------|-------------|---------------|----------|
| Name | String | Name for this configuration. Connectors reference the configuration with this name. |  | x |
| Connection |  | Connection types for this configuration. |  | x |
| Name | String | ID used to reference this configuration. |  | x |
| Expiration Policy |  | Configures an expiration policy for the configuration. |  |  |

Connection Types

Heroku AI
Parameters
| Name | Type | Description | Default Value | Required |
|------|------|-------------|---------------|----------|
| Proxy Configuration | One of: | Configures a proxy for outbound connections. |  |  |
| Heroku Inference Model | String | Model name. |  | x |
| Heroku Diffusion URL | String | Heroku diffusion URL. | https://us.inference.heroku.com |  |
| TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. |  |  |
| API Key | String | API key as required by the inference provider. |  | x |
| Timeout | Number | Response timeout value set for each inference HTTP request. | 60 |  |
| Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS |  |
| Reconnection |  | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy. |  |  |

Hugging Face
Parameters
| Name | Type | Description | Default Value | Required |
|------|------|-------------|---------------|----------|
| Proxy Configuration | One of: | Configures a proxy for outbound connections. |  |  |
| Hugging Face Model Name | String | Model name. |  | x |
| TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. |  |  |
| API Key | String | API key as required by the inference provider. |  | x |
| Timeout | Number | Response timeout value set for each inference HTTP request. | 60 |  |
| Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS |  |
| Reconnection |  | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy. |  |  |

OpenAI
Parameters
| Name | Type | Description | Default Value | Required |
|------|------|-------------|---------------|----------|
| Proxy Configuration | One of: | Configures a proxy for outbound connections. |  |  |
| Open AI Model Name | String | Model name. |  | x |
| TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. |  |  |
| API Key | String | API key as required by the inference provider. |  | x |
| Timeout | Number | Response timeout value set for each inference HTTP request. | 60 |  |
| Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS |  |
| Reconnection |  | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy. |  |  |

Stability AI
Parameters
| Name | Type | Description | Default Value | Required |
|------|------|-------------|---------------|----------|
| Proxy Configuration | One of: | Configures a proxy for outbound connections. |  |  |
| Stability AI Model Name | String | Model name. |  | x |
| TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. |  |  |
| API Key | String | API key as required by the inference provider. |  | x |
| Timeout | Number | Response timeout value set for each inference HTTP request. | 60 |  |
| Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS |  |
| Reconnection |  | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy. |  |  |

xAI
Parameters
| Name | Type | Description | Default Value | Required |
|------|------|-------------|---------------|----------|
| Proxy Configuration | One of: | Configures a proxy for outbound connections. |  |  |
| X Ai Model Name | String | Model name. |  | x |
| TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. |  |  |
| API Key | String | API key as required by the inference provider. |  | x |
| Timeout | Number | Response timeout value set for each inference HTTP request. | 60 |  |
| Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS |  |
| Reconnection |  | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy. |  |  |
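An Image Generation configuration pairs one of the connection types above with a named global element that flows reference. A hypothetical OpenAI example (element and attribute names are placeholders, not the connector's actual schema — verify against the generated XSD):

```xml
<!-- Illustrative only: element and attribute names are placeholders. -->
<ms-inference:image-generation-config name="Image_Gen_Config">
  <ms-inference:openai-connection openAiModelName="dall-e-3"
                                  apiKey="${openai.apiKey}"
                                  timeout="120"
                                  timeoutUnit="SECONDS"/>
</ms-inference:image-generation-config>
```

Image generation often takes longer than a chat completion, so raising Timeout above the 60-second default (as sketched here) is a common adjustment.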

Supported Operations


Moderation Config

Parameters

| Name | Type | Description | Default Value | Required |
|------|------|-------------|---------------|----------|
| Name | String | Name for this configuration. Connectors reference the configuration with this name. |  | x |
| Connection |  | Connection types for this configuration. |  | x |
| Name | String | ID used to reference this configuration. |  | x |
| Expiration Policy |  | Configures an expiration policy for the configuration. |  |  |

Connection Types

Mistral AI
Parameters
| Name | Type | Description | Default Value | Required |
|------|------|-------------|---------------|----------|
| Proxy Configuration | One of: | Configures a proxy for outbound connections. |  |  |
| Mistral AI Model Name | String | Model name. |  | x |
| TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. |  |  |
| API Key | String | API key as required by the inference provider. |  | x |
| Timeout | Number | Response timeout value set for each inference HTTP request. | 60 |  |
| Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS |  |
| Reconnection |  | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy. |  |  |

OpenAI
Parameters
| Name | Type | Description | Default Value | Required |
|------|------|-------------|---------------|----------|
| Proxy Configuration | One of: | Configures a proxy for outbound connections. |  |  |
| Open AI Model Name | String | Model name. |  | x |
| TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. |  |  |
| API Key | String | API key as required by the inference provider. |  | x |
| Timeout | Number | Response timeout value set for each inference HTTP request. | 60 |  |
| Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS |  |
| Reconnection |  | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy. |  |  |
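A Moderation configuration follows the same shape as the other configs. A hedged sketch using OpenAI (element and attribute names are placeholders; `omni-moderation-latest` is OpenAI's published moderation model name, shown only as an example value):

```xml
<!-- Illustrative only: element and attribute names are placeholders. -->
<ms-inference:moderation-config name="Moderation_Config">
  <ms-inference:openai-connection openAiModelName="omni-moderation-latest"
                                  apiKey="${openai.apiKey}"
                                  timeout="60"
                                  timeoutUnit="SECONDS"/>
</ms-inference:moderation-config>
```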

Supported Operations


Text Generation Config

Parameters

| Name | Type | Description | Default Value | Required |
|------|------|-------------|---------------|----------|
| Name | String | Name for this configuration. Connectors reference the configuration with this name. |  | x |
| Connection |  | Connection types for this configuration. |  | x |
| Name | String | ID used to reference this configuration. |  | x |
| Expiration Policy |  | Configures an expiration policy for the configuration. |  |  |

Connection Types

AI21Labs
Parameters
| Name | Type | Description | Default Value | Required |
|------|------|-------------|---------------|----------|
| Proxy Configuration | One of: | Configures a proxy for outbound connections. |  |  |
| Ai21 Labs Model Name | String | Model name. |  | x |
| TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. |  |  |
| API Key | String | API key as required by the inference provider. |  | x |
| Timeout | Number | Response timeout value set for each inference HTTP request. | 60 |  |
| Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS |  |
| Max Tokens | Number | Maximum number of LLM tokens to use when generating a response. Helps control usage and costs when engaging with LLMs. | 500 |  |
| Temperature | Number | Number between 0 and 2 that controls the randomness of the output: higher values produce more random output; lower values (toward 0) produce more deterministic output. | 0.9 |  |
| Top P | Number | Controls diversity through nucleus sampling: the next token is drawn from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 |  |
| Reconnection |  | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy. |  |  |

Anthropic
Parameters
| Name | Type | Description | Default Value | Required |
|------|------|-------------|---------------|----------|
| Proxy Configuration | One of: | Configures a proxy for outbound connections. |  |  |
| Anthropic Model Name | String | Model name. |  | x |
| TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. |  |  |
| API Key | String | API key as required by the inference provider. |  | x |
| Timeout | Number | Response timeout value set for each inference HTTP request. | 60 |  |
| Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS |  |
| Max Tokens | Number | Maximum number of LLM tokens to use when generating a response. Helps control usage and costs when engaging with LLMs. | 500 |  |
| Temperature | Number | Number between 0 and 2 that controls the randomness of the output: higher values produce more random output; lower values (toward 0) produce more deterministic output. | 0.9 |  |
| Top P | Number | Controls diversity through nucleus sampling: the next token is drawn from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 |  |
| Reconnection |  | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy. |  |  |

Azure AI Foundry
Parameters
| Name | Type | Description | Default Value | Required |
|------|------|-------------|---------------|----------|
| Proxy Configuration | One of: | Configures a proxy for outbound connections. |  |  |
| Azure Model Name | String | Model name. |  | x |
| [Azure AI Foundry] Resource Name | String | Azure AI Foundry resource name. |  |  |
| [Azure AI Foundry] API Version | String | Azure AI Foundry API version. |  |  |
| TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. |  |  |
| API Key | String | API key as required by the inference provider. |  | x |
| Timeout | Number | Response timeout value set for each inference HTTP request. | 60 |  |
| Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS |  |
| Max Tokens | Number | Maximum number of LLM tokens to use when generating a response. Helps control usage and costs when engaging with LLMs. | 500 |  |
| Temperature | Number | Number between 0 and 2 that controls the randomness of the output: higher values produce more random output; lower values (toward 0) produce more deterministic output. | 0.9 |  |
| Top P | Number | Controls diversity through nucleus sampling: the next token is drawn from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 |  |
| Reconnection |  | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy. |  |  |

Azure OpenAI
Parameters
| Name | Type | Description | Default Value | Required |
|------|------|-------------|---------------|----------|
| Proxy Configuration | One of: | Configures a proxy for outbound connections. |  |  |
| Azure Model Name | String | Model name. |  | x |
| [Azure OpenAI] Resource Name | String | Azure OpenAI resource name. |  |  |
| [Azure OpenAI] Deployment ID | String | Azure OpenAI deployment ID. |  |  |
| [Azure OpenAI] User | String | Unique identifier representing your end-user, which can help to monitor and detect abuse. |  |  |
| TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. |  |  |
| API Key | String | API key as required by the inference provider. |  | x |
| Timeout | Number | Response timeout value set for each inference HTTP request. | 60 |  |
| Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS |  |
| Max Tokens | Number | Maximum number of LLM tokens to use when generating a response. Helps control usage and costs when engaging with LLMs. | 500 |  |
| Temperature | Number | Number between 0 and 2 that controls the randomness of the output: higher values produce more random output; lower values (toward 0) produce more deterministic output. | 0.9 |  |
| Top P | Number | Controls diversity through nucleus sampling: the next token is drawn from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 |  |
| Reconnection |  | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy. |  |  |
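Unlike the plain OpenAI connection, Azure OpenAI routes requests to your own Azure resource, identified by the Resource Name and Deployment ID parameters in addition to the model name. A hedged sketch (all element and attribute names, and the resource/deployment values, are placeholders):

```xml
<!-- Illustrative only: names and values are placeholders. -->
<ms-inference:text-generation-config name="Azure_OpenAI_Config">
  <ms-inference:azure-openai-connection azureModelName="gpt-4o"
                                        resourceName="my-azure-resource"
                                        deploymentId="my-gpt4o-deployment"
                                        apiKey="${azure.apiKey}"
                                        maxTokens="500"
                                        temperature="0.2"
                                        topP="0.9"/>
</ms-inference:text-generation-config>
```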

Cerebras
Parameters
| Name | Type | Description | Default Value | Required |
|------|------|-------------|---------------|----------|
| Proxy Configuration | One of: | Configures a proxy for outbound connections. |  |  |
| Cerebras Model Name | String | Model name. |  | x |
| TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. |  |  |
| API Key | String | API key as required by the inference provider. |  | x |
| Timeout | Number | Response timeout value set for each inference HTTP request. | 60 |  |
| Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS |  |
| Max Tokens | Number | Maximum number of LLM tokens to use when generating a response. Helps control usage and costs when engaging with LLMs. | 500 |  |
| Temperature | Number | Number between 0 and 2 that controls the randomness of the output: higher values produce more random output; lower values (toward 0) produce more deterministic output. | 0.9 |  |
| Top P | Number | Controls diversity through nucleus sampling: the next token is drawn from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 |  |
| Reconnection |  | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy. |  |  |

Cohere
Parameters
| Name | Type | Description | Default Value | Required |
|------|------|-------------|---------------|----------|
| Proxy Configuration | One of: | Configures a proxy for outbound connections. |  |  |
| Cohere Model Name | String | Model name. |  | x |
| TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. |  |  |
| API Key | String | API key as required by the inference provider. |  | x |
| Timeout | Number | Response timeout value set for each inference HTTP request. | 60 |  |
| Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS |  |
| Max Tokens | Number | Maximum number of LLM tokens to use when generating a response. Helps control usage and costs when engaging with LLMs. | 500 |  |
| Temperature | Number | Number between 0 and 2 that controls the randomness of the output: higher values produce more random output; lower values (toward 0) produce more deterministic output. | 0.9 |  |
| Top P | Number | Controls diversity through nucleus sampling: the next token is drawn from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 |  |
| Reconnection |  | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy. |  |  |

Databricks
Parameters
| Name | Type | Description | Default Value | Required |
|------|------|-------------|---------------|----------|
| Proxy Configuration | One of: | Configures a proxy for outbound connections. |  |  |
| [Databricks] Serving Endpoint Name | String | Databricks serving endpoint name. |  | x |
| [Databricks] Serving Endpoint URL host | String | Databricks serving endpoint URL host. |  | x |
| TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. |  |  |
| API Key | String | API key as required by the inference provider. |  | x |
| Timeout | Number | Response timeout value set for each inference HTTP request. | 60 |  |
| Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS |  |
| Max Tokens | Number | Maximum number of LLM tokens to use when generating a response. Helps control usage and costs when engaging with LLMs. | 500 |  |
| Temperature | Number | Number between 0 and 2 that controls the randomness of the output: higher values produce more random output; lower values (toward 0) produce more deterministic output. | 0.9 |  |
| Top P | Number | Controls diversity through nucleus sampling: the next token is drawn from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 |  |
| Reconnection |  | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy. |  |  |

DeepInfra
Parameters
| Name | Type | Description | Default Value | Required |
|------|------|-------------|---------------|----------|
| Proxy Configuration | One of: | Configures a proxy for outbound connections. |  |  |
| Deep Infra Model Name | String | Model name. |  | x |
| TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. |  |  |
| API Key | String | API key as required by the inference provider. |  | x |
| Timeout | Number | Response timeout value set for each inference HTTP request. | 60 |  |
| Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS |  |
| Max Tokens | Number | Maximum number of LLM tokens to use when generating a response. Helps control usage and costs when engaging with LLMs. | 500 |  |
| Temperature | Number | Number between 0 and 2 that controls the randomness of the output: higher values produce more random output; lower values (toward 0) produce more deterministic output. | 0.9 |  |
| Top P | Number | Controls diversity through nucleus sampling: the next token is drawn from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 |  |
| Reconnection |  | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy. |  |  |

Deepseek
Parameters
| Name | Type | Description | Default Value | Required |
|------|------|-------------|---------------|----------|
| Proxy Configuration | One of: | Configures a proxy for outbound connections. |  |  |
| Deepseek Model Name | String | Model name. |  | x |
| TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. |  |  |
| API Key | String | API key as required by the inference provider. |  | x |
| Timeout | Number | Response timeout value set for each inference HTTP request. | 60 |  |
| Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS |  |
| Max Tokens | Number | Maximum number of LLM tokens to use when generating a response. Helps control usage and costs when engaging with LLMs. | 500 |  |
| Temperature | Number | Number between 0 and 2 that controls the randomness of the output: higher values produce more random output; lower values (toward 0) produce more deterministic output. | 0.9 |  |
| Top P | Number | Controls diversity through nucleus sampling: the next token is drawn from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 |  |
| Reconnection |  | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy. |  |  |

Docker
Parameters
| Name | Type | Description | Default Value | Required |
|------|------|-------------|---------------|----------|
| Proxy Configuration | One of: | Configures a proxy for outbound connections. |  |  |
| Docker Model Name | String | Model name. |  | x |
| [Docker Models] Base URL | String | Docker models base URL. | http://localhost:12434 |  |
| TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. |  |  |
| API Key | String | API key as required by the inference provider. |  | x |
| Timeout | Number | Response timeout value set for each inference HTTP request. | 60 |  |
| Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS |  |
| Max Tokens | Number | Maximum number of LLM tokens to use when generating a response. Helps control usage and costs when engaging with LLMs. | 500 |  |
| Temperature | Number | Number between 0 and 2 that controls the randomness of the output: higher values produce more random output; lower values (toward 0) produce more deterministic output. | 0.9 |  |
| Top P | Number | Controls diversity through nucleus sampling: the next token is drawn from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 |  |
| Reconnection |  | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy. |  |  |
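Local runtimes such as Docker's model runner, GPT4All, and LM Studio expose an endpoint on localhost, which is why their connections add a Base URL parameter (http://localhost:12434 by default for Docker models). A sketch with placeholder element and attribute names (the model name and dummy key are also illustrative; note the table marks API Key as required even for a local endpoint):

```xml
<!-- Illustrative only: element and attribute names are placeholders. -->
<ms-inference:text-generation-config name="Local_Docker_Config">
  <ms-inference:docker-connection dockerModelName="ai/llama3.2"
                                  baseUrl="http://localhost:12434"
                                  apiKey="local-dummy-key"
                                  timeout="60"
                                  timeoutUnit="SECONDS"/>
</ms-inference:text-generation-config>
```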

Fireworks
Parameters
| Name | Type | Description | Default Value | Required |
|------|------|-------------|---------------|----------|
| Proxy Configuration | One of: | Configures a proxy for outbound connections. |  |  |
| Fireworks Model Name | String | Model name. |  | x |
| TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. |  |  |
| API Key | String | API key as required by the inference provider. |  | x |
| Timeout | Number | Response timeout value set for each inference HTTP request. | 60 |  |
| Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS |  |
| Max Tokens | Number | Maximum number of LLM tokens to use when generating a response. Helps control usage and costs when engaging with LLMs. | 500 |  |
| Temperature | Number | Number between 0 and 2 that controls the randomness of the output: higher values produce more random output; lower values (toward 0) produce more deterministic output. | 0.9 |  |
| Top P | Number | Controls diversity through nucleus sampling: the next token is drawn from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 |  |
| Reconnection |  | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy. |  |  |

GitHub
Parameters
| Name | Type | Description | Default Value | Required |
|------|------|-------------|---------------|----------|
| Proxy Configuration | One of: | Configures a proxy for outbound connections. |  |  |
| Git Hub Model Name | String | Model name. |  | x |
| TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. |  |  |
| API Key | String | API key as required by the inference provider. |  | x |
| Timeout | Number | Response timeout value set for each inference HTTP request. | 60 |  |
| Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS |  |
| Max Tokens | Number | Maximum number of LLM tokens to use when generating a response. Helps control usage and costs when engaging with LLMs. | 500 |  |
| Temperature | Number | Number between 0 and 2 that controls the randomness of the output: higher values produce more random output; lower values (toward 0) produce more deterministic output. | 0.9 |  |
| Top P | Number | Controls diversity through nucleus sampling: the next token is drawn from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 |  |
| Reconnection |  | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy. |  |  |

GPT4All
Parameters
| Name | Type | Description | Default Value | Required |
|------|------|-------------|---------------|----------|
| Proxy Configuration | One of: | Configures a proxy for outbound connections. |  |  |
| Gpt4all Model Name | String | Model name. |  | x |
| [GPT4ALL] Base URL | String | GPT4ALL base URL. | http://localhost:4891/v1 |  |
| TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. |  |  |
| API Key | String | API key as required by the inference provider. |  | x |
| Timeout | Number | Response timeout value set for each inference HTTP request. | 60 |  |
| Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS |  |
| Max Tokens | Number | Maximum number of LLM tokens to use when generating a response. Helps control usage and costs when engaging with LLMs. | 500 |  |
| Temperature | Number | Number between 0 and 2 that controls the randomness of the output: higher values produce more random output; lower values (toward 0) produce more deterministic output. | 0.9 |  |
| Top P | Number | Controls diversity through nucleus sampling: the next token is drawn from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 |  |
| Reconnection |  | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy. |  |  |

Groq
Parameters
| Name | Type | Description | Default Value | Required |
|------|------|-------------|---------------|----------|
| Proxy Configuration | One of: | Configures a proxy for outbound connections. |  |  |
| Groq Model Name | String | Model name. |  | x |
| TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. |  |  |
| API Key | String | API key as required by the inference provider. |  | x |
| Timeout | Number | Response timeout value set for each inference HTTP request. | 60 |  |
| Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS |  |
| Max Tokens | Number | Maximum number of LLM tokens to use when generating a response. Helps control usage and costs when engaging with LLMs. | 500 |  |
| Temperature | Number | Number between 0 and 2 that controls the randomness of the output: higher values produce more random output; lower values (toward 0) produce more deterministic output. | 0.9 |  |
| Top P | Number | Controls diversity through nucleus sampling: the next token is drawn from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 |  |
| Reconnection |  | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy. |  |  |

Heroku AI
Parameters
| Name | Type | Description | Default Value | Required |
|------|------|-------------|---------------|----------|
| Proxy Configuration | One of: | Configures a proxy for outbound connections. |  |  |
| Heroku Inference Model | String | Model name. | claude-3-7-sonnet |  |
| Heroku Inference URL | String | Inference URL. | https://us.inference.heroku.com |  |
| TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. |  |  |
| API Key | String | API key as required by the inference provider. |  | x |
| Timeout | Number | Response timeout value set for each inference HTTP request. | 60 |  |
| Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS |  |
| Max Tokens | Number | Maximum number of LLM tokens to use when generating a response. Helps control usage and costs when engaging with LLMs. | 500 |  |
| Temperature | Number | Number between 0 and 2 that controls the randomness of the output: higher values produce more random output; lower values (toward 0) produce more deterministic output. | 0.9 |  |
| Top P | Number | Controls diversity through nucleus sampling: the next token is drawn from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 |  |
| Reconnection |  | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy. |  |  |

Hugging Face
Parameters
| Name | Type | Description | Default Value | Required |
|------|------|-------------|---------------|----------|
| Proxy Configuration | One of: | Configures a proxy for outbound connections. |  |  |
| Hugging Face Model Name | String | Model name. |  | x |
| TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. |  |  |
| API Key | String | API key as required by the inference provider. |  | x |
| Timeout | Number | Response timeout value set for each inference HTTP request. | 60 |  |
| Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS |  |
| Max Tokens | Number | Maximum number of LLM tokens to use when generating a response. Helps control usage and costs when engaging with LLMs. | 500 |  |
| Temperature | Number | Number between 0 and 2 that controls the randomness of the output: higher values produce more random output; lower values (toward 0) produce more deterministic output. | 0.9 |  |
| Top P | Number | Controls diversity through nucleus sampling: the next token is drawn from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 |  |
| Reconnection |  | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy. |  |  |

LM Studio
Parameters
Name Type Description Default Value Required

Proxy Configuration

One of:

Configures a proxy for outbound connections.

Lm Studio Model Name

String

Model name.

x

[LM Studio] Base URL

String

LM Studio base URL.

http://localhost:1234/v1

TLS Configuration

TLS

If HTTPS is configured as a protocol, then you must configure at least the keystore configuration.

API Key

String

API key as required by the inference provider.

x

Timeout

Number

Response timeout value set for each inference HTTP request.

60

Timeout Unit

Enumeration, one of:

  • NANOSECONDS

  • MICROSECONDS

  • MILLISECONDS

  • SECONDS

  • MINUTES

  • HOURS

  • DAYS

Response timeout unit for the Timeout field.

SECONDS

Max Tokens

Number

The maximum number of tokens the model can generate in the response. This field helps control usage and costs when working with LLMs.

500

Temperature

Number

Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic.

0.9

Top P

Number

Controls diversity by creating a nucleus of the most probable words to choose from for the next token. Specifies the cumulative probability score threshold that the tokens must reach.

0.9

Reconnection

When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy.
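Because LM Studio typically runs as a local server, the base URL usually points at localhost. The following is a hedged sketch — the element and attribute names are assumptions based on the table above, and the model name is an example identifier of the kind a local LM Studio instance exposes:

```xml
<!-- Assumed element/attribute names; confirm against the connector schema.
     The model name is an example. -->
<ms-inference:config name="LmStudio_Config">
  <ms-inference:lm-studio-connection
    lmStudioModelName="qwen2.5-7b-instruct"
    lmStudioBaseURL="http://localhost:1234/v1"
    apiKey="${lmstudio.apiKey}"
    maxTokens="500"/>
</ms-inference:config>
```

Note that the API Key field is marked required even though a local LM Studio server may not enforce authentication, so a placeholder value is still supplied.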

Mistral AI
Parameters
Name Type Description Default Value Required

Proxy Configuration

One of:

Configures a proxy for outbound connections.

Mistral AI Model Name

String

Model name.

x

TLS Configuration

TLS

If HTTPS is configured as a protocol, then you must configure at least the keystore configuration.

API Key

String

API key as required by the inference provider.

x

Timeout

Number

Response timeout value set for each inference HTTP request.

60

Timeout Unit

Enumeration, one of:

  • NANOSECONDS

  • MICROSECONDS

  • MILLISECONDS

  • SECONDS

  • MINUTES

  • HOURS

  • DAYS

Response timeout unit for the Timeout field.

SECONDS

Max Tokens

Number

The maximum number of tokens the model can generate in the response. This field helps control usage and costs when working with LLMs.

500

Temperature

Number

Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic.

0.9

Top P

Number

Controls diversity by creating a nucleus of the most probable words to choose from for the next token. Specifies the cumulative probability score threshold that the tokens must reach.

0.9

Reconnection

When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy.

Nvidia
Parameters
Name Type Description Default Value Required

Proxy Configuration

One of:

Configures a proxy for outbound connections.

Nvidia Model Name

String

Model name.

x

TLS Configuration

TLS

If HTTPS is configured as a protocol, then you must configure at least the keystore configuration.

API Key

String

API key as required by the inference provider.

x

Timeout

Number

Response timeout value set for each inference HTTP request.

60

Timeout Unit

Enumeration, one of:

  • NANOSECONDS

  • MICROSECONDS

  • MILLISECONDS

  • SECONDS

  • MINUTES

  • HOURS

  • DAYS

Response timeout unit for the Timeout field.

SECONDS

Max Tokens

Number

The maximum number of tokens the model can generate in the response. This field helps control usage and costs when working with LLMs.

500

Temperature

Number

Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic.

0.9

Top P

Number

Controls diversity by creating a nucleus of the most probable words to choose from for the next token. Specifies the cumulative probability score threshold that the tokens must reach.

0.9

Reconnection

When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy.

Ollama
Parameters
Name Type Description Default Value Required

Proxy Configuration

One of:

Configures a proxy for outbound connections.

Ollama Model Name

String

Model name.

x

[Ollama] Base URL

String

Ollama base URL.

http://localhost:11434/api

TLS Configuration

TLS

If HTTPS is configured as a protocol, then you must configure at least the keystore configuration.

API Key

String

API key as required by the inference provider.

x

Timeout

Number

Response timeout value set for each inference HTTP request.

60

Timeout Unit

Enumeration, one of:

  • NANOSECONDS

  • MICROSECONDS

  • MILLISECONDS

  • SECONDS

  • MINUTES

  • HOURS

  • DAYS

Response timeout unit for the Timeout field.

SECONDS

Max Tokens

Number

The maximum number of tokens the model can generate in the response. This field helps control usage and costs when working with LLMs.

500

Temperature

Number

Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic.

0.9

Top P

Number

Controls diversity by creating a nucleus of the most probable words to choose from for the next token. Specifies the cumulative probability score threshold that the tokens must reach.

0.9

Reconnection

When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy.
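An Ollama connection follows the same pattern but targets the local Ollama daemon's API path (`/api` rather than an OpenAI-style `/v1`). The names below are assumptions, and the model tag is an example:

```xml
<!-- Assumed names; "llama3" is an example model tag — substitute
     whatever model your local Ollama instance has pulled. -->
<ms-inference:config name="Ollama_Config">
  <ms-inference:ollama-connection
    ollamaModelName="llama3"
    ollamaBaseURL="http://localhost:11434/api"
    apiKey="${ollama.apiKey}"
    temperature="0.9"/>
</ms-inference:config>
```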

OpenAI
Parameters
Name Type Description Default Value Required

Proxy Configuration

One of:

Configures a proxy for outbound connections.

Open AI Model Name

String

OpenAI model name.

gpt-4o-mini

TLS Configuration

TLS

If HTTPS is configured as a protocol, then you must configure at least the keystore configuration.

API Key

String

API key as required by the inference provider.

x

Timeout

Number

Response timeout value set for each inference HTTP request.

60

Timeout Unit

Enumeration, one of:

  • NANOSECONDS

  • MICROSECONDS

  • MILLISECONDS

  • SECONDS

  • MINUTES

  • HOURS

  • DAYS

Response timeout unit for the Timeout field.

SECONDS

Max Tokens

Number

The maximum number of tokens the model can generate in the response. This field helps control usage and costs when working with LLMs.

500

Temperature

Number

Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic.

0.9

Top P

Number

Controls diversity by creating a nucleus of the most probable words to choose from for the next token. Specifies the cumulative probability score threshold that the tokens must reach.

0.9

Reconnection

When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy.
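OpenAI is one of the few connection types in this list whose model name has a default (`gpt-4o-mini`), so a minimal configuration can omit it and supply only the API key. As elsewhere, the element and attribute names below are assumptions to be checked against the connector's schema:

```xml
<!-- Minimal sketch: only the API key is strictly required here because the
     model name defaults to gpt-4o-mini. Names are assumptions. -->
<ms-inference:config name="OpenAI_Config">
  <ms-inference:open-ai-connection
    apiKey="${openai.apiKey}"/>
</ms-inference:config>
```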

OpenAI Compatible Endpoint
Parameters
Name Type Description Default Value Required

Proxy Configuration

One of:

Configures a proxy for outbound connections.

Open AI Compatible Model Name

String

OpenAI compatible model name.

gpt-4o-mini

[OpenAI Compatible] Base URL

String

OpenAI compatible base URL.

https://server.endpoint.com

TLS Configuration

TLS

If HTTPS is configured as a protocol, then you must configure at least the keystore configuration.

API Key

String

API key as required by the inference provider.

x

Timeout

Number

Response timeout value set for each inference HTTP request.

60

Timeout Unit

Enumeration, one of:

  • NANOSECONDS

  • MICROSECONDS

  • MILLISECONDS

  • SECONDS

  • MINUTES

  • HOURS

  • DAYS

Response timeout unit for the Timeout field.

SECONDS

Max Tokens

Number

The maximum number of tokens the model can generate in the response. This field helps control usage and costs when working with LLMs.

500

Temperature

Number

Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic.

0.9

Top P

Number

Controls diversity by creating a nucleus of the most probable words to choose from for the next token. Specifies the cumulative probability score threshold that the tokens must reach.

0.9

Reconnection

When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy.
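The OpenAI Compatible Endpoint connection covers any provider that speaks the OpenAI chat API but has no dedicated connection type. The base URL shown in the table is a placeholder default; in practice you point it at your provider's endpoint. Names below are assumptions:

```xml
<!-- Assumed names; replace the base URL with your provider's
     OpenAI-compatible endpoint. -->
<ms-inference:config name="Compatible_Config">
  <ms-inference:open-ai-compatible-connection
    openAICompatibleModelName="gpt-4o-mini"
    openAICompatibleBaseURL="https://server.endpoint.com"
    apiKey="${provider.apiKey}"/>
</ms-inference:config>
```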

OpenRouter
Parameters
Name Type Description Default Value Required

Proxy Configuration

One of:

Configures a proxy for outbound connections.

Open Router Model Name

String

Model name.

x

TLS Configuration

TLS

If HTTPS is configured as a protocol, then you must configure at least the keystore configuration.

API Key

String

API key as required by the inference provider.

x

Timeout

Number

Response timeout value set for each inference HTTP request.

60

Timeout Unit

Enumeration, one of:

  • NANOSECONDS

  • MICROSECONDS

  • MILLISECONDS

  • SECONDS

  • MINUTES

  • HOURS

  • DAYS

Response timeout unit for the Timeout field.

SECONDS

Max Tokens

Number

The maximum number of tokens the model can generate in the response. This field helps control usage and costs when working with LLMs.

500

Temperature

Number

Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic.

0.9

Top P

Number

Controls diversity by creating a nucleus of the most probable words to choose from for the next token. Specifies the cumulative probability score threshold that the tokens must reach.

0.9

Reconnection

When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy.

Perplexity
Parameters
Name Type Description Default Value Required

Proxy Configuration

One of:

Configures a proxy for outbound connections.

Perplexity Model Name

String

Model name.

x

TLS Configuration

TLS

If HTTPS is configured as a protocol, then you must configure at least the keystore configuration.

API Key

String

API key as required by the inference provider.

x

Timeout

Number

Response timeout value set for each inference HTTP request.

60

Timeout Unit

Enumeration, one of:

  • NANOSECONDS

  • MICROSECONDS

  • MILLISECONDS

  • SECONDS

  • MINUTES

  • HOURS

  • DAYS

Response timeout unit for the Timeout field.

SECONDS

Max Tokens

Number

The maximum number of tokens the model can generate in the response. This field helps control usage and costs when working with LLMs.

500

Temperature

Number

Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic.

0.9

Top P

Number

Controls diversity by creating a nucleus of the most probable words to choose from for the next token. Specifies the cumulative probability score threshold that the tokens must reach.

0.9

Reconnection

When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy.

Portkey
Parameters
Name Type Description Default Value Required

Proxy Configuration

One of:

Configures a proxy for outbound connections.

Portkey Model Name

String

Model name.

x

[Portkey] Virtual Key

String

Portkey virtual key.

TLS Configuration

TLS

If HTTPS is configured as a protocol, then you must configure at least the keystore configuration.

API Key

String

API key as required by the inference provider.

x

Timeout

Number

Response timeout value set for each inference HTTP request.

60

Timeout Unit

Enumeration, one of:

  • NANOSECONDS

  • MICROSECONDS

  • MILLISECONDS

  • SECONDS

  • MINUTES

  • HOURS

  • DAYS

Response timeout unit for the Timeout field.

SECONDS

Max Tokens

Number

The maximum number of tokens the model can generate in the response. This field helps control usage and costs when working with LLMs.

500

Temperature

Number

Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic.

0.9

Top P

Number

Controls diversity by creating a nucleus of the most probable words to choose from for the next token. Specifies the cumulative probability score threshold that the tokens must reach.

0.9

Reconnection

When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy.
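Portkey adds one provider-specific field, the virtual key, which selects the upstream provider credentials stored in Portkey. A hedged sketch (all names are assumptions, and the model name is an example):

```xml
<!-- Assumed names; model name and virtual key value are hypothetical. -->
<ms-inference:config name="Portkey_Config">
  <ms-inference:portkey-connection
    portkeyModelName="gpt-4o"
    portkeyVirtualKey="${portkey.virtualKey}"
    apiKey="${portkey.apiKey}"/>
</ms-inference:config>
```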

Together
Parameters
Name Type Description Default Value Required

Proxy Configuration

One of:

Configures a proxy for outbound connections.

Together Model Name

String

Model name.

x

TLS Configuration

TLS

If HTTPS is configured as a protocol, then you must configure at least the keystore configuration.

API Key

String

API key as required by the inference provider.

x

Timeout

Number

Response timeout value set for each inference HTTP request.

60

Timeout Unit

Enumeration, one of:

  • NANOSECONDS

  • MICROSECONDS

  • MILLISECONDS

  • SECONDS

  • MINUTES

  • HOURS

  • DAYS

Response timeout unit for the Timeout field.

SECONDS

Max Tokens

Number

The maximum number of tokens the model can generate in the response. This field helps control usage and costs when working with LLMs.

500

Temperature

Number

Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic.

0.9

Top P

Number

Controls diversity by creating a nucleus of the most probable words to choose from for the next token. Specifies the cumulative probability score threshold that the tokens must reach.

0.9

Reconnection

When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy.

Vertex AI Express
Parameters
Name Type Description Default Value Required

Proxy Configuration

One of:

Configures a proxy for outbound connections.

Vertex AI Express Model Name

String

Model name.

x

TLS Configuration

TLS

If HTTPS is configured as a protocol, then you must configure at least the keystore configuration.

API Key

String

API key as required by the inference provider.

x

Timeout

Number

Response timeout value set for each inference HTTP request.

60

Timeout Unit

Enumeration, one of:

  • NANOSECONDS

  • MICROSECONDS

  • MILLISECONDS

  • SECONDS

  • MINUTES

  • HOURS

  • DAYS

Response timeout unit for the Timeout field.

SECONDS

Max Tokens

Number

The maximum number of tokens the model can generate in the response. This field helps control usage and costs when working with LLMs.

500

Temperature

Number

Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic.

0.9

Top P

Number

Controls diversity by creating a nucleus of the most probable words to choose from for the next token. Specifies the cumulative probability score threshold that the tokens must reach.

0.9

Reconnection

When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy.

xAI
Parameters
Name Type Description Default Value Required

Proxy Configuration

One of:

Configures a proxy for outbound connections.

X Ai Model Name

String

Model name.

x

TLS Configuration

TLS

If HTTPS is configured as a protocol, then you must configure at least the keystore configuration.

API Key

String

API key as required by the inference provider.

x

Timeout

Number

Response timeout value set for each inference HTTP request.

60

Timeout Unit

Enumeration, one of:

  • NANOSECONDS

  • MICROSECONDS

  • MILLISECONDS

  • SECONDS

  • MINUTES

  • HOURS

  • DAYS

Response timeout unit for the Timeout field.

SECONDS

Max Tokens

Number

The maximum number of tokens the model can generate in the response. This field helps control usage and costs when working with LLMs.

500

Temperature

Number

Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic.

0.9

Top P

Number

Controls diversity by creating a nucleus of the most probable words to choose from for the next token. Specifies the cumulative probability score threshold that the tokens must reach.

0.9

Reconnection

When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy.

XInference
Parameters
Name Type Description Default Value Required

Proxy Configuration

One of:

Configures a proxy for outbound connections.

X Inference Model Name

String

Model name.

x

[Xinference] Base URL

String

Xinference base URL.

https://inference.top/api/v1

TLS Configuration

TLS

If HTTPS is configured as a protocol, then you must configure at least the keystore configuration.

API Key

String

API key as required by the inference provider.

x

Timeout

Number

Response timeout value set for each inference HTTP request.

60

Timeout Unit

Enumeration, one of:

  • NANOSECONDS

  • MICROSECONDS

  • MILLISECONDS

  • SECONDS

  • MINUTES

  • HOURS

  • DAYS

Response timeout unit for the Timeout field.

SECONDS

Max Tokens

Number

The maximum number of tokens the model can generate in the response. This field helps control usage and costs when working with LLMs.

500

Temperature

Number

Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic.

0.9

Top P

Number

Controls diversity by creating a nucleus of the most probable words to choose from for the next token. Specifies the cumulative probability score threshold that the tokens must reach.

0.9

Reconnection

When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy.

Zhipu AI
Parameters
Name Type Description Default Value Required

Proxy Configuration

One of:

Configures a proxy for outbound connections.

Zhipu AI Model Name

String

Model name.

x

TLS Configuration

TLS

If HTTPS is configured as a protocol, then you must configure at least the keystore configuration.

API Key

String

API key as required by the inference provider.

x

Timeout

Number

Response timeout value set for each inference HTTP request.

60

Timeout Unit

Enumeration, one of:

  • NANOSECONDS

  • MICROSECONDS

  • MILLISECONDS

  • SECONDS

  • MINUTES

  • HOURS

  • DAYS

Response timeout unit for the Timeout field.

SECONDS

Max Tokens

Number

The maximum number of tokens the model can generate in the response. This field helps control usage and costs when working with LLMs.

500

Temperature

Number

Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic.

0.9

Top P

Number

Controls diversity by creating a nucleus of the most probable words to choose from for the next token. Specifies the cumulative probability score threshold that the tokens must reach.

0.9

Reconnection

When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy.

Supported Operations


Vision Config

Parameters

Name Type Description Default Value Required

Name

String

Name for this configuration. Connectors reference the configuration with this name.

x

Connection

Connection types for this configuration.

x

Name

String

ID used to reference this configuration.

x

Expiration Policy

Configures an expiration policy for the configuration.

Connection Types

Anthropic
Parameters
Name Type Description Default Value Required

Proxy Configuration

One of:

Configures a proxy for outbound connections.

Anthropic Model Name

String

Model name.

x

TLS Configuration

TLS

If HTTPS is configured as a protocol, then you must configure at least the keystore configuration.

API Key

String

API key as required by the inference provider.

x

Timeout

Number

Response timeout value set for each inference HTTP request.

60

Timeout Unit

Enumeration, one of:

  • NANOSECONDS

  • MICROSECONDS

  • MILLISECONDS

  • SECONDS

  • MINUTES

  • HOURS

  • DAYS

Response timeout unit for the Timeout field.

SECONDS

Max Tokens

Number

The maximum number of tokens the model can generate in the response. This field helps control usage and costs when working with LLMs.

500

Temperature

Number

Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic.

0.9

Top P

Number

Controls diversity by creating a nucleus of the most probable words to choose from for the next token. Specifies the cumulative probability score threshold that the tokens must reach.

0.9

Reconnection

When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy.
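A Vision configuration is declared the same way as the earlier configurations, just against a vision-capable connection. The sketch below pairs the Vision Config with an Anthropic connection; all element and attribute names are assumptions, and the model identifier is an example:

```xml
<!-- Assumed names; claude-3-5-sonnet-latest is an example model id. -->
<ms-inference:vision-config name="Vision_Config">
  <ms-inference:anthropic-connection
    anthropicModelName="claude-3-5-sonnet-latest"
    apiKey="${anthropic.apiKey}"
    maxTokens="500"/>
</ms-inference:vision-config>
```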

Azure AI Foundry
Parameters
Name Type Description Default Value Required

Proxy Configuration

One of:

Configures a proxy for outbound connections.

Azure AI Foundry Model Name

String

Model name.

x

[Azure AI Foundry] Resource Name

String

Azure AI Foundry resource name.

[Azure AI Foundry] API Version

String

Azure AI Foundry API version.

TLS Configuration

TLS

If HTTPS is configured as a protocol, then you must configure at least the keystore configuration.

API Key

String

API key as required by the inference provider.

x

Timeout

Number

Response timeout value set for each inference HTTP request.

60

Timeout Unit

Enumeration, one of:

  • NANOSECONDS

  • MICROSECONDS

  • MILLISECONDS

  • SECONDS

  • MINUTES

  • HOURS

  • DAYS

Response timeout unit for the Timeout field.

SECONDS

Max Tokens

Number

The maximum number of tokens the model can generate in the response. This field helps control usage and costs when working with LLMs.

500

Temperature

Number

Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic.

0.9

Top P

Number

Controls diversity by creating a nucleus of the most probable words to choose from for the next token. Specifies the cumulative probability score threshold that the tokens must reach.

0.9

Reconnection

When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy.
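Azure AI Foundry is addressed by resource rather than by a full URL: the connector presumably assembles the endpoint from the resource name and API version fields above. The sketch below reflects that; the names and values are assumptions:

```xml
<!-- Assumed names; resource name and API version are hypothetical values. -->
<ms-inference:vision-config name="AzureFoundry_Vision_Config">
  <ms-inference:azure-ai-foundry-connection
    azureAIFoundryModelName="gpt-4o"
    azureAIFoundryResourceName="my-foundry-resource"
    azureAIFoundryApiVersion="2024-05-01-preview"
    apiKey="${azure.apiKey}"/>
</ms-inference:vision-config>
```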

GitHub
Parameters
Name Type Description Default Value Required

Proxy Configuration

One of:

Configures a proxy for outbound connections.

Git Hub Model Name

String

Model name.

x

TLS Configuration

TLS

If HTTPS is configured as a protocol, then you must configure at least the keystore configuration.

API Key

String

API key as required by the inference provider.

x

Timeout

Number

Response timeout value set for each inference HTTP request.

60

Timeout Unit

Enumeration, one of:

  • NANOSECONDS

  • MICROSECONDS

  • MILLISECONDS

  • SECONDS

  • MINUTES

  • HOURS

  • DAYS

Response timeout unit for the Timeout field.

SECONDS

Max Tokens

Number

The maximum number of tokens the model can generate in the response. This field helps control usage and costs when working with LLMs.

500

Temperature

Number

Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic.

0.9

Top P

Number

Controls diversity by creating a nucleus of the most probable words to choose from for the next token. Specifies the cumulative probability score threshold that the tokens must reach.

0.9

Reconnection

When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy.

Groq
Parameters
Name Type Description Default Value Required

Proxy Configuration

One of:

Configures a proxy for outbound connections.

Groq Model Name

String

Model name.

x

TLS Configuration

TLS

If HTTPS is configured as a protocol, then you must configure at least the keystore configuration.

API Key

String

API key as required by the inference provider.

x

Timeout

Number

Response timeout value set for each inference HTTP request.

60

Timeout Unit

Enumeration, one of:

  • NANOSECONDS

  • MICROSECONDS

  • MILLISECONDS

  • SECONDS

  • MINUTES

  • HOURS

  • DAYS

Response timeout unit for the Timeout field.

SECONDS

Max Tokens

Number

The maximum number of tokens the model can generate in the response. This field helps control usage and costs when working with LLMs.

500

Temperature

Number

Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic.

0.9

Top P

Number

Controls diversity by creating a nucleus of the most probable words to choose from for the next token. Specifies the cumulative probability score threshold that the tokens must reach.

0.9

Reconnection

When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy.

Hugging Face
Parameters
Name Type Description Default Value Required

Proxy Configuration

One of:

Configures a proxy for outbound connections.

Hugging Face Model Name

String

Model name.

x

TLS Configuration

TLS

If HTTPS is configured as a protocol, then you must configure at least the keystore configuration.

API Key

String

API key as required by the inference provider.

x

Timeout

Number

Response timeout value set for each inference HTTP request.

60

Timeout Unit

Enumeration, one of:

  • NANOSECONDS

  • MICROSECONDS

  • MILLISECONDS

  • SECONDS

  • MINUTES

  • HOURS

  • DAYS

Response timeout unit for the Timeout field.

SECONDS

Max Tokens

Number

The maximum number of tokens the model can generate in the response. This field helps control usage and costs when working with LLMs.

500

Temperature

Number

Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic.

0.9

Top P

Number

Controls diversity by creating a nucleus of the most probable words to choose from for the next token. Specifies the cumulative probability score threshold that the tokens must reach.

0.9

Reconnection

When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy.

Mistral AI
Parameters
Name Type Description Default Value Required

Proxy Configuration

One of:

Configures a proxy for outbound connections.

Mistral AI Model Name

String

Model name.

x

TLS Configuration

TLS

If HTTPS is configured as a protocol, then you must configure at least the keystore configuration.

API Key

String

API key as required by the inference provider.

x

Timeout

Number

Response timeout value set for each inference HTTP request.

60

Timeout Unit

Enumeration, one of:

  • NANOSECONDS

  • MICROSECONDS

  • MILLISECONDS

  • SECONDS

  • MINUTES

  • HOURS

  • DAYS

Response timeout unit for the Timeout field.

SECONDS

Max Tokens

Number

The maximum number of tokens the model can generate in the response. This field helps control usage and costs when working with LLMs.

500

Temperature

Number

Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic.

0.9

Top P

Number

Controls diversity by creating a nucleus of the most probable words to choose from for the next token. Specifies the cumulative probability score threshold that the tokens must reach.

0.9

Reconnection

When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy.

Ollama
Parameters
Name Type Description Default Value Required

Proxy Configuration

One of:

Configures a proxy for outbound connections.

Ollama Model Name

String

Model name.

x

[Ollama] Base URL

String

Ollama base URL.

http://localhost:11434/api

TLS Configuration

TLS

If HTTPS is configured as a protocol, then you must configure at least the keystore configuration.

API Key

String

API key as required by the inference provider.

x

Timeout

Number

Response timeout value set for each inference HTTP request.

60

Timeout Unit

Enumeration, one of:

  • NANOSECONDS

  • MICROSECONDS

  • MILLISECONDS

  • SECONDS

  • MINUTES

  • HOURS

  • DAYS

Response timeout unit for the Timeout field.

SECONDS

Max Tokens

Number

The maximum number of tokens the model can generate in the response. This field helps control usage and costs when working with LLMs.

500

Temperature

Number

Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic.

0.9

Top P

Number

Controls diversity by creating a nucleus of the most probable words to choose from for the next token. Specifies the cumulative probability score threshold that the tokens must reach.

0.9

Reconnection

When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy.

OpenAI
Parameters
Name Type Description Default Value Required

Proxy Configuration

One of:

Configures a proxy for outbound connections.

Open AI Model Name

String

Model name.

x

TLS Configuration

TLS

If HTTPS is configured as a protocol, then you must configure at least the keystore configuration.

API Key

String

API key as required by the inference provider.

x

Timeout

Number

Response timeout value set for each inference HTTP request.

60

Timeout Unit

Enumeration, one of:

  • NANOSECONDS

  • MICROSECONDS

  • MILLISECONDS

  • SECONDS

  • MINUTES

  • HOURS

  • DAYS

Response timeout unit for the Timeout field.

SECONDS

Max Tokens

Number

Maximum number of tokens that the LLM can generate in a response. This field helps control usage and costs when engaging with LLMs.

500

Temperature

Number

Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 produce more deterministic output.

0.9

Top P

Number

Controls diversity through nucleus sampling: when choosing the next token, the model considers only the smallest set of most probable tokens whose cumulative probability reaches this threshold.

0.9

Reconnection

When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy.
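As an illustration, an OpenAI connection for this connector might be configured roughly as follows. The element and attribute names in this sketch are assumptions inferred from the parameter table above, not taken from the connector's actual schema:

```xml
<!-- Sketch only: element and attribute names are inferred from the
     parameter names above and may differ from the connector schema. -->
<ms-inference:config name="OpenAI_Config">
  <ms-inference:openai-connection
      openAiModelName="gpt-4o"
      apiKey="${openai.apiKey}"
      maxTokens="500"
      temperature="0.9"
      topP="0.9"
      timeout="60"
      timeoutUnit="SECONDS"/>
</ms-inference:config>
```

Storing the API key in a secure property (here `${openai.apiKey}`) rather than inline is the usual Mule practice.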

OpenRouter
Parameters
Name Type Description Default Value Required

Proxy Configuration

One of:

Configures a proxy for outbound connections.

Open Router Model Name

String

Model name.

x

TLS Configuration

TLS

If HTTPS is configured as a protocol, then you must configure at least the keystore configuration.

API Key

String

API key as required by the inference provider.

x

Timeout

Number

Response timeout value set for each inference HTTP request.

60

Timeout Unit

Enumeration, one of:

  • NANOSECONDS

  • MICROSECONDS

  • MILLISECONDS

  • SECONDS

  • MINUTES

  • HOURS

  • DAYS

Response timeout unit for the Timeout field.

SECONDS

Max Tokens

Number

Maximum number of tokens that the LLM can generate in a response. This field helps control usage and costs when engaging with LLMs.

500

Temperature

Number

Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 produce more deterministic output.

0.9

Top P

Number

Controls diversity through nucleus sampling: when choosing the next token, the model considers only the smallest set of most probable tokens whose cumulative probability reaches this threshold.

0.9

Reconnection

When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy.

Portkey
Parameters
Name Type Description Default Value Required

Proxy Configuration

One of:

Configures a proxy for outbound connections.

Portkey Model Name

String

Model name.

x

[Portkey] Virtual Key

String

Portkey virtual key.

TLS Configuration

TLS

If HTTPS is configured as a protocol, then you must configure at least the keystore configuration.

API Key

String

API key as required by the inference provider.

x

Timeout

Number

Response timeout value set for each inference HTTP request.

60

Timeout Unit

Enumeration, one of:

  • NANOSECONDS

  • MICROSECONDS

  • MILLISECONDS

  • SECONDS

  • MINUTES

  • HOURS

  • DAYS

Response timeout unit for the Timeout field.

SECONDS

Max Tokens

Number

Maximum number of tokens that the LLM can generate in a response. This field helps control usage and costs when engaging with LLMs.

500

Temperature

Number

Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 produce more deterministic output.

0.9

Top P

Number

Controls diversity through nucleus sampling: when choosing the next token, the model considers only the smallest set of most probable tokens whose cumulative probability reaches this threshold.

0.9

Reconnection

When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy.

Vertex AI Express
Parameters
Name Type Description Default Value Required

Proxy Configuration

One of:

Configures a proxy for outbound connections.

Vertex AI Express Model Name

String

Model name.

x

TLS Configuration

TLS

If HTTPS is configured as a protocol, then you must configure at least the keystore configuration.

API Key

String

API key as required by the inference provider.

x

Timeout

Number

Response timeout value set for each inference HTTP request.

60

Timeout Unit

Enumeration, one of:

  • NANOSECONDS

  • MICROSECONDS

  • MILLISECONDS

  • SECONDS

  • MINUTES

  • HOURS

  • DAYS

Response timeout unit for the Timeout field.

SECONDS

Max Tokens

Number

Maximum number of tokens that the LLM can generate in a response. This field helps control usage and costs when engaging with LLMs.

500

Temperature

Number

Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 produce more deterministic output.

0.9

Top P

Number

Controls diversity through nucleus sampling: when choosing the next token, the model considers only the smallest set of most probable tokens whose cumulative probability reaches this threshold.

0.9

Reconnection

When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy.

xAI
Parameters
Name Type Description Default Value Required

Proxy Configuration

One of:

Configures a proxy for outbound connections.

X Ai Model Name

String

Model name.

x

TLS Configuration

TLS

If HTTPS is configured as a protocol, then you must configure at least the keystore configuration.

API Key

String

API key as required by the inference provider.

x

Timeout

Number

Response timeout value set for each inference HTTP request.

60

Timeout Unit

Enumeration, one of:

  • NANOSECONDS

  • MICROSECONDS

  • MILLISECONDS

  • SECONDS

  • MINUTES

  • HOURS

  • DAYS

Response timeout unit for the Timeout field.

SECONDS

Max Tokens

Number

Maximum number of tokens that the LLM can generate in a response. This field helps control usage and costs when engaging with LLMs.

500

Temperature

Number

Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 produce more deterministic output.

0.9

Top P

Number

Controls diversity through nucleus sampling: when choosing the next token, the model considers only the smallest set of most probable tokens whose cumulative probability reaches this threshold.

0.9

Reconnection

When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy.

Supported Operations

Operations

[Image] Generate (only Base64)

<ms-inference:generate-image>

This operation generates an image based on a prompt.

Parameters

Name Type Description Default Value Required

Configuration

String

Name of the configuration to use.

x

Prompt

String

User’s prompt.

#[payload]

Output Mime Type

String

MIME type of the payload that this operation outputs.

Output Encoding

String

Encoding of the payload that this operation outputs.

Config Ref

ConfigurationProvider

Name of the configuration to use to execute this component.

x

Streaming Strategy

Configures how Mule processes streams. Repeatable streams are the default behavior.

Target Variable

String

Name of the variable that stores the operation’s output.

Target Value

String

Expression that evaluates the operation’s output. The outcome of the expression is stored in the Target Variable field.

#[payload]

Error Mappings

Array of Error Mapping

Set of error mappings.

Reconnection Strategy

Retry strategy in case of connectivity errors.

Output

Type

Any

Attributes Type

For Configurations

Throws

  • MS-INFERENCE:CONNECTIVITY

  • MS-INFERENCE:IMAGE_GENERATION_FAILURE

  • MS-INFERENCE:INVALID_CONNECTION

  • MS-INFERENCE:RATE_LIMIT_EXCEEDED

  • MS-INFERENCE:RETRY_EXHAUSTED
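A minimal flow using this operation might look like the sketch below. The attribute names are assumptions based on the parameter table above:

```xml
<!-- Sketch: generate an image (returned as Base64) from the prompt in
     the inbound payload. Attribute names are illustrative assumptions. -->
<flow name="generate-image-flow">
  <ms-inference:generate-image
      config-ref="ImageGen_Config"
      prompt="#[payload]"/>
</flow>
```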

[Toxicity] Detection by Text

<ms-inference:toxicity-detection-text>

This operation detects harmful content in a text.

Parameters

Name Type Description Default Value Required

Configuration

String

Name of the configuration to use.

x

Text

Any

Text to moderate. Can be a single string or an array of strings.

#[payload]

Output Mime Type

String

MIME type of the payload that this operation outputs.

Output Encoding

String

Encoding of the payload that this operation outputs.

Config Ref

ConfigurationProvider

Name of the configuration to use to execute this component.

x

Streaming Strategy

Configures how Mule processes streams. Repeatable streams are the default behavior.

Target Variable

String

Name of the variable that stores the operation’s output.

Target Value

String

Expression that evaluates the operation’s output. The outcome of the expression is stored in the Target Variable field.

#[payload]

Error Mappings

Array of Error Mapping

Set of error mappings.

Reconnection Strategy

Retry strategy in case of connectivity errors.

Output

Type

Any

For Configurations

Throws

  • MS-INFERENCE:CONNECTIVITY

  • MS-INFERENCE:INVALID_CONNECTION

  • MS-INFERENCE:RATE_LIMIT_EXCEEDED

  • MS-INFERENCE:RETRY_EXHAUSTED

  • MS-INFERENCE:TOXICITY_DETECTION_OPERATION_FAILURE
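For illustration, moderating the inbound payload could be sketched as follows; attribute names are assumptions based on the parameter table above:

```xml
<!-- Sketch: detect harmful content in the text carried by the payload,
     which may be a single string or an array of strings. -->
<ms-inference:toxicity-detection-text
    config-ref="Inference_Config"
    text="#[payload]"/>
```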

[Agent] Define Prompt Template

<ms-inference:agent-define-prompt-template>

This operation defines a prompt template with instructions and data.

Parameters

Name Type Description Default Value Required

Configuration

String

Name of the configuration to use.

x

Template

String

Template string.

x

Instructions

String

Instructions for the LLM.

x

Data

String

Primary data content.

#[payload]

Output Mime Type

String

MIME type of the payload that this operation outputs.

Output Encoding

String

Encoding of the payload that this operation outputs.

Config Ref

ConfigurationProvider

Name of the configuration to use to execute this component.

x

Streaming Strategy

Configures how Mule processes streams. Repeatable streams are the default behavior.

Target Variable

String

Name of the variable that stores the operation’s output.

Target Value

String

Expression that evaluates the operation’s output. The outcome of the expression is stored in the Target Variable field.

#[payload]

Error Mappings

Array of Error Mapping

Set of error mappings.

Reconnection Strategy

Retry strategy in case of connectivity errors.

Output

Type

Any

Attributes Type

For Configurations

Throws

  • MS-INFERENCE:CHAT_OPERATION_FAILURE

  • MS-INFERENCE:CONNECTIVITY

  • MS-INFERENCE:INVALID_CONNECTION

  • MS-INFERENCE:INVALID_PROVIDER

  • MS-INFERENCE:RATE_LIMIT_EXCEEDED

  • MS-INFERENCE:RETRY_EXHAUSTED

  • MS-INFERENCE:TOOLS_OPERATION_FAILURE
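A hedged sketch of this operation follows; the template, instructions, and data values are invented for illustration, and the attribute names are inferred from the parameter table above:

```xml
<!-- Sketch: combine a reusable template, per-call instructions, and the
     payload as primary data. All values here are illustrative. -->
<ms-inference:agent-define-prompt-template
    config-ref="Inference_Config"
    template="You are a support assistant."
    instructions="Answer politely and concisely."
    data="#[payload]"/>
```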

[Chat] Answer Prompt

<ms-inference:chat-answer-prompt>

This operation provides a simple chat answer for a single prompt.

Parameters

Name Type Description Default Value Required

Configuration

String

Name of the configuration to use.

x

Prompt

String

User’s prompt.

#[payload]

Output Mime Type

String

MIME type of the payload that this operation outputs.

Output Encoding

String

Encoding of the payload that this operation outputs.

Config Ref

ConfigurationProvider

Name of the configuration to use to execute this component.

x

Streaming Strategy

Configures how Mule processes streams. Repeatable streams are the default behavior.

Target Variable

String

Name of the variable that stores the operation’s output.

Target Value

String

Expression that evaluates the operation’s output. The outcome of the expression is stored in the Target Variable field.

#[payload]

Error Mappings

Array of Error Mapping

Set of error mappings.

Reconnection Strategy

Retry strategy in case of connectivity errors.

Output

Type

Any

Attributes Type

For Configurations

Throws

  • MS-INFERENCE:CHAT_OPERATION_FAILURE

  • MS-INFERENCE:CONNECTIVITY

  • MS-INFERENCE:INVALID_CONNECTION

  • MS-INFERENCE:INVALID_PROVIDER

  • MS-INFERENCE:RATE_LIMIT_EXCEEDED

  • MS-INFERENCE:RETRY_EXHAUSTED

  • MS-INFERENCE:TOOLS_OPERATION_FAILURE
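As a sketch, answering a single prompt taken from the inbound payload might look like this; attribute names are assumptions based on the parameter table above:

```xml
<!-- Sketch: single-turn chat answer for the prompt in the payload. -->
<ms-inference:chat-answer-prompt
    config-ref="Inference_Config"
    prompt="#[payload]"/>
```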

[Chat] Completions

<ms-inference:chat-completions>

This operation provides chat completions based on a messages array, which can include system and user messages (the conversation history).

Parameters

Name Type Description Default Value Required

Configuration

String

Name of the configuration to use.

x

Messages

Any

Conversation history as a JSON array.

#[payload]

Output Mime Type

String

MIME type of the payload that this operation outputs.

Output Encoding

String

Encoding of the payload that this operation outputs.

Config Ref

ConfigurationProvider

Name of the configuration to use to execute this component.

x

Streaming Strategy

Configures how Mule processes streams. Repeatable streams are the default behavior.

Target Variable

String

Name of the variable that stores the operation’s output.

Target Value

String

Expression that evaluates the operation’s output. The outcome of the expression is stored in the Target Variable field.

#[payload]

Error Mappings

Array of Error Mapping

Set of error mappings.

Reconnection Strategy

Retry strategy in case of connectivity errors.

Output

Type

Any

Attributes Type

For Configurations

Throws

  • MS-INFERENCE:CHAT_OPERATION_FAILURE

  • MS-INFERENCE:CONNECTIVITY

  • MS-INFERENCE:INVALID_CONNECTION

  • MS-INFERENCE:INVALID_PROVIDER

  • MS-INFERENCE:RATE_LIMIT_EXCEEDED

  • MS-INFERENCE:RETRY_EXHAUSTED

  • MS-INFERENCE:TOOLS_OPERATION_FAILURE
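The messages array can be built inline with DataWeave, for example in the OpenAI-style role/content shape. This is a sketch; the child element name and message shape are assumptions, not taken from the connector schema:

```xml
<!-- Sketch: pass a role/content conversation history built in DataWeave. -->
<ms-inference:chat-completions config-ref="Inference_Config">
  <ms-inference:messages><![CDATA[#[output application/json
---
[
  { "role": "system", "content": "You are a helpful assistant." },
  { "role": "user", "content": payload.question }
]]]]></ms-inference:messages>
</ms-inference:chat-completions>
```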

[Tools] Native Template (Reasoning only)

<ms-inference:tools-native-template>

This operation defines a tools template with instructions and data. It is useful for creating autonomous agents that can call external tools whenever a prompt can't be answered directly by the AI model, and it selects which tools are available for execution.

Parameters

Name Type Description Default Value Required

Configuration

String

Name of the configuration to use.

x

Template

String

Template string.

x

Instructions

String

Instructions for the LLM.

x

Data

String

Primary data content.

#[payload]

Tools

Any

Tools configuration as a JSON array.

x

Output Mime Type

String

MIME type of the payload that this operation outputs.

Output Encoding

String

Encoding of the payload that this operation outputs.

Config Ref

ConfigurationProvider

Name of the configuration to use to execute this component.

x

Streaming Strategy

Configures how Mule processes streams. Repeatable streams are the default behavior.

Target Variable

String

Name of the variable that stores the operation’s output.

Target Value

String

Expression that evaluates the operation’s output. The outcome of the expression is stored in the Target Variable field.

#[payload]

Error Mappings

Array of Error Mapping

Set of error mappings.

Reconnection Strategy

Retry strategy in case of connectivity errors.

Output

Type

Any

Attributes Type

For Configurations

Throws

  • MS-INFERENCE:CHAT_OPERATION_FAILURE

  • MS-INFERENCE:CONNECTIVITY

  • MS-INFERENCE:INVALID_CONNECTION

  • MS-INFERENCE:INVALID_PROVIDER

  • MS-INFERENCE:RATE_LIMIT_EXCEEDED

  • MS-INFERENCE:RETRY_EXHAUSTED

  • MS-INFERENCE:TOOLS_OPERATION_FAILURE
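A sketch of a call with a single OpenAI-style function tool follows. The child element name, tool schema, and all values are illustrative assumptions, not taken from the connector schema:

```xml
<!-- Sketch: declare one function tool the model may select. -->
<ms-inference:tools-native-template
    config-ref="Inference_Config"
    template="You can call tools to look up data."
    instructions="Use a tool when the prompt needs external data."
    data="#[payload]">
  <ms-inference:tools><![CDATA[#[output application/json
---
[{
  "type": "function",
  "function": {
    "name": "get_order_status",
    "description": "Look up the status of an order by ID.",
    "parameters": {
      "type": "object",
      "properties": { "orderId": { "type": "string" } },
      "required": ["orderId"]
    }
  }
}]]]]></ms-inference:tools>
</ms-inference:tools-native-template>
```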

[Image] Read by (Url or Base64)

<ms-inference:read-image>

This operation reads an image from a URL or Base64 string.

Parameters

Name Type Description Default Value Required

Configuration

String

Name of the configuration to use.

x

Prompt

String

User’s prompt.

x

Image

String

Image URL or Base64 string to send to the Vision Model.

#[payload]

Output Mime Type

String

MIME type of the payload that this operation outputs.

Output Encoding

String

Encoding of the payload that this operation outputs.

Config Ref

ConfigurationProvider

Name of the configuration to use to execute this component.

x

Streaming Strategy

Configures how Mule processes streams. Repeatable streams are the default behavior.

Target Variable

String

Name of the variable that stores the operation’s output.

Target Value

String

Expression that evaluates the operation’s output. The outcome of the expression is stored in the Target Variable field.

#[payload]

Error Mappings

Array of Error Mapping

Set of error mappings.

Reconnection Strategy

Retry strategy in case of connectivity errors.

Output

Type

Any

Attributes Type

For Configurations

Throws

  • MS-INFERENCE:CONNECTIVITY

  • MS-INFERENCE:INVALID_CONNECTION

  • MS-INFERENCE:INVALID_PROVIDER

  • MS-INFERENCE:RATE_LIMIT_EXCEEDED

  • MS-INFERENCE:READ_IMAGE_OPERATION_FAILURE

  • MS-INFERENCE:RETRY_EXHAUSTED
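For illustration, describing an image carried in the payload (as a URL or Base64 string) could be sketched as follows; attribute names are assumptions based on the parameter table above:

```xml
<!-- Sketch: send the image in the payload to the vision model with a prompt. -->
<ms-inference:read-image
    config-ref="Inference_Config"
    prompt="Describe what is shown in this image."
    image="#[payload]"/>
```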

Types

TLS

Configures TLS to provide secure communications for the Mule app.

Field Type Description Default Value Required

Enabled Protocols

String

Comma-separated list of protocols enabled for this context.

Enabled Cipher Suites

String

Comma-separated list of cipher suites enabled for this context.

Trust Store

Configures the TLS truststore.

Key Store

Configures the TLS keystore.

Revocation Check

Configures a revocation checking mechanism.

Truststore

Configures the truststore for TLS.

Field Type Description Default Value Required

Path

String

Path to the truststore. Mule resolves the path relative to the current classpath and file system.

Password

String

Password used to protect the truststore.

Type

String

Type of truststore.

Algorithm

String

Encryption algorithm that the truststore uses.

Insecure

Boolean

If true, Mule stops performing certificate validations. Setting this to true can make connections vulnerable to attacks.

Keystore

Configures the keystore for the TLS protocol. The keystore you generate contains a private key and a public certificate.

Field Type Description Default Value Required

Path

String

Path to the keystore. Mule resolves the path relative to the current classpath and file system.

Type

String

Type of keystore.

Alias

String

Alias of the key to use when the keystore contains multiple private keys. By default, Mule uses the first key in the file.

Key Password

String

Password used to protect the private key.

Password

String

Password used to protect the keystore.

Algorithm

String

Encryption algorithm that the keystore uses.

Standard Revocation Check

Configures standard revocation checks for TLS certificates.

Field Type Description Default Value Required

Only End Entities

Boolean

Which elements to verify in the certificate chain:

  • true

    Verify only the last element in the certificate chain.

  • false

    Verify all elements in the certificate chain.

Prefer Crls

Boolean

How to check certificate validity:

  • true

    Check the Certification Revocation List (CRL) for certificate validity.

  • false

    Use the Online Certificate Status Protocol (OCSP) to check certificate validity.

No Fallback

Boolean

Whether to use the secondary method to check certificate validity:

  • true

    Use the method that wasn’t specified in the Prefer Crls field (the secondary method) to check certificate validity.

  • false

    Do not use the secondary method to check certificate validity.

Soft Fail

Boolean

What to do if the revocation server can’t be reached or is busy:

  • true

    Avoid verification failure.

  • false

    Allow the verification to fail.

Custom OCSP Responder

Configures a custom OCSP responder for certification revocation checks.

Field Type Description Default Value Required

Url

String

URL of the OCSP responder.

Cert Alias

String

Alias of the signing certificate for the OCSP response. If specified, the alias must be in the truststore.

CRL File

Specifies the location of the certification revocation list (CRL) file.

Field Type Description Default Value Required

Path

String

Path to the CRL file.

Reconnection

Configures a reconnection strategy for an operation.

Field Type Description Default Value Required

Fails Deployment

Boolean

When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn’t pass after exhausting the associated reconnection strategy.

Reconnection Strategy

Reconnection strategy to use.

Reconnect

Configures a standard reconnection strategy, which specifies how often to reconnect and how many reconnection attempts the connector source or operation can make.

Field Type Description Default Value Required

Frequency

Number

How often to attempt to reconnect, in milliseconds.

Blocking

Boolean

If false, the reconnection strategy runs in a separate, non-blocking thread.

Count

Number

How many reconnection attempts the Mule app can make.

Reconnect Forever

Configures a forever reconnection strategy by which the connector source or operation attempts to reconnect at a specified frequency for as long as the Mule app runs.

Field Type Description Default Value Required

Frequency

Number

How often to attempt to reconnect, in milliseconds.

Blocking

Boolean

If false, the reconnection strategy runs in a separate, non-blocking thread.
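Both strategies use standard Mule reconnection syntax, nested inside a connection element. For example, a standard strategy that retries every 2 seconds, up to 5 times, and fails deployment if the attempts are exhausted:

```xml
<!-- Standard Mule reconnection syntax: retry every 2000 ms, up to
     5 attempts; use <reconnect-forever> instead of <reconnect> to
     retry indefinitely at the given frequency. -->
<reconnection failsDeployment="true">
  <reconnect frequency="2000" count="5"/>
</reconnection>
```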

Expiration Policy

Configures an expiration policy strategy.

Field Type Description Default Value Required

Max Idle Time

Number

Configures the maximum amount of time that a dynamic configuration instance can remain idle before Mule considers it eligible for expiration.

Time Unit

Enumeration, one of:

  • NANOSECONDS

  • MICROSECONDS

  • MILLISECONDS

  • SECONDS

  • MINUTES

  • HOURS

  • DAYS

Time unit for the Max Idle Time field.

Image Response Attributes

Configures image response attributes.

Field Type Description Default Value Required

Model

String

Model.

Prompt Used

String

Prompt used.

Repeatable In Memory Stream

Configures the in-memory streaming strategy, by which the request fails if the data exceeds the maximum buffer size. Always run performance tests to find the optimal buffer size for your specific use case.

Field Type Description Default Value Required

Initial Buffer Size

Number

Initial amount of memory to allocate to the data stream. If the streamed data exceeds this value, the buffer expands by Buffer Size Increment, with an upper limit of Max Buffer Size.

Buffer Size Increment

Number

Amount by which the buffer expands when the streamed data exceeds the initial buffer size. A value of zero or lower means the buffer does not expand, and Mule raises a STREAM_MAXIMUM_SIZE_EXCEEDED error when the buffer is full.

Max Buffer Size

Number

Maximum size of the buffer. If the buffer size exceeds this value, Mule raises a STREAM_MAXIMUM_SIZE_EXCEEDED error. A value of less than or equal to 0 means no limit.

Buffer Unit

Enumeration, one of:

  • BYTE

  • KB

  • MB

  • GB

Unit for the Initial Buffer Size, Buffer Size Increment, and Max Buffer Size fields.
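These fields map to standard Mule streaming-strategy syntax, nested inside an operation. For example, a buffer that starts at 512 KB, grows in 256 KB increments, and is capped at 2048 KB:

```xml
<!-- Standard Mule repeatable in-memory stream configuration. -->
<repeatable-in-memory-stream
    initialBufferSize="512"
    bufferSizeIncrement="256"
    maxBufferSize="2048"
    bufferUnit="KB"/>
```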

Repeatable File Store Stream

Configures the repeatable file-store streaming strategy by which Mule keeps a portion of the stream content in memory. If the stream content is larger than the configured buffer size, Mule backs up the buffer’s content to disk and then clears the memory.

Field Type Description Default Value Required

In Memory Size

Number

Maximum amount of memory that the stream can use for data. If the amount of memory exceeds this value, Mule buffers the content to disk. To optimize performance:

  • Configure a larger buffer size to reduce the number of times Mule needs to write the buffer to disk. This increases performance, but it also limits the number of concurrent requests your application can process, because it requires additional memory.

  • Configure a smaller buffer size to decrease memory load at the expense of response time.

Buffer Unit

Enumeration, one of:

  • BYTE

  • KB

  • MB

  • GB

Unit for the In Memory Size field.

Error Mapping

Configures error mapping.

Field Type Description Default Value Required

Source

Enumeration, one of:

  • ANY

  • REDELIVERY_EXHAUSTED

  • TRANSFORMATION

  • EXPRESSION

  • SECURITY

  • CLIENT_SECURITY

  • SERVER_SECURITY

  • ROUTING

  • CONNECTIVITY

  • RETRY_EXHAUSTED

  • TIMEOUT

Source of the error.

Target

String

Target of the error.

x
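Error mappings use standard Mule syntax, nested inside an operation. For example, remapping a connector error to an application-defined type (the target type `APP:THROTTLED` is an invented example):

```xml
<!-- Standard Mule error-mapping syntax: remap a connector error raised
     by this operation to an app-defined error type. -->
<ms-inference:chat-answer-prompt config-ref="Inference_Config" prompt="#[payload]">
  <error-mapping sourceType="MS-INFERENCE:RATE_LIMIT_EXCEEDED"
                 targetType="APP:THROTTLED"/>
</ms-inference:chat-answer-prompt>
```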

LLM Response Attributes

Configures LLM response attributes.

Field Type Description Default Value Required

Additional Attributes

Additional attributes.

Token Usage

Token usage.

Additional Attributes

Configures additional attributes.

Field Type Description Default Value Required

Finish Reason

String

Finish reason for the LLM response.

Id

String

ID of the request.

Model

String

ID of the model used.

Token Usage

Configures token usage metadata returned as attributes.

Field Type Description Default Value Required

Input Count

Number

Number of tokens used to process the input.

Output Count

Number

Number of tokens used to generate the output.

Total Count

Number

Total number of tokens used for input and output.

Proxy

Configures a proxy for outbound connections.

Field Type Description Default Value Required

Host

String

Host to which proxy requests are sent.

x

Port

Number

Port to which proxy requests are sent.

x

Username

String

Username to authenticate against the proxy.

Password

String

Password to authenticate against the proxy.

Non Proxy Hosts

String

Non-proxy hosts.

NTLM Proxy

Configures an NTLM proxy for outbound connections.

Field Type Description Default Value Required

Ntlm Domain

String

NTLM domain.

x

Host

String

Host to which proxy requests are sent.

x

Port

Number

Port to which proxy requests are sent.

x

Username

String

Username to authenticate against the proxy.

Password

String

Password to authenticate against the proxy.

Non Proxy Hosts

String

Non-proxy hosts.
