MuleSoft Inference Connector 1.0 Reference
MuleSoft Inference Connector provides operations that interface directly with the APIs of various inference providers, enabling seamless integration of AI capabilities into your Mule applications.
Configurations
Image Generation Config
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Name | String | Name for this configuration. Connectors reference the configuration with this name. | | x |
Connection | | Connection types for this configuration. | | x |
Name | String | ID used to reference this configuration. | | x |
Expiration Policy | Expiration Policy | Configures an expiration policy for the configuration. | | |
Connection Types
Heroku AI
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Heroku Inference Model | String | Model name. | | x |
Heroku Diffusion URL | String | Heroku diffusion URL. | | |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
Hugging Face
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Hugging Face Model Name | String | Model name. | | x |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
OpenAI
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Open AI Model Name | String | Model name. | | x |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
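For orientation, a minimal Image Generation configuration using the OpenAI connection might look like the sketch below. The `ms-inference` namespace prefix and all element and attribute names are assumptions inferred from the parameter tables in this section, not taken from the connector's schema; confirm the exact XML against what Anypoint Studio generates for your connector version.

```xml
<!-- Hypothetical sketch: names inferred from the parameter tables,
     not verified against the connector's XSD. -->
<ms-inference:image-generation-config name="Image_Generation_Config">
  <ms-inference:openai-connection
      openAiModelName="dall-e-3"
      apiKey="${openai.apiKey}"
      timeout="60"
      timeoutUnit="SECONDS"/>
</ms-inference:image-generation-config>
```

The two-level shape mirrors the tables above: the config element carries the name that operations reference, while provider-specific settings sit on the nested connection.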
Stability AI
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Stability AI Model Name | String | Model name. | | x |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
xAI
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
X Ai Model Name | String | Model name. | | x |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
Supported Operations
Moderation Config
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Name | String | Name for this configuration. Connectors reference the configuration with this name. | | x |
Connection | | Connection types for this configuration. | | x |
Name | String | ID used to reference this configuration. | | x |
Expiration Policy | Expiration Policy | Configures an expiration policy for the configuration. | | |
Connection Types
Mistral AI
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Mistral AI Model Name | String | Model name. | | x |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
OpenAI
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Open AI Model Name | String | Model name. | | x |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
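A Moderation configuration follows the same config-plus-connection shape. As with the earlier sketch, every name below is an assumption inferred from the tables rather than the connector's verified schema; `omni-moderation-latest` is simply a current OpenAI moderation model used as an example value.

```xml
<!-- Hypothetical sketch; names are assumptions, not the verified schema. -->
<ms-inference:moderation-config name="Moderation_Config">
  <ms-inference:openai-connection
      openAiModelName="omni-moderation-latest"
      apiKey="${openai.apiKey}"/>
</ms-inference:moderation-config>
```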
Supported Operations
Text Generation Config
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Name | String | Name for this configuration. Connectors reference the configuration with this name. | | x |
Connection | | Connection types for this configuration. | | x |
Name | String | ID used to reference this configuration. | | x |
Expiration Policy | Expiration Policy | Configures an expiration policy for the configuration. | | |
Connection Types
AI21Labs
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Ai21 Labs Model Name | String | Model name. | | x |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
Anthropic
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Anthropic Model Name | String | Model name. | | x |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
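Text Generation connections add the three sampling parameters (Max Tokens, Temperature, Top P). A hedged sketch with an Anthropic connection, again using assumed element and attribute names derived from the tables:

```xml
<!-- Hypothetical sketch; names inferred from the parameter tables. -->
<ms-inference:text-generation-config name="Text_Generation_Config">
  <ms-inference:anthropic-connection
      anthropicModelName="claude-3-5-sonnet-latest"
      apiKey="${anthropic.apiKey}"
      maxTokens="500"
      temperature="0.9"
      topP="0.9"/>
</ms-inference:text-generation-config>
```

In practice it is commonly recommended to tune either Temperature or Top P, not both at once.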
Azure AI Foundry
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Azure Model Name | String | Model name. | | x |
[Azure AI Foundry] Resource Name | String | Azure AI Foundry resource name. | | |
[Azure AI Foundry] API Version | String | Azure AI Foundry API version. | | |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
Azure OpenAI
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Azure Model Name | String | Model name. | | x |
[Azure OpenAI] Resource Name | String | Azure OpenAI resource name. | | |
[Azure OpenAI] Deployment ID | String | Azure OpenAI deployment ID. | | |
[Azure OpenAI] User | String | Unique identifier representing your end user, which can help monitor and detect abuse. | | |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
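Azure OpenAI routes requests by resource and deployment rather than by bare model name, so a connection sketch sets both. Element and attribute names remain assumptions inferred from the table above:

```xml
<!-- Hypothetical sketch; names inferred from the parameter table. -->
<ms-inference:text-generation-config name="Azure_OpenAI_Config">
  <ms-inference:azure-openai-connection
      azureModelName="gpt-4o"
      resourceName="my-azure-resource"
      deploymentId="my-gpt4o-deployment"
      user="end-user-1234"
      apiKey="${azure.apiKey}"/>
</ms-inference:text-generation-config>
```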
Cerebras
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Cerebras Model Name | String | Model name. | | x |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
Cohere
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Cohere Model Name | String | Model name. | | x |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
Databricks
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
[Databricks] Serving Endpoint Name | String | Databricks serving endpoint name. | | x |
[Databricks] Serving Endpoint URL host | String | Databricks serving endpoint URL host. | | x |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
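Databricks identifies the target by serving endpoint rather than by model name, and both endpoint fields are required. In the assumed-name sketch below, the API key would typically be a Databricks personal access token:

```xml
<!-- Hypothetical sketch; names inferred from the parameter table. -->
<ms-inference:text-generation-config name="Databricks_Config">
  <ms-inference:databricks-connection
      servingEndpointName="my-llm-endpoint"
      servingEndpointUrlHost="adb-1234567890123456.7.azuredatabricks.net"
      apiKey="${databricks.token}"/>
</ms-inference:text-generation-config>
```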
DeepInfra
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Deep Infra Model Name | String | Model name. | | x |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
Deepseek
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Deepseek Model Name | String | Model name. | | x |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
Docker
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Docker Model Name | String | Model name. | | x |
[Docker Models] Base URL | String | Docker models base URL. | | |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
Fireworks
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Fireworks Model Name | String | Model name. | | x |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
GitHub
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Git Hub Model Name | String | Model name. | | x |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
GPT4All
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Gpt4all Model Name | String | Model name. | | x |
[GPT4ALL] Base URL | String | GPT4ALL base URL. | | |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
Groq
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Groq Model Name | String | Model name. | | x |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
Heroku AI
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Heroku Inference Model | String | Model name. | | |
Heroku Inference URL | String | Inference URL. | | |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
Hugging Face
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Hugging Face Model Name | String | Model name. | | x |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
LM Studio
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Lm Studio Model Name | String | Model name. | | x |
[LM Studio] Base URL | String | LM Studio base URL. | | |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
Mistral AI
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Mistral AI Model Name | String | Model name. | | x |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
Nvidia
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Nvidia Model Name | String | Model name. | | x |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
Ollama
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Ollama Model Name | String | Model name. | | x |
[Ollama] Base URL | String | Ollama base URL. | | |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
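For local providers such as Ollama, the base URL points at the locally running server (Ollama listens on port 11434 by default). Because the table above marks API Key as required, the sketch passes a placeholder value; as with the earlier sketches, element and attribute names are assumptions inferred from the tables:

```xml
<!-- Hypothetical sketch; names inferred from the parameter table. -->
<ms-inference:text-generation-config name="Ollama_Config">
  <ms-inference:ollama-connection
      ollamaModelName="llama3.1"
      baseUrl="http://localhost:11434"
      apiKey="none"/>
</ms-inference:text-generation-config>
```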
OpenAI
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Open AI Model Name | String | OpenAI model name. | | |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
OpenAI Compatible Endpoint
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Open AI Compatible Model Name | String | OpenAI compatible model name. | | |
[OpenAI Compatible] Base URL | String | OpenAI compatible base URL. | | |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
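The OpenAI Compatible Endpoint connection is the escape hatch for any server that speaks the OpenAI chat API, such as a vLLM instance (which by default serves an OpenAI-compatible API under /v1 on port 8000). The sketch below again uses assumed names inferred from the table; the model identifier is just an example value:

```xml
<!-- Hypothetical sketch; names inferred from the parameter table. -->
<ms-inference:text-generation-config name="Compatible_Config">
  <ms-inference:openai-compatible-connection
      openAiCompatibleModelName="meta-llama/Llama-3.1-8B-Instruct"
      baseUrl="http://localhost:8000/v1"
      apiKey="${server.apiKey}"/>
</ms-inference:text-generation-config>
```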
OpenRouter
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Open Router Model Name | String | Model name. | | x |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
Perplexity
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Perplexity Model Name | String | Model name. | | x |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
Portkey
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Portkey Model Name | String | Model name. | | x |
[Portkey] Virtual Key | String | Portkey virtual key. | | |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
Together
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Together Model Name | String | Model name. | | x |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
Vertex AI Express
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Vertex AI Express Model Name | String | Model name. | | x |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
xAI
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
X Ai Model Name | String | Model name. | | x |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
XInference
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
X Inference Model Name | String | Model name. | | x |
[Xinference] Base URL | String | Xinference base URL. | | |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
Zhipu AI
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Zhipu AI Model Name | String | Model name. | | x |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
Supported Operations
Vision Config
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Name | String | Name for this configuration. Connectors reference the configuration with this name. | | x |
Connection | | Connection types for this configuration. | | x |
Name | String | ID used to reference this configuration. | | x |
Expiration Policy | Expiration Policy | Configures an expiration policy for the configuration. | | |
Connection Types
Anthropic
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Anthropic Model Name | String | Model name. | | x |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
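Vision connections take the same parameters as Text Generation ones; the difference is in the operations they support, which read an image alongside the prompt. A sketch with assumed names, as before:

```xml
<!-- Hypothetical sketch; names inferred from the parameter table. -->
<ms-inference:vision-config name="Vision_Config">
  <ms-inference:anthropic-connection
      anthropicModelName="claude-3-5-sonnet-latest"
      apiKey="${anthropic.apiKey}"
      maxTokens="500"/>
</ms-inference:vision-config>
```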
Azure AI Foundry
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Azure AI Foundry Model Name | String | Model name. | | x |
[Azure AI Foundry] Resource Name | String | Azure AI Foundry resource name. | | |
[Azure AI Foundry] API Version | String | Azure AI Foundry API version. | | |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
GitHub
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Git Hub Model Name | String | Model name. | | x |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
Groq
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Groq Model Name | String | Model name. | | x |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
Hugging Face
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration | One of: | Configures a proxy for outbound connections. | | |
Hugging Face Model Name | String | Model name. | | x |
TLS Configuration | TLS | If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. | | |
API Key | String | API key as required by the inference provider. | | x |
Timeout | Number | Response timeout value set for each inference HTTP request. | 60 | |
Timeout Unit | Enumeration, one of: NANOSECONDS, MICROSECONDS, MILLISECONDS, SECONDS, MINUTES, HOURS, DAYS | Response timeout unit for the Timeout field. | SECONDS | |
Max Tokens | Number | Maximum number of tokens the LLM can generate in a response. Helps control usage and costs when working with LLMs. | 500 | |
Temperature | Number | Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 are more deterministic. | 0.9 | |
Top P | Number | Controls diversity through nucleus sampling: the next token is chosen from the smallest set of most probable tokens whose cumulative probability reaches this threshold. | 0.9 | |
Reconnection | Reconnection | When the application is deployed, a connectivity test is performed on all connectors. If set to true, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. | | |
Mistral AI
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration |
One of: |
Configures a proxy for outbound connections. |
||
Mistral AI Model Name |
String |
Model name. |
x |
|
TLS Configuration |
If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. |
|||
API Key |
String |
API key as required by the inference provider. |
x |
|
Timeout |
Number |
Response timeout value set for each inference HTTP request. |
60 |
|
Timeout Unit |
Enumeration, one of:

  * NANOSECONDS
  * MICROSECONDS
  * MILLISECONDS
  * SECONDS
  * MINUTES
  * HOURS
  * DAYS
 |
Response timeout unit for the Timeout field. |
SECONDS |
 |
Max Tokens |
Number |
Maximum number of tokens the LLM can use when generating a response. This field helps control usage and costs when working with LLMs. |
500 |
 |
Temperature |
Number |
Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 produce more deterministic output. |
0.9 |
 |
Top P |
Number |
Controls diversity by restricting sampling to a nucleus of the most probable candidates for the next token. Specifies the cumulative probability threshold that those tokens must reach. |
0.9 |
|
Reconnection |
When the application is deployed, a connectivity test is performed on all connectors. If set to `true`, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. |
Ollama
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration |
One of:

  * Proxy
  * NTLM Proxy
 |
Configures a proxy for outbound connections. |
||
Ollama Model Name |
String |
Model name. |
x |
|
[Ollama] Base URL |
String |
Ollama base URL. |
|
|
TLS Configuration |
If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. |
|||
API Key |
String |
API key as required by the inference provider. |
x |
|
Timeout |
Number |
Response timeout value set for each inference HTTP request. |
60 |
|
Timeout Unit |
Enumeration, one of:

  * NANOSECONDS
  * MICROSECONDS
  * MILLISECONDS
  * SECONDS
  * MINUTES
  * HOURS
  * DAYS
 |
Response timeout unit for the Timeout field. |
SECONDS |
 |
Max Tokens |
Number |
Maximum number of tokens the LLM can use when generating a response. This field helps control usage and costs when working with LLMs. |
500 |
 |
Temperature |
Number |
Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 produce more deterministic output. |
0.9 |
 |
Top P |
Number |
Controls diversity by restricting sampling to a nucleus of the most probable candidates for the next token. Specifies the cumulative probability threshold that those tokens must reach. |
0.9 |
|
Reconnection |
When the application is deployed, a connectivity test is performed on all connectors. If set to `true`, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. |
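Ollama commonly runs as a locally hosted server, so [Ollama] Base URL typically points at that local endpoint. A hedged sketch, with names inferred from the table and `http://localhost:11434` being Ollama's conventional default rather than a connector default:

```xml
<!-- Hedged sketch: a connection to a locally hosted Ollama server.
     Attribute names are inferred from the parameter table. -->
<ms-inference:text-generation-config name="Ollama_Config">
    <ms-inference:ollama-connection
        ollamaModelName="llama3.2"
        baseURL="http://localhost:11434"
        apiKey="${ollama.apiKey}"
        maxTokens="500"
        temperature="0.9"/>
</ms-inference:text-generation-config>
```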
OpenAI
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration |
One of:

  * Proxy
  * NTLM Proxy
 |
Configures a proxy for outbound connections. |
||
Open AI Model Name |
String |
Model name. |
x |
|
TLS Configuration |
If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. |
|||
API Key |
String |
API key as required by the inference provider. |
x |
|
Timeout |
Number |
Response timeout value set for each inference HTTP request. |
60 |
|
Timeout Unit |
Enumeration, one of:

  * NANOSECONDS
  * MICROSECONDS
  * MILLISECONDS
  * SECONDS
  * MINUTES
  * HOURS
  * DAYS
 |
Response timeout unit for the Timeout field. |
SECONDS |
 |
Max Tokens |
Number |
Maximum number of tokens the LLM can use when generating a response. This field helps control usage and costs when working with LLMs. |
500 |
 |
Temperature |
Number |
Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 produce more deterministic output. |
0.9 |
 |
Top P |
Number |
Controls diversity by restricting sampling to a nucleus of the most probable candidates for the next token. Specifies the cumulative probability threshold that those tokens must reach. |
0.9 |
|
Reconnection |
When the application is deployed, a connectivity test is performed on all connectors. If set to `true`, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. |
OpenRouter
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration |
One of:

  * Proxy
  * NTLM Proxy
 |
Configures a proxy for outbound connections. |
||
Open Router Model Name |
String |
Model name. |
x |
|
TLS Configuration |
If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. |
|||
API Key |
String |
API key as required by the inference provider. |
x |
|
Timeout |
Number |
Response timeout value set for each inference HTTP request. |
60 |
|
Timeout Unit |
Enumeration, one of:

  * NANOSECONDS
  * MICROSECONDS
  * MILLISECONDS
  * SECONDS
  * MINUTES
  * HOURS
  * DAYS
 |
Response timeout unit for the Timeout field. |
SECONDS |
 |
Max Tokens |
Number |
Maximum number of tokens the LLM can use when generating a response. This field helps control usage and costs when working with LLMs. |
500 |
 |
Temperature |
Number |
Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 produce more deterministic output. |
0.9 |
 |
Top P |
Number |
Controls diversity by restricting sampling to a nucleus of the most probable candidates for the next token. Specifies the cumulative probability threshold that those tokens must reach. |
0.9 |
|
Reconnection |
When the application is deployed, a connectivity test is performed on all connectors. If set to `true`, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. |
Portkey
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration |
One of:

  * Proxy
  * NTLM Proxy
 |
Configures a proxy for outbound connections. |
||
Portkey Model Name |
String |
Model name. |
x |
|
[Portkey] Virtual Key |
String |
Portkey virtual key. |
||
TLS Configuration |
If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. |
|||
API Key |
String |
API key as required by the inference provider. |
x |
|
Timeout |
Number |
Response timeout value set for each inference HTTP request. |
60 |
|
Timeout Unit |
Enumeration, one of:

  * NANOSECONDS
  * MICROSECONDS
  * MILLISECONDS
  * SECONDS
  * MINUTES
  * HOURS
  * DAYS
 |
Response timeout unit for the Timeout field. |
SECONDS |
 |
Max Tokens |
Number |
Maximum number of tokens the LLM can use when generating a response. This field helps control usage and costs when working with LLMs. |
500 |
 |
Temperature |
Number |
Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 produce more deterministic output. |
0.9 |
 |
Top P |
Number |
Controls diversity by restricting sampling to a nucleus of the most probable candidates for the next token. Specifies the cumulative probability threshold that those tokens must reach. |
0.9 |
|
Reconnection |
When the application is deployed, a connectivity test is performed on all connectors. If set to `true`, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. |
Vertex AI Express
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration |
One of:

  * Proxy
  * NTLM Proxy
 |
Configures a proxy for outbound connections. |
||
Vertex AI Express Model Name |
String |
Model name. |
x |
|
TLS Configuration |
If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. |
|||
API Key |
String |
API key as required by the inference provider. |
x |
|
Timeout |
Number |
Response timeout value set for each inference HTTP request. |
60 |
|
Timeout Unit |
Enumeration, one of:

  * NANOSECONDS
  * MICROSECONDS
  * MILLISECONDS
  * SECONDS
  * MINUTES
  * HOURS
  * DAYS
 |
Response timeout unit for the Timeout field. |
SECONDS |
 |
Max Tokens |
Number |
Maximum number of tokens the LLM can use when generating a response. This field helps control usage and costs when working with LLMs. |
500 |
 |
Temperature |
Number |
Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 produce more deterministic output. |
0.9 |
 |
Top P |
Number |
Controls diversity by restricting sampling to a nucleus of the most probable candidates for the next token. Specifies the cumulative probability threshold that those tokens must reach. |
0.9 |
|
Reconnection |
When the application is deployed, a connectivity test is performed on all connectors. If set to `true`, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. |
xAI
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Proxy Configuration |
One of:

  * Proxy
  * NTLM Proxy
 |
Configures a proxy for outbound connections. |
||
X Ai Model Name |
String |
Model name. |
x |
|
TLS Configuration |
If HTTPS is configured as a protocol, then you must configure at least the keystore configuration. |
|||
API Key |
String |
API key as required by the inference provider. |
x |
|
Timeout |
Number |
Response timeout value set for each inference HTTP request. |
60 |
|
Timeout Unit |
Enumeration, one of:

  * NANOSECONDS
  * MICROSECONDS
  * MILLISECONDS
  * SECONDS
  * MINUTES
  * HOURS
  * DAYS
 |
Response timeout unit for the Timeout field. |
SECONDS |
 |
Max Tokens |
Number |
Maximum number of tokens the LLM can use when generating a response. This field helps control usage and costs when working with LLMs. |
500 |
 |
Temperature |
Number |
Number between 0 and 2 that controls the randomness of the output. Higher values produce more random output; values closer to 0 produce more deterministic output. |
0.9 |
 |
Top P |
Number |
Controls diversity by restricting sampling to a nucleus of the most probable candidates for the next token. Specifies the cumulative probability threshold that those tokens must reach. |
0.9 |
|
Reconnection |
When the application is deployed, a connectivity test is performed on all connectors. If set to `true`, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. |
[Image] Generate (only Base64)
<ms-inference:generate-image>
This operation generates an image based on a prompt.
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Configuration |
String |
Name of the configuration to use. |
x |
|
Prompt |
String |
User’s prompt. |
|
|
Output Mime Type |
String |
MIME type of the payload that this operation outputs. |
||
Output Encoding |
String |
Encoding of the payload that this operation outputs. |
||
Config Ref |
ConfigurationProvider |
Name of the configuration to use to execute this component. |
x |
|
Streaming Strategy |
One of:

  * Repeatable In Memory Stream
  * Repeatable File Store Stream
  * non-repeatable-stream
 |
Configures how Mule processes streams. Repeatable streams are the default behavior. |
||
Target Variable |
String |
Name of the variable that stores the operation’s output. |
||
Target Value |
String |
Expression that evaluates the operation’s output. The outcome of the expression is stored in the Target Variable field. |
|
|
Error Mappings |
Array of Error Mapping |
Set of error mappings. |
||
Reconnection Strategy |
Retry strategy in case of connectivity errors. |
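A minimal flow using this operation might look like the following sketch. The `config-ref` points at an Image Generation Config; the child element name for Prompt is inferred from the parameter table.

```xml
<!-- Hedged sketch: generate an image from the incoming prompt.
     The operation returns the image as Base64 in the payload. -->
<flow name="generate-image-flow">
    <ms-inference:generate-image config-ref="Image_Generation_Config">
        <ms-inference:prompt><![CDATA[#[payload.prompt]]]></ms-inference:prompt>
    </ms-inference:generate-image>
</flow>
```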
[Toxicity] Detection by Text
<ms-inference:toxicity-detection-text>
This operation detects harmful content in a text.
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Configuration |
String |
Name of the configuration to use. |
x |
|
Text |
Any |
Text to moderate. Can be a single string or an array of strings. |
|
|
Output Mime Type |
String |
MIME type of the payload that this operation outputs. |
||
Output Encoding |
String |
Encoding of the payload that this operation outputs. |
||
Config Ref |
ConfigurationProvider |
Name of the configuration to use to execute this component. |
x |
|
Streaming Strategy |
One of:

  * Repeatable In Memory Stream
  * Repeatable File Store Stream
  * non-repeatable-stream
 |
Configures how Mule processes streams. Repeatable streams are the default behavior. |
||
Target Variable |
String |
Name of the variable that stores the operation’s output. |
||
Target Value |
String |
Expression that evaluates the operation’s output. The outcome of the expression is stored in the Target Variable field. |
|
|
Error Mappings |
Array of Error Mapping |
Set of error mappings. |
||
Reconnection Strategy |
Retry strategy in case of connectivity errors. |
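For example, moderating a single string might look like this hedged sketch (the `Moderation_Config` name and the `text` child element are illustrative):

```xml
<!-- Hedged sketch: moderate a single string. Per the table, Text
     also accepts an array of strings. -->
<ms-inference:toxicity-detection-text config-ref="Moderation_Config">
    <ms-inference:text><![CDATA[#[payload.userComment]]]></ms-inference:text>
</ms-inference:toxicity-detection-text>
```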
[Agent] Define Prompt Template
<ms-inference:agent-define-prompt-template>
This operation defines a prompt template with instructions and data.
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Configuration |
String |
Name of the configuration to use. |
x |
|
Template |
String |
Template string. |
x |
|
Instructions |
String |
Instructions for the LLM. |
x |
|
Data |
String |
Primary data content. |
|
|
Output Mime Type |
String |
MIME type of the payload that this operation outputs. |
||
Output Encoding |
String |
Encoding of the payload that this operation outputs. |
||
Config Ref |
ConfigurationProvider |
Name of the configuration to use to execute this component. |
x |
|
Streaming Strategy |
One of:

  * Repeatable In Memory Stream
  * Repeatable File Store Stream
  * non-repeatable-stream
 |
Configures how Mule processes streams. Repeatable streams are the default behavior. |
||
Target Variable |
String |
Name of the variable that stores the operation’s output. |
||
Target Value |
String |
Expression that evaluates the operation’s output. The outcome of the expression is stored in the Target Variable field. |
|
|
Error Mappings |
Array of Error Mapping |
Set of error mappings. |
||
Reconnection Strategy |
Retry strategy in case of connectivity errors. |
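A hedged sketch of a prompt template that pairs fixed instructions with per-request data (the configuration name and child element names are illustrative):

```xml
<!-- Hedged sketch: fixed template and instructions, dynamic data. -->
<ms-inference:agent-define-prompt-template config-ref="Text_Config">
    <ms-inference:template><![CDATA[You are a customer-service assistant.]]></ms-inference:template>
    <ms-inference:instructions><![CDATA[Classify the sentiment of the data as POSITIVE, NEUTRAL, or NEGATIVE.]]></ms-inference:instructions>
    <ms-inference:data><![CDATA[#[payload.review]]]></ms-inference:data>
</ms-inference:agent-define-prompt-template>
```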
[Chat] Answer Prompt
<ms-inference:chat-answer-prompt>
This operation provides a simple chat answer for a single prompt.
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Configuration |
String |
Name of the configuration to use. |
x |
|
Prompt |
String |
User’s prompt. |
|
|
Output Mime Type |
String |
MIME type of the payload that this operation outputs. |
||
Output Encoding |
String |
Encoding of the payload that this operation outputs. |
||
Config Ref |
ConfigurationProvider |
Name of the configuration to use to execute this component. |
x |
|
Streaming Strategy |
One of:

  * Repeatable In Memory Stream
  * Repeatable File Store Stream
  * non-repeatable-stream
 |
Configures how Mule processes streams. Repeatable streams are the default behavior. |
||
Target Variable |
String |
Name of the variable that stores the operation’s output. |
||
Target Value |
String |
Expression that evaluates the operation’s output. The outcome of the expression is stored in the Target Variable field. |
|
|
Error Mappings |
Array of Error Mapping |
Set of error mappings. |
||
Reconnection Strategy |
Retry strategy in case of connectivity errors. |
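A hedged sketch of a single-prompt call that stores the answer in a variable via the Target Variable and Target Value parameters (configuration and variable names are illustrative):

```xml
<!-- Hedged sketch: single-turn prompt; the response is stored in the
     aiAnswer variable instead of overwriting the message payload. -->
<ms-inference:chat-answer-prompt config-ref="Text_Config"
        target="aiAnswer" targetValue="#[payload]">
    <ms-inference:prompt><![CDATA[#[payload.question]]]></ms-inference:prompt>
</ms-inference:chat-answer-prompt>
```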
[Chat] Completions
<ms-inference:chat-completions>
This operation provides chat completions from a messages array that can include system and user messages (conversation history).
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Configuration |
String |
Name of the configuration to use. |
x |
|
Messages |
Any |
Conversation history as a JSON array. |
|
|
Output Mime Type |
String |
MIME type of the payload that this operation outputs. |
||
Output Encoding |
String |
Encoding of the payload that this operation outputs. |
||
Config Ref |
ConfigurationProvider |
Name of the configuration to use to execute this component. |
x |
|
Streaming Strategy |
One of:

  * Repeatable In Memory Stream
  * Repeatable File Store Stream
  * non-repeatable-stream
 |
Configures how Mule processes streams. Repeatable streams are the default behavior. |
||
Target Variable |
String |
Name of the variable that stores the operation’s output. |
||
Target Value |
String |
Expression that evaluates the operation’s output. The outcome of the expression is stored in the Target Variable field. |
|
|
Error Mappings |
Array of Error Mapping |
Set of error mappings. |
||
Reconnection Strategy |
Retry strategy in case of connectivity errors. |
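Because Messages accepts a JSON array, the conversation history is typically built with DataWeave. A hedged sketch (the role/content message shape follows common chat-completion APIs and is an assumption here):

```xml
<!-- Hedged sketch: conversation history built with DataWeave as a
     JSON array of role/content entries (shape is an assumption). -->
<ms-inference:chat-completions config-ref="Text_Config">
    <ms-inference:messages><![CDATA[#[%dw 2.0
output application/json
---
[
  { "role": "system", "content": "You are a helpful assistant." },
  { "role": "user", "content": payload.question }
]]]]></ms-inference:messages>
</ms-inference:chat-completions>
```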
[Tools] Native Template (Reasoning only)
<ms-inference:tools-native-template>
This operation defines a tools template with instructions and data. It is useful for building autonomous agents that can call external tools whenever a prompt can't be answered directly by the AI model, and it selects which tools are available for execution.
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Configuration |
String |
Name of the configuration to use. |
x |
|
Template |
String |
Template string. |
x |
|
Instructions |
String |
Instructions for the LLM. |
x |
|
Data |
String |
Primary data content. |
|
|
Tools |
Any |
Tools configuration as a JSON array. |
x |
|
Output Mime Type |
String |
MIME type of the payload that this operation outputs. |
||
Output Encoding |
String |
Encoding of the payload that this operation outputs. |
||
Config Ref |
ConfigurationProvider |
Name of the configuration to use to execute this component. |
x |
|
Streaming Strategy |
One of:

  * Repeatable In Memory Stream
  * Repeatable File Store Stream
  * non-repeatable-stream
 |
Configures how Mule processes streams. Repeatable streams are the default behavior. |
||
Target Variable |
String |
Name of the variable that stores the operation’s output. |
||
Target Value |
String |
Expression that evaluates the operation’s output. The outcome of the expression is stored in the Target Variable field. |
|
|
Error Mappings |
Array of Error Mapping |
Set of error mappings. |
||
Reconnection Strategy |
Retry strategy in case of connectivity errors. |
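A hedged sketch that exposes one tool as a JSON function definition the model may select (the function-definition shape follows common tool-calling APIs and is an assumption, as are the element names):

```xml
<!-- Hedged sketch: let the model choose from a set of tools described
     as a JSON array of function definitions. -->
<ms-inference:tools-native-template config-ref="Text_Config">
    <ms-inference:template><![CDATA[You are an order-tracking agent.]]></ms-inference:template>
    <ms-inference:instructions><![CDATA[Answer directly when you can; otherwise call a tool.]]></ms-inference:instructions>
    <ms-inference:data><![CDATA[#[payload.question]]]></ms-inference:data>
    <ms-inference:tools><![CDATA[#[%dw 2.0
output application/json
---
[{
  "type": "function",
  "function": {
    "name": "get_order_status",
    "description": "Look up the status of an order by ID",
    "parameters": {
      "type": "object",
      "properties": { "orderId": { "type": "string" } },
      "required": ["orderId"]
    }
  }
}]]]]></ms-inference:tools>
</ms-inference:tools-native-template>
```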
[Image] Read by (Url or Base64)
<ms-inference:read-image>
This operation reads an image from a URL or Base64 string.
Parameters
Name | Type | Description | Default Value | Required |
---|---|---|---|---|
Configuration |
String |
Name of the configuration to use. |
x |
|
Prompt |
String |
User’s prompt. |
x |
|
Image |
String |
Image URL or Base64 string to send to the Vision Model. |
|
|
Output Mime Type |
String |
MIME type of the payload that this operation outputs. |
||
Output Encoding |
String |
Encoding of the payload that this operation outputs. |
||
Config Ref |
ConfigurationProvider |
Name of the configuration to use to execute this component. |
x |
|
Streaming Strategy |
One of:

  * Repeatable In Memory Stream
  * Repeatable File Store Stream
  * non-repeatable-stream
 |
Configures how Mule processes streams. Repeatable streams are the default behavior. |
||
Target Variable |
String |
Name of the variable that stores the operation’s output. |
||
Target Value |
String |
Expression that evaluates the operation’s output. The outcome of the expression is stored in the Target Variable field. |
|
|
Error Mappings |
Array of Error Mapping |
Set of error mappings. |
||
Reconnection Strategy |
Retry strategy in case of connectivity errors. |
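A hedged sketch that sends a prompt plus an image URL to a vision-capable model (configuration name and child element names are illustrative):

```xml
<!-- Hedged sketch: describe an image referenced by URL; a Base64
     string can be supplied the same way. -->
<ms-inference:read-image config-ref="Vision_Config">
    <ms-inference:prompt><![CDATA[Describe what is shown in this picture.]]></ms-inference:prompt>
    <ms-inference:image><![CDATA[https://example.com/sample.jpg]]></ms-inference:image>
</ms-inference:read-image>
```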
Types
TLS
Configures TLS to provide secure communications for the Mule app.
Field | Type | Description | Default Value | Required |
---|---|---|---|---|
Enabled Protocols |
String |
Comma-separated list of protocols enabled for this context. |
||
Enabled Cipher Suites |
String |
Comma-separated list of cipher suites enabled for this context. |
||
Trust Store |
Configures the TLS truststore. |
|||
Key Store |
Configures the TLS keystore. |
|||
Revocation Check |
Configures a revocation checking mechanism. |
Truststore
Configures the truststore for TLS.
Field | Type | Description | Default Value | Required |
---|---|---|---|---|
Path |
String |
Path to the truststore. Mule resolves the path relative to the current classpath and file system. |
||
Password |
String |
Password used to protect the truststore. |
||
Type |
String |
Type of truststore. |
||
Algorithm |
String |
Encryption algorithm that the truststore uses. |
||
Insecure |
Boolean |
If `true`, Mule stops performing certificate validations. Setting this to `true` can make connections vulnerable to attacks. |
Keystore
Configures the keystore for the TLS protocol. The keystore you generate contains a private key and a public certificate.
Field | Type | Description | Default Value | Required |
---|---|---|---|---|
Path |
String |
Path to the keystore. Mule resolves the path relative to the current classpath and file system. |
||
Type |
String |
Type of keystore. |
||
Alias |
String |
Alias of the key to use when the keystore contains multiple private keys. By default, Mule uses the first key in the file. |
||
Key Password |
String |
Password used to protect the private key. |
||
Password |
String |
Password used to protect the keystore. |
||
Algorithm |
String |
Encryption algorithm that the keystore uses. |
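These TLS, truststore, and keystore fields correspond to Mule's standard `tls:context` element, which a connection's TLS Configuration can reference. A minimal sketch with placeholder paths, passwords, and alias:

```xml
<!-- Standard Mule TLS context; paths, passwords, and the alias are
     placeholders. -->
<tls:context enabledProtocols="TLSv1.2,TLSv1.3">
    <tls:trust-store path="truststore.jks" password="${truststore.password}" type="jks"/>
    <tls:key-store path="keystore.jks" alias="client"
                   keyPassword="${key.password}" password="${keystore.password}" type="jks"/>
</tls:context>
```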
Standard Revocation Check
Configures standard revocation checks for TLS certificates.
Field | Type | Description | Default Value | Required |
---|---|---|---|---|
Only End Entities |
Boolean |
Which elements to verify in the certificate chain:

  * true: Verify only the last element of the certificate chain.
  * false: Verify all elements of the certificate chain.
 |
||
Prefer Crls |
Boolean |
How to check certificate validity:

  * true: Use certificate revocation lists (CRLs) to check validity.
  * false: Use OCSP (Online Certificate Status Protocol) to check validity.
 |
||
No Fallback |
Boolean |
Whether to use the secondary method to check certificate validity:

  * true: Don't fall back to the secondary method.
  * false: Use the method not selected in Prefer Crls as a fallback.
 |
||
Soft Fail |
Boolean |
What to do if the revocation server can't be reached or is busy:

  * true: Avoid failing the verification.
  * false: Treat the inability to verify as a failure.
 |
Custom OCSP Responder
Configures a custom OCSP responder for certification revocation checks.
Field | Type | Description | Default Value | Required |
---|---|---|---|---|
Url |
String |
URL of the OCSP responder. |
||
Cert Alias |
String |
Alias of the signing certificate for the OCSP response. If specified, the alias must be in the truststore. |
CRL File
Specifies the location of the certification revocation list (CRL) file.
Field | Type | Description | Default Value | Required |
---|---|---|---|---|
Path |
String |
Path to the CRL file. |
Reconnection
Configures a reconnection strategy for an operation.
Field | Type | Description | Default Value | Required |
---|---|---|---|---|
Fails Deployment |
Boolean |
When the application is deployed, a connectivity test is performed on all connectors. If set to `true`, deployment fails if the test doesn't pass after exhausting the associated reconnection strategy. |
||
Reconnection Strategy |
Reconnection strategy to use. |
Reconnect
Configures a standard reconnection strategy, which specifies how often to reconnect and how many reconnection attempts the connector source or operation can make.
Field | Type | Description | Default Value | Required |
---|---|---|---|---|
Frequency |
Number |
How often to attempt to reconnect, in milliseconds. |
||
Blocking |
Boolean |
If `false`, the reconnection strategy runs in a separate, non-blocking thread. |
||
Count |
Number |
How many reconnection attempts the Mule app can make. |
Reconnect Forever
Configures a forever reconnection strategy by which the connector source or operation attempts to reconnect at a specified frequency for as long as the Mule app runs.
Field | Type | Description | Default Value | Required |
---|---|---|---|---|
Frequency |
Number |
How often to attempt to reconnect, in milliseconds. |
||
Blocking |
Boolean |
If `false`, the reconnection strategy runs in a separate, non-blocking thread. |
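Both strategies use Mule's standard reconnection elements nested inside a connection. A hedged sketch that retries every 2 seconds, at most 3 times, and fails the deployment if the test never passes (the connection attribute names are inferred from the OpenAI parameter table):

```xml
<!-- Hedged sketch: standard Mule reconnection nested in a connection.
     Retry every 2000 ms, up to 3 times; fail deployment on exhaustion.
     Connection attribute names are inferred from the OpenAI table. -->
<ms-inference:open-ai-connection apiKey="${openai.apiKey}" openAiModelName="gpt-4o-mini">
    <reconnection failsDeployment="true">
        <reconnect frequency="2000" count="3"/>
    </reconnection>
</ms-inference:open-ai-connection>
```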
Expiration Policy
Configures an expiration policy strategy.
Field | Type | Description | Default Value | Required |
---|---|---|---|---|
Max Idle Time |
Number |
Configures the maximum amount of time that a dynamic configuration instance can remain idle before Mule considers it eligible for expiration. |
||
Time Unit |
Enumeration, one of:

  * NANOSECONDS
  * MICROSECONDS
  * MILLISECONDS
  * SECONDS
  * MINUTES
  * HOURS
  * DAYS
 |
Time unit for the Max Idle Time field. |
Image Response Attributes
Configures image response attributes.
Field | Type | Description | Default Value | Required |
---|---|---|---|---|
Model |
String |
Model. |
||
Prompt Used |
String |
Prompt used. |
Repeatable In Memory Stream
Configures the in-memory streaming strategy, by which the request fails if the data exceeds the maximum buffer size. Always run performance tests to find the optimal buffer size for your specific use case.
Field | Type | Description | Default Value | Required |
---|---|---|---|---|
Initial Buffer Size |
Number |
Initial amount of memory to allocate to the data stream. If the streamed data exceeds this value, the buffer expands by Buffer Size Increment, with an upper limit of Max Buffer Size. |
||
Buffer Size Increment |
Number |
Amount by which the buffer size expands when the streamed data exceeds the initial size. Setting a value of zero or lower means the buffer doesn't expand, so Mule raises a `STREAM_MAXIMUM_SIZE_EXCEEDED` error when the buffer is full. |
||
Max Buffer Size |
Number |
Maximum size of the buffer. If the buffer size exceeds this value, Mule raises a `STREAM_MAXIMUM_SIZE_EXCEEDED` error. A value less than or equal to zero means no limit. |
||
Buffer Unit |
Enumeration, one of:

  * BYTE
  * KB
  * MB
  * GB
 |
Unit for the Initial Buffer Size, Buffer Size Increment, and Max Buffer Size fields. |
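This strategy is configured on an operation with Mule's standard `repeatable-in-memory-stream` element. A hedged sketch with illustrative (not default) sizes; the exact ordering of child elements may vary:

```xml
<!-- Hedged sketch: tune the repeatable in-memory stream on an
     operation; sizes are illustrative, not connector defaults. -->
<ms-inference:chat-answer-prompt config-ref="Text_Config">
    <repeatable-in-memory-stream initialBufferSize="512"
        bufferSizeIncrement="256" maxBufferSize="2048" bufferUnit="KB"/>
    <ms-inference:prompt><![CDATA[#[payload.question]]]></ms-inference:prompt>
</ms-inference:chat-answer-prompt>
```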
Repeatable File Store Stream
Configures the repeatable file-store streaming strategy by which Mule keeps a portion of the stream content in memory. If the stream content is larger than the configured buffer size, Mule backs up the buffer’s content to disk and then clears the memory.
Field | Type | Description | Default Value | Required |
---|---|---|---|---|
In Memory Size |
Number |
Maximum amount of memory that the stream can use for data. If the amount of memory exceeds this value, Mule buffers the content to disk. To optimize performance:

  * Configure a larger buffer size to reduce the number of times Mule writes the buffer to disk. This increases performance but requires more memory.
  * Configure a smaller buffer size to decrease memory load.
 |
||
Buffer Unit |
Enumeration, one of:

  * BYTE
  * KB
  * MB
  * GB
 |
Unit for the In Memory Size field. |
Error Mapping
Configures error mapping.
Field | Type | Description | Default Value | Required |
---|---|---|---|---|
Source |
Enumeration, one of:

  * ANY
  * REDELIVERY_EXHAUSTED
  * TRANSFORMATION
  * EXPRESSION
  * SECURITY
  * CLIENT_SECURITY
  * SERVER_SECURITY
  * ROUTING
  * CONNECTIVITY
  * RETRY_EXHAUSTED
  * TIMEOUT
 |
Source of the error. |
||
Target |
String |
Target of the error. |
x |
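Error mappings are declared on an operation with Mule's standard `error-mapping` element. A hedged sketch that remaps connectivity failures to an app-defined error type:

```xml
<!-- Hedged sketch: remap connectivity errors raised by the operation
     to an app-defined error type for targeted error handling. -->
<ms-inference:chat-answer-prompt config-ref="Text_Config">
    <error-mapping sourceType="CONNECTIVITY" targetType="APP:AI_UNAVAILABLE"/>
    <ms-inference:prompt><![CDATA[#[payload.question]]]></ms-inference:prompt>
</ms-inference:chat-answer-prompt>
```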
LLM Response Attributes
Configures LLM response attributes.
Field | Type | Description | Default Value | Required |
---|---|---|---|---|
Additional Attributes |
Additional attributes. |
|||
Token Usage |
Token usage. |
Additional Attributes
Configures additional attributes.
Field | Type | Description | Default Value | Required |
---|---|---|---|---|
Finish Reason |
String |
Finish reason for the LLM response. |
||
Id |
String |
ID of the request. |
||
Model |
String |
ID of the model used. |
Token Usage
Configures token usage metadata returned as attributes.
Field | Type | Description | Default Value | Required |
---|---|---|---|---|
Input Count |
Number |
Number of tokens used to process the input. |
||
Output Count |
Number |
Number of tokens used to generate the output. |
||
Total Count |
Number |
Total number of tokens used for input and output. |
Proxy
Configures a proxy for outbound connections.
Field | Type | Description | Default Value | Required |
---|---|---|---|---|
Host |
String |
Host to which proxy requests are sent. |
x |
|
Port |
Number |
Port to which proxy requests are sent. |
x |
|
Username |
String |
Username to authenticate against the proxy. |
||
Password |
String |
Password to authenticate against the proxy. |
||
Non Proxy Hosts |
String |
Non-proxy hosts. |
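A hedged sketch of routing outbound inference calls through a proxy; the wrapper element and attribute names are inferred from the Proxy Configuration parameter and may differ in your connector version:

```xml
<!-- Hedged sketch: send outbound inference traffic through a proxy.
     Wrapper element and attribute names are inferred from the tables. -->
<ms-inference:open-ai-connection apiKey="${openai.apiKey}" openAiModelName="gpt-4o-mini">
    <ms-inference:proxy-config>
        <ms-inference:proxy host="proxy.internal.example" port="3128"
            username="svc-mule" password="${proxy.password}" nonProxyHosts="localhost"/>
    </ms-inference:proxy-config>
</ms-inference:open-ai-connection>
```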
NTLM Proxy
Configures an NTLM proxy for outbound connections.
Field | Type | Description | Default Value | Required |
---|---|---|---|---|
Ntlm Domain |
String |
NTLM domain. |
x |
|
Host |
String |
Host to which proxy requests are sent. |
x |
|
Port |
Number |
Port to which proxy requests are sent. |
x |
|
Username |
String |
Username to authenticate against the proxy. |
||
Password |
String |
Password to authenticate against the proxy. |
||
Non Proxy Hosts |
String |
Non-proxy hosts. |