Configuring Chat Operations for Einstein AI Connector

Configure the Chat Answer Prompt Operation

The Chat answer prompt operation sends a request to the configured LLM. This operation uses a plain text prompt as input and responds with a plain text answer.

  1. Select the operation on the Anypoint Code Builder or Studio canvas.

  2. In the General properties tab for the operation, enter plain text for the Prompt.

  3. In Additional properties, enter these values:

    • Model name

      Select the model name. The default is OpenAI GPT 3.5 Turbo.

    • Probability

      Enter the probability of the model staying accurate. The default is 0.8.

    • Locale

      Enter the localization information, which can include the default locale, input locale(s), and expected output locale(s). The default is en_US.

This is the XML configuration for this operation:

<ms-einstein-ai:chat-answer-prompt
  doc:name="Chat answer prompt"
  doc:id="66426c0e-5626-4dfa-88ef-b09f77577261"
  config-ref="Einstein_AI"
  prompt="#[payload.prompt]"
/>
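
As an illustrative sketch, the operation can sit inside a flow behind any event source that places a prompt on the payload. The flow name, HTTP listener configuration, and endpoint path below are assumptions for demonstration, not part of the connector reference:

```xml
<!-- Illustrative flow; flow name, HTTP listener config, and path are assumptions -->
<flow name="chat-answer-prompt-flow">
  <!-- Assumed HTTP listener; the request body is expected to contain a "prompt" field -->
  <http:listener config-ref="HTTP_Listener_config" path="/chat"/>
  <!-- The operation reads the plain text prompt from the payload -->
  <ms-einstein-ai:chat-answer-prompt
    doc:name="Chat answer prompt"
    config-ref="Einstein_AI"
    prompt="#[payload.prompt]"/>
</flow>
```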

Configure the Chat Generate from Messages Operation

The Chat generate from messages operation sends a prompt request to the configured LLM using the provided messages. This operation accepts multiple plain text messages as input and responds with a plain text answer.

  1. Select the operation on the Anypoint Code Builder or Studio canvas.

  2. In the General properties tab for the operation, enter plain text for the Messages.

  3. In Additional properties, enter these values:

    • Model name

      Select the model name. The default is OpenAI GPT 3.5 Turbo.

    • Probability

      Enter the probability of the model staying accurate. The default is 0.8.

    • Locale

      Enter the localization information, which can include the default locale, input locale(s), and expected output locale(s). The default is en_US.

This is the XML configuration for this operation:

<ms-einstein-ai:chat-generate-from-messages
  doc:name="Chat generate from messages"
  doc:id="94fa27f3-18ce-436c-8a5f-10b8dbfa4ea3"
  config-ref="Einstein_AI"
  messages="#[payload.messages]"
/>
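
For context, a Transform Message step can build the messages value before the operation runs. The sketch below assumes a JSON array of role/content pairs; that schema, and the sample content, are assumptions, so check the connector reference for the exact message format your connector version expects:

```xml
<!-- Illustrative only: the role/content message schema is an assumption -->
<ee:transform doc:name="Build messages">
  <ee:message>
    <ee:set-payload><![CDATA[%dw 2.0
output application/json
---
{
  messages: [
    { role: "user", content: "What is MuleSoft?" }
  ]
}]]></ee:set-payload>
  </ee:message>
</ee:transform>
<!-- The operation then reads the messages from the transformed payload -->
<ms-einstein-ai:chat-generate-from-messages
  doc:name="Chat generate from messages"
  config-ref="Einstein_AI"
  messages="#[payload.messages]"/>
```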

Configure the Chat Answer Prompt With Memory Operation

The Chat answer prompt with memory operation retains conversation history for a multi-user chat operation.

  1. Select the operation on the Anypoint Code Builder or Studio canvas.

  2. In the General properties tab for the operation, enter these values:

    • Prompt

      Contains the prompt for the operation.

    • Memory Path

      Path to the in-memory database for storing the conversation history.

      You can also use a DataWeave expression for this field, for example:

      #["/Users/john.wick/Desktop/mac-demo/db/" ++ payload.memoryName]

    • Memory name

      Name of the conversation. For multi-user support, enter the unique user ID.

    • Keep last messages

      Maximum number of messages to remember for the conversation defined in Memory name.

  3. In Additional properties, enter these values:

    • Model name

      Select the model name. The default is OpenAI GPT 3.5 Turbo.

    • Probability

      Enter the probability of the model staying accurate. The default is 0.8.

    • Locale

      Enter the localization information, which can include the default locale, input locale(s), and expected output locale(s). The default is en_US.

This is the XML configuration for this operation:

<ms-einstein-ai:chat-answer-prompt-with-memory
  doc:name="Chat answer prompt with memory"
  doc:id="a1d7d0e0-a568-4824-9849-6f1ff03d6dee"
  config-ref="Einstein_AI"
  prompt="#[payload.prompt]"
  memoryPath="#[payload.memoryPath]"
  memoryName="#[payload.memoryName]"
  keepLastMessages="#[payload.lastMessages]"
/>
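
To show how the memory fields might be wired for multi-user support, here is a sketch in which the memory name is derived from a hypothetical userId field on the payload; the memory path, the userId field, and the message count are assumptions chosen for illustration:

```xml
<!-- Illustrative values; memory path, userId field, and message count are assumptions -->
<ms-einstein-ai:chat-answer-prompt-with-memory
  doc:name="Chat answer prompt with memory"
  config-ref="Einstein_AI"
  prompt="#[payload.prompt]"
  memoryPath='#["/tmp/einstein-memory/" ++ payload.userId]'
  memoryName="#[payload.userId]"
  keepLastMessages="10"/>
```

Because Memory name identifies the conversation, keying it on a per-user value keeps each user's history separate while sharing one flow.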