<ms-einstein-ai:rag-adhoc-load-document
doc:name="Rag adhoc load document"
doc:id="edaea124-a8aa-4d4a-8f85-0f32ee4c9858"
config-ref="Einstein_AI"
prompt="#[payload.prompt]"
filePath="#[payload.filePath]"
optionType="PARAGRAPH"
/>
Configuring RAG Operations for Einstein AI Connector
Retrieval-Augmented Generation (RAG) is a technique for enhancing AI-generated outputs by retrieving relevant content and using it to augment AI prompts with additional context. Grounding LLMs in this additional information enables them to provide more accurate and reliable responses.
Configure the RAG Adhoc Load Document Operation
The RAG adhoc load document operation ingests a document into an in-memory embedding store and retrieves information from it based on a plain text prompt.
To configure the RAG adhoc load document operation:
- Select the operation on the Anypoint Code Builder or Studio canvas.
- In the General properties tab for the operation, enter these values:
  - Prompt: The prompt for the LLM and the embedding store to respond to.
  - File Path: The full file path of the document to ingest into the embedding store. Ensure the file path is accessible.

    You can also use a DataWeave expression for this field, for example:

    mule.home ++ "/apps/" ++ app.name ++ "/customer-service.pdf"
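Because the operation's Prompt and File Path fields are typically driven by the payload (for example, `#[payload.prompt]` and `#[payload.filePath]`), an upstream component can supply both values. A minimal sketch using Set Payload, where the question text and document name are illustrative placeholders:

```xml
<!-- Build the payload consumed by the RAG adhoc load document operation.
     The question and the PDF name below are placeholders only. -->
<set-payload value='#[output application/java
---
{
    prompt: "What is the refund policy?",
    filePath: mule.home ++ "/apps/" ++ app.name ++ "/customer-service.pdf"
}]' />
```

The DataWeave expression concatenates the runtime's home directory, the deployed application name, and the document file name into a path that is resolvable inside the deployed application.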
- In Additional properties, select the values for:
  - Embedding name
  - File type: The type of document to ingest into the embedding store:
    - Text
    - PDF
    - CSV
    - URL: A single URL pointing to web content to ingest.
  - Option type: How to split the document prior to ingestion into the vector database.
  - Model name: Name of the API model that interacts with the LLM.
  - Probability: Probability of the model API staying accurate.
  - Locale: Localization information, which can include the default locale, input locale(s), and expected output locales.
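Putting these properties together, the operation might look like the following hand-written sketch; the fileType attribute name is an assumption inferred from the File type property label and the camelCase convention of the other parameters, so verify it against the XML that Anypoint Code Builder or Studio generates:

```xml
<!-- Illustrative sketch only: the fileType attribute name is assumed
     from the File type property label; verify in the generated XML. -->
<ms-einstein-ai:rag-adhoc-load-document
    doc:name="Rag adhoc load document"
    config-ref="Einstein_AI"
    prompt="#[payload.prompt]"
    filePath="#[payload.filePath]"
    fileType="PDF"
    optionType="PARAGRAPH"
/>
```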
This is the XML configuration for this operation: