Supported Models
Einstein supports these predictive multimodal models:

- OpenAI's GPT-4o (gpt-4o-2024-08-06) LLM
- OpenAI's GPT-4o Mini (gpt-4o-mini-2024-07-18) LLM
- Google's GEMINI-2.0 Flash 001 LLM
- Google's GEMINI-2.5 Flash LLM
- Google's GEMINI-3 Flash (Beta) LLM

To use the beta model, go to the Einstein Setup page in your Salesforce org and enable beta generative AI models. See Turn On Beta Generative AI Models for more details.
When creating a document action, you can select the model and extraction settings such as PII masking or image recognition. Each model performs differently under various conditions, so select the one that aligns with your specific requirements.
| Model Name | Description | Stability | Image Recognition | Location Callouts | Notes |
|---|---|---|---|---|---|
| Einstein OpenAI GPT-4o 0806 | Suitable for most tasks, performing well on documents in non-Latin languages. Can compare font sizes and identify certain font styles. | High | | Not supported | |
| Einstein OpenAI GPT-4o Mini 0718 | Fast and useful for focused tasks. Tends to exhibit lazy reasoning. | High | | Not supported | |
| Einstein GEMINI-2.0 Flash 001 | Good for analyzing images due to increased accuracy with this type of document. | Standard | | Supported | |
| Einstein GEMINI-2.5 Flash | Good for analyzing images due to increased accuracy with this type of document. Performs faster and with higher accuracy than Einstein GEMINI-2.0 Flash 001. | Standard | | Supported | |
| Einstein GEMINI-3 Flash (Beta) | Best for analyzing images due to increased accuracy with this type of document. Performs faster and with higher accuracy than Einstein GEMINI-2.0 Flash 001 and GEMINI-2.5 Flash. | High | | Supported | |
Model Errors
In some scenarios, an LLM stops generating a response and returns an error code that indicates the cause of this behavior. If the error occurs during the execution of a document action, IDP provides the error in the statusMessage attribute of the response object.
These are some of the most common error codes and their suggested fixes:
| Error Code | Error Description | Suggested Fix |
|---|---|---|
| | The request exceeded the maximum input token limit. | See Using long context under GPT-4.1 prompting best practices to understand how to refine your prompts. |
| | The response exceeded the maximum output token limit. | Split the document, or select an OpenAI model for this extraction. |
| | The document potentially contains Sensitive Personally Identifiable Information (SPII). | Delete the conflicting field, or use GPT models to abstract the information. |
For a complete list of error codes, refer to each model provider's documentation:

- OpenAI models: The completion object. Select Show properties under choices to see the details.
- Google models: Finish Reason



