Configuring Sentiment Analysis Operations
Configure the SENTIMENT Analyze operation.
Configure the SENTIMENT Analyze Operation
The SENTIMENT Analyze operation is a simple prompt request to the configured LLM. It sends a plain text prompt as input and receives a sentiment response for the input text. The possible sentiment values are NEUTRAL, POSITIVE, or NEGATIVE.
Apply the SENTIMENT Analyze operation in various scenarios, such as for:
- Customer Feedback Analysis: Determining whether customer feedback is positive, negative, or neutral.
- Social Media Monitoring: Analyzing the sentiment of social media posts or comments to gauge public opinion.
- Market Research: Assessing the sentiment of survey responses or market research data.
To configure the SENTIMENT Analyze operation:
1. Select the operation on the Anypoint Code Builder or Studio canvas.
2. In the General properties tab for the operation, enter these values:
   - Text to analyze: The text to analyze for sentiment.
   - Model name: The name of the LLM. You can select any model from the supported LLM providers.
   - Region: The AWS region.
   - Temperature: A value between 0 and 1 that regulates the creativity of the LLM's responses. Use a lower temperature for more deterministic responses, and a higher temperature for more creative or varied responses to the same prompt. The default value is 0.7. For sentiment analysis, use a lower temperature for more consistent results.
   - Top p: The percentage of most-likely candidates that the model considers for the next token, typically between 0.9 and 0.95. Refer to the model provider documentation to confirm whether this parameter is supported and to verify the acceptable range.
   - Top k: The number of most-likely candidates that the model considers for the next token. This is an integer count whose typical range varies by model. Refer to the model provider documentation to confirm whether this parameter is supported and to verify the acceptable range.
   - Max token count: The maximum number of tokens to consume during output generation. For consistent and predictable responses, explicitly configure this value based on the model's supported limits and the expected output size.
This is the XML for this operation:
<ms-bedrock:sentiment-analyze
    doc:name="Sentiment analyze"
    doc:id="9a67cf44-5709-409d-9fc1-2ec34d6bf858"
    config-ref="AWS"
    TextToAnalyze="#[payload.data]"
/>
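For context, this is a minimal sketch of the operation inside a flow. The flow name, the HTTP listener configuration (HTTP_Listener_config), the /sentiment path, and the assumption that the request body is JSON with a data field are all illustrative, not part of the connector:
<flow name="analyze-feedback-flow">
    <!-- Illustrative trigger: expects a JSON body such as {"data": "Great product!"} -->
    <http:listener config-ref="HTTP_Listener_config" path="/sentiment"
        doc:name="Listener"/>
    <!-- The operation exactly as generated above -->
    <ms-bedrock:sentiment-analyze
        doc:name="Sentiment analyze"
        config-ref="AWS"
        TextToAnalyze="#[payload.data]"/>
    <!-- Log the model's free-text answer from the documented response structure -->
    <logger level="INFO" doc:name="Logger"
        message="#[payload.results[0].outputText]"/>
</flow>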
Output Configuration
This operation responds with a JSON payload. This is an example response:
{
  "inputTextTokenCount": 117,
  "results": [
    {
      "tokenCount": 7,
      "outputText": "\nThe sentiment is positive.",
      "completionReason": "FINISH"
    }
  ]
}
- inputTextTokenCount: Number of tokens used to process the input.
- results: Array of generation results, where each entry contains:
  - tokenCount: Number of tokens used to generate the output.
  - outputText: The response from the LLM to the prompt sent.
  - completionReason: The reason the response finished being generated. Possible values:
    - FINISH: The response was fully generated.
    - LENGTH: The response was truncated because of the response length you set.
    - STOP_CRITERIA_MET: The response was truncated because the stop criteria was reached.
    - RAG_QUERY_WHEN_RAG_DISABLED: The feature is disabled and cannot complete the query.
    - CONTENT_FILTERED: The contents were filtered or removed by the applied content filter.
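Because the sentiment arrives as free text in outputText rather than as a discrete field, downstream steps usually normalize it. This is a minimal sketch using a Transform Message component; the mapping of phrases to POSITIVE, NEGATIVE, and NEUTRAL is an illustrative assumption about the model's wording, not documented connector behavior:
<ee:transform doc:name="Normalize sentiment">
    <ee:message>
        <ee:set-payload><![CDATA[%dw 2.0
output application/json
// Lowercase the model's free-text answer, e.g. "\nThe sentiment is positive."
var answer = lower(trim(payload.results[0].outputText default ""))
---
{
    sentiment:
        if (answer contains "positive") "POSITIVE"
        else if (answer contains "negative") "NEGATIVE"
        else "NEUTRAL"
}]]></ee:set-payload>
    </ee:message>
</ee:transform>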