AWS Rekognition Moderation
The AWS Rekognition Moderation transformation checks an image for sensitive content, using machine learning to detect material that is inappropriate, unwanted, or offensive, such as nudity, violence, or offensive language. It can be used to automatically filter such content out of user-generated uploads, or to ensure that an image uploaded to a platform complies with the platform's guidelines.
You must install the AWS Rekognition add-on before using this transformation.
Minimum Confidence

This parameter determines the minimum level of confidence that AWS Rekognition Moderation must have in its prediction before it considers an image inappropriate. Available values range from 0 to 99; the default value is 55.

By default, the minimum confidence parameter is set to 55%, which means that Rekognition will only flag an image as inappropriate if it is at least 55% confident in its prediction. You can adjust this threshold to fit your needs. For example, to moderate more conservatively and reduce the chances of false positives, set a higher minimum confidence threshold (e.g. 80%); to be more permissive and allow more content through, set a lower threshold (e.g. 30%).
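As a rough sketch, a transformation URL with a custom threshold could be assembled like this. Note that the `awsRek.moderation` plugin identifier, the `minimumConfidence` parameter spelling, and the cloud name are illustrative assumptions, not confirmed PixelBin syntax; check your PixelBin dashboard for the exact pattern.

```python
def moderation_url(cloud_name: str, image_path: str, min_confidence: int = 55) -> str:
    """Build a hypothetical PixelBin CDN URL that applies the moderation
    transformation. The plugin and parameter names below are assumptions
    for illustration only."""
    if not 0 <= min_confidence <= 99:
        raise ValueError("minimum confidence must be between 0 and 99")
    transformation = f"awsRek.moderation(minimumConfidence:{min_confidence})"
    return f"https://cdn.pixelbin.io/v2/{cloud_name}/{transformation}/{image_path}"

url = moderation_url("dummy-cloudname", "images/transformation/middle-finger.jpeg", 80)
```

Raising the `min_confidence` argument here simply tightens the flagging threshold, as described above.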
Instead of receiving the output at a traditional CDN URL, you will get a JSON output from the Context API; the CDN URL will return the original image without any modifications.
- Input Image
- Context API Response
<PixelBinImage url="https://cdn.pixelbin.io/v2/dummy-cloudname/FIDrmb/original/images/transformation/middle-finger.jpeg" />
The response for an API call with the above request body can look like this.

```json
[
  {
    "Name": "Middle Finger",
    "ParentName": "Rude Gestures"
  },
  {
    "Name": "Rude Gestures"
  }
]
```
The Context API Response tab includes a JSON object that contains the following information:

| Key | Description |
| --- | --- |
| ModerationLabels | An array of moderation labels, where each label is represented by a JSON object. |
| Name | A string value that represents the name of the moderation label detected in the image. |
| Confidence | A float value between 0 and 100 that indicates how confident Rekognition is in its prediction. |
| ParentName | A string value that represents the name of the parent moderation label, if any. |
Context API Response provides information about any inappropriate or unsafe content detected in the image, along with their confidence levels and any parent categories. This information can be used to determine whether the image is safe for viewing and take appropriate actions, such as flagging or removing the image.
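The flag-or-remove decision described above can be sketched in a few lines. The label dictionaries follow the `Name`, `ParentName`, and `Confidence` fields documented in this section, while the helper name and the sample confidence values are made up for illustration.

```python
def is_image_safe(moderation_labels: list[dict], threshold: float = 55.0) -> bool:
    """Return False (flag the image) if any moderation label meets or
    exceeds the confidence threshold; otherwise return True."""
    return not any(
        label.get("Confidence", 0.0) >= threshold for label in moderation_labels
    )

# Labels shaped like the Context API response above; the Confidence
# values are invented for this example.
labels = [
    {"Name": "Middle Finger", "ParentName": "Rude Gestures", "Confidence": 97.2},
    {"Name": "Rude Gestures", "ParentName": "", "Confidence": 97.2},
]

print(is_image_safe(labels))   # False: at least one label exceeds the threshold
print(is_image_safe([]))       # True: no moderation labels were detected
```

An application would typically call a check like this after fetching the Context API response, then hide or queue the image for review when it returns `False`.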
To learn more about this transformation, click here.