
AWS Rekognition Moderation

The AWS Rekognition Moderation transformation checks an image for sensitive content, using machine learning to detect material that is inappropriate, unwanted, or offensive, such as nudity, violence, or offensive language. It can be used to automatically filter such content out of user-generated uploads, or to ensure that an image uploaded to a platform complies with the platform's guidelines.


Installing the AWS Rekognition add-on is required to use this transformation.


Minimum Confidence (c)

This parameter sets the minimum level of confidence that AWS Rekognition Moderation must have in its prediction before it considers an image inappropriate.

Available values range from 0 to 99. The default value is 55, meaning Rekognition flags an image as inappropriate only if it is at least 55% confident in its prediction.

You can adjust this threshold to your specific needs. For stricter moderation, set a lower threshold (e.g. 30) so that even low-confidence detections are flagged, at the cost of more false positives. For more permissive moderation, set a higher threshold (e.g. 80) so that only high-confidence detections are flagged, reducing false positives but letting more borderline content through.
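As a rough sketch of how the confidence parameter might be supplied, the snippet below builds a transformation URL with a custom `c` value. The plugin segment `awsRek.moderation` and the URL layout are illustrative assumptions, not the documented PixelBin syntax; check the transformation reference for the exact form.

```python
# Sketch: building a transformation URL with a custom minimum-confidence value.
# NOTE: the "awsRek.moderation(c:...)" segment is an assumed, illustrative
# plugin path, not confirmed PixelBin syntax.
def moderation_url(cloud_name: str, image_path: str, confidence: int = 55) -> str:
    # The documented range for the c parameter is 0-99.
    if not 0 <= confidence <= 99:
        raise ValueError("confidence must be between 0 and 99")
    return (
        f"https://cdn.pixelbin.io/v2/{cloud_name}"
        f"/awsRek.moderation(c:{confidence})/{image_path}"
    )

print(moderation_url("my-cloud", "photos/avatar.jpeg", confidence=80))
```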


Instead of returning the output at a traditional CDN URL, this transformation returns a JSON output via the Context API, while the CDN URL continues to serve the original image without any modifications.

The Context API Response tab includes a JSON object that contains the following information:

- ModerationLabels: An array of moderation labels, where each label is represented by a JSON object.
- Name: A string value representing the name of the moderation label detected in the image.
- Confidence: A float value between 0 and 100 representing the level of confidence that the label is present in the image.
- ParentName: A string value representing the name of the parent moderation label, if any.

Overall, the Context API Response describes any inappropriate or unsafe content detected in the image, along with its confidence level and any parent category. This information can be used to determine whether the image is safe for viewing and to take appropriate action, such as flagging or removing the image.
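The fields above can be consumed programmatically. The sketch below, using illustrative (not real) response values, filters the `ModerationLabels` array by a confidence threshold:

```python
# Sketch: filtering a Context API response of the shape described above.
# The sample label names and confidence values are illustrative only.
response = {
    "ModerationLabels": [
        {"Name": "Violence", "Confidence": 72.5, "ParentName": ""},
        {"Name": "Graphic Violence", "Confidence": 40.1, "ParentName": "Violence"},
    ]
}

def flagged_labels(resp: dict, min_confidence: float = 55.0) -> list:
    """Return names of moderation labels at or above the confidence threshold."""
    return [
        label["Name"]
        for label in resp.get("ModerationLabels", [])
        if label["Confidence"] >= min_confidence
    ]

print(flagged_labels(response))  # only "Violence" clears the default 55 threshold
```

Lowering the threshold passed to `flagged_labels` surfaces lower-confidence labels as well, mirroring the behavior of the `c` parameter.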


