
Context

Transformation context is the output of image transformations in JSON format. The response gives you detailed information about the image, the transformations applied, and the result of each transformation.

After applying some transformations, follow the steps below to get the context in the PixelBin Console.

  1. Click on the Context button on the Playground page.

  2. Click on the Fetch button.

  3. API JSON will be fetched for the specific image transformations.


Context Response

The response will be in JSON format with four root properties: steps, metadata, headers, and params.

The following properties are returned in the context response.

steps: A list that contains all the transformations applied to an image, in sequential order.

steps[].name: Name of an individual transformation.
  If the transformation is t.blur(), the name will be blur.
  If the transformation is remove.bg(), the name will be bg.
  If the transformation is af.remove(), the name will be remove.

steps[].identifier: Unique identifier of an individual transformation.
  If the transformation is t.blur(), the identifier will be t.
  If the transformation is remove.bg(), the identifier will be remove.
  If the transformation is af.remove(), the identifier will be af.

steps[].operation: Name of the transformation's operation. For basic transformations it is Basic; for other transformations it is the name of the transformation.
  If the transformation is t.blur(), the operation will be Basic.
  If the transformation is remove.bg(), the operation will be RemoveBG.
  If the transformation is af.remove(), the operation will be Artifact.

steps[].data: The image data after this specific transformation is applied. Only the AWS Rekognition Moderation, AWS Rekognition DetectLabels, Google Vision DetectLabels, Extract Text, Crop Image, Detect Watermarks, Detect Number Plate, and Detect NSFW transformations provide this data.

steps[].params: The parameters used for this transformation.

steps[].metadata: The image metadata after this specific transformation is applied. The properties are similar to the root metadata properties shown below.

metadata: The metadata of the original image.

metadata.width: Width of the original image in pixels.

metadata.height: Height of the original image in pixels.

metadata.channels: Number of bands (3 for sRGB, 4 for CMYK).

metadata.extension: File extension.

metadata.format: Actual file format, derived from the file data.

metadata.contentType: Actual content type of the returned file.

metadata.size: Size of the original image in bytes.

metadata.assetType: Type of the asset/file. Possible values are image, audio, video, and raw.

metadata.isImageAsset: Whether the file is an image.

metadata.isAudioAsset: Whether the file is audio.

metadata.isVideoAsset: Whether the file is a video.

metadata.isRawAsset: Whether the file is raw data.

metadata.isTransformationSupported: Whether transformations can be applied to the file.

headers: Response headers of the API call.

params: Query params used in the request API call. Values can be preview, download, and f_auto.
info

All transformations return context data, but not all transformations return a modified image. Some transformations only produce output as JSON data, which can only be fetched from the Context API.

Those transformations are AWS Rekognition Moderation, AWS Rekognition DetectLabels, Google Vision DetectLabels, Extract Text, Detect Watermarks, Detect Number Plate, and Detect NSFW.

In addition, the Crop Image transformation will provide output as a transformed image as well as the data in JSON format.
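As a sketch of how this response shape can be consumed programmatically, the Python snippet below walks the steps list and summarizes each transformation. It is illustrative only (the helper name is mine, not part of any official SDK), and it assumes context holds the parsed JSON with the root properties listed above:

```python
# Sketch: summarize a transformation context object (illustrative helper,
# not part of any official SDK). `context` is assumed to be the parsed
# JSON returned by the Context API, with the root properties listed above.
def summarize_context(context: dict) -> list[str]:
    """Return one summary line per transformation step."""
    lines = []
    for i, step in enumerate(context.get("steps", [])):
        meta = step.get("metadata", {})
        lines.append(
            f"{i}: {step['identifier']}.{step['name']} "
            f"({step['operation']}) -> "
            f"{meta.get('width')}x{meta.get('height')}, {meta.get('size')} bytes"
        )
    return lines

# A minimal context object shaped like the responses shown on this page.
context = {
    "steps": [
        {"name": "resize", "identifier": "t", "operation": "Basic",
         "metadata": {"width": 200, "height": 200, "size": 3283}},
    ],
    "metadata": {}, "headers": {}, "params": {},
}
print("\n".join(summarize_context(context)))
# → 0: t.resize (Basic) -> 200x200, 3283 bytes
```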


Examples

1. Creating placeholder thumbnail

Let us look at an example of a transformation context response using a simple chained transformation with resize, blur, and convert. The resultant transformation pattern will be t.resize(w:200,h:200)~t.blur(s:2)~t.toFormat(f:webp).
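A chained pattern like this is just individual operations joined with the ~ character, each written as identifier.name(key:value,...). As an illustration of that syntax (the helper functions below are hypothetical, not part of an official SDK), a small builder might look like:

```python
# Sketch: build a chained transformation pattern string. The
# identifier.name(k:v,...) and ~ chaining syntax follows the pattern
# shown above; these helpers are illustrative, not an official SDK.
def op(identifier: str, name: str, **params) -> str:
    args = ",".join(f"{k}:{v}" for k, v in params.items())
    return f"{identifier}.{name}({args})"

def chain(*ops: str) -> str:
    return "~".join(ops)

pattern = chain(
    op("t", "resize", w=200, h=200),
    op("t", "blur", s=2),
    op("t", "toFormat", f="webp"),
)
print(pattern)  # → t.resize(w:200,h:200)~t.blur(s:2)~t.toFormat(f:webp)
```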

The original image is transformed as shown.

(Transformed image: pot and cup)

The JSON data below shows the output of the Context API.

{
  "steps": [
    {
      "data": {},
      "name": "resize",
      "params": {
        "b": "000000",
        "f": "cover",
        "h": "200",
        "k": "lanczos3",
        "p": "center",
        "w": "200",
        "dpr": 1
      },
      "metadata": {
        "size": 3283,
        "depth": "uchar",
        "space": "srgb",
        "width": 200,
        "format": "jpeg",
        "height": 200,
        "density": 72,
        "channels": 3,
        "hasAlpha": false,
        "hasProfile": false,
        "isProgressive": false,
        "chromaSubsampling": "4:2:0"
      },
      "operation": "Basic",
      "identifier": "t"
    },
    {
      "data": {},
      "name": "blur",
      "params": {
        "s": "2",
        "dpr": "1"
      },
      "metadata": {
        "size": 2241,
        "depth": "uchar",
        "space": "srgb",
        "width": 200,
        "format": "jpeg",
        "height": 200,
        "density": 72,
        "channels": 3,
        "hasAlpha": false,
        "hasProfile": false,
        "isProgressive": false,
        "chromaSubsampling": "4:2:0"
      },
      "operation": "Basic",
      "identifier": "t"
    },
    {
      "data": {},
      "name": "toFormat",
      "params": {
        "f": "webp"
      },
      "metadata": {
        "size": 918,
        "depth": "uchar",
        "space": "srgb",
        "width": 200,
        "format": "webp",
        "height": 200,
        "channels": 3,
        "hasAlpha": false,
        "hasProfile": false,
        "isProgressive": false
      },
      "operation": "Basic",
      "identifier": "t"
    }
  ],
  "params": {},
  "headers": {
    "host": "api.pixelbin.io",
    "x-real-ip": "{REQUEST_DEVICE_IP_ADDRESS}",
    "origin": "{REQUEST_ORIGIN_DOMAIN_ADDRESS}",
    "accept": "application/json, text/plain, */*",
    "accept-encoding": "gzip, deflate, br",
    "accept-language": "en-US,en;q=0.9"
  },
  "metadata": {
    "size": 296302,
    "width": 3240,
    "format": "jpeg",
    "height": 2592,
    "channels": 3,
    "assetType": "image",
    "extension": "jpeg",
    "isRawAsset": false,
    "contentType": "image/jpeg",
    "isAudioAsset": false,
    "isImageAsset": true,
    "isVideoAsset": false,
    "isTransformationSupported": true
  }
}

Looking at the metadata properties in the JSON above, we can infer that three transformations were applied to the original image. The root metadata object describes the original image, while the metadata object inside each step describes the image after that step was applied.

You can track the changes in the size and dimensions of the image after each step is applied, in comparison with the original values below.

Step                          Size (bytes)  Width (px)  Height (px)
Original image                296302        3240        2592
After t.resize(w:200,h:200)   3283          200         200
After t.blur(s:2)             2241          200         200
After t.toFormat(f:webp)      918           200         200
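A table like the one above can be derived directly from the context response. The snippet below is a sketch under the assumption that the response has the root metadata and steps[].metadata shape shown earlier; the helper name is mine:

```python
# Sketch: derive the size/dimension table from a context response.
# Assumes the root `metadata` and `steps[].metadata` shape shown above.
def size_report(context: dict) -> list[tuple[str, int, int, int]]:
    meta = context["metadata"]
    rows = [("original", meta["size"], meta["width"], meta["height"])]
    for step in context["steps"]:
        m = step["metadata"]
        rows.append((step["name"], m["size"], m["width"], m["height"]))
    return rows

# Values taken from the context response above.
context = {
    "metadata": {"size": 296302, "width": 3240, "height": 2592},
    "steps": [
        {"name": "resize", "metadata": {"size": 3283, "width": 200, "height": 200}},
        {"name": "blur", "metadata": {"size": 2241, "width": 200, "height": 200}},
        {"name": "toFormat", "metadata": {"size": 918, "width": 200, "height": 200}},
    ],
}
for name, size, w, h in size_report(context):
    print(f"{name:<10} {size:>8} {w:>6} {h:>6}")
```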

2. Detect labels using AWS Rekognition

Let us look at another example of the Context API, using the AWS Rekognition DetectLabels add-on. Here we want to get all the labels associated with an image. The resultant transformation pattern will be awsRek.detectLabels().

note

The transformation CDN URL will return the original image in the response, as no visible modification is made to the image.

The JSON data below shows the output of the Context API.

{
  "steps": [
    {
      "data": {
        "Cup": {
          "Name": "Cup",
          "Aliases": [
            {
              "Name": "Mug"
            }
          ],
          "Parents": [],
          "Instances": [
            {
              "Confidence": 99.8639907836914,
              "BoundingBox": {
                "Top": 1949.9951877593994,
                "Left": 2494.801290035248,
                "Width": 589.9919235706329,
                "Height": 420.92675971984863
              }
            }
          ],
          "Categories": [
            {
              "Name": "Kitchen and Dining"
            }
          ],
          "Confidence": 99.8639907836914
        },
        "Plant": {
          "Name": "Plant",
          "Aliases": [],
          "Parents": [],
          "Instances": [
            {
              "Confidence": 97.5615234375,
              "BoundingBox": {
                "Top": 1125.0892553329468,
                "Left": 42.312018536031246,
                "Width": 1386.9211435317993,
                "Height": 1293.3147954940796
              }
            }
          ],
          "Categories": [
            {
              "Name": "Plants and Flowers"
            }
          ],
          "Confidence": 99.92650604248047
        },
        "Ikebana": {
          "Name": "Ikebana",
          "Aliases": [],
          "Parents": [
            {
              "Name": "Flower"
            },
            {
              "Name": "Flower Arrangement"
            },
            {
              "Name": "Plant"
            }
          ],
          "Instances": [],
          "Categories": [
            {
              "Name": "Home and Indoors"
            }
          ],
          "Confidence": 91.78543853759766
        },
        "Windowsill": {
          "Name": "Windowsill",
          "Aliases": [],
          "Parents": [
            {
              "Name": "Window"
            }
          ],
          "Instances": [],
          "Categories": [
            {
              "Name": "Home and Indoors"
            }
          ],
          "Confidence": 92.34557342529297
        },
        "Flower Arrangement": {
          "Name": "Flower Arrangement",
          "Aliases": [],
          "Parents": [
            {
              "Name": "Flower"
            },
            {
              "Name": "Plant"
            }
          ],
          "Instances": [],
          "Categories": [
            {
              "Name": "Plants and Flowers"
            }
          ],
          "Confidence": 99.92650604248047
        }
      },
      "name": "detectLabels",
      "params": {
        "l": 5,
        "c": 55
      },
      "metadata": {
        "size": 296302,
        "depth": "uchar",
        "space": "srgb",
        "width": 3240,
        "format": "jpeg",
        "height": 2592,
        "density": 72,
        "channels": 3,
        "hasAlpha": false,
        "hasProfile": false,
        "isProgressive": false,
        "chromaSubsampling": "4:2:0"
      },
      "operation": "AWS Rekognition Plugin",
      "identifier": "awsRek"
    }
  ],
  "params": {},
  "headers": {
    "host": "api.pixelbin.io",
    "x-real-ip": "{REQUEST_DEVICE_IP_ADDRESS}",
    "origin": "{REQUEST_ORIGIN_DOMAIN_ADDRESS}",
    "accept": "application/json, text/plain, */*",
    "accept-encoding": "gzip, deflate, br",
    "accept-language": "en-US,en;q=0.9"
  },
  "metadata": {
    "size": 296302,
    "width": 3240,
    "format": "jpeg",
    "height": 2592,
    "channels": 3,
    "assetType": "image",
    "extension": "jpeg",
    "isRawAsset": false,
    "contentType": "image/jpeg",
    "isAudioAsset": false,
    "isImageAsset": true,
    "isVideoAsset": false,
    "isTransformationSupported": true
  }
}

The AWS Rekognition DetectLabels add-on has returned all the detected labels in the data property of the step in the steps list. In this example, it returned the Cup, Plant, Ikebana, Windowsill, and Flower Arrangement labels.

There is no change in the metadata properties between the original image and the add-on step. You can compare the size and dimensions of the image after the add-on is applied with the original image below.

Step                         Size (bytes)  Width (px)  Height (px)
Original image               296302        3240        2592
After awsRek.detectLabels()  296302        3240        2592
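Since each label lives under its own key in the step's data object, extracting labels is a matter of iterating those keys. The snippet below is a sketch assuming the data shape shown in the response above; the helper name and threshold are illustrative:

```python
# Sketch: extract labels above a confidence threshold from the
# detectLabels step's `data` object (shape as in the response above).
# The helper name and threshold are illustrative, not an official API.
def labels_above(step_data: dict, min_confidence: float) -> list[str]:
    return sorted(
        name for name, info in step_data.items()
        if info.get("Confidence", 0) >= min_confidence
    )

# Trimmed-down version of the `data` object from the response above.
step_data = {
    "Cup": {"Confidence": 99.86},
    "Plant": {"Confidence": 99.93},
    "Ikebana": {"Confidence": 91.79},
}
print(labels_above(step_data, 95))  # → ['Cup', 'Plant']
```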

Using context data in transformations

Context data, although useful in its own right, becomes more powerful when used in combination with image transformations. You can reference the context of previous transformations using the $ special character.

For instance, let us look at the transformation below.

In the output image, the number plate of the car is hidden using a colored block. The transformation pattern used for this output image is:

numPlate.detect()~t.merge(m:wrap,polys:$steps.0.data.polygons,bg:f2994a)

The transformation pattern can be broken down into two major steps: first the Detect Number Plate step, and then the merge step.

The output of the Detect Number Plate step is a JSON object that lists four (x, y) corner coordinates for each number plate found in the given image. This JSON object is then fed into the merge step using the polys parameter, and every number plate in the transformed image is covered with the color f2994a, which is passed using the bg parameter.

The context data for the above transformation is:

{
  "steps": [
    {
      "name": "detect",
      "operation": "NumberPlateDetection",
      "identifier": "numPlate",
      "data": {
        "polygons": [
          [
            [451, 290],
            [517, 288],
            [518, 309],
            [451, 312]
          ]
        ]
      },
      "params": {},
      "metadata": {
        "format": "jpeg",
        "size": 64330,
        "width": 640,
        "height": 427,
        "space": "srgb",
        "channels": 3,
        "depth": "uchar",
        "density": 72,
        "chromaSubsampling": "4:2:0",
        "isProgressive": false,
        "hasProfile": false,
        "hasAlpha": false
      }
    },
    {
      "name": "merge",
      "operation": "Basic",
      "identifier": "t",
      "data": {},
      "params": {
        "b": "over",
        "g": "center",
        "h": 0,
        "i": "",
        "l": 0,
        "m": "wrap",
        "r": false,
        "t": 0,
        "w": 0,
        "bg": "f2994a",
        "tr": "",
        "polys": "$steps.0.data.polygons"
      },
      "metadata": {
        "format": "jpeg",
        "size": 174180,
        "width": 640,
        "height": 427,
        "space": "srgb",
        "channels": 3,
        "depth": "uchar",
        "density": 72,
        "chromaSubsampling": "4:2:0",
        "isProgressive": false,
        "hasProfile": false,
        "hasAlpha": false
      }
    }
  ],
  "metadata": {
    "width": 640,
    "height": 427,
    "channels": 3,
    "extension": "jpeg",
    "format": "jpeg",
    "contentType": "image/jpeg",
    "size": 64330,
    "assetType": "image",
    "isImageAsset": true,
    "isAudioAsset": false,
    "isVideoAsset": false,
    "isRawAsset": false,
    "isTransformationSupported": true
  },
  "headers": {
    "host": "api.pixelbin.io",
    "x-real-ip": "{REQUEST_DEVICE_IP_ADDRESS}",
    "origin": "{REQUEST_ORIGIN_DOMAIN_ADDRESS}",
    "accept": "application/json, text/plain, */*",
    "accept-encoding": "gzip, deflate, br",
    "accept-language": "en-US,en;q=0.9"
  },
  "params": {}
}

The steps property in the above JSON contains two objects. The data value from the Detect Number Plate step is fed into the merge step's polys parameter using the value $steps.0.data.polygons.

To use the context object in a transformation pattern in this way, first access the context using the $ character, then traverse the context JSON using dot (.) notation.

For example, let us break down this value: $steps.0.data.polygons.

  1. First, access the steps list using $steps, which gives the JSON below:
[
  {
    "name": "detect",
    "operation": "NumberPlateDetection",
    "identifier": "numPlate",
    "data": {
      "polygons": [
        [
          [451, 290],
          [517, 288],
          [518, 309],
          [451, 312]
        ]
      ]
    },
    "params": {},
    "metadata": {
      "format": "jpeg",
      "size": 64330,
      ...
    }
  },
  {
    "name": "merge",
    "operation": "Basic",
    ...
  }
]
  2. Then, get an item in the steps array using index notation. In this case, we need the first item, so we use $steps.0. The result is:
{
  "name": "detect",
  "operation": "NumberPlateDetection",
  "identifier": "numPlate",
  "data": {
    "polygons": [
      [
        [451, 290],
        [517, 288],
        [518, 309],
        [451, 312]
      ]
    ]
  },
  ...
}
  3. Then, get the data value of the NumberPlateDetection step using $steps.0.data:
{
  "polygons": [
    [
      [451, 290],
      [517, 288],
      [518, 309],
      [451, 312]
    ]
  ]
}
  4. Finally, get the actual value that will be used in the merge step using $steps.0.data.polygons:
[
  [
    [451, 290],
    [517, 288],
    [518, 309],
    [451, 312]
  ]
]

Similarly, you can get other values in the context data using $params, $headers, $metadata, etc.
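The traversal above can be mimicked client-side when inspecting a fetched context. The resolver below is a hypothetical sketch (dot-separated keys, with numeric segments treated as list indexes); the actual server-side evaluation of $ paths may differ:

```python
# Sketch: resolve a $-prefixed context path such as
# "$steps.0.data.polygons" against a parsed context object.
# Hypothetical helper; server-side evaluation may differ.
def resolve(context: dict, path: str):
    if not path.startswith("$"):
        raise ValueError("context paths start with '$'")
    value = context
    for part in path[1:].split("."):
        if isinstance(value, list):
            value = value[int(part)]  # numeric segment indexes a list
        else:
            value = value[part]       # other segments key into an object
    return value

# Trimmed-down version of the context response shown above.
context = {
    "steps": [
        {"name": "detect",
         "data": {"polygons": [[[451, 290], [517, 288], [518, 309], [451, 312]]]}},
    ],
    "metadata": {"width": 640},
}
print(resolve(context, "$steps.0.data.polygons"))
print(resolve(context, "$metadata.width"))  # → 640
```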

