Color Filter Array Forensics

Posted 2/1/21

Digital Image Forensics is a field concerned with identifying whether images are original or have been edited, and if the latter, what kinds of edits have been made. I haven’t experimented much with digital forensics, but there’s overlap with steganography and neat encoding tricks like halftone QR codes. I’ve been reading about some forensic techniques in UC Berkeley’s “Tutorial on Digital Image Forensics” (a 200-page PDF), and Color Filter Array forensics is a fun one.

Digital cameras have hardware limitations that leave very specific patterns in the resulting images, and any photo edits will disrupt these patterns, unless the editor takes care to preserve them. Details follow.

Digital Camera Hardware

Digital images usually consist of three color values per pixel, for red, green, and blue. However, most digital cameras don’t have any color sensors in them. Instead, they have a grid of light/luminosity sensors, and they add a layer of filters in front of the sensors that filter out all but red, all but green, or all but blue light. This is much cheaper to manufacture! But there’s a serious drawback: Each pixel can only record a single red, green, or blue sample, instead of all three.

So cameras fake their color data! They use a light filter grid called a Color Filter Array (most commonly a Bayer Filter), like the following:

One row consists of red/green/red filters, the following row consists of green/blue/green filters, then red/green/red, and so on. The result is that each pixel now has a real value for one color channel, and has two or more neighbors with a real value for each other color channel. We can approximate the missing color channels as an average of our neighbors’ color channels. For example, a red pixel will calculate its “blue” channel as the average of the neighbors in the four corners diagonal from its position, and will calculate its “green” channel as the average of the neighbors above, below, left, and right. This approximation is called a “de-mosaicking” algorithm.

De-mosaicking works okay, because how much will the red value change over the distance of a single pixel? Usually not by very much, unless there’s a sharp edge with high color contrast, in which case this approximation will make colors “bleed” slightly over the sharp edge. Newer cameras try to auto-detect these high-contrast borders and only approximate color channels using the neighbors on the same side of the border, but let’s ignore that for now.
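The mosaic-then-interpolate pipeline above can be sketched end to end on a toy array. This is plain bilinear averaging over an RGGB layout as described here, not any particular camera’s algorithm:

```python
#!/usr/bin/env python3
# Toy sketch of the pipeline described above: sample one channel per pixel
# through an RGGB Bayer filter, then rebuild the missing channels by
# averaging the neighbors that really recorded them.
import numpy as np

RED, GREEN, BLUE = 0, 1, 2

# Which channel does the sensor really record at (i,j)?
def bayerChannel(i, j):
        if i % 2 == 0:
                return RED if j % 2 == 0 else GREEN # red/green row
        return GREEN if j % 2 == 0 else BLUE # green/blue row

# Keep only the one real sample per pixel, as the sensor would
def mosaic(rgb):
        rows, cols, _ = rgb.shape
        raw = np.zeros((rows, cols))
        for i in range(rows):
                for j in range(cols):
                        raw[i, j] = rgb[i, j, bayerChannel(i, j)]
        return raw

# Fill each missing channel with the average of the 3x3 neighbors that
# recorded it; the pixel's own real channel is copied through untouched
def demosaic(raw):
        rows, cols = raw.shape
        out = np.zeros((rows, cols, 3))
        for i in range(rows):
                for j in range(cols):
                        for c in (RED, GREEN, BLUE):
                                if bayerChannel(i, j) == c:
                                        out[i, j, c] = raw[i, j]
                                        continue
                                vals = [raw[y, x]
                                        for y in range(max(i-1, 0), min(i+2, rows))
                                        for x in range(max(j-1, 0), min(j+2, cols))
                                        if bayerChannel(y, x) == c]
                                out[i, j, c] = np.mean(vals)
        return out
```

On a flat color patch this round trip is lossless; near sharp edges, the interpolated channels are exactly where the “bleed” described above comes from.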

Detecting De-Mosaicking Patterns

While the simulated color data looks mostly correct to the human eye, it leaves an unnatural pattern in the numeric color values for each channel. Specifically, we know that each pixel will have two “simulated” channels that are the average of the same channel in each neighboring pixel with a real value for that channel. This should be easy to check in Python, Matlab, or your image-analysis language of choice:

#!/usr/bin/env python3
from PIL import Image
import numpy as np
from statistics import mean

im = Image.open("bayer_filter_demosaicked.png") # lossless export
pixels = np.array(im)
RED, GREEN, BLUE = 0, 1, 2

# .X.
# ...
# .X.
def getVerticalAverage(pixels, i, j, channel):
        rows = pixels.shape[0]
        if( i == 0 ):
                return pixels[i+1,j,channel]
        if( i == rows-1 ):
                return pixels[i-1,j,channel]
        return round(mean([pixels[i-1,j,channel],pixels[i+1,j,channel]]))

# ...
# X.X
# ...
def getHorizontalAverage(pixels, i, j, channel):
        cols = pixels.shape[1]
        if( j == 0 ):
                return pixels[i,j+1,channel]
        if( j == cols-1 ):
                return pixels[i,j-1,channel]
        return round(mean([pixels[i,j-1,channel],pixels[i,j+1,channel]]))

# X.X
# ...
# X.X
def getDiagonalAverage(pixels, i, j, channel):
        rows = pixels.shape[0]
        cols = pixels.shape[1]
        corners = []
        if( i > 0 ):
                if( j > 0 ):
                        corners.append(pixels[i-1,j-1,channel])
                if( j < cols-1 ):
                        corners.append(pixels[i-1,j+1,channel])
        if( i < rows-1 ):
                if( j > 0 ):
                        corners.append(pixels[i+1,j-1,channel])
                if( j < cols-1 ):
                        corners.append(pixels[i+1,j+1,channel])
        return round(mean(corners))

def confirmEqual(i, j, color1, color2):
        if( color1 != color2 ):
                print("Anomaly detected at %d,%d (got %d, expected %d)" % (i,j, color1,color2))

# For every pixel, determine what 'real' color channel it has
# then confirm that its interpolated channels match what we get
# from de-mosaicking
for i,row in enumerate(pixels):
        for j,col in enumerate(row):
                if( i % 2 == 0 ): # Red/Green row
                        if( j % 2 == 0 ): # Red column
                                correctGreen = round(mean([getHorizontalAverage(pixels,i,j,GREEN),getVerticalAverage(pixels,i,j,GREEN)]))
                                correctBlue = getDiagonalAverage(pixels,i,j,BLUE)
                                confirmEqual(i, j, pixels[i,j,GREEN], correctGreen)
                                confirmEqual(i, j, pixels[i,j,BLUE], correctBlue)
                        else: # Green column
                                confirmEqual(i, j, pixels[i,j,RED], getHorizontalAverage(pixels,i,j,RED))
                                confirmEqual(i, j, pixels[i,j,BLUE], getVerticalAverage(pixels,i,j,BLUE))
                else: # Green/Blue row
                        if( j % 2 == 0 ): # Green column
                                confirmEqual(i, j, pixels[i,j,RED], getVerticalAverage(pixels,i,j,RED))
                                confirmEqual(i, j, pixels[i,j,BLUE], getHorizontalAverage(pixels,i,j,BLUE))
                        else: # Blue column
                                correctGreen = round(mean([getHorizontalAverage(pixels,i,j,GREEN),getVerticalAverage(pixels,i,j,GREEN)]))
                                correctRed = getDiagonalAverage(pixels,i,j,RED)
                                confirmEqual(i, j, pixels[i,j,RED], correctRed)
                                confirmEqual(i, j, pixels[i,j,GREEN], correctGreen)

Of course, this is only possible if you know both which Color Filter Array the camera that took the photo uses and the details of its de-mosaicking algorithm. For now we’ll assume the basic case of “red/green + green/blue” and “average neighboring color channels, ignoring high-contrast borders”. For more on color filter arrays and better de-mosaicking approaches, read here. Let’s also assume the image has only lossless compression, which is often the case for the highest export quality straight off a digital camera.

Image Editing Footprints

If anyone opens our camera’s photos in editing software like Photoshop or GIMP, and makes any color adjustments, they’ll break the de-mosaic pattern. If they use the clone/stamp tool, the stamped portions of the image won’t have color channels averaging their neighbors outside the stamped region, and the de-mosaic pattern will be broken. If they copy a portion of a different image into this one, the pattern will be broken.

Not only can we detect that an image has been altered in this way, we can detect where the anomalies occur, and potentially highlight the exact changes made. Amending the above script, we’ll replace reporting anomalies with highlighting them:

# Turn all correct pixels 'red', leaving anomalies for further examination
pixels2 = np.copy(pixels)
def confirmEqual(i, j, color1, color2):
        if( color1 == color2 ):
                pixels2[i,j,RED] = 255
                pixels2[i,j,GREEN] = 0
                pixels2[i,j,BLUE] = 0
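After re-running the detection loop with this version of confirmEqual, the result can be written out for visual inspection. A minimal sketch, using a stand-in array and a hypothetical file name (PNG, so the anomaly pixels survive losslessly):

```python
# Sketch: write the highlighted map out for visual inspection. `pixels2`
# here is a stand-in array and the file name is hypothetical; PNG keeps
# the output lossless so the anomaly pixels survive exactly.
from PIL import Image
import numpy as np

pixels2 = np.zeros((4, 4, 3), dtype=np.uint8) # stand-in for the real array
Image.fromarray(pixels2).save("anomaly_map.png")
```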

Since Photoshop/GIMP’s changes look “correct” to the human eye, the tools have done their job, and they have no incentive to make their changes undetectable to forensic analysis.

Defeating CFA Anomaly Detection

Unfortunately, this technique is far from flawless. There are two ways to defeat CFA anomaly detection:

  1. Delete the appropriate RGB channels from each pixel after editing the image, and re-run the de-mosaicking algorithm to recreate the de-mosaic pattern. This requires that the forger know exactly which Color Filter Array and de-mosaicking approach the camera uses, the same knowledge a forensic analyst needs.

  2. Export the image using a LOSSY compression algorithm, with the compression rate turned up high enough to perturb the channel values and destroy the de-mosaic pattern. This will make it obvious that the image has been re-saved since being exported from the camera, but will destroy the clear-cut evidence of which portions have been edited, if any.
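The second option is easy to demonstrate with Pillow; the quality setting and file names below are arbitrary stand-ins:

```python
# Sketch of option 2: a lossy re-save perturbs the channel values.
# The quality setting and file names are arbitrary stand-ins.
from PIL import Image
import numpy as np

rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8) # stand-in photo
Image.fromarray(noisy).save("resaved.jpg", quality=50) # lossy export
reloaded = np.array(Image.open("resaved.jpg"))
# The image still looks the same, but the exact values have shifted,
# washing out the de-mosaic pattern
```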

All in all, a very cool forensic technique, and a fun introduction to the field!