 # Reducing the number of colors of an image using Median Cut algorithm

In my previous post, I talked about the Uniform Quantization algorithm, which is a simple way of reducing the colors in an image. Though it's easy to implement, it doesn't always yield good results when many colors cluster in the same region of the color space. Also, many color regions end up with no pixels mapped to them, so those color map entries are wasted.

In this post, I will talk about the Median Cut algorithm for color quantization. The main idea is to recursively sort the pixels along the color channel with the greatest range and split the bucket at the median. The implementation can be found here.

## Algorithm

1. Move all pixels into a single large bucket.
2. Find the color channel (red, green, or blue) in the image with the greatest range.
3. Sort the pixels by that channel's values.
4. Find the median and split the bucket at that pixel.
5. Repeat the process for both buckets until you have the desired number of colors.

For example, in your image, if the blue channel has the greatest range, then a pixel with an RGB value of (32, 8, 16) is less than a pixel with an RGB value of (1, 2, 24), because 16 < 24. Sort the pixels along the blue channel. After the bucket has been sorted, move the upper half of the pixels into a new bucket. Repeat the process on both buckets, giving you 4 buckets, then repeat on all 4 buckets, giving you 8 buckets, then repeat on all 8, giving you 16 buckets. Average the pixels in each bucket and you have a palette of 16 colors.
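The sort-and-split step from this walkthrough can be sketched in a few lines of NumPy. The first two pixels are the ones from the example above; the other two values are made up for illustration:

```python
import numpy as np

# a small bucket of RGB pixels; blue (index 2) is assumed to be
# the channel with the greatest range in this bucket
pixels = np.array([
    [32, 8, 16],
    [1, 2, 24],
    [10, 5, 3],
    [7, 9, 40],
])

# sort the bucket by the blue channel
sorted_pixels = pixels[pixels[:, 2].argsort()]

# split at the median into two new buckets
median = len(sorted_pixels) // 2
lower, upper = sorted_pixels[:median], sorted_pixels[median:]

print(lower)
print(upper)
```

After the split, `lower` holds the pixels with the smallest blue values and `upper` the rest, and the same procedure is applied to each half.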

Since the number of buckets doubles with each iteration, this algorithm can only generate a palette with a number of colors that is a power of two.
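The final averaging step, which turns each bucket into one palette entry, can be sketched like this (the bucket contents are made up for illustration):

```python
import numpy as np

# a bucket of pixels after the final split
bucket = np.array([
    [200, 10, 10],
    [180, 30, 20],
    [220, 20, 30],
    [196, 24, 40],
])

# the bucket's palette entry is the per-channel mean of its pixels
palette_color = bucket.mean(axis=0).astype(int)
print(palette_color)
```

Every pixel that landed in this bucket is then replaced by `palette_color`, which is exactly what the quantization function in the implementation below does.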

## Implementation

The code below implements the Median Cut color quantization algorithm. Because each recursion splits a bucket in two, the number of colors you can reduce to is always a power of two. The complete notebook is here.

```python
import matplotlib.pyplot as plt
import numpy as np

def median_cut_quantize(img, img_arr):
    # when the recursion bottoms out, average the bucket into one color
    print("to quantize: ", len(img_arr))
    r_average = np.mean(img_arr[:, 0])
    g_average = np.mean(img_arr[:, 1])
    b_average = np.mean(img_arr[:, 2])

    # write the average color back to every pixel position in this bucket
    for data in img_arr:
        sample_img[data[3]][data[4]] = [r_average, g_average, b_average]

def split_into_buckets(img, img_arr, depth):
    if len(img_arr) == 0:
        return

    if depth == 0:
        median_cut_quantize(img, img_arr)
        return

    # range of each color channel within this bucket
    r_range = np.max(img_arr[:, 0]) - np.min(img_arr[:, 0])
    g_range = np.max(img_arr[:, 1]) - np.min(img_arr[:, 1])
    b_range = np.max(img_arr[:, 2]) - np.min(img_arr[:, 2])

    space_with_highest_range = 0
    if g_range >= r_range and g_range >= b_range:
        space_with_highest_range = 1
    elif b_range >= r_range and b_range >= g_range:
        space_with_highest_range = 2
    elif r_range >= b_range and r_range >= g_range:
        space_with_highest_range = 0
    print("space_with_highest_range:", space_with_highest_range)

    # sort the pixels by the channel with the highest range
    # and divide the array at the median
    img_arr = img_arr[img_arr[:, space_with_highest_range].argsort()]
    median_index = int((len(img_arr) + 1) / 2)
    print("median_index:", median_index)

    # split the array into two buckets along the median
    split_into_buckets(img, img_arr[0:median_index], depth - 1)
    split_into_buckets(img, img_arr[median_index:], depth - 1)

# sample_img is assumed to be an H x W x 3 RGB image array loaded earlier
# in the notebook; each row of the flattened array stores a pixel's color
# plus its row and column indices so it can be written back later
flattened_img_array = []
for rindex, rows in enumerate(sample_img):
    for cindex, color in enumerate(rows):
        flattened_img_array.append([color[0], color[1], color[2], rindex, cindex])

flattened_img_array = np.array(flattened_img_array)

# the 3rd parameter is the desired number of colors as a power of 2.
# If the argument passed is 4, that means 2^4 = 16 colors.
split_into_buckets(sample_img, flattened_img_array, 4)
```

The output of the above program after execution: