{"id":1367,"date":"2020-05-03T16:01:13","date_gmt":"2020-05-03T16:01:13","guid":{"rendered":"https:\/\/muthu.co\/?p=1367"},"modified":"2021-05-24T02:33:51","modified_gmt":"2021-05-24T02:33:51","slug":"extracting-text-regions-from-an-image-using-geometric-properties","status":"publish","type":"post","link":"http:\/\/write.muthu.co\/extracting-text-regions-from-an-image-using-geometric-properties\/","title":{"rendered":"Extracting Text Regions from an image using Geometric properties"},"content":{"rendered":"\n

The problem of building systems that mimic human behavior is not an easy one to solve. Neural nets solve these problems in one way, but anyone who thinks that is the only right way doesn't know enough about Artificial Intelligence. When you show a child a picture of a dog, just one picture, the child can locate a dog of any form in any other picture. This is not how neural networks work: we need thousands of pictures of dogs to build a model that can locate a dog in a picture. The accuracy of such a system is heavily reliant on the quality and size of the dataset, and if you don't have enough data, you can't even train a neural network, which basically means you don't have a solution.

I believe our brain is an advanced pattern matching system. The right way to solve any computer vision problem is to find algorithms that are close to how the brain does it. Pattern matching based solutions don't need much data, which is why I love them.

In this post, I will attempt to solve the problem of text segmentation using only the geometric properties of components in the image to separate text regions from non-text regions and draw bounding boxes around them. The core idea is that text usually shares a lot of common characteristics. You can find the entire text detector project here: https://github.com/muthuspark/text-detector

\"\"<\/a><\/figure><\/div>\n\n\n\n

I will run my algorithm on the sample image above to extract the text regions from it.

## Algorithm

1. Binarize the image using an adaptive thresholding algorithm.
2. Find connected components in the image and identify likely text regions using a heuristics filter.
3. Draw bounding boxes around the likely text regions and increase their size so that they overlap neighboring boxes.
4. Combine the overlapping boxes and remove the boxes which do not overlap with any other box.
5. You will be left with a few bounding boxes which can be sent to an OCR system like Tesseract.

### Step 1: Binarize the image using an adaptive thresholding algorithm

Load the image and binarize it using an adaptive thresholding algorithm. Generally, there are two approaches to binarizing a grayscale image: global thresholding and local thresholding. Adaptive thresholding is a form of local thresholding that accounts for spatial variations in illumination, which are present in most of our datasets.

```python
from skimage.filters import threshold_sauvola

window_size = 25  # size of the local neighbourhood; must be odd
thresh_sauvola = threshold_sauvola(img, window_size=window_size)
binary_sauvola = img < thresh_sauvola  # dark text becomes True (foreground)
```
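For contrast, a global method such as Otsu's picks a single threshold for the entire image, which tends to fail under uneven illumination. A minimal sketch, assuming the same grayscale `img`:

```python
from skimage.filters import threshold_otsu

# A single threshold for the whole image: shadows or lighting gradients
# push entire regions to the wrong side of it, unlike Sauvola above.
binary_otsu = img < threshold_otsu(img)
```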
    \"\"<\/a><\/figure><\/div>\n\n\n\n

### Step 2: Find connected components in the image and identify likely text regions using heuristics

We will find the connected components in the image and filter out the non-text regions using a few geometric properties that help discriminate between text and non-text regions. Some of the well-known properties mentioned in many research papers are listed below, followed by the filtering code:

1. When a component has a very small area (area < 15 pixels), it is usually noise and hardly legible.
2. When the density of the component is too low, it can be a diagonal line or a noise element (the density of a text element is usually greater than 20%).
3. Aspect ratio, the ratio of width to height, should not be too low or too high.
4. Eccentricity measures how elongated the region is: it is the eccentricity of the ellipse that has the same second moments as the region, ranging from 0 (a circle) to 1 (a line segment).
5. Extent, the proportion of pixels in the bounding box that are also in the region. Computed as the area divided by the area of the bounding box.
6. Solidity, also known as convexity: the proportion of pixels in the convex hull that are also in the object. Computed as area / convex area.
7. Stroke width uniformity: text characters usually have a roughly uniform stroke width throughout. This is an important geometric property and a major contributor to our algorithm.
```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.measure import label, regionprops

bounding_boxes = []
for region in regionprops(label(binary_sauvola)):
    minr, minc, maxr, maxc = region.bbox
    aspect_ratio = (maxc - minc) / (maxr - minr)  # width / height

    should_clean = region.area < 15
    should_clean = should_clean or aspect_ratio < 0.06 or aspect_ratio > 3
    should_clean = should_clean or region.eccentricity > 0.995
    should_clean = should_clean or region.solidity < 0.3
    should_clean = should_clean or region.extent < 0.2 or region.extent > 0.9

    # Stroke-width proxy: the distance transform gives each pixel of the
    # component mask its distance to the nearest background pixel;
    # components whose distance values vary too little relative to their
    # mean are discarded here.
    strokeWidthValues = distance_transform_edt(region.image)
    strokeWidthMetric = np.std(strokeWidthValues) / np.mean(strokeWidthValues)
    should_clean = should_clean or strokeWidthMetric < 0.4

    if not should_clean:
        # Record the bounding boxes which are highly likely to be text.
        bounding_boxes.append([minr, minc, maxr, maxc])
```
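To get a feel for what these properties report, here is a tiny self-contained example on a synthetic T-shaped component (the shape is invented purely for illustration):

```python
import numpy as np
from skimage.measure import label, regionprops

mask = np.zeros((9, 9), dtype=bool)
mask[1, 1:8] = True   # top bar of the T
mask[1:8, 4] = True   # vertical stem

region = regionprops(label(mask))[0]
# area: foreground pixel count; extent: area / bounding-box area;
# solidity: area / convex-hull area; eccentricity: ellipse elongation.
print(region.area, region.extent, region.solidity, region.eccentricity)
```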

#### All connected components

[Figure: all connected components]

#### Connected components after geometric-properties-based filtering

[Figure: connected components after filtering]

### Step 3: Draw bounding boxes around the likely text regions and increase their size

Once we have identified the bounding boxes, we increase their sizes by a small factor so that each box overlaps its neighboring boxes. This way, the character components in a word overlap their left and right neighbors.

```python
expansionAmountY = 0.02  # expansion between lines
expansionAmountX = 0.03  # expansion between words

minr, minc, maxr, maxc = region.bbox

# Scale the corners outward, rounding away from the box so the expanded
# box fully contains the original (cast to int for array indexing).
minr = int(np.floor((1 - expansionAmountY) * minr))
minc = int(np.floor((1 - expansionAmountX) * minc))
maxr = int(np.ceil((1 + expansionAmountY) * maxr))
maxc = int(np.ceil((1 + expansionAmountX) * maxc))
```
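One caveat: the scaled coordinates can drift outside the image, so it is worth clamping them. A small sketch, assuming the image dimensions come from `binary_sauvola`:

```python
# Keep the expanded box inside the image bounds.
img_height, img_width = binary_sauvola.shape
minr, minc = max(0, minr), max(0, minc)
maxr, maxc = min(img_height, maxr), min(img_width, maxc)
```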
      \"\"<\/a><\/figure><\/div>\n\n\n\n

### Step 4: Combine the overlapping boxes and remove the boxes which do not overlap with any other box

The idea here is to combine boxes that overlap each other and are roughly on the same line. The code for each part is listed below.

#### Check if two boxes are overlapping
```python
def is_overlapping(box1, box2):
    # Boxes are [minr, minc, maxr, maxc]. Two rectangles overlap when each
    # one starts before the other ends, on both axes:
    #   A.minc < B.maxc and A.maxc > B.minc and
    #   A.minr < B.maxr and A.maxr > B.minr
    if box1[1] < box2[3] and box1[3] > box2[1] and box1[0] < box2[2] and box1[2] > box2[0]:
        return True
    return False
```

#### Check if the two boxes are on the same line
```python
def is_almost_in_line(box1, box2):
    # Centroids as (row, col).
    centroid_b1 = [int((box1[0] + box1[2]) / 2), int((box1[1] + box1[3]) / 2)]
    centroid_b2 = [int((box2[0] + box2[2]) / 2), int((box2[1] + box2[3]) / 2)]
    if (centroid_b2[0] - centroid_b1[0]) == 0:
        # Same row: the boxes are perfectly in line.
        return True

    # Angle between the segment joining the centroids and the vertical
    # (row) axis: near 90 degrees means the segment is nearly horizontal,
    # i.e. the boxes sit on the same text line.
    angle = (np.arctan(np.abs((centroid_b2[1] - centroid_b1[1]) / (centroid_b2[0] - centroid_b1[0]))) * 180) / np.pi
    if angle > 80:
        return True

    return False
```

#### Combine two overlapping boxes into one
```python
def combine_boxes(box1, box2):
    # Union: the smallest box that contains both inputs.
    minr = np.min([box1[0], box2[0]])
    minc = np.min([box1[1], box2[1]])
    maxr = np.max([box1[2], box2[2]])
    maxc = np.max([box1[3], box2[3]])
    return [minr, minc, maxr, maxc]
```
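A quick sanity check of the three helpers on hand-made boxes (the coordinates are invented for illustration):

```python
a = [0, 0, 10, 10]   # rows 0-10, cols 0-10
b = [0, 8, 10, 18]   # same rows, shifted right so its columns overlap a's

print(is_overlapping(a, b))     # True: row and column ranges both intersect
print(is_almost_in_line(a, b))  # True: the centroids share the same row
print(combine_boxes(a, b))      # the union box: rows 0-10, cols 0-18
```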

#### Group the boxes into bigger boxes
```python
import time

def group_the_bounding_boxes(bounding_boxes):
    stime = time.time()
    number_of_checks = 0
    box_groups = []
    dont_check_anymore = []
    for iindex, box1 in enumerate(bounding_boxes):
        if iindex in dont_check_anymore:
            continue

        group_size = 0
        bigger_box = box1

        for jindex, box2 in enumerate(bounding_boxes):
            if jindex in dont_check_anymore:
                continue

            if jindex == iindex:
                continue

            number_of_checks += 1

            if is_overlapping(bigger_box, box2) and is_almost_in_line(bigger_box, box2):
                bigger_box = combine_boxes(bigger_box, box2)
                dont_check_anymore.append(jindex)
                group_size += 1

        if group_size > 0:
            # Check if this group overlaps an existing group and merge
            # into it; otherwise start a new group.
            combined_with_existing_box = False
            for kindex, box3 in enumerate(box_groups):
                if is_overlapping(bigger_box, box3):
                    bigger_box = combine_boxes(bigger_box, box3)
                    box_groups[kindex] = bigger_box
                    combined_with_existing_box = True
                    break

            if not combined_with_existing_box:
                box_groups.append(bigger_box)

        else:
            # A box that merged with nothing is discarded as isolated noise.
            dont_check_anymore.append(iindex)

    print("number_of_checks:", number_of_checks)
    print("time_taken:", str(time.time() - stime))
    return box_groups
```
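To produce a visualization like the one below, one option (a sketch, assuming matplotlib and the `img` and `bounding_boxes` variables from the earlier steps) is to draw the grouped boxes over the image:

```python
import matplotlib.pyplot as plt
import matplotlib.patches as patches

box_groups = group_the_bounding_boxes(bounding_boxes)

fig, ax = plt.subplots()
ax.imshow(img, cmap='gray')
for minr, minc, maxr, maxc in box_groups:
    # Rectangle takes (x, y) = (col, row), then width and height.
    ax.add_patch(patches.Rectangle((minc, minr), maxc - minc, maxr - minr,
                                   fill=False, edgecolor='red', linewidth=1))
plt.show()
```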

**Output from the above code is shown below:**

      \"text<\/a><\/figure><\/div>\n\n\n\n

Image extracted using the bounding box coordinates:

      \"\"<\/a><\/figure><\/div>\n\n\n\n

As you can see in the image above, the text regions are now merged into connected blocks. You can pass these regions into an OCR system to extract the text.
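For the OCR step, a minimal sketch assuming the pytesseract wrapper (and a local Tesseract install), with `img` holding grayscale values in [0, 1]:

```python
import pytesseract

for minr, minc, maxr, maxc in box_groups:
    # Crop one detected region and convert it to 8-bit for Tesseract.
    crop = (img[minr:maxr, minc:maxc] * 255).astype('uint8')
    print(pytesseract.image_to_string(crop))
```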

Other samples:

[Table: five sample image pairs, Original vs. Detected Text Regions]

Code for the text detector: https://github.com/muthuspark/text-detector
Jupyter notebook: https://nbviewer.jupyter.org/github/muthuspark/text-detector/blob/master/notebooks/Text%20Segmentation%20in%20Image.ipynb

## References

B. Epshtein, E. Ofek and Y. Wexler, "Detecting text in natural scenes with stroke width transform," 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, 2010, pp. 2963-2970.
T. A. Tran et al., "Separation of Text and Non-text in Document Layout Analysis using a Recursive Filter," TIIS 9 (2015), pp. 4072-4091.
H. Chen et al., "Robust Text Detection in Natural Images with Edge-Enhanced Maximally Stable Extremal Regions," 2011 18th IEEE International Conference on Image Processing (ICIP), IEEE, 2011.
