US20160014300A1 - Image Binarization - Google Patents
- Publication number
- US20160014300A1 (application US14/865,213)
- Authority
- US
- United States
- Prior art keywords
- pixels, image, pixel, blurred, binary
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G06V10/28—Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
- G06K9/344
- G06T5/002
- G06T5/40—Image enhancement or restoration using histogram techniques
- G06T5/70—Denoising; Smoothing
- G06T7/0081
- G06T7/11—Region-based segmentation
- G06T7/136—Segmentation; Edge detection involving thresholding
- G06V10/20—Image preprocessing
- G06V30/153—Segmentation of character regions using recognition of characters or words
- H04N1/40012—Conversion of colour to monochrome
- H04N1/40093—Modification of content of picture, e.g. retouching
- G06T2207/10008—Still image; Photographic image from scanner, fax or copier
- G06T2207/10024—Color image
- G06T2207/20148
- G06V2201/01—Solutions for problems related to non-uniform document background
- H04N2201/0081—Image reader
- H04N2201/0094—Multifunctional device, i.e. a device capable of all of reading, reproducing, copying, facsimile transception, file transception
Definitions
- the present disclosure relates to converting grayscale, color or other multi-valued images into binary representations (binarization). It relates further to conversion based on neighborhoods of local pixels, rather than global conversion applied to entireties of images. Operating on blurred and non-blurred representations of the input image typifies the embodiments.
- a hard copy document 10 becomes digitized for computing actions, such as electronic editing, searching, storing compactly, displaying on monitors, etc. It is also digitized as a precursor to other routines, such as machine translation, data extraction, text mining, invoice processing, invoice payment, and the like.
- the hard copy document is any of a variety, but is commonly an invoice, bank statement, receipt, business card, written paper, book, etc. It contains both text 7 and background 9 .
- the text typifies words, numbers, symbols, phrases, etc. having content relating to the topic of the hard copy 10 .
- the background represents the underlying media on which the content appears.
- the background can also include various colors, advertisements, corporate logos, watermarks, texture, creases, speckles, stray marks, and the like.
- the document 10 becomes scanned 12, which results in a grayscale or color image 14 defined by pixels.
- the pixels 16-1, 16-2, 16-3 . . . are many and range in number depending upon the resolution of the scan, e.g., 150 dpi, 300 dpi, etc.
- Each pixel has an intensity value defined according to various scales, but a range of 256 possible values is common, e.g., 0-255.
- the intensity values get converted from 0-255 to one of two possible binary values (black or white) in a binary image 22 .
- Scanned pixel 16-3 in image 14 becomes binarized pixel 16-3′ in binary image 22 with a value of black or white, 1 or 0.
- binarization occurs by splitting in half the intensity scale of the pixels and labeling as black pixels those with relatively dark intensities and white pixels those with light intensities.
- graph 25 for instance, pixels 16 of input image 14 having intensities ranging from 0-127 become labeled black during traditional binarization, while those with intensities from 128-255 become labeled white.
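The traditional global-threshold scheme above can be sketched in a few lines (an illustrative sketch only; the 128 cut-point follows the 0-127/128-255 split at graph 25, and the function name is hypothetical):

```python
def global_binarize(intensity, threshold=128):
    """Traditional binarization: intensities 0-127 become black and
    128-255 become white, regardless of local context."""
    return 0 if intensity < threshold else 255  # 0 = black, 255 = white

print(global_binarize(100))  # dark intensity -> 0 (black)
print(global_binarize(200))  # light intensity -> 255 (white)
```

The single global cut-point is exactly what loses low-contrast text, as discussed below.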
- the result of binarization serves as input to OCR 30 , which creates an output of content 40 available for further processing, storage, searching, text mining, displaying, etc.
- Text 41 , XML 43 , searchable .pdf's 45 , and the like, are common types of OCR outputs.
- the process also often takes place through instructions, software, etc. executing on controller(s) in hardware 50 , such as imaging devices e.g. multi-function printers (MFPs), all-in-ones (AIOs), copier machines, etc.
- not all OCR engines utilize binarization techniques. Instead, some directly take as input to their engine the grayscale or color image output from the scanning function.
- Binarization is also susceptible to losing important information in low contrast regions of documents, especially where light text resides on light backgrounds or dark text resides on dark backgrounds. If a dark text pixel, say an intensity 50 of 255 ( graph 25 ), resides nearby dark background pixels, say intensity 75 of 255, application of a global threshold of e.g. 50% results in both text and background pixels being characterized as black after binarization. Because the text and background are similar in color, all text information is lost after binarization. As seen in FIG. 2 , a group of pixels 60 from document 10′ have both dark text pixels 7′ (from a portion of the letter “N”) and dark background pixels 9′.
- Techniques include conversion of grayscale and color images in a manner that maximizes accuracy of optical character recognition (OCR). Rather than applying a single threshold to an entire image for binarization, custom thresholds are selected for pixels in an image based on surrounding pixels. Algorithms first apply a blurring operation to a window of neighboring pixels surrounding each pixel to smooth out halftone backgrounds. Next, a down-sampled distribution of data of the blurred window is calculated, e.g., histogram, to identify a most popular grayscale/color value of the image.
- the algorithm assumes that a center pixel in the window comprises part of a uniform region of color and a predetermined threshold is applied by which the centering pixel is either defined as a black or white binary pixel.
- the algorithm assumes the region is not a uniform color. If more pixels are darker than the most popular color, a threshold is selected to maximize the separation between the most popular color and the average of the dark pixels. The algorithm also applies similar logic if more pixels are lighter than the most popular color.
- Pixels in both the blurred image and the original image must be darker than their thresholds in order for a black pixel to be set in the binary image.
- the technique makes it possible to remove uniform backgrounds and halftones, without sacrificing details of small fonts, and reduce OCR processing time.
- systems and methods convert to binary an input image having pixels defining text and background. Upon blurring the image, thresholds are determined by which pixels are defined as either binary black or binary white.
- Thresholds derive from grouped together neighboring pixels having pixels separated out that correspond to the background. For pixels of the input image defined as binary black and having corresponding pixels in the blurred image defined as binary black relative to their thresholds, those are set to black in the binary image, else they are set white. Techniques for devising thresholds, blurring images, grouping together pixels, statistical analysis, etc., typify the embodiments.
- FIG. 1 is a diagrammatic view of an OCR environment according to the prior art
- FIG. 2 is a diagrammatic view of poor binarization results according to techniques of the prior art
- FIG. 3 is a flow chart according to the present invention for creating blurred images from non-blurred originals
- FIGS. 4A-4H are diagrammatic views showing pixel and bounding box terminology and movement according to the present invention.
- FIGS. 5A and 5B are diagrammatic views of a bounding box and corresponding distribution of data in the form of a histogram according to the present invention.
- FIGS. 6A-6D are diagrammatic views of common histograms according to the present invention.
- FIGS. 7A-7E are diagrammatic views of histograms with various bin counts according to the present invention.
- FIG. 8 is a diagrammatic view of a sample bounding box of pixels of an input image having histograms for blurred and non-blurred images thereof;
- FIG. 9 is a flow chart and accompanying diagrams for applying a threshold for deciding binary quantification according to the distribution of data in a histogram
- FIG. 10 is a flow chart and diagram for determining a unique threshold for a bounding box of grouped together neighboring pixels
- FIG. 11 is a flow chart and diagram for variously applying thresholds to binarize individual, multi-valued pixels of an image
- FIG. 12 is a diagrammatic view of the binarization results according to the sample of FIG. 8 ;
- FIG. 13 is a diagrammatic view of a representative input document
- FIG. 14 is a diagrammatic view of the input document of FIG. 13 after poor binarization according to the prior art.
- FIG. 15 is a diagrammatic view of the input document of FIG. 13 after binarization according to the present invention standing in crisp contrast to the results of the prior art in FIG. 14 .
- a grayscale, color or other multi-valued image 14 results from a scanning operation at 12 .
- Individual pixels 16 of the image define text and background according to pixel value intensities, typically ranging from 0-255.
- Binarization according to the present invention now provides better binary images 22 that serve as input to OCR engines, especially for use in imaging devices 50 .
- the input image 14 is both blurred 70 and kept original or un-blurred 80 .
- one technique groups together a window or small groups of neighboring pixels 72 and converts pixel value intensities (PVI) to blurred pixel value intensities for the same region 72 ′.
- the window 72 is then advanced across 74 a width of the input image 14 , then down and across 76 , 76 ′, with the process repeating as many times as necessary until a fully blurred image 90 is obtained at 92 .
- the blurred image has pixels defining blurred text and blurred background corresponding to the input image. To actually obtain blurred from original pixel value intensities, light and dark pixels are blended with one another in small neighborhoods such that darker pixels become lighter while lighter pixels become darker.
- each pixel in the blurred image becomes a composite of itself and its surrounding neighbors and noise is removed in the original, especially halftone backgrounds that are effectively converted into uniform backgrounds.
- Many techniques are known for doing this, but a representative technique includes a normalized box filter where each blurred pixel is a mean of its neighbors in the window, each contributing with equal weight.
- pixel value intensities of the original image are summed. They are then divided by the total number of pixels in the region.
- pixel value intensities of 100 are noted for all pixels, except for a center pixel 16 - 4 having a pixel value intensity of 0.
- Summing them together results in 800 which is then divided by nine pixels, or 800/9.
- the result is 88.88, or 89, when rounded to a next closest whole number.
- the window thus has blurred pixels 16′ with blurred pixel value intensities of 89 for an entirety of the window.
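The normalized box filter arithmetic above can be sketched as follows (a minimal Python illustration of the mean-of-window computation, not the patented implementation; the function name is hypothetical):

```python
def box_blur_pixel(window):
    """Return the mean of all pixel value intensities in the window,
    rounded to the nearest whole number (normalized box filter)."""
    flat = [value for row in window for value in row]
    return round(sum(flat) / len(flat))

# 3x3 window: every pixel is 100 except a center pixel of 0,
# matching the worked example in the disclosure.
window = [[100, 100, 100],
          [100,   0, 100],
          [100, 100, 100]]
print(box_blur_pixel(window))  # 800 / 9 = 88.88..., rounded to 89
```

A full blur would slide this window over every pixel of the image; each neighbor contributes with equal weight, which is what smooths halftone backgrounds into near-uniform regions.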
- an image 14 , 90 has pluralities of pixels 16 , 16 ′ defining a width (w) and length (l).
- a first center pixel of a bounding box is established.
- pixel P 11,11 resides at 11 pixels across the width of the image and 11 pixels down the length of the image.
- a bounding box 100 of grouped together pixels surrounds the center pixel P 11,11 .
- the bounding box 100 is representatively square about the center pixel; the box defines a 21×21 group of pixels neighboring one another for a total of 441 pixels, as seen in FIG. 5A .
- Each pixel has a pixel value intensity ranging from 0-255 noted at pixels P 11,11 , P 12,19 and P 21,21 .
- a threshold is determined that will binarize these intensities as either a binary black or binary white for a given pixel under consideration, e.g., the center pixel P 11,11 .
- a distribution of data is devised that characterizes the pixel value intensities of the pixels in the bounding box.
- the distribution is selected as a histogram 110 having equal-sized bins along the x-axis, ranging 0-255, that correspond to the pixel value intensities of the pixels in the bounding box.
- the histogram defines a frequency of occurrence for each of the pixels in the bins. Pixels are lighter when their pixel value intensities range closer to value 255, while they are darker when their pixel value intensities range closer to value 0.
- pixels having the highest frequency are known to represent the most popular color in the bounding box and correspond to the background of the image, rather than text. As noted by comparing FIGS. 6A and 6B , bounding boxes 100′ of mostly white pixels have histograms 110′ with the most frequently occurring pixels residing closer to value 255, whereas bounding boxes 100′′ of mostly dark pixels have histograms 110′′ with the most frequently occurring pixels residing closer to value 0.
- bounding boxes of pixels of various shades, representing both text and background in FIGS. 6C and 6D , have multiple relative peaks of pixel value intensities 1, 2, 3, 4, such as in histograms 110‴ and 110⁗.
- still other histogram shapes, peaks, trends, etc. exist for a given set of neighboring pixels in bounding boxes, as are other distributions of data.
- a bounding box 100‴ includes grouped together neighboring pixels having pixel value intensities of 1, 2, 0, 5, 8, 61, 3, 60 and 32 in a 3×3 square box of pixels.
- a histogram 110-5 in FIG. 7E , spread amongst 256 bins, reveals a frequency count of one (1) for all of the pixels in the bounding box 100‴.
- the histogram 110-6 groups together bins 0-3 in a first bin, 4-7 in a next bin, 8-11 in still a next bin, and so on. As there are four pixels having intensities from 0-3, they are noted as the most common color 120 , thus revealing better data from the bounding box 100‴.
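The bin-grouping above can be sketched as follows (a hypothetical helper; a bin width of 4 corresponds to 64 bins over a 0-255 scale, so intensities 0-3 share the first bin):

```python
from collections import Counter

def downsampled_histogram(intensities, bins=64, scale=256):
    """Count pixels per bin, where each bin spans scale // bins
    consecutive intensity values."""
    width = scale // bins
    return Counter(value // width for value in intensities)

# The 3x3 bounding box from the example above.
pixels = [1, 2, 0, 5, 8, 61, 3, 60, 32]
hist = downsampled_histogram(pixels)
bin_index, count = hist.most_common(1)[0]
print(bin_index, count)  # bin 0 (intensities 0-3) holds 4 pixels
```

With 256 bins every pixel here would land alone in its own bin; the coarser binning is what lets a most common color emerge.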
- a next bounding box of grouped together pixels needs to occur for another pixel under consideration relative to a local neighborhood of pixels surrounding it. This must also repeat until an entirety of pixels has been grouped together at least once in an image.
- a next bounding box occurs in an image 14 , 90 by advancing the box and center pixel of FIG. 4B across an entirety of the image.
- the original center pixel P 11,11 first advances three pixels to the right across the width of the image to become pixel P 14,11 , and the bounding box 100-D follows to keep the pixel centered. The process repeats until an entire width of the image has been grouped together.
- FIG. 4E the centering pixel is thence advanced back to the starting side of the image and moved three pixels down the length to become pixel P 11,14 .
- the bounding box 100 -F follows in FIG. 4F and keeps the pixel centered.
- FIG. 4G shows an advancing center pixel again moving three pixels to the right of its last position across the width of the image to become pixel P 14,14 and the bounding box 100 -H follows to keep centered the pixel in FIG. 4H .
- the process repeats widthwise and then lengthwise for as often as necessary until an entirety of the image has been grouped together.
- the bounding box can be of various sizes/shapes and programmable as well. From experimentation, bounding boxes of 9×9 squares of 81 total pixels in an image scanned at 300×300 dpi resolution are decent alternatively-sized boxes to the foregoing. Boxes of 11×11 squares of 121 total pixels are also useful.
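The widthwise-then-lengthwise advancement of the center pixel can be sketched as a generator (a hypothetical sketch; the 3-pixel step follows the figures, and a 21×21 box implies the half-width of 10 assumed here with 0-based indices):

```python
def window_centers(width, length, step=3, half=10):
    """Yield (x, y) centers of the bounding box as it advances across
    the width of the image, then down its length, in 'step' increments,
    keeping the box fully inside the image."""
    for y in range(half, length - half, step):
        for x in range(half, width - half, step):
            yield (x, y)

# First few centers for a 30x30 image.
centers = list(window_centers(30, 30))
print(centers[:3])  # [(10, 10), (13, 10), (16, 10)]
```

Advancing by three pixels rather than one is also why, in the reduction to practice described later, histograms only needed recalculating on every third row.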
- a sample portion of an original image is given at 14 ′ as is its corresponding blurred image at 90 ′.
- a grouped together bounding box of pixels 100-o has both background 9′ and text 7′.
- Its blurred counterpart 100-b has blurred background 9′-b and blurred text 7′-b.
- Histograms 110-o and 110-b are devised for the boxes and correspond to the original (o) and blurred (b) images, respectively. Thresholds are now determined for pixels under consideration P u.c. and P′ u.c. in the original and blurred boxes according to the flow chart of FIG. 9 .
- a distribution of data, such as a histogram, is created for a group of neighboring pixels. From that distribution, a most common color is identified, 902 , e.g., the pixel value intensity having the highest frequency of occurrence. Noting that certain amounts of pixel value intensities exist that are comparable to the highest frequency of occurrence, a ± zone of pixels adjacent the most common color is established at 904 .
- This ± zone corresponds to the many pixels in the bounding box that define the background pixels.
- the zone can be programmed to any size, but ±12 (out of 256 bins) is a number presently used by the inventors.
- the inventors also add a further offset to this zone to ensure that the background pixels are adequately distinguished away from any content, such as text. If, at 912 , a sufficient number of pixels remain 908 , 910 in the histogram outside the zone 904 , then the bounding box of neighboring pixels is deemed to have content worthy of uniquely calculating a binarization threshold 914 . Else, a predetermined threshold is applied to the pixels under consideration to define them as either binary black or binary white values. Applying the thresholds proceeds according to FIG. 11 ; FIG. 10 , however, details the statistical manipulation of the histogram used to obtain the calculated threshold for 914 .
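The uniform-versus-content decision of FIG. 9 can be sketched as follows (a simplified sketch under the assumptions that the histogram maps intensity to pixel count and that "sufficient" means at least four pixels outside the ± zone; names are hypothetical):

```python
def has_content(hist, zone=12, min_outside=4):
    """True when enough pixels fall outside the +/- zone around the most
    common color, i.e., the window likely holds text and not just a
    uniform background."""
    most_common = max(hist, key=hist.get)   # background color candidate
    outside = sum(count for value, count in hist.items()
                  if abs(value - most_common) > zone)
    return outside >= min_outside

uniform = {200: 40, 205: 2}               # everything near one color
mixed = {26: 33, 25: 10, 100: 5, 150: 3}  # content far from background
print(has_content(uniform), has_content(mixed))  # False True
```

Windows failing this test receive the predetermined threshold; windows passing it get the calculated threshold of FIG. 10 .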
- the histogram 110-o has a most common color of frequency 33 (y-axis) identified as occurring at pixel value intensity 26 (x-axis).
- the pixels of pixel value intensity in the range 23 to 29 are noted as defining the background pixels in the bounding box 100-o.
- the pixel value intensities and pixel counts are determined from the histogram, 1000 . If fewer than 4 pixels are outside of the ± zone, the bounding box is said to contain a uniform background and a fixed threshold is applied. However, if more than 4 pixels are outside the ± zone, the region is not uniform.
- if more pixels outside of the ± zone are darker than the background, the background is considered to be light, and all pixels darker than the pixels in the ± zone are considered to be non-background pixels. If, on the other hand, more pixels outside of the ± zone are lighter than the background, the background is considered to be dark, and all pixels lighter than the pixels in the ± zone are considered to be non-background pixels.
- a mathematical product is taken for the pixel value intensities times their frequency of occurrence.
- the entirety of the products are summed (Σ Product), as are the pixel counts (Σ Pixel Count), 1012 .
- thresholds and pixel value intensities are adjusted to ensure they have equivalent scales. That is, if pixels were first noted on a scale of 256 possible intensity values, but the threshold was based on a histogram having only 64 bins, vice 256 bins, the threshold from the 64 bins needs to be converted back to the 256 intensity scale.
- a threshold of 12 from FIG. 10 , for example, becomes multiplied by four so that the threshold value is 48, or 12×4.
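One way to realize the calculation of FIG. 10 is sketched below. The weighted average (Σ Product / Σ Pixel Count) of the non-background pixels follows the disclosure; placing the threshold midway between that average and the most common color is an assumption based on the stated goal of maximizing the separation between them, and the final ×4 rescales a 64-bin result to the 0-255 intensity scale:

```python
def calculated_threshold(hist, zone=12, bins=64, scale=256):
    """Threshold for a non-uniform window from its histogram.
    'hist' maps intensity to pixel count; caller guarantees that
    pixels exist outside the +/- zone (the non-uniform case)."""
    most_common = max(hist, key=hist.get)
    non_bg = {v: c for v, c in hist.items()
              if abs(v - most_common) > zone}    # drop background zone
    total = sum(v * c for v, c in non_bg.items())  # sigma product
    count = sum(non_bg.values())                   # sigma pixel count
    average = total / count
    threshold = (most_common + average) / 2        # midpoint (assumed)
    return round(threshold) * (scale // bins)      # rescale, e.g. 12*4=48

hist = {26: 33, 2: 10}   # background near 26, dark content near 2
print(calculated_threshold(hist))  # midpoint of 26 and 2 is 14; 14*4 = 56
```

The rescaling step mirrors the scale-equivalence requirement noted above: a threshold derived from 64 bins must be multiplied back onto the 256-value intensity scale before comparison.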
- each pixel under consideration in either the blurred image ( 90 ) or original image ( 14 ) is compared to the appropriate threshold value. If the threshold is a predetermined threshold found at 916 , FIG. 9 , the pixel value intensity of the pixel in the image is compared to a programmable value.
- a threshold value of 80, 1103 , has been found to delineate well the difference between setting a pixel to binary black or binary white.
- a global threshold of 75 is still another useful threshold and is used below in the sample code section of the disclosure. It has been found that a relatively low static threshold, less than half the bin count, causes most uniform backgrounds to be converted to white pixels while detail is preserved for small fonts. If a pixel under consideration 1105 had a pixel value intensity of 210, as illustrated in a bounding box 110-11 of mostly white pixels, for example, the pixel would no longer be considered as having an intensity of 210, but would be considered a white pixel, as 210>80. Conversely, if the pixel value intensity were less than or equal to 80 (not shown), the pixel under consideration would be considered binary black.
- the pixel under consideration had calculated for it a unique threshold value based on neighboring pixels in a bounding box, e.g., as calculated from FIG. 10 , the pixel value intensity would be now compared to the unique threshold value.
- a threshold value 1110 of 44 was found to be the pixel value intensity by which pixels would be adjudicated as either white (greater than 44) or black (less than or equal to 44).
- the threshold value 1110 is first converted to a proper scale for comparison. As 64 is one-fourth the value of 256, the threshold value 1110 of 44 is multiplied by four, which is 176, or 44×4. In turn, the pixel value intensity of 240 is compared to 176, 1112 . As 240 is greater than 176, the binary value of P x,y becomes white.
- a pixel neighboring P x,y one to the left, P x−1,y , has a pixel value intensity of 239 and becomes white relative to the threshold value of 176, as 239 is also greater than 176 (239>176), 1114 .
- a pixel neighboring P x,y one to the right, P x+1,y , has a pixel value intensity of 150 and becomes adjudicated black relative to the threshold value of 176, as 150 is less than or equal to 176 (150≤176), 1116 .
- if the pixel under consideration at 1120 is less than or equal to the threshold, and so too is its counterpart pixel in the blurred image less than or equal to its threshold, the pixel becomes set as a black pixel in the binary image, 1122 .
- both the pixel in the original and blurred images must be binary black relative to the threshold in order for the pixel of the image to be set as black in the binary image. If either or both of them are binary white relative to the threshold, the pixel of the image becomes set to white in the binary image.
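The final combining rule above can be sketched as (a minimal sketch; the specific values and names are illustrative only):

```python
def binarize_pixel(orig, blurred, orig_threshold, blur_threshold):
    """Set black (0) only when BOTH the original pixel and its blurred
    counterpart are at or below their respective thresholds;
    otherwise the binary pixel is white (255)."""
    if orig <= orig_threshold and blurred <= blur_threshold:
        return 0
    return 255

print(binarize_pixel(40, 60, 80, 80))   # both dark enough -> 0 (black)
print(binarize_pixel(40, 120, 80, 80))  # blurred pixel too light -> 255
```

Requiring agreement between the original and blurred images is what suppresses halftone speckle: isolated dark dots vanish in the blur and so fail the second condition.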
- in FIG. 12 , the actual results of applying the foregoing techniques to the original image 14′ can be seen in the output image 1200-14. It is dramatic to note the bounding box of pixels 100-o becoming much more readable and clear at bounding box 100-o′. Similarly, the improvement of the present disclosure over the techniques of the prior art can be seen by comparing actual reductions to practice of the invention obtained by the inventors in FIGS. 13-15 .
- an original image 1300 is binarized 1400 according to the techniques of the prior art ( FIG. 14 ) and binarized 1500 according to the techniques of the instant disclosure. Markedly, the image binarized at 1500 is clearer and has more readily ascertainable content than the image binarized at 1400 .
- Image binarization occurred according to the foregoing using a square bounding box or window size of 11×11 pixels. Histograms were generated by subtracting one column of pixels on the left of a center pixel and adding one column of pixels on the right. Because an 11×11 window was used, it was only necessary to calculate histograms on every third row. The binary image was supplied to an OCR engine, and a 33% decrease in OCR processing time was noted compared to the default binarization used with the embedded Exper-OCR brand engine of Fremont, CA.
- binarization according to the invention increased OCR accuracy for Exper-OCR from 70.6% to 80.7% on fifty-five “challenging” invoices and receipts having traditionally difficult colors, backgrounds, creases, smears, stray marks, etc. It also increased accuracy for embedded Nuance brand OCR from 80.5% to 93.3% on the same set of fifty-five challenging documents which surpasses the OCR accuracy of server-based solutions such as ABBYY brand FineReader.
- image_copy: Copy image data to another location and return a pointer
- image_get_rows: Return the number of rows of pixels in an image
- image_get_column: Return the number of columns of pixels in an image
- image_get_value: Return the pixel intensity between 0 and 255 for a given row and column
- image_set_value: Set the pixel intensity between 0 and 255 for a given row and column
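A toy in-memory realization of the helper API above might look as follows (the disclosure's sample code presumably targets raw image buffers; this Python stand-in over a list of rows is illustrative only, and the clamping in image_set_value is an assumption drawn from the stated 0-255 range):

```python
import copy

def image_copy(image):
    """Copy image data to another location and return the copy."""
    return copy.deepcopy(image)

def image_get_rows(image):
    """Return the number of rows of pixels in an image."""
    return len(image)

def image_get_column(image):
    """Return the number of columns of pixels in an image."""
    return len(image[0]) if image else 0

def image_get_value(image, row, column):
    """Return the pixel intensity between 0 and 255 at (row, column)."""
    return image[row][column]

def image_set_value(image, row, column, value):
    """Set the pixel intensity at (row, column), clamped to 0-255."""
    image[row][column] = max(0, min(255, value))

img = [[0, 128], [255, 64]]
dup = image_copy(img)
image_set_value(dup, 0, 0, 300)               # clamps to 255
print(image_get_value(dup, 0, 0), img[0][0])  # 255 0 (original untouched)
```

Keeping the copy independent of the original matters here, since the algorithm reads both the original and the blurred image side by side.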
Abstract
Systems and methods convert to binary an input image having pixels defining text and background. Thresholds are determined by which pixels in the input image and a corresponding blurred image will be defined as either binary black or binary white. Thresholds derive from grouped together neighboring pixels having pixels separated out that correspond to the background. For pixels of the input image defined as binary black and having corresponding pixels in the blurred image defined as binary black relative to their thresholds, those are set to black in the binary image, else they are set white. Techniques for devising thresholds, blurring images, grouping together pixels, statistical analysis, etc., typify the embodiments.
Description
- This application claims priority as a continuation application of U.S. patent application Ser. No. 14/200,580, having the same title, filed Mar. 7, 2014.
- The present disclosure relates to converting grayscale, color or other multi-valued images into binary representations (binarization). It relates further to conversion based on neighborhoods of local pixels, rather than global conversion applied to entireties of images. Operating on blurred and non-blurred representations of the input image typifies the embodiments.
- In traditional optical character recognition (OCR)
environments 5,FIG. 1 , ahard copy document 10 becomes digitized for computing actions, such as electronic editing, searching, storing compactly, displaying on monitors, etc. It is also digitized as a precursor to other routines, such as machine translation, data extraction, text mining, invoice processing, invoice payment, and the like. As is typical, the hard copy document is any of a variety, but is commonly an invoice, bank statement, receipt, business card, written paper, book, etc. It contains bothtext 7 andbackground 9. The text typifies words, numbers, symbols, phrases, etc. having content relating to the topic of thehard copy 10. The background, on the other hand, represents the underlying media on which the content appears. The background can also include various colors, advertisements, corporate logos, watermarks, texture, creases, speckles, stray marks, and the like. Thedocument 10 becomes scanned 12 which results in a grayscale orcolor image 14 defined by pixels. The pixels 16-1, 16-2, 16-3 . . . are many and range in volume depending upon the resolution of the scan, e.g., 150 dpi, 300 dpi, etc. Each pixel has an intensity value defined according to various scales, but a range of 256 possible values is common, e.g., 0-255. Uponbinarization 20, the intensity values get converted from 0-255 to one of two possible binary values (black or white) in abinary image 22. Scanned pixel 16-3 inimage 14 becomes binarized pixel 16-3′ inbinary image 22 with a value of black or white, 1 or 0. In many schemes, binarization occurs by splitting in half the intensity scale of the pixels and labeling as black pixels those with relatively dark intensities and white pixels those with light intensities. Atgraph 25, for instance,pixels 16 ofinput image 14 having intensities ranging from 0-127 become labeled black during traditional binarization, while those with intensities from 128-255 become labeled white. - The result of binarization serves as input to
OCR 30, which creates an output of content 40 available for further processing, storage, searching, text mining, displaying, etc. Text 41, XML 43, searchable .pdf's 45, and the like, are common types of OCR outputs. The process also often takes place through instructions, software, etc. executing on controller(s) in hardware 50, such as imaging devices, e.g., multi-function printers (MFPs), all-in-ones (AIOs), copier machines, etc. Of course, not all OCR engines utilize binarization techniques. Instead, some directly take as input to their engine the grayscale or color image output from the scanning function. - Regardless of approach, the data submitted to OCR algorithms does not identify or help specify text characters or other content of the original document. Binarization is also susceptible to losing important information in low contrast regions of documents, especially where light text resides on light backgrounds or dark text resides on dark backgrounds. If a dark text pixel, say an
intensity 50 of 255 (graph 25), resides nearby dark background pixels, say intensity 75 of 255, application of a global threshold of, e.g., 50% results in both text and background pixels being characterized as black after binarization. Because the text and background are similar in color, all text information is lost after binarization. As seen in FIG. 2, a group of pixels 60 from document 10′ has both dark text pixels 7′ (from a portion of the letter “N”) and dark background pixels 9′. - If a global threshold of 50% is used to binarize this image, all pixels are homogeneously declared the same binary value, black 60′, as they all reside below the threshold. Informational content is lost between the
original text 7′ and background 9′. Similarly, informational content is lost when light text pixels or dot-matrix style pixels reside on light colored backgrounds. Traditional binarization techniques simply do not allow dark and light text to be discerned clearly enough when positioned on or nearby dark and light background regions within the same image. - What is needed is better binarization. What is further needed is better discernment of light and dark text from light and dark backgrounds to avoid loss of content. Further needs also contemplate instructions or software executable on controller(s) in hardware, such as imaging devices. Additional benefits and alternatives are sought when devising solutions.
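For comparison, the mid-scale global threshold just described can be sketched in a few lines of C (an illustrative fragment only; the function name is ours and not part of any disclosed embodiment):

```c
#include <stdint.h>

/* Prior-art global binarization: split the 0-255 scale in half.
 * Returns 1 (black) for intensities 0-127 and 0 (white) for 128-255.
 * Dark text (e.g., intensity 50) and nearby dark background (e.g.,
 * intensity 75) both map to black, so the text is lost. */
static uint8_t binarize_global(uint8_t pvi)
{
    return (pvi <= 127) ? 1 : 0;
}
```

With this scheme, the dark-on-dark example above (text at intensity 50, background at intensity 75) collapses to a uniform black region, which is precisely the loss of content the present disclosure addresses.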
- The above-mentioned and other problems are solved by methods and apparatus for improved image binarization. Techniques include conversion of grayscale and color images in a manner that maximizes accuracy of optical character recognition (OCR). Rather than applying a single threshold to an entire image for binarization, custom thresholds are selected for pixels in an image based on surrounding pixels. Algorithms first apply a blurring operation to a window of neighboring pixels surrounding each pixel to smooth out halftone backgrounds. Next, a down-sampled distribution of data of the blurred window is calculated, e.g., a histogram, to identify a most popular grayscale/color value of the image. If most of the blurred pixels are similar in color to the most popular color, the algorithm assumes that a center pixel in the window comprises part of a uniform region of color, and a predetermined threshold is applied by which the center pixel is defined as either a black or white binary pixel. On the other hand, if a significant number of pixels in the blurred window are statistically different than the most popular color, the algorithm assumes the region is not a uniform color. If more pixels are darker than the most popular color, a threshold is selected to maximize the separation between the most popular color and the average of the dark pixels. The algorithm applies similar logic if more pixels are lighter than the most popular color. Pixels in both the blurred image and the original image must be darker than their thresholds in order for a black pixel to be set in the binary image. The technique makes it possible to remove uniform backgrounds and halftones without sacrificing details of small fonts, and to reduce OCR processing time.
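The blurring step of the summary above, a normalized box filter over a small window of neighbors, can be sketched as follows (a simplified, self-contained illustration; the function name and flat-array layout are ours, and the sample code later in the disclosure operates on image_t structures instead):

```c
/* Normalized 3x3 box blur of one interior pixel: the blurred value is
 * the mean of the pixel and its eight equal-weight neighbors. Integer
 * division truncates, as in the sample code, so eight pixels of 100
 * around a center of 0 yield 800/9 = 88 (the prose example rounds
 * the same quotient to 89). */
static int blur3x3(const int *img, int width, int row, int col)
{
    int sum = 0;
    for (int dr = -1; dr <= 1; dr++)
        for (int dc = -1; dc <= 1; dc++)
            sum += img[(row + dr) * width + (col + dc)];
    return sum / 9;
}
```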
- In a representative design, systems and methods convert to binary an input image having pixels defining text and background. Upon blurring the image, thresholds are determined by which pixels are defined as either binary black or binary white.
- Thresholds derive from grouped-together neighboring pixels from which the pixels corresponding to the background have been separated out. Pixels of the input image defined as binary black, whose corresponding pixels in the blurred image are also defined as binary black relative to their thresholds, are set to black in the binary image; else they are set white. Techniques for devising thresholds, blurring images, grouping together pixels, statistical analysis, etc., typify the embodiments.
- These and other embodiments are set forth in the description below. Their advantages and features will become readily apparent to skilled artisans. The claims set forth particular limitations.
-
FIG. 1 is a diagrammatic view of an OCR environment according to the prior art; -
FIG. 2 is a diagrammatic view of poor binarization results according to techniques of the prior art; -
FIG. 3 is a flow chart according to the present invention for creating blurred images from non-blurred originals; -
FIGS. 4A-4H are diagrammatic views showing pixel and bounding box terminology and movement according to the present invention; -
FIGS. 5A and 5B are diagrammatic views of a bounding box and corresponding distribution of data in the form of a histogram according to the present invention; -
FIGS. 6A-6D are diagrammatic views of common histograms according to the present invention; -
FIGS. 7A-7E are diagrammatic views of histograms with various bin counts according to the present invention; -
FIG. 8 is a diagrammatic view of a sample bounding box of pixels of an input image having histograms for blurred and non-blurred images thereof; -
FIG. 9 is a flow chart and accompanying diagrams for applying a threshold for deciding binary quantification according to the distribution of data in a histogram; -
FIG. 10 is a flow chart and diagram for determining a unique threshold for a bounding box of grouped together neighboring pixels; -
FIG. 11 is a flow chart and diagram for variously applying thresholds to binarize individual, multi-valued pixels of an image; -
FIG. 12 is a diagrammatic view of the binarization results according to the sample of FIG. 8; -
FIG. 13 is a diagrammatic view of a representative input document; -
FIG. 14 is a diagrammatic view of the input document of FIG. 13 after poor binarization according to the prior art; and -
FIG. 15 is a diagrammatic view of the input document of FIG. 13 after binarization according to the present invention, standing in crisp contrast to the results of the prior art in FIG. 14. - In the following detailed description, reference is made to the accompanying drawings where like numerals represent like details. The embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. It is to be understood that other embodiments may be utilized and that changes may be made without departing from the scope of the invention. The following detailed description, therefore, is not to be taken in a limiting sense and the scope of the invention is defined only by the appended claims and their equivalents. In accordance with the features of the invention, methods and apparatus teach improved image binarization.
- With reference to
FIG. 1, a grayscale, color or other multi-valued image 14 results from a scanning operation at 12. Individual pixels 16 of the image define text and background according to pixel value intensities, typically ranging from 0-255. Binarization according to the present invention now provides better binary images 22 that serve as input to OCR engines, especially for use in imaging devices 50. - With reference to
FIG. 3, the input image 14 is both blurred 70 and kept original or un-blurred 80. For blurring, one technique groups together a window or small group of neighboring pixels 72 and converts pixel value intensities (PVI) to blurred pixel value intensities for the same region 72′. The window 72 is then advanced across 74 a width of the input image 14, then down and across 76, 76′, with the process repeating as many times as necessary until a fully blurred image 90 is obtained at 92. The blurred image has pixels defining blurred text and blurred background corresponding to the input image. To actually obtain blurred from original pixel value intensities, light and dark pixels are blended with one another in small neighborhoods such that darker pixels become lighter while lighter pixels become darker. By sharing pixel value intensities in this way, each pixel in the blurred image becomes a composite of itself and its surrounding neighbors, and noise is removed in the original, especially halftone backgrounds that are effectively converted into uniform backgrounds. Many techniques are known for doing this, but a representative technique includes a normalized box filter where each blurred pixel is a mean of its neighbors in the window, each contributing with equal weight. - At 78, pixel value intensities of the original image are summed. They are then divided by the total number of pixels in the region. In a
square window 72 of 3×3 pixels, or nine total pixels, pixel value intensities of 100 are noted for all pixels except for a center pixel 16-4 having a pixel value intensity of 0. Summing them together results in 800, which is then divided by nine pixels, or 800/9. The result is 88.88, or 89 when rounded to the next closest whole number. In the corresponding window 72′ in the blurred image 90, each blurred pixel 16′ has a blurred pixel value intensity of 89 for an entirety of the window. It can be said that lighter pixel value intensities of the pixels of the input image have changed relatively dark (e.g., 100 PVIs in the original image 14 become 89 PVIs in the blurred image 90), while darker pixel value intensities of the pixels of the input image have changed relatively light (e.g., 0 PVI in the original image 14 at pixel 16-4 becomes 89 PVI at pixel 16′-4′ in the blurred image 90). Of course, other schemes are possible for blurring to remove halftones and noise. Regardless of approach, however, the original image 80 and blurred image 90 now proceed to having thresholds determined 95. Pixels of the original image and corresponding pixels in the blurred image become defined as either black or white binary representations, instead of being defined with multi-valued intensities. - With reference to
FIG. 4A, an image includes pluralities of pixels. In FIG. 4B, a bounding box 100 of grouped-together pixels surrounds the center pixel P11,11. As the bounding box 100 is representatively square about the center pixel, the box defines a 21×21 group of pixels neighboring one another for a total of 441 pixels, as seen in FIG. 5A. Each pixel has a pixel value intensity ranging from 0-255, noted at pixels P11,11, P12,19 and P21,21. A threshold is determined that will binarize these intensities as either binary black or binary white for a given pixel under consideration, e.g., the center pixel P11,11. - With reference to
FIG. 5B, a distribution of data is devised that characterizes the pixel value intensities of the pixels in the bounding box. The distribution is selected as a histogram 110 having equal-sized bins along the x-axis, ranging 0-255, that correspond to the pixel value intensities of the pixels in the bounding box. Along the y-axis, the histogram defines a frequency of occurrence for each of the pixels in the bins. Pixels are lighter when their pixel value intensities range closer to value 255, while they are darker when their pixel value intensities range closer to value 0. Also, the pixels having the highest frequency are known to represent the most popular color in the bounding box and correspond to the background of the image, rather than text. As noted by comparing FIGS. 6A and 6B, bounding boxes 100′ of mostly white pixels have histograms 110′ with the most frequently occurring pixels residing closer to value 255, whereas bounding boxes 100″ of mostly dark pixels have histograms 110″ with the most frequently occurring pixels residing closer to value 0. Of course, there may be bounding boxes of pixels of various shades representing both text and background, as in FIGS. 6C and 6D, having multiple relative peaks of pixel value intensities in histograms 110′″ and 110″″. Of course, still other histogram shapes, peaks, trends, etc. exist for a given set of neighboring pixels in bounding boxes, as do other distributions of data. Similarly, there are other ways to represent the data of the pixels in the histogram. - As noted in
FIGS. 7A-7C, the x-axis scale can be cut in half from FIG. 7A to FIG. 7B, and in half again in FIG. 7C. In this way the bin count of pixel value intensities along the x-axis can be reduced from 256 (FIG. 7A) to 128 (FIG. 7B) to 64 (FIG. 7C). This reduces the amount of noise or anomalies in the bounding boxes and provides more meaningful pictures of data. As illustrated in FIG. 7D, a bounding box 100′″ includes grouped-together neighboring pixels having pixel value intensities of 1, 2, 0, 5, 8, 61, 3, 60 and 32 in a 3×3 square box of pixels. As no two intensities equal one another, a histogram 110-5 in FIG. 7E spread amongst 256 bins reveals a frequency count of one (1) for all of the pixels in the bounding box 100′″. In FIG. 7D, in contrast, the histogram 110-6 groups together bins 0-3 in a first bin, 4-7 in a next bin, 8-11 in still a next bin, and so on. As there are four pixels having intensities from 0-3, they are noted as the most common color 120, thus revealing better data from the bounding box 100′″. - With reference back to
FIGS. 4A-4H, skilled artisans will appreciate that the foregoing describes a bounding box of grouped-together pixels and distributions of data for only a limited group of pixels in an image having many more thousands of pixels. As such, a next bounding box of grouped-together pixels needs to occur for another pixel under consideration relative to a local neighborhood of pixels surrounding it. This must also repeat until an entirety of pixels has been grouped together at least once in an image. With reference to FIGS. 4C-4D, a next bounding box occurs in an image 102, 104, continuing the processing of the bounding box of FIG. 4B across an entirety of the image. The original center pixel P11,11 first advances three pixels to the right across the width of the image to become pixel P14,11, and the bounding box 100-D follows to keep the pixel centered. The process repeats until an entire width of the image has been grouped together. - In
FIG. 4E, the center pixel is thence advanced back to the starting side of the image and moved three pixels down the length to become pixel P11,14. The bounding box 100-F follows in FIG. 4F and keeps the pixel centered. Similarly, FIG. 4G shows the advancing center pixel again moving three pixels to the right of its last position across the width of the image to become pixel P14,14, and the bounding box 100-H follows to keep the pixel centered in FIG. 4H. The process repeats widthwise and then lengthwise as often as necessary until an entirety of the image has been grouped together. Of course, the bounding box can be of various sizes/shapes and programmable as well. From experimentation, bounding boxes of 9×9 squares of 81 total pixels in an image scanned at 300×300 dpi resolution are decent alternatives to the foregoing. Boxes of 11×11 squares of 121 total pixels are also useful. - With reference to
FIG. 8, a sample portion of an original image is given at 14′, as is its corresponding blurred image at 90′. A grouped-together bounding box of pixels 100-o has both background 9′ and text 7′. Its blurred counterpart 100-b has blurred background 9′-b and blurred text 7′-b. Histograms 110-o, 110-b are devised for the boxes and correspond to the original (o) and blurred (b) images, respectively. Thresholds are now determined for the pixels under consideration Pu.c. and P′u.c. in the original and blurred boxes according to the flow chart of FIG. 9. - At 900, a distribution of data, such as a histogram, is created for a group of neighboring pixels. From that distribution, a most common color is identified, 902, e.g., the pixel value intensity having the highest frequency of occurrence. Noting that certain amounts of pixel value intensities exist that are comparable to the highest frequency of occurrence, a +/− zone of pixels adjacent the most common color is established at 904. This +/− zone corresponds to the many pixels in the bounding box that define the background pixels. The zone can be programmed to any size, but +/−12 (out of 256 bins) is a number presently used by the inventors. The inventors also add a further offset to this zone to ensure that the background pixels are adequately distinguished away from any content, such as text. If, at 912, a sufficient number of pixels remain 908, 910 in the histogram outside the
region 904, then the bounding box of neighboring pixels is deemed to have content worthy of uniquely calculating a binarization threshold 914. Else, a predetermined threshold is applied to the pixels under consideration to define them as either binary black or binary white values. Applying the thresholds proceeds according to FIG. 11; FIG. 10, however, details the statistical manipulation of the histogram to obtain the calculated threshold for 914. - At 1000, the histogram 110-o has a most common color of frequency 33 (y-axis) identified as occurring at pixel value intensity 26 (x-axis). After applying the +/− zone of 12 out of 256 bins, or 3 (12/4) out of 64 bins, the pixels of pixel value intensity from the
range 23 to 29 are noted as defining the background pixels in the bounding box 100-o. Of the pixels remaining outside this zone, the pixel value intensities and pixel counts are determined from the histogram, 1000. If fewer than 4 pixels are outside of the +/− zone, the bounding box is said to contain a uniform background, and a fixed threshold is applied. However, if more than 4 pixels are outside the +/− zone, the region is not uniform. If more pixels outside of the +/− zone are darker than the background, the background is considered to be light, and all pixels darker than the pixels in the +/− zone are considered to be non-background pixels. If, on the other hand, more pixels outside of the +/− zone are lighter than the background, the background is considered to be dark, and all pixels lighter than the pixels in the +/− zone are considered to be non-background pixels. - At 1004, a mathematical product is taken of the pixel value intensities times their frequency of occurrence. At 1008 the entirety of the products are summed (Σ Product), as are the pixel counts summed (Σ Pixel Count), 1012. By dividing the former by the latter (Σ Product/Σ Pixel Count), the average pixel intensity of the non-background pixels is obtained 1016. The
threshold 1018 is found by taking the average of the peak value and the average intensity of the non-background pixels. If Σ Product/Σ Pixel Count = 733/59 = 12.42, the average pixel intensity of non-background pixels 1016 is obtained in the histogram 110-o by rounding 12.42 down to the nearest whole number, e.g., average=12. The threshold 1018 is found by adding the average pixel intensity of non-background pixels (e.g., average=12) to the intensity of the most popular color (e.g., 26), which gives 12+26=38, and dividing the result by two, or 38/2=19. Similarly, a threshold is obtained for histogram 110-b for the blurred image (not shown). To apply these thresholds, reference is taken to FIG. 11. - At 1100, thresholds and pixel value intensities are adjusted to ensure they have equivalent scales. That is, if pixels were first noted on a scale of 256 possible intensity values, but the threshold was based on a histogram having only 64 bins instead of 256 bins, the threshold from the 64 bins needs to be converted back to the 256 intensity scale.
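The FIG. 10 derivation just described can be condensed into a short C routine (a sketch only; the function name, the single pass over all non-background bins, and the -1 sentinel for uniform regions are our illustrative choices, whereas the sample code below separately averages pixels darker and lighter than the peak and uses the larger group):

```c
/* Unique binarization threshold from a 64-bin histogram, per FIG. 10:
 * locate the most popular color (peak), exclude a +/- zone of background
 * bins around it, take the count-weighted average bin of the remaining
 * (non-background) pixels, and return the midpoint of peak and average.
 * Returns -1 for a uniform region (too few non-background pixels). */
static int unique_threshold64(const int hist[64], int zone)
{
    int peak = 0;
    for (int i = 1; i < 64; i++)
        if (hist[i] > hist[peak])
            peak = i;

    long sum = 0, count = 0;
    for (int i = 0; i < 64; i++) {
        if (i >= peak - zone && i <= peak + zone)
            continue;               /* background zone around the peak */
        sum   += (long)hist[i] * i; /* product of intensity and frequency */
        count += hist[i];
    }
    if (count <= 4)
        return -1;                  /* uniform: caller applies fixed threshold */

    int avg = (int)(sum / count);   /* 733/59 = 12 in the worked example */
    return (peak + avg) / 2;        /* (26 + 12)/2 = 19 */
}
```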
- Thus, the threshold of 19 from FIG. 10 becomes multiplied by four so that the threshold value on the 256 scale is 76, or 19×4. After that, 1102, each pixel under consideration in either the blurred image (90) or original image (14) is compared to the appropriate threshold value. If the threshold is a predetermined threshold found at 916, FIG. 9, the pixel value intensity of the pixel in the image is compared to a programmable value. Upon experimentation, a threshold value of 80 (1103) has been found to delineate well the difference between setting a pixel to binary black or binary white. (A global threshold of 75 is still another useful threshold and is used below in the sample code section of the disclosure.) It has been found that a relatively low static threshold, less than half the bin count, causes most uniform backgrounds to be converted to white pixels while detail is preserved for small fonts. If a pixel under consideration 1105 had a pixel value intensity of 210, as illustrated in a bounding box 110-11 of mostly white pixels, for example, the pixel under consideration would no longer be considered with an intensity of 210, but would be considered a white pixel, as 210>80. Conversely, if the pixel value intensity were a value less than or equal to 80 (not shown), the pixel under consideration would be considered binary black.
FIG. 10 , the pixel value intensity would be now compared to the unique threshold value. By reference to pixel Px,y in a bounding box 100-12 of neighboring pixels, and having a distribution of data 110-12 for the box, athreshold value 1110 of 44 was found to be the pixel value intensity by which pixels would be adjudicated as either white (greater than 44) or black (less than or equal to 44). As Px,y has a pixel value intensity of 240 out of 256 possible intensity values and the threshold has been found in a histogram having only 64 bins, thethreshold value 1110 is first converted to a proper scale for comparison. As 64 is one-fourth the value of 256, thethreshold value 1110 of 44 is multiplied by four, which is 176 or 44×4. In turn, the pixel value intensity of 240 is compared to 176, 1112. As 240 is greater than 176, the binary value of Px,y becomes white. Similarly, a pixel neighboring Px,y, one left, Px−1,y, has a pixel value intensity of 239 and becomes white relative to the threshold value of 176 as 239 is also greater than 176 (239>176), 1114. Conversely, a pixel neighboring Px,y, one right, Px+1,y, has a pixel value intensity of 150 and becomes adjudicated black relative to the threshold value of 176 as 150 is less than or equal to 176 (150≦176), 1116. - Regardless of the pixel being compared to a predetermined or uniquely calculated threshold, and regardless of whether the pixel is found in the original or blurred image, if the pixel under consideration at 1120 is less than or equal to the threshold, and so too is its counterpart pixel in the blurred image less than or equal to its threshold, the pixel becomes set as a black pixel in the binary image, 1122. Else, the pixel becomes set as a white pixel in the binary image, 1124. In other words, both the pixel in the original and blurred images must be binary black relative to the threshold in order for the pixel of the image to be set as black in the binary image. 
If either or both of them are binary white relative to the threshold, the pixel of the image becomes set to white in the binary image.
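This dual test can be sketched as follows (an illustration; the function name is ours, and a single threshold derived from the blurred image is assumed to govern both pixels, as in the sample code below):

```c
/* Final binary decision for one pixel, per FIG. 11: convert a 64-bin
 * threshold back to the 0-255 scale (multiply by 4), then set black (1)
 * only when BOTH the original and the blurred pixel are at or below the
 * threshold; otherwise set white (0). */
static int binarize_pixel(int orig_pvi, int blur_pvi, int threshold64)
{
    int threshold = threshold64 * 4;  /* 64 bins -> 256 intensity scale */
    return (orig_pvi <= threshold && blur_pvi <= threshold) ? 1 : 0;
}
```

Using the worked numbers above, a threshold of 44 on the 64-bin scale becomes 176, so a pixel of intensity 240 is adjudicated white and a pixel of intensity 150 (with a similarly dark blurred counterpart) is adjudicated black.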
- With reference to
FIG. 12, the actual results of applying the foregoing techniques to the original image 14′ can be seen in the output image 1200-14. It is dramatic to note the bounding box of pixels 100-o becoming much more readable and clear at bounding box 100-o′. Similarly, the improvement of the present disclosure over the techniques of the prior art can be seen by comparing actual reductions to practice of the invention obtained by the inventors in FIGS. 13-15. In FIG. 13, an original image 1300 is binarized 1400 according to the techniques of the prior art (FIG. 14) and binarized 1500 according to the techniques of the instant disclosure. Markedly, the image binarized at 1500 is clearer and has more readily ascertainable content than the image binarized at 1400. - Image binarization occurred according to the foregoing using a square bounding box or window size of 11×11 pixels. Histograms were generated by subtracting one column of pixels on the left of a center pixel and adding one column of pixels on the right. Because an 11×11 window was used, it was only necessary to calculate histograms on every third row. The binary image was supplied to an OCR engine, and a 33% decrease in OCR processing time was noted compared to default binarization used with the embedded Exper-OCR brand of Fremont, CA. In addition, binarization according to the invention increased OCR accuracy for Exper-OCR from 70.6% to 80.7% on fifty-five “challenging” invoices and receipts having traditionally difficult colors, backgrounds, creases, smears, stray marks, etc. It also increased accuracy for the embedded Nuance brand OCR from 80.5% to 93.3% on the same set of fifty-five challenging documents, which surpasses the OCR accuracy of server-based solutions such as ABBYY brand FineReader.
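The incremental histogram update mentioned above, subtracting the column leaving the window and adding the column entering it, can be isolated as follows (a sketch with hypothetical column-buffer arguments; the sample code below performs the same update in place against image_t data):

```c
#define SHIFT_BITS 2  /* 256 intensities -> 64 histogram bins */

/* Slide a window histogram one column to the right: for each row of the
 * window, remove the pixel leaving on the left and add the pixel
 * entering on the right. Cost is O(window height), not O(window area). */
static void slide_histogram(int hist[64], const int *leaving,
                            const int *entering, int height)
{
    for (int r = 0; r < height; r++) {
        hist[leaving[r]  >> SHIFT_BITS]--;
        hist[entering[r] >> SHIFT_BITS]++;
    }
}
```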
- Sample Code to Execute Embodiments of the Disclosure
-
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "simpleimage.h"
#include "util.h"

#define BINS_BITS 6
#define SHIFT_BITS (8-BINS_BITS)
#define HISTO_BINS (1<<BINS_BITS)

int main(int argc, char *argv[])
{
    image_t image, out_image, blur_image, threshold_image;
    int rowid, colid, i, windowsum, shiftx, shifty;
    int k=0;
    int PADSIZE=5;

    if (argc < 3) {
        fprintf(stderr, "Usage: %s <imagein> <imageout>\n", argv[0]);
        exit(-1);
    }
    char* binary_filename=argv[2];
    image = pnm_read(argv[1]);
    out_image = image_copy(&image);
    blur_image = image_copy(&image);
    threshold_image = image_copy(&image);
    int numrows=image_get_rows(&image);
    int numcols=image_get_columns(&image);

    //Create 3x3 blurred image
    for (rowid=1; rowid<numrows-1; rowid++) {
        for (colid=1; colid<numcols-1; colid++) {
            windowsum=0;
            for (shiftx=-1; shiftx<=1; shiftx++) {
                for (shifty=-1; shifty<=1; shifty++) {
                    windowsum+=image_get_value(&image,rowid+shifty,colid+shiftx,0);
                }
            }
            image_set_value(&blur_image,rowid,colid,0,windowsum/9);
        }
    }

    int myhist[HISTO_BINS];
    int pixelvalue1, pixelvalue2, subimagerowid, subimagecolid;
    int peakvalue;
    int peakindex;
    int threshold;
    int peakoffset;

    //Create histograms efficiently, skip every two rows
    for (rowid=0; rowid<numrows; rowid+=3) {
        for (colid=0; colid<numcols; colid++) {
            if ((rowid<PADSIZE) || (rowid>=(numrows-PADSIZE)) ||
                (colid<PADSIZE) || (colid>=(numcols-PADSIZE))) {
                //For the border, just use a static threshold
                threshold=128;
            } else {
                if (colid==PADSIZE) //Create full histogram for first column
                {
                    //Zero out histogram
                    for (i=0; i<HISTO_BINS; i++) {
                        myhist[i]=0;
                    }
                    //Make histograms and measure peak
                    peakvalue=0;
                    peakindex=0;
                    for (subimagerowid=rowid-PADSIZE; subimagerowid<=rowid+PADSIZE; subimagerowid++) {
                        for (subimagecolid=colid-PADSIZE; subimagecolid<=colid+PADSIZE; subimagecolid++) {
                            pixelvalue1=image_get_value(&blur_image,subimagerowid,subimagecolid,k);
                            myhist[pixelvalue1>>SHIFT_BITS]++;
                        }
                    }
                } else {
                    //Make histograms and measure peak
                    for (subimagerowid=rowid-PADSIZE; subimagerowid<=rowid+PADSIZE; subimagerowid++) {
                        //Subtract left column
                        subimagecolid=colid-PADSIZE-1;
                        pixelvalue1=image_get_value(&blur_image,subimagerowid,subimagecolid,k);
                        myhist[pixelvalue1>>SHIFT_BITS]--;
                        //Add right column
                        subimagecolid=colid+PADSIZE;
                        pixelvalue1=image_get_value(&blur_image,subimagerowid,subimagecolid,k);
                        myhist[pixelvalue1>>SHIFT_BITS]++;
                    }
                }
                peakvalue=0;
                peakindex=0;
                int averagedvalue=0;
                //Find peak of histogram
                for (i=1; i<HISTO_BINS; i++) {
                    averagedvalue=myhist[i];
                    if (averagedvalue>peakvalue) {
                        peakvalue=averagedvalue;
                        peakindex=i;
                    }
                }
                peakoffset=(28>>SHIFT_BITS);
                //Count pixels to the left of peak - offset
                int leftcount=0;
                int leftsum=0;
                int leftaverage;
                for (i=0; i<(peakindex-peakoffset); i++) {
                    leftcount=leftcount+myhist[i]; //number of pixels
                    leftsum=leftsum+(myhist[i]*i); //n*i
                }
                if (leftcount==0) {
                    leftaverage=0;
                } else {
                    leftaverage=leftsum/leftcount;
                }
                //Count pixels to the right of peak + offset
                int rightcount=0;
                int rightsum=0;
                int rightaverage;
                for (i=peakindex+peakoffset+1; i<HISTO_BINS; i++) {
                    rightcount=rightcount+myhist[i]; //number of pixels
                    rightsum=rightsum+(myhist[i]*i); //n*i
                }
                if (rightcount==0) {
                    rightaverage=0;
                } else {
                    rightaverage=rightsum/rightcount;
                }
                if ((leftcount<=4) && (rightcount<=4)) {
                    //Even if only 4 pixels are non-background, go ahead and count them as background
                    //all background
                    //use global threshold of 75
                    //background needs to be pretty dark to stay dark
                    threshold=75;
                } else if (leftcount>=rightcount) {
                    //More pixels to the left of peak
                    threshold=(peakindex+leftaverage)<<(SHIFT_BITS-1);
                } else {
                    //More pixels to the right of the peak
                    threshold=(peakindex+rightaverage)<<(SHIFT_BITS-1);
                }
            }
            image_set_value(&threshold_image,rowid+2,colid,0,threshold);
            image_set_value(&threshold_image,rowid,colid,0,threshold);
            image_set_value(&threshold_image,rowid+1,colid,0,threshold);
        }
    }

    //Create output image
    for (rowid=0; rowid<numrows; rowid++) {
        for (colid=0; colid<numcols; colid++) {
            if ((rowid<PADSIZE) || (rowid>=(numrows-PADSIZE)) ||
                (colid<PADSIZE) || (colid>=(numcols-PADSIZE))) {
                image_set_value(&out_image,rowid,colid,0,0);
            } else {
                threshold=image_get_value(&threshold_image,rowid,colid,0);
                pixelvalue1=image_get_value(&image,rowid,colid,0);
                pixelvalue2=image_get_value(&blur_image,rowid,colid,0);
                if ((pixelvalue1<=threshold) && (pixelvalue2<=threshold)) {
                    //pixel significantly darker than bg
                    image_set_value(&out_image,rowid,colid,0,1);
                } else {
                    image_set_value(&out_image,rowid,colid,0,0);
                }
            }
        }
    }

    //Write binary image to file
    int height=image_get_rows(&image);
    int width=image_get_columns(&image);
    uint8_t byte;
    FILE *fp1;
    int width_div8;
    if (binary_filename == NULL) /* if no file given, write to stdout */
    {
        fp1 = stdout;
    } else if ((fp1 = fopen(binary_filename, "wb")) == NULL) {
        fprintf(stderr, "Output file %s could not be opened\n", binary_filename);
        return(1);
    }
    width_div8 = (width + 7) / 8;
    fprintf(fp1, "P4\n%d %d\n", width_div8*8, height);
    int shiftbit;
    for (rowid=0; rowid<height; rowid++) {
        for (colid=0; colid<width; colid+=8) {
            for (shiftbit=0; shiftbit<8; shiftbit++) {
                if (shiftbit==0) {
                    byte=0;
                }
                if ((colid+shiftbit)<width) {
                    byte=byte | (image_get_value(&out_image,rowid,colid+shiftbit,k)<<(7-shiftbit));
                }
            }
            fputc(byte,fp1);
        }
    }
    return 0;
}
whereby the six referenced functions are defined as: - pnm_read—Read in a pnm image and return a pointer to the location in memory;
- image_copy—Copy image data to another location and return a pointer;
- image_get_rows—Return the number of rows of pixels in an image;
- image_get_columns—Return the number of columns of pixels in an image;
- image_get_value—Return the pixel intensity between 0 and 255 for a given row and column; and
- image_set_value—Set the pixel intensity between 0 and 255 for a given row and column.
- The foregoing illustrates various aspects of the invention. It is not intended to be exhaustive. Rather, it is chosen to provide the best illustration of the principles of the invention and its practical application to enable one of ordinary skill in the art to utilize the invention. All modifications and variations are contemplated within the scope of the invention as determined by the appended claims. Relatively apparent modifications include combining one or more features of various embodiments with features of other embodiments.
Claims (10)
1. A method of converting to a binary image an input image having pluralities of pixels defining text and background each with an intensity value defined on a scale of multiple intensity values, comprising:
blurring the input image to achieve a blurred image so that the pixels of the input image have corresponding blurred pluralities of pixels each with blurred intensity values defined on said scale of multiple intensity values;
for a pixel under consideration in either the input image or the blurred image, devising a neighborhood of adjacent pixels;
for the neighborhood of adjacent pixels, identifying a pixel having the intensity value or blurred intensity value with a greatest frequency of occurrence;
establishing a zone of background pixels nearby the pixel with the greatest frequency of occurrence; and
determining whether or not other pixels outside the zone of background pixels are sufficiently many in order to calculate a unique binarization threshold of said intensity value by which the pixel under consideration will later become identified as a black pixel or a white pixel in the binary image or are insufficiently many in order to apply a predetermined binarization threshold of said intensity value.
2. The method of claim 1 , further including advancing to a next pixel under consideration in either the input image or the blurred image and devising a next neighborhood of adjacent pixels.
3. The method of claim 2 , for the next neighborhood of adjacent pixels, identifying a next pixel having the intensity value or blurred intensity value with the greatest frequency of occurrence.
4. The method of claim 3 , establishing a next zone of background pixels nearby the next pixel with the greatest frequency of occurrence.
5. The method of claim 4 , further including determining whether or not other pixels outside the next zone of background pixels are sufficiently many in order to calculate a next unique binarization threshold of said intensity value by which the next pixel under consideration will become identified as said black pixel or said white pixel in the binary image or are insufficiently many in order to apply said predetermined binarization threshold of said intensity value.
6. The method of claim 2 , wherein the advancing to the next pixel under consideration in either the input image or the blurred image further includes advancing away three pixels in a same row or column of pixels from the pixel under consideration.
7. The method of claim 6 , wherein the devising the next neighborhood of adjacent pixels further includes establishing a square bounding box of pixels surrounding the next pixel under consideration.
8. The method of claim 1 , wherein said determining whether or not other pixels outside the zone of background pixels are sufficiently or insufficiently many includes determining whether or not more than four pixels exist outside the zone of background pixels.
9. The method of claim 1 , wherein the devising said neighborhood of adjacent pixels further includes creating a distribution of data in the form of a histogram noting frequency of occurrence versus pixel value intensity.
10. The method of claim 1 , wherein if the pixel under consideration in the input image and the blurred image corresponding thereto are defined as binary black relative to said unique binarization threshold or the predetermined binarization threshold, setting the pixel under consideration to black in the binary image, else setting the pixel under consideration to white in the binary image.
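The per-neighborhood decision in claims 1 and 8 — take the histogram mode as the background intensity, surround it with a zone of background pixels, and compute a unique threshold only if more than four pixels fall outside that zone — can be sketched as follows. The zone half-width and the unique-threshold formula (midpoint between the mode and the mean of the out-of-zone pixels) are illustrative assumptions; the claims do not fix those values:

```c
#define ZONE_HALFWIDTH 16   /* assumed half-width of the background zone */
#define MIN_OUTSIDE 4       /* claim 8: need more than four pixels outside the zone */

/* Decide a binarization threshold for one neighborhood, given its
   intensity histogram (hist[v] = count of pixels with intensity v).
   The most frequent intensity is treated as background; if enough
   pixels fall outside the zone around it, a neighborhood-specific
   ("unique") threshold is computed, otherwise the predetermined
   threshold is applied. */
int choose_threshold(const int hist[256], int predetermined) {
    int mode = 0;
    for (int v = 1; v < 256; v++)
        if (hist[v] > hist[mode]) mode = v;

    long outside = 0, outside_sum = 0;
    for (int v = 0; v < 256; v++) {
        if (v < mode - ZONE_HALFWIDTH || v > mode + ZONE_HALFWIDTH) {
            outside += hist[v];
            outside_sum += (long)hist[v] * v;
        }
    }
    if (outside <= MIN_OUTSIDE)
        return predetermined;          /* too few non-background pixels */

    int outside_mean = (int)(outside_sum / outside);
    return (mode + outside_mean) / 2;  /* assumed midpoint rule */
}
```

Per claim 10, a pixel is then set to black in the binary image only if both the input pixel and its blurred counterpart fall at or below the chosen threshold, which is the comparison performed in the listing above.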
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/865,213 US20160014300A1 (en) | 2014-03-07 | 2015-09-25 | Image Binarization |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/200,580 US9172842B2 (en) | 2014-03-07 | 2014-03-07 | Image binarization for optical character recognition |
US14/865,213 US20160014300A1 (en) | 2014-03-07 | 2015-09-25 | Image Binarization |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/200,580 Continuation US9172842B2 (en) | 2014-03-07 | 2014-03-07 | Image binarization for optical character recognition |
Publications (1)
Publication Number | Publication Date |
---|---|
US20160014300A1 true US20160014300A1 (en) | 2016-01-14 |
Family
ID=54018672
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/200,580 Active 2034-04-30 US9172842B2 (en) | 2014-03-07 | 2014-03-07 | Image binarization for optical character recognition |
US14/865,213 Abandoned US20160014300A1 (en) | 2014-03-07 | 2015-09-25 | Image Binarization |
Country Status (1)
Country | Link |
---|---|
US (2) | US9172842B2 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180014316A1 (en) * | 2015-03-24 | 2018-01-11 | Huawei Technologies Co., Ltd. | Method for sending uplink multi-user transmission trigger frame, access point, and station |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10127672B2 (en) * | 2015-10-12 | 2018-11-13 | International Business Machines Corporation | Separation of foreground and background in medical images |
JP6503478B2 (en) * | 2016-01-20 | 2019-04-17 | 株式会社Pfu | Mobile terminal, image processing method, and program |
US20170278286A1 (en) * | 2016-03-22 | 2017-09-28 | Le Holdings (Beijing) Co., Ltd. | Method and electronic device for creating title background in video frame |
US10832085B2 (en) | 2018-09-06 | 2020-11-10 | International Business Machines Corporation | Image binarization using mean restrain |
US10614345B1 (en) | 2019-04-12 | 2020-04-07 | Ernst & Young U.S. Llp | Machine learning based extraction of partition objects from electronic documents |
US11113518B2 (en) | 2019-06-28 | 2021-09-07 | Eygs Llp | Apparatus and methods for extracting data from lineless tables using Delaunay triangulation and excess edge removal |
US11170362B2 (en) | 2019-07-17 | 2021-11-09 | Mastercard International Incorporated | Methods, systems, and networks for authentication based on physical condition of a mobile device |
US11915465B2 (en) | 2019-08-21 | 2024-02-27 | Eygs Llp | Apparatus and methods for converting lineless tables into lined tables using generative adversarial networks |
US10810709B1 (en) * | 2019-11-21 | 2020-10-20 | Eygs Llp | Systems and methods for improving the quality of text documents using artificial intelligence |
US11238618B2 (en) * | 2019-11-26 | 2022-02-01 | International Business Machines Corporation | Image recognition |
US11625934B2 (en) | 2020-02-04 | 2023-04-11 | Eygs Llp | Machine learning based end-to-end extraction of tables from electronic documents |
US11069043B1 (en) * | 2020-03-13 | 2021-07-20 | Konica Minolta Business Solutions U.S.A., Inc. | Background noise reduction using a variable range of color values dependent upon the initial background color distribution |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090324100A1 (en) * | 2008-06-27 | 2009-12-31 | Palo Alto Research Center Incorporated | Method and system for finding a document image in a document collection using localized two-dimensional visual fingerprints |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4018310B2 (en) | 1999-04-21 | 2007-12-05 | 株式会社リコー | Image binarization apparatus, image imaging apparatus, image binarization method, image imaging method, and computer-readable recording medium storing a program for causing a computer to function as each step of the method |
US7636467B2 (en) | 2005-07-29 | 2009-12-22 | Nokia Corporation | Binarization of an image |
US8351720B2 (en) | 2008-04-24 | 2013-01-08 | Hewlett-Packard Development Company, L.P. | Method and system providing edge enhanced image binarization |
US8406554B1 (en) | 2009-12-02 | 2013-03-26 | Jadavpur University | Image binarization based on grey membership parameters of pixels |
- 2014-03-07: US 14/200,580 filed (US 9,172,842 B2), status Active
- 2015-09-25: US 14/865,213 filed (US 2016/0014300 A1), status Abandoned
Also Published As
Publication number | Publication date |
---|---|
US9172842B2 (en) | 2015-10-27 |
US20150256713A1 (en) | 2015-09-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9172842B2 (en) | Image binarization for optical character recognition | |
US9342892B2 (en) | Image binarization | |
US7054485B2 (en) | Image processing method, apparatus and system | |
US8947736B2 (en) | Method for binarizing scanned document images containing gray or light colored text printed with halftone pattern | |
Gupta et al. | OCR binarization and image pre-processing for searching historical documents | |
US7469063B2 (en) | Apparatus, method and storage medium storing program for recognizing characters | |
US7062099B2 (en) | Image processing method and apparatus using self-adaptive binarization | |
US7292375B2 (en) | Method and apparatus for color image processing, and a computer product | |
JP4987960B2 (en) | Conversion of digital images containing strings into token-based files for rendering | |
US6628833B1 (en) | Image processing apparatus, image processing method, and recording medium with image processing program to process image according to input image | |
Singh et al. | Optical character recognition (OCR) for printed devnagari script using artificial neural network | |
US20070140560A1 (en) | Layout analysis program, layout analysis apparatus and layout analysis method | |
US10586125B2 (en) | Line removal method, apparatus, and computer-readable medium | |
CA2790210C (en) | Resolution adjustment of an image that includes text undergoing an ocr process | |
US7221795B2 (en) | Document processing method, recording medium having recorded thereon document processing program, document processing program, document processing apparatus, and character-input document | |
RU2520407C1 (en) | Method and system of text improvement at digital copying of printed documents | |
Konya et al. | Adaptive methods for robust document image understanding | |
US9418312B2 (en) | Coarse document classification | |
Boiangiu et al. | Automatic Correction of OCR Results Using Similarity Detection for Words and Fonts | |
Sagar et al. | Character recognition on palm-leaf manuscripts—a survey | |
Shi et al. | Dynamic local connectivity and its application to page segmentation | |
Htun et al. | Page Segmentation and Document Layout Analysis for Scanned Image by using Smearing Algorithm | |
Kaur et al. | Improved XY cut Page Segmentation Algorithm for Border Noise | |
JPH04343193A (en) | Binary threshold setting method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: LEXMARK INTERNATIONAL, INC., KENTUCKY Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BOOTH, ROBERT REED;PHELPS, MICHAEL JO;REEL/FRAME:036654/0722 Effective date: 20150925 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |