US20150302598A1 - Line segmentation method - Google Patents
- Publication number
- US20150302598A1 (application US 14/254,096)
- Authority
- US
- United States
- Prior art keywords
- character
- width
- likelihood
- error
- widths
- Prior art date
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T 7/0079
- G06V 30/153 — Segmentation of character regions using recognition of characters or words
- G06T 7/10 — Segmentation; Edge detection
- G06V 10/75 — Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; coarse-fine approaches, e.g. multi-scale approaches; using context analysis; selection of dictionaries
- G06V 30/158 — Segmentation of character regions using character size, text spacings or pitch estimation
- G06V 30/196 — Recognition using electronic means using sequential comparisons of the image signals with a plurality of references
- G06T 2207/30176 — Subject of image: Document
- G06V 30/10 — Character recognition
Definitions
- FIG. 3 illustrates a flow diagram of an optical character recognition (OCR) method according to an embodiment of the present invention.
- the input of the method is a character string image 110 .
- line segmentation 120 is performed on the character string image 110 .
- Preliminary information on the potential widths of the character analyzed is calculated. This preliminary information on the potential widths of the character allows for a new sequence of steps which improves the speed of the OCR method.
- although oversegmentation is still used, not all potential solutions (210, 220, 230) need to be analyzed systematically by the OCR method.
- the potential solutions are generated by means of a list of candidate character widths 310 and are sorted from most likely to less likely.
- the OCR method first analyzes the most likely potential solution 210. If a condition on the measurement error is satisfied 320, the character is classified 150, the other potential solutions are discarded and the next character is analyzed. If the condition on the measurement error is not satisfied 330, the next most likely potential solution is analyzed 220. This process is repeated iteratively until a character has been successfully classified or until all potential solutions have been evaluated.
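The most-likely-first iteration described above can be sketched as follows. This is a minimal sketch, not the patent's implementation: `classify(start, width) -> (character, p_err)` stands in for the character classification method 140, and the function name and signature are hypothetical.

```python
def segment_character(start, candidate_widths, tl_err, classify):
    """Try candidate widths from most to least likely and accept the
    first one whose likelihood of error is below the low-error
    threshold Tl_err (condition 320)."""
    for width in candidate_widths:      # list 310, sorted most likely first
        character, p_err = classify(start, width)
        if p_err < tl_err:              # condition satisfied:
            return character, width     # discard the remaining candidates
    return None, None                   # all candidates evaluated, no match
```

With a stub classifier that only accepts a width of 12 pixels, the loop stops as soon as that candidate is reached and never evaluates the less likely widths after it.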
- the method as described here is applied to segment a line of text.
- the same method can be used to segment a column of text as is often the case for Asian text.
- a list of candidate character widths 310 is generated before the analysis of a character image. The generation of this list of candidate character widths will be described later on in the application.
- the list contains N+2 candidate widths wherein the N first widths are widths for which no cut is to be performed in the character string image 110 to extract a character and the last two widths are widths for which a cut needs to be performed in order to isolate and extract a character in the character string image 110 .
- Starting points are x coordinates which define the position of the new character image to analyse.
- a list of initial starting points is created at the beginning of the algorithm, where the first initial starting point of the list corresponds to the first black pixel on the left of the image.
- Other pre-defined starting points correspond to the end of the line, or the rightmost pixel.
- Other starting points are added to the list of starting points during the OCR process. The method ensures that all starting points present in the list are processed.
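The creation of the initial starting-point list can be sketched as follows. This is a simplified sketch assuming a binarized line image given as rows of 0/1 pixels; the function name is illustrative, not from the patent.

```python
def initial_starting_points(line_image):
    """Return the initial starting points for a line image.

    line_image: list of rows, each a list of pixels (1 = black, 0 = white).
    The first starting point is the x coordinate of the leftmost black
    pixel; the rightmost black column marks the end of the line.  Further
    starting points are appended to this list during the OCR process.
    """
    width = len(line_image[0])
    ink_columns = [x for x in range(width)
                   if any(row[x] for row in line_image)]
    if not ink_columns:
        return []                       # blank line: nothing to process
    return [ink_columns[0], ink_columns[-1]]
```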
- a character image is entirely defined by a starting point coordinate and a width, together with an associated list of connected components. The height of the line is the same for all characters. At the end of the OCR process, the character is classified.
- a character classification method 140 is applied to the potential solution to determine if a character can be classified for this potential solution.
- the character classification method 140 is based on Gabor functions.
- a character classification method 140 requires two inputs:
- the output is a likelihood of error P_err which is used to compute the character C_n.
- the likelihood of error P_err is compared with two threshold parameters: a low-error threshold Tl_err and a high-error threshold Th_err.
- the low-error threshold Tl_err defines the condition for a successfully classified character.
- a line segmentation method uses a character statistics database 400 as illustrated in FIG. 4 .
- the elements of the database will now be listed. A more detailed description on how each of the elements are used will follow further on in the application.
- the database contains:
- FIG. 4 shows a flow diagram of the line segmentation process according to an embodiment of the invention.
- the process is illustrated for the segmentation of character C_n.
- the list 310 of all N+2 candidate character widths for character C_n is generated and a first candidate character width w_1 is taken from the list.
- these two values, SP_n and w_1, are the inputs 410 for the character classification method 140 in step 420.
- the output of step 420 is the likelihood of error P_err.
- the character C_n is a potential solution.
- if P_err is lower than the low-error threshold Tl_err, the character C_n can be considered as successfully classified and the character statistics database 400 is updated as explained later on in the description.
- the method can then move to the next starting point SP_n+1 405 to determine the next character C_n+1 without processing the other widths for the current starting point SP_n. If the likelihood of error P_err is higher than the low-error threshold Tl_err 423, the character classification method is executed with the next candidate width w_i 430, as described hereunder.
- when the character classification method is executed with the next candidate width w_i 430, there are again two options depending on the value of P_err. If P_err is lower than Th_err 431, the character C_n is memorized with the width w_i, the starting point of the next character is calculated and added to the list of starting points to be processed if needed 405, and if P_err is also lower than Tl_err 432 the character statistics database is updated 400.
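The two-threshold decision for a single candidate width can be summarized in a small sketch. The function name and the returned labels are illustrative, not from the patent:

```python
def width_outcome(p_err, tl_err, th_err):
    """Decide what to do with one candidate width, given its likelihood
    of error P_err and the two thresholds Tl_err < Th_err."""
    if p_err >= th_err:
        return "reject"        # no chance of success: try the next width
    if p_err < tl_err:
        return "classified"    # success: update the statistics database
                               # and move on to the next starting point
    return "memorize"          # plausible: keep the character and queue
                               # the next starting point, but keep trying
```

A "classified" outcome corresponds to both branches 431 and 432 succeeding; "memorize" corresponds to passing the high-error filter 431 only.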
- the list of all N+2 candidate widths {w_i} for character n (C_n) is generated as follows: the candidate widths are sorted from most likely to less likely, and the number of candidate widths varies from character to character, depending on the geometry of the potential character measured with the number of connected components. It is assumed, based on observations, that the width of Asian characters is common to most characters, with a few characters having a smaller width. According to an embodiment of the present invention, the most likely width is the one containing the biggest set of connected components that is not wider than the estimated width of the widest Asian character (w_Max,A,t) plus the estimated average space between characters (s_t).
- Characters can be non-touching or touching. Non-touching characters have a higher probability of occurring and are therefore taken into account first.
- the candidate width with index i (w_i), calculated in pixels, is the i-th biggest width with a set of p (p > 0) connected components smaller than the widest Asian character (w_Max,A,t) plus the estimated average space between characters (s_t).
- width w_i has p connected components;
- width w_i+1 has p or fewer connected components and is such that w_i+1 < w_i.
- the widest Asian character (w_Max,A,t) and the estimated space between characters (s_t) are evaluated from the character statistics database. There are N possible non-touching widths.
- Cuts need to be performed if two adjacent characters are touching. The characters will then be cut at the most likely place, which is calculated from the average global width G_n−1, found in the character statistics database as updated at the previous iteration (n−1) for character C_n.
- the width with index N+1, w_N+1, corresponds to the sum of the average global width of Asian characters G_n−1 and the average space s_t.
- the width with index N+2, w_N+2, corresponds to the sum of the average width of Latin characters G_n−1/2 and the average space s_t. It is assumed that the width of Latin characters is half of the width of Asian characters.
- w_Max,A,t, G_n−1 and s_t are values which come from the character statistics database, updated each time a character has been classified (i.e. P_err < Tl_err).
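The construction of the N+2 candidate list described above can be sketched as follows. This is a simplified sketch under stated assumptions: `component_ends` gives the right edge, in pixels from the starting point, of each successive connected component, and the function name is illustrative.

```python
def candidate_widths(component_ends, w_max_asian, s_avg, g_prev):
    """Build the N+2 candidate widths for the current starting point.

    The N non-touching candidates are the widths ending on a connected
    component boundary that do not exceed the widest Asian character
    (w_Max,A,t) plus the average inter-character space (s_t); they are
    sorted from widest (most likely) to narrowest.  The last two widths
    force a cut: the average global width G_n-1 (Asian characters) and
    half of it (Latin characters), each plus the average space.
    """
    limit = w_max_asian + s_avg
    non_touching = sorted((w for w in component_ends if w <= limit),
                          reverse=True)
    return non_touching + [g_prev + s_avg, g_prev / 2 + s_avg]
```

For example, with components ending at 10, 22, 35 and 60 pixels, a widest Asian character of 30 pixels, an average space of 4 pixels and G_n−1 = 32, only the 10- and 22-pixel widths qualify as non-touching candidates, followed by the two cut widths 36 and 20.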
- the database contains a data structure which stores the character information extracted from the lines and a library of reference characters as well as statistical values on these characters.
- the single data structure is created at the beginning of the process, the structure is then empty.
- the data structure, stored in memory is updated at each iteration and its structure is similar to a graph.
- the proportionality ratio represents a conversion of the point size of the characters in the library to the point size of the characters in the text.
- This value represents the local estimate of the width of character n and is further used to evaluate the global estimate of the width of characters at the step n.
- G_n−1 is the global estimate of the average width of characters, updated at step n−1;
- L_n is the local estimate of the average size of characters at step n;
- G_0 is the height of the line.
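The text defines G_0, G_n−1 and L_n but does not spell out the exact update rule for the global estimate. One plausible sketch, assuming an equal-weight running average over the characters classified so far, is:

```python
def update_global_width(g_prev, l_n, n):
    """One plausible update of the global width estimate G_n from the
    previous estimate G_n-1 and the local estimate L_n at step n.  The
    exact weighting is an assumption, not taken from the patent."""
    return ((n - 1) * g_prev + l_n) / n
```

Under this assumption, G_1 equals L_1 regardless of G_0, which is consistent with the statement that the average global width is taken as the line height only for the first character.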
- This embodiment illustrates the case of a line segmentation method but the method is not limited to a line.
- Asian text can also be written in columns and the same method can also be used.
- the width of the character has to be replaced by the height of the character, and the starting point coordinate is the (y) coordinate of the first pixel of a character at the top of the character string image.
Description
- The present invention relates to a line segmentation method and more particularly to a line segmentation method used within an OCR system.
- Optical Character Recognition (OCR) systems are known. These systems automatically convert a paper document into a searchable text document. OCR systems are typically composed of three main steps: line segmentation, feature extraction and character classification. But, as illustrated in FIG. 1, feature extraction is often presented as part of the character classification. In that way, starting from an image of a character string, known optical character recognition systems first apply a line segmentation to obtain images of individual characters, and subsequently a character classification step is executed to identify the characters. While character classification techniques have become extremely robust over the past years, line segmentation still remains a critical step of OCR, in particular in the case of Asian text.
- Different approaches to line segmentation (also often called character segmentation) exist. The image representing a text line is decomposed into individual sub-images which constitute the character images. Different methods can be used to segment a line. A known line segmentation method is the detection of inter-character breaks or word breaks (adapted to Latin characters) as a way to isolate individual characters. This is described for example in WO2011128777 and WO201126755.
- Another known line segmentation method, described for example in WO2011142977, uses chop lines which are processed afterwards to identify the lines that separate characters. Still other methods, such as for example in EP0138445B1, assume a constant pitch between characters.
- The above-described line segmentation methods are known as dissection methods. This type of method is less efficient for text composed of Asian text, or Asian text combined with Latin text, because in this type of text there is often no clear break or pitch between characters, and Asian characters are not made of a single connected component but mostly of several connected components (e.g. radicals for Chinese characters).
- Another type of method of line segmentation is based on the recognition of components in the image that match classes in a particular alphabet. Such methods require however long computation times.
- A third type of segmentation technique uses a combination of the first two and is known as the "oversegmentation" method. The image is oversegmented with different dissection methods as illustrated in FIG. 2. Several plausible segmentation solutions are analyzed by the same or different character classification methods and the best segmentation solution is then chosen. When the segmentation becomes difficult, as is the case for example for Asian characters, many possible segmentation solutions are evaluated, which leads to extremely long computation times for analyzing the input string image.
- It is an aim of the present invention to provide a method for segmenting characters in a character string image which provides fast and accurate segmentation of a line.
- These aims are achieved according to the invention with a method for segmenting characters in a character string image showing the technical characteristics of the first independent claim. The method of segmenting characters in a character string image according to the invention comprises the steps of:
-
- a) determining a first starting point coordinate of a pixel contrasting to a background,
- b) generating a list of potential character widths dependent on a maximum character width and on characteristics of the portion of the character string image corresponding to the maximum character width,
- c) determining a second portion of the character string image corresponding to the first starting point coordinate and the first width,
- d) applying a classification method on the second portion of the character string image providing a likelihood of error for the first width and a candidate character,
- e) comparing the likelihood of error with a first threshold determined by a trade-off between speed and accuracy; and
- f) selecting the candidate character as the character corresponding to the first width if the likelihood of error corresponding to the first width is lower than the threshold value.
- An advantage of this method is that line segmentation and character classification are combined into a single process performed character by character. This greatly reduces calculation time, because the number of steps required to execute line segmentation and character classification of a character string image is significantly reduced. The result is an increase in both the speed and the accuracy of the method.
- In other embodiments according to the present invention, the method further comprises the step of comparing the likelihood of error with a second threshold value higher than the first threshold value; and wherein the step of comparing the likelihood of error with the first threshold value is only executed if the likelihood of error is lower than the second threshold value.
- The second threshold value has the advantage that it allows for fast filtering of candidates which have no chance of giving a positive result.
- In another embodiment according to the present invention, the method further comprises the step of calculating the starting point for the next character if the likelihood of error corresponding to the first width is lower than the second threshold value, and keeping the calculated starting point of the next character in memory.
- In another embodiment according to the present invention, the method further comprises the step of updating character statistics values contained in a database if the likelihood of error corresponding to the first width is lower than the first threshold value.
- This database contains information on the maximal and average sizes of characters in the text and reference characters. These values are used when estimating the widths of the characters in the generation of the list of potential character widths in order to improve the speed and accuracy of the method.
- In another embodiment according to the current invention, the list of potential character widths is sorted from most likely to least likely, wherein the most likely width is the widest width containing a maximum number of connected components which are not larger than an estimated maximum width for a character stored in the database.
- In another embodiment according to the current invention, the two less likely widths of the list of potential character widths are an average global width and half of the average global width, wherein the average global width is the height of the character string image for a first character in the character string image and the average global width is calculated based on a previous average global width and an average character width stored in the database for a subsequent character in the character string image.
- The advantage of this is that the average global width will identify Asian characters, while half of the average global width will identify Latin characters, because the size of Asian characters is around twice that of Latin characters. The line segmentation method can therefore be applied to Latin characters, Asian characters and a combination thereof.
- In another embodiment according to the current invention, if the likelihood of error corresponding to the previous width of the list of potential character widths is higher than the second threshold value, the method further comprises the steps of:
-
- a) determining a second portion of the character string image corresponding to the starting point coordinate and to the next width of the list;
- b) applying a classification method on the second portion of the character string image providing a likelihood of error for this width and a candidate character;
- c) comparing the likelihood of error with the threshold value stored in the database;
- d) repeating steps a), b) and c) until the likelihood of error corresponding to this width is lower than the threshold value or until all the widths contained in the list of potential character widths have been processed;
- e) selecting the character candidate as the character corresponding to the width if the likelihood of error corresponding to the width is lower than the first threshold value.
- Line segmentation and character classification are combined and performed one after the other until a solution is found. This reduces the number of steps required to perform such a method and also improves the accuracy of the method.
- In another embodiment according to the current invention, the character string image is a vertical character string image and all widths are heights.
- Asian characters can be written in lines but also in columns. The method is certainly not limited to lines and can easily be adapted to columns simply by changing the widths of the characters into heights and vice versa.
- In another embodiment, the method further comprises the step of updating a character statistics database with the average global width value at a successful iteration.
- In another embodiment according to the current invention, the step of generating a list of potential character widths is based on data retrieved from a database which contains reference characters for a given point size, the width of the biggest reference characters, the average width of the reference characters and the size of the average space between reference characters.
- In another embodiment of the current invention, the database further contains estimates of statistical values of the characters, wherein the database is updated at each successful iteration.
- In another embodiment of the current invention, the maximum character width is a maximum character width for Asian characters.
- In another embodiment of the current invention, a computer program product is provided comprising a computer usable medium having control logic stored therein for causing a computing device to segment a character string image in an input image, the control logic comprising:
-
- a) first computer readable program code means for determining a first starting point coordinate of a pixel contrasting to a background,
- b) second computer readable program code means for generating a list of potential character widths dependent on a maximum character width and on characteristics of the portion of the character string image corresponding to the maximum character width,
- c) third computer readable program code means for determining a second portion of the character string image corresponding to the first starting point coordinate and the first width on the list of potential character widths,
- d) fourth computer readable program code means for applying a classification method on the second portion of the character string image providing a likelihood of error for the first width and a candidate character,
- e) fifth computer readable program code means for comparing the likelihood of error with a first threshold determined by a trade-off between speed and accuracy; and
- f) sixth computer readable program code means for selecting the candidate character as the character corresponding to the first width if the likelihood of error corresponding to the first width is lower than the threshold value.
- The invention will be further elucidated by means of the following description and the appended figures.
-
FIG. 1 shows the different steps in an Optical Character Recognition process according to the prior art. -
FIG. 2 illustrates a type of line segmentation in the state of the art known as oversegmentation. -
FIG. 3 shows a line segmentation method according to an embodiment of the invention. -
FIG. 4 illustrates a line segmentation method with a character statistics database.
- The present invention will be described with respect to particular embodiments and with reference to certain drawings, but the invention is not limited thereto, only by the claims. The drawings described are only schematic and are non-limiting. In the drawings, the size of some of the elements may be exaggerated and not drawn to scale for illustrative purposes. The dimensions and the relative dimensions do not necessarily correspond to actual reductions to practice of the invention.
- Furthermore, the terms first, second, third and the like in the description and in the claims, are used for distinguishing between similar elements and not necessarily for describing a sequential or chronological order. The terms are interchangeable under appropriate circumstances and the embodiments of the invention can operate in other sequences than described or illustrated herein.
- Moreover, the terms top, bottom, over, under and the like in the description and the claims are used for descriptive purposes and not necessarily for describing relative positions. The terms so used are interchangeable under appropriate circumstances and the embodiments of the invention described herein can operate in other orientations than described or illustrated herein.
- Furthermore, the various embodiments, although referred to as “preferred” are to be construed as exemplary manners in which the invention may be implemented rather than as limiting the scope of the invention.
- The term “comprising”, used in the claims, should not be interpreted as being restricted to the elements or steps listed thereafter; it does not exclude other elements or steps. It needs to be interpreted as specifying the presence of the stated features, integers, steps or components as referred to, but does not preclude the presence or addition of one or more other features, integers, steps or components, or groups thereof. Thus, the scope of the expression “a device comprising A and B” should not be limited to devices consisting only of components A and B, rather with respect to the present invention, the only enumerated components of the device are A and B, and further the claim should be interpreted as including equivalents of those components.
- Referring to FIG. 3, FIG. 3 illustrates a flow diagram of an optical character recognition (OCR) method according to an embodiment of the present invention. The input of the method is a character string image 110. In a first step, line segmentation 120 is performed on the character string image 110. Preliminary information on the potential widths of the character analyzed is calculated. This preliminary information on the potential widths of the character allows for a new sequence of steps which improves the speed of the OCR method. Although oversegmentation is still used, not all potential solutions (210, 220, 230) need to be analyzed systematically by the OCR method. The potential solutions are generated by means of a list of candidate character widths 310 and are sorted from most likely to least likely. The OCR method first analyzes the most likely potential solution 210. If a condition on the measurement error is satisfied 320, the character is classified 150, the other potential solutions are discarded and the next character is analyzed. If the condition on the measurement error is not satisfied 330, the next most likely potential solution is analyzed 220. This process is iteratively repeated until a character has been successfully classified or until all potential solutions have been evaluated. - The method, as described here, is applied to segment a line of text. However, the same method can be used to segment a column of text, as is often the case for Asian text.
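The iteration just described can be sketched in a few lines of Python. This is an illustrative stand-in, not the patented implementation; the `classify` callable and the toy widths are assumptions:

```python
def classify_with_candidate_widths(candidate_widths, classify, t_low_err=0.20):
    """Try candidate widths from most likely to least likely; stop as
    soon as the classifier's likelihood of error drops below the
    threshold for a successful classification."""
    for width in candidate_widths:            # sorted most likely first
        character, p_err = classify(width)    # stand-in classifier
        if p_err < t_low_err:                 # condition satisfied: accept
            return character, width
    return None, None                         # all candidates evaluated

# Toy stand-in classifier: only width 12 yields a confident result.
toy_classifier = lambda w: ("C", 0.05) if w == 12 else ("?", 0.80)
result = classify_with_candidate_widths([14, 12, 10], toy_classifier)
```

Because the loop returns on the first acceptable width, the remaining, less likely candidates are never evaluated, which is the source of the claimed speed-up over exhaustive oversegmentation.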
- As described above, a list of candidate character widths 310, ordered from most likely to least likely to occur, is generated before the analysis of a character image. The generation of this list of candidate character widths will be described later in the application. The list contains N+2 candidate widths, wherein the N first widths are widths for which no cut is to be performed in the character string image 110 to extract a character, and the last two widths are widths for which a cut needs to be performed in order to isolate and extract a character in the character string image 110. - Starting points are x coordinates which define the position of the new character image to analyse. A list of initial starting points is created at the beginning of the algorithm, where the first initial starting point of the list corresponds to the first black pixel at the left of the image. Another pre-defined starting point corresponds to the end of the line, i.e. the rightmost pixel. Other starting points are added to the list of starting points during the OCR process. The method ensures that all starting points present in the list are processed.
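The initial starting-point bookkeeping can be sketched as follows (a minimal illustration on a single binarized row; names and the pixel encoding are assumptions):

```python
def first_black_pixel_x(binarized_row):
    """Return the x coordinate of the leftmost black pixel (encoded as 1)
    in a binarized row of the line image, or None if the row is blank."""
    for x, pixel in enumerate(binarized_row):
        if pixel == 1:
            return x
    return None

# Initial starting points: the first black pixel and the rightmost pixel.
row = [0, 0, 1, 1, 0, 1, 0]
starting_points = [first_black_pixel_x(row), len(row) - 1]
```

During recognition, further starting points (one per accepted candidate width) would be appended to `starting_points` until every entry has been processed.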
- A character image is entirely defined by a starting point coordinate and a width brought into relation with a list of connected components. The height of the line is the same for all characters. At the end of the OCR process, the character is classified.
- Once a potential solution is created, a character classification method 140 is applied to the potential solution to determine if a character can be classified for this potential solution. In an embodiment of the invention, the character classification method 140 is based on Gabor functions. - A character classification method 140, according to an embodiment of the invention, requires two inputs:
- the starting point coordinate of character n, SPn. The starting point coordinate is the (x) coordinate of the first pixel of a character at the bottom left of the character to analyze,
- a candidate width w1 taken from the list of candidate character widths for character n.
- The output is a likelihood of error Perr which is used to compute the character Cn. The likelihood of error Perr is compared with two threshold parameters: a threshold to have a low likelihood of error Tlerr and a threshold to have a high likelihood of error Therr. The values of Tlerr and Therr can be adjusted depending on the speed versus accuracy requirements. In a preferred embodiment of the invention, the values of Tlerr and Therr are set to Tlerr=20% and Therr=99.9%. The threshold to have a low likelihood of error Tlerr defines the condition to have a successfully classified character.
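The three-way comparison against the two thresholds can be sketched as follows (the threshold values are those of the preferred embodiment above; the function name and labels are illustrative):

```python
def threshold_decision(p_err, t_low=0.20, t_high=0.999):
    """Compare the likelihood of error with the two thresholds:
    below Tlerr the character is successfully classified; between
    Tlerr and Therr it is kept as a potential solution while other
    widths are tried; at or above Therr it is discarded."""
    if p_err < t_low:
        return "classified"
    if p_err < t_high:
        return "potential"
    return "discarded"
```

Raising `t_low` or lowering `t_high` trades accuracy for speed, which is exactly the adjustment described for Tlerr and Therr.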
- A line segmentation method according to an embodiment of the invention uses a character statistics database 400 as illustrated in FIG. 4. The elements of the database will now be listed. A more detailed description of how each of the elements is used will follow further on in the application. The database contains:
- a library of reference sizes (height and width) for Asian and Latin characters, and for a selected point size, stored in memory,
- the reference maximal size for Asian and Latin characters for the selected point size, stored in memory, respectively wMax,A,r, and wMax,L,r,
- the reference mean inter-character space, same for Asian and Latin text, for the selected point size, sr,
- the estimated maximum width of Asian and Latin characters in the text being analyzed: respectively wMax,A,t, and wMax,L,t,
- the mean inter-character space for Asian or Latin characters in the text being analyzed, st,
- the local estimate of the width of Asian and Latin character n, respectively Ln,A and Ln,L, which represents the width of the corresponding reference character, calculated only for characters which have been classified. It is a measurement of the point size of the character computed using the actual width and value of character n.
- the global estimate of the width of characters, Gn, which represents the width of the corresponding reference character, calculated only for characters which have been classified. The value of Gn is a running average of the previously measured local estimates Ln and is therefore a more accurate measurement of the average character point size. This value is more reliable because it is more tolerant to characters wrongly classified.
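The database elements listed above can be grouped into a simple container; the sketch below uses a Python dataclass with field names chosen for readability (they are assumptions, not the patent's identifiers):

```python
from dataclasses import dataclass, field

@dataclass
class CharacterStatistics:
    """Illustrative container for the character statistics database 400."""
    w_max_asian_ref: float                 # wMax,A,r: reference max Asian width
    w_max_latin_ref: float                 # wMax,L,r: reference max Latin width
    inter_char_space_ref: float            # sr: reference mean inter-character space
    w_max_asian_text: float = 0.0          # wMax,A,t: estimated max Asian width in text
    w_max_latin_text: float = 0.0          # wMax,L,t: estimated max Latin width in text
    inter_char_space_text: float = 0.0     # st: mean inter-character space in text
    local_estimates: list = field(default_factory=list)  # Ln values per classified character
    global_estimate: float = 0.0           # Gn: running average of the Ln values

db = CharacterStatistics(w_max_asian_ref=32.0, w_max_latin_ref=16.0,
                         inter_char_space_ref=4.0)
```

The reference fields are filled once from the library for the selected point size, while the text-side fields and the estimates are updated at each successful iteration.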
- Referring to FIG. 4, FIG. 4 shows a flow diagram of the line segmentation process according to an embodiment of the invention. The process is illustrated for the segmentation of character Cn. The list 310 of all N+2 candidate character widths for character Cn is generated and a first candidate character width w1 is taken from the list of candidate character widths. These two values, SPn and w1, are the inputs 410 for the character classification method 140 in step 420. The output of step 420 is the likelihood of error Perr. - Depending on the value of Perr, two options are possible. If the likelihood of error Perr is lower than the threshold on the error to have a high likelihood of error Therr 421, the character Cn is a potential solution. The character Cn corresponding to the first candidate width w1 is then kept in memory and the starting point of the next character is calculated and added to the list of starting points to be processed if needed: SPn+1 = SPn + w1 + st, 425. If the likelihood of error Perr is also lower than the threshold on the error to have a low likelihood of error Tlerr 422, the character Cn can be considered as successfully classified and the character statistics database 400 is updated as explained later in the description. The method can then move to the next starting point SPn+1 405 to determine the next character Cn+1 without processing the other widths for the current starting point SPn. If the likelihood of error Perr is higher than the threshold on the error to have a low likelihood of error Tlerr 423, the character classification method is executed with the next candidate width i, wi 430, as described hereunder. - If however the likelihood of error Perr is higher than the threshold on the error to have a high likelihood of error Therr 424, the character Cn corresponding to the candidate width 1, w1, is not kept in memory and no new starting point is calculated. - The character classification method is executed with the next candidate width i, wi 430. Again, there are two options depending on the value of Perr. If Perr is lower than Therr 431, the character Cn is memorized with the width wi, the starting point of the next character is calculated and added to the list of starting points to be processed if needed 405, and if Perr is also lower than Tlerr 432 the character statistics database is updated 400. If Perr is however higher than Tlerr and/or Therr (435, 433), the character classification method is executed with the next candidate width i+1, wi+1, until all the widths of the list have been processed (i=N) or until a character has been successfully classified (Perr<Tlerr). - For i=N+1, the same process is repeated but now the width wN+1 is such that a first cut is performed for the width value wN+1 = wMax,A,t 440. If no character has been classified with a low probability of error (Perr<Tlerr) for i=N+1 (443 or 445), then the process is repeated for i=N+2, where wN+2 = wMax,L,t 450, and again different paths are possible, such as 451 with 452, 451 with 453, or 454. - So as not to analyze all solutions of the oversegmentation, the list of all N+2 candidate widths {wi} for character n (Cn) is generated as follows: the candidate widths are sorted from most likely to least likely and the number of candidate widths varies from character to character, depending on the geometry of the potential character measured with the number of connected components. It is assumed, based on observations, that the width of Asian characters is common for most characters, except for a few characters which then have a smaller width. According to an embodiment of the present invention, the most likely width corresponds to the one which contains the biggest set of connected components, not wider than the estimated width of the widest Asian character (wMax,A,t) plus the estimated average space between characters (st).
- Characters can be non-touching, or touching. Non-touching characters have a higher probability to occur and are therefore to be taken into account first.
- For non-touching characters (no cut is necessary), the candidate width with index i (wi), calculated in pixels, is the ith biggest width with a set of p (p≧0) connected components smaller than the width of the widest Asian character (wMax,A,t) plus the average estimated space between characters (st). Width wi has p connected components; width wi+1 has p or fewer connected components and is such that wi+1≦wi.
- The width of the widest Asian character (wMax,A,t) and the estimated space between characters (st) are evaluated in the character statistics database. There are N possible non-touching characters.
- Cuts need to be performed if two adjacent characters are touching. In that case, the characters are cut at the most likely place, which is calculated from the average global width Gn−1 of the character, found in the character statistics database updated at the previous iteration (n−1) for character Cn. The width with index N+1, wN+1, corresponds to the sum of the average global width of Asian characters Gn−1 and the average space st. The width with index N+2, wN+2, corresponds to the sum of the average width of Latin characters Gn−1/2 and the average space st. It is assumed that the width of Latin characters is half of the width of Asian characters.
- To summarize, at each iteration the list of input candidate widths of character n is given by:
- wi=width of the ith biggest set of p connected components such that wi≦wMax,A,t+st, i=1, . . . , N; N≧0
- wN+1=Gn−1+st,
- wN+2=Gn−1/2+st
- where wMax,A,t, Gn−1 and st are values which come from the character statistics database, updated each time a character has been classified (i.e. Perr<Tlerr).
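Putting the non-touching and touching cases together, the construction of the candidate width list can be sketched as follows. The widths of the sets of connected components are assumed to be given by a prior connected-component analysis; names and toy values are illustrative:

```python
def build_candidate_widths(group_widths, w_max_asian, g_prev, s_t):
    """Build the N+2 candidate width list: the N widths of sets of
    connected components no wider than wMax,A,t + st, sorted from the
    biggest down (most likely first), followed by the two cut widths
    for touching characters: Gn-1 + st (Asian) and Gn-1/2 + st
    (Latin, assumed half the Asian width)."""
    non_touching = sorted((w for w in group_widths if w <= w_max_asian + s_t),
                          reverse=True)
    return non_touching + [g_prev + s_t, g_prev / 2 + s_t]

# Toy values: three groupings of connected components, Gn-1 = 30, st = 4.
widths = build_candidate_widths([28, 60, 20], w_max_asian=32,
                                g_prev=30.0, s_t=4.0)
# [28, 20, 34.0, 19.0] — the 60-pixel grouping is too wide and is dropped.
```

Here N varies per character (N=2 in the toy example), exactly as the text describes: it depends on how many connected-component groupings fit under the maximum Asian width plus the average space.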
- The database contains a data structure which stores the character information extracted from the lines and a library of reference characters, as well as statistical values on these characters. The single data structure is created at the beginning of the process; at that point the structure is empty. The data structure, stored in memory, is updated at each iteration and its structure is similar to a graph.
- All the parameters of the database are summarized in the following table:
-
TABLE 1 summarizes the parameters of the character statistics database; the evaluation of the different parameters of the database is now explained.

 | Individual characters | Max | Mean
---|---|---|---
Reference (stored in library, for each character of a selected point size) | wA,r, wi,r | wMax,A,r, wMax,L,r | wmean,A,r, wmean,L,r, sr
Text | wi | wMax,A,t, wMax,L,t | Ln,A, Ln,L = Ln,A/2, Gn,A, Gn,L = Gn,A/2, st

- The width of the biggest Asian and Latin characters is evaluated as follows:
-
- where the proportionality ratio represents a conversion of the point size of the characters in the library to the point size of the characters in the text.
- The same is done for the average size of Asian and Latin characters, respectively:
-
- This value represents the local estimate of the width of character n and is further used to evaluate the global estimate of the width of characters at the step n.
- The global estimate of the width of characters at step n, Gn is calculated using the following equation:
-
- where Gn−1 is the global estimate of the average width of characters updated at step n−1, Ln is the local estimate of the average size of characters at step n, n is the index of the current step of the method and G0 is the height of the line (Asian characters are assumed square). This equation is valid for Asian and Latin characters. It is assumed that for Latin characters, the global estimate of the width is half of the global estimate for Asian characters.
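The running average can be sketched as follows. This is one plausible reading, in which G0 (the line height) acts as an initial pseudo-observation so that Gn is the mean of G0 and the local estimates L1..Ln; the exact update form in the patent may differ:

```python
def update_global_estimate(g_prev, l_n, n):
    """One plausible reading of the running average:
        Gn = (n * Gn-1 + Ln) / (n + 1)
    so that Gn is the mean of G0 (the line-height seed) and L1..Ln.
    The exact update used in the patent may differ."""
    return (n * g_prev + l_n) / (n + 1)

g = 40.0                                  # G0 = line height
for n, l_n in enumerate([30.0, 34.0, 38.0], start=1):
    g = update_global_estimate(g, l_n, n)
# g is approximately 35.5, the mean of (40, 30, 34, 38)
```

Averaging over all previous local estimates is what makes Gn more tolerant of occasional misclassified characters than any single Ln, as noted above.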
- Finally, the same proportionality is applied to estimate the inter-character space in the text, st, when the point size of the text is different from the point size of the reference characters:
-
- This embodiment illustrates the case of a line segmentation method but the method is not limited to a line. Asian text can also be written in columns and the same method can also be used. In this case, the width of the character has to be replaced by the height of the character, and the starting point coordinate is the (y) coordinate of the first pixel of a character at the top of the character string image.
Claims (14)
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/254,096 US9183636B1 (en) | 2014-04-16 | 2014-04-16 | Line segmentation method |
BE2015/5242A BE1025503B1 (en) | 2014-04-16 | 2015-04-15 | LINE SEGMENTATION METHOD |
CN201580022048.5A CN106255979B (en) | 2014-04-16 | 2015-04-15 | Row dividing method |
JP2016562596A JP6693887B2 (en) | 2014-04-16 | 2015-04-15 | Line segmentation method |
PCT/EP2015/058181 WO2015158781A1 (en) | 2014-04-16 | 2015-04-15 | Line segmentation method |
KR1020167030778A KR102345498B1 (en) | 2014-04-16 | 2015-04-15 | Line segmentation method |
Publications (2)
Publication Number | Publication Date |
---|---|
US20150302598A1 true US20150302598A1 (en) | 2015-10-22 |
US9183636B1 US9183636B1 (en) | 2015-11-10 |
Family
ID=53051796
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: I.R.I.S., BELGIUM Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:COLLET, FREDERIC;HAUTOT, JORDI;DAUW, MICHEL;AND OTHERS;SIGNING DATES FROM 20141125 TO 20141127;REEL/FRAME:034308/0522 |