CN108805128B - Character segmentation method and device - Google Patents

Character segmentation method and device

Info

Publication number
CN108805128B
CN108805128B (application CN201710312140.6A)
Authority
CN
China
Prior art keywords
point, interval, segmentation, points, characters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710312140.6A
Other languages
Chinese (zh)
Other versions
CN108805128A (en)
Inventor
李俊玲
Current Assignee
Jingdong Technology Holding Co Ltd
Original Assignee
Jingdong Technology Holding Co Ltd
Priority date
Filing date
Publication date
Application filed by Jingdong Technology Holding Co Ltd filed Critical Jingdong Technology Holding Co Ltd
Priority to CN201710312140.6A priority Critical patent/CN108805128B/en
Publication of CN108805128A publication Critical patent/CN108805128A/en
Application granted granted Critical
Publication of CN108805128B publication Critical patent/CN108805128B/en


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 — Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 — Character recognition
    • G06V30/14 — Image acquisition
    • G06V30/148 — Segmentation of character regions
    • G06V30/153 — Segmentation of character regions using recognition of characters or words

Abstract

The invention provides a character segmentation method and device, which can cut a plurality of continuously adhered characters into complete single characters, avoid cutting a complete character into two halves, and improve the accuracy of character recognition to a certain extent. The character segmentation method of the invention comprises the following steps: projecting the character image to be segmented in the vertical direction, and searching the projected image for the middle point of each blank interval as a pre-segmentation point, thereby obtaining a pre-segmentation point set, where a blank interval consists of points whose vertical projection values are smaller than a set value; calculating the average width of a single character from the total number of characters and the total width of the characters; traversing the pre-segmentation point set, calculating the interval between each pair of adjacent pre-segmentation points, and determining an actual segmentation point set in combination with the average width of a single character; and traversing the actual segmentation point set and determining the pixel points between adjacent actual segmentation points, thereby obtaining the segmented single-character images of the character image to be segmented.

Description

Character segmentation method and device
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method and an apparatus for character segmentation.
Background
OCR text recognition technology today has wide and deep application in many fields, such as scientific research, reading, and information retrieval. Texts and reports can be processed in batches with OCR, improving processing efficiency and reducing labor cost. The OCR text recognition process can be roughly divided into the steps of picture input, binarization, denoising and cutting, intelligent character recognition, and result output. Denoising and cutting is an important link: useless noise points in the picture are removed, and the target text to be recognized is cut character by character so that each character forms a single picture, providing usable material for the subsequent intelligent character recognition step. The result of denoising and cutting therefore largely determines the efficiency and accuracy of the whole text recognition.
In the technical scheme in the prior art, the denoising cutting technology of the picture text is mostly realized by a projection mode. The method comprises the following steps:
1) Perform binarization processing on the picture, converting the color picture into a black-and-white picture. After binarization, the characters in the figure can be regarded as connected black pixels.
2) Perform a projection operation on the target character. Taking the numeral "1" as an example, the character is placed in a two-dimensional coordinate system with the upper-left corner as the origin and pixels as units, and is projected onto the X axis and the Y axis; the effect is as shown in fig. 1.
3) Similarly, when several such characters appear together, as in fig. 2, the projection in the ideal case is as shown in fig. 3: the X-axis projection image is fig. 3 (a) and the Y-axis projection image is fig. 3 (b). The projection image contains a plurality of curves, each the projection of one character, with certain sections in the middle where SumY is 0; the projected value is 0 in these sections because the inter-character gaps contain no black pixels. Thus, from the projection image on the X axis, the specific lateral position of each character in the picture can be determined, and likewise the position of the characters in the Y-axis direction is obtained from the other graph. Cutting according to the specific coordinates of the pixel points then yields a picture of each single character.
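The projection operation described above can be sketched in a few lines of numpy. This is a sketch under the assumption that the binarized image is stored with character pixels as 1 and background as 0; the function and variable names are illustrative, not taken from the patent:

```python
import numpy as np

def projections(binary):
    """Return the X-axis (per-column) and Y-axis (per-row) projections
    of a binary image in which character pixels are 1 and background 0."""
    sum_x = binary.sum(axis=0)   # projection onto the X axis
    sum_y = binary.sum(axis=1)   # projection onto the Y axis
    return sum_x, sum_y
```

Columns of `sum_x` that are 0 correspond to the inter-character gaps that ideal projection-based cutting relies on.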
In the process of implementing the present invention, the inventor found that the prior art has at least the following problem: in actual service scenarios, when denoising and cutting of pictures rely solely on the traditional projection method, accuracy is often very low because of the diversity and complexity of the pictures. As shown in fig. 4, after binarization some characters in the figure are adhered to each other and cannot be completely separated, so the projected image cannot be divided at the points where SumY is 0.
Disclosure of Invention
In view of this, the embodiment of the invention provides a method and a device for character segmentation, which can cut a plurality of continuous adhered characters into complete single characters, avoid the situation that a complete character is cut into two halves, and improve the accuracy of character recognition to a certain extent.
In order to achieve the above object, according to a first aspect of an embodiment of the present invention, there is provided a character segmentation method.
The character segmentation method of the invention comprises the following steps: projecting the character image to be segmented in the vertical direction, and searching the projected image for the middle point of each blank interval as a pre-segmentation point, thereby obtaining a pre-segmentation point set; a blank interval consists of points whose vertical projection values are smaller than a set value;
calculating the average width of the single character according to the total number of characters and the total width of the characters;
traversing the pre-segmentation point set, calculating the interval between two adjacent pre-segmentation points, and determining an actual segmentation point set by combining the average width of the single character;
and traversing the actual segmentation point set, and determining pixel points between adjacent actual segmentation points, so as to obtain a single segmented character graph of the character image to be segmented.
Optionally, the step of searching for the middle points of the blank intervals from the projection image as pre-segmentation points, thereby obtaining a pre-segmentation point set, includes: searching the projection image for points whose projection values are smaller than a set value, and recording the abscissa of each such point in turn; then, from the abscissas of adjacent points, calculating in turn the abscissa of the middle point of each pair of adjacent points, thereby obtaining a pre-segmentation point coordinate set blank_point = {b_0, b_1, ..., b_i, ..., b_m}, where m represents the total number of pre-segmentation points and is less than or equal to the total number of characters N_char, and b_i represents the abscissa of the i-th pre-segmentation point.
Optionally, the step of calculating the average width of the individual characters from the total number of characters and the total width of the characters comprises: the average width of individual characters is calculated according to the following formula, average width of individual characters=total width of characters/total number of characters.
Optionally, traversing the pre-segmentation point set, calculating the interval between two adjacent pre-segmentation points, and determining the actual segmentation point set in combination with the average width of a single character includes: traversing the pre-segmentation point set blank_point, and then calculating the interval between the abscissas of each pair of adjacent pre-segmentation points, where interval = blank_point[i+1] - blank_point[i], i ∈ [0, m); comparing the interval with the average width W of a single character, determining the abscissa of the actual segmentation point according to a preset recognition rule, and writing the actual segmentation point into the actual segmentation point set segment_point, where b_0 is the abscissa of the first actual segmentation point in the set of actual segmentation points.
Optionally, the step of comparing the sizes of interval and W and determining the actual division point according to a preset recognition rule includes: when first coefficient × W < interval < second coefficient × W, determining that the interval contains one character, i.e. taking blank_point[i+1] as an actual division point; wherein the second coefficient is greater than the first coefficient.
Optionally, the step of comparing the sizes of interval and W and determining the actual division point according to a preset recognition rule includes: when second coefficient × W < interval ≤ third coefficient × W, where the third coefficient is greater than the second coefficient, determining that the interval contains 3 adhered characters, calculating the average width w of the characters in the interval, taking blank_point[i] as the starting point start, and determining the actual division points by the following steps: Step A: calculating the abscissa seg_point of the first adhesion division point in the interval according to the formula seg_point = start + w, expanding a first preset number of pixels to the left and right with the first adhesion division point as the center, searching for the point with the minimum projection value within the expanded range, taking that point as the first actual division point in the interval, and updating seg_point to the abscissa of this first actual division point; Step B: repeating step A with the updated seg_point as the starting point, thereby obtaining the second actual division point in the interval; Step C: taking blank_point[i+1] as the last actual division point in the interval.
Optionally, the step of comparing the sizes of interval and W and determining the actual division point according to a preset recognition rule includes: when interval > third coefficient × W, determining that the interval contains more than 3 adhered characters, and determining the actual division points of the interval according to the following steps: step a: shrinking the interval inward at both ends by a second preset number of pixels; step b: searching for the point with the minimum vertical projection value within the shrunken interval, taking it as the adhesion division point of the interval, and dividing the interval into two subintervals at this adhesion division point; step c: calculating the interval of each subinterval, and determining the actual division points of the subintervals according to the preset recognition rules and the average width of a single character.
Optionally, the step of traversing the set of actual segmentation points to determine the pixel points between adjacent actual segmentation points includes: determining the abscissas of adjacent actual division points; and taking the pixel points whose abscissas lie between the abscissas of the adjacent actual division points as the pixel points between those division points.
Optionally, the character image to be segmented includes a binary image including only the character to be segmented.
According to a second aspect of the embodiment of the present invention, there is provided a character segmentation apparatus.
The character segmentation apparatus of the present invention includes: the projection module is used for projecting the character image to be segmented in the vertical direction, and searching the intermediate points of the blank intervals from the projection image to serve as pre-segmentation points, so that a pre-segmentation point set is obtained; the blank interval is a point of which the projection value in the vertical direction is smaller than a set value; a calculation module for calculating the average width of the single character according to the total number of characters and the total width of the characters; the determining module is used for traversing the pre-segmentation point set, calculating the interval between two adjacent pre-segmentation points and determining an actual segmentation point set by combining the average width of the single character; and the character determining module is used for traversing the actual segmentation point set and determining pixel points between adjacent actual segmentation points so as to obtain a segmented single character graph of the character image to be segmented.
Optionally, the projection module is further configured to: search the projection image for points whose projection values are smaller than a set value, and record the abscissa of each such point in turn; then, from the abscissas of adjacent points, calculate in turn the abscissa of the middle point of each pair of adjacent points, thereby obtaining a pre-segmentation point coordinate set blank_point = {b_0, b_1, ..., b_i, ..., b_m}, where m represents the total number of pre-segmentation points and is less than or equal to the total number of characters N_char, and b_i represents the abscissa of the i-th pre-segmentation point.
Optionally, the computing module is further configured to: the average width of individual characters is calculated according to the following formula, average width of individual characters=total width of characters/total number of characters.
Optionally, the determining module is further configured to: traverse the pre-segmentation point set blank_point, and then calculate the interval between the abscissas of each pair of adjacent pre-segmentation points; compare the interval with the average width W of a single character, determine the abscissa of the actual segmentation point according to a preset recognition rule, and write the actual segmentation point into the actual segmentation point set segment_point, where b_0 is the abscissa of the first actual segmentation point in the set of actual segmentation points.
Optionally, the determining module is further configured to: when first coefficient × W < interval < second coefficient × W, determine that the interval contains one character, i.e. take blank_point[i+1] as the actual division point.
Optionally, the determining module is further configured to: when second coefficient × W < interval ≤ third coefficient × W, determine that the interval contains 3 adhered characters, calculate the average width w of the characters in the interval, take blank_point[i] as the starting point, and determine the actual division points by the following steps: Step A: calculate the abscissa seg_point of the first adhesion division point in the interval according to the formula seg_point = start + w, expand a first preset number of pixels to the left and right with the first adhesion division point as the center, search for the point with the minimum projection value within the expanded range, take that point as the first actual division point in the interval, and update seg_point to the abscissa of this first actual division point; Step B: repeat step A with the updated seg_point as the starting point, thereby obtaining the second actual division point in the interval; Step C: take blank_point[i+1] as the last actual division point in the interval.
Optionally, the determining module is further configured to: when interval > third coefficient × W, determine that the interval contains more than 3 adhered characters, and determine the actual division points of the interval according to the following steps: step a: shrink the interval inward at both ends by a second preset number of pixels; step b: search for the point with the minimum vertical projection value within the shrunken interval, take it as the adhesion division point of the interval, and divide the interval into two subintervals at this adhesion division point; step c: calculate the interval of each subinterval, and determine the actual division points of the subintervals according to the preset recognition rules and the average width of a single character.
Optionally, the character determining module is further configured to: determine the abscissas of adjacent actual division points; and take the pixel points whose abscissas lie between the abscissas of the adjacent actual division points as the pixel points between those division points.
Optionally, the character image to be segmented includes a binary image including only the character to be segmented.
According to a third aspect of an embodiment of the present invention, there is provided an electronic device.
The electronic device of the present invention includes: one or more processors; and the storage device is used for storing one or more programs, and when the one or more programs are executed by the one or more processors, the one or more processors are enabled to realize the character segmentation method provided by the invention.
According to a fourth aspect of embodiments of the present invention, a computer-readable medium is provided.
The computer readable medium of the present invention has stored thereon a computer program which, when executed by a processor, implements the character segmentation method provided by the present invention.
One embodiment of the above invention has the following advantages or benefits: for the condition that a plurality of characters are continuously adhered, different segmentation rules are adopted for segmentation according to different adhered character numbers, so that the plurality of continuous adhered characters can be effectively cut into complete single characters, the condition that a complete character is cut into two halves is avoided, and the recognition accuracy of the characters is improved to a certain extent.
Further effects of the above optional implementations are described below in connection with the embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
FIG. 1 is a schematic view of the projections of numeral 1 to the X-axis and Y-axis, respectively; wherein, (a) is a projection view in the X-axis direction; (b) is a projection view in the Y-axis direction;
FIG. 2 is an image of a plurality of characters;
FIG. 3 is a schematic view of a plurality of characters projected onto the X-axis and Y-axis, respectively; wherein, (a) is a projection view in the X-axis direction; (b) is a projection view in the Y-axis direction;
FIG. 4 is a schematic diagram of the character image after binarization;
FIG. 5 is a schematic diagram of a method of segmenting characters according to an embodiment of the present invention;
FIG. 6 is a schematic illustration of a pre-processing area to be intercepted in an example of an insurance document;
FIG. 7 is an image of a pre-processed region binarized;
FIG. 8 is an image of the binarized picture after erosion processing and edge detection;
fig. 9 is a projection view of the edge detection image after horizontal projection;
FIG. 10 is an image of an extracted policy number text line;
FIG. 11 is a projection view of a policy number text line image after vertical projection;
FIG. 12 is a diagram of a locating policy number;
FIG. 13 is a vertical projection of a policy number image;
FIG. 14 is an image of a single character after segmentation;
FIG. 15 is a schematic diagram of a character segmentation apparatus according to an embodiment of the present invention;
Fig. 16 is a schematic diagram of a computer system suitable for use in implementing the terminal device of the embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will now be described with reference to the accompanying drawings, in which various details of the embodiments of the present application are included to facilitate understanding, and are to be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 5 is a schematic diagram of a method for dividing characters according to an embodiment of the present application, and as shown in fig. 5, the method for dividing characters according to an embodiment of the present application includes the following steps S50 to S53.
Step S50: project the character image to be segmented in the vertical direction, and search the projected image for the middle points of the blank intervals as pre-segmentation points, thereby obtaining a pre-segmentation point set. In this step, the character image to be segmented is a binary image containing only the characters to be segmented. A blank interval consists of points whose vertical projection values are smaller than a set value; for example, with a set value of 2, a column whose vertical projection value is smaller than 2 pixels is considered to contain no character information. Note that the first pre-segmentation point is preset to the coordinate of the blank position before the first character; if there is no such blank position, the coordinate of the first pre-segmentation point is set to 0. When searching for the middle points of the blank intervals, first search the projection image for points with projection values smaller than the set value (i.e., points whose vertical projection is smaller than 2 pixels), recording the abscissa of each such point in turn; then, from the abscissas of adjacent points, calculate in turn the abscissa of the middle point of each pair of adjacent points, thereby obtaining a pre-segmentation point coordinate set blank_point = {b_0, b_1, ..., b_i, ..., b_m}, where m represents the total number of pre-segmentation points and is less than or equal to the total number of characters N_char, and b_i represents the abscissa of the i-th pre-segmentation point.
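Step S50 can be sketched in pure Python as below. The grouping of consecutive blank columns into runs and the handling of a missing leading blank follow the description above; the function name, the default set value of 2, and the use of integer midpoints are illustrative assumptions:

```python
def pre_segmentation_points(sum_x, set_value=2):
    """Midpoint of every blank interval (a run of consecutive columns whose
    vertical projection value is below set_value). If the image starts with
    characters rather than a blank run, 0 is used as the first point."""
    blanks = [x for x, v in enumerate(sum_x) if v < set_value]
    points, start = [], None
    for j, x in enumerate(blanks):
        if start is None:
            start = x
        if j + 1 == len(blanks) or blanks[j + 1] != x + 1:
            points.append((start + x) // 2)   # midpoint of the finished run
            start = None
    if sum_x[0] >= set_value:                 # no blank before the first character
        points.insert(0, 0)
    return points
```

For a projection like `[5, 5, 0, 0, 0, 5, 0, 5]` this yields the pre-segmentation set `[0, 3, 6]`: a preset 0, the midpoint of the three-column blank run, and the single blank column.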
Step S51: the average width of the individual characters is calculated from the total number of characters and the total width of the characters. In this step, the average width of individual characters is calculated according to the following formula, the average width of individual characters=the total width of characters/the total number of characters.
Step S52: traversing the pre-segmentation point set, calculating the interval between two adjacent pre-segmentation points, and determining an actual segmentation point set by combining the average width of the single characters. In the step, firstly traversing a pre-segmentation point set blank_point, and then respectively calculating interval between the abscissa coordinates of two adjacent pre-segmentation points;
wherein interval = blank_point[i+1] - blank_point[i], i ∈ [0, m);
then comparing the size relation of the interval and the average width W of the single character, determining the abscissa of the actual segmentation point according to a preset recognition rule, and writing the actual segmentation point into the actual segmentation point set segment_point; wherein b_0 is the abscissa of the first actual segmentation point in the set of actual segmentation points.
The relationship between interval and W falls into one of the following three cases:
When first coefficient × W < interval < second coefficient × W, it is determined that the interval contains one character, i.e. blank_point[i+1] is an actual division point; here the second coefficient is greater than the first coefficient.
When second coefficient × W < interval ≤ third coefficient × W, where the third coefficient is greater than the second coefficient, it is determined that the interval contains 3 adhered characters; the average width w of the characters in the interval is calculated, blank_point[i] is taken as the starting point start, and the actual division points are determined by the following steps:
Step A: calculate the abscissa seg_point of the first adhesion division point in the interval according to the formula seg_point = start + w; expand a first preset number of pixels to the left and right with the first adhesion division point as the center, search for the point with the minimum projection value within the expanded range, take that point as the first actual division point in the interval, and update seg_point to the abscissa of this first actual division point;
Step B: repeat step A with the seg_point value updated in step A as the starting point, thereby obtaining the second actual division point in the interval;
Step C: take blank_point[i+1] as the last actual division point in the interval.
When interval > third coefficient × W, it is determined that the interval contains more than 3 adhered characters, and the actual division points of the interval are determined according to the following steps:
step a: shrink the interval inward at both ends by a second preset number of pixels;
step b: search for the point with the minimum vertical projection value within the shrunken interval, take it as the adhesion division point of the interval, and divide the interval into two subintervals at this adhesion division point;
step c: calculate the interval of each subinterval, and determine the actual division points of the subintervals according to the preset recognition rules and the average width of a single character.
Step S53: traverse the actual segmentation point set and determine the pixel points between adjacent actual segmentation points, thereby obtaining the segmented single-character images of the character image to be segmented. In this step, first the abscissas of adjacent actual division points are determined; then the pixel points whose abscissas lie between the abscissas of the adjacent actual division points are taken as the pixel points between those division points; finally, a single-character image is obtained from the pixel points between each pair of adjacent actual segmentation points.
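The interval rules of steps S51–S52 can be sketched as below. The patent leaves the concrete coefficients and pixel margins open, so the values used here (second coefficient 1.5, third coefficient 3.5, expansion of 3 pixels, shrink of 2 pixels) and all names are illustrative assumptions:

```python
def actual_segmentation_points(blank_point, sum_x, W,
                               c2=1.5, c3=3.5, expand=3, shrink=2):
    """Traverse adjacent pre-segmentation points and apply the three interval
    rules; sum_x is the vertical projection and W the average single-character
    width (total character width / total number of characters)."""
    seg = [blank_point[0]]

    def cut(left, right):
        interval = right - left
        if interval < c2 * W:                 # rule 1: a single character
            seg.append(right)
        elif interval <= c3 * W:              # rule 2: 3 adhered characters
            w = interval / 3.0                # average width inside the interval
            start = left
            for _ in range(2):                # two inner adhesion cuts
                guess = int(round(start + w))
                lo = max(left, guess - expand)
                hi = min(right, guess + expand) + 1
                # column with the smallest projection near the guessed cut
                cut_x = min(range(lo, hi), key=lambda x: sum_x[x])
                seg.append(cut_x)
                start = cut_x
            seg.append(right)
        else:                                 # rule 3: more than 3 characters
            lo, hi = left + shrink, right - shrink
            mid = min(range(lo, hi), key=lambda x: sum_x[x])
            cut(left, mid)                    # recurse on both sub-intervals
            cut(mid, right)

    for i in range(len(blank_point) - 1):
        cut(blank_point[i], blank_point[i + 1])
    return seg
```

Cropping each single-character image is then a matter of slicing the columns between consecutive entries of the returned list.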
The technical scheme of the invention is described in detail below, taking recognition of the policy number on an insurance policy as an example. In this embodiment, the image to be processed is first preprocessed, and a binarized image containing only the characters to be segmented is obtained through several image processing methods. The specific preprocessing process is as follows:
In a policy, the area occupied by the policy number is small relative to the whole policy sheet, and its position is relatively fixed. Therefore, to improve subsequent processing efficiency and avoid wasting computing resources, the input policy image is first converted into a grayscale image, and a region of the grayscale image is intercepted. In the embodiment of the invention, the policy number is assumed to be uniformly located at the upper-right corner of the policy image; taking the upper-right corner of the policy as the starting point, a region of the upper-right corner is cut out with a width of no more than 1/2 of the full image width and a length of no more than 1/6 of the full image length, as shown in fig. 6. This region is defined as the preprocessing region.
The preprocessing region in fig. 6 is binarized adaptively using a locally adaptive threshold method with a local neighbourhood block of size 35 × 35; after binarization, pixels larger than the threshold are set to 255 and pixels smaller than the threshold are set to 0. Owing to the image characteristics of the policy, the pixel value of the background region after binarization is usually 255 and that of the foreground is 0, as shown in fig. 7.
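A minimal numpy stand-in for the locally adaptive thresholding step is sketched below, using an integral image for fast 35 × 35 local means. The offset C subtracted from the local mean is an assumed parameter not given in the text; OpenCV's cv2.adaptiveThreshold provides the same operation directly:

```python
import numpy as np

def adaptive_binarize(gray, block=35, C=10):
    """Local-mean adaptive threshold over a block x block neighbourhood.
    Pixels above the local threshold become background 255, the rest
    foreground 0, matching the polarity described for the policy images."""
    pad = block // 2
    p = np.pad(gray.astype(np.float64), pad, mode='edge')
    S = np.zeros((p.shape[0] + 1, p.shape[1] + 1))
    S[1:, 1:] = p.cumsum(axis=0).cumsum(axis=1)       # integral image
    mean = (S[block:, block:] - S[:-block, block:]
            - S[block:, :-block] + S[:-block, :-block]) / (block * block)
    return np.where(gray > mean - C, 255, 0).astype(np.uint8)
```

On a uniform region every pixel exceeds its local threshold and is mapped to background; an isolated dark pixel falls below it and is kept as foreground.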
The binarized preprocessing region (i.e., fig. 7) is then subjected to horizontal erosion; because the character spacing of the policy number is small, the erosion template should not exceed 3 pixels. This processing removes scattered noise points from fig. 7 and, to a certain extent, strengthens the outline information of the foreground characters, improving the accuracy of the subsequent text-line positioning.
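One plausible reading of the horizontal erosion step is sketched below in numpy: erosion is applied to the dark (0) foreground mask, so an isolated dark noise pixel with no horizontal neighbours is removed. The template width k = 3 follows the text; applying erosion to the foreground rather than the background is an interpretive assumption:

```python
import numpy as np

def horizontal_erode(bw, k=3):
    """Horizontal erosion of the dark (0) foreground with a 1 x k template:
    a foreground pixel survives only if its k-wide horizontal neighbourhood
    is entirely foreground, which removes isolated noise points."""
    fg = (bw == 0)
    w = fg.shape[1]
    p = np.pad(fg, ((0, 0), (k // 2, k // 2)), constant_values=False)
    keep = np.ones_like(fg)
    for d in range(k):                 # AND across the k horizontal offsets
        keep &= p[:, d:d + w]
    return np.where(keep, 0, 255).astype(np.uint8)
```

In OpenCV the equivalent would be an erode/dilate with a 1 × 3 structuring element, with the direction of the operation chosen to match the image polarity.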
Sobel edge detection is then applied to the eroded binary picture; the result is shown in fig. 8. In the edge-detected picture, only the outermost outline of each character remains, so that when projecting, errors in the projection caused by font and stroke thickness are reduced, the projected value is kept related as far as possible only to the size and number of the characters, and the relative positions of the characters can conveniently be located from the projected image.
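The Sobel operation can be sketched without OpenCV by combining shifted views of the edge-padded image (an illustrative stand-in for cv2.Sobel; |gx| + |gy| is the usual L1 approximation of the gradient magnitude):

```python
import numpy as np

def sobel_magnitude(img):
    """Sobel edge response: 3x3 horizontal and vertical kernels applied
    via shifted slices of the padded image, combined as |gx| + |gy|."""
    p = np.pad(img.astype(np.float64), 1, mode='edge')
    gx = ((p[:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:])
          - (p[:-2, :-2] + 2 * p[1:-1, :-2] + p[2:, :-2]))
    gy = ((p[2:, :-2] + 2 * p[2:, 1:-1] + p[2:, 2:])
          - (p[:-2, :-2] + 2 * p[:-2, 1:-1] + p[:-2, 2:]))
    return np.abs(gx) + np.abs(gy)
```

Constant regions give zero response, while character boundaries light up, which is exactly why the subsequent projection becomes insensitive to stroke thickness.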
In the technical scheme of the embodiment of the invention, as shown in fig. 9, the relative positions of the two lines of explanatory text below the policy number are fixed, and their character heights and character counts are close, i.e., their horizontal projection profiles are almost identical. The position of the explanatory-text line immediately adjacent to the policy number is therefore the most reliable anchor. In one embodiment of the present invention, the upper boundary of the explanatory-text line immediately adjacent to the policy number may be located by the following steps:
Step 1): obtain the maximum horizontal projection value proj_max from the horizontal projection horiz_proj, and set a threshold thred as the criterion for extracting explanatory-text lines:
thred=proj_max*0.6
step 2): traverse horiz_proj, comparing the horizontal projection value of each row with the threshold, and collect consecutive rows whose projection values exceed the threshold into a segment list sublist storing their row coordinates, sublist = [row_i, row_{i+1}, …, row_{i+n}], where i is the starting row coordinate and the run spans n+1 consecutive rows. According to the number of rows occupied by the explanatory text, a segment list extracted with thred should contain at least 10 rows, so runs that do not meet this condition are not processed further. After the traversal, the segment list set seg_list = [sublist_1, sublist_2, …, sublist_m] is obtained, where m is the number of segment lists. Note that, because the rows are processed from top to bottom, the first value in sublist_2 is always greater than the last value in sublist_1.
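Steps 1) and 2) can be sketched as a simple run-collection over the projection values. A minimal sketch, assuming horiz_proj is a list of per-row projection values; the function name is hypothetical:

```python
def extract_segments(horiz_proj, min_len=10):
    """Collect runs of consecutive rows whose horizontal projection exceeds
    thred = 0.6 * max(horiz_proj); runs shorter than min_len rows are dropped."""
    thred = max(horiz_proj) * 0.6
    seg_list, sublist = [], []
    for row, v in enumerate(horiz_proj):
        if v > thred:
            sublist.append(row)          # extend the current run
        else:
            if len(sublist) >= min_len:  # close the run if long enough
                seg_list.append(sublist)
            sublist = []
    if len(sublist) >= min_len:          # flush a run ending at the last row
        seg_list.append(sublist)
    return seg_list
```

Because rows are scanned top to bottom, each returned sublist starts after the previous one ends, matching the note above.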
Step 3): the multiple segment lists obtained in step 2) are evaluated and screened to obtain the upper-boundary row coordinate count_row of the explanatory-text line immediately adjacent to the policy number. The specific operations are as follows:
a: calculating the maximum value of each segment list sublist and the row coordinate corresponding to the maximum value;
b: by traversing seg_list, the distance between two lines of explanatory text is very small, and the distance between two sublist representing the explanatory text is set to be no more than 15. When two adjacent sublist meets the condition, calculating the mean and variance of the maximum values of the two sublist;
c: select the adjacent pair sublist_i, sublist_{i+1} with the minimum variance as the located explanatory-text lines; the first element of sublist_i is then the upper-boundary row coordinate count_row of the explanatory-text line immediately adjacent to the policy number.
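Sub-steps a to c can be sketched as follows. A minimal sketch, assuming seg_list comes from the run-collection of step 2) and horiz_proj is the row projection; the function name is hypothetical:

```python
def locate_caption_rows(seg_list, horiz_proj, max_gap=15):
    """Among adjacent segment pairs whose gap is at most max_gap rows,
    pick the pair whose two peak projection values have minimum variance,
    and return the first row of the earlier segment (count_row)."""
    best = None
    for a, b in zip(seg_list, seg_list[1:]):
        if b[0] - a[-1] > max_gap:      # the two lines must be close together
            continue
        pa = max(horiz_proj[r] for r in a)   # peak of each segment
        pb = max(horiz_proj[r] for r in b)
        mean = (pa + pb) / 2
        var = ((pa - mean) ** 2 + (pb - mean) ** 2) / 2
        if best is None or var < best[0]:
            best = (var, a[0])          # keep the pair with minimum variance
    return None if best is None else best[1]
```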
Since there is no obvious adhesion between the explanatory text and the policy-number line, horiz_proj is scanned upward starting from the upper boundary count_row: the midpoint of the first run of zero projection values is taken as the lower boundary of the policy number, and the midpoint of the second run of zero projection values is taken as its upper boundary. The policy-number text line extracted according to the located upper and lower boundary coordinates is shown in fig. 10.
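The upward scan for the two zero-projection runs can be sketched like this. A minimal sketch; the helper name is hypothetical, and boundary handling at the very top of the image is simplified:

```python
def zero_run_midpoints(horiz_proj, count_row, n=2):
    """Scan upward from count_row and return the midpoints of the first n
    runs of zero projection: [lower boundary, upper boundary] for n = 2."""
    mids, row = [], count_row
    while row >= 0 and len(mids) < n:
        while row >= 0 and horiz_proj[row] != 0:  # skip non-zero rows
            row -= 1
        if row < 0:
            break
        end = row
        while row >= 0 and horiz_proj[row] == 0:  # walk through the zero run
            row -= 1
        start = row + 1
        mids.append((start + end) // 2)           # midpoint of the zero run
    return mids
```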
Finally, the policy-number text line in fig. 10 is projected in the vertical direction, as shown in fig. 11. Because the label characters "policy number:" and the policy-number characters have an obvious and fixed positional relationship, the position where the projection value is 0 (the dashed box in the figure) can be used to cut the label "policy number:" away from the policy-number characters and to locate the left and right boundaries of the policy number. An accurate localization of the policy number is thus obtained, as shown in fig. 12.
After the policy-number image of fig. 12 is obtained, it can be seen that some characters are stuck together. The image in fig. 12 is projected vertically (the projection is shown in fig. 13), and the midpoint of each blank interval is taken as a pre-segmentation point. Here "blank" means that the vertical projection value is no higher than 2 pixels, i.e., the column is considered to contain no character information. Note that the first pre-segmentation point is preset to the coordinate of the blank position before the first character; if there is no such blank position, its coordinate is set to 0.
The midpoint of each blank interval is calculated from the vertical projection, yielding the pre-segmentation point coordinate set blank_point = {b_0, b_1, …, b_i, …, b_m}, where m represents the total number of pre-segmentation points, which is less than or equal to the total number of characters N_char, and b_i represents the coordinate of the i-th pre-segmentation point. The spacing of two adjacent pre-segmentation points can be expressed as interval = blank_point[i+1] - blank_point[i], i ∈ [0, m).
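Computing blank_point from the vertical projection can be sketched as follows. A minimal sketch using the patent's blank criterion (projection value no higher than 2); the function name is hypothetical:

```python
def pre_segmentation_points(vert_proj, blank_max=2):
    """Midpoints of blank runs (vertical projection <= blank_max) become
    pre-segmentation points; if the image does not start with a blank run,
    the first point is forced to x = 0."""
    points, x, n = [], 0, len(vert_proj)
    while x < n:
        if vert_proj[x] <= blank_max:
            start = x
            while x < n and vert_proj[x] <= blank_max:  # walk the blank run
                x += 1
            points.append((start + x - 1) // 2)         # midpoint of the run
        else:
            x += 1
    if not points or vert_proj[0] > blank_max:
        points.insert(0, 0)   # no blank before the first character
    return points
```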
As can be seen from fig. 13, the span between two adjacent pre-segmentation points may contain one isolated policy character or several stuck-together policy characters. The average single-character width W can be obtained from the total number of characters and the overall width of the policy number. By comparing the interval between two adjacent pre-segmentation points with the average single-character width, it can be judged whether the span contains stuck-together characters:
When the first coefficient × W < interval < the second coefficient × W, the span is determined to contain one character, and blank_point[i+1] is an actual segmentation point; the second coefficient is greater than the first coefficient. In this embodiment, the first coefficient is 0.6 and the second coefficient is 1.2; that is, when 0.6×W < interval ≤ 1.2×W, the span is determined to contain only one character, so blank_point[i+1] is an actual segmentation point.
When the second coefficient × W < interval ≤ the third coefficient × W, where the third coefficient is set to 3.2 (i.e., when 1.2×W < interval ≤ 3.2×W), the span is determined to contain 3 stuck-together characters. The average character width w within the span is calculated, blank_point[i] is taken as the starting point start, and the actual segmentation points are determined by the following steps:
step A: calculate the abscissa seg_point of the first adhesion segmentation point in the span according to the formula seg_point = start + w. Because the character widths are not exactly equal, this equal-division point is not necessarily an actual segmentation point; therefore, expand a first preset number of pixels (3 in this embodiment) to the left and right of the adhesion segmentation point, search the expanded range for the point with the minimum projection value, take that point as the first actual segmentation point of the span, and update seg_point to the abscissa of this first actual segmentation point;
step B: repeat step A with the updated seg_point value as the starting point, thereby obtaining the second actual segmentation point of the span;
step C: take blank_point[i+1] as the last actual segmentation point of the span.
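Steps A to C for a span judged to hold up to 3 stuck-together characters can be sketched as follows. A minimal sketch; the function name, parameter defaults, and the first-match tie-breaking of equal minima are illustrative assumptions:

```python
def split_sticky(vert_proj, left, right, n_chars=3, expand=3):
    """Equal-division split of a sticky run [left, right] into n_chars pieces,
    refining each equal-division point to the minimum-projection column
    within +/- expand pixels of it."""
    w = (right - left) / n_chars        # average character width in the span
    seg_points, start = [], left
    for _ in range(n_chars - 1):
        guess = int(round(start + w))   # equal-division candidate
        lo = max(left + 1, guess - expand)
        hi = min(right - 1, guess + expand)
        # refine to the column with minimum projection in the expanded range
        best = min(range(lo, hi + 1), key=lambda x: vert_proj[x])
        seg_points.append(best)
        start = best                    # step B: repeat from the refined point
    seg_points.append(right)            # step C: blank_point[i+1] closes the span
    return seg_points
```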
When interval > 3.2×W, the span is determined to contain more than 3 stuck-together characters, and its actual segmentation points are determined by the following steps:
step a: shrink the span inward by a second preset number of pixels at each end;
step b: search the shrunken span for the point with the minimum vertical projection value, take it as an adhesion segmentation point, and split the span into two subintervals at that point;
step c: calculate the interval of each subinterval, and determine the actual segmentation points of the subintervals according to the preset recognition rules and the average width of a single character.
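Steps a to c for an over-wide span can be sketched recursively. A heavily simplified sketch: the patent re-applies the full width rules to each subinterval, whereas here a subinterval that is narrow enough is simply kept as a single piece; the function name and the `shrink` default are hypothetical:

```python
def split_long_run(vert_proj, left, right, W, shrink=4):
    """For a run wider than 3.2*W: shrink the run by `shrink` pixels at each
    end, cut at the minimum-projection column inside, then recurse on any
    half that is itself still wider than 3.2*W."""
    inner = range(left + shrink, right - shrink + 1)       # step a: shrink
    cut = min(inner, key=lambda x: vert_proj[x])           # step b: min column
    points = []
    for lo, hi in ((left, cut), (cut, right)):             # step c: recurse
        if hi - lo > 3.2 * W:
            points += split_long_run(vert_proj, lo, hi, W, shrink)
        else:
            points.append(hi)  # simplified: treat the narrow half as one piece
    return sorted(set(points))
```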
Once all the actual segmentation points have been found by the above steps, the actual segmentation point coordinate set segment_point is traversed to collect the pixel points of each segmentation span, thereby obtaining the segmented single-character images, as shown in fig. 14.
Finally, as in the prior art, a character recognition model can be obtained by training samples with a convolutional neural network. In the prediction stage, a character image is input and the recognition model outputs the character with the highest recognition probability. The recognition model in the embodiment of the invention includes, but is not limited to, a convolutional neural network; common supervised classifiers such as KNN and SVM can also be used.
Fig. 15 is a schematic diagram of a character segmentation apparatus according to an embodiment of the present invention. As shown in fig. 15, the character segmentation apparatus 150 according to the embodiment of the present invention mainly includes: a projection module 151, a calculation module 152, a determination module 153, and a character determination module 154; the projection module 151 is configured to perform vertical projection on a character image to be segmented, and find a middle point of a blank interval from the projected image as a pre-segmentation point, so as to obtain a pre-segmentation point set; the blank interval is a point of which the projection value in the vertical direction is smaller than a set value; the calculating module 152 is configured to calculate an average width of the individual characters according to the total number of characters and the total width of characters; the determining module 153 is configured to traverse the pre-segmentation point set, calculate an interval between two adjacent pre-segmentation points, and determine an actual segmentation point set by combining an average width of the single character; the character determining module 154 is configured to traverse the actual segmentation point set and determine pixel points between adjacent actual segmentation points, so as to obtain a segmented single character map of the character image to be segmented; the character picture to be segmented comprises a binarized image only containing characters to be segmented.
The projection module 151 of the character segmentation apparatus 150 may also be used to: search the projection image for points whose projection values are smaller than the set value, and record the abscissa of each such point in turn; according to the abscissas of each pair of adjacent points, calculate the abscissa of their midpoint in turn, thereby obtaining the pre-segmentation point coordinate set blank_point = {b_0, b_1, …, b_i, …, b_m}; wherein m represents the total number of pre-segmentation points, which is less than or equal to the total number of characters N_char, and b_i represents the abscissa of the i-th pre-segmentation point.
The calculation module 152 of the character segmentation apparatus 150 may also be configured to calculate the average width of a single character according to the formula: average width of single character = total width of characters / total number of characters.
The determining module 153 of the character segmentation apparatus 150 may also be used to: traverse the pre-segmentation point set blank_point and calculate the interval between the abscissas of each pair of adjacent pre-segmentation points; compare interval with the average single-character width W, determine the abscissas of the actual segmentation points according to the preset recognition rules, and write them into the actual segmentation point set segment_point, where b_0 is the abscissa of the first actual segmentation point in the set.
The determining module 153 of the character segmentation apparatus 150 may also be used to: when the first coefficient × W < interval < the second coefficient × W, determine that the span contains one character, i.e., blank_point[i+1] is an actual segmentation point; wherein the second coefficient is greater than the first coefficient.
The determining module 153 of the character segmentation apparatus 150 may also be used to: when the second coefficient × W < interval ≤ the third coefficient × W, where the third coefficient is greater than the second coefficient, determine that the span contains 3 stuck-together characters, calculate the average character width w within the span, take blank_point[i] as the starting point start, and determine the actual segmentation points by the following steps: step A: calculate the abscissa seg_point of the first adhesion segmentation point in the span according to the formula seg_point = start + w, expand a first preset number of pixels to the left and right of that point, search the expanded range for the point with the minimum projection value, take it as the first actual segmentation point of the span, and update seg_point to its abscissa; step B: repeat step A with the updated seg_point value as the starting point, thereby obtaining the second actual segmentation point of the span; step C: take blank_point[i+1] as the last actual segmentation point of the span.
The determining module 153 of the character segmentation apparatus 150 may also be used to: when interval > the third coefficient × W, determine that the span contains more than 3 stuck-together characters, and determine its actual segmentation points by the following steps: step a: shrink the span inward by a second preset number of pixels at each end; step b: search the shrunken span for the point with the minimum vertical projection value, take it as an adhesion segmentation point, and split the span into two subintervals at that point; step c: calculate the interval of each subinterval and determine the actual segmentation points of the subintervals according to the preset recognition rules and the average width of a single character.
The character determining module 154 of the character segmentation apparatus 150 may also be configured to: determine the abscissas of adjacent actual segmentation points, and take the pixel points whose abscissas lie between those of two adjacent actual segmentation points as the pixel points between the actual segmentation points.
Referring now to FIG. 16, there is illustrated a schematic diagram of a computer system 1600 suitable for use in implementing a terminal device of an embodiment of the present application. The terminal device shown in fig. 16 is only an example, and should not impose any limitation on the functions and the scope of use of the embodiment of the present application.
As shown in fig. 16, the computer system 1600 includes a central processing unit (CPU) 1601 that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1602 or a program loaded from a storage section 1608 into a random access memory (RAM) 1603. Various programs and data required for the operation of the system 1600 are also stored in the RAM 1603. The CPU 1601, ROM 1602, and RAM 1603 are connected to each other by a bus 1604. An input/output (I/O) interface 1605 is also connected to the bus 1604.
The following components are connected to the I/O interface 1605: an input portion 1606 including a keyboard, a mouse, and the like; an output portion 1607 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker, and the like; a storage section 1608 including a hard disk or the like; and a communication section 1609 including a network interface card such as a LAN card, a modem, or the like. The communication section 1609 performs communication processing via a network such as the internet. The drive 1610 is also connected to the I/O interface 1605 as needed. A removable medium 1611 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is installed as needed on the drive 1610 so that a computer program read out therefrom is installed into the storage section 1608 as needed.
In particular, according to embodiments of the present disclosure, the character segmentation process may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the character segmentation method of the present application. In such embodiments, the computer program may be downloaded and installed from a network via the communication portion 1609, and/or installed from the removable media 1611. The above-described functions defined in the system of the present application are performed when the computer program is executed by the central processing unit (CPU) 1601.
The computer readable medium shown in the present application may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules involved in the embodiments of the present application may be implemented in software or in hardware. The described modules may also be provided in a processor, for example, as: a processor includes a projection module, a calculation module, a determination module, and a character determination module. The names of the units do not limit the unit itself in some cases, for example, the projection module may also be described as a module for searching for intermediate points of a blank interval from the projection image as pre-segmentation points, so as to obtain a pre-segmentation point set.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist alone without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by a device, cause the device to: project the character image to be segmented in the vertical direction, and find the midpoints of the blank intervals in the projection image as pre-segmentation points, thereby obtaining a pre-segmentation point set, wherein a blank interval consists of points whose vertical projection values are smaller than a set value; calculate the average width of a single character from the total number of characters and the total character width; traverse the pre-segmentation point set, calculate the interval between each pair of adjacent pre-segmentation points, and determine the actual segmentation point set in combination with the average single-character width; and traverse the actual segmentation point set and determine the pixel points between adjacent actual segmentation points, thereby obtaining the segmented single-character images of the character image to be segmented.
According to the technical scheme provided by the embodiment of the application, for the condition that a plurality of characters are continuously adhered, different segmentation rules are adopted for segmentation according to different adhered character numbers, so that the plurality of continuous adhered characters can be effectively cut into complete single characters, the condition that a complete character is cut into two halves is avoided, and the character recognition accuracy is improved to a certain extent.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives can occur depending upon design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (20)

1. A character segmentation method, comprising:
projecting the character image to be segmented in the vertical direction, and searching the intermediate points of the blank intervals from the projected image as pre-segmentation points, so as to obtain a pre-segmentation point set; the blank interval is a point of which the projection value in the vertical direction is smaller than a set value;
calculating the average width of the single character according to the total number of characters and the total width of the characters;
traversing the pre-segmentation point set, calculating the interval between two adjacent pre-segmentation points, and determining an actual segmentation point set by combining the average width of the single character; the method comprises the following steps: judging whether the adjacent two pre-segmentation points contain adhesion characters or not by comparing the interval between the adjacent two pre-segmentation points and the average width of a single character, and determining the actual segmentation point set according to a judgment result;
And traversing the actual segmentation point set, and determining pixel points between adjacent actual segmentation points, so as to obtain a single segmented character graph of the character image to be segmented.
2. The method of claim 1, wherein the step of finding intermediate points of the blank space from the projection image as pre-segmentation points, thereby obtaining a set of pre-segmentation points comprises:
searching points with projection values smaller than a set value from the projection image, and sequentially recording the abscissa of each point;
according to the abscissas of two adjacent points, the abscissa of the midpoint of the two adjacent points is calculated in turn, so that a pre-segmentation point coordinate set blank_point = {b_0, b_1, …, b_i, …, b_m} is obtained; wherein m represents the total number of pre-segmentation points and is less than or equal to the total number of characters N_char; b_i represents the abscissa of the i-th pre-segmentation point.
3. The method of claim 1, wherein the step of calculating an average width of individual characters from the total number of characters and the total width of characters comprises:
the average width of individual characters is calculated according to the following formula, average width of individual characters=total width of characters/total number of characters.
4. The method of claim 1, wherein the steps of traversing the set of pre-segmentation points, calculating the spacing of two adjacent pre-segmentation points, and determining the set of actual segmentation points in combination with the average width of the single character comprise:
Traversing the pre-segmentation point set blank_point, and then respectively calculating the interval between the abscissas of two adjacent pre-segmentation points;
wherein interval = blank_point [ i+1] -blank_point [ i ], i e [0, m);
comparing the interval with the average width W of the single character, determining the abscissa of the actual segmentation point according to a preset recognition rule, and writing the actual segmentation point into the actual segmentation point set segment_point; wherein b_0 is the abscissa of the first actual segmentation point in the set of actual segmentation points.
5. The method of claim 4, wherein the step of comparing the sizes of interval and W to determine the actual division point according to a preset recognition rule comprises:
when the first coefficient × W < interval < the second coefficient × W, determining that the interval contains one character, namely blank_point[i+1] is an actual segmentation point; wherein the second coefficient is greater than the first coefficient.
6. The method of claim 4, wherein the step of comparing the sizes of interval and W to determine the actual division point according to a preset recognition rule comprises:
when the second coefficient × W < interval ≤ the third coefficient × W, wherein the third coefficient is greater than the second coefficient, determining that the interval contains 3 stuck-together characters, calculating the average width w of the characters in the interval, taking blank_point[i] as a starting point start, and determining the actual segmentation points by the following steps:
Step A: calculating an abscissa seg_point of a first adhesion division point in the interval according to the following formula, wherein seg_point=start+w, expanding a first preset number of pixels on the left and right sides by taking the first adhesion division point as the center, searching a point with the minimum projection value in the expanded range, taking the point with the minimum projection value as a first actual division point in the interval, and updating the seg_point value according to the abscissa of the first actual division point in the interval;
step B: repeatedly executing step A by taking the updated seg_point value in step A as the starting point, thereby obtaining a second actual segmentation point in the interval;
step C: and taking the blank_point [ i+1] as the last actual division point in the interval.
7. The method of claim 4, wherein the step of comparing the sizes of interval and W to determine the actual division point according to a preset recognition rule comprises:
when interval > the third coefficient × W, determining that the interval comprises stuck-together characters with a number of characters greater than 3, and determining the actual segmentation points of the interval according to the following steps:
step a: shrinking the interval inward by a second preset number of pixels at each end;
Step b: searching a point with the minimum vertical projection value in the interval after the retraction, taking the point as an adhesion dividing point of the interval, and dividing the interval into two subintervals according to the adhesion dividing point;
step c: calculating the interval of each subinterval, and determining the actual segmentation points of the subintervals according to the preset recognition rules and the average width of the single character.
8. The method of claim 1, wherein the step of traversing the set of actual segmentation points to determine pixel points between adjacent actual segmentation points comprises:
determining the abscissa of adjacent actual division points;
and taking the pixel points with the pixel point abscissas belonging to the adjacent practical dividing point abscissas as the pixel points between the practical dividing points.
9. The method according to any one of claims 1 to 8, wherein the character picture to be segmented comprises a binarized image containing only characters to be segmented.
10. A character segmentation apparatus, comprising:
a projection module, used for projecting a character image to be segmented in the vertical direction, and searching the projection image for the intermediate point of each blank interval as a pre-segmentation point, thereby obtaining a pre-segmentation point set; wherein a blank interval consists of points whose projection values in the vertical direction are smaller than a set value;
a calculation module, used for calculating the average width of a single character according to the total number of characters and the total width of the characters;
a determining module, used for traversing the pre-segmentation point set, calculating the interval between every two adjacent pre-segmentation points, and determining an actual segmentation point set in combination with the average width of a single character; and in particular for judging, by comparing the interval between two adjacent pre-segmentation points with the average width of a single character, whether touching characters lie between the two adjacent pre-segmentation points, and determining the actual segmentation point set according to the judgment result;
and a character determining module, used for traversing the actual segmentation point set and determining the pixel points between adjacent actual segmentation points, so as to obtain the segmented single-character images of the character image to be segmented.
11. The apparatus of claim 10, wherein the projection module is further configured to:
searching the projection image for points whose projection values are smaller than the set value, and recording the abscissa of each such point in order;
and calculating in turn, from the abscissas of every two adjacent points, the abscissa of the midpoint of the two adjacent points, thereby obtaining a pre-segmentation point coordinate set blank_point = {b_0, b_1, ..., b_i, ..., b_m}; wherein m represents the total number of pre-segmentation points and is less than or equal to the total number of characters N_char; b_i represents the abscissa of the i-th pre-segmentation point.
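The projection and midpoint search described in claims 10 and 11 can be sketched as below. This is a minimal illustration rather than the patented implementation: `img` is assumed to be a binarized image given as a list of pixel rows with character pixels set to 1, and `threshold` stands in for the claim's "set value" (both names are mine):

```python
def pre_segmentation_points(img, threshold=0):
    """Project a binarized image vertically and return the midpoint abscissa
    of every blank interval (a run of columns whose projection <= threshold)."""
    projection = [sum(col) for col in zip(*img)]   # per-column pixel counts
    points = []
    start = None
    for x, value in enumerate(projection):
        if value <= threshold:
            if start is None:
                start = x                          # a blank interval opens here
        elif start is not None:
            points.append((start + x - 1) // 2)    # interval closed at column x-1
            start = None
    if start is not None:                          # a blank run reaches the right edge
        points.append((start + len(projection) - 1) // 2)
    return points
```

Each returned abscissa is one element b_i of the blank_point set; an image with no blank columns yields an empty set.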
12. The apparatus of claim 10, wherein the calculation module is further configured to:
calculate the average width of a single character according to the following formula: average width of a single character = total width of the characters / total number of the characters.
13. The apparatus of claim 10, wherein the determining module is further configured to:
traversing the pre-segmentation point set blank_point, and respectively calculating the interval between the abscissas of every two adjacent pre-segmentation points;
comparing the interval with the average width W of a single character, determining the abscissas of the actual segmentation points according to a preset recognition rule, and writing the actual segmentation points into the actual segmentation point set segment_point; wherein b_0 is the abscissa of the first actual segmentation point in the set of actual segmentation points.
14. The apparatus of claim 13, wherein the determining module is further configured to:
when first coefficient × W < interval ≤ second coefficient × W, determining that the interval contains one character, i.e., blank_point[i+1] is an actual segmentation point; wherein the second coefficient is greater than the first coefficient.
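Taken together, the recognition rules of claims 14 to 16 can be sketched as a single classifier. The coefficient values below (0.5, 1.5, 2.5) and the handling of the boundary cases are illustrative assumptions of mine; the patent only requires first coefficient < second coefficient < third coefficient:

```python
def classify_interval(interval, avg_width, c1=0.5, c2=1.5, c3=2.5):
    """Classify the gap between two adjacent pre-segmentation points by
    comparing it with the average single-character width W (= avg_width).
    c1 < c2 < c3 are illustrative coefficients, not values from the patent."""
    if c1 * avg_width < interval <= c2 * avg_width:
        return "one character"          # claim 14: right endpoint is a split point
    if c2 * avg_width < interval <= c3 * avg_width:
        return "three sticky characters"            # claim 15
    if interval > c3 * avg_width:
        return "more than three sticky characters"  # claim 16
    return "no character"               # interval too narrow to hold a character
```

The string labels map onto the three branches the subsequent claims handle separately.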
15. The apparatus of claim 13, wherein the determining module is further configured to:
when second coefficient × W < interval ≤ third coefficient × W, wherein the third coefficient is greater than the second coefficient, determining that the interval contains 3 touching characters, calculating the average width w of the characters within the interval, taking blank_point[i] as the starting point start, and determining the actual segmentation points through the following steps:
Step A: calculating the abscissa seg_point of the first touching-character segmentation point in the interval according to the formula seg_point = start + w; expanding the range by a first preset number of pixels on each side of this point; searching for the point with the minimum projection value within the expanded range; taking that point as the first actual segmentation point in the interval; and updating seg_point to the abscissa of this first actual segmentation point;
Step B: repeating Step A with the seg_point value updated in Step A as the starting point, thereby obtaining the second actual segmentation point in the interval;
Step C: taking blank_point[i+1] as the last actual segmentation point in the interval.
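Steps A to C can be sketched as follows, assuming a per-column `projection` array for the line image; `expand` stands in for the "first preset number of pixels" and its value is illustrative:

```python
def split_three_sticky(projection, left, right, expand=3):
    """Split the interval [left, right] believed to hold 3 touching characters:
    estimate each cut at start + w, then snap it to the minimum-projection
    column within +/- expand pixels of the estimate (expand is illustrative)."""
    w = (right - left) / 3.0             # average width of the characters inside
    splits = []
    seg_point = left                     # start = blank_point[i]
    for _ in range(2):                   # steps A and B: two interior cuts
        estimate = int(round(seg_point + w))
        lo = max(left, estimate - expand)
        hi = min(right, estimate + expand)
        # pick the column with the smallest projection in the expanded range
        best = min(range(lo, hi + 1), key=lambda x: projection[x])
        splits.append(best)
        seg_point = best                 # update seg_point to the found abscissa
    splits.append(right)                 # step C: blank_point[i+1] closes the interval
    return splits
```

Snapping to the local projection minimum lets the cut land in the thin "valley" between two glyphs even when the width estimate is slightly off.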
16. The apparatus of claim 13, wherein the determining module is further configured to:
when interval > third coefficient × W, determining that the interval contains touching characters numbering more than 3, and determining the actual segmentation points of the interval through the following steps:
step a: shrinking the interval by a second preset number of pixels at both the front and the rear;
step b: searching for the point with the minimum vertical projection value within the shrunken interval, taking that point as a touching-character segmentation point of the interval, and dividing the interval into two subintervals at that point;
step c: calculating the interval of each subinterval, and determining the actual segmentation points of each subinterval according to the preset recognition rules and the average width of a single character.
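Steps a to c describe a recursive binary split. A sketch, with `shrink` standing in for the "second preset number of pixels" and `c3` for the third coefficient (both values illustrative), and with intervals small enough for the other rules simply closed at their right endpoint:

```python
def split_many_sticky(projection, left, right, avg_width, shrink=2, c3=2.5):
    """Recursively split an interval holding more than 3 touching characters:
    shrink both ends, cut at the minimum-projection column of what remains,
    then re-apply the width rule to each half. Returns the right endpoint of
    every resulting piece (shrink and c3 are illustrative values)."""
    interval = right - left
    if interval <= c3 * avg_width:       # small enough: handled by claims 14/15
        return [right]
    lo, hi = left + shrink, right - shrink          # step a: shrink both ends
    cut = min(range(lo, hi + 1), key=lambda x: projection[x])  # step b
    # step c: recurse into the two subintervals
    return (split_many_sticky(projection, left, cut, avg_width, shrink, c3)
            + split_many_sticky(projection, cut, right, avg_width, shrink, c3))
```

Shrinking the ends first keeps the cut away from the interval borders, which are already segmentation points.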
17. The apparatus of claim 10, wherein the character determining module is further configured to:
determining the abscissas of adjacent actual segmentation points;
and taking the pixel points whose abscissas lie between the abscissas of the adjacent actual segmentation points as the pixel points between those actual segmentation points.
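In array terms, the pixel extraction of claims 8 and 17 amounts to slicing columns between adjacent segmentation-point abscissas. A minimal sketch (function and parameter names are mine):

```python
def extract_characters(img, segment_points):
    """Cut a binarized line image (list of pixel rows) into single-character
    images: for each pair of adjacent actual segmentation points, keep the
    pixels whose abscissa lies between the two points."""
    chars = []
    for left, right in zip(segment_points, segment_points[1:]):
        chars.append([row[left:right] for row in img])  # columns left <= x < right
    return chars
```

N segmentation points therefore yield N-1 single-character images.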
18. The apparatus according to any one of claims 10 to 17, wherein the character picture to be segmented comprises a binarized image containing only characters to be segmented.
19. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1 to 9.
20. A computer readable medium, on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method according to any one of claims 1 to 9.
CN201710312140.6A 2017-05-05 2017-05-05 Character segmentation method and device Active CN108805128B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710312140.6A CN108805128B (en) 2017-05-05 2017-05-05 Character segmentation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710312140.6A CN108805128B (en) 2017-05-05 2017-05-05 Character segmentation method and device

Publications (2)

Publication Number Publication Date
CN108805128A CN108805128A (en) 2018-11-13
CN108805128B true CN108805128B (en) 2023-11-07

Family

ID=64053713

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710312140.6A Active CN108805128B (en) 2017-05-05 2017-05-05 Character segmentation method and device

Country Status (1)

Country Link
CN (1) CN108805128B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110033004B (en) * 2019-03-25 2021-01-05 广东奥普特科技股份有限公司 Method for identifying adhesive characters
CN110059695B (en) * 2019-04-23 2021-08-27 厦门商集网络科技有限责任公司 Character segmentation method based on vertical projection and terminal
CN110728687B (en) * 2019-10-15 2022-08-02 卓尔智联(武汉)研究院有限公司 File image segmentation method and device, computer equipment and storage medium
CN110738522B (en) * 2019-10-15 2022-12-09 卓尔智联(武汉)研究院有限公司 User portrait construction method and device, computer equipment and storage medium
CN111079762B (en) * 2019-11-26 2022-02-08 合肥联宝信息技术有限公司 Cutting method of adhesive characters and electronic equipment
CN111428069A (en) * 2020-03-11 2020-07-17 中交第二航务工程局有限公司 Construction data acquisition method for slot milling machine
CN111695550B (en) * 2020-03-26 2023-12-08 深圳市新良田科技股份有限公司 Text extraction method, image processing device and computer readable storage medium
CN111783781B (en) * 2020-05-22 2024-04-05 深圳赛安特技术服务有限公司 Malicious term recognition method, device and equipment based on product agreement character recognition
CN112700458A (en) * 2020-12-31 2021-04-23 南京太司德智能电气有限公司 Electric power SCADA warning interface character segmentation and processing method
CN112966678B (en) * 2021-03-11 2023-01-24 南昌航空大学 Text detection method and system
CN113780294B (en) * 2021-09-10 2023-11-14 泰康保险集团股份有限公司 Text character segmentation method and device
CN115953785B (en) * 2023-03-15 2023-05-16 山东薪火书业有限公司 Digital editing system based on teaching aid book content enhancement

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101770576A (en) * 2008-12-31 2010-07-07 北京新岸线网络技术有限公司 Method and device for extracting characters
CN102156865A (en) * 2010-12-14 2011-08-17 上海合合信息科技发展有限公司 Handwritten text line character segmentation method and identification method
CN102496013A (en) * 2011-11-11 2012-06-13 苏州大学 Chinese character segmentation method for off-line handwritten Chinese character recognition
CN104820827A (en) * 2015-04-28 2015-08-05 电子科技大学 Method for recognizing punctiform characters on surfaces of cables

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9836646B2 (en) * 2015-10-15 2017-12-05 I.R.I.S. Method for identifying a character in a digital image

Also Published As

Publication number Publication date
CN108805128A (en) 2018-11-13

Similar Documents

Publication Publication Date Title
CN108805128B (en) Character segmentation method and device
US10896349B2 (en) Text detection method and apparatus, and storage medium
US11164027B2 (en) Deep learning based license plate identification method, device, equipment, and storage medium
US8965127B2 (en) Method for segmenting text words in document images
CN110942074B (en) Character segmentation recognition method and device, electronic equipment and storage medium
US7873215B2 (en) Precise identification of text pixels from scanned document images
US9275030B1 (en) Horizontal and vertical line detection and removal for document images
CN108108734B (en) License plate recognition method and device
US10438083B1 (en) Method and system for processing candidate strings generated by an optical character recognition process
US10169673B2 (en) Region-of-interest detection apparatus, region-of-interest detection method, and recording medium
CN114519858B (en) Document image recognition method and device, storage medium and electronic equipment
Michalak et al. Fast Binarization of Unevenly Illuminated Document Images Based on Background Estimation for Optical Character Recognition Purposes.
CN112801232A (en) Scanning identification method and system applied to prescription entry
CN111507337A (en) License plate recognition method based on hybrid neural network
CN114724133B (en) Text detection and model training method, device, equipment and storage medium
CN109101974B (en) Denoising method and denoising device for linear interference
CN110807457A (en) OSD character recognition method, device and storage device
CN112560856A (en) License plate detection and identification method, device, equipment and storage medium
CN114511862B (en) Form identification method and device and electronic equipment
CN107330470B (en) Method and device for identifying picture
CN111488870A (en) Character recognition method and character recognition device
CN111767751B (en) Two-dimensional code image recognition method and device
CN114529570A (en) Image segmentation method, image identification method, user certificate subsidizing method and system
CN111383193A (en) Image restoration method and device
CN116994261B (en) Intelligent recognition system for big data accurate teaching intelligent question card image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 101111 Room 221, 2nd Floor, Block C, 18 Kechuang 11th Street, Beijing Economic and Technological Development Zone

Applicant after: Jingdong Technology Holding Co.,Ltd.

Address before: Room 221, floor 2, block C, No. 18, Kechuang 11th Street, Beijing Economic and Technological Development Zone, Beijing 101111

Applicant before: Jingdong Digital Technology Holding Co.,Ltd.

Address after: 101111 Room 221, 2nd Floor, Block C, 18 Kechuang 11th Street, Beijing Economic and Technological Development Zone

Applicant after: Jingdong Digital Technology Holding Co.,Ltd.

Address before: Room 221, floor 2, block C, No. 18, Kechuang 11th Street, Beijing Economic and Technological Development Zone, Beijing 101111

Applicant before: JINGDONG DIGITAL TECHNOLOGY HOLDINGS Co.,Ltd.

Address after: 101111 Room 221, 2nd Floor, Block C, 18 Kechuang 11th Street, Beijing Economic and Technological Development Zone

Applicant after: JINGDONG DIGITAL TECHNOLOGY HOLDINGS Co.,Ltd.

Address before: Room 221, floor 2, block C, No. 18, Kechuang 11th Street, Beijing Economic and Technological Development Zone, Beijing 101111

Applicant before: BEIJING JINGDONG FINANCIAL TECHNOLOGY HOLDING Co.,Ltd.

GR01 Patent grant