CN111767909B - Character recognition method and device and computer readable storage medium - Google Patents


Info

Publication number
CN111767909B
CN111767909B
Authority
CN
China
Prior art keywords
character
clustering
confidence
image
classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010397170.3A
Other languages
Chinese (zh)
Other versions
CN111767909A (en)
Inventor
罗文君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Lianbao Information Technology Co Ltd
Original Assignee
Hefei Lianbao Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Lianbao Information Technology Co Ltd
Priority to CN202010397170.3A
Publication of CN111767909A
Application granted
Publication of CN111767909B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/148Segmentation of character regions
    • G06V30/153Segmentation of character regions using recognition of characters or words
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/22Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Character Input (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a character recognition method, a device and a computer-readable storage medium, wherein the method comprises the following steps: clustering a designated image according to a first clustering parameter to obtain a first classification set, wherein the first classification set includes at least one first classification set; screening the first classification set according to prior information to determine a first character region; classifying the first character region through a classifier to obtain a classification result and confidence information; and when the confidence information meets a preset threshold, determining a character recognition result corresponding to the designated image according to the classification result. By applying the method provided by the embodiments, fuzzy characters can be recognized with high recognition accuracy.

Description

Character recognition method and device and computer readable storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a character recognition method, a character recognition device, and a computer-readable storage medium.
Background
Character recognition is a technology for recognizing characters on a carrier. When it is used to recognize characters on a display screen, the contrast between the characters and the display screen is often low due to the external environment, hardware conditions, properties of the display screen and other factors, which leads to blurred character regions and low recognition accuracy.
Disclosure of Invention
Embodiments of the invention provide a character recognition method, a character recognition device and a computer-readable storage medium with high character recognition accuracy.
An embodiment of the present invention provides a character recognition method, including: clustering the designated images according to the first clustering parameters to obtain a first classification set; wherein the first set of classifications includes at least one first set of classifications; screening the first classification set according to prior information to determine a first character area; classifying the first character region through a classifier to obtain a classification result and confidence information; and when the confidence information meets the preset threshold, determining a character recognition result corresponding to the specified image according to the classification result.
In an embodiment, the method further comprises: when the confidence information does not meet the preset threshold, determining a second clustering parameter according to the set step associated with the first clustering parameter; clustering the designated image based on the second clustering parameter to determine a second character area; the second character area is used for determining a character recognition result corresponding to the designated image.
In an embodiment, before clustering the designated images according to the first clustering parameter, the method further comprises: carrying out binarization segmentation on the designated image to obtain a connected domain; screening the connected domain according to a preset condition to obtain a non-character connected domain; the non-character connected domain is used for preprocessing the appointed images before the appointed images are clustered.
In an embodiment, the filtering the first classification set according to the prior information to determine the first character region includes: determining the difference degree of each first classification set according to the prior information; sorting all the first classification sets according to the difference degree to determine a first classification set with the minimum difference degree; and carrying out binarization processing on the first classification set with the minimum difference degree to obtain a first character area.
In an embodiment, the classifying the first character region by the classifier to obtain a classification result and confidence information includes: carrying out segmentation transformation on the first character area to obtain a character image; adjusting the size of the character image according to an interpolation method to obtain a character image with a preset size; and classifying the character images with the preset size through a classifier to obtain a classification result and confidence information.
In an embodiment, the confidence information includes a current confidence and a current clustering parameter, and the preset threshold includes a confidence threshold and a parameter threshold; correspondingly, when the confidence information meets the preset threshold, determining a character recognition result corresponding to the designated image according to the classification result, including: when the current confidence coefficient meets the confidence coefficient threshold value, determining the classification result as a character recognition result corresponding to the specified image; when the current confidence coefficient does not meet the confidence coefficient threshold value and the current clustering parameter meets the parameter threshold value, determining a next round of clustering parameters according to the set step associated with the current clustering parameter; and when the current confidence coefficient does not meet the confidence coefficient threshold value and the current clustering parameter does not meet the parameter threshold value, acquiring the current confidence coefficient and all the previous confidence coefficients, and sorting the current confidence coefficient and all the previous confidence coefficients to determine the classification result with the maximum corresponding confidence coefficient as the character recognition result corresponding to the specified image.
Another aspect of an embodiment of the present invention provides a character recognition apparatus, including: the clustering module is used for clustering the designated images according to the first clustering parameters to obtain a first classification set; wherein the first set of classifications includes at least one first set of classifications; the screening module is used for screening the first classification set according to prior information to determine a first character area; the classification module is used for classifying the first character region through a classifier to obtain a classification result and confidence information; and the determining module is used for determining the character recognition result corresponding to the specified image according to the classification result when the confidence information meets the preset threshold value.
In an implementation manner, the determining module is further configured to determine, when the confidence information does not satisfy the preset threshold, a second clustering parameter according to a set step associated with the first clustering parameter; the clustering module is further used for clustering the designated images based on the second clustering parameters to determine a second character area; the second character area is used for determining a character recognition result corresponding to the designated image.
In an embodiment, the apparatus further comprises: the segmentation module is used for carrying out binarization segmentation on the specified image to obtain a connected domain; the screening module is further used for screening the connected domain according to a preset condition to obtain a non-character connected domain; the non-character connected domain is used for preprocessing the appointed images before the appointed images are clustered.
In one embodiment, the screening module includes: the determining submodule is used for determining the difference degree of each first classification set according to the prior information; the sorting submodule is used for sorting all the first classification sets according to the difference degree so as to determine the first classification set with the minimum difference degree; and the processing submodule is used for carrying out binarization processing on the first classification set with the minimum difference degree to obtain a first character area.
In an embodiment, the classification module includes: the segmentation submodule is used for carrying out segmentation transformation on the first character area to obtain a character image; the adjusting submodule is used for adjusting the size of the character image according to an interpolation method to obtain a character image with a preset size; and the classification submodule is used for classifying the character images with the preset size through a classifier to obtain a classification result and confidence information.
In an embodiment, the confidence information includes a current confidence and a current clustering parameter, and the preset threshold includes a confidence threshold and a parameter threshold; accordingly, the determining module is specifically configured to: when the current confidence meets the confidence threshold, determine the classification result as the character recognition result corresponding to the specified image; when the current confidence does not meet the confidence threshold and the current clustering parameter meets the parameter threshold, determine a next round of clustering parameters according to the set step associated with the current clustering parameter; and when the current confidence does not meet the confidence threshold and the current clustering parameter does not meet the parameter threshold, acquire the current confidence and all previous confidences, and sort them to determine the classification result with the maximum confidence as the character recognition result corresponding to the specified image.
Another aspect of the embodiments of the present invention provides a computer-readable storage medium, which includes a set of computer-executable instructions, and when executed, is configured to perform any one of the character recognition methods described above.
The character recognition method, device and computer-readable storage medium provided by the embodiments of the invention are used for recognizing the characters in a designated image so as to determine the character recognition result corresponding to those characters. The method is particularly suitable for blurred images and/or blurred characters: because the character recognition result is obtained by clustering the designated image, screening according to prior information, classifying with a classifier and verifying that the confidence requirement is met, the obtained character recognition result has high accuracy.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Fig. 1 is a schematic flow chart illustrating an implementation of a character recognition method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart illustrating an implementation of determining a second character region by a character recognition method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a flow chart of determining a character recognition result by a character recognition method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an implementation flow of image preprocessing of a character recognition method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a flow chart of a method for sorting and screening a set of character recognition according to an embodiment of the present invention;
fig. 6 is a block diagram of a character recognition apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flow chart illustrating an implementation of a character recognition method according to an embodiment of the present invention.
Referring to fig. 1, in one aspect, an embodiment of the present invention provides a character recognition method, where the method includes: operation 101, performing clustering processing on the designated image according to the first clustering parameter to obtain a first classification set; wherein the first classification set comprises at least one first classification set; an operation 102, of screening the first classification set according to the prior information to determine a first character region; operation 103, classifying the first character region through the classifier to obtain a classification result and confidence information; and in operation 104, when the confidence information meets a preset threshold, determining a character recognition result corresponding to the designated image according to the classification result.
The character recognition method provided by the embodiment of the invention is used for recognizing the characters in a designated image so as to determine the character recognition result corresponding to those characters. The method is particularly suitable for blurred images and/or blurred characters: because the character recognition result is obtained by clustering the designated image, screening according to prior information, classifying with a classifier and verifying that the confidence requirement is met, the obtained character recognition result has high accuracy.
In operation 101 of the present invention, a first classification set is obtained by clustering the designated images according to the first clustering parameter, where the first classification set includes at least one first classification set. The number of the first classification sets may be 1 or more than 1. The first classification set is a different cluster set corresponding to the designated image, and the clustering rule of the first classification set is determined based on the first clustering parameter. The designated image may be an original image or a preprocessed image, the designated image includes characters, the characters may be clear character content or fuzzy character content, the characters may be located in a clear background or a fuzzy background, for example, in one case, the characters are fonts with clear boundaries and have a large contrast difference with the background, in another case, the characters are fonts with fuzzy boundaries and shapes and have a large contrast difference with the background, and further, the characters may be complete characters or incomplete characters. For example, only a half of the characters in the designated image, or a plurality of character blocks which are not connected with each other are formed in the designated image, each character block corresponds to one part of the complete characters, and the complete character content is obtained by splicing the character blocks. The background on the designated image can be various carriers for carrying characters, such as screens, paper, cloth and other carriers with the surfaces capable of forming characters, the background can be a single carrier or a spliced and combined carrier of multiple carriers, and the color of the background can comprise one or more. The character is positioned in the middle of the designated image through preprocessing the designated image. For example, in the designated image, the character is located in the first area at the upper right corner, and the other areas except the first area are cut off by preprocessing to obtain the preprocessed designated object, it is understood that in the preprocessing, the purpose is to locate the character at the center position of the designated image, after the preprocessing, the designated image may further include background contents except the character, and according to the setting of the preprocessing, the ratio of the character to the background contents may not be unique, that is, under the condition that the designated image satisfies the character centering, the area occupied by the background contents may be larger than, smaller than or equal to the area occupied by the character. Wherein, the background refers to the content without characters in the designated image, i.e. the content without characters in the designated image. The method includes the steps of performing clustering processing on the preprocessed designated images, classifying character contents and background contents to obtain a first classification set, wherein the first classification set corresponds to the character contents and also corresponds to the background contents, and the number of the first classification sets corresponding to the character contents and the number of the first classification sets corresponding to the background contents can be 1 or more than 1. 
The first clustering parameter is the clustering parameter that the device uses to cluster the image, and, depending on requirements, it need not be unique. It is understood that, depending on the actual background content and character content, there may be more than one first classification set corresponding to the character content and more than one corresponding to the background content. Further, each first classification set obtained by the clustering process is a pixel set. The clustering process may classify the image pixels using K-means clustering, in which case the first clustering parameter is the number of classification categories n, whose value can be set empirically to 2, 3, 4, 5 or any other positive integer. Alternatively, the clustering process may segment the image using ISODATA (Iterative Self-Organizing Data Analysis), which avoids having to set the first clustering parameter. In addition, before clustering, the method converts the preprocessed designated image into a grayscale image, applies Gaussian filtering to the grayscale image to remove noise points, re-determines the denoised image as the designated image, and performs the clustering on this re-determined designated image.
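For illustration only, the following Python sketch (using OpenCV and NumPy) shows one way the grayscale conversion, Gaussian filtering and K-means pixel clustering described above could be implemented; the kernel size, termination criteria and default number of categories are assumed values, not limitations of the method.

```python
import cv2
import numpy as np

def cluster_pixels(designated_image, n_clusters=2):
    """Cluster the pixels of the designated image into n_clusters pixel sets.

    Sketch only: the Gaussian kernel size and the K-means termination
    criteria are assumed values, not taken from the patent.
    """
    gray = cv2.cvtColor(designated_image, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)            # remove noise points

    samples = gray.reshape(-1, 1).astype(np.float32)     # one intensity value per pixel
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, _centers = cv2.kmeans(samples, n_clusters, None, criteria,
                                     3, cv2.KMEANS_PP_CENTERS)

    # Each "first classification set" is the set of (row, col) pixel
    # coordinates assigned to one cluster label.
    label_map = labels.reshape(gray.shape)
    return [np.argwhere(label_map == k) for k in range(n_clusters)]
```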
In operation 102, the prior information may characterize size information or position information corresponding to the fuzzy character, such as the width, height, aspect ratio and position associated with the character. The first classification sets are screened using this prior information. The screening may remove the first classification sets that do not satisfy the prior information, for example removing, according to the width and/or height in the prior information, the first classification sets that do not satisfy the width and/or height condition to obtain the first character region; the screening may also directly select the first classification sets that satisfy the prior condition to obtain the first character region. The first character region corresponds to a first classification set that satisfies the prior information.
In operation 103, the first character region is classified by the classifier to obtain a classification result and confidence information. The classifier is obtained by constructing a convolutional neural network, generating a diversified training image set with a data augmentation method, and training the character classifier on that training image set; the diversified training image set includes, but is not limited to, characters of various clarity levels and fonts. The classifier classifies character images to obtain the classification result and the confidence information of the corresponding character. The classification result is used to determine the character corresponding to the designated image, for example, when the designated image contains the character "A", the classification result obtained by the classifier is the character "A"; the confidence information is used to evaluate how trustworthy the classification result is.
In operation 104 of the method, when the confidence information satisfies the preset threshold, a character recognition result corresponding to the designated image is determined according to the classification result. The preset threshold is determined as required, and is used for distinguishing the confidence level of the confidence level information, for example, the preset threshold is determined to be 0.9, when the confidence level information is greater than 0.9, the confidence level of the classification result is high, and when the confidence level is less than 0.9, the confidence level of the classification result is low. When the confidence information meets a preset threshold value, the confidence of the classification result can be considered to be high, and at the moment, the classification result can be output as a character recognition result corresponding to the specified image; and when the confidence information does not meet the preset threshold, the confidence of the classification result can be considered to be low, at the moment, clustering can be carried out again to obtain the classification result with high corresponding confidence information, and the classification result with high corresponding confidence information is output as the character recognition result of the corresponding specified image.
To facilitate understanding of the above embodiments, a specific implementation scenario is provided below for description.
In this scenario, the method is applied to a character recognition device with a data processing function. The device first obtains an initial image containing the character "A". It then preprocesses the initial image, cutting off the background around the character so that the character "A" is roughly centered (exact centering is not required; the character only needs to be kept near the middle), which yields the designated image. Next, the designated image is clustered according to the first clustering parameter to obtain a first classification set, which comprises a plurality of first classification sets, each of them a pixel set, that are either associated or not associated with the character "A". The first classification sets are then screened according to the prior information corresponding to the character "A": the sets not associated with the character are removed to determine the first classification set corresponding to the character "A", and the character region is determined from that set. Finally, the character region is classified by the classifier; the classification result is the character "A" with confidence 0.95, which satisfies the preset threshold, so the character recognition result is the character "A".
Fig. 2 is a schematic flow chart illustrating an implementation of determining a second character region by a character recognition method according to an embodiment of the present invention.
Referring to fig. 2, in an embodiment of the present invention, the method further comprises: in operation 201, when the confidence information does not satisfy the preset threshold, determining a second clustering parameter according to the set step associated with the first clustering parameter; operation 202, clustering the designated image based on the second clustering parameter to determine a second character region; in operation 203, the second character region is used to determine a character recognition result corresponding to the designated image.
When the confidence information does not meet the preset threshold, clustering processing and subsequent operation processing are required to be carried out on the specified image again by using other clustering parameters different from the first clustering parameter so as to re-determine the classification result and the confidence information. When the first clustering parameter is a parameter corresponding to the K-means clustering method, the second clustering parameter is also a parameter corresponding to the K-means clustering method. The step setting may be a fixed step or a non-fixed step, for example, the step setting may be 1, 2, 3, 4, 5 or any other positive number, the step setting may also be a step associated with the number of cycles, such as n +1, n-1, 2n, n/2, and n is the number of cycles. In an implementation case, the step is set to be 1, the first clustering parameter is 2, and the second clustering parameter is 3, it is understood that if the confidence information corresponding to the second clustering parameter still does not satisfy the confidence threshold, according to the step, a third clustering parameter of 4, a fourth clustering parameter of 5, and a fifth clustering parameter of 6 … may also be generated, which will not be described in detail below. The clustering processing methods corresponding to the first clustering parameter, the second clustering parameter, the third clustering parameter, the fourth clustering parameter, the fifth clustering parameter and the subsequent clustering parameters are the same, and in this embodiment, the clustering processing method adopts a K-means clustering method. Specifically, after finishing clustering according to the Nth clustering parameter, obtaining an Nth classification set; wherein the Nth classification set comprises at least one Nth classification set; screening the Nth classified set according to the same prior information to determine an Nth character region; classifying the Nth character region through a classifier to obtain a classification result and confidence information; and when the confidence information meets a preset threshold, determining a character recognition result corresponding to the specified image according to the classification result.
Fig. 3 is a schematic flow chart illustrating an implementation of determining a character recognition result by a character recognition method according to an embodiment of the present invention.
Referring to fig. 3, in the embodiment of the present invention, the confidence information includes a current confidence and a current clustering parameter, and the preset threshold includes a confidence threshold and a parameter threshold; correspondingly, in operation 104, when the confidence information satisfies the preset threshold, determining a character recognition result corresponding to the designated image according to the classification result, including: in operation 1041, when the current confidence meets the confidence threshold, determining the classification result as a character recognition result corresponding to the designated image; operation 1042, when the current confidence does not meet the confidence threshold and the current clustering parameter meets the parameter threshold, determining a next round of clustering parameters according to the set step associated with the current clustering parameter; in operation 1043, when the current confidence does not satisfy the confidence threshold and the current clustering parameter does not satisfy the parameter threshold, the current confidence and all previous confidences are obtained, and the current confidence and all previous confidences are sorted, so that the classification result with the maximum corresponding confidence is determined as the character recognition result corresponding to the designated image.
The preset threshold of the method includes a confidence threshold and a parameter threshold, where the confidence threshold is a threshold associated with the confidence information and the parameter threshold is a threshold associated with the number of parameter rounds. To avoid the situation in which the confidence information never meets the confidence-related threshold, the threshold judgment conditions of the method are as follows:
and when the current confidence coefficient meets the confidence coefficient threshold value, determining the classification result as the character recognition result of the corresponding specified image. For example, the confidence threshold is 0.9, the current confidence obtained according to the first clustering parameter is 0.91, and the confidence threshold is satisfied, that is, the classification result corresponding to the first clustering parameter is determined as the character recognition result corresponding to the designated image.
And when the current confidence coefficient does not meet the confidence coefficient threshold value and the current clustering parameter meets the parameter threshold value, determining the next round of clustering parameters according to the set step associated with the current clustering parameter. The current confidence coefficient refers to a current confidence coefficient value obtained in a parameter threshold value, and the next round of clustering parameters can be determined step by step according to the setting associated with the current clustering parameters under the condition that the current clustering parameters meet the parameter threshold value. For example, the confidence threshold is 0.9, the parameter threshold is 5 rounds, the current confidence obtained according to the first clustering parameter is 0.8, the 1 st round is judged to be smaller than the parameter threshold 5 rounds, the second clustering parameter is generated according to the first clustering parameter and the set step, the current confidence obtained according to the second clustering parameter is 0.85, the 2 nd round is judged to be smaller than the parameter threshold 5 rounds, the third clustering parameter is generated according to the second clustering parameter and the set step, the current confidence obtained according to the third clustering parameter is 0.91, the confidence threshold is met, and the classification result corresponding to the third clustering parameter is determined to be the character recognition result corresponding to the designated image. It will be appreciated that the parameter threshold may be determined by a specific value of the clustering parameter, or by a value associated with the clustering parameter, such as the number of rounds of the clustering parameter or a threshold formula associated with the clustering parameter.
When the current confidence does not meet the confidence threshold and the current clustering parameter does not meet the parameter threshold, the current confidence and all previous confidences are acquired and sorted, and the classification result with the maximum confidence is determined as the character recognition result corresponding to the designated image. For example, with a confidence threshold of 0.9 and a parameter threshold of 5 rounds, suppose the confidences obtained with the first to fifth clustering parameters are 0.81, 0.82, 0.85, 0.82 and 0.86 respectively. After the 5th round the parameter threshold is no longer satisfied, so the five confidences are sorted; the highest confidence, 0.86, corresponds to the fifth clustering parameter, and the classification result obtained with the fifth clustering parameter is therefore determined as the character recognition result corresponding to the designated image.
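The threshold logic of operations 1041 to 1043 can be illustrated with the following Python sketch; the helper `cluster_and_classify`, as well as the concrete confidence threshold of 0.9, parameter threshold of 5 rounds, step of 1 and initial parameter of 2, are hypothetical values taken from the examples above rather than requirements of the method.

```python
def recognize_character(image, cluster_and_classify,
                        conf_threshold=0.9, max_rounds=5, step=1, first_param=2):
    """Iterate clustering rounds until the confidence threshold is met.

    `cluster_and_classify(image, k)` is assumed to run operations 101-103
    with clustering parameter k and return (label, confidence).
    """
    param = first_param
    history = []                                   # (confidence, label) per round
    for _round in range(max_rounds):
        label, confidence = cluster_and_classify(image, param)
        history.append((confidence, label))
        if confidence >= conf_threshold:           # operation 1041
            return label
        param += step                              # operation 1042: next-round parameter
    # operation 1043: no round met the threshold, return the best result so far
    _best_conf, best_label = max(history, key=lambda pair: pair[0])
    return best_label
```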
Fig. 4 is a schematic diagram of an implementation flow of image preprocessing of a character recognition method according to an embodiment of the present invention.
Referring to fig. 4, in the embodiment of the present invention, in operation 101, before performing clustering processing on a specific image according to a first clustering parameter, the method further includes: operation 401, performing binarization segmentation on the specified image to obtain a connected domain; operation 402, screening connected domains according to preset conditions to obtain non-character connected domains; the non-character connected domain is used for preprocessing the appointed images before clustering the appointed images.
Preprocessing the designated image not only centers the characters but also screens them according to preset conditions. Specifically, one preset condition may be that the aspect ratio of the character is satisfied: the method may binarize the image using the maximum inter-class variance method to obtain a binarized image, segment the binarized image into a plurality of connected domains, and, according to prior information about the aspect ratio of the character, compare that aspect ratio with the aspect ratio of the minimum bounding rectangle of each connected domain, determining any connected domain that does not meet the condition as a non-character connected domain. For example, when the aspect ratio of the character is greater than 1, a connected domain whose aspect ratio is smaller than 1 is determined as a non-character connected domain. Since the character is known to be centered, another preset condition may be based on the salient edges of the image: after the connected domain segmentation of the binary image, the method traverses the connected domains, and those located within a certain range around the border of the image are determined as non-character connected domains. It is understood that the connected domains removed based on the preset conditions are the non-character connected domains. The method also merges the remaining connected domains. The image is then preprocessed based on the remaining connected domains: the content of the image corresponding to the merged remaining connected domains is determined as the designated image, and the other content is removed. Equivalently, the image may be preprocessed according to the non-character connected domains, removing the parts of the image corresponding to them and determining the remaining image as the designated image.
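As an illustrative sketch of operations 401 and 402 and of the preprocessing described above, the following Python code uses Otsu binarization and connected-component analysis; the aspect-ratio bounds and the edge margin are assumed placeholder values, since the method only requires statistical character aspect ratios and a range around the image border.

```python
import cv2
import numpy as np

def preprocess(original, min_ratio=0.5, max_ratio=2.0, edge_margin=5):
    """Rough sketch of the preprocessing: binarize, segment connected
    domains, drop non-character domains, merge the rest and crop.
    min_ratio/max_ratio and edge_margin are illustrative assumptions."""
    gray = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)   # max inter-class variance
    n, _labels, stats, _centroids = cv2.connectedComponentsWithStats(binary)

    h_img, w_img = binary.shape
    keep = []
    for i in range(1, n):                                   # label 0 is the background
        x, y, w, h, _area = stats[i]
        ratio = w / float(h)
        on_edge = (x < edge_margin or y < edge_margin or
                   x + w > w_img - edge_margin or y + h > h_img - edge_margin)
        if min_ratio <= ratio <= max_ratio and not on_edge:
            keep.append((x, y, w, h))                        # character-like connected domain
    if not keep:
        return original                                      # nothing kept, return unchanged

    # merge the remaining connected domains and crop the designated image
    x0 = min(b[0] for b in keep); y0 = min(b[1] for b in keep)
    x1 = max(b[0] + b[2] for b in keep); y1 = max(b[1] + b[3] for b in keep)
    return original[y0:y1, x0:x1]
```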
Fig. 5 is a schematic flow chart illustrating an implementation of sorting set screening of a character recognition method according to an embodiment of the present invention.
Referring to fig. 5, in an embodiment of the present invention, the operation 102, of filtering the first classification set according to the prior information to determine the first character region, includes: operation 1021, determining a difference degree of each first classification set according to the prior information; operation 1022, sorting all the first sorted sets according to the difference degree to determine the first sorted set with the smallest difference degree; in operation 1023, a binarization process is performed on the first classification set with the minimum difference to obtain a first character area.
In operation 1021, the prior information may be the height of the character and/or the degree of positional deviation of the character. When the prior information is the height of the character, the method first projects each first classification set in the horizontal direction and removes the pixel sets that do not satisfy the height condition, which can be judged by the following formula:

thod1 < region_h < thod2

where region_h is the height of each first classification set, thod1 is the statistical minimum of the height of the character, and thod2 is the statistical maximum of the height of the character. A first character region is determined using the first classification sets that satisfy the height condition.
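A minimal sketch of the height-condition check, assuming each first classification set is stored as an array of (row, column) pixel coordinates as in the clustering sketch above:

```python
import numpy as np

def height_satisfied(pixel_set, thod1, thod2):
    """Check thod1 < region_h < thod2 for one first classification set.

    thod1 and thod2 are the statistical minimum and maximum character heights.
    """
    rows = pixel_set[:, 0]
    region_h = rows.max() - rows.min() + 1      # height of the horizontal projection
    return thod1 < region_h < thod2
```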
When the prior information is the position deviation of the character, the method may traverse the first classification sets and calculate, for each of them, the position deviation degree from the center of the image, which describes how far the first classification set deviates from the image center. The position deviation degree dis_degree is computed from the N pixels (x_i, y_i) of the first classification set and the width w and height h of the image. Then, according to the prior information that the character is at the salient central position of the image, the position deviation degrees of the first classification sets are sorted, and the pixel set with the minimum position deviation degree is considered to contain the pixels of the character region; that is, the first classification set with the minimum position deviation degree is binarized to obtain the first character region.

When the prior information includes both the height of the character and the position deviation of the character, the height prior is first used to obtain the pixel sets that satisfy the height condition, the position deviation is then used to obtain the first classification set with the minimum position deviation degree, and that set is binarized into a binary image, namely the first character region.
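Since the exact formula appears in the original only as an image, the following sketch shows one plausible position-deviation computation consistent with the stated variables (N pixels (x_i, y_i), image width w and height h): it averages the normalized distance of the pixels from the image center. This is an assumption for illustration, not the published formula.

```python
import numpy as np

def position_deviation(pixel_set, w, h):
    """One plausible form of the position deviation degree dis_degree.

    pixel_set is an array of (row, col) coordinates of one first
    classification set; w and h are the image width and height.
    """
    ys = pixel_set[:, 0].astype(float)          # y_i (row coordinate)
    xs = pixel_set[:, 1].astype(float)          # x_i (column coordinate)
    n = len(pixel_set)                          # N, number of pixels in the set
    return np.sum(np.sqrt(((xs - w / 2) / w) ** 2 +
                          ((ys - h / 2) / h) ** 2)) / n
```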
In this embodiment of the present invention, in operation 103, classifying the first character region by the classifier to obtain a classification result and confidence information, including: operation 1031, performing segmentation transformation on the first character region to obtain a character image; operation 1032, performing size adjustment on the character image according to an interpolation method to obtain a character image with a preset size; in operation 1033, the character images with the preset size are classified by the classifier, and a classification result and confidence information are obtained.
It is to be understood that the number of characters in the first character region may be one or more. When there are several characters, the characters in the same line are segmented using a touching-character (adhesion) segmentation method to obtain single character images. The position and size of each single character image are then normalized: the single character image is projected in the horizontal and vertical directions to remove the blank areas, and the character image is then converted by bilinear interpolation to the same size as the images in the training set, i.e. to the preset input size required by the classifier. Finally, the character image is classified by the classifier to obtain the classification result and the confidence information.
In order to facilitate understanding of the above embodiments, a detailed description is provided below. In this scenario, the character recognition method provided by the embodiment of the present invention is applied to a character recognition device having a data processing function.
When the device acquires an image containing characters, it first preprocesses the image, i.e. centers the characters in the image and removes the salient edges of the image. Specifically, the device binarizes the image using the maximum inter-class variance method to obtain a binary image, segments the binary image into a plurality of connected domains, calculates the aspect ratio of the minimum bounding rectangle of each connected domain, and uses the statistical aspect ratio of the minimum bounding rectangle of the character as a screening condition to remove the connected domains that do not meet it; it then traverses the connected domains, removes those located around the border of the image, merges the remaining connected domains, crops the image according to the merged remaining connected domains, and determines the cropped image as the designated image.
Then, the color image is converted into a gray image, the gray image is subjected to Gaussian filtering processing to obtain a filtered image, pixels of the filtered image are classified by using a K-means clustering method, the number of classification categories is n, and the value of n can be set to be 2, 3, 4, 5 or other positive numbers according to experience. In the first round, n is set to 2. A classified pixel set is obtained, and the classified pixel set comprises a plurality of pixel sets.
Horizontal projection is performed on the classified pixel sets, and the pixel sets whose heights do not meet the condition are removed, leaving the pixel sets whose heights meet the condition. The judgment condition is:

thod1 < region_h < thod2

where region_h is the height of each pixel set, thod1 is the statistical minimum of the character height, and thod2 is the statistical maximum of the character height.
Then, the pixel sets whose heights meet the condition are traversed, and the position deviation degree of each such pixel set from the center of the image is calculated. The position deviation degree dis_degree is computed from the N pixels (x_i, y_i) of the pixel set and the width w and height h of the image, and describes how far the pixel set is displaced from the image center. According to the prior information that the character is at the salient central position of the image, the calculated position deviation degrees are sorted, and the pixel set corresponding to the minimum position deviation degree is determined as the character region pixel set.
And then, carrying out image binarization processing on the character area pixel set to obtain a binarized character image, and segmenting the binarized character image by using an adhesion character segmentation method to obtain a single character image.
And then, carrying out character position and size normalization processing on the single character image, specifically, firstly, carrying out horizontal direction and vertical direction projection on the single character image, removing a blank area of the character image, and then, converting the character image into the same size as the image in the training set by using a bilinear interpolation method to obtain the single character image which can be used for inputting the classifier.
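A minimal Python sketch of the projection-based cropping and bilinear resizing described above; the 32x32 target size is an assumed value, since the method only requires the size used by the training images.

```python
import cv2
import numpy as np

def normalize_character(char_binary, target_size=(32, 32)):
    """Crop a single binarized character with horizontal/vertical
    projections and rescale it with bilinear interpolation."""
    rows = np.where(char_binary.sum(axis=1) > 0)[0]   # horizontal projection
    cols = np.where(char_binary.sum(axis=0) > 0)[0]   # vertical projection
    if len(rows) == 0 or len(cols) == 0:
        return cv2.resize(char_binary, target_size, interpolation=cv2.INTER_LINEAR)
    cropped = char_binary[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
    return cv2.resize(cropped, target_size, interpolation=cv2.INTER_LINEAR)
```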
Finally, the single character image prepared for the classifier is input into the classifier, which classifies the character image to obtain its class and class confidence (score). The result is then judged against the following conditions.
If the score does not satisfy score > 0.9 and the number of clustering rounds has not yet reached the parameter threshold, the image pixels are classified again using K-means clustering with a classification category number n', where n' can be set empirically to 2, 3, 4, 5 or another positive integer; more specifically, n' is determined from n and the preset step. For example, with the step set to 1, n' is 3 in the second round. It can be understood that if the score of that round still does not satisfy 0.9, the number of classification categories in the next round is 4, and so on, which is not described in detail below.
If the score does not satisfy score > 0.9 and the number of clustering rounds has reached the parameter threshold, the class confidences obtained in this round and all previous rounds are sorted, and the character image class corresponding to the maximum class confidence is taken as the final character recognition result.
If score > 0.9, the character image class corresponding to that class confidence is output as the character recognition result.
The training method of the classifier is as follows: according to character image templates, a data augmentation method is used to generate a diversified data set, a convolutional neural network is constructed, and the classifier is trained on that data set. Feature extraction through the convolutional neural network provides stronger feature expression capability and therefore higher character classification accuracy. Alternatively, the classifier may be trained using HOG features with an SVM.
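For illustration, the following PyTorch sketch shows a minimal convolutional classifier of the kind described; the layer sizes, the 32x32 single-channel input and the use of softmax scores as class confidences are assumptions, and the patent equally allows a HOG plus SVM classifier.

```python
import torch
import torch.nn as nn

class CharClassifier(nn.Module):
    """Minimal convolutional classifier sketch for single-character images.

    Assumes 1x32x32 inputs; num_classes is the size of the character set.
    """
    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        logits = self.classifier(x.flatten(1))
        # softmax scores play the role of the class confidence used in the method
        return logits.softmax(dim=1)
```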
Fig. 6 is a block diagram of a character recognition apparatus according to an embodiment of the present invention.
Referring to fig. 6, another aspect of the present invention provides a character recognition apparatus, including: the clustering module 601 is configured to perform clustering processing on the designated image according to the first clustering parameter to obtain a first classification set; wherein the first classification set comprises at least one first classification set; a screening module 602, configured to screen the first classification set according to the prior information to determine a first character region; a classification module 603, configured to classify the first character region by a classifier, and obtain a classification result and confidence information; the determining module 604 is configured to determine, when the confidence information satisfies a preset threshold, a character recognition result corresponding to the designated image according to the classification result.
In this embodiment of the present invention, the determining module 604 is further configured to determine, when the confidence information does not satisfy the preset threshold, a second clustering parameter according to the set step associated with the first clustering parameter; the clustering module 601 is further configured to perform clustering processing on the designated image based on the second clustering parameter to determine a second character region; the second character area is used for determining a character recognition result corresponding to the designated image.
In an embodiment of the present invention, the apparatus further includes: a segmentation module 605, configured to perform binarization segmentation on the specified image to obtain a connected domain; the screening module 602 is further configured to screen the connected domain according to a preset condition to obtain a non-character connected domain; the non-character connected domain is used for preprocessing the appointed images before clustering the appointed images.
In an embodiment of the present invention, the screening module 602 includes: a determining submodule 6021, configured to determine a difference of each first classification set according to the prior information; the sorting submodule 6022 is configured to sort all the first sorted sets according to the difference degree to determine a first sorted set with a minimum difference degree; the processing submodule 6023 is configured to perform binarization processing on the first classification set with the minimum difference degree to obtain a first character region.
In this embodiment of the present invention, the classification module 603 includes: a segmentation submodule 6031, configured to perform segmentation transformation on the first character region to obtain a character image; an adjusting submodule 6032 configured to perform size adjustment on the character image according to an interpolation method to obtain a character image of a preset size; the classification submodule 6033 is configured to classify the character image with the preset size by using the classifier, and obtain a classification result and confidence information.
In the embodiment of the invention, the confidence information includes the current confidence and the current clustering parameter, and the preset threshold includes a confidence threshold and a parameter threshold; accordingly, the determining module 604 is specifically configured to: when the current confidence meets the confidence threshold, determine the classification result as the character recognition result corresponding to the specified image; when the current confidence does not meet the confidence threshold and the current clustering parameter meets the parameter threshold, determine the next round of clustering parameters according to the set step associated with the current clustering parameter; and when the current confidence does not meet the confidence threshold and the current clustering parameter does not meet the parameter threshold, acquire the current confidence and all previous confidences, and sort them to determine the classification result with the maximum confidence as the character recognition result corresponding to the specified image.
In another aspect, the present invention provides a computer-readable storage medium comprising a set of computer-executable instructions that, when executed, perform any of the character recognition methods described above.
In the description herein, references to the terms "one embodiment," "some embodiments," "an example," "a specific example," "some examples," and the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the various embodiments or examples, and the features of different embodiments or examples, described in this specification, provided that they do not contradict each other.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
The above description covers only specific embodiments of the present invention, but the scope of the present invention is not limited thereto. Any person skilled in the art can easily conceive of changes or substitutions within the technical scope disclosed by the present invention, and such changes or substitutions shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (6)

1. A method of character recognition, the method comprising:
clustering the designated image according to a first clustering parameter to obtain first classification sets; wherein at least one first classification set is obtained;
screening the first classification sets according to prior information to determine a first character area;
classifying the first character region through a classifier to obtain a classification result and confidence information;
when the confidence information meets a preset threshold, determining a character recognition result corresponding to the designated image according to the classification result;
when the confidence information does not meet the preset threshold, determining a second clustering parameter according to a set step size associated with the first clustering parameter;
clustering the designated image based on the second clustering parameter to determine a second character area; the second character area is used for determining the character recognition result corresponding to the designated image;
before clustering the designated images according to the first clustering parameter, the method further comprises:
carrying out binarization segmentation on an original image to obtain a connected domain;
screening the connected domain according to a preset condition to obtain a non-character connected domain; the non-character connected domain is used for preprocessing the original image before the designated image is subjected to clustering processing.
2. The method of claim 1, wherein the screening the first classification sets according to prior information to determine a first character region comprises:
determining the difference degree of each first classification set according to the prior information;
sorting all the first classification sets according to the difference degree to determine a first classification set with the minimum difference degree;
and carrying out binarization processing on the first classification set with the minimum difference degree to obtain a first character area.
3. The method of claim 1, wherein the classifying the first character region by a classifier to obtain a classification result and confidence information comprises:
carrying out segmentation transformation on the first character area to obtain a character image;
adjusting the size of the character image according to an interpolation method to obtain a character image with a preset size;
and classifying the character images with the preset size through a classifier to obtain a classification result and confidence information.
4. The method of claim 1, wherein the confidence information comprises a current confidence and a current clustering parameter, and the preset threshold comprises a confidence threshold and a parameter threshold;
correspondingly, when the confidence information meets the preset threshold, determining a character recognition result corresponding to the designated image according to the classification result, including:
when the current confidence satisfies the confidence threshold, determining the classification result as the character recognition result corresponding to the designated image;
when the current confidence does not satisfy the confidence threshold and the current clustering parameter satisfies the parameter threshold, determining a next round of clustering parameters according to the set step size associated with the current clustering parameter;
and when the current confidence does not satisfy the confidence threshold and the current clustering parameter does not satisfy the parameter threshold, acquiring the current confidence and all previous confidences, and sorting them to determine the classification result with the maximum confidence as the character recognition result corresponding to the designated image.
5. A character recognition apparatus, characterized in that the apparatus comprises:
the clustering module is used for clustering the designated image according to a first clustering parameter to obtain first classification sets; wherein at least one first classification set is obtained;
the screening module is used for screening the first classification sets according to prior information to determine a first character area;
the classification module is used for classifying the first character region through a classifier to obtain a classification result and confidence information;
the determining module is used for determining a character recognition result corresponding to the specified image according to the classification result when the confidence information meets a preset threshold;
the determining module is further configured to determine a second clustering parameter according to a set step size associated with the first clustering parameter when the confidence information does not satisfy the preset threshold;
the clustering module is further used for clustering the designated images based on the second clustering parameters to determine a second character area; the second character area is used for determining a character recognition result corresponding to the designated image;
the apparatus further comprises:
the segmentation module is used for carrying out binarization segmentation on the original image to obtain a connected domain;
the screening module is further used for screening the connected domain according to a preset condition to obtain a non-character connected domain; the non-character connected domain is used for preprocessing the original image before the designated image is subjected to clustering processing.
6. A computer-readable storage medium comprising a set of computer-executable instructions that, when executed, perform the character recognition method of any of claims 1-4.
CN202010397170.3A 2020-05-12 2020-05-12 Character recognition method and device and computer readable storage medium Active CN111767909B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010397170.3A CN111767909B (en) 2020-05-12 2020-05-12 Character recognition method and device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010397170.3A CN111767909B (en) 2020-05-12 2020-05-12 Character recognition method and device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111767909A (en) 2020-10-13
CN111767909B (en) 2022-02-01

Family

ID=72719232

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010397170.3A Active CN111767909B (en) 2020-05-12 2020-05-12 Character recognition method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111767909B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112949514A (en) * 2021-03-09 2021-06-11 广州文石信息科技有限公司 Scanned document information processing method and device, electronic equipment and storage medium
CN114758339B (en) * 2022-06-15 2022-09-20 深圳思谋信息科技有限公司 Method and device for acquiring character recognition model, computer equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018125926A1 (en) * 2016-12-27 2018-07-05 Datalogic Usa, Inc Robust string text detection for industrial optical character recognition

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101256631A (en) * 2007-02-26 2008-09-03 富士通株式会社 Method, apparatus, program and readable storage medium for character recognition
CN103279736A (en) * 2013-04-27 2013-09-04 电子科技大学 License plate detection method based on multi-information neighborhood voting
CN103593695A (en) * 2013-11-15 2014-02-19 天津大学 Method for positioning DPM two-dimension code area
CN106650553A (en) * 2015-10-30 2017-05-10 比亚迪股份有限公司 License plate recognition method and system
CN108205670A (en) * 2016-12-16 2018-06-26 杭州海康威视数字技术股份有限公司 A kind of licence plate recognition method and device
CN109034149A (en) * 2017-06-08 2018-12-18 北京君正集成电路股份有限公司 A kind of character identifying method and device
CN108154144A (en) * 2018-01-12 2018-06-12 江苏省新通智能交通科技发展有限公司 A kind of name of vessel character locating method and system based on image
CN108256493A (en) * 2018-01-26 2018-07-06 中国电子科技集团公司第三十八研究所 A kind of traffic scene character identification system and recognition methods based on Vehicular video
CN109800744A (en) * 2019-03-18 2019-05-24 深圳市商汤科技有限公司 Image clustering method and device, electronic equipment and storage medium
CN110991437A (en) * 2019-11-28 2020-04-10 北京嘉楠捷思信息技术有限公司 Character recognition method and device, and training method and device of character recognition model

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Text Detection and Recognition from Scene Images using MSER and CNN; Savita Choudhary et al.; 2018 Second International Conference on Advances in Electronics, Computer and Communications (ICAECC-2018); Oct. 4, 2018; pp. 1-4 *
An adaptive clustering algorithm based on evidential reasoning; Zhang Yang et al.; Modern Navigation; Apr. 30, 2019; pp. 119-124 *
A signal recognition method based on fuzzy C-means clustering and support vector machine; Gu Minjian; Computer & Digital Engineering; Dec. 31, 2013; vol. 41, no. 3; pp. 367-369, 465 *
Text region localization in natural scenes; Huang Xiaoming et al.; Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition); Oct. 31, 2015; vol. 27, no. 5; pp. 700-705 *

Also Published As

Publication number Publication date
CN111767909A (en) 2020-10-13

Similar Documents

Publication Publication Date Title
CN108171104B (en) Character detection method and device
US8442319B2 (en) System and method for classifying connected groups of foreground pixels in scanned document images according to the type of marking
Trier et al. Improvement of “integrated function algorithm” for binarization of document images
US20080310721A1 (en) Method And Apparatus For Recognizing Characters In A Document Image
CN110503054B (en) Text image processing method and device
Aggarwal et al. A robust method to authenticate car license plates using segmentation and ROI based approach
Paunwala et al. A novel multiple license plate extraction technique for complex background in Indian traffic conditions
CN110706235B (en) Far infrared pedestrian detection method based on two-stage cascade segmentation
CN111767909B (en) Character recognition method and device and computer readable storage medium
CN113688838B (en) Red handwriting extraction method and system, readable storage medium and computer equipment
CN112686248B (en) Certificate increase and decrease type detection method and device, readable storage medium and terminal
CN115131590B (en) Training method of target detection model, target detection method and related equipment
CN113971792A (en) Character recognition method, device, equipment and storage medium for traffic sign board
Brisinello et al. Optical Character Recognition on images with colorful background
KR101571681B1 (en) Method for analysing structure of document using homogeneous region
CN113221696A (en) Image recognition method, system, equipment and storage medium
CN112200789A (en) Image identification method and device, electronic equipment and storage medium
JP6377214B2 (en) Text detection method and apparatus
CN112070116A (en) Automatic art painting classification system and method based on support vector machine
Seuret et al. Pixel level handwritten and printed content discrimination in scanned documents
CN115690434A (en) Noise image identification method and system based on expert field denoising result optimization
Nasiri et al. A new binarization method for high accuracy handwritten digit recognition of slabs in steel companies
Deb et al. Statistical characteristics in HSI color model and position histogram based vehicle license plate detection
Sathya et al. Vehicle license plate recognition (vlpr)
Hussain A hybrid approach handwritten character recognition for mizo using artificial neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant