CN111401354A - End-to-end self-adaptive vertical sticky character recognition method - Google Patents
- Publication number
- CN111401354A (application CN202010210522.XA)
- Authority
- CN
- China
- Prior art keywords
- characters
- sticky
- picture
- answer
- character
- Prior art date
- Legal status
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/22—Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/457—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by analysing connectivity, e.g. edge linking, connected component analysis or slices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/14—Image acquisition
- G06V30/148—Segmentation of character regions
- G06V30/153—Segmentation of character regions using recognition of characters or words
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computational Linguistics (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Health & Medical Sciences (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Character Discrimination (AREA)
Abstract
The invention relates to an end-to-end adaptive vertical sticky character recognition method, which comprises the following steps: (1) character positioning: after handprint separation, locating connected components of the characters in the answer area of the picture; (2) character screening: comparing the positioning coordinates against the text-line height of the answer area, screening out coordinate boxes containing at least two characters in the vertical direction, and cropping candidate answer pictures containing sticky characters; (3) size conversion: normalizing the cropped candidate answer pictures containing sticky characters to a uniform size; (4) feature extraction: using a convolutional neural network (CNN) to extract longitudinal feature maps of the sticky characters sequentially from top to bottom along the Y axis, and serializing the feature maps to obtain a time-sequence correlation feature sequence; (5) character recognition: passing the time-sequence correlation feature sequence to a softmax layer for multi-character calculation and classification and outputting the result, completing the recognition of the vertical sticky characters.
Description
Technical Field
The invention relates to the technical field of text OCR, and in particular to an end-to-end adaptive vertical sticky character recognition method.
Background
With the gradual maturing of character OCR technology, a variety of automatic scoring products have emerged. The technology is diversifying, a pronounced trend toward integration and substitution has begun to appear, and traditional OCR increasingly reveals its shortcomings on practical problems, slowly entering a bottleneck period. The rise of artificial intelligence in character OCR has not only demonstrated extraordinary recognition capability and strong momentum of development, but has also broken through the technical barriers of traditional OCR and come into wide use.
In character OCR, handwritten characters in particular are written freely, and letter forms vary greatly from person to person. Handwritten characters frequently stick together vertically and horizontally, and character positioning and segmentation alone can hardly guarantee character integrity. For example, when a student answers on an answer sheet, the limited answer area causes written characters to stick together and overflow the area. This seriously hampers answer positioning, can make it impossible to locate a student's answer accurately, and ultimately prevents correct recognition. End-to-end OCR makes recognition of stuck characters possible and is gradually being applied to complex and variable handwriting, but it rarely addresses characters stuck in the vertical direction, which no longer meets actual, changing needs.
To position answers accurately, this application provides an end-to-end adaptive vertical sticky character recognition method, which solves the recognition of vertical sticky characters and achieves accurate recognition.
Disclosure of Invention
The technical problem the invention aims to solve is to provide an end-to-end adaptive vertical sticky character recognition method that handles the recognition of vertical sticky characters, improves recognition accuracy, and achieves accurate recognition.
In order to solve this technical problem, the invention adopts the following technical scheme: the end-to-end adaptive vertical sticky character recognition method specifically comprises the following steps:
(1) character positioning: after handprint separation, locating connected components of the characters in the answer area of the picture to obtain the positioning coordinates of each character in the answer area;
(2) character screening: comparing the positioning coordinates obtained in step (1) against the text-line height of the answer area, judging and screening out coordinate boxes containing at least two characters in the vertical direction, and cropping candidate answer pictures containing sticky characters;
(3) size conversion: normalizing the cropped candidate answer pictures containing sticky characters to a uniform size;
(4) feature extraction: using a convolutional neural network (CNN) to extract longitudinal feature maps of the sticky characters sequentially from top to bottom along the Y axis, and serializing the feature maps to obtain a time-sequence correlation feature sequence;
(5) character recognition: denoting the time-sequence correlation feature sequence of the sticky characters obtained in step (4) as X_i = {x_1, x_2, x_3, …, x_{i-1}, x_i} (i ≤ n), passing it to the softmax layer for multi-character calculation and classification, and outputting the result by maximum probability, finally completing the recognition of the vertical sticky characters. This realizes the recognition of vertical sticky characters, improves recognition accuracy, and achieves accurate recognition.
By adopting this technical scheme, the end-to-end adaptive vertical sticky character recognition method applies end-to-end feature extraction and classification and switches the propagation direction of the feature sequence from the horizontal X axis to the vertical Y axis for prediction output, thereby achieving vertical sticky character recognition.
As a preferred embodiment of the present invention, the conditions for the size unification in step (3) are: if the height of the answer picture containing sticky characters is greater than 128 dpi, the original height is kept to prevent characters from shrinking or deforming under compression, and the width is uniformly set to 32 dpi; if the height is less than 128 dpi, white borders are added to expand the picture to a height of 128 dpi, and the width is uniformly set to 32 dpi.
As a preferred technical solution of the present invention, the network architecture of the convolutional neural network CNN in step (4) is a ResNet or DenseNet architecture; the adopted architecture is divided into multiple layers, and the convolution kernel size is set to 3×3. The stride can be adjusted to the actual output requirements. The ResNet architecture can be viewed as many combinations of parallel or sequential modules, and it provides rich feature integration while keeping computational complexity low. The DenseNet architecture, by comparison, makes full use of the feature information between layers, strengthens feature propagation, alleviates vanishing gradients, and reduces the number of parameters to some extent. ResNet or DenseNet is preferred for the CNN, but other network architectures can also realize the conversion. ResNet is modeled on VGG19 and modified by adding residual units through a shortcut mechanism; the main changes are that ResNet downsamples directly with stride-2 convolutions and replaces the fully connected layer with a global average pooling layer. An important ResNet design principle is that when the feature map size is halved, the number of feature maps is doubled, preserving the complexity of each network layer. Compared with a plain network, ResNet adds a shortcut between every two layers, forming residual learning; dashed shortcuts indicate a change in the number of feature maps. Besides the 34-layer ResNet, deeper networks can also be constructed, as shown in Table 1. The table shows that the 18-layer and 34-layer ResNets perform residual learning between every two layers, while deeper networks perform residual learning across three layers whose convolution kernels are 1×1, 3×3, and 1×1; notably, the number of feature maps in the hidden layer is relatively small, 1/4 of the number of output feature maps.
As a preferred technical solution of the present invention, the feature extraction in step (4) specifically comprises the following steps:
S41, selecting the network architecture of the convolutional neural network CNN;
S42, extracting longitudinal feature maps of the sticky characters sequentially from top to bottom along the Y axis through the network architecture;
S43, inputting the serialized feature sequence into a bidirectional LSTM (BiLSTM) to learn the temporal correlations, obtaining the time-sequence correlation feature sequence.
As a preferred embodiment of the present invention, the softmax probability in step (5) is computed as:
S_i = e^{X_i} / Σ_j e^{X_j}
where S_i denotes the probability output for a character, with outputs mapped into the range (0, 1), and X_i, X_j are elements of the time-sequence correlation feature sequence.
As a preferred technical solution of the present invention, step (1) locates the characters by connected components, specifically: first, traverse the picture row by row and column by column to the first point P(x, y) with a foreground pixel value, assign it a label, and push the foreground pixels adjacent to that point onto a stack; second, pop the pixel on top of the stack, give it the same label as the first point, and push its adjacent foreground pixels onto the stack, repeating until the stack is empty, which yields one connected region in the picture; finally, repeat these steps until the whole picture has been traversed, obtaining the connected regions of all characters and thereby the positioning coordinates of each character in the answer area of the picture.
Compared with the prior art, the invention has the beneficial effects that the end-to-end adaptive vertical sticky character recognition method solves the recognition of vertical sticky characters, improves recognition accuracy, and achieves accurate recognition.
Drawings
The technical scheme of the invention is further described below with reference to the accompanying drawings:
FIG. 1 is a flow chart of the end-to-end adaptive vertical sticky character recognition method of the present invention;
FIG. 2 is a schematic diagram of the positioning of vertical sticky characters in step (1) of the method;
FIG. 3 is a schematic diagram of the vertical character feature extraction process in step (4) of the method.
Detailed Description
To aid understanding of the present invention, it is described in further detail below with reference to the accompanying drawings and an embodiment, which is provided for illustration only and is not intended to limit the scope of the invention.
Embodiment: as shown in FIG. 1, the end-to-end adaptive vertical sticky character recognition method specifically comprises the following steps:
(1) character positioning: after handprint separation, locating connected components of the characters in the answer area of the picture to obtain the positioning coordinates of each character in the answer area. As shown in FIG. 2, the connected-component localization in step (1) proceeds as follows: first, traverse the picture row by row and column by column to the first point P(x, y) with a foreground pixel value, assign it a label, and push the foreground pixels adjacent to that point onto a stack; second, pop the pixel on top of the stack, give it the same label as the first point, and push its adjacent foreground pixels onto the stack, repeating until the stack is empty, which yields one connected region in the picture; finally, repeat these steps until the whole picture has been traversed, obtaining the connected regions of all characters and thereby the positioning coordinates of each character in the answer area of the picture.
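For illustration, the following is a minimal Python sketch of this stack-based connected-component labeling; it assumes a binarized page image whose foreground pixels are nonzero and uses 4-connectivity, and the function and variable names are hypothetical rather than taken from the patent.

```python
import numpy as np

def connected_components(binary_img):
    """Label foreground pixels with a stack-based flood fill and return
    the label map plus a bounding box (x0, y0, x1, y1) per component."""
    h, w = binary_img.shape
    labels = np.zeros((h, w), dtype=np.int32)
    boxes = {}
    label = 0
    for y in range(h):                         # traverse by rows and columns
        for x in range(w):
            if binary_img[y, x] and labels[y, x] == 0:
                label += 1                     # first pixel of a new region
                labels[y, x] = label
                stack = [(y, x)]
                while stack:                   # repeat until the stack is empty
                    cy, cx = stack.pop()
                    x0, y0, x1, y1 = boxes.get(label, (cx, cy, cx, cy))
                    boxes[label] = (min(x0, cx), min(y0, cy),
                                    max(x1, cx), max(y1, cy))
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w and
                                binary_img[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = label   # label on push: no repeats
                            stack.append((ny, nx))
    return labels, boxes                       # boxes give per-character coords
```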
(2) character screening: comparing the positioning coordinates obtained in step (1) against the text-line height of the answer area, judging and screening out coordinate boxes containing at least two characters in the vertical direction, and cropping candidate answer pictures containing sticky characters;
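The height comparison can be sketched as follows; the 1.5× line-height threshold is an illustrative assumption (the patent only requires that a box span at least two characters vertically), and the helper name is hypothetical.

```python
def screen_sticky_boxes(boxes, line_height, factor=1.5):
    """Keep boxes tall enough to contain at least two vertically stuck
    characters, judged against the text-line height of the answer area.
    The factor is an assumed threshold, not specified by the patent."""
    return [(x0, y0, x1, y1) for (x0, y0, x1, y1) in boxes
            if (y1 - y0) > factor * line_height]
```

Applied to the previous sketch, `screen_sticky_boxes(boxes.values(), line_height)` would yield the coordinate boxes to crop as candidate answer pictures.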
(3) size conversion: normalizing the cropped candidate answer pictures containing sticky characters to a uniform size. The conditions for the size unification in step (3) are: if the height of the answer picture containing sticky characters is greater than 128 dpi, the original height is kept to prevent characters from shrinking or deforming under compression, and the width is uniformly set to 32 dpi; if the height is less than 128 dpi, white borders are added to expand the picture to a height of 128 dpi, and the width is uniformly set to 32 dpi;
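A sketch of this size unification using Pillow, assuming grayscale crops and top-left white padding (the padding position is an assumption; the patent only specifies adding white borders):

```python
from PIL import Image

TARGET_HEIGHT, TARGET_WIDTH = 128, 32   # the sizes specified in step (3)

def normalize_size(crop):
    """Pad crops shorter than 128 with white to full height, keep taller
    crops at their original height, and set the width to 32 in all cases."""
    w, h = crop.size
    if h < TARGET_HEIGHT:                        # expand with white borders
        canvas = Image.new("L", (w, TARGET_HEIGHT), color=255)
        canvas.paste(crop, (0, 0))               # top-left; an assumption
        crop = canvas
    return crop.resize((TARGET_WIDTH, crop.size[1]))  # unify the width only
```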
(4) feature extraction: using a convolutional neural network (CNN) to extract longitudinal feature maps of the sticky characters sequentially from top to bottom along the Y axis, and serializing the feature maps (Map-to-Sequence) to obtain a time-sequence correlation feature sequence;
As shown in FIG. 3, the feature extraction in step (4) specifically comprises the following steps:
S41, selecting the network architecture of the convolutional neural network CNN;
S42, extracting longitudinal feature maps of the sticky characters sequentially from top to bottom along the Y axis through the network architecture;
S43, inputting the serialized feature sequence into a bidirectional LSTM (BiLSTM) to learn the temporal correlations, obtaining the time-sequence correlation feature sequence;
The network architecture of the convolutional neural network CNN in step (4) is a ResNet or DenseNet architecture; the adopted architecture is divided into multiple layers, and the convolution kernel size is set to 3×3. The stride can be adjusted to the actual output requirements. The ResNet architecture can be viewed as many combinations of parallel or sequential modules, and it provides rich feature integration while keeping computational complexity low. The DenseNet architecture, by comparison, makes full use of the feature information between layers, strengthens feature propagation, alleviates vanishing gradients, and reduces the number of parameters to some extent. ResNet is modeled on VGG19 and modified by adding residual units through a shortcut mechanism; the main changes are that ResNet downsamples directly with stride-2 convolutions and replaces the fully connected layer with a global average pooling layer. An important ResNet design principle is that when the feature map size is halved, the number of feature maps is doubled, preserving the complexity of each network layer. Compared with a plain network, ResNet adds a shortcut between every two layers, forming residual learning; dashed shortcuts indicate a change in the number of feature maps. Besides the 34-layer ResNet, deeper networks can also be constructed, as shown in Table 1. The table shows that the 18-layer and 34-layer ResNets perform residual learning between every two layers, while deeper networks perform residual learning across three layers whose convolution kernels are 1×1, 3×3, and 1×1; notably, the number of feature maps in the hidden layer is relatively small, 1/4 of the number of output feature maps.
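A minimal PyTorch sketch of this CNN → map-to-sequence → BiLSTM pipeline follows. The channel counts, pooling scheme, and hidden size are illustrative assumptions rather than the patent's exact configuration; the 3×3 kernels and the top-to-bottom slicing along the Y axis follow the description above.

```python
import torch
import torch.nn as nn

class VerticalCRNN(nn.Module):
    """CNN -> map-to-sequence (along Y) -> BiLSTM -> per-step class scores."""
    def __init__(self, num_classes, hidden=256):
        super().__init__()
        self.cnn = nn.Sequential(                # 3x3 kernels throughout
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((1, 2)),                # shrink width, keep height
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((1, 2)),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((None, 1)),     # collapse the width to 1
        )
        self.rnn = nn.LSTM(256, hidden, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):                        # x: (B, 1, H, 32)
        feat = self.cnn(x)                       # (B, C, H, 1)
        seq = feat.squeeze(3).permute(0, 2, 1)   # (B, H, C): top-to-bottom
        out, _ = self.rnn(seq)                   # learn temporal correlations
        return self.fc(out)                      # (B, H, num_classes) logits
```

Each output row corresponds to one vertical step of the feature map, so prediction proceeds from top to bottom instead of the usual left-to-right order.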
(5) character recognition: denote the time-sequence correlation feature sequence of the sticky characters obtained in step (4) as X_i = {x_1, x_2, x_3, …, x_{i-1}, x_i} (i ≤ n). The softmax probability is computed as:
S_i = e^{X_i} / Σ_j e^{X_j}
where S_i denotes the probability output for a character, with outputs mapped into the range (0, 1), and X_i, X_j are elements of the time-sequence correlation feature sequence. The sequence is passed to the softmax layer for prediction, calculation, and classification of the characters, and the result is output, completing the recognition of the vertical sticky characters; this realizes vertical sticky character recognition, improves recognition accuracy, and achieves accurate recognition.
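The final step can be sketched as a per-step softmax followed by a maximum-probability decision; greedy argmax decoding is an assumption here, since the patent only states that results are output by maximum probability.

```python
import torch

def classify_steps(logits):
    """Apply softmax over the class axis, S_i = e^{X_i} / sum_j e^{X_j},
    and keep the most probable character at each vertical step."""
    probs = torch.softmax(logits, dim=-1)    # probabilities in (0, 1)
    return probs.argmax(dim=-1)              # predicted character indices
```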
It is obvious to those skilled in the art that the invention is not limited to the above embodiment; various insubstantial modifications of the inventive concept and technical scheme, or direct applications of them to other occasions without modification, all fall within the protection scope of the invention.
Claims (6)
1. An end-to-end adaptive vertical sticky character recognition method, characterized by comprising the following steps:
(1) character positioning: after handprint separation, locating connected components of the characters in the answer area of the picture to obtain the positioning coordinates of each character in the answer area;
(2) character screening: comparing the positioning coordinates obtained in step (1) against the text-line height of the answer area, judging and screening out coordinate boxes containing at least two characters in the vertical direction, and cropping candidate answer pictures containing sticky characters;
(3) size conversion: normalizing the cropped candidate answer pictures containing sticky characters to a uniform size;
(4) feature extraction: using a convolutional neural network CNN to extract longitudinal feature maps of the sticky characters sequentially from top to bottom along the Y axis, and serializing the feature maps to obtain a time-sequence correlation feature sequence;
(5) character recognition: denoting the time-sequence correlation feature sequence of the sticky characters obtained in step (4) as X_i = {x_1, x_2, x_3, …, x_{i-1}, x_i} (i ≤ n), passing it to the softmax layer for multi-character calculation and classification, and outputting the result by maximum probability, finally completing the recognition of the vertical sticky characters.
2. The end-to-end adaptive vertical sticky character recognition method according to claim 1, characterized in that the conditions for the size unification in step (3) are: if the height of the answer picture containing sticky characters is greater than 128 dpi, the original height is kept to prevent characters from shrinking or deforming under compression, and the width is uniformly set to 32 dpi; if the height is less than 128 dpi, white borders are added to expand the picture to a height of 128 dpi, and the width is uniformly set to 32 dpi.
3. The end-to-end adaptive vertical sticky character recognition method according to claim 1, characterized in that the network architecture of the convolutional neural network CNN in step (4) is a ResNet or DenseNet architecture; the adopted architecture is divided into multiple layers, and the convolution kernel size is set to 3×3.
4. The end-to-end adaptive vertical sticky character recognition method according to claim 3, characterized in that the feature extraction in step (4) specifically comprises the following steps:
S41, selecting the network architecture of the convolutional neural network CNN;
S42, extracting longitudinal feature maps of the sticky characters sequentially from top to bottom along the Y axis through the network architecture;
S43, inputting the serialized feature sequence into a bidirectional LSTM (BiLSTM) to learn the temporal correlations, obtaining the time-sequence correlation feature sequence.
5. The end-to-end adaptive vertical sticky character recognition method according to claim 3, characterized in that the softmax probability in step (5) is computed as:
S_i = e^{X_i} / Σ_j e^{X_j}
where S_i denotes the probability output for a character, with outputs mapped into the range (0, 1), and X_i, X_j are elements of the time-sequence correlation feature sequence.
6. The end-to-end adaptive vertical sticky character recognition method according to claim 3, characterized in that step (1) locates the characters by connected components, specifically: first, traverse the picture row by row and column by column to the first point P(x, y) with a foreground pixel value, assign it a label, and push the foreground pixels adjacent to that point onto a stack; second, pop the pixel on top of the stack, give it the same label as the first point, and push its adjacent foreground pixels onto the stack, repeating until the stack is empty, which yields one connected region in the picture; finally, repeat these steps until the whole picture has been traversed, obtaining the connected regions of all characters and thereby the positioning coordinates of each character in the answer area of the picture.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010210522.XA CN111401354B (en) | 2020-03-24 | 2020-03-24 | End-to-end self-adaption based vertical adhesion character recognition method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010210522.XA CN111401354B (en) | 2020-03-24 | 2020-03-24 | End-to-end self-adaption based vertical adhesion character recognition method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111401354A (en) | 2020-07-10
CN111401354B (en) | 2023-07-11
Family
ID=71432787
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010210522.XA Active CN111401354B (en) | 2020-03-24 | 2020-03-24 | End-to-end self-adaption based vertical adhesion character recognition method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111401354B (en) |
- 2020-03-24: CN application CN202010210522.XA filed; granted as patent CN111401354B (Active)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109871843A (en) * | 2017-12-01 | 2019-06-11 | 北京搜狗科技发展有限公司 | Character identifying method and device, the device for character recognition |
CN110188761A (en) * | 2019-04-22 | 2019-08-30 | 平安科技(深圳)有限公司 | Recognition methods, device, computer equipment and the storage medium of identifying code |
CN110378310A (en) * | 2019-07-25 | 2019-10-25 | 南京红松信息技术有限公司 | A kind of automatic generation method of the handwriting samples collection based on answer library |
CN110555462A (en) * | 2019-08-02 | 2019-12-10 | 深圳索信达数据技术有限公司 | non-fixed multi-character verification code identification method based on convolutional neural network |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112085022A (en) * | 2020-09-09 | 2020-12-15 | 上海蜜度信息技术有限公司 | Method, system and equipment for recognizing characters |
CN112085022B (en) * | 2020-09-09 | 2024-02-13 | 上海蜜度科技股份有限公司 | Method, system and equipment for recognizing characters |
Also Published As
Publication number | Publication date |
---|---|
CN111401354B (en) | 2023-07-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11741578B2 (en) | Method, system, and computer-readable medium for improving quality of low-light images | |
CN107133622B (en) | Word segmentation method and device | |
CN110782420A (en) | Small target feature representation enhancement method based on deep learning | |
CN108171297A (en) | A kind of answer card identification method and device | |
CN112381057A (en) | Handwritten character recognition method and device, storage medium and terminal | |
CN110619326B (en) | English test paper composition detection and identification system and method based on scanning | |
CN112149535B (en) | Lane line detection method and device combining SegNet and U-Net | |
CN111178290A (en) | Signature verification method and device | |
CN113223025A (en) | Image processing method and device, and neural network training method and device | |
CN113222055B (en) | Image classification method and device, electronic equipment and storage medium | |
CN110443235B (en) | Intelligent paper test paper total score identification method and system | |
CN110599455A (en) | Display screen defect detection network model, method and device, electronic equipment and storage medium | |
CN111862115A (en) | Mask RCNN-based remote sensing image segmentation method | |
CN113052057A (en) | Traffic sign identification method based on improved convolutional neural network | |
CN110866900A (en) | Water body color identification method and device | |
CN112686104A (en) | Deep learning-based multi-vocal music score identification method | |
CN111460782A (en) | Information processing method, device and equipment | |
CN113436222A (en) | Image processing method, image processing apparatus, electronic device, and storage medium | |
CN111401354A (en) | End-to-end self-adaptive vertical sticky character recognition method | |
CN110298236B (en) | Automatic Braille image identification method and system based on deep learning | |
CN118135584A (en) | Automatic handwriting form recognition method and system based on deep learning | |
CN111832390B (en) | Handwritten ancient character detection method | |
CN115565182A (en) | Handwritten Chinese character recognition method based on complexity grouping | |
CN114241486A (en) | Method for improving accuracy rate of identifying student information of test paper | |
CN114639110A (en) | Intelligent reading method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||