CN112183335A - Handwritten image recognition method and system based on unsupervised learning - Google Patents
Handwritten image recognition method and system based on unsupervised learning
- Publication number
- CN112183335A CN112183335A CN202011038830.5A CN202011038830A CN112183335A CN 112183335 A CN112183335 A CN 112183335A CN 202011038830 A CN202011038830 A CN 202011038830A CN 112183335 A CN112183335 A CN 112183335A
- Authority
- CN
- China
- Prior art keywords
- image
- vertical
- horizontal
- unsupervised learning
- feature vectors
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000000034 method Methods 0.000 title claims abstract description 48
- 239000013598 vector Substances 0.000 claims abstract description 58
- 239000011159 matrix material Substances 0.000 claims abstract description 35
- 238000013145 classification model Methods 0.000 claims abstract description 16
- 238000007781 pre-processing Methods 0.000 claims abstract description 12
- 238000010801 machine learning Methods 0.000 claims abstract description 11
- 238000012549 training Methods 0.000 claims abstract description 7
- 238000012545 processing Methods 0.000 claims description 16
- 238000000605 extraction Methods 0.000 claims description 10
- 230000009467 reduction Effects 0.000 claims description 8
- 238000007477 logistic regression Methods 0.000 claims description 5
- 230000004927 fusion Effects 0.000 claims description 4
- 230000006870 function Effects 0.000 claims description 3
- 238000003672 processing method Methods 0.000 claims description 2
- 238000010276 construction Methods 0.000 abstract description 4
- 230000008569 process Effects 0.000 description 7
- 230000003993 interaction Effects 0.000 description 4
- 238000011160 research Methods 0.000 description 4
- 238000013528 artificial neural network Methods 0.000 description 2
- 230000008901 benefit Effects 0.000 description 2
- 238000013527 convolutional neural network Methods 0.000 description 2
- 238000010586 diagram Methods 0.000 description 2
- 238000012800 visualization Methods 0.000 description 2
- 238000004458 analytical method Methods 0.000 description 1
- 238000013473 artificial intelligence Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000004422 calculation algorithm Methods 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 230000002349 favourable effect Effects 0.000 description 1
- 238000002372 labelling Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 238000003909 pattern recognition Methods 0.000 description 1
- 238000002203 pretreatment Methods 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/088—Non-supervised learning, e.g. competitive learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/22—Character recognition characterised by the type of writing
- G06V30/226—Character recognition characterised by the type of writing of cursive writing
- G06V30/2268—Character recognition characterised by the type of writing of cursive writing using stroke segmentation
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Computing Systems (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- General Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Biomedical Technology (AREA)
- Health & Medical Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Multimedia (AREA)
- Medical Informatics (AREA)
- Character Discrimination (AREA)
- Image Analysis (AREA)
Abstract
The invention belongs to the technical field of image recognition, and relates to a handwritten image recognition method and system based on unsupervised learning, comprising the following steps: S1, preprocessing the image to obtain a binarization matrix of the image; S2, extracting feature values of the binarization matrix in the horizontal, vertical and oblique directions to obtain feature vectors in the horizontal, vertical and oblique directions; S3, fusing the feature vectors in the horizontal, vertical and oblique directions into one feature vector; S4, training the fused feature vectors with a machine learning model to obtain an image classification model; S5, inputting the handwritten image to be recognized into the image classification model to obtain a recognition result. The method fully considers the basic characteristics of handwritten Chinese characters, namely strokes in the four directions of horizontal, vertical, left-falling and right-falling, helps the content of handwritten characters to be read more accurately, and provides a transferable idea for constructing image features in other vertical fields.
Description
Technical Field
The invention relates to a handwritten image recognition method and system based on unsupervised learning, and belongs to the technical field of image recognition.
Background
The handwriting recognition process usually recognizes the handwriting in an image by machine: people's handwritten content is converted into an electronic image by scanning, photographing and similar means for analysis and processing. This process mainly involves the extraction and fusion of image features. Image feature extraction is a basic concept in computer image processing; it refers to extracting image information by computer and determining whether each point of the image belongs to an image feature. The end result is to divide the points on the image into different subsets, which often form isolated points, continuous curves or continuous regions. Image features may be extracted and represented from the angles of edges, corners, regions, ridges and the like. The commonly used image features at present are color features, texture features, shape features and spatial relationship features. In terms of application value, handwriting recognition research has important practical value in human-computer interaction and automatic processing of text information. In human-computer interaction, it can improve the naturalness and friendliness of interaction and is an important component of future intelligent human-computer interfaces. In automatic processing of text information, it can save labor, improve working efficiency, accelerate information flow and meet the requirements of the information era. It also has important economic and social benefits in application fields such as character recognition in video images from video or digital cameras, character recognition for certificate anti-counterfeiting, character recognition on workpieces in industrial environments and character recognition in business card management.
At present, with the innovation and development of deep neural networks such as convolutional neural networks (CNNs), the problem of supervised Chinese character recognition has been largely solved: with labeled data, recognition of Chinese characters is feasible and highly accurate. At the current technical level, however, recognizing Chinese characters without supervision is essentially infeasible, because deep neural networks struggle to learn picture features in the unsupervised setting.
Handwriting recognition is an important research topic in the fields of pattern recognition and artificial intelligence, with wide application in natural human-computer interaction, automatic processing of text information and other fields. Because handwriting has a complex structure and varies across writers in style, stroke weight, font size, rotation direction and inclination angle, connected strokes, broken strokes and even redundant strokes arise during writing. Unsupervised handwriting recognition has therefore always been a difficult problem in the field of character recognition; it has not been well solved and still receives extensive attention and research.
Disclosure of Invention
In view of the above problems, an object of the present invention is to provide a handwritten image recognition method and system based on unsupervised learning, which fully considers the basic characteristics of handwritten characters, namely strokes in the four directions of horizontal, vertical, left-falling and right-falling, helps the content of handwritten characters to be read more accurately, and provides a transferable idea for constructing image features in other vertical fields.
In order to achieve the purpose, the invention adopts the following technical scheme: a handwritten image recognition method based on unsupervised learning comprises the following steps: s1, preprocessing the image to obtain a binarization matrix of the image; s2, extracting the characteristic values of the binary matrix in the horizontal, vertical and oblique directions, and obtaining characteristic vectors in the horizontal, vertical and oblique directions; s3, fusing the feature vectors in the horizontal, vertical and oblique directions into a feature vector; s4, training the fused feature vectors by using a machine learning model to obtain an image classification model; s5, inputting the handwriting image to be detected into the image classification model to obtain a recognition result.
Further, the preprocessing method in S1 is: firstly, carrying out gray processing on the color image to generate a gray image, and then carrying out binarization processing on the gray image to obtain a binarization matrix corresponding to the image.
Further, the binarization processing method comprises the following steps: and expressing the pixel points with the gray scale larger than the threshold value in the gray-scale image by 1, and expressing the pixel points with the gray scale smaller than the threshold value by 0.
Further, the method for obtaining the feature vectors in the horizontal and vertical directions in step S2 is as follows: for the binarization matrix, the horizontal feature vector of the picture is extracted by performing column addition in the horizontal direction; the vertical feature vector is obtained by performing column addition in the vertical direction and transposing the matrix obtained after the column addition in the vertical direction.
Further, the method for obtaining the feature vector in the oblique direction in step S2 is as follows: and for the binarization matrix, superposing vectors in the binarization matrix along the oblique direction to obtain a feature vector in the oblique direction.
Further, the inclination direction is a direction inclined by 45 degrees in the horizontal direction or a direction inclined by 135 degrees in the horizontal direction.
Further, the feature vector fused in step S3 is input into a manifold learning dimensionality reduction method; the high-dimensional features are reduced to two dimensions and represented visually, and it is observed whether the fused feature vector can reflect the differences between handwritten characters.
Furthermore, the manifold learning dimensionality reduction method uses t-distributed stochastic neighbor embedding (t-SNE) to reduce dimensionality, re-representing data from a high-dimensional space in a low-dimensional space.
Further, the machine learning model uses a logistic regression model to solve the multi-class classification problem, the multi-class function being realized by constructing multiple binary classification models.
The invention also discloses a handwriting image recognition system based on unsupervised learning, which comprises: the image preprocessing module is used for preprocessing the image to obtain a binarization matrix of the image; the characteristic extraction module is used for extracting characteristic values of the binary matrix in the horizontal direction, the vertical direction and the inclined direction and obtaining characteristic vectors in the horizontal direction, the vertical direction and the inclined direction; the fusion module is used for fusing the feature vectors in the horizontal direction, the vertical direction and the inclined direction into one feature vector; the recognition model generation module is used for training the fused feature vectors by utilizing a machine learning model to obtain an image classification model; and the recognition module is used for inputting the handwritten image to be detected into the image classification model to obtain a recognition result.
Due to the adoption of the technical scheme, the invention has the following advantages:
1. the invention provides a method for directly extracting the characteristics of an image under the unsupervised condition, which considers the basic characteristics of the handwritten characters, namely strokes in four directions of horizontal, vertical, left-falling and right-falling, is favorable for better reading the contents of the handwritten characters and provides a borrowable thought for the construction of the image characteristics in other vertical fields.
2. The feature extraction method provided by the invention has a simple construction idea that can nevertheless be explained intuitively from the characteristics of Chinese characters, and its overall computational complexity is extremely low, making it an innovative, effective and simple feature extraction method.
3. The novel image feature extraction method provided by the invention reduces the threshold of handwritten information recognition. The method is beneficial to sharing and communication of handwriting recognition technology, and can promote sustainable progress of scientific research.
Drawings
FIG. 1 is a flow chart of a method for unsupervised learning-based handwriting image recognition in an embodiment of the invention;
FIG. 2 is a gray scale image with graying processing according to an embodiment of the present invention;
FIG. 3 is a visualization result diagram after manifold learning dimensionality reduction is performed on the extracted image features in an embodiment of the present invention;
fig. 4 is a result diagram of classifying the extracted image features into a machine learning model according to an embodiment of the present invention.
Detailed Description
The present invention is described in detail below by way of specific embodiments so that those skilled in the art can better understand its technical direction. It should be understood, however, that the detailed description is provided only for a better understanding of the invention and should not be taken as limiting it. In describing the present invention, the terminology used is for the purpose of description only and is not intended to indicate or imply relative importance.
Unsupervised learning refers to learning without prior knowledge, or when manual labeling is difficult; in this case the data used for machine learning is unlabeled and its classes are unknown. Clustering is a typical case of unsupervised learning: during clustering, similar samples are gathered together, and no sample has a fixed label.
Example one
The embodiment discloses a handwritten image recognition method based on unsupervised learning, which comprises the following steps as shown in fig. 1:
s1, constructing an image data set to be identified, performing unification processing on the pixel size of each image to obtain a handwritten Chinese character image set with consistent pixel size, and performing preprocessing on the image to obtain a binarization matrix of the image.
The preprocessing method in S1 is as follows: first, graying is performed on the color image, which contains brightness and color, to generate a grayscale image. The grayscale image contains only brightness information and no color information; after graying, each pixel point in the image is represented by an integer between 0 and 255, where 0 represents black and 255 represents white. Binarization is then performed on the grayscale image: pixel points whose gray value is greater than a threshold are represented by 1, and pixel points whose gray value is less than the threshold are represented by 0. For example, pixel points with a gray value greater than 120 are represented by 1, and the remaining pixel points by 0. With each pixel point represented by 0 or 1, a matrix composed of 0s and 1s that represents the image, namely the binarization matrix corresponding to the image, is finally obtained.
If the image size is 32 × 32 pixels, the picture set may be expressed as Img = {img1, img2, …, imgn}. In the python language, the graying operation is img.convert("L"); after graying, each image corresponds to a 32 × 32 matrix whose elements are integers between 0 and 255, and the grayed image is as shown in fig. 2.
The python code for the image binarization processing is img.point(lambda x: 1 if x > 120 else 0), where x is each pixel point in the matrix. If the image size is 32 × 32 pixels, the matrix after binarization processing is a 32 × 32 0-1 matrix for each image.
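As a minimal sketch of the preprocessing above, the same graying-plus-thresholding pipeline can be written with plain numpy (the luma weights approximate what PIL's img.convert("L") computes; the threshold 120 is taken from the example, and the function names are illustrative):

```python
import numpy as np

def to_gray(rgb):
    """(h, w, 3) uint8 RGB -> grayscale matrix, using ITU-R 601 luma
    weights (approximately what PIL's img.convert("L") computes)."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    return np.round(0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)

def binarize(gray, threshold=120):
    """Grayscale matrix -> 0/1 matrix, mirroring
    img.point(lambda x: 1 if x > 120 else 0)."""
    return (gray > threshold).astype(np.uint8)

white = np.full((32, 32, 3), 255, dtype=np.uint8)   # an all-white test image
print(binarize(to_gray(white)).sum())               # 1024: every pixel maps to 1
```

On a real handwriting photo the dark strokes would fall below the threshold and map to 0, with the background mapping to 1 (or the inverse, depending on the chosen polarity).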
S2 performs feature value extraction on the binary matrix in the horizontal, vertical, and oblique directions, and obtains feature vectors in the horizontal, vertical, and oblique directions.
The method for obtaining the feature vectors in the horizontal and vertical directions in step S2 is as follows: for the binarization matrix, the horizontal feature vector of the picture is extracted by performing column addition in the horizontal direction; the vertical feature vector is obtained by performing column addition in the vertical direction and transposing the matrix obtained after the column addition in the vertical direction.
The method for obtaining the feature vector in the oblique direction in step S2 includes: and for the binarization matrix, superposing vectors in the binarization matrix along the oblique direction to obtain a feature vector in the oblique direction. The inclination direction is a direction inclined by 45 degrees in the horizontal direction or a direction inclined by 135 degrees in the horizontal direction.
The extracted horizontal and vertical feature vectors are (a, b), and if the image size is 32 × 32 pixels, the lengths of the vectors a and b are both 32.
The feature vectors extracted in the direction inclined by 45 degrees in the horizontal direction and/or the direction inclined by 135 degrees in the horizontal direction are (c, d), and if the image size is 32 × 32 pixels, the lengths of the vectors c and d are both 32.
S3 merges the feature vectors in the horizontal, vertical, and oblique directions into one feature vector, and regards this merged feature vector as a simplified feature representation of the image content.
The feature vectors in the horizontal, vertical and inclined directions are simply combined to form a one-dimensional vector containing all of their elements. This simple superposition of the horizontal, vertical and inclined directions is motivated by the fact that Chinese characters mainly comprise strokes in the four directions of horizontal, vertical, left-falling and right-falling.
For the fused feature vectors, the final vector v is obtained in python with numpy by concatenating the direction vectors (e.g., np.concatenate).
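The directional extraction (step S2) and fusion (step S3) can be sketched as follows. The horizontal and vertical sums follow the description directly; treating the 45-degree and 135-degree directions as *wrapped* diagonal sums is an assumption, since the text does not specify how the 2n-1 plain diagonals of an n × n matrix are reduced to a length-n vector, and the function names are illustrative:

```python
import numpy as np

def direction_features(m):
    """Extract directional stroke features from an n x n 0/1 matrix m."""
    a = m.sum(axis=1)        # horizontal: one sum per row
    b = m.sum(axis=0)        # vertical: one sum per column (numpy already
                             # returns a vector, so no explicit transpose)
    n = m.shape[0]
    rows = np.arange(n)[:, None]
    cols = np.arange(n)[None, :]
    # column j of the shifted matrix collects the wrapped diagonal
    # through (0, j); "+ rows" gives 45 degrees, "- rows" 135 degrees
    c = m[rows, (cols + rows) % n].sum(axis=0)
    d = m[rows, (cols - rows) % n].sum(axis=0)
    return a, b, c, d

def fuse(a, b, c, d):
    """Step S3: concatenate the four direction vectors into one."""
    return np.concatenate([a, b, c, d])

a, b, c, d = direction_features(np.eye(4))
print(fuse(a, b, c, d).shape)  # (16,) -- four length-4 vectors fused
```

For a 32 × 32 image this yields four length-32 vectors and a fused vector of length 128, matching the dimensions given in the text.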
The feature vector fused in step S3 is input into a manifold learning dimensionality reduction method; the high-dimensional features are reduced to two dimensions and represented visually, and it is observed whether the fused feature vector can reflect the differences between handwritten characters. The manifold learning dimensionality reduction method uses t-distributed stochastic neighbor embedding (t-SNE) to re-represent data from a high-dimensional space in a low-dimensional space. The visualization results are shown in fig. 3. In fig. 3, dots are "0" characters, triangles are "3" characters, five-pointed stars are "5" characters, and squares are "9" characters; different characters are well separated in the visualization result.
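A sketch of the visualization check above using scikit-learn's t-SNE, run here on synthetic stand-ins for the fused 128-dimensional feature vectors (the class offsets are fabricated purely for illustration):

```python
import numpy as np
from sklearn.manifold import TSNE

# Three synthetic "character classes", 10 samples each, 128 dims
# (4 direction vectors x 32 entries for a 32 x 32 image)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=3 * k, size=(10, 128)) for k in range(3)])

# Reduce to two dimensions for a scatter plot; perplexity must stay
# below the number of samples
emb = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(X)
print(emb.shape)  # (30, 2)
```

The two embedding columns can then be scattered with per-class markers, as in fig. 3, to judge visually whether the fused features separate the character classes.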
And S4, training the fused feature vectors by using a machine learning model to obtain an image classification model.
The machine learning model uses a logistic regression model to realize multi-class classification, for example the logistic regression algorithm in python's sklearn library. The multi-class problem is transformed from the traditional two-class logistic regression problem by treating it as "one class versus the rest"; the multi-class function is realized by constructing multiple binary classification models.
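The one-vs-rest logistic regression training (step S4) can be sketched with scikit-learn, again on synthetic, well-separated stand-ins for the fused feature vectors:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

# Three synthetic character classes, 20 fused 128-dim vectors each
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=10 * k, size=(20, 128)) for k in range(3)])
y = np.repeat([0, 1, 2], 20)

# One binary logistic model per class, as the text describes
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, y)
print(clf.score(X, y))  # 1.0 on this trivially separable training set
```

On real fused features, clf.predict would implement step S5: each new fused vector is scored by every binary model and assigned the class with the highest score.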
S5, the handwritten image to be recognized is input into the image classification model to obtain a recognition result. The recognition result is shown in fig. 4: the recognition accuracy on handwritten images reaches about 90%, which demonstrates the effectiveness of the proposed recognition method.
Example two
Based on the same inventive concept, the embodiment discloses a handwriting image recognition system based on unsupervised learning, which comprises:
the image preprocessing module is used for preprocessing the image to obtain a binarization matrix of the image;
the characteristic extraction module is used for extracting characteristic values of the binary matrix in the horizontal direction, the vertical direction and the inclined direction and obtaining characteristic vectors in the horizontal direction, the vertical direction and the inclined direction;
the fusion module is used for fusing the feature vectors in the horizontal direction, the vertical direction and the inclined direction into one feature vector;
the recognition model generation module is used for training the fused feature vectors by utilizing a machine learning model to obtain an image classification model;
and the recognition module is used for inputting the handwritten image to be detected into the image classification model to obtain a recognition result.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that modifications and equivalent substitutions may be made to the embodiments without departing from the spirit and scope of the invention, and that any changes or substitutions that a person skilled in the art could easily conceive within the technical scope of the present application shall likewise be covered. The protection scope of the present application shall be subject to the protection scope of the claims.
Claims (10)
1. A handwritten image recognition method based on unsupervised learning is characterized by comprising the following steps:
s1, preprocessing the image to obtain a binarization matrix of the image;
s2, extracting the characteristic values of the binary matrix in the horizontal, vertical and oblique directions, and obtaining characteristic vectors in the horizontal, vertical and oblique directions;
s3 merging the feature vectors of the horizontal, vertical and oblique directions into one feature vector;
s4, training the fused feature vectors by using a machine learning model to obtain an image classification model;
s5, inputting the handwriting image to be detected into the image classification model to obtain a recognition result.
2. The unsupervised learning-based handwritten image recognition method according to claim 1, characterized in that said preprocessing method in S1 is: firstly, carrying out gray processing on a color image to generate a gray image, and then carrying out binarization processing on the gray image to obtain a binarization matrix corresponding to the image.
3. The unsupervised learning-based handwritten image recognition method according to claim 2, characterized in that said binarization processing method is: and expressing the pixel points with the gray scale larger than the threshold value in the gray scale image by using 1, and expressing the pixel points with the gray scale smaller than the threshold value by using 0.
4. The unsupervised learning-based handwritten image recognition method according to claim 1, characterized in that the method of obtaining feature vectors in the horizontal and vertical directions in step S2 is: for the binarization matrix, the horizontal feature vector of the picture is extracted by performing column addition in the horizontal direction; the vertical feature vector is obtained by performing column addition in the vertical direction and transposing the matrix obtained after the column addition in the vertical direction.
5. The unsupervised learning-based handwritten image recognition method according to claim 4, characterized in that the method of obtaining feature vectors in the oblique direction in step S2 is: and for the binarization matrix, superposing vectors in the binarization matrix along an inclined direction to obtain a characteristic vector in the inclined direction.
6. The unsupervised learning-based handwritten image recognition method according to claim 5, characterized in that said oblique direction is a direction inclined by 45 degrees in the horizontal direction or a direction inclined by 135 degrees in the horizontal direction.
7. The unsupervised learning-based handwritten image recognition method according to claim 1, characterized in that the feature vector fused in step S3 is input into a manifold learning dimensionality reduction method, its high-dimensional features are reduced to two dimensions and represented visually, and it is observed whether the fused feature vector can reflect the distinctions between handwritten characters.
8. The unsupervised learning-based handwritten image recognition method of claim 7, wherein the manifold learning dimensionality reduction method adopts t-distributed stochastic neighbor embedding (t-SNE) for dimensionality reduction, re-representing data from a high-dimensional space in a low-dimensional space.
9. The unsupervised learning-based handwritten image recognition method according to any of claims 1-7, characterized in that the machine learning model adopts a logistic regression model to implement a multi-classification problem, and the multi-classification problem implements a multi-classification function by constructing a plurality of binary models.
10. A system for handwriting image recognition based on unsupervised learning, comprising:
the image preprocessing module is used for preprocessing an image to obtain a binarization matrix of the image;
the characteristic extraction module is used for extracting characteristic values of the binary matrix in the horizontal direction, the vertical direction and the inclined direction and obtaining characteristic vectors in the horizontal direction, the vertical direction and the inclined direction;
the fusion module is used for fusing the feature vectors in the horizontal direction, the vertical direction and the inclined direction into one feature vector;
the recognition model generation module is used for training the fused feature vectors by utilizing a machine learning model to obtain an image classification model;
and the recognition module is used for inputting the handwritten image to be detected into the image classification model to obtain a recognition result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011038830.5A CN112183335A (en) | 2020-09-28 | 2020-09-28 | Handwritten image recognition method and system based on unsupervised learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011038830.5A CN112183335A (en) | 2020-09-28 | 2020-09-28 | Handwritten image recognition method and system based on unsupervised learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112183335A true CN112183335A (en) | 2021-01-05 |
Family
ID=73945166
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011038830.5A Pending CN112183335A (en) | 2020-09-28 | 2020-09-28 | Handwritten image recognition method and system based on unsupervised learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112183335A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102054178A (en) * | 2011-01-20 | 2011-05-11 | 北京联合大学 | Chinese painting image identifying method based on local semantic concept |
CN103902981A (en) * | 2014-04-02 | 2014-07-02 | 浙江师范大学 | Method and system for identifying license plate characters based on character fusion features |
CN103996057A (en) * | 2014-06-12 | 2014-08-20 | 武汉科技大学 | Real-time handwritten digital recognition method based on multi-feature fusion |
CN108171654A (en) * | 2017-11-20 | 2018-06-15 | 西北大学 | Chinese character image super-resolution reconstruction method with interference suppression |
CN109034021A (en) * | 2018-07-13 | 2018-12-18 | 昆明理工大学 | A kind of recognition methods again for easily obscuring digital handwriting body |
WO2019232853A1 (en) * | 2018-06-04 | 2019-12-12 | 平安科技(深圳)有限公司 | Chinese model training method, chinese image recognition method, device, apparatus and medium |
- 2020-09-28: Application CN202011038830.5A filed; publication CN112183335A, status Pending
Non-Patent Citations (1)
Title |
---|
XUE YANG et al.: "Authorship Identification of A Dream of Red Mansions Based on Optimal Document Embedding", Journal of Chinese Information Processing (《中文信息学报》) * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113205137A (en) * | 2021-04-30 | 2021-08-03 | 中国人民大学 | Image identification method and system based on capsule parameter optimization |
CN113221901A (en) * | 2021-05-06 | 2021-08-06 | 中国人民大学 | Immature self-checking system-oriented picture literacy conversion method and system |
CN113591743A (en) * | 2021-08-04 | 2021-11-02 | 中国人民大学 | Calligraphy video identification method, system, storage medium and computing device |
CN113591743B (en) * | 2021-08-04 | 2023-11-24 | 中国人民大学 | Calligraphy video identification method, system, storage medium and computing device |
WO2023226783A1 (en) * | 2022-05-24 | 2023-11-30 | 华为技术有限公司 | Data processing method and apparatus |
CN114998634A (en) * | 2022-08-03 | 2022-09-02 | 广州此声网络科技有限公司 | Image processing method, image processing device, computer equipment and storage medium |
CN114998634B (en) * | 2022-08-03 | 2022-11-15 | 广州此声网络科技有限公司 | Image processing method, image processing device, computer equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112183335A (en) | Handwritten image recognition method and system based on unsupervised learning | |
Avadesh et al. | Optical character recognition for Sanskrit using convolution neural networks | |
Shanthi et al. | A novel SVM-based handwritten Tamil character recognition system | |
US20070009155A1 (en) | Intelligent importation of information from foreign application user interface using artificial intelligence | |
JP2009514110A (en) | Human detection by pause | |
CN112069900A (en) | Bill character recognition method and system based on convolutional neural network | |
Liang et al. | Deep infrared pedestrian classification based on automatic image matting | |
CN111460782A (en) | Information processing method, device and equipment | |
Mondal et al. | tsegGAN: a generative adversarial network for segmenting touching nontext components from text ones in handwriting | |
Suresh et al. | Telugu Optical Character Recognition Using Deep Learning | |
Kataria et al. | CNN-bidirectional LSTM based optical character recognition of Sanskrit manuscripts: A comprehensive systematic literature review | |
Dhivya et al. | Ancient Tamil Character Recognition from Stone Inscriptions–A Theoretical Analysis | |
Zaaboub et al. | Neural network-based system for automatic passport stamp classification | |
CN118135584A (en) | Automatic handwriting form recognition method and system based on deep learning | |
Jena et al. | Odia characters and numerals recognition using hopfield neural network based on zoning feature | |
Sarkar et al. | Suppression of non-text components in handwritten document images | |
Choudhary et al. | A neural approach to cursive handwritten character recognition using features extracted from binarization technique | |
Singh et al. | A comprehensive survey on Bangla handwritten numeral recognition | |
Vasudevan et al. | Flowchart knowledge extraction on image processing | |
Jabir Ali et al. | A convolutional neural network based approach for recognizing malayalam handwritten characters | |
Yan et al. | SMFNet: One Shot Recognition of Chinese Character Font Based on Siamese Metric Model | |
Mir et al. | Printed Urdu Nastalique script recognition using analytical approach | |
Sahu et al. | A survey on handwritten character recognition | |
Hemalatha et al. | Handwritten Text Recognition Using Machine Learning | |
Ouadid et al. | Tifinagh Character Recognition: A Survey |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 2021-01-05 |