CN113362096A - Frame advertisement image matching method based on deep learning - Google Patents

Frame advertisement image matching method based on deep learning

Info

Publication number
CN113362096A
Authority
CN
China
Prior art keywords: advertisement, deep, calculating, image matching, method based
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010149359.0A
Other languages
Chinese (zh)
Inventor
陈岩
刘杨
王金海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CHIZHONG INFORMATION TECHNOLOGY (SHANGHAI) CO LTD
Original Assignee
CHIZHONG INFORMATION TECHNOLOGY (SHANGHAI) CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CHIZHONG INFORMATION TECHNOLOGY (SHANGHAI) CO LTD filed Critical CHIZHONG INFORMATION TECHNOLOGY (SHANGHAI) CO LTD
Priority to CN202010149359.0A priority Critical patent/CN113362096A/en
Publication of CN113362096A publication Critical patent/CN113362096A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241 Advertisements
    • G06Q30/0251 Targeted advertisements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Business, Economics & Management (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Development Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Accounting & Taxation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Evolutionary Biology (AREA)
  • Game Theory and Decision Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a frame advertisement image matching method based on deep learning, comprising the following steps: S1, establishing a picture data set; S2, using a deep convolutional neural network to compute, for each picture in the data set, an embedding vector in a low-dimensional space as its deep advertisement feature, the deep advertisement feature carrying the information that distinguishes advertisement contents; S3, randomly selecting two advertisement pictures, computing their deep advertisement features, calculating the cosine similarity of the two features, and judging whether the two advertisements match according to a predefined threshold; S4, selecting two new advertisement pictures and repeating step S3. The technical scheme of the invention has the following beneficial effects: the accuracy of image matching identification is improved from 98.2% to 99%, and the number of photos that must be recognized manually drops from about 1.98 million to about 1.10 million per year.

Description

Frame advertisement image matching method based on deep learning
Technical Field
The invention belongs to the field of applying image recognition and matching technology to information dissemination, and particularly relates to a frame advertisement image matching method based on deep learning.
Background
With the growth of advertising media services, a large number of advertisement designs are placed in residential elevators every week, placement cycles keep shortening, and correctly identifying and classifying the design in each slot has become a serious challenge. Traditionally, advertisement designs are identified manually, which consumes a great deal of labor. Moreover, because the replacement cycle is short, a large number of picture recognition and classification tasks must be completed within a short time, so accuracy is often not guaranteed. Against this background, a system that can accurately and automatically identify and match the photos taken by field workers would be very valuable and would greatly improve working efficiency.
In recent years, computer vision technology has matured rapidly, transforming traditional identification methods. Advanced computer vision can not only free people from tedious manual identification and matching, but also greatly improve accuracy; advertisement recognition systems have been developed against this background. Such a system performs image matching automatically using deep learning, reducing human error and freeing up part of the labor, so its value is considerable. The traditional automatic matching approach is based on feature-point (SIFT) matching: SIFT features are extracted and matched point by point, and if the number of matched feature-point pairs reaches a specified threshold, the match is considered successful; otherwise it is considered a failure (see FIG. 1). As a related reference, CN106066887B discloses a method for fast retrieval and analysis of advertisement sequence images, comprising: dividing an advertisement image database into LOGO and scene images using image-complexity features and a decision/classification tree; extracting and storing HOG and SIFT features for the images in the advertisement database; extracting HOG and SIFT features of the image to be matched, computing the Euclidean distances between its HOG feature vector and those of all images of the same type in the database, sorting the distances from small to large, and keeping the first S candidate images; counting the SIFT feature matching points between the image to be matched and each candidate, and taking the image with the most matching points as the image corresponding to the image to be matched; and, for the images of an advertisement sequence, computing the duration of each advertisement to obtain advertisement playing information. This approach has two main disadvantages: 1. matching accuracy is low when different advertisement designs share most of their content. For example, in the "I love my home" advertisement of FIG. 2, different designs differ only in the salesperson information, so the number of matched feature points exceeds the threshold and the traditional method wrongly reports a successful match (the designs are actually different); 2. upside-down advertisement content cannot be detected accurately. When a poster is mounted upside down it is still the same image, so the traditional method still matches a large number of feature-point pairs; the count exceeds the threshold and a false matching result is returned (see FIG. 3).
Disclosure of Invention
In view of the above, an object of the present invention is to provide a frame advertisement image matching method based on deep learning that matches a photographed picture against the corresponding advertisement design, ensuring that each advertisement is accurately delivered to its designated location and remedying the deficiencies of the prior art.
In order to achieve the purpose, the invention is realized by the following technical scheme:
a frame advertisement image matching method based on deep learning is provided, comprising the following steps:
S1, establishing a picture data set;
S2, using a deep convolutional neural network to compute, for each picture in the data set, an embedding vector in a low-dimensional space as its deep advertisement feature, the deep advertisement feature carrying the information that distinguishes advertisement contents;
S3, randomly selecting two advertisement pictures, computing their deep advertisement features, calculating the cosine similarity of the two features, and judging whether the two advertisements match according to a predefined threshold;
S4, selecting two new advertisement pictures and repeating step S3.
According to the frame advertisement image matching method based on deep learning, Softmax is adopted as the loss function to train the deep convolutional neural network and obtain the feature vector of the image, while the Large-Margin Softmax Loss method is used to reduce the included angle between the weight vector and the feature vector.
In the frame advertisement image matching method based on deep learning, on the basis of the w·x term of Softmax, the feature vector x is normalized and then multiplied by a scale factor to amplify it:
y = x / ||x||_2
z = α · y
wherein w is the weight vector and α is the scale factor.
The technical scheme of the invention has the beneficial effects that:
-improving the accuracy of image matching recognition: accuracy rises by 0.8 percentage points and the error rate falls by 44.4%;
-reducing the number of manually recognized photos: the yearly volume drops from about 1.98 million to about 1.10 million.
Drawings
FIG. 1 is a diagram illustrating a conventional feature point matching technique;
FIG. 2 is a schematic diagram of an advertisement to be identified with only local subtle changes;
FIG. 3 is a schematic diagram of feature point matching for an inverted picture;
FIG. 4 is a schematic flow chart of a matching method according to the present invention;
FIG. 5 is a graph of the angle distribution of matched and unmatched pairs of images according to the present invention;
FIG. 6 is a schematic diagram of the feature separation of the 10 digit classes on MNIST;
FIG. 7 is a schematic diagram of the Large-Margin Softmax Loss algorithm flow.
Detailed Description
The invention is further described with reference to the following drawings and specific examples, which are not intended to be limiting.
Referring to fig. 4, the method for matching a frame advertisement image based on deep learning of the present invention includes the following steps:
S1, establishing the picture data set. Small-frame posters and periodical posters from past years are collected: 9,935 advertisement designs in total, with 373,471 photos. 5,000 matched photo pairs and 5,000 unmatched photo pairs are randomly drawn and, after manual error correction, used as the verification set; the remaining photos form the training set.
S2, using a deep convolutional neural network (CNN) to compute, for each picture in the data set, an embedding vector (Embedding) in a low-dimensional space (512 dimensions) as its deep advertisement feature; the deep advertisement feature carries the information that distinguishes advertisement contents.
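As an illustrative stand-in for step S2 (the trained deep CNN is replaced here by a fixed random linear projection; `DIM_IN`, `embed`, and the input vector are hypothetical names for the sketch), mapping a picture descriptor to a 512-dimensional embedding looks like:

```python
import random

DIM_IN, DIM_OUT = 2048, 512  # raw picture descriptor -> 512-d embedding

random.seed(0)
# Stand-in for the trained CNN: a fixed random linear projection.
W = [[random.gauss(0, 1) for _ in range(DIM_IN)] for _ in range(DIM_OUT)]

def embed(picture_vec):
    # One matrix-vector product; the real method uses a deep CNN instead.
    return [sum(w * v for w, v in zip(row, picture_vec)) for row in W]

feature = embed([0.5] * DIM_IN)
print(len(feature))  # 512
```

The point of the sketch is only the interface: every picture is reduced to one fixed-length vector, so that all later steps operate on vectors rather than pixels.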
S3, randomly selecting two advertisement pictures, computing their deep advertisement features, calculating the cosine similarity of the two features, and judging whether the two advertisements match according to a predefined threshold. Equal numbers of matched (1) and unmatched (0) advertisement picture pairs are randomly drawn from the verification set; for each pair, the features are computed, their inner product is taken, and the included angle θ is obtained with the arccosine function. This yields the angle distribution of matched and unmatched picture pairs shown in FIG. 5, together with fitted normal-distribution curves and their means μ and standard deviations σ. The specific values are listed in Table 1:
TABLE 1
Threshold (deg) | mu_0    | sigma_0 | mu_1   | sigma_1
23.0420         | 42.4932 | 6.7359  | 9.6546 | 3.5097
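The decision rule of step S3 can be sketched as follows; `feat_a` and `feat_b` stand in for two 512-dimensional deep advertisement features, and the 23.042° threshold comes from Table 1:

```python
import math

THRESHOLD_DEG = 23.0420  # decision threshold from Table 1

def angle_deg(a, b):
    # Angle between two feature vectors: arccos of their cosine similarity.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    cos_sim = max(-1.0, min(1.0, dot / (na * nb)))  # clamp for fp safety
    return math.degrees(math.acos(cos_sim))

def is_match(feat_a, feat_b):
    return angle_deg(feat_a, feat_b) < THRESHOLD_DEG

# Toy 3-dimensional stand-ins for the 512-dimensional embeddings:
print(is_match([1.0, 0.0, 0.1], [1.0, 0.05, 0.1]))  # True  (small angle)
print(is_match([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))   # False (90 degrees)
```

Because cosine similarity decreases monotonically as the angle grows, thresholding the angle and thresholding the cosine similarity are the same decision.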
The key factor affecting matching accuracy is whether the deep convolutional neural network can produce advertisement features with high discrimination. The convolutional neural network (CNN) is therefore trained on a comparatively difficult "super-classification" task, forcing the network to form compact, highly discriminative deep advertisement features at the first fully connected (FC) layer for later advertisement identification.
The CNN is trained with Softmax as the loss function to obtain the feature vector of a picture. In the 2-dimensional feature-mapping visualization of the 10 MNIST classes shown in FIG. 6, the classes are clearly separated, but this alone does not satisfy the requirements of feature-vector comparison in advertisement identification. Cosine distance is commonly used to compute feature-vector similarity: the larger the cosine, the smaller the included angle (cosine distance), and the more similar the vectors. For the feature embedding used in advertisement identification, Softmax encourages features of different classes to be separable, but does not push them far apart.
The deep features trained with Softmax partition the whole hyperspace or hypersphere according to the number of classes, guaranteeing that the classes are separable. This suits multi-class tasks such as MNIST and ImageNet, where the test classes are necessarily among the training classes. However, Softmax does not require intra-class compactness or inter-class separation, which makes it poorly suited to advertisement recognition: the roughly 10,000 advertisement designs in the training set are few compared with the advertisements met at test time, and training samples for all advertisements can never be collected; moreover, the training and test sets are generally required not to overlap. Softmax therefore needs to be modified so that, while separability is preserved, each feature-vector class is as compact as possible and the classes are as far apart as possible.
To this end, a Large-Margin method is adopted so that the included angle between the weight vector W and the feature vector f becomes smaller. Specifically, on the basis of the w·x term of Softmax, the feature vector x is normalized and then multiplied by a scale factor to amplify it:
y = x / ||x||_2
z = α · y
The scale factor α is fixed at 64. Normalizing both the weights and the features lets the CNN concentrate on optimizing the included angle, and the resulting deep advertisement features are better separated.
After feature normalization, the feature vectors are mapped onto the hypersphere of radius 1, which eases understanding and optimization but also compresses the space available for feature expression. Multiplying by the scale factor α is equivalent to enlarging the radius of the hypersphere to α: the hypersphere grows and the space for feature expression grows with it (the larger the radius, the larger the surface area of the sphere). In addition, once the features are normalized, the L2 distance and the cosine distance used to compare feature vectors in advertisement identification become equivalent in meaning and equal in computational cost, which is convenient.
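The normalize-and-rescale step described above (the feature is projected onto the unit hypersphere and the radius is then enlarged to the fixed scale factor α = 64) can be sketched on a toy vector:

```python
import math

ALPHA = 64.0  # fixed scale factor from the description

def normalize_and_scale(x, alpha=ALPHA):
    # y = x / ||x||_2 maps x onto the unit hypersphere;
    # z = alpha * y enlarges the sphere's radius to alpha.
    norm = math.sqrt(sum(v * v for v in x))
    return [alpha * v / norm for v in x]

z = normalize_and_scale([3.0, 4.0])      # toy 2-D feature vector
print(math.sqrt(sum(v * v for v in z)))  # ~64.0: z always has length alpha
```

Whatever the magnitude of the input feature, the output always has length α, so only its direction (the angle) carries information.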
On this basis, an additive margin (Additive Margin) is used. If the included angle between an advertisement feature x_i and its corresponding weight W is θ, a fixed value m (0.5) is added to that angle. The effect is to "push" the features away from the weights, as shown in FIG. 7, which increases the difficulty of the classification task and makes the resulting features more cohesive.
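The additive angular margin can be sketched as follows. This assumes the ArcFace-style form cos(θ + m), which matches the description of adding m = 0.5 to the angle; the function names are illustrative:

```python
import math

ALPHA = 64.0   # scale factor
MARGIN = 0.5   # additive angular margin m, in radians

def target_logit(theta):
    # Logit of the ground-truth class: the angle is penalized by m.
    return ALPHA * math.cos(theta + MARGIN)

def other_logit(theta):
    # Logits of all other classes are left unchanged.
    return ALPHA * math.cos(theta)

# For the same angle, the margin strictly lowers the target logit, so the
# network must pull the feature closer to its class weight to compensate.
theta = 0.3
print(target_logit(theta) < other_logit(theta))  # True
```

Penalizing only the ground-truth logit is what makes each class cluster tighter around its weight vector while leaving inter-class geometry intact.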
S4, selecting two new advertisement pictures and repeating step S3. From the feature angle θ and the parameters above, the matching probability of a pair of pictures can be computed; see the following code:
from scipy.stats import norm

# Fitted normal-distribution parameters from Table 1 (0: unmatched, 1: matched)
mu_0, sigma_0 = 42.4932, 6.7359
mu_1, sigma_1 = 9.6546, 3.5097

def get_prob(theta):  # theta: feature angle in degrees
    prob_0 = norm.pdf(theta, mu_0, sigma_0)
    prob_1 = norm.pdf(theta, mu_1, sigma_1)
    total = prob_0 + prob_1
    return prob_1 / total
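A stdlib-only sketch of the same get_prob computation (the normal density written out explicitly instead of calling scipy; `match_probability` is an illustrative name), using the Table 1 parameters:

```python
import math

def normal_pdf(x, mu, sigma):
    # Gaussian probability density, written out to avoid the scipy dependency.
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

MU_0, SIGMA_0 = 42.4932, 6.7359  # unmatched pairs (Table 1)
MU_1, SIGMA_1 = 9.6546, 3.5097   # matched pairs (Table 1)

def match_probability(theta_deg):
    p0 = normal_pdf(theta_deg, MU_0, SIGMA_0)
    p1 = normal_pdf(theta_deg, MU_1, SIGMA_1)
    return p1 / (p0 + p1)

print(match_probability(10.0) > 0.99)  # True: near the matched-pair mean
print(match_probability(42.0) < 0.01)  # True: near the unmatched-pair mean
```

This is a two-class Bayes posterior under the fitted Gaussians with equal priors, which is consistent with drawing equal numbers of matched and unmatched pairs from the verification set.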
The technical scheme of the invention improves the accuracy of image matching identification from 98.2% to 99% (accuracy up 0.8 percentage points, error rate down 44.4%) while reducing the number of manually recognized photos from about 1.98 million to about 1.10 million per year.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention.

Claims (3)

1. A frame advertisement image matching method based on deep learning is characterized by comprising the following steps:
s1, establishing a picture data set;
s2, calculating an embedded vector of a low-dimensional space for each picture in the picture data set by using a deep convolutional neural network as a deep advertisement characteristic, wherein the deep advertisement characteristic comprises distinguishing information of advertisement content;
s3, randomly selecting two advertisement pictures and calculating the depth advertisement characteristics, calculating the cosine similarity of the two depth advertisement characteristics, and judging whether the two advertisements are matched according to a predefined threshold;
and S4, replacing two new advertisement pictures, and repeating the step S3.
2. The frame advertisement image matching method based on deep learning as claimed in claim 1, wherein Softmax is used as a Loss function to train a deep convolutional neural network to obtain the feature vector of the image, and meanwhile, a Large-Margin Softmax Loss method is used to reduce the included angle between the weight vector and the feature vector.
3. The frame advertisement image matching method based on deep learning of claim 2, wherein, on the basis of the w·x term of Softmax, the feature vector x is normalized and then multiplied by a scale factor to amplify it:
y = x / ||x||_2
z = α · y
wherein w is the weight vector and α is the scale factor.
CN202010149359.0A 2020-03-04 2020-03-04 Frame advertisement image matching method based on deep learning Pending CN113362096A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010149359.0A CN113362096A (en) 2020-03-04 2020-03-04 Frame advertisement image matching method based on deep learning


Publications (1)

Publication Number Publication Date
CN113362096A true CN113362096A (en) 2021-09-07

Family

ID=77523871

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010149359.0A Pending CN113362096A (en) 2020-03-04 2020-03-04 Frame advertisement image matching method based on deep learning

Country Status (1)

Country Link
CN (1) CN113362096A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114529713A (en) * 2022-01-14 2022-05-24 电子科技大学 Underwater image enhancement method based on deep learning

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106529996A (en) * 2016-10-24 2017-03-22 北京百度网讯科技有限公司 Deep learning-based advertisement display method and device
CN108062575A (en) * 2018-01-03 2018-05-22 广东电子工业研究院有限公司 A kind of high similarity graph picture identification and sorting technique
CN108549883A (en) * 2018-08-06 2018-09-18 国网浙江省电力有限公司 A kind of face recognition methods again
CN108846694A (en) * 2018-06-06 2018-11-20 厦门集微科技有限公司 A kind of elevator card put-on method and device, computer readable storage medium
CN108960258A (en) * 2018-07-06 2018-12-07 江苏迪伦智能科技有限公司 A kind of template matching method based on self study depth characteristic
CN109800306A (en) * 2019-01-10 2019-05-24 深圳Tcl新技术有限公司 It is intended to analysis method, device, display terminal and computer readable storage medium
WO2019128367A1 (en) * 2017-12-26 2019-07-04 广州广电运通金融电子股份有限公司 Face verification method and apparatus based on triplet loss, and computer device and storage medium
WO2020015075A1 (en) * 2018-07-18 2020-01-23 平安科技(深圳)有限公司 Facial image comparison method and apparatus, computer device, and storage medium
CN110781917A (en) * 2019-09-18 2020-02-11 北京三快在线科技有限公司 Method and device for detecting repeated image, electronic equipment and readable storage medium


Non-Patent Citations (13)

* Cited by examiner, † Cited by third party
Title
"Face-recognition LOSS functions (part 2)", pages 2, Retrieved from the Internet <URL:https://zhuanlan.zhihu.com/p/34436551> *
PEANUT_范: "AM-Softmax Loss", pages 1, Retrieved from the Internet <URL:https://blog.csdn.net/u013841196/article/details/89920902> *
刘施乐: "Research on face recognition technology based on deep learning", 电子制作, no. 24 *
孙洁; 丁笑君; 杜磊; 李秦曼; 邹奉元: "Research progress on fabric image feature extraction and retrieval based on convolutional neural networks", 纺织学报, no. 12 *
张惠凡 et al.: "Research on bird video image retrieval based on convolutional neural networks", 科研信息化技术与应用, vol. 8, no. 5, pages 50 *
彭晏飞; 高艺; 杜婷婷; 桑雨; 訾玲玲: "Single-image super-resolution reconstruction method using generative adversarial networks", 计算机科学与探索, no. 09 *
彭骋: "Image retrieval system based on deep learning", 通讯世界, no. 06 *
有三AI: "Softmax loss and its variants in one article", pages 3, Retrieved from the Internet <URL:https://blog.csdn.net/u013841196/article/details/89920902> *
李振东; 钟勇; 曹冬平: "Deep convolutional feature vectors for fast face image retrieval", 计算机辅助设计与图形学学报, no. 12 *
李振东; 钟勇; 陈蔓; 曹冬平: "Fast face image retrieval method based on deep features", 光学学报, no. 10 *
邓建国; 张素兰; 张继福; 荀亚玲; 刘爱琴: "Loss functions in supervised learning and their applications", 大数据, no. 01 *
黄旭; 凌志刚; 李绣心: "Image recognition algorithm incorporating discriminative deep feature learning", 中国图象图形学报, no. 04 *
龚锐; 丁胜; 章超华; 苏浩: "Lightweight multi-pose face recognition method based on deep learning", 计算机应用, no. 03 *


Similar Documents

Publication Publication Date Title
CN102414680B (en) Utilize the semantic event detection of cross-domain knowledge
US8358856B2 (en) Semantic event detection for digital content records
CN102542058B (en) Hierarchical landmark identification method integrating global visual characteristics and local visual characteristics
CN108733778B (en) Industry type identification method and device of object
CN103699523B (en) Product classification method and apparatus
Tarawneh et al. Invoice classification using deep features and machine learning techniques
EP2808827A1 (en) System and method for OCR output verification
WO2017016240A1 (en) Banknote serial number identification method
CN104732413A (en) Intelligent individuation video advertisement pushing method and system
CN109376796A (en) Image classification method based on active semi-supervised learning
CN105389593A (en) Image object recognition method based on SURF
CN106845358A (en) A kind of method and system of handwritten character characteristics of image identification
CN111460961A (en) CDVS-based similarity graph clustering static video summarization method
Gordo et al. Document classification and page stream segmentation for digital mailroom applications
CN111340032A (en) Character recognition method based on application scene in financial field
CN110442736B (en) Semantic enhancer spatial cross-media retrieval method based on secondary discriminant analysis
CN115062186A (en) Video content retrieval method, device, equipment and storage medium
CN114357307A (en) News recommendation method based on multi-dimensional features
CN114495139A (en) Operation duplicate checking system and method based on image
CN110020638A (en) Facial expression recognizing method, device, equipment and medium
CN113362096A (en) Frame advertisement image matching method based on deep learning
CN113657377A (en) Structured recognition method for airplane ticket printing data image
CN109472307A (en) A kind of method and apparatus of training image disaggregated model
TWI761090B (en) Dialogue data processing system and method thereof and computer readable medium
CN104778478A (en) Handwritten numeral identification method

Legal Events

Date Code Title Description
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210907