CN109034160B - Automatic identification method for digital instruments with decimal points based on convolutional neural networks - Google Patents
- Publication number
- CN109034160B CN109034160B CN201810734321.2A CN201810734321A CN109034160B CN 109034160 B CN109034160 B CN 109034160B CN 201810734321 A CN201810734321 A CN 201810734321A CN 109034160 B CN109034160 B CN 109034160B
- Authority
- CN
- China
- Prior art keywords
- led
- decimal point
- neural network
- network model
- convolutional neural
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/14—Image acquisition
- G06V30/148—Segmentation of character regions
- G06V30/153—Segmentation of character regions using recognition of characters or words
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Abstract
The invention discloses an automatic identification method for digital instruments with decimal points based on convolutional neural networks, comprising the following steps: the collected digital-instrument LED picture samples are divided into individual LED character pictures, which are preprocessed and fed into a network model for training; the picture to be identified is then input into the trained network model for recognition. The network model consists of an LED character convolutional neural network model and a decimal point convolutional neural network model, and the preprocessing of the LED character pictures comprises an LED digit sample preprocessing step and a decimal point sample preprocessing step. The invention scales each LED character picture containing a decimal point, cuts it into regions, and feeds the regions into the network model for training, thereby converting a regression localization problem into a classification problem. Because the decimal point and the LED characters are recognized by two different networks, the recognition results of the models do not interfere with each other, and network debugging is more flexible.
Description
Technical Field
The invention belongs to the field of computer technology, and in particular relates to the identification of digital electricity meters in pictures; it is applied to the automatic reading of digital electricity meters.
Background
LED digital electricity meters are common in modern instruments. Compared with traditional mechanical meters, they offer high accuracy, low power consumption, small size, and easy readability, and are widely used in industrial fields such as chemical engineering, electronics, and electric power. However, in many cases the readings of these LED meters must still be taken manually, which is not only labor-intensive and inefficient, but in some scenarios dangerous, such as reading LED meters in high-voltage substations.
The Chinese patent application No. 201710195995.5, entitled "An autonomous identification method for a substation patrol robot, and a patrol robot", first binarizes the LED digits and then cuts the characters by projection in the vertical direction to obtain a picture of each single character. After the pictures are scaled to a uniform size, the digit pictures to be predicted are scored by template matching, and the template number with the highest matching score is taken as the recognition result. This method places high demands on the quality of the LED digit picture: when the picture is blurred or tilted, or illumination changes prevent binarization from separating the edges of the LED characters from the background, the subsequent template matching produces errors. The method therefore lacks versatility and robustness.
The Chinese patent application No. 201710195624.7, entitled "Method and system for automatically segmenting and identifying liquid crystal digits of instruments", discloses a method and system for automatic segmentation and recognition of liquid-crystal instrument digits. The picture is filtered with a LoG operator, dilation and erosion are applied after Otsu binarization to obtain a connected graph, and the connected graph is projected horizontally and vertically to cut out single characters. Feature vectors are extracted and the segmented single-character regions are identified against a single-character recognition library. For the decimal point, the image is binarized with a larger threshold so that only a binary image containing the decimal point and noise remains, and the bounding rectangle of the binary region is compared with the previously binarized character regions to determine the relative position of the decimal point. This method has large limitations. In character binarization and segmentation, changes in the illumination environment can make characters indistinguishable from the background after binarization, and the decimal-point localization depends on threshold settings, so the method is neither universal nor stable. Moreover, character segmentation and decimal-point localization rely on hand-crafted prior judgment logic, so the algorithm can fail in complex and variable real environments.
Because LED digital electricity meters are widely used in many fields, they come in different styles, which leads to differences in LED font shapes, font colors, back-panel textures, and so on. It is impossible to set a reasonable threshold for LED font pictures of different styles and colors with conventional training methods (for example, feature extraction + SVM), and the models so trained recognize poor-quality pictures badly.
Disclosure of Invention
The purpose of the invention is as follows: in view of the prior art, an automatic identification method for digital instruments with decimal points based on a convolutional neural network is provided, in which the CNN constructed for LED characters generalizes well across various LED fonts.
The technical scheme is as follows: an automatic identification method for digital instruments with decimal points based on a convolutional neural network comprises the following steps: dividing the collected digital-instrument LED picture samples into individual LED character pictures, preprocessing the LED character pictures and feeding them into a network model for training; then inputting the picture to be recognized into the trained network model for recognition. The network model consists of an LED character convolutional neural network model and a decimal point convolutional neural network model; the LED character picture preprocessing comprises an LED digit sample preprocessing step and a decimal point sample preprocessing step, specifically:
the LED digital sample image preprocessing step comprises the following steps:
a1: labeling the LED character picture;
a2: carrying out data augmentation on the LED character pictures with the labels;
a3: unifying the sizes of all LED character pictures;
a4: graying all LED character pictures, and only keeping brightness information;
a5: packaging the data of all grayscale LED character pictures and dividing them into two data packets, for training and testing the LED character convolutional neural network model;
the decimal point sample picture preprocessing step comprises the following steps:
b1: labeling the LED character picture;
b2: carrying out data augmentation on the LED character pictures with the labels;
b3: unifying the sizes of all LED character pictures;
b4: performing 3×3 segmentation on the character picture; randomly selecting two sub-regions from the first 8 regions and setting their labels to 0; if the LED picture contains a decimal point, setting the label of the 9th region at the lower right corner to 1;
b5: graying all the segmented decimal point sample pictures, and only keeping brightness information;
b6: packaging all decimal point sample pictures and dividing them into two data packets, for training and testing the decimal point convolutional neural network model;
when picture recognition is performed with the trained network model: the image to be recognized is preprocessed and input into the LED character convolutional neural network model; the preprocessed image is then cut 3×3 and the 9th region is fed into the decimal point convolutional neural network model; the position of the decimal point is determined from the order in which the decimal point is successfully detected and recognized; finally, the recognition results of the LED character convolutional neural network model and the decimal point convolutional neural network model are spliced.
Furthermore, the LED character convolutional neural network model and the decimal point convolutional neural network model both adopt the rectified linear unit as the nonlinear activation function, and correspondingly the network weights are initialized in the xavier manner. The rectified linear unit is $y_i = \max(0, x_i)$ and the weight range is $\left[-\sqrt{6/(m+n)},\ \sqrt{6/(m+n)}\right]$, where $y_i$ denotes the nonlinear activation value, $x_i$ the function variable, and $m$ and $n$ denote the numbers of input and output channels of the network layer, respectively.
Furthermore, the Adam network weight update algorithm is adopted in model training to update the exponential moving average of the gradient $m_t$ and of the squared gradient $v_t$; the update rule is:

$m_t = \beta_1 m_{t-1} + (1-\beta_1) g_t$
$v_t = \beta_2 v_{t-1} + (1-\beta_2) g_t^2$
$\hat{m}_t = m_t / (1-\beta_1^t),\quad \hat{v}_t = v_t / (1-\beta_2^t)$
$\theta_{t+1} = \theta_t - \eta\,\hat{m}_t / (\sqrt{\hat{v}_t} + \epsilon)$

where $g_t$ is the gradient, $\hat{m}_t$ is the first-moment estimate of the gradient, $\hat{v}_t$ the second-moment estimate, $\eta$ the learning rate, $\beta_1$, $\beta_2$ and $\epsilon$ the hyperparameters, and $t$ the update time step.
Further, the LED character convolutional neural network model comprises 4 convolutional layers, 4 max-pooling layers and 2 fully-connected layers, and the decimal point convolutional neural network model comprises 3 convolutional layers, 1 max-pooling layer and 2 fully-connected layers.
Further, the LED character convolutional neural network model and the decimal point convolutional neural network model both use the Softmax loss as the basis for updating the network weights; the Softmax loss function is $L = -\sum_{i=1}^{T} y_i \log s_i$ with $s_i = e^{a_i} / \sum_{j=1}^{T} e^{a_j}$, where $T$ is the number of network classes, $y_i$ is the label value of the sample, $s_i$ is the predicted value computed by the network, and $a_i$ is the $i$-th element of the network output vector, with $j$ indexing the elements of that vector.
Further, the training effect of the network is judged in model training according to the network's current prediction accuracy, specifically:

$\mathrm{Accuracy} = \frac{1}{\mathrm{BatchSize}} \sum_{k=1}^{\mathrm{BatchSize}} \mathbb{1}\left(\arg\max(y^{(k)}) = \arg\max(s^{(k)})\right)$

where $\arg\max(y^{(k)})$ takes the index of the maximum of the label vector $y^{(k)}$, $\arg\max(s^{(k)})$ takes the most probable label value of the network prediction $s^{(k)}$, Accuracy denotes the accuracy, and BatchSize denotes the batch size.
Beneficial effects: compared with methods that binarize the image and then apply template matching, or that extract feature values and feed them into an SVM or BP network for recognition, the present method of recognizing LED characters and decimal points with convolutional neural networks has high recognition accuracy, simple preprocessing, universality, and strong stability. It can handle various real-world illumination conditions and different styles of LED fonts and decimal points. Applied to scenarios such as instrument monitoring and substation inspection, it can greatly reduce labor costs and improve detection efficiency.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of a decimal point 3x3 cut;
FIG. 3 is a schematic diagram of a model of a convolution neural network for LED characters;
FIG. 4 is a schematic diagram of the decimal point convolutional neural network model.
Detailed Description
The invention is further explained below with reference to the drawings.
As shown in fig. 1, an automatic identification method for digital instruments with decimal points based on a convolutional neural network comprises the following steps: the collected digital-instrument LED picture samples are divided into individual LED character pictures, which are preprocessed and fed into a network model for training; the picture to be recognized is then input into the trained network model for recognition. The network model consists of an LED character convolutional neural network model and a decimal point convolutional neural network model. The preprocessing of the LED character pictures comprises an LED digit sample preprocessing step and a decimal point sample preprocessing step, specifically:
the LED digital sample image preprocessing step comprises the following steps:
a1: labeling the LED character pictures, i.e. assigning a class to each cut-out single character. The characters 0-9, A, B and C, unlit characters and abnormal characters need to be recognized; unlit and abnormal characters are merged into one class, giving 14 labels in total;
a2: carrying out data augmentation on the LED character pictures with the labels;
a3: unifying the sizes of all the LED character pictures to 48 x 48;
a4: graying all LED character pictures, and only keeping brightness information;
a5: packaging the data of all grayscale LED character pictures and dividing them into a training data packet and a testing data packet, for training and testing the LED character convolutional neural network model.
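Step a4 keeps only the brightness information. A minimal sketch of one plausible graying step, assuming the standard BT.601 luma weights (the text states only that brightness is kept, not which formula is used; `to_grayscale` is an illustrative name):

```python
def to_grayscale(rgb_pixels):
    """Convert a nested list of (R, G, B) pixels to luminance values.

    Uses the common BT.601 luma weights (an assumption -- the patent
    only says brightness information is kept).
    """
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_pixels]

# A 1x2 "image": one white pixel and one pure-red pixel.
img = [[(255, 255, 255), (255, 0, 0)]]
print(to_grayscale(img))  # [[255, 76]]
```

Any luminance formula would do here; the essential point is that the networks receive single-channel input.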
The decimal point sample picture preprocessing step comprises the following steps:
b1: labeling the LED character picture, wherein the label containing decimal points is 1, and the label without decimal points is 0;
b2: carrying out data augmentation on the LED character pictures with the labels;
b3: unifying the sizes of all the LED character pictures to 48 x 48;
b4: performing 3×3 segmentation on the 48×48 character picture. The first eight segmentation regions do not contain the decimal point; two sub-regions are randomly selected from these first 8 regions and their labels are set to 0. If the LED picture contains a decimal point, the label of the 9th region at the lower right corner is set to 1. Randomly taking 2 of the first 8 regions keeps the ratio of positive to negative samples from diverging too much, and the random selection ensures the richness of the negative samples.
b5: graying all the segmented decimal point sample pictures, i.e. the positive samples formed by the 9th region after segmentation of all sample pictures, keeping only the brightness information;
b6: packaging all the decimal point sample pictures and dividing them into two data packets, for training and testing the decimal point convolutional neural network model.
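Steps b4–b6 above can be sketched as follows (a minimal illustration; `split_3x3` and `make_decimal_samples` are our names, not from the patent):

```python
import random

def split_3x3(img):
    """Split a 48x48 picture (nested list of pixels) into nine 16x16
    sub-regions in row-major order; index 8 is the bottom-right region,
    where the decimal point sits."""
    assert len(img) == 48 and all(len(row) == 48 for row in img)
    return [[row[cx * 16:(cx + 1) * 16] for row in img[cy * 16:(cy + 1) * 16]]
            for cy in range(3) for cx in range(3)]

def make_decimal_samples(img, has_dot):
    """Per step b4: two random negatives (label 0) from the first 8 regions,
    plus the 9th region as a positive (label 1) when a decimal point exists."""
    regions = split_3x3(img)
    samples = [(r, 0) for r in random.sample(regions[:8], 2)]
    if has_dot:
        samples.append((regions[8], 1))
    return samples

img = [[x + 48 * y for x in range(48)] for y in range(48)]
samples = make_decimal_samples(img, has_dot=True)
print(len(samples), [lab for _, lab in samples])  # 3 [0, 0, 1]
```

Taking only 2 of the 8 dot-free regions, as the text explains, keeps the negative class from overwhelming the single positive region per picture.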
When the trained network model is used for carrying out pictures: the method comprises the steps of carrying out scaling and graying preprocessing (Resize + gray) on an image to be recognized, inputting an LED character convolution neural network model, carrying out 3x3 cutting on the preprocessed image to be recognized, taking a 9 th area, sending the area into a decimal point convolution neural network model, determining the position of a decimal point according to the sequence of successful detection and recognition of the decimal point, and finally splicing the recognition results of the LED character convolution neural network model and the decimal point convolution neural network model. For example, 12.34, if the decimal point is successfully recognized in the second character region, the decimal point is located at the second position, and the final concatenation result is (LED: 1234, Dot:2) - > 12.34.
The LED character convolutional neural network model and the decimal point convolutional neural network model both adopt the Rectified Linear Unit (ReLU) as the nonlinear activation function, and correspondingly the network weights are initialized in the xavier manner to ensure that the network can train normally. The rectified linear unit is $y_i = \max(0, x_i)$ and the weight range is $\left[-\sqrt{6/(m+n)},\ \sqrt{6/(m+n)}\right]$, where $y_i$ denotes the nonlinear activation value, $x_i$ the function variable, and $m$ and $n$ denote the numbers of input and output channels of the network layer, respectively.
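Both formulas can be written down directly (a sketch under the stated definitions; function names are ours):

```python
import math
import random

def relu(x):
    """Rectified linear unit: y_i = max(0, x_i)."""
    return max(0.0, x)

def xavier_weight(m, n):
    """One weight drawn uniformly from [-sqrt(6/(m+n)), sqrt(6/(m+n))],
    with m input channels and n output channels of the layer."""
    limit = math.sqrt(6.0 / (m + n))
    return random.uniform(-limit, limit)

print(relu(-2.0), relu(3.5))  # 0.0 3.5
limit = math.sqrt(6.0 / (20 + 50))
print(-limit <= xavier_weight(20, 50) <= limit)  # True
```

The xavier limit shrinks as the layer gets wider, which keeps activation variance roughly constant across layers, the usual motivation for this initializer.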
In model training, the Adam (adaptive moment estimation) network weight update algorithm, which adaptively modifies the learning rate of each parameter, is adopted to update the exponential moving average of the gradient $m_t$ and of the squared gradient $v_t$. The update rule is:

$m_t = \beta_1 m_{t-1} + (1-\beta_1) g_t$
$v_t = \beta_2 v_{t-1} + (1-\beta_2) g_t^2$
$\hat{m}_t = m_t / (1-\beta_1^t),\quad \hat{v}_t = v_t / (1-\beta_2^t)$
$\theta_{t+1} = \theta_t - \eta\,\hat{m}_t / (\sqrt{\hat{v}_t} + \epsilon)$

where $g_t$ is the gradient at step $t$, $\hat{m}_t$ is the first-moment estimate of the gradient, $\hat{v}_t$ the second-moment estimate, $\eta$ the learning rate, $t$ the update time step, and $\beta_1$, $\beta_2$, $\epsilon$ the hyperparameters; $\beta_1$ defaults to 0.9, $\beta_2$ to 0.999, and $\epsilon$ to 10e-8.
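For a single scalar parameter, one Adam step with the defaults above looks like this (a sketch of standard Adam, not code from the patent):

```python
import math

def adam_step(theta, g, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=10e-8):
    """One Adam update: moving averages of the gradient and squared
    gradient, bias correction, then the parameter step."""
    m = b1 * m + (1 - b1) * g        # m_t: first-moment moving average
    v = b2 * v + (1 - b2) * g * g    # v_t: second-moment moving average
    m_hat = m / (1 - b1 ** t)        # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)        # bias-corrected second moment
    theta -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

theta, m, v = adam_step(1.0, g=0.5, m=0.0, v=0.0, t=1)
print(round(theta, 6))  # 0.999 -- the first step moves theta by almost exactly lr
```

Note how the bias correction matters at $t = 1$: without it, $m_1 = 0.05$ would badly underestimate the gradient 0.5.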
The LED character convolutional neural network model comprises 4 convolutional layers, 4 max-pooling layers and 2 fully-connected layers; its specific structure is as follows:
| Type | Configuration | Output size |
| --- | --- | --- |
| Input | 48×48 gray-scale image | 48×48×1 |
| Convolution | #maps: 20, k: 3×3, s: 1, p: 1 | 48×48×20 |
| MaxPooling | window: 2×2, s: 2 | 24×24×20 |
| Convolution | #maps: 50, k: 3×3, s: 1, p: 1 | 24×24×50 |
| MaxPooling | window: 2×2, s: 2 | 12×12×50 |
| Convolution | #maps: 128, k: 3×3, s: 1, p: 1 | 12×12×128 |
| MaxPooling | window: 2×2, s: 2 | 6×6×128 |
| Convolution | #maps: 256, k: 3×3, s: 1 | 6×6×256 |
| MaxPooling | window: 2×2, s: 2 | 3×3×256 |
| Fully-Connected | #maps: 1024 | 1024×1 |
| Fully-Connected | #maps: 14 | 14×1 |
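The spatial sizes in the table follow from the usual output-size formulas; a quick trace (assuming the fourth convolution also uses padding 1, which its 6×6 output implies even though no `p` is listed for it):

```python
def conv_out(size, k, s, p):
    """Output side length of a square convolution."""
    return (size + 2 * p - k) // s + 1

def pool_out(size, w, s):
    """Output side length of a square pooling window."""
    return (size - w) // s + 1

side = 48
for _ in range(4):  # four conv (k=3, s=1, p=1) + 2x2/stride-2 pooling stages
    side = conv_out(side, k=3, s=1, p=1)  # 3x3 conv with padding keeps the size
    side = pool_out(side, w=2, s=2)       # pooling halves it
print(side)               # 3, matching the 3x3x256 row before the FC layers
print(side * side * 256)  # 2304 features flattened into the 1024-unit FC layer
```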
The decimal point convolutional neural network model comprises 3 convolutional layers, 1 max-pooling layer and 2 fully-connected layers.
The LED character convolutional neural network model and the decimal point convolutional neural network model both adopt the Softmax loss as the basis for updating the network weights. The Softmax loss function is $L = -\sum_{i=1}^{T} y_i \log s_i$ with $s_i = e^{a_i} / \sum_{j=1}^{T} e^{a_j}$, where $T$ is the number of network classes, i.e. the length of the network output; $y$ is the label vector of the sample, of length $T$; $s$ is the prediction vector computed by the network, of length $T$; and $a_i$ is the $i$-th element of the network output vector, the index $j$ running over all its elements ($j = 0$ denoting the 0th element). Softmax computes the loss of the network: the network weights are updated with the Adam algorithm using the Softmax loss as the criterion.
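A numeric sketch of the loss in pure Python, following the one-hot convention described above (function names are ours):

```python
import math

def softmax(a):
    """s_i = e^{a_i} / sum_j e^{a_j}."""
    exps = [math.exp(x) for x in a]
    total = sum(exps)
    return [e / total for e in exps]

def softmax_loss(y, a):
    """L = -sum_i y_i * log(s_i) for a one-hot label y and logits a."""
    return -sum(yi * math.log(si) for yi, si in zip(y, softmax(a)))

y = [0.0, 1.0, 0.0]                                # one-hot label, class 1 of T=3
print(round(softmax_loss(y, [1.0, 1.0, 1.0]), 4))  # uniform logits -> ln 3 ~ 1.0986
print(round(softmax_loss(y, [0.0, 9.0, 0.0]), 4))  # confident and correct -> near 0
```

The loss is high when the network spreads probability evenly and approaches zero as it concentrates probability on the correct class.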
In the training process, the training effect of the network is judged by the network loss and the network's current prediction accuracy, specifically:

$\mathrm{Accuracy} = \frac{1}{\mathrm{BatchSize}} \sum_{k=1}^{\mathrm{BatchSize}} \mathbb{1}\left(\arg\max(y^{(k)}) = \arg\max(s^{(k)})\right)$

where $\arg\max(y^{(k)})$ takes the index of the maximum of the label vector. The label is vectorized before being compared with the prediction: assuming 14 classes, label 3 becomes $[0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]$, and label 3 can be recovered with argmax, with indices starting from 0. $\arg\max(s^{(k)})$ gives the most probable label value of the network prediction $s^{(k)}$; Accuracy denotes the accuracy and BatchSize the batch size.
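The accuracy check reduces to comparing argmax of the one-hot label with argmax of the prediction over a batch (a sketch; names are ours):

```python
def argmax(vec):
    """Index of the maximum element, ties going to the first occurrence."""
    return max(range(len(vec)), key=vec.__getitem__)

def batch_accuracy(labels, preds):
    """Fraction of samples whose predicted class matches the label class."""
    hits = sum(argmax(y) == argmax(s) for y, s in zip(labels, preds))
    return hits / len(labels)

labels = [[0, 0, 0, 1], [1, 0, 0, 0]]                 # one-hot labels 3 and 0
preds = [[0.1, 0.1, 0.1, 0.7], [0.2, 0.5, 0.2, 0.1]]  # right, then wrong
print(batch_accuracy(labels, preds))  # 0.5
```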
In the prior art, two training modes are available for decimal-point localization and recognition: classification and regression. Classification: picture samples containing the decimal point are labeled 1, and samples without it are labeled 0. Regression: the picture sample containing the decimal point is scaled to 48×48, and the coordinates of the decimal point in the picture are used as label data. If the classification method is applied directly, the training and recognition effect is very poor because the decimal point occupies too small a share of the whole LED character picture; the regression method, in turn, is hard to train and hard to apply.
The invention scales the LED character picture containing the decimal point to 48×48, then cuts it 3×3 into nine 16×16 sub-pictures, with the decimal point located in the 9th cut region. The label of the segmentation region containing the decimal point is set to 1, and the labels of the other regions without it are set to 0. The sub-pictures are fed into a CNN for training; the regression localization problem is thus converted into a classification problem. Because the decimal point and the LED characters are recognized by two different networks, the model recognition results do not interfere with each other and network debugging is more flexible.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and these modifications and improvements should also be regarded as falling within the protection scope of the present invention.
Claims (6)
1. An automatic identification method for digital instruments with decimal points based on a convolutional neural network, characterized by comprising the following steps: dividing the collected digital-instrument LED picture samples into individual LED character pictures, preprocessing the LED character pictures and feeding them into a network model for training; inputting the picture to be recognized into the trained network model for recognition; wherein the network model consists of an LED character convolutional neural network model and a decimal point convolutional neural network model, and the LED character picture preprocessing comprises an LED digit sample preprocessing step and a decimal point sample preprocessing step, specifically:
the LED digital sample image preprocessing step comprises the following steps:
a1: labeling the LED character picture;
a2: carrying out data augmentation on the LED character pictures with the labels;
a3: unifying the sizes of all LED character pictures;
a4: graying all LED character pictures, and only keeping brightness information;
a5: packaging the data of all grayscale LED character pictures and dividing them into two data packets, for training and testing the LED character convolutional neural network model;
the decimal point sample picture preprocessing step comprises the following steps:
b1: labeling the LED character picture;
b2: carrying out data augmentation on the LED character pictures with the labels;
b3: unifying the sizes of all LED character pictures;
b4: performing 3×3 segmentation on the character picture; randomly selecting two sub-regions from the first 8 regions and setting their labels to 0; if the LED picture contains a decimal point, setting the label of the 9th region at the lower right corner to 1;
b5: graying all the segmented decimal point sample pictures, and only keeping brightness information;
b6: packaging all decimal point sample pictures and dividing them into two data packets, for training and testing the decimal point convolutional neural network model;
when picture recognition is performed with the trained network model: the image to be recognized is preprocessed and input into the LED character convolutional neural network model; the preprocessed image is then cut 3×3 and the 9th region is fed into the decimal point convolutional neural network model; the position of the decimal point is determined from the order in which the decimal point is successfully detected and recognized; finally, the recognition results of the LED character convolutional neural network model and the decimal point convolutional neural network model are spliced.
2. The automatic identification method for digital instruments with decimal points based on a convolutional neural network according to claim 1, characterized in that the LED character convolutional neural network model and the decimal point convolutional neural network model both adopt the rectified linear unit as the nonlinear activation function, and the network weights are correspondingly initialized in the xavier manner; the rectified linear unit is $y_i = \max(0, x_i)$ and the weight range is $\left[-\sqrt{6/(m+n)},\ \sqrt{6/(m+n)}\right]$, where $y_i$ denotes the nonlinear activation value, $x_i$ the function variable, and $m$ and $n$ denote the numbers of input and output channels of the network layer, respectively.
3. The automatic identification method for digital instruments with decimal points based on a convolutional neural network according to claim 2, characterized in that the Adam network weight update algorithm is adopted in model training to update the exponential moving average of the gradient $m_t$ and of the squared gradient $v_t$; the update rule is:

$m_t = \beta_1 m_{t-1} + (1-\beta_1) g_t$
$v_t = \beta_2 v_{t-1} + (1-\beta_2) g_t^2$
$\hat{m}_t = m_t / (1-\beta_1^t),\quad \hat{v}_t = v_t / (1-\beta_2^t)$
$\theta_{t+1} = \theta_t - \eta\,\hat{m}_t / (\sqrt{\hat{v}_t} + \epsilon)$

where $g_t$ is the gradient, $\hat{m}_t$ is the first-moment estimate of the gradient, $\hat{v}_t$ the second-moment estimate, $\eta$ the learning rate, $\beta_1$, $\beta_2$ and $\epsilon$ the hyperparameters, and $t$ the update time step.
4. The automatic identification method for digital instruments with decimal points based on a convolutional neural network, characterized in that the LED character convolutional neural network model comprises 4 convolutional layers, 4 max-pooling layers and 2 fully-connected layers, and the decimal point convolutional neural network model comprises 3 convolutional layers, 1 max-pooling layer and 2 fully-connected layers.
5. The automatic identification method for digital instruments with decimal points based on a convolutional neural network according to claim 3, characterized in that the LED character convolutional neural network model and the decimal point convolutional neural network model both adopt the Softmax loss as the basis for updating the network weights; the Softmax loss function is $L = -\sum_{i=1}^{T} y_i \log s_i$ with $s_i = e^{a_i} / \sum_{j=1}^{T} e^{a_j}$, where $T$ is the number of network classes, $y_i$ is the label value of the sample, $s_i$ is the predicted value computed by the network, and $a_i$ is the $i$-th element of the network output vector, with $j$ indexing the elements of that vector.
6. The automatic identification method for mixed decimal point digital instruments based on a convolutional neural network as claimed in claim 5, wherein in model training the training effect of the network is judged according to the current prediction accuracy of the network, specifically:

Accuracy = (1 / BatchSize) · Σ_{k=1}^{BatchSize} [ argmax_i(y_i) = argmax_i(s_i) ]

where argmax_i(y_i) represents the index of the maximum element of the label vector y; argmax_i(s_i) represents the index of the maximum predicted value s_i computed by the network, i.e. the most probable label; Accuracy denotes the accuracy, BatchSize denotes the batch size, and [·] equals 1 when the condition holds and 0 otherwise.
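The batch accuracy of claim 6 reduces to comparing argmax indices across a batch; a hedged sketch (the function name `batch_accuracy` and the example scores are illustrative):

```python
import numpy as np

def batch_accuracy(y, s):
    # y: (BatchSize, T) one-hot labels; s: (BatchSize, T) predicted scores.
    # A sample is counted correct when the index of its maximum predicted
    # value equals the index of the maximum element of its label vector.
    correct = np.argmax(y, axis=1) == np.argmax(s, axis=1)
    return float(np.mean(correct))

labels = np.eye(3)[[0, 1, 2, 0]]        # batch of 4 one-hot labels
scores = np.array([[0.9, 0.05, 0.05],
                   [0.1, 0.8, 0.1],
                   [0.7, 0.2, 0.1],     # wrong: true class is 2
                   [0.6, 0.3, 0.1]])
print(batch_accuracy(labels, scores))   # 0.75
```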
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810734321.2A CN109034160B (en) | 2018-07-06 | 2018-07-06 | A kind of mixed decimal point digital instrument automatic identifying method based on convolutional neural networks |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109034160A CN109034160A (en) | 2018-12-18 |
CN109034160B true CN109034160B (en) | 2019-07-12 |
Family
ID=64641172
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810734321.2A Active CN109034160B (en) | 2018-07-06 | 2018-07-06 | A kind of mixed decimal point digital instrument automatic identifying method based on convolutional neural networks |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109034160B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109902751B (en) * | 2019-03-04 | 2022-07-08 | 福州大学 | Dial digital character recognition method integrating convolution neural network and half-word template matching |
CN110033037A (en) * | 2019-04-08 | 2019-07-19 | 重庆邮电大学 | A kind of recognition methods of digital instrument reading |
CN110197227B (en) * | 2019-05-30 | 2023-10-27 | 成都中科艾瑞科技有限公司 | Multi-model fusion intelligent instrument reading identification method |
CN110298347B (en) * | 2019-05-30 | 2022-11-01 | 长安大学 | Method for identifying automobile exhaust analyzer screen based on GrayWorld and PCA-CNN |
CN110346516A (en) * | 2019-07-19 | 2019-10-18 | 精英数智科技股份有限公司 | Fault detection method and device, storage medium |
CN111368824B (en) * | 2020-02-24 | 2022-09-23 | 河海大学常州校区 | Instrument identification method, mobile device and storage medium |
CN112348018A (en) * | 2020-11-16 | 2021-02-09 | 杭州安森智能信息技术有限公司 | Digital display type instrument reading identification method based on inspection robot |
CN112200160A (en) * | 2020-12-02 | 2021-01-08 | 成都信息工程大学 | Deep learning-based direct-reading water meter reading identification method |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101030258A (en) * | 2006-02-28 | 2007-09-05 | 浙江工业大学 | Dynamic character discriminating method of digital instrument based on BP nerve network |
CN101079108A (en) * | 2007-06-29 | 2007-11-28 | 浙江工业大学 | DSP based multiple channel mechanical digital display digital gas meter automatic detection device |
CN105654130A (en) * | 2015-12-30 | 2016-06-08 | 成都数联铭品科技有限公司 | Recurrent neural network-based complex image character sequence recognition system |
CN105809179A (en) * | 2014-12-31 | 2016-07-27 | 中国科学院深圳先进技术研究院 | Pointer type instrument reading recognition method and device |
CN106529537A (en) * | 2016-11-22 | 2017-03-22 | 亿嘉和科技股份有限公司 | Digital meter reading image recognition method |
CN106960208A (en) * | 2017-03-28 | 2017-07-18 | 哈尔滨工业大学 | A kind of instrument liquid crystal digital automatic segmentation and the method and system of identification |
CN108133216A (en) * | 2017-11-21 | 2018-06-08 | 武汉中元华电科技股份有限公司 | The charactron Recognition of Reading method that achievable decimal point based on machine vision is read |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8045798B2 (en) * | 2007-08-30 | 2011-10-25 | Xerox Corporation | Features generation and spotting methods and systems using same |
CN105184265A (en) * | 2015-09-14 | 2015-12-23 | 哈尔滨工业大学 | Self-learning-based handwritten form numeric character string rapid recognition method |
Non-Patent Citations (2)
Title |
---|
An automatic LED instrument character recognition method based on saliency detection; Cheng Min; Information & Computer (Theory Edition); 2018-05-25; pp. 38-41 |
Design and implementation of a real-time digit recognition system for digital display instruments; Cui Xingchen et al.; Computer Engineering and Design; 2010-01-16; pp. 214-217 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109034160B (en) | A kind of mixed decimal point digital instrument automatic identifying method based on convolutional neural networks | |
CN110059694B (en) | Intelligent identification method for character data in complex scene of power industry | |
CN109829914B (en) | Method and device for detecting product defects | |
KR102631031B1 (en) | Method for detecting defects in semiconductor device | |
CN112368657B (en) | Machine learning analysis of pipeline and instrumentation diagrams | |
CN110363252B (en) | End-to-end trend scene character detection and identification method and system | |
CN108918536B (en) | Tire mold surface character defect detection method, device, equipment and storage medium | |
CN110648310B (en) | Weak supervision casting defect identification method based on attention mechanism | |
CN112381788B (en) | Part surface defect increment detection method based on double-branch matching network | |
CN111860348A (en) | Deep learning-based weak supervision power drawing OCR recognition method | |
CN110598693A (en) | Ship plate identification method based on fast-RCNN | |
US20210214765A1 (en) | Methods and systems for automated counting and classifying microorganisms | |
CN112381175A (en) | Circuit board identification and analysis method based on image processing | |
CN111680753A (en) | Data labeling method and device, electronic equipment and storage medium | |
CN111369526B (en) | Multi-type old bridge crack identification method based on semi-supervised deep learning | |
CN110866915A (en) | Circular inkstone quality detection method based on metric learning | |
CN110659637A (en) | Electric energy meter number and label automatic identification method combining deep neural network and SIFT features | |
CN113706523A (en) | Method for monitoring belt deviation and abnormal operation state based on artificial intelligence technology | |
CN111161213A (en) | Industrial product defect image classification method based on knowledge graph | |
CN112183639A (en) | Mineral image identification and classification method | |
CN114241469A (en) | Information identification method and device for electricity meter rotation process | |
CN111274863B (en) | Text prediction method based on text mountain probability density | |
CN116259008A (en) | Water level real-time monitoring method based on computer vision | |
CN116071294A (en) | Optical fiber surface defect detection method and device | |
CN114998215A (en) | Method and system for detecting abnormal paint spraying on surface of sewing machine |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
PP01 | Preservation of patent right | ||
Effective date of registration: 20220517 Granted publication date: 20190712 |
PD01 | Discharge of preservation of patent | ||
Date of cancellation: 20230314 Granted publication date: 20190712 |