CN110288013A - Defective label recognition method based on block segmentation and a multi-input Siamese convolutional neural network - Google Patents

Defective label recognition method based on block segmentation and a multi-input Siamese convolutional neural network

Info

Publication number
CN110288013A
CN110288013A (application CN201910537875.8A)
Authority
CN
China
Prior art keywords
block
weight
input
iteration
label image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910537875.8A
Other languages
Chinese (zh)
Inventor
李竹
康迪迪
李文钧
盛庆华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Hangzhou Electronic Science and Technology University
Original Assignee
Hangzhou Electronic Science and Technology University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Electronic Science and Technology University filed Critical Hangzhou Electronic Science and Technology University
Priority to CN201910537875.8A priority Critical patent/CN110288013A/en
Publication of CN110288013A publication Critical patent/CN110288013A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/211 Selection of the most significant subset of features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a defective label recognition method based on block segmentation and a multi-input Siamese convolutional neural network. Step S1: divide the label image into blocks. Step S2: train the multi-input Siamese convolutional neural network on the block-level label image data set to obtain a trained multi-input Siamese residual network model. Step S3: recognize and classify defective labels with the trained model. With the technical solution of the invention, training on the block-level label data set determines the category to which a defect belongs, and the AdaBoost algorithm is combined to classify correctly. This greatly reduces the computation and complexity of defective label recognition while also effectively improving the accuracy of defective label recognition and classification.

Description

Defective label recognition method based on block segmentation and a multi-input Siamese convolutional neural network
Technical field
The present invention relates to the field of defect detection and recognition in industrial production, and more particularly to a defective label recognition method based on block segmentation and a multi-input Siamese convolutional neural network.
Background technique
With the development of society, many commodities on the market now carry labels. A label marks key product information and plays an ever-larger role in people's work and life, and the quality problems of labels receive increasing attention. During production, however, labels are affected by factors such as the production process and mechanical precision, so the labels produced often exhibit quality problems such as label breakage and printing defects, including extra printed characters, missing characters, missing strokes, and scratches. The label defect inspection step is therefore essential. At the same time, because defective labels come in many varieties, detecting and classifying them is very difficult. Current defective label detection and recognition mainly uses the following three methods:
1. On industrial production lines, workers inspect label quality by visual comparison, retaining qualified labels and discarding unqualified ones.
The problem is that manual inspection has various drawbacks, such as slow detection speed, low precision, and high cost, and prolonged manual inspection easily causes fatigue.
2. In defective label detection based on difference processing, a reference label is prepared and the label under test is differenced against it to detect defects in the label image.
The problem is that an improperly prepared reference label causes false detections, as do differences in label content and uneven illumination.
3. Defective label detection based on frequency-domain processing exploits the higher frequency of defect signals to detect defective labels.
The problem is that false detections occur in label image regions whose frequency content resembles that of defect information, and the method places requirements on the label content.
There is therefore an urgent need for a defective label detection method for industrial production lines. Such a method should inspect the printed content of labels, automatically and accurately identify the position and category of defective labels, and allow user-defined extension of defect types, giving the system high flexibility.
Summary of the invention
In view of this, it is necessary to provide a defective label recognition method based on block segmentation and a multi-input Siamese convolutional neural network. Training on the block-level label data set determines the category of a defect and the position where it occurs, and the AdaBoost algorithm is combined to classify correctly. This greatly reduces the computation and complexity of defective label recognition while also effectively improving the accuracy of defective label recognition and classification. On this basis, label images of user-defined defect types can be used for training, so the flexibility and scalability of the system are good.
To overcome the drawbacks of the prior art, the technical solution of the invention is as follows:
A defective label recognition method based on block segmentation and a multi-input Siamese convolutional neural network, comprising the following steps:
Step S1: divide the label image into blocks;
Step S2: train the multi-input Siamese convolutional neural network on the block-level label image data set to obtain a trained multi-input Siamese residual network model;
Step S3: recognize and classify defective labels with the trained network model.
Step S1 further comprises:
Step S11: obtain the label image data set and cut the images into blocks;
Step S12: store the cut label image blocks in the block-level label image database.
Step S11 further comprises:
Step S111: each label image in the data set has width w and height h;
Step S112: cut the label image into blocks of width and height n;
Step S113: each label image is thus divided into (w/n)*(h/n) blocks.
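The block cutting of steps S111 through S113 can be sketched in Python; this is an illustrative helper (the function name and the NumPy-array representation are assumptions, and the image dimensions are assumed to be exact multiples of n):

```python
import numpy as np

def split_blocks(label_img, n):
    """Cut a label image of height h and width w into (w/n)*(h/n) blocks of size n x n."""
    h, w = label_img.shape[:2]
    assert h % n == 0 and w % n == 0, "block size must divide the image dimensions"
    return [label_img[i:i + n, j:j + n]
            for i in range(0, h, n)
            for j in range(0, w, n)]

# Example: a 6x4 "label image" cut into 2x2 blocks gives (6/2)*(4/2) = 6 blocks.
img = np.arange(24).reshape(6, 4)
blocks = split_blocks(img, 2)
print(len(blocks))  # 6
```

Each block would then be stored in the block-level label image database of step S12 and processed independently, which is what localizes defects to a block position.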
Step S2 further comprises:
Step S21: obtain the block-level label image data set from the block-level label image database;
Step S22: train the multi-input Siamese residual network model on the block-level label image data set.
Step S22 further comprises:
Step S221: initialize the weight distribution of the training set. At the start every training sample is given the same weight 1/N, which can be expressed as:
D_1 = (w_{11}, w_{12}, ..., w_{1i}, ..., w_{1N}), w_{1i} = 1/N,
where D_1 denotes the weighted data set of the first round of iteration; in the subscript of w, the first index is the iteration round, the second is the sample index, and N is the number of samples.
Step S222: suppose M rounds of iteration are to be performed, i.e. M optimal weak classifiers are selected; iteration then begins, with m denoting the iteration number:
for (int m = 1; m <= M; m++);
Step S223: learn on the training data set with weight distribution D_m to obtain an optimal weak classifier
G_m(x): X → {-1, +1},
where G_m(x) denotes the classifier learned in round m on the training set with weight distribution D_m, whose output lies in {-1, +1}.
Step S224: choose the weak classifier with the lowest current error rate as the m-th base classifier G_m, and compute its classification error rate on the training data set:
e_m = Σ_{i=1}^{N} w_{mi} · I(G_m(x_i) ≠ y_i),
where e_m is the error rate, x_i is the input sample, y_i is its class, and w_{mi} is the weight of the i-th sample in round m of iteration.
Step S225: compute α_m, the weight of G_m(x) in the final classifier:
α_m = (1/2) · ln((1 - e_m) / e_m).
Step S226: update the weight distribution of the training data set for round m+1:
D_{m+1} = (w_{m+1,1}, w_{m+1,2}, ..., w_{m+1,i}, ..., w_{m+1,N}),
w_{m+1,i} = (w_{mi} / Z_m) · exp(-α_m · y_i · G_m(x_i)),
where D_{m+1} denotes the data set in iteration m+1, w_{m+1,i} its weights, Z_m the normalization constant, α_m the weight of G_m(x) in the final classifier, and G_m the weak classifier with the lowest error rate in round m.
After the weight update, the weights of samples misclassified by base classifier G_m(x) increase, and the weights of correctly classified samples decrease.
Step S227: after M rounds of iteration, the weak classifiers are combined into a strong classifier, which is the final form of the trained model:
G(x) = sign(Σ_{m=1}^{M} α_m · G_m(x)).
sign is the sign function, returning 1 for inputs greater than 0, -1 for inputs less than 0, and 0 for 0. G_m(x) is the classifier obtained in round m of iteration, and α_m is its weight in the final classifier.
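The iteration in steps S221 through S227 is the standard AdaBoost loop. A minimal NumPy sketch follows; the names are hypothetical, and simple threshold stumps stand in for the Siamese-network weak classifiers the patent actually trains:

```python
import numpy as np

def adaboost_train(X, y, weak_learners, M):
    """Standard AdaBoost: y in {-1,+1}; weak_learners are candidate classifiers f(X) -> {-1,+1}."""
    N = len(y)
    w = np.full(N, 1.0 / N)                      # S221: uniform initial weights 1/N
    ensemble = []
    for m in range(M):                           # S222: M rounds of iteration
        errors = [np.sum(w * (f(X) != y)) for f in weak_learners]
        best = int(np.argmin(errors))            # S223/S224: lowest weighted error
        e_m, G_m = errors[best], weak_learners[best]
        alpha = 0.5 * np.log((1.0 - e_m) / max(e_m, 1e-10))  # S225: classifier weight
        w = w * np.exp(-alpha * y * G_m(X))      # S226: raise misclassified weights
        w /= w.sum()                             # normalise by Z_m
        ensemble.append((alpha, G_m))
    # S227: strong classifier = sign of the weighted vote
    return lambda Xq: np.sign(sum(a * f(Xq) for a, f in ensemble))

# Toy 1-D example with two threshold stumps.
X = np.array([-2.0, -1.0, 1.0, 2.0])
y = np.array([-1, -1, 1, 1])
stumps = [lambda X: np.where(X > 0, 1, -1), lambda X: np.where(X > 1.5, 1, -1)]
strong = adaboost_train(X, y, stumps, M=3)
```

The `max(e_m, 1e-10)` guard is an implementation detail added here to keep α_m finite when a weak classifier is perfect on the weighted set; the patent's formulas do not address that edge case.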
Compared with the prior art, the invention has the following benefits:
Efficiency: 1. The invention performs block segmentation on the label image, which captures the defect features of the label image efficiently even when the defect signal is weak, and quickly locates defect positions. 2. The invention stores label image information in a block-level label image database and trains on the label image data set with a deep multi-input Siamese convolutional neural network built from residual networks, yielding an efficient multi-input Siamese residual network model. This improves classification performance on defective label images, remedies the tendency of complex defective label images to cause false detections, and raises recognition efficiency.
Accuracy: 1. The invention trains on the block-level label data set and builds an accurate multi-input Siamese residual network model. Because the Siamese network shares parameters across its multiple inputs, it extracts features of the same category, and their feature-vector mappings are highly similar; the defective label image is finally assigned to the most similar reference label category. This remedies the tendency of existing methods to extract useless features and misclassify, improving classification accuracy. 2. Combining an improved AdaBoost algorithm for label images, misclassified label images are trained further, which further improves the accuracy of defective label class prediction and remedies the poor classification accuracy of existing defective label detection and recognition techniques.
Scalability: the invention trains on data with a multi-input Siamese convolutional neural network; one input of the Siamese network serves as the label under test and the remaining inputs are reference labels, and classification is by similarity. This not only increases classification accuracy but also accommodates user-defined defective label types, handling the wide variety of defective labels and defects that are hard to recognize.
Description of the drawings
Fig. 1 is a flowchart of the defective label recognition method based on block segmentation and a multi-input Siamese convolutional neural network provided by the invention;
Fig. 2 is a detailed flowchart of the method;
Fig. 3 is a schematic diagram of step S1 of the method;
Fig. 4 is a structure diagram of the multi-input Siamese network model of the method;
Fig. 5 is a structure diagram of the residual network of the method;
Fig. 6 is a detailed flowchart of step S22 of the method.
The following specific embodiments further illustrate the invention in conjunction with the above drawings.
Specific embodiments
The technical solution provided by the invention is described further below with reference to the drawings.
When the label image is large relative to its defect information, useful information is hard to capture during feature extraction, so we perform block segmentation on the label image. To capture deeper defect features, and to guarantee classification and detection based on those deeper features, we use a residual network for feature extraction. To ensure that features of the same class, and the most similar features, are captured, improving classification accuracy, we adopt a Siamese residual network to capture defect features. To further improve classification accuracy, we use a multi-input Siamese network for similarity comparison, and we also retrain misclassified label images with the AdaBoost algorithm. On this basis, the invention provides a defective label recognition method based on block segmentation and a multi-input Siamese convolutional neural network.
Figs. 1 and 2 show the defective label recognition system based on block segmentation and a multi-input Siamese convolutional neural network. In general, the invention comprises three major steps. Step S1: segment the label image into blocks. Step S2: train the multi-input Siamese residual network model on the block-level label image data set. Step S3: classify and recognize defective label images with the trained multi-input Siamese residual network.
Referring to Fig. 3, step S1 is based on the block-level label image database: a label image of width w and height h is divided into (w/n)*(h/n) small label blocks.
Referring to Fig. 4, the Siamese convolutional model used in the invention is composed of multiple residual networks, each corresponding to one input; one input is the label information under test, and the remaining inputs are reference label information.
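The multi-input comparison of Fig. 4 can be illustrated with a toy NumPy sketch. This is a hypothetical stand-in: a single shared linear projection replaces the shared-weight residual towers, and classification picks the reference class whose embedding is nearest to the test embedding:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))      # shared projection: every input uses the same weights

def embed(block):
    """Shared 'tower': map a flattened 4x4 block to an 8-D feature vector."""
    return W @ block.ravel()

def classify(test_block, reference_blocks):
    """Assign the test block to the reference class with the most similar embedding."""
    t = embed(test_block)
    distances = {cls: np.linalg.norm(t - embed(ref))
                 for cls, ref in reference_blocks.items()}
    return min(distances, key=distances.get)

# Hypothetical reference blocks, one per defect class.
refs = {"scratch": rng.standard_normal((4, 4)),
        "missing_print": rng.standard_normal((4, 4))}
# A block identical to a reference maps to distance 0 and is classified as that class.
print(classify(refs["scratch"], refs))
```

Because the projection W is shared across all inputs, similar blocks land near each other in feature space regardless of which input they arrive on; this is the parameter-sharing property the description attributes to the Siamese network.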
Referring to Fig. 5, in the ResNet-34 applied by the invention, a residual module is formed by two convolutional layers plus an identity mapping, which effectively relieves the vanishing-gradient problem and greatly increases the number of network layers.
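The two-convolution residual module described above computes F(x) + x. A minimal single-channel NumPy sketch follows; it is an illustration only, with a hand-rolled "same"-padding convolution standing in for the real ResNet-34 layers:

```python
import numpy as np

def conv2d_same(x, k):
    """Single-channel 2-D cross-correlation with zero 'same' padding."""
    kh, kw = k.shape
    xp = np.pad(x, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.empty_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def residual_block(x, k1, k2):
    """Two convolutions plus the identity shortcut: relu(conv2(relu(conv1(x))) + x)."""
    y = np.maximum(conv2d_same(x, k1), 0.0)   # first conv + ReLU
    y = conv2d_same(y, k2)                    # second conv
    return np.maximum(y + x, 0.0)             # add the identity mapping, then ReLU

x = np.ones((4, 4))
zero_k = np.zeros((3, 3))
# With zero kernels F(x) = 0, so the block reduces to the identity on non-negative input.
print(np.array_equal(residual_block(x, zero_k, zero_k), x))  # True
```

The identity shortcut is what relieves vanishing gradients: even when the learned branch F(x) contributes nothing, the input passes through unchanged, so gradients always have a direct path backward.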
Referring to Fig. 6, step S22 trains the multi-input Siamese residual network model with the AdaBoost algorithm; the specific steps S221 through S227 are as described in the Summary of the invention above.
The above description of the embodiments is provided only to help in understanding the method of the invention and its core ideas. It should be pointed out that those skilled in the art can make improvements and modifications to the invention without departing from its principle, and such improvements and modifications also fall within the protection scope of the claims of the invention.
The foregoing description of the disclosed embodiments enables those skilled in the art to implement or use the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be realized in other embodiments without departing from the spirit or scope of the invention. Therefore, the invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (1)

1. A defective label recognition method based on block segmentation and a multi-input Siamese convolutional neural network, characterized by comprising the following steps:
Step S1: divide the label image into blocks;
Step S2: train the multi-input Siamese convolutional neural network on the block-level label image data set to obtain a multi-input Siamese residual network model;
Step S3: recognize and classify defective labels with the trained multi-input Siamese residual network model;
wherein step S1 further comprises:
Step S11: obtain the label image database and cut the label images into blocks;
Step S12: store the cut label image blocks in the block-level label image database;
step S11 further comprises:
Step S111: each label image in the database has width w and height h;
Step S112: cut the label image into blocks of width and height n;
Step S113: each label image is thus divided into (w/n)*(h/n) blocks;
step S2 further comprises:
Step S21: obtain the block-level label image data set from the block-level label image database;
Step S22: train the multi-input Siamese residual network model on the block-level label image data set;
step S22 further comprises:
Step S221: initialize the weight distribution of the training set; at the start every training sample is given the same weight 1/N, which can be expressed as:
D_1 = (w_{11}, w_{12}, ..., w_{1i}, ..., w_{1N}), w_{1i} = 1/N,
where D_1 denotes the weighted data set of the first round of iteration; in the subscript of w, the first index is the iteration round, the second is the sample index, and N is the number of samples;
Step S222: suppose M rounds of iteration are to be performed, i.e. M optimal weak classifiers are selected; iteration then begins, with m denoting the iteration number:
for (int m = 1; m <= M; m++);
Step S223: learn on the training data set with weight distribution D_m to obtain an optimal weak classifier
G_m(x): X → {-1, +1},
where G_m(x) denotes the classifier learned in round m on the training set with weight distribution D_m, whose output lies in {-1, +1};
Step S224: choose the weak classifier with the lowest current error rate as the m-th base classifier G_m, and compute its classification error rate on the training data set:
e_m = Σ_{i=1}^{N} w_{mi} · I(G_m(x_i) ≠ y_i),
where e_m is the error rate, x_i is the input sample, y_i is its class, and w_{mi} is the weight of the i-th sample in round m of iteration;
Step S225: compute α_m, the weight of G_m(x) in the final classifier:
α_m = (1/2) · ln((1 - e_m) / e_m);
Step S226: update the weight distribution of the training data set for round m+1:
D_{m+1} = (w_{m+1,1}, w_{m+1,2}, ..., w_{m+1,i}, ..., w_{m+1,N}),
w_{m+1,i} = (w_{mi} / Z_m) · exp(-α_m · y_i · G_m(x_i)),
where D_{m+1} denotes the data set in iteration m+1, w_{m+1,i} its weights, Z_m the normalization constant, α_m the weight of G_m(x) in the final classifier, and G_m the weak classifier with the lowest error rate in round m;
after the weight update, the weights of samples misclassified by base classifier G_m(x) increase, and the weights of correctly classified samples decrease;
Step S227: after M rounds of iteration, the weak classifiers are combined into a strong classifier, which is the final form of the trained model:
G(x) = sign(Σ_{m=1}^{M} α_m · G_m(x)),
where sign is the sign function, returning 1 for inputs greater than 0, -1 for inputs less than 0, and 0 for 0; G_m(x) is the classifier obtained in round m of iteration, and α_m is its weight in the final classifier.
CN201910537875.8A 2019-06-20 2019-06-20 Defective label recognition method based on block segmentation and a multi-input Siamese convolutional neural network Pending CN110288013A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910537875.8A CN110288013A (en) 2019-06-20 2019-06-20 Defective label recognition method based on block segmentation and a multi-input Siamese convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910537875.8A CN110288013A (en) 2019-06-20 2019-06-20 Defective label recognition method based on block segmentation and a multi-input Siamese convolutional neural network

Publications (1)

Publication Number Publication Date
CN110288013A true CN110288013A (en) 2019-09-27

Family

ID=68004851

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910537875.8A Pending CN110288013A (en) 2019-06-20 2019-06-20 Defective label recognition method based on block segmentation and a multi-input Siamese convolutional neural network

Country Status (1)

Country Link
CN (1) CN110288013A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110992334A (en) * 2019-11-29 2020-04-10 深圳易嘉恩科技有限公司 Quality evaluation method for DCGAN network generated image
CN111291657A (en) * 2020-01-21 2020-06-16 同济大学 Crowd counting model training method based on difficult case mining and application
CN111325708A (en) * 2019-11-22 2020-06-23 济南信通达电气科技有限公司 Power transmission line detection method and server
CN111709920A (en) * 2020-06-01 2020-09-25 深圳市深视创新科技有限公司 Template defect detection method
CN112907510A (en) * 2021-01-15 2021-06-04 中国人民解放军国防科技大学 Surface defect detection method
CN116128798A (en) * 2022-11-17 2023-05-16 台州金泰精锻科技股份有限公司 Finish forging process for bell-shaped shell forged surface teeth

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104866865A (en) * 2015-05-11 2015-08-26 西南交通大学 DHOG and discrete cosine transform-based overhead line system equilibrium line fault detection method
CN105653450A (en) * 2015-12-28 2016-06-08 中国石油大学(华东) Software defect data feature selection method based on combination of modified genetic algorithm and Adaboost
CN108074231A (en) * 2017-12-18 2018-05-25 浙江工业大学 Magnetic sheet surface defect detection method based on convolutional neural network
CN108389180A (en) * 2018-01-19 2018-08-10 浙江工业大学 A kind of fabric defect detection method based on deep learning

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104866865A (en) * 2015-05-11 2015-08-26 西南交通大学 DHOG and discrete cosine transform-based overhead line system equilibrium line fault detection method
CN105653450A (en) * 2015-12-28 2016-06-08 中国石油大学(华东) Software defect data feature selection method based on combination of modified genetic algorithm and Adaboost
CN108074231A (en) * 2017-12-18 2018-05-25 浙江工业大学 Magnetic sheet surface defect detection method based on convolutional neural network
CN108389180A (en) * 2018-01-19 2018-08-10 浙江工业大学 A kind of fabric defect detection method based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FIGHTING41LOVE: "Siamese network (孪生神经网络): a simple and magical structure", Jianshu *
PAN_JINQUAN: "Analysis of the AdaBoost algorithm with examples and code (concise and easy to understand)", CSDN *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111325708A (en) * 2019-11-22 2020-06-23 济南信通达电气科技有限公司 Power transmission line detection method and server
CN111325708B (en) * 2019-11-22 2023-06-30 济南信通达电气科技有限公司 Transmission line detection method and server
CN110992334A (en) * 2019-11-29 2020-04-10 深圳易嘉恩科技有限公司 Quality evaluation method for DCGAN network generated image
CN110992334B (en) * 2019-11-29 2023-04-07 四川虹微技术有限公司 Quality evaluation method for DCGAN network generated image
CN111291657A (en) * 2020-01-21 2020-06-16 同济大学 Crowd counting model training method based on difficult case mining and application
CN111709920A (en) * 2020-06-01 2020-09-25 深圳市深视创新科技有限公司 Template defect detection method
CN112907510A (en) * 2021-01-15 2021-06-04 中国人民解放军国防科技大学 Surface defect detection method
CN112907510B (en) * 2021-01-15 2023-07-07 中国人民解放军国防科技大学 Surface defect detection method
CN116128798A (en) * 2022-11-17 2023-05-16 台州金泰精锻科技股份有限公司 Finish forging process for bell-shaped shell forged surface teeth
CN116128798B (en) * 2022-11-17 2024-02-27 台州金泰精锻科技股份有限公司 Finish forging method for bell-shaped shell forging face teeth

Similar Documents

Publication Publication Date Title
CN110288013A (en) Defective label recognition method based on block segmentation and a multi-input Siamese convolutional neural network
CN109101938B (en) Multi-label age estimation method based on convolutional neural network
CN111553127A (en) Multi-label text data feature selection method and device
CN108647595B (en) Vehicle weight identification method based on multi-attribute depth features
CN103699523A (en) Product classification method and device
CN106909946A (en) A kind of picking system of multi-modal fusion
CN111476307B (en) Lithium battery surface defect detection method based on depth field adaptation
CN102385592B (en) Image concept detection method and device
CN104636755A (en) Face beauty evaluation method based on deep learning
KR102362872B1 (en) Method for refining clean labeled data for artificial intelligence training
CN110377727A (en) A kind of multi-tag file classification method and device based on multi-task learning
CN108595558A (en) A kind of image labeling method of data balancing strategy and multiple features fusion
CN112949517B (en) Plant stomata density and opening degree identification method and system based on deep migration learning
CN109299252A (en) The viewpoint polarity classification method and device of stock comment based on machine learning
CN110134777A (en) Problem De-weight method, device, electronic equipment and computer readable storage medium
CN110019820A (en) Main suit and present illness history symptom Timing Coincidence Detection method in a kind of case history
CN116051479A (en) Textile defect identification method integrating cross-domain migration and anomaly detection
CN104978569A (en) Sparse representation based incremental face recognition method
CN109145685A (en) Fruits and vegetables EO-1 hyperion quality detecting method based on integrated study
CN110569495A (en) Emotional tendency classification method and device based on user comments and storage medium
CN116304020A (en) Industrial text entity extraction method based on semantic source analysis and span characteristics
CN115659947A (en) Multi-item selection answering method and system based on machine reading understanding and text summarization
CN106326914A (en) SVM-based pearl multi-classification method
CN106778859A (en) A kind of mark semisupervised classification method and device certainly based on packing density peak value
CN112905793B (en) Case recommendation method and system based on bilstm+attention text classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20190927)