CN109685030A - Mug rim defect detection and classification method based on convolutional neural network - Google Patents

Mug rim defect detection and classification method based on convolutional neural network Download PDF

Info

Publication number
CN109685030A
CN109685030A
Authority
CN
China
Prior art keywords
image
convolutional neural
mug
cup
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811631744.8A
Other languages
Chinese (zh)
Inventor
李东洁
李若昊
李东阁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin University of Science and Technology
Original Assignee
Harbin University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin University of Science and Technology filed Critical Harbin University of Science and Technology
Priority to CN201811631744.8A priority Critical patent/CN109685030A/en
Publication of CN109685030A publication Critical patent/CN109685030A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • G06F18/24137Distances to cluster centroïds
    • G06F18/2414Smoothing the distance, e.g. radial basis function networks [RBFN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • G06T7/0008Industrial image inspection checking presence/absence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/32Normalisation of the pattern dimensions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a mug rim defect detection and classification method based on a convolutional neural network, comprising the following steps: A, collecting mug rim image information with an image acquisition system; B, removing noise from the collected images and expanding the data set using OpenCV; C, labelling defects on the mug rim images with labelImg and uniformly formatting the labelled training-set images to a fixed size: 2M*2M; D, training the convolutional neural network to be trained with the formatted training set; E, extracting image features from the defect-labelled images with the trained convolutional neural network model; F, the region proposal network generating equal numbers of positive-sample and negative-sample candidate boxes from the image features; G, classifying the detection targets in the proposed regions. The whole system thereby completes mug rim defect detection and classification. The invention can be effectively used for mug rim defect detection and classification, improves the degree of automation and the efficiency of detection, reduces the influence of human factors on the detection process, and lightens the labor intensity of workers.

Description

Mug rim defect detection and classification method based on convolutional neural network
Technical Field
The invention relates to the technical field of cup rim defect detection and classification, in particular to a mug rim defect detection and classification method based on a convolutional neural network.
Background
With the continuous development of China's manufacturing industry and the continuous improvement of living standards, the mug has become part of daily life. Mug rim defects mainly refer to spots, gaps and scratches on the rim, and these defects directly affect product sales and the corporate image. It is therefore important to adopt an appropriate defect detection and classification method. Traditional manual inspection suffers from low detection efficiency, high labor intensity, low precision and similar shortcomings. Some researchers have combined computer vision and image processing to detect mug defects by defect comparison, but such defect-image recognition algorithms must be constructed manually: the main features of the target are hand-selected and a suitable classifier is chosen for recognition, which is a significant limitation. For example, candidate-region discrimination, i.e. discriminating segmented candidate regions by shape features, gray-level features and Hu invariant-moment features, requires manually choosing parameters and designing the main defect features. Hand-designed features are not robust to the variability of diverse defects: they suit only specific defect types and can hardly adapt to the automatic recognition and localization of images with varying defect areas, diverse shape types and complex background regions.
In recent years, with the advent of million-image labeled training sets and of GPU-based training algorithms, training complex convolutional network models is no longer a luxury. Compared with traditional hand-crafted feature extraction, a convolutional neural network can learn the features of the target automatically, is well suited to large data sets, can be trained end to end, and performs most of its prediction on the GPU, greatly improving the speed and accuracy of target detection.
Introducing a convolutional neural network into mug rim defect detection and classification is therefore a feasible scheme. Compared with defect comparison it improves the speed and accuracy of target detection; compared with traditional manual inspection it improves the degree of automation and the detection efficiency while reducing the influence of human factors on the detection process and the labor intensity of workers.
Disclosure of Invention
The invention aims to provide a mug rim defect detection and classification method based on a convolutional neural network, so as to solve the problems noted in the background art.
In order to achieve this purpose, the invention provides the following technical scheme: a mug rim defect detection and classification method based on a convolutional neural network, comprising the following steps:
A. collecting mug rim image information with an image acquisition system;
B. removing noise from the collected images and expanding the data set using OpenCV;
C. labelling defects on the mug rim images with labelImg, and uniformly formatting the labelled training-set images to a fixed size: 2M*2M;
D. training the convolutional neural network to be trained with the formatted training set;
E. extracting image features from the defect-labelled images with the trained convolutional neural network model;
F. the region proposal network generating equal numbers of positive-sample and negative-sample candidate boxes from the image features;
G. classifying the detection targets in the target proposal regions.
Preferably, updating the weights of the training network in step D comprises the following steps:
A. adding weight decay to the traditional Adam method;
B. the weight decay is not added to the loss function to participate in the gradient computation; instead, a separate weight-decay step is applied each time the parameters are updated;
C. the parameter update step with weight decay is:

θ_t = θ_{t−1} − α · ( m̂_t / (√v̂_t + ε) + λ·θ_{t−1} )

where α is the learning rate, λ is the weight decay coefficient, λ·θ_{t−1} is the additional weight-decay term, ε is a small constant for numerical stability (default 10⁻⁸), m̂_t is the bias-corrected first moment, v̂_t is the bias-corrected second moment, and the exponential decay rates β₁ and β₂ lie in the interval [0, 1), with recommended defaults of 0.9 and 0.999 respectively.
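The decoupled weight-decay update of step C can be sketched as a single numpy step. A minimal illustration: the function name is ours, and the hyperparameter defaults shown are the standard Adam/weight-decay choices consistent with the values quoted above.

```python
import numpy as np

def adamw_step(theta, grad, m, v, t, alpha=1e-3, beta1=0.9, beta2=0.999,
               eps=1e-8, lam=1e-2):
    """One decoupled-weight-decay Adam update, as described in step C.

    theta: parameter vector; grad: its gradient; m, v: moment estimates;
    t: 1-based step count. Hyperparameter names and defaults are the
    standard Adam choices, assumed to match the patent's improved Adam.
    """
    m = beta1 * m + (1 - beta1) * grad           # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2      # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                 # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)                 # bias-corrected second moment
    # Decay is applied directly to the weights, not added to the loss:
    theta = theta - alpha * (m_hat / (np.sqrt(v_hat) + eps) + lam * theta)
    return theta, m, v
```

Note that the λ·θ term never enters m or v; this is exactly the "attenuation carried out separately at each update" described in step B.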
Preferably, the generation of the positive and negative sample boxes in step F comprises the following steps:
A. sliding a small n×n window (default 3×3) over the generated feature map;
B. predicting k proposal candidate boxes (default k = 9) while generating each small sliding window;
C. mapping each small sliding window to a 256-dimensional feature vector under the ZF network and feeding the vector into two parallel fully connected layers: a box-classification layer and a box-regression layer. The box-classification layer outputs two scores, i.e. the probability that each box corresponds to a target and to a non-target; the box-regression layer outputs four regression parameters (x, y, w, h), the center coordinates, width and height of the box, used to correct the sliding window and obtain the corrected target box;
D. taking candidate boxes whose intersection-over-union (IoU) with the ground-truth box exceeds 0.7 as positive samples, taking those whose IoU is below 0.3 as negative samples, and discarding the remaining candidate boxes.
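The IoU-based sampling rule of step D can be sketched as follows. A minimal illustration for a single ground-truth box; function names and the (x1, y1, x2, y2) box convention are illustrative choices.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / float(area_a + area_b - inter)

def label_candidates(candidates, gt_box, pos_thr=0.7, neg_thr=0.3):
    """Split candidate boxes into positives and negatives per the IoU rule
    of step D; boxes between the two thresholds are discarded."""
    pos = [c for c in candidates if iou(c, gt_box) > pos_thr]
    neg = [c for c in candidates if iou(c, gt_box) < neg_thr]
    return pos, neg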
Preferably, classifying the detection targets in the target proposal regions in step G comprises the following steps:
A. mapping the positive and negative candidate boxes generated in step F onto the final feature map extracted by the convolutional neural network;
B. uniformly formatting each region with the ROI pooling layer into an n×n grid, default n = 7;
C. judging the target category of the obtained proposal-region features with the softmax classifier of the network, adjusting the region candidate boxes assigned to a category a second time with the regression layer of the network, and further correcting their positions;
D. obtaining the final classification result.
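The fixed-size pooling of step B can be illustrated with a simplified ROI max-pooling routine. A single-channel sketch under stated assumptions: real ROI pooling layers operate on batched, multi-channel tensors, and the cell-partition scheme here is one of several used in practice.

```python
import numpy as np

def roi_pool(feature_map, roi, n=7):
    """Max-pool the ROI (x1, y1, x2, y2, in feature-map coordinates) into a
    fixed n-by-n grid, as the ROI pooling layer of step B does (default n=7)."""
    x1, y1, x2, y2 = roi
    region = feature_map[y1:y2, x1:x2]
    h, w = region.shape
    out = np.zeros((n, n), dtype=feature_map.dtype)
    for i in range(n):
        for j in range(n):
            # Partition the region into n*n roughly equal cells, each at
            # least one pixel, and keep the maximum of each cell.
            r0, r1 = i * h // n, max(i * h // n + 1, (i + 1) * h // n)
            c0, c1 = j * w // n, max(j * w // n + 1, (j + 1) * w // n)
            out[i, j] = region[r0:r1, c0:c1].max()
    return out
```

Whatever the size of the proposal region, the output is always n×n, which is what lets proposals of different sizes feed the same fully connected layers.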
Compared with the prior art, the invention has the beneficial effects that:
(1) the mug defect detection and classification method can effectively improve the degree of automation, the accuracy and the efficiency of detection and classification;
(2) the detection method accurately reflects the defect type of the mug, which facilitates accurate evaluation of mug rim defects, helps the manufacturer improve its process according to the defect type, and improves production efficiency.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a flowchart of the operation of the ROI pooling layer.
Fig. 3 is a diagram of the overall network architecture of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 2 and fig. 3, the present invention provides the following scheme: a mug rim defect detection and classification method based on a convolutional neural network. The invention is described in further detail below with reference to the detailed description and the accompanying drawings, wherein fig. 1 is a flow chart of the whole system, fig. 2 is a flowchart of the operation of the ROI pooling layer, and fig. 3 is the overall network structure diagram of the present invention:
The method comprises the following steps:
A. collecting mug rim image information with an image acquisition system;
B. removing noise from the collected images and expanding the data set using OpenCV;
C. labelling defects on the mug rim images with labelImg, and uniformly formatting the labelled training-set images to a fixed size: 2M*2M;
D. training the convolutional neural network to be trained with the formatted training set;
E. extracting image features from the defect-labelled images with the trained convolutional neural network model;
F. the region proposal network generating equal numbers of positive-sample and negative-sample candidate boxes from the image features;
G. classifying the detection targets in the target proposal regions.
In the invention, the image acquisition system of step A operates as follows:
A. fixing a CCD camera on the operation platform;
B. after the position has been properly adjusted, acquiring an image of the mug rim with the CCD camera;
C. collecting and storing the image with the host computer for subsequent operations.
The invention uses a CCD camera to replace the human eye of traditional manual inspection; fixing the camera on the operation platform ensures detection stability, improves the degree of automation of the detection process, and reduces errors caused by human factors.
In the invention, the image denoising and data-set expansion of step B comprise the following steps:
A. installing OpenCV 3.3.0 and Visual Studio 2013 and configuring the environment;
B. filtering the acquired images with the medianBlur function in OpenCV;
C. expanding the data set with OpenCV functions such as resize and flip.
The invention denoises with the medianBlur function in OpenCV, whose effect suits this method better than other traditional denoising approaches, and expands the data set with geometric image transformations, so that the trained convolutional neural network performs better and the accuracy of the result increases.
In the invention, the defect labelling of step C comprises the following steps:
A. installing Python 2.7.17 and labelImg and configuring the environment;
B. opening predefined_classes.txt in the source-code folder with Notepad++ and modifying the default classes;
C. opening the picture folder with Open Dir, selecting the pictures in turn, drawing a bounding box with Create RectBox, and labelling the corresponding defect type;
D. uniformly formatting the labelled training-set images to a fixed size: 2M*2M.
labelImg, adopted by the invention, is a convenient picture-labelling tool that can mark a designated area as a designated defect type, which facilitates training the convolutional neural network.
In the invention, the convolutional neural network training of step D comprises the following steps:
A. selecting a convolutional neural network structure and determining the initial parameters;
B. each training iteration performs forward propagation and error back-propagation for each mini-batch in turn; after all the data have been back-propagated, the weight parameters are updated uniformly for the next iteration; iteration stops when the expected number of iterations is reached, and the final accuracy of the network is computed;
C. continuously adjusting the network structure and the related parameters according to the training behavior until the training accuracy meets the requirement;
D. using the trained convolutional neural network model, weight parameters and other data for subsequent operations.
The convolutional neural network adopted by the invention must be trained with the data set labelled in step C; the trained network model can then be used to detect and classify the acquired images.
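The iteration scheme of step B, accumulating gradients over all mini-batches and then updating the weights once, can be sketched framework-free as follows. grad_fn and the plain averaged update are illustrative; the method itself uses the improved Adam update described in step D.

```python
def train(params, batches, grad_fn, lr=0.01, epochs=10):
    """Step-B iteration scheme: forward/backward on each mini-batch in turn,
    accumulate the gradients, then update the weights once per pass over
    the data. grad_fn(params, batch) returns the gradient list for a batch."""
    for _ in range(epochs):
        total = [0.0] * len(params)
        for batch in batches:
            g = grad_fn(params, batch)                   # forward + backward pass
            total = [t + gi for t, gi in zip(total, g)]
        # uniform weight update after all batches have been back-propagated
        params = [p - lr * t / len(batches) for p, t in zip(params, total)]
    return params
```

On a toy quadratic loss this converges to the mean of the batch targets, illustrating that the once-per-pass update still descends the averaged gradient.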
In the invention, the image feature extraction of step E comprises the following steps:
A. the convolutional layer perceives the image through local receptive fields and obtains the image features as a feature map for subsequent operations;
B. the pooling layer performs secondary extraction on the feature map based on the local-correlation principle of images, further reducing the number of parameters to be trained while preserving the image features as far as possible, and reducing the risk of overfitting.
In the image-feature-extraction network adopted by the invention, the convolutional layer obtains features that better match the actual image, reducing the complexity of the network and preventing overfitting; the pooling layer performs secondary extraction on the feature map produced by the convolutional layer, further reducing the risk of overfitting.
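The two extraction stages can be illustrated with minimal numpy routines for a valid convolution and 2×2 max pooling. Single-channel sketches of the operations, not the actual network layers.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid convolution of one channel with one kernel: the local
    receptive field of step A, one output per window position."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def max_pool(fmap, s=2):
    """s-by-s max pooling (step B): secondary extraction that shrinks the
    feature map and the number of downstream parameters."""
    h, w = fmap.shape
    fmap = fmap[:h - h % s, :w - w % s]          # crop to a multiple of s
    return fmap.reshape(h // s, s, w // s, s).max(axis=(1, 3))
```

Pooling a 4×4 map to 2×2 quarters the activations passed on, which is exactly the parameter-reduction effect described above.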
In the invention, the generation of the positive and negative sample boxes in step F comprises the following steps:
A. sliding a small n×n window (default 3×3) over the generated feature map;
B. predicting k proposal candidate boxes (default k = 9) while generating each small sliding window;
C. mapping each small sliding window to a 256-dimensional feature vector under the ZF network and feeding the vector into two parallel fully connected layers: a box-classification layer and a box-regression layer. The box-classification layer outputs two scores, i.e. the probability that each box corresponds to a target and to a non-target; the box-regression layer outputs four regression parameters (x, y, w, h), the center coordinates, width and height of the box, used to correct the sliding window and obtain the corrected target box;
D. taking candidate boxes whose intersection-over-union (IoU) with the ground-truth box exceeds 0.7 as positive samples, taking those whose IoU is below 0.3 as negative samples, and discarding the remaining candidate boxes.
Compared with traditional candidate-box acquisition methods, generating the positive and negative sample boxes with the region proposal network is faster and combines easily with the subsequent target classification.
In the invention, the target classification of step G comprises the following steps:
A. mapping the positive and negative candidate boxes generated in step F onto the final feature map extracted by the convolutional neural network;
B. uniformly formatting each region with the ROI pooling layer into an n×n grid, default n = 7;
C. judging the target category of the obtained proposal-region features with the softmax classifier of the network, adjusting the region candidate boxes assigned to a category a second time with the regression layer of the network, and further correcting their positions;
D. obtaining the final classification result.
The invention classifies the detection targets in the target proposal regions and obtains the regression parameters while classifying, making the classification result more accurate.
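The softmax judgement of step C can be sketched as follows. W, b and the class list are illustrative placeholders for the network's trained classification head.

```python
import numpy as np

def softmax_classify(features, W, b, classes):
    """Judge the target category of a proposal-region feature vector with
    a softmax classifier, as in step C."""
    logits = features @ W + b
    exp = np.exp(logits - logits.max())   # subtract the max for numerical stability
    probs = exp / exp.sum()
    return classes[int(np.argmax(probs))], probs
```

The returned probabilities also let the system suppress low-confidence detections before reporting a defect type.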

Claims (3)

1. A mug rim defect detection and classification method based on a convolutional neural network, characterized in that: the weights of the convolutional neural network are updated with the improved Adam method, and the learning rate is appropriately modified according to the gradient-descent behavior during the update, so as to adapt to different gradients and improve the training efficiency.
2. A mug rim defect detection and classification method based on a convolutional neural network, characterized in that: small sliding windows are generated on the feature map and proposal candidate boxes are predicted from them; candidate boxes whose intersection-over-union (IoU) with the ground-truth box exceeds 0.7 are taken as positive samples, those whose IoU is below 0.3 are taken as negative samples, and the remaining candidate boxes are discarded.
3. A mug rim defect detection and classification method based on a convolutional neural network, characterized in that: the target category of the obtained proposal-region features is judged with a softmax classifier, the region candidate boxes assigned to a category are adjusted a second time with the regression layer of the network, and their positions are further corrected until the effect is optimal, finally achieving optimal classification.
CN201811631744.8A 2018-12-29 2018-12-29 Mug rim defect detection and classification method based on convolutional neural network Pending CN109685030A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811631744.8A CN109685030A (en) 2018-12-29 2018-12-29 Mug rim defect detection and classification method based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811631744.8A CN109685030A (en) 2018-12-29 2018-12-29 Mug rim defect detection and classification method based on convolutional neural network

Publications (1)

Publication Number Publication Date
CN109685030A true CN109685030A (en) 2019-04-26

Family

ID=66190164

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811631744.8A Pending CN109685030A (en) 2018-12-29 2018-12-29 Mug rim defect detection and classification method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN109685030A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110764790A (en) * 2019-10-18 2020-02-07 东北农业大学 Data set marking method for deep learning
CN111105411A (en) * 2019-12-30 2020-05-05 创新奇智(青岛)科技有限公司 Magnetic shoe surface defect detection method
CN111768388A (en) * 2020-07-01 2020-10-13 哈尔滨工业大学(深圳) Product surface defect detection method and system based on positive sample reference
CN112113978A (en) * 2020-09-22 2020-12-22 成都国铁电气设备有限公司 Vehicle-mounted tunnel defect online detection system and method based on deep learning
CN112633327A (en) * 2020-12-02 2021-04-09 西安电子科技大学 Staged metal surface defect detection method, system, medium, equipment and application

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106886755A (en) * 2017-01-19 2017-06-23 北京航空航天大学 A kind of intersection vehicles system for detecting regulation violation based on Traffic Sign Recognition
US20180322623A1 (en) * 2017-05-08 2018-11-08 Aquifi, Inc. Systems and methods for inspection and defect detection using 3-d scanning
CN108985337A (en) * 2018-06-20 2018-12-11 中科院广州电子技术有限公司 A kind of product surface scratch detection method based on picture depth study
CN109035233A (en) * 2018-07-24 2018-12-18 西安邮电大学 Visual attention network and Surface Flaw Detection method
CN109064454A (en) * 2018-07-12 2018-12-21 上海蝶鱼智能科技有限公司 Product defects detection method and system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106886755A (en) * 2017-01-19 2017-06-23 北京航空航天大学 A kind of intersection vehicles system for detecting regulation violation based on Traffic Sign Recognition
US20180322623A1 (en) * 2017-05-08 2018-11-08 Aquifi, Inc. Systems and methods for inspection and defect detection using 3-d scanning
CN108985337A (en) * 2018-06-20 2018-12-11 中科院广州电子技术有限公司 A kind of product surface scratch detection method based on picture depth study
CN109064454A (en) * 2018-07-12 2018-12-21 上海蝶鱼智能科技有限公司 Product defects detection method and system
CN109035233A (en) * 2018-07-24 2018-12-18 西安邮电大学 Visual attention network and Surface Flaw Detection method

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110764790A (en) * 2019-10-18 2020-02-07 东北农业大学 Data set marking method for deep learning
CN111105411A (en) * 2019-12-30 2020-05-05 创新奇智(青岛)科技有限公司 Magnetic shoe surface defect detection method
CN111105411B (en) * 2019-12-30 2023-06-23 创新奇智(青岛)科技有限公司 Magnetic shoe surface defect detection method
CN111768388A (en) * 2020-07-01 2020-10-13 哈尔滨工业大学(深圳) Product surface defect detection method and system based on positive sample reference
CN111768388B (en) * 2020-07-01 2023-08-11 哈尔滨工业大学(深圳) Product surface defect detection method and system based on positive sample reference
CN112113978A (en) * 2020-09-22 2020-12-22 成都国铁电气设备有限公司 Vehicle-mounted tunnel defect online detection system and method based on deep learning
CN112633327A (en) * 2020-12-02 2021-04-09 西安电子科技大学 Staged metal surface defect detection method, system, medium, equipment and application
CN112633327B (en) * 2020-12-02 2023-06-30 西安电子科技大学 Staged metal surface defect detection method, system, medium, equipment and application

Similar Documents

Publication Publication Date Title
CN108830188B (en) Vehicle detection method based on deep learning
CN106875381B (en) Mobile phone shell defect detection method based on deep learning
CN109685030A (en) Mug rim defect detection and classification method based on convolutional neural network
CN108345911B (en) Steel plate surface defect detection method based on convolutional neural network multi-stage characteristics
CN108960245B (en) Tire mold character detection and recognition method, device, equipment and storage medium
CN106960195B (en) Crowd counting method and device based on deep learning
CN106127780B (en) A kind of curved surface defect automatic testing method and its device
CN108918536B (en) Tire mold surface character defect detection method, device, equipment and storage medium
CN110473173A (en) A kind of defect inspection method based on deep learning semantic segmentation
CN110853015A (en) Aluminum profile defect detection method based on improved Faster-RCNN
CN111724355B (en) Image measuring method for abalone body type parameters
CN112907519A (en) Metal curved surface defect analysis system and method based on deep learning
CN112037219A (en) Metal surface defect detection method based on two-stage convolution neural network
CN108921201A (en) Dam defect identification and classification method based on feature combination and CNN
CN111127417B (en) Printing defect detection method based on SIFT feature matching and SSD algorithm improvement
CN105224921A (en) A kind of facial image preferentially system and disposal route
CN111488920A (en) Bag opening position detection method based on deep learning target detection and recognition
CN107622277A (en) A kind of complex-curved defect classification method based on Bayes classifier
CN113393438B (en) Resin lens defect detection method based on convolutional neural network
CN111914902B (en) Traditional Chinese medicine identification and surface defect detection method based on deep neural network
CN112749675A (en) Potato disease identification method based on convolutional neural network
CN110929795A (en) Method for quickly identifying and positioning welding spot of high-speed wire welding machine
CN111161237A (en) Fruit and vegetable surface quality detection method, storage medium and sorting device thereof
CN108932471B (en) Vehicle detection method
CN111612747A (en) Method and system for rapidly detecting surface cracks of product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190426
