CN110175982A - Defect detection method based on target detection - Google Patents

Defect detection method based on target detection

Info

Publication number
CN110175982A
CN110175982A (application CN201910303500.5A)
Authority
CN
China
Prior art keywords
defect
image
candidate frame
defect area
steps
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910303500.5A
Other languages
Chinese (zh)
Other versions
CN110175982B (en)
Inventor
李卓蓉
封超
吴明晖
颜晖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University City College ZUCC
Original Assignee
Zhejiang University City College ZUCC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University City College ZUCC filed Critical Zhejiang University City College ZUCC
Priority to CN201910303500.5A priority Critical patent/CN110175982B/en
Publication of CN110175982A publication Critical patent/CN110175982A/en
Application granted granted Critical
Publication of CN110175982B publication Critical patent/CN110175982B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20132Image cropping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a defect detection method based on target detection, comprising the following steps: step S1, acquiring training images; step S2, defect image data augmentation; step S3, defect region labeling; step S4, constructing a defect detection model; step S5, model training; step S6, outputting the defect detection result. The beneficial effects of the invention are mainly as follows: data augmentation greatly increases the number of samples available for learning and reduces the cost of data collection; a deep neural network first screens out regions that may contain defects and then finely adjusts the extent of each defect region, so that the precise region of a defect is detected automatically. This effectively overcomes the low efficiency of manual inspection and the poor scalability of rule-based conventional methods.

Description

Defect detection method based on target detection
Technical field
The present invention relates to defect detection methods, and in particular to a defect detection method based on target detection, belonging to the field of computer vision.
Background technique
In recent years, deep neural network technology has developed rapidly; in computer vision in particular, its performance far exceeds that of traditional techniques. Defect detection is an important problem in industry. Traditional defect inspection depends on the experience of human inspectors and is time-consuming and laborious; rule-based defect detection methods are usually applicable only to defects with obvious features, and constructing such methods is complicated. Target detection methods based on deep neural networks can automatically learn target features and locate target regions with high accuracy and good scalability, but at present this technology is mainly applied to target detection in natural scenes and is still rarely applied in industrial scenarios such as defect detection.
Summary of the invention
In view of the problems of the prior art, the present invention proposes a defect detection method based on target detection.
The present invention adopts the following technical scheme: a defect detection method based on target detection, comprising the following steps:
Step S1, acquiring training images;
Step S2, defect image data augmentation;
Step S3, defect region labeling;
Step S4, constructing a defect detection model;
Step S5, model training;
Step S6, outputting the defect detection result.
Further, step S1 comprises the following steps:
Step S1.1, acquiring images with a CCD camera, including images containing defects and images without defects;
Step S1.2, manually marking defect positions to generate binary label images.
Further, step S2 comprises the following steps:
Step S2.1, defect localization and cropping;
Step S2.2, defect region image preprocessing;
Step S2.3, image fusion.
Further, step S2.1 comprises the following steps:
S2.1.1: traversing the binary label image to generate the smallest rectangle whose boundary is parallel to the image coordinate axes and which completely encloses the defect region;
S2.1.2: cropping along the obtained minimal rectangle to obtain the defect region image.
Further, step S2.2 comprises the following steps:
S2.2.1: scaling the defect region image;
S2.2.2: vertically and horizontally flipping the defect region image;
S2.2.3: rotating the defect region image;
S2.2.4: applying a random affine transformation to the defect region image.
Further, step S2.3 comprises the following steps:
S2.3.1: applying local adaptive thresholding to the preprocessed defect region image to extract a more accurate defect region;
S2.3.2: randomly selecting a defect-free image;
S2.3.3: randomly generating a position within the defect-free image, aligning the center of the defect region image with that position, and replacing the pixel values of the defect-free image with those of the defect region image, thereby fusing the defect region image and the defect-free image.
Further, step S3 comprises the following steps:
S3.1: traversing the binary label image to obtain the maximum coordinates (x_max, y_max) and minimum coordinates (x_min, y_min) of all pixels in the defect region image; for augmented data, the coordinates are computed from the random position generated in S2.3.2;
S3.2: normalizing the maximum and minimum defect-region pixel coordinates as (x_max/width, y_max/height) and (x_min/width, y_min/height), where width and height are the width and height of the image, respectively.
Further, step S4 comprises the following steps:
Step S4.1, performing feature extraction on the image input to the convolutional neural network to obtain a feature map;
Step S4.2, extracting candidate boxes from the feature map and obtaining the feature information within each candidate box;
Step S4.3, classifying the feature information with a classifier to determine the defect class;
Step S4.4, adjusting the position of each candidate box with a regressor to make it more accurate.
Further, step S4.2 comprises the following steps:
S4.2.1: performing a convolution on the feature map obtained in S4.1 to obtain the feature vector at each position;
S4.2.2: based on the feature vectors of S4.2.1, generating candidate boxes of different widths and heights at each position;
S4.2.3: sorting the candidate boxes by confidence and selecting the final candidate boxes from high to low.
Further, in step S5, model training refers to optimizing objective function (1) with stochastic gradient descent:

L({p_i}, {t_i}) = (1/N_cls) Σ_i L_cls(p_i, p_i*) + λ (1/N_reg) Σ_i p_i* L_reg(t_i, t_i*)   (1)

In formula (1), the first term is the classification loss and the second term is the regression loss; i is the index of a candidate box; p_i is the predicted probability that candidate box i contains a target defect; p_i* is the label, indicating whether the candidate box contains a target defect (its value is 1 when a target is contained and 0 otherwise); t_i = {t_x, t_y, t_w, t_h} is a vector of predicted offsets for the candidate box, where t_x, t_y, t_w, t_h are the predicted offsets of the x-coordinate and y-coordinate of the top-left vertex, the width, and the height of the candidate box, respectively; t_i* is a vector of the same dimension as t_i, representing the offset of the candidate box relative to the ground-truth annotation; N_cls is the number of candidate boxes; N_reg is the size of the feature map; λ controls the accuracy of the candidate boxes; Σ_i denotes summation over all candidate boxes.
L_cls(p_i, p_i*) is the log loss over the two classes (containing a target defect and not containing a target defect), determined by formula (2):
L_cls(p_i, p_i*) = -log[p_i* p_i + (1 - p_i*)(1 - p_i)]   (2)
L_reg(t_i, t_i*) is the regression loss of the bounding-box extent, determined by formula (3), where σ controls the range over which the loss function is smoothed:
L_reg(t_i, t_i*) = Σ_{j∈{x,y,w,h}} smooth_L1(t_{i,j} - t*_{i,j}), where smooth_L1(x) = 0.5(σx)² if |x| < 1/σ², and |x| - 0.5/σ² otherwise   (3)
According to the above technical scheme, in step S6, the defect detection result is generated by the model trained in step S5, including the position and class of each defect and the detection confidence.
The beneficial effects of the present invention are as follows: data augmentation greatly increases the number of samples available for learning and reduces the cost of data collection; a deep neural network first screens out regions that may contain defects and then finely adjusts the extent of each defect region, so that the precise region of a defect is detected automatically. This effectively overcomes the low efficiency of manual inspection and the poor scalability of rule-based conventional methods.
Detailed description of the invention
Fig. 1 is a flowchart of the method of the present invention;
Figs. 2a-2c are real defect images;
Figs. 3a-3c are binary label images;
Figs. 4a-4f are cropped and preprocessed defect images;
Fig. 5a is a defect region image;
Fig. 5b is a defect-free image;
Fig. 5c is the fused image;
Figs. 6a-6c are schematic diagrams of defect labels used for training;
Fig. 7 is a flowchart of the defect detection model;
Figs. 8a-8b show defect detection results.
Specific embodiment
To illustrate the technical solution of the present invention more clearly, it is further described below with reference to the accompanying drawings. Obviously, the embodiments described in this specification are merely examples of ways of realizing the inventive concept; the protection scope of the present invention should not be construed as limited to the specific forms stated in the embodiments, but also covers equivalent technical means that those skilled in the art can conceive according to the inventive concept.
As shown in Fig. 1, the present embodiment provides a defect detection method based on target detection, comprising the following steps:
Step S1, acquiring training images;
Step S2, defect image data augmentation;
Step S3, defect region labeling;
Step S4, constructing a defect detection model;
Step S5, model training;
Step S6, outputting the defect detection result.
Specifically, in step S1, constructing the defect image dataset comprises the following steps:
Step S1.1, acquiring images with a CCD camera, including images containing defects (as shown in Figs. 2a-2c) and images without defects;
Step S1.2, manually marking defect positions to generate binary label images, as shown in Figs. 3a-3c.
Further, in step S2, defect image data augmentation comprises the following steps:
Step S2.1, defect localization and cropping;
Step S2.2, defect region image preprocessing;
Step S2.3, image fusion.
Further, in step S2.1, defect localization and cropping comprise the following steps:
S2.1.1: traversing the binary label image to generate the smallest rectangle whose boundary is parallel to the image coordinate axes and which completely encloses the defect region;
S2.1.2: cropping along the obtained minimal rectangle to obtain the defect region image.
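The localization-and-cropping procedure above (S2.1.1 and S2.1.2) can be sketched in a few lines of numpy; the function name and the toy mask below are illustrative, not taken from the patent:

```python
import numpy as np

def crop_defect_region(image, mask):
    """Steps S2.1.1-S2.1.2: find the smallest axis-aligned rectangle
    enclosing all defect pixels of the binary label image, then crop."""
    ys, xs = np.nonzero(mask)             # coordinates of all defect pixels
    y_min, y_max = ys.min(), ys.max()
    x_min, x_max = xs.min(), xs.max()
    # Rectangle sides are parallel to the image axes, as in S2.1.1.
    return image[y_min:y_max + 1, x_min:x_max + 1]

# Toy 6x6 image with a defect marked in rows 2-3, columns 1-4.
img = np.arange(36).reshape(6, 6)
msk = np.zeros((6, 6), dtype=np.uint8)
msk[2:4, 1:5] = 1
patch = crop_defect_region(img, msk)
print(patch.shape)   # (2, 4)
```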
Further, in step S2.2, defect region image preprocessing comprises the following steps:
S2.2.1: scaling the defect region image;
S2.2.2: vertically or horizontally flipping the defect region image;
S2.2.3: rotating the defect region image;
S2.2.4: applying a random affine transformation to the defect region image.
The specific image preprocessing parameters are shown in Table 1; some cropped and preprocessed images are shown in Figs. 4a-4f.
Table 1. Data augmentation parameters
Scaling factor [0.5, 2]
Horizontal flip probability 50%
Vertical flip probability 50%
Rotation angle ±20°
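The scaling and flipping entries of Table 1 can be sketched in plain numpy as below. The ±20° rotation and the random affine transform of S2.2.4 are omitted from this sketch; in practice they would use a library routine such as OpenCV's warpAffine. All names are illustrative:

```python
import random
import numpy as np

def augment_defect_patch(patch, rng):
    """Scaling and flipping from Table 1 (step S2.2) in plain numpy.
    Rotation and random affine transforms are intentionally left out."""
    # Random scale factor in [0.5, 2], nearest-neighbour resampling.
    s = rng.uniform(0.5, 2.0)
    h, w = patch.shape[:2]
    nh, nw = max(1, int(round(h * s))), max(1, int(round(w * s)))
    rows = (np.arange(nh) * h / nh).astype(int)
    cols = (np.arange(nw) * w / nw).astype(int)
    out = patch[rows][:, cols]
    # 50% horizontal flip, then 50% vertical flip.
    if rng.random() < 0.5:
        out = out[:, ::-1]
    if rng.random() < 0.5:
        out = out[::-1, :]
    return out

rng = random.Random(0)
patch = np.arange(12).reshape(3, 4)
aug = augment_defect_patch(patch, rng)
print(aug.shape)
```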
Further, in step S2.3, image fusion comprises the following steps:
S2.3.1: applying local adaptive thresholding to the preprocessed defect region image (as shown in Fig. 5a) to extract an accurate defect region;
S2.3.2: randomly selecting a defect-free image (as shown in Fig. 5b);
S2.3.3: randomly generating a position within the defect-free image, aligning the center of the defect region image with that position, and replacing the pixel values of the defect-free image with those of the defect region image, thereby fusing the defect region image and the defect-free image (as shown in Fig. 5c).
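The fusion procedure (S2.3.1 to S2.3.3) can be sketched as follows; the binary `defect_mask` argument stands in for the result of the local adaptive threshold segmentation, and all names and sizes are illustrative:

```python
import random
import numpy as np

def fuse_defect(defect_patch, defect_mask, clean_image, rng):
    """Step S2.3: paste a defect patch onto a random position of a
    defect-free image. Only pixels flagged by the mask replace the
    clean image's pixels, mirroring the segmentation of S2.3.1."""
    H, W = clean_image.shape[:2]
    h, w = defect_patch.shape[:2]
    # Pick a random centre such that the whole patch stays inside the image.
    cy = rng.randrange(h // 2, H - (h - h // 2) + 1)
    cx = rng.randrange(w // 2, W - (w - w // 2) + 1)
    y0, x0 = cy - h // 2, cx - w // 2
    fused = clean_image.copy()
    region = fused[y0:y0 + h, x0:x0 + w]     # view into the copy
    region[defect_mask > 0] = defect_patch[defect_mask > 0]
    return fused, (cx, cy)

rng = random.Random(1)
clean = np.zeros((8, 8), dtype=int)
patch = np.full((2, 3), 9)                   # a fake 2x3 defect patch
mask = np.ones((2, 3), dtype=np.uint8)
fused, centre = fuse_defect(patch, mask, clean, rng)
print(centre)
```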
Further, in step S3, defect region labeling comprises the following steps:
S3.1: traversing the binary label image to obtain the maximum coordinates (x_max, y_max) and minimum coordinates (x_min, y_min) of all pixels in the defect region; for augmented data, the coordinates can be computed from the random position generated in S2.3.2 (as shown in Figs. 6a-6c);
S3.2: normalizing the maximum and minimum defect-region pixel coordinates as (x_max/width, y_max/height) and (x_min/width, y_min/height), where width and height are the width and height of the image, respectively.
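The label computation of S3.1 and S3.2 amounts to taking the extremal mask coordinates and dividing by the image dimensions; a minimal sketch with an illustrative mask:

```python
import numpy as np

def normalized_defect_box(mask):
    """Step S3: extremal defect-pixel coordinates of a binary label
    image, normalized by the image width and height (S3.1-S3.2)."""
    height, width = mask.shape
    ys, xs = np.nonzero(mask)
    return (xs.min() / width, ys.min() / height,
            xs.max() / width, ys.max() / height)

msk = np.zeros((10, 20), dtype=np.uint8)
msk[2:5, 4:12] = 1        # defect spans rows 2-4, columns 4-11
print(normalized_defect_box(msk))   # (0.2, 0.2, 0.55, 0.4)
```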
Further, in step S4, as shown in Fig. 7, the defect detection model comprises the following steps:
Step S4.1, performing feature extraction on the image input to the convolutional neural network to obtain a feature map;
Step S4.2, extracting candidate boxes from the image features and obtaining the feature information within each candidate box;
Step S4.3, classifying the feature information with a classifier to determine the defect class; specifically, after the feature information is obtained in S4.2, two fully connected layers and a softmax layer are appended to perform the classification;
Step S4.4, adjusting the position of each candidate box with a regressor to make it more accurate; specifically, after the feature information is obtained in S4.2, two fully connected layers are appended to adjust the extent of the candidate box so that it encloses the defect region while remaining as small as possible.
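A minimal numpy sketch of the two branches described in steps S4.3 and S4.4: two fully connected layers plus a softmax layer for classification, and two fully connected layers for box-offset regression. The layer widths, the three-class output, the ReLU nonlinearity, and the random weights are illustrative assumptions; the patent fixes only the layer counts:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def detection_heads(feat, Wc1, Wc2, Wr1, Wr2):
    """Classifier branch (S4.3): two fully connected layers + softmax.
    Regressor branch (S4.4): two fully connected layers producing the
    box offsets (t_x, t_y, t_w, t_h)."""
    cls_probs = softmax(np.maximum(feat @ Wc1, 0.0) @ Wc2)
    box_offsets = np.maximum(feat @ Wr1, 0.0) @ Wr2
    return cls_probs, box_offsets

rng = np.random.default_rng(0)
feat = rng.normal(size=(1, 512))          # pooled feature of one candidate box
cls, reg = detection_heads(
    feat,
    rng.normal(size=(512, 64)), rng.normal(size=(64, 3)),   # e.g. 2 defect classes + background
    rng.normal(size=(512, 64)), rng.normal(size=(64, 4)),
)
print(cls.shape, reg.shape)
```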
Further, in step S4.1, the architecture of the convolutional layers is shown in Table 2, where Conv denotes a convolutional layer, MaxPool denotes a max-pooling layer, a kernel entry such as [3×3] means a 3×3 convolution kernel, and an output entry such as 7×7×512 means 512 output channels with a 7×7 output feature map.
Table 2. Convolutional layer parameters
Network layer Convolution kernel Output
Input - 224×224×3
Conv [3×3]×64 224×224×64
Conv [3×3]×64 224×224×64
MaxPool [2×2] 112×112×64
Conv [3×3]×128 112×112×128
Conv [3×3]×128 112×112×128
MaxPool [2×2] 56×56×128
Conv [3×3]×256 56×56×256
Conv [3×3]×256 56×56×256
Conv [3×3]×256 56×56×256
MaxPool [2×2] 28×28×256
Conv [3×3]×512 28×28×512
Conv [3×3]×512 28×28×512
Conv [3×3]×512 28×28×512
MaxPool [2×2] 14×14×512
Conv [3×3]×512 14×14×512
Conv [3×3]×512 14×14×512
Conv [3×3]×512 14×14×512
MaxPool [2×2] 7×7×512
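The layer sequence in Table 2 matches the convolutional part of a VGG-16-style network. As a sanity check, the spatial sizes in the table can be reproduced by walking the layer list, assuming stride-1, padding-1 convolutions (which preserve spatial size) and stride-2 pooling:

```python
def backbone_output_shape(input_hw=(224, 224)):
    """Walk the Table 2 layer list and track the feature-map shape.
    Assumes each 3x3 convolution keeps the spatial size (stride 1,
    padding 1) and each 2x2 max-pool halves it (stride 2)."""
    layers = [
        ("conv", 64), ("conv", 64), ("pool", None),
        ("conv", 128), ("conv", 128), ("pool", None),
        ("conv", 256), ("conv", 256), ("conv", 256), ("pool", None),
        ("conv", 512), ("conv", 512), ("conv", 512), ("pool", None),
        ("conv", 512), ("conv", 512), ("conv", 512), ("pool", None),
    ]
    h, w, c = input_hw[0], input_hw[1], 3
    for kind, channels in layers:
        if kind == "conv":
            c = channels             # channels change, spatial size does not
        else:
            h, w = h // 2, w // 2    # pooling halves each spatial dimension
    return h, w, c

print(backbone_output_shape())   # (7, 7, 512)
```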
Further, in step S4.2, extracting candidate boxes and obtaining their feature information comprises the following steps:
S4.2.1: performing a convolution on the feature map obtained in S4.1 to obtain the feature vector at each position; specifically, a 3×3 convolution kernel is used;
S4.2.2: based on the feature vectors of S4.2.1, generating nine candidate boxes at each position, three each with aspect ratios 1:1, 3:1 and 1:3;
S4.2.3: sorting the candidate boxes by confidence and keeping about 300 of them, from high to low, as the final candidate boxes.
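The candidate-box generation of S4.2.2 can be sketched as below. The patent fixes the count (nine per position) and the aspect ratios (1:1, 3:1 and 1:3, three of each); the three area scales used here are an assumption:

```python
import numpy as np

def anchors_at(cx, cy, base_area=128 * 128):
    """Nine candidate boxes centred at one feature-map position
    (S4.2.2), as (x0, y0, x1, y1). Three area scales x three
    width:height ratios; the scales are illustrative."""
    boxes = []
    for scale in (0.5, 1.0, 2.0):
        for ratio in (1.0, 3.0, 1.0 / 3.0):    # width : height
            area = base_area * scale
            w = (area * ratio) ** 0.5          # so that w * h == area
            h = area / w                       # and w / h == ratio
            boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return np.array(boxes)

a = anchors_at(100, 100)
print(a.shape)   # (9, 4)
```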
Further, in step S5, model training refers to optimizing objective function (1) with stochastic gradient descent:

L({p_i}, {t_i}) = (1/N_cls) Σ_i L_cls(p_i, p_i*) + λ (1/N_reg) Σ_i p_i* L_reg(t_i, t_i*)   (1)

In formula (1), the first term is the classification loss and the second term is the regression loss. i is the index of a candidate box; p_i is the predicted probability that candidate box i contains a target defect; p_i* is the label, indicating whether the candidate box contains a target defect (its value is 1 when a target is contained and 0 otherwise); t_i = {t_x, t_y, t_w, t_h} is a vector of predicted offsets for the candidate box, where t_x, t_y, t_w, t_h are the predicted offsets of the x-coordinate and y-coordinate of the top-left vertex, the width, and the height of the candidate box, respectively; t_i* is a vector of the same dimension as t_i, representing the offset of the candidate box relative to the ground-truth annotation; N_cls is the number of candidate boxes; N_reg is the size of the feature map; λ controls the accuracy of the candidate boxes; Σ_i denotes summation over all candidate boxes.
L_cls(p_i, p_i*) is the log loss over the two classes (containing a target defect and not containing a target defect), determined by formula (2):
L_cls(p_i, p_i*) = -log[p_i* p_i + (1 - p_i*)(1 - p_i)]   (2)
L_reg(t_i, t_i*) is the regression loss of the bounding-box extent, determined by formula (3), where σ controls the range over which the loss function is smoothed:
L_reg(t_i, t_i*) = Σ_{j∈{x,y,w,h}} smooth_L1(t_{i,j} - t*_{i,j}), where smooth_L1(x) = 0.5(σx)² if |x| < 1/σ², and |x| - 0.5/σ² otherwise   (3)
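The training objective described above can be sketched in numpy as follows. The combination of a log loss (formula (2)) with a label-gated smoothed-L1 regression loss (formula (3)) follows the description; the concrete values of λ (here `lam`) and σ are illustrative defaults, not fixed by the patent:

```python
import numpy as np

def smooth_l1(x, sigma):
    """Smoothed L1 used in formula (3); sigma sets where the quadratic
    part hands over to the linear part (continuous at |x| = 1/sigma^2)."""
    ax = np.abs(x)
    return np.where(ax < 1.0 / sigma ** 2,
                    0.5 * (sigma * ax) ** 2,
                    ax - 0.5 / sigma ** 2)

def detection_loss(p, p_star, t, t_star, n_reg, lam=10.0, sigma=3.0):
    """Objective of step S5: log loss over 'contains a defect' plus a
    regression loss counted only for boxes whose label p* is 1."""
    n_cls = len(p)
    l_cls = -np.log(p_star * p + (1.0 - p_star) * (1.0 - p))   # formula (2)
    l_reg = smooth_l1(t - t_star, sigma).sum(axis=1)           # formula (3)
    return l_cls.sum() / n_cls + lam * (p_star * l_reg).sum() / n_reg

# Two candidate boxes: one positive predicted at 0.9, one negative at 0.2;
# both predicted offsets equal the ground truth, so the regression term is 0.
p = np.array([0.9, 0.2])
p_star = np.array([1.0, 0.0])
t = np.zeros((2, 4))
loss = detection_loss(p, p_star, t, t, n_reg=49)
print(loss)   # equals (-log 0.9 - log 0.8) / 2
```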
Further, in step S6, the defect detection result is generated by the model trained in step S5, including the position and class of each defect and the detection confidence, as shown in Figs. 8a-8b.

Claims (10)

1. A defect detection method based on target detection, characterized in that it comprises the following steps:
Step S1, acquiring training images;
Step S2, defect image data augmentation;
Step S3, defect region labeling;
Step S4, constructing a defect detection model;
Step S5, model training;
Step S6, outputting the defect detection result.
2. The method according to claim 1, characterized in that step S1 comprises the following steps:
Step S1.1, acquiring images with a CCD camera, including images containing defects and images without defects;
Step S1.2, manually marking defect positions to generate binary label images.
3. The method according to claim 2, characterized in that step S2 comprises the following steps:
Step S2.1, defect localization and cropping;
Step S2.2, defect region image preprocessing;
Step S2.3, image fusion.
4. The method according to claim 3, characterized in that step S2.1 comprises the following steps:
S2.1.1: traversing the binary label image to generate the smallest rectangle whose boundary is parallel to the image coordinate axes and which completely encloses the defect region;
S2.1.2: cropping along the obtained minimal rectangle to obtain the defect region image.
5. The method according to claim 4, characterized in that step S2.2 comprises the following steps:
S2.2.1: scaling the defect region image;
S2.2.2: vertically and horizontally flipping the defect region image;
S2.2.3: rotating the defect region image;
S2.2.4: applying a random affine transformation to the defect region image.
6. The method according to claim 5, characterized in that step S2.3 comprises the following steps:
S2.3.1: applying local adaptive thresholding to the preprocessed defect region image to extract a more accurate defect region;
S2.3.2: randomly selecting a defect-free image;
S2.3.3: randomly generating a position within the defect-free image, aligning the center of the defect region image with that position, and replacing the pixel values of the defect-free image with those of the defect region image, thereby fusing the defect region image and the defect-free image.
7. The method according to claim 6, characterized in that step S3 comprises the following steps:
S3.1: traversing the binary label image to obtain the maximum coordinates (x_max, y_max) and minimum coordinates (x_min, y_min) of all pixels in the defect region image; for augmented data, the coordinates are computed from the random position generated in S2.3.2;
S3.2: normalizing the maximum and minimum defect-region pixel coordinates as (x_max/width, y_max/height) and (x_min/width, y_min/height), where width and height are the width and height of the image, respectively.
8. The method according to claim 7, characterized in that step S4 comprises the following steps:
Step S4.1, performing feature extraction on the image input to the convolutional neural network to obtain a feature map;
Step S4.2, extracting candidate boxes from the feature map and obtaining the feature information within each candidate box;
Step S4.3, classifying the feature information with a classifier to determine the defect class;
Step S4.4, adjusting the position of each candidate box with a regressor to make it more accurate.
9. The method according to claim 8, characterized in that step S4.2 comprises the following steps:
S4.2.1: performing a convolution on the feature map obtained in S4.1 to obtain the feature vector at each position;
S4.2.2: based on the feature vectors of S4.2.1, generating candidate boxes of different widths and heights at each position;
S4.2.3: sorting the candidate boxes by confidence and selecting the final candidate boxes from high to low.
10. The method according to claim 9, characterized in that in step S5, model training refers to optimizing objective function (1) with stochastic gradient descent:

L({p_i}, {t_i}) = (1/N_cls) Σ_i L_cls(p_i, p_i*) + λ (1/N_reg) Σ_i p_i* L_reg(t_i, t_i*)   (1)

In formula (1), the first term is the classification loss and the second term is the regression loss; i is the index of a candidate box; p_i is the predicted probability that candidate box i contains a target defect; p_i* is the label, indicating whether the candidate box contains a target defect (its value is 1 when a target is contained and 0 otherwise); t_i = {t_x, t_y, t_w, t_h} is a vector of predicted offsets for the candidate box, where t_x, t_y, t_w, t_h are the predicted offsets of the x-coordinate and y-coordinate of the top-left vertex, the width, and the height of the candidate box, respectively; t_i* is a vector of the same dimension as t_i, representing the offset of the candidate box relative to the ground-truth annotation; N_cls is the number of candidate boxes; N_reg is the size of the feature map; λ controls the accuracy of the candidate boxes; Σ_i denotes summation over all candidate boxes;
L_cls(p_i, p_i*) is the log loss over the two classes (containing a target defect and not containing a target defect), determined by formula (2):
L_cls(p_i, p_i*) = -log[p_i* p_i + (1 - p_i*)(1 - p_i)]   (2)
L_reg(t_i, t_i*) is the regression loss of the bounding-box extent, determined by formula (3), where σ is the range over which the loss function is smoothed:
L_reg(t_i, t_i*) = Σ_{j∈{x,y,w,h}} smooth_L1(t_{i,j} - t*_{i,j}), where smooth_L1(x) = 0.5(σx)² if |x| < 1/σ², and |x| - 0.5/σ² otherwise   (3)
CN201910303500.5A 2019-04-16 2019-04-16 Defect detection method based on target detection Active CN110175982B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910303500.5A CN110175982B (en) 2019-04-16 2019-04-16 Defect detection method based on target detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910303500.5A CN110175982B (en) 2019-04-16 2019-04-16 Defect detection method based on target detection

Publications (2)

Publication Number Publication Date
CN110175982A true CN110175982A (en) 2019-08-27
CN110175982B CN110175982B (en) 2021-11-02

Family

ID=67689513

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910303500.5A Active CN110175982B (en) 2019-04-16 2019-04-16 Defect detection method based on target detection

Country Status (1)

Country Link
CN (1) CN110175982B (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110852373A (en) * 2019-11-08 2020-02-28 深圳市深视创新科技有限公司 Defect-free sample deep learning network training method based on vision
CN110852352A (en) * 2019-10-22 2020-02-28 西北工业大学 Data enhancement method for training deep neural network model for target detection
CN110909660A (en) * 2019-11-19 2020-03-24 佛山市南海区广工大数控装备协同创新研究院 Plastic bottle detection and positioning method based on target detection
CN110910353A (en) * 2019-11-06 2020-03-24 成都数之联科技有限公司 Industrial false failure detection method and system
CN111062915A (en) * 2019-12-03 2020-04-24 浙江工业大学 Real-time steel pipe defect detection method based on improved YOLOv3 model
CN111060514A (en) * 2019-12-02 2020-04-24 精锐视觉智能科技(上海)有限公司 Defect detection method and device and terminal equipment
CN111080602A (en) * 2019-12-12 2020-04-28 哈尔滨市科佳通用机电股份有限公司 Method for detecting foreign matters in water leakage hole of railway wagon
CN111091534A (en) * 2019-11-19 2020-05-01 佛山市南海区广工大数控装备协同创新研究院 Target detection-based pcb defect detection and positioning method
CN111103307A (en) * 2019-11-19 2020-05-05 佛山市南海区广工大数控装备协同创新研究院 Pcb defect detection method based on deep learning
CN111145163A (en) * 2019-12-30 2020-05-12 深圳市中钞科信金融科技有限公司 Paper wrinkle defect detection method and device
CN111160553A (en) * 2019-12-23 2020-05-15 中国人民解放军军事科学院国防科技创新研究院 Novel field self-adaptive learning method
CN111583183A (en) * 2020-04-13 2020-08-25 成都数之联科技有限公司 Data enhancement method and system for PCB image defect detection
CN112669296A (en) * 2020-12-31 2021-04-16 江苏南高智能装备创新中心有限公司 Defect detection method, device and equipment of numerical control punch die based on big data
WO2021135372A1 (en) * 2019-12-30 2021-07-08 歌尔股份有限公司 Product defect detection method, device and system
CN113298077A (en) * 2021-06-21 2021-08-24 中国电建集团海南电力设计研究院有限公司 Transformer substation foreign matter identification and positioning method and device based on deep learning
CN113569737A (en) * 2021-07-28 2021-10-29 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) Notebook screen defect detection method and medium based on autonomous learning network model
CN114331961A (en) * 2021-11-25 2022-04-12 腾讯科技(深圳)有限公司 Method for defect detection of an object

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127780A (en) * 2016-06-28 2016-11-16 华南理工大学 A kind of curved surface defect automatic testing method and device thereof
CN106952250A (en) * 2017-02-28 2017-07-14 北京科技大学 A kind of metal plate and belt detection method of surface flaw and device based on Faster R CNN networks
CN107229930A (en) * 2017-04-28 2017-10-03 北京化工大学 A kind of pointer instrument numerical value intelligent identification Method and device
US20180189581A1 (en) * 2010-06-07 2018-07-05 Affectiva, Inc. Vehicle manipulation using convolutional image processing
CN108257114A (en) * 2017-12-29 2018-07-06 天津市万贸科技有限公司 A kind of transmission facility defect inspection method based on deep learning
US20180342050A1 (en) * 2016-04-28 2018-11-29 Yougetitback Limited System and method for detection of mobile device fault conditions


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SHAOQING REN ET AL.: "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks", 《ARXIV》 *
TANG, WENBO: "Research on Fine-grained Image Classification Based on Deep Learning", 《CHINA MASTERS' THESES FULL-TEXT DATABASE, INFORMATION SCIENCE AND TECHNOLOGY》 *
WANG, ZIHAO: "Research on the Application of Deep Learning in Defect Detection of Key Components of Power Transmission Towers", 《CHINA MASTERS' THESES FULL-TEXT DATABASE, ENGINEERING SCIENCE AND TECHNOLOGY II》 *

Cited By (24)

Publication number Priority date Publication date Assignee Title
CN110852352A (en) * 2019-10-22 2020-02-28 西北工业大学 Data enhancement method for training deep neural network model for target detection
CN110852352B (en) * 2019-10-22 2022-07-29 西北工业大学 Data enhancement method for training deep neural network model for target detection
CN110910353A (en) * 2019-11-06 2020-03-24 成都数之联科技有限公司 Industrial false failure detection method and system
CN110910353B (en) * 2019-11-06 2022-06-10 成都数之联科技股份有限公司 Industrial false failure detection method and system
CN110852373A (en) * 2019-11-08 2020-02-28 深圳市深视创新科技有限公司 Defect-free sample deep learning network training method based on vision
CN110909660A (en) * 2019-11-19 2020-03-24 佛山市南海区广工大数控装备协同创新研究院 Plastic bottle detection and positioning method based on target detection
CN111091534A (en) * 2019-11-19 2020-05-01 佛山市南海区广工大数控装备协同创新研究院 Target detection-based pcb defect detection and positioning method
CN111103307A (en) * 2019-11-19 2020-05-05 佛山市南海区广工大数控装备协同创新研究院 Pcb defect detection method based on deep learning
CN111060514A (en) * 2019-12-02 2020-04-24 精锐视觉智能科技(上海)有限公司 Defect detection method and device and terminal equipment
CN111060514B (en) * 2019-12-02 2022-11-04 精锐视觉智能科技(上海)有限公司 Defect detection method and device and terminal equipment
CN111062915A (en) * 2019-12-03 2020-04-24 浙江工业大学 Real-time steel pipe defect detection method based on improved YOLOv3 model
CN111062915B (en) * 2019-12-03 2023-10-24 浙江工业大学 Real-time steel pipe defect detection method based on improved YOLOv3 model
CN111080602A (en) * 2019-12-12 2020-04-28 哈尔滨市科佳通用机电股份有限公司 Method for detecting foreign matters in water leakage hole of railway wagon
CN111080602B (en) * 2019-12-12 2020-10-09 哈尔滨市科佳通用机电股份有限公司 Method for detecting foreign matters in water leakage hole of railway wagon
CN111160553A (en) * 2019-12-23 2020-05-15 中国人民解放军军事科学院国防科技创新研究院 Novel field self-adaptive learning method
WO2021135372A1 (en) * 2019-12-30 2021-07-08 歌尔股份有限公司 Product defect detection method, device and system
CN111145163A (en) * 2019-12-30 2020-05-12 深圳市中钞科信金融科技有限公司 Paper wrinkle defect detection method and device
US11836907B2 (en) 2019-12-30 2023-12-05 Goertek, Inc. Product defect detection method, device and system
CN111583183A (en) * 2020-04-13 2020-08-25 成都数之联科技有限公司 Data enhancement method and system for PCB image defect detection
CN112669296A (en) * 2020-12-31 2021-04-16 江苏南高智能装备创新中心有限公司 Defect detection method, device and equipment of numerical control punch die based on big data
CN112669296B (en) * 2020-12-31 2023-09-26 江苏南高智能装备创新中心有限公司 Defect detection method, device and equipment of numerical control punch die based on big data
CN113298077A (en) * 2021-06-21 2021-08-24 中国电建集团海南电力设计研究院有限公司 Transformer substation foreign matter identification and positioning method and device based on deep learning
CN113569737A (en) * 2021-07-28 2021-10-29 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) Notebook screen defect detection method and medium based on autonomous learning network model
CN114331961A (en) * 2021-11-25 2022-04-12 腾讯科技(深圳)有限公司 Method for defect detection of an object

Also Published As

Publication number Publication date
CN110175982B (en) 2021-11-02

Similar Documents

Publication Publication Date Title
CN110175982A (en) A kind of defect inspection method based on target detection
CN109711474B (en) Aluminum product surface defect detection algorithm based on deep learning
CN109147254B (en) Video field fire smoke real-time detection method based on convolutional neural network
CN109829893B (en) Defect target detection method based on attention mechanism
CN111223088B (en) Casting surface defect identification method based on deep convolutional neural network
CN110210362A (en) A kind of method for traffic sign detection based on convolutional neural networks
JP5546317B2 (en) Visual inspection device, visual inspection discriminator generation device, visual inspection discriminator generation method, and visual inspection discriminator generation computer program
CN111553929A (en) Mobile phone screen defect segmentation method, device and equipment based on converged network
CN107145845A (en) The pedestrian detection method merged based on deep learning and multi-characteristic points
CN107423760A (en) Based on pre-segmentation and the deep learning object detection method returned
CN111598856B (en) Chip surface defect automatic detection method and system based on defect-oriented multipoint positioning neural network
CN103927534A (en) Sprayed character online visual detection method based on convolutional neural network
CN108399361A (en) A kind of pedestrian detection method based on convolutional neural networks CNN and semantic segmentation
CN109615609A (en) A kind of solder joint flaw detection method based on deep learning
CN108564120B (en) Feature point extraction method based on deep neural network
CN108985337A (en) A kind of product surface scratch detection method based on picture depth study
CN112085024A (en) Tank surface character recognition method
CN107545571A (en) A kind of image detecting method and device
CN113392849A (en) R-CNN-based complex pavement crack identification method
CN111027511A (en) Remote sensing image ship detection method based on region of interest block extraction
CN113221881B (en) Multi-level smart phone screen defect detection method
CN111612747B (en) Rapid detection method and detection system for product surface cracks
CN107025442A (en) A kind of multi-modal fusion gesture identification method based on color and depth information
CN110992314A (en) Pavement defect detection method and device and storage medium
CN115205626A (en) Data enhancement method applied to field of coating defect detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220707

Address after: 310015 No. 51, Huzhou street, Hangzhou, Zhejiang

Patentee after: Zhejiang University City College

Address before: 310015 No. 51 Huzhou street, Gongshu District, Hangzhou, Zhejiang

Patentee before: Zhejiang University City College
