CN110175982B - Defect detection method based on target detection

Defect detection method based on target detection

Info

Publication number
CN110175982B
CN110175982B
Authority
CN
China
Prior art keywords: defect, image, candidate, steps, candidate frame
Prior art date
Legal status: Active
Application number
CN201910303500.5A
Other languages
Chinese (zh)
Other versions
CN110175982A (en)
Inventor
李卓蓉
封超
吴明晖
颜晖
Current Assignee
Zhejiang University City College ZUCC
Original Assignee
Zhejiang University City College ZUCC
Priority date
Filing date
Publication date
Application filed by Zhejiang University City College ZUCC filed Critical Zhejiang University City College ZUCC
Priority to CN201910303500.5A priority Critical patent/CN110175982B/en
Publication of CN110175982A publication Critical patent/CN110175982A/en
Application granted granted Critical
Publication of CN110175982B publication Critical patent/CN110175982B/en

Classifications

    • G06T 7/0002 Image analysis; inspection of images, e.g. flaw detection
    • G06T 7/11 Image analysis; region-based segmentation
    • G06T 7/136 Image analysis; segmentation or edge detection involving thresholding
    • G06V 10/462 Extraction of image or video features; salient features, e.g. scale invariant feature transform [SIFT]
    • G06V 20/20 Scenes; scene-specific elements in augmented reality scenes
    • G06T 2207/10004 Image acquisition modality: still image; photographic image
    • G06T 2207/20081 Special algorithmic details: training; learning
    • G06T 2207/20084 Special algorithmic details: artificial neural networks [ANN]
    • G06T 2207/20132 Image segmentation details: image cropping
    • G06T 2207/20221 Image combination: image fusion; image merging
    • G06V 2201/07 Indexing scheme: target detection

Abstract

The invention discloses a defect detection method based on target detection, which comprises the following steps: step S1, collecting training images; step S2, amplifying the defect image data; step S3, marking defect areas; step S4, constructing a defect detection model; step S5, training the model; step S6, outputting a defect detection result. The invention has the following beneficial effects: data amplification enlarges the number of samples available for learning and reduces data collection cost; a deep neural network first screens out the regions that may contain defects and then fine-tunes the range of those regions, so that the precise defective areas are detected automatically. The method thus effectively overcomes the low efficiency of manual inspection and the poor extensibility of conventional rule-based methods.

Description

Defect detection method based on target detection
Technical Field
The invention relates to a defect detection method, in particular to a defect detection method based on target detection, and belongs to the field of computer vision.
Background
In recent years, deep neural network technology has developed rapidly; in the field of computer vision in particular, its results far surpass those of traditional techniques. Defect detection is an important problem in industry. Traditional defect detection relies mainly on the experience of inspectors and is time-consuming and labor-intensive, while rule-based detection methods are usually suitable only for defects with obvious features and are complex to construct. A target detection method based on deep neural network technology can automatically learn target features and locate the target area, with high accuracy and good extensibility.
Disclosure of Invention
The invention provides a defect detection method based on target detection, aiming at the problems in the prior art.
The invention adopts the following technical solution: a defect detection method based on target detection, comprising the following steps:
step S1, collecting training images;
step S2, amplifying the defect image data;
step S3, marking a defect area;
step S4, constructing a defect detection model;
step S5, training a model;
step S6, outputting a defect detection result.
Further, the step S1 includes the following steps:
s1.1, collecting images with a CCD camera, including images that contain defects and images that do not;
and S1.2, manually marking the defect position to generate a binary marked image.
Further, the step S2 includes the following steps:
s2.1, positioning defects and cutting;
s2.2, preprocessing a defective area image;
and S2.3, fusing the images.
Further, the step S2.1 includes the following steps:
s2.1.1: traversing the binary marked image, generating a minimum rectangle with a boundary parallel to the coordinate axis of the image and completely containing the defect area;
s2.1.2: and taking the obtained minimum rectangle as a cutting boundary to obtain a defect area image.
Further, the step S2.2 includes the following steps:
s2.2.1: scaling the defect area image;
s2.2.2: flipping the defect area image vertically and horizontally;
s2.2.3: rotating the image of the defect area;
s2.2.4: and carrying out random affine transformation on the defect area image.
Further, the step S2.3 includes the following steps:
s2.3.1: performing local self-adaptive threshold segmentation on the preprocessed image of the defect region to extract a more accurate image of the defect region;
s2.3.2: randomly selecting a non-defective image;
s2.3.3: randomly generating a position coordinate in the size range of the defect-free image, aligning the central point of the defect area image with the coordinate, and replacing the pixel value of the defect-free image with the pixel value of the defect area image so as to complete the fusion of the defect area image and the defect-free image.
Further, the step S3 includes the following steps:
s3.1: traversing the binary marked image to obtain the maximum coordinates (x_max, y_max) and the minimum coordinates (x_min, y_min) over all pixel points in the defect area image; for the amplified data, these coordinate values are calculated from the random coordinates generated in S2.3.2;
s3.2: normalizing the maximum and minimum pixel coordinates of the defect area to (x_max/width, y_max/height) and (x_min/width, y_min/height), where width and height are the width and height of the image, respectively.
Further, the step S4 includes the following steps:
s4.1, extracting the features of the image input into the convolutional neural network to obtain a feature map;
s4.2, extracting a candidate frame according to the feature map, and obtaining feature information in the range of the candidate frame;
s4.3, classifying the defect types by using a classifier for the characteristic information;
and S4.4, for the candidate box corresponding to a given feature, adjusting its position with a regressor so that the position of the candidate box is more accurate.
Further, the step S4.2 includes the following steps:
s4.2.1: performing convolution operation on the feature map obtained in the S4.1 to obtain a feature vector of each position;
s4.2.2: obtaining feature vectors based on S4.2.1, and generating candidate boxes with different lengths and widths at each position;
s4.2.3: and sorting according to the confidence degrees of the candidate frames, and selecting the final candidate frame according to the confidence degrees from high to low.
Further, in step S5, the model training refers to optimizing the objective function (1) by using a stochastic gradient descent method:
L({p_i}, {t_i}) = (1/N_cls) Σ_i L_cls(p_i, p_i^*) + λ (1/N_reg) Σ_i p_i^* L_reg(t_i, t_i^*)    (1)

The first term in formula (1) is the classification loss and the second term is the regression loss; i is the index of a candidate box; p_i is the probability that candidate box i contains a target defect; p_i^* is the label, taking the value 1 when the candidate box contains a target defect and 0 when it does not; t_i = {t_x, t_y, t_w, t_h} is a vector representing the predicted offsets of the candidate box, where t_x, t_y, t_w, t_h are the predicted offsets of the horizontal coordinate and vertical coordinate of the top-left vertex, the width, and the height of the candidate box, respectively; t_i^* is a vector of the same dimension as t_i, representing the offset of the candidate box with respect to the actual mark; N_cls is the number of candidate boxes; N_reg is the size of the feature map; λ balances the two terms and thereby controls the accuracy of the candidate boxes; Σ_i denotes summing the losses over all candidate boxes.

L_cls(p_i, p_i^*), the logarithmic loss over the two classes of containing and not containing the target defect, is determined by equation (2):

L_cls(p_i, p_i^*) = -log[ p_i^* p_i + (1 - p_i^*)(1 - p_i) ]    (2)

L_reg(t_i, t_i^*), the regression loss for the candidate box range, is determined by equation (3), where σ controls the range over which the loss function is smoothed:

L_reg(t_i, t_i^*) = smooth_L1(t_i - t_i^*)    (3)

where smooth_L1 is applied element-wise and summed over the four offset components, with smooth_L1(x) = 0.5 σ² x² if |x| < 1/σ², and |x| - 0.5/σ² otherwise.
According to the above technical solution, in step S6 the defect detection result generated by the model trained in step S5 is output, including the position and category of the defect and the confidence of the detection.
The invention has the following beneficial effects: data amplification enlarges the number of samples available for learning and reduces data collection cost; a deep neural network first screens out the regions that may contain defects and then fine-tunes the range of those regions, so that the precise defective areas are detected automatically; and the method effectively overcomes the low efficiency of manual inspection and the poor extensibility of conventional rule-based methods.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIGS. 2a-2c are actual defect images;
FIGS. 3a-3c are binarized label images;
FIGS. 4a-4f are defect images after cropping and preprocessing;
FIG. 5a is a defect area image;
FIG. 5b is a defect-free image;
FIG. 5c is a fused image;
FIGS. 6a-6c are schematic diagrams of defect labeling for training;
FIG. 7 is a flow chart of the defect detection model;
FIGS. 8a-8b are graphs of defect detection results.
Detailed Description
In order to illustrate the technical solution of the present invention more clearly, it is further described below with reference to the accompanying drawings. The description of the embodiments is intended only to illustrate the implementation of the inventive concept; the scope of the invention should not be construed as limited to the specific forms set forth in the embodiments, but also covers equivalent technical means that those skilled in the art can conceive based on the inventive concept.
As shown in fig. 1, the present embodiment provides a defect detection method based on target detection, including the following steps:
step S1, collecting training images;
step S2, amplifying the defect image data;
step S3, marking a defect area;
step S4, constructing a defect detection model;
step S5, training a model;
step S6, outputting a defect detection result.
Specifically, in step S1, constructing the defect image data set includes the steps of:
step S1.1, collecting images with a CCD camera, including images that contain defects (shown in FIGS. 2a-2c) and images without defects;
step S1.2, manually marking the defect positions to generate binary marked images, as shown in FIGS. 3a-3c.
Further, in step S2, the defect image data expansion includes the steps of:
s2.1, positioning defects and cutting;
s2.2, preprocessing a defective area image;
and S2.3, fusing the images.
Further, in step S2.1, defect localization and cropping comprises the steps of:
s2.1.1: traversing the binary marked image, generating a minimum rectangle with a boundary parallel to the coordinate axis of the image and completely containing the defect area;
s2.1.2: and taking the obtained minimum rectangle as a cutting boundary to obtain a defect area image.
Further, in step S2.2, the defect area image preprocessing comprises the steps of:
s2.2.1: scaling the defect area image;
s2.2.2: flipping the defect area image vertically or horizontally;
S2.2.3: rotating the image of the defect area;
s2.2.4: and carrying out random affine transformation on the defect area image.
The specific parameters of the image preprocessing are shown in Table 1, and sample images after cropping and preprocessing are shown in FIGS. 4a-4f.
TABLE 1 Data amplification parameters

    Scaling range                [0.5, 2]
    Horizontal flip probability  50%
    Vertical flip probability    50%
    Rotation angle               ±20°
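By way of illustration, steps S2.2.1-S2.2.4 with the Table 1 parameters might be sketched as follows using OpenCV; the affine-jitter magnitude (up to 10% of the patch size) and the function name are assumptions, not values fixed by the patent:

    import random
    import cv2
    import numpy as np

    def preprocess_defect_patch(patch):
        # S2.2.1: random scaling with a factor in [0.5, 2]
        h, w = patch.shape[:2]
        s = random.uniform(0.5, 2.0)
        patch = cv2.resize(patch, (max(1, int(w * s)), max(1, int(h * s))))
        # S2.2.2: horizontal and vertical flips, each with probability 50%
        if random.random() < 0.5:
            patch = cv2.flip(patch, 1)  # horizontal flip
        if random.random() < 0.5:
            patch = cv2.flip(patch, 0)  # vertical flip
        # S2.2.3: rotation by a random angle within ±20°
        h, w = patch.shape[:2]
        M = cv2.getRotationMatrix2D((w / 2, h / 2), random.uniform(-20, 20), 1.0)
        patch = cv2.warpAffine(patch, M, (w, h))
        # S2.2.4: random affine transform (assumed: corners jittered by up to 10%)
        src = np.float32([[0, 0], [w - 1, 0], [0, h - 1]])
        dst = src + np.float32(np.random.uniform(-0.1, 0.1, src.shape) * [w, h])
        patch = cv2.warpAffine(patch, cv2.getAffineTransform(src, dst), (w, h))
        return patch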
Further, in step S2.3, the image fusion comprises the steps of:
s2.3.1: performing local adaptive threshold segmentation on the adjusted defect area image (as shown in FIG. 5a) to extract the precise defect region;
s2.3.2: randomly picking a defect-free image (as shown in FIG. 5b);
s2.3.3: randomly generating a position coordinate within the size range of the defect-free image, aligning the center point of the defect area image with this coordinate, and replacing the pixel values of the defect-free image with those of the defect area image, thereby completing the fusion of the defect area image and the defect-free image (as shown in FIG. 5c).
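Steps S2.3.1-S2.3.3 might be sketched as follows, assuming 8-bit grayscale images, a defect patch smaller than the defect-free image, and an adaptive-threshold block size of 11 with offset 2 (both assumed, not specified above):

    import random
    import cv2

    def fuse_defect(defect_patch, clean_image):
        # S2.3.1: local adaptive threshold to extract the precise defect pixels
        defect_mask = cv2.adaptiveThreshold(defect_patch, 255,
                                            cv2.ADAPTIVE_THRESH_MEAN_C,
                                            cv2.THRESH_BINARY, 11, 2)
        # S2.3.3: random position inside the defect-free image; align patch centre
        H, W = clean_image.shape[:2]
        h, w = defect_patch.shape[:2]
        cx = random.randint(w // 2, W - (w - w // 2))
        cy = random.randint(h // 2, H - (h - h // 2))
        x0, y0 = cx - w // 2, cy - h // 2
        fused = clean_image.copy()
        roi = fused[y0:y0 + h, x0:x0 + w]
        # replace defect-free pixels with defect pixels where the mask is set
        roi[defect_mask > 0] = defect_patch[defect_mask > 0]
        return fused, (x0, y0, x0 + w, y0 + h)  # fused image and box for step S3

The randomly picked defect-free image of S2.3.2 is supplied as the second argument here.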
Further, in step S3, the marking of the defective area includes the following steps:
s3.1: traversing the binary marked image to obtain the maximum coordinates (x_max, y_max) and the minimum coordinates (x_min, y_min) over all pixel points in the defect area; for the amplified data, these coordinate values can be calculated from the random coordinates generated in S2.3.2 (as shown in FIGS. 6a-6c);
s3.2: normalizing the maximum and minimum pixel coordinates of the defect area to (x_max/width, y_max/height) and (x_min/width, y_min/height), where width and height are the width and height of the image, respectively.
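Step S3.2 reduces to a small helper (names illustrative):

    def normalize_box(x_min, y_min, x_max, y_max, width, height):
        # map pixel coordinates to [0, 1] relative to the image width and height
        return (x_min / width, y_min / height, x_max / width, y_max / height)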
Further, in step S4, as shown in FIG. 7, constructing the defect detection model includes the following steps:
s4.1, extracting the features of the image input into the convolutional neural network to obtain a feature map;
s4.2, extracting a candidate frame according to the image characteristics and obtaining characteristic information in the range of the candidate frame;
s4.3, classifying the defect types of the feature information with a classifier; specifically, two fully connected layers and one softmax layer are added after the feature information obtained in S4.2 to perform the classification;
and S4.4, for the candidate box corresponding to a given feature, adjusting its position with a regressor so that the position is more accurate; specifically, two fully connected layers are added after the feature information obtained in S4.2, and the range of the candidate box is adjusted so that it contains the defect area while being as small as possible.
Further, in step S4.1, the architecture details of the convolutional layers are shown in Table 2, where Conv denotes a convolutional layer and MaxPool a max-pooling layer; a kernel entry such as [3×3]×64 denotes 64 convolution kernels of size 3×3, and an output entry such as 7×7×512 denotes a feature map of spatial size 7×7 with 512 output channels.
TABLE 2 Convolutional layer parameters

    Network layer   Kernel        Output
    Input           -             224×224×3
    Conv            [3×3]×64      224×224×64
    Conv            [3×3]×64      224×224×64
    MaxPool         [2×2]         112×112×64
    Conv            [3×3]×128     112×112×128
    Conv            [3×3]×128     112×112×128
    MaxPool         [2×2]         56×56×128
    Conv            [3×3]×256     56×56×256
    Conv            [3×3]×256     56×56×256
    Conv            [3×3]×256     56×56×256
    MaxPool         [2×2]         28×28×256
    Conv            [3×3]×512     28×28×512
    Conv            [3×3]×512     28×28×512
    Conv            [3×3]×512     28×28×512
    MaxPool         [2×2]         14×14×512
    Conv            [3×3]×512     14×14×512
    Conv            [3×3]×512     14×14×512
    Conv            [3×3]×512     14×14×512
    MaxPool         [2×2]         7×7×512
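This layer stack matches the convolutional portion of VGG-16. A hedged PyTorch sketch of Table 2 follows; the framework choice and the ReLU activations after each convolution are assumptions, since the table lists only layers, kernels, and output sizes:

    import torch.nn as nn

    def make_backbone():
        # conv stages from Table 2: (number of conv layers, output channels)
        cfg = [(2, 64), (2, 128), (3, 256), (3, 512), (3, 512)]
        layers, in_ch = [], 3
        for n_convs, out_ch in cfg:
            for _ in range(n_convs):
                layers += [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                           nn.ReLU(inplace=True)]
                in_ch = out_ch
            layers.append(nn.MaxPool2d(kernel_size=2, stride=2))  # halves spatial size
        return nn.Sequential(*layers)  # maps 224×224×3 to 7×7×512 as in Table 2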
Further, in step S4.2, extracting candidate boxes and obtaining their feature information includes the following steps:
s4.2.1: performing a convolution operation on the feature map obtained in step S4.1 to obtain a feature vector at each position; specifically, a 3×3 convolution kernel is used;
s4.2.2: based on the feature vectors obtained in S4.2.1, generating nine candidate boxes at each position, with aspect ratios of 1:1, 3:1, and 1:3;
s4.2.3: sorting the candidate boxes by confidence and selecting about 300 boxes, from high confidence to low, as the final candidate boxes.
Further, in step S5, the model training refers to optimizing the objective function (1) by using a stochastic gradient descent method:
L({p_i}, {t_i}) = (1/N_cls) Σ_i L_cls(p_i, p_i^*) + λ (1/N_reg) Σ_i p_i^* L_reg(t_i, t_i^*)    (1)

The first term in formula (1) is the classification loss and the second term is the regression loss; i is the index of a candidate box; p_i is the probability that candidate box i contains a target defect; p_i^* is the label, taking the value 1 when the candidate box contains a target defect and 0 when it does not; t_i = {t_x, t_y, t_w, t_h} is a vector representing the predicted offsets of the candidate box, where t_x, t_y, t_w, t_h are the predicted offsets of the horizontal coordinate and vertical coordinate of the top-left vertex, the width, and the height of the candidate box, respectively; t_i^* is a vector of the same dimension as t_i, representing the offset of the candidate box with respect to the actual mark; N_cls is the number of candidate boxes; N_reg is the size of the feature map; λ balances the two terms and thereby controls the accuracy of the candidate boxes; Σ_i denotes summing the losses over all candidate boxes.

L_cls(p_i, p_i^*), the logarithmic loss over the two classes of containing and not containing the target defect, is determined by equation (2):

L_cls(p_i, p_i^*) = -log[ p_i^* p_i + (1 - p_i^*)(1 - p_i) ]    (2)

L_reg(t_i, t_i^*), the regression loss for the candidate box range, is determined by equation (3), where σ controls the range over which the loss function is smoothed:

L_reg(t_i, t_i^*) = smooth_L1(t_i - t_i^*)    (3)

where smooth_L1 is applied element-wise and summed over the four offset components, with smooth_L1(x) = 0.5 σ² x² if |x| < 1/σ², and |x| - 0.5/σ² otherwise.
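A hedged PyTorch rendering of objective (1) with losses (2) and (3); the values σ = 3 and λ = 10 are illustrative assumptions, as the text leaves both unspecified:

    import torch

    def smooth_l1(x, sigma=3.0):
        # equation (3): quadratic for |x| < 1/sigma², linear beyond
        beta = 1.0 / sigma ** 2
        return torch.where(x.abs() < beta,
                           0.5 * sigma ** 2 * x ** 2,
                           x.abs() - 0.5 * beta)

    def detection_loss(p, p_star, t, t_star, n_cls, n_reg, lam=10.0):
        # equation (2): log loss over containing / not containing a target defect
        l_cls = -torch.log(p_star * p + (1 - p_star) * (1 - p)).sum() / n_cls
        # regression term of objective (1); p* gates it to positive boxes only
        l_reg = (p_star.unsqueeze(1) * smooth_l1(t - t_star)).sum() / n_reg
        return l_cls + lam * l_reg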
Further, in step S6, the defect detection results generated by the model trained in step S5 are output, including the position and category of each defect and the confidence of the detection, as shown in FIGS. 8a-8b.

Claims (3)

1. A defect detection method based on target detection is characterized by comprising the following steps:
step S1, acquiring a training image, the steps of which are as follows:
s1.1, collecting images with a CCD camera, including images that contain defects and images that do not;
s1.2, manually marking the defect position to generate a binary marking image;
step S2, amplifying the defect image data, the steps are as follows:
s2.1, positioning defects and cutting;
s2.2, preprocessing a defective area image;
step S2.3, image fusion, which comprises the following steps:
step S2.3.1: performing local self-adaptive threshold segmentation on the preprocessed image of the defect region to extract a more accurate image of the defect region;
step S2.3.2: randomly selecting a non-defective image;
step S2.3.3: randomly generating a position coordinate in the size range of the defect-free image, aligning the central point of the defect area image with the coordinate, and replacing the pixel value of the defect-free image with the pixel value of the defect area image so as to complete the fusion of the defect area image and the defect-free image;
step S3, marking a defect area;
step S4, constructing a defect detection model;
step S5, training a model;
step S6, outputting a defect detection result;
wherein said step S2.1 comprises the steps of:
step S2.1.1: traversing the binary marked image, generating a minimum rectangle with a boundary parallel to the coordinate axis of the image and completely containing the defect area;
step S2.1.2: obtaining a defect area image by taking the obtained minimum rectangle as a cutting boundary;
the step S2.2 comprises the steps of:
step S2.2.1: scaling the defect area image;
step S2.2.2: flipping the defect area image vertically and horizontally;
step S2.2.3: rotating the image of the defect area;
step S2.2.4: carrying out random affine transformation on the image of the defect area;
wherein the step S3 includes the steps of:
step S3.1: traversing the binary marked image to obtain the maximum coordinates (x_max, y_max) and the minimum coordinates (x_min, y_min) over all pixel points in the defect area image; for the amplified data, these coordinate values are calculated from the random coordinates generated in step S2.3.2;
step S3.2: normalizing the maximum and minimum pixel coordinates of the defect area to (x_max/width, y_max/height) and (x_min/width, y_min/height), wherein width and height are the width and height of the image, respectively;
wherein the step S4 includes the steps of:
s4.1, extracting the features of the image input into the convolutional neural network to obtain a feature map;
s4.2, extracting a candidate frame according to the feature map, and obtaining feature information in the range of the candidate frame;
s4.3, classifying the defect types by using a classifier for the characteristic information;
and S4.4, for the candidate box corresponding to a given feature, adjusting its position with a regressor so that the position of the candidate box is more accurate.
2. The method according to claim 1, characterized in that said step S4.2 comprises the steps of:
step S4.2.1: performing convolution operation on the feature map obtained in the step S4.1 to obtain a feature vector of each position;
step S4.2.2: generating candidate frames with different lengths and widths at each position based on the feature vectors obtained in step S4.2.1;
step S4.2.3: and sorting according to the confidence degrees of the candidate frames, and selecting the final candidate frame according to the confidence degrees from high to low.
3. The method according to claim 1, wherein in step S5, the model training refers to optimizing the objective function (1) by using a stochastic gradient descent method:
L({p_i}, {t_i}) = (1/N_cls) Σ_i L_cls(p_i, p_i^*) + λ (1/N_reg) Σ_i p_i^* L_reg(t_i, t_i^*)    (1)

the first term in formula (1) is the classification loss and the second term is the regression loss; i is the index of a candidate box; p_i is the probability that candidate box i contains a target defect; p_i^* is the label, taking the value 1 when the candidate box contains a target defect and 0 when it does not; t_i = {t_x, t_y, t_w, t_h} is a vector representing the predicted offsets of the candidate box, where t_x, t_y, t_w, t_h are the predicted offsets of the horizontal coordinate and vertical coordinate of the top-left vertex, the width, and the height of the candidate box, respectively; t_i^* is a vector of the same dimension as t_i, representing the offset of the candidate box with respect to the actual mark; N_cls is the number of candidate boxes; N_reg is the size of the feature map; λ balances the two terms and thereby controls the accuracy of the candidate boxes; Σ_i denotes summing the losses over all candidate boxes;

L_cls(p_i, p_i^*), the logarithmic loss over the two classes of containing and not containing the target defect, is determined by equation (2):

L_cls(p_i, p_i^*) = -log[ p_i^* p_i + (1 - p_i^*)(1 - p_i) ]    (2)

L_reg(t_i, t_i^*), the regression loss for the candidate box range, is determined by equation (3), where σ controls the range over which the loss function is smoothed:

L_reg(t_i, t_i^*) = smooth_L1(t_i - t_i^*)    (3)

where smooth_L1 is applied element-wise and summed over the four offset components, with smooth_L1(x) = 0.5 σ² x² if |x| < 1/σ², and |x| - 0.5/σ² otherwise.
CN201910303500.5A 2019-04-16 2019-04-16 Defect detection method based on target detection Active CN110175982B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910303500.5A CN110175982B (en) 2019-04-16 2019-04-16 Defect detection method based on target detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910303500.5A CN110175982B (en) 2019-04-16 2019-04-16 Defect detection method based on target detection

Publications (2)

Publication Number Publication Date
CN110175982A (en) 2019-08-27
CN110175982B (en) 2021-11-02

Family

ID=67689513

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910303500.5A Active CN110175982B (en) 2019-04-16 2019-04-16 Defect detection method based on target detection

Country Status (1)

Country Link
CN (1) CN110175982B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110852352B (en) * 2019-10-22 2022-07-29 西北工业大学 Data enhancement method for training deep neural network model for target detection
CN110910353B (en) * 2019-11-06 2022-06-10 成都数之联科技股份有限公司 Industrial false failure detection method and system
CN110852373A (en) * 2019-11-08 2020-02-28 深圳市深视创新科技有限公司 Defect-free sample deep learning network training method based on vision
CN111103307A (en) * 2019-11-19 2020-05-05 佛山市南海区广工大数控装备协同创新研究院 Pcb defect detection method based on deep learning
CN111091534A (en) * 2019-11-19 2020-05-01 佛山市南海区广工大数控装备协同创新研究院 Target detection-based pcb defect detection and positioning method
CN110909660A (en) * 2019-11-19 2020-03-24 佛山市南海区广工大数控装备协同创新研究院 Plastic bottle detection and positioning method based on target detection
CN111060514B (en) * 2019-12-02 2022-11-04 精锐视觉智能科技(上海)有限公司 Defect detection method and device and terminal equipment
CN111062915B (en) * 2019-12-03 2023-10-24 浙江工业大学 Real-time steel pipe defect detection method based on improved YOLOv3 model
CN111080602B (en) * 2019-12-12 2020-10-09 哈尔滨市科佳通用机电股份有限公司 Method for detecting foreign matters in water leakage hole of railway wagon
CN111160553B (en) * 2019-12-23 2022-10-25 中国人民解放军军事科学院国防科技创新研究院 Novel field self-adaptive learning method
CN111060520B (en) * 2019-12-30 2021-10-29 歌尔股份有限公司 Product defect detection method, device and system
CN111145163B (en) * 2019-12-30 2021-04-02 深圳市中钞科信金融科技有限公司 Paper wrinkle defect detection method and device
CN111583183B (en) * 2020-04-13 2022-12-06 成都数之联科技股份有限公司 Data enhancement method and system for PCB image defect detection
CN112669296B (en) * 2020-12-31 2023-09-26 江苏南高智能装备创新中心有限公司 Defect detection method, device and equipment of numerical control punch die based on big data
CN113298077A (en) * 2021-06-21 2021-08-24 中国电建集团海南电力设计研究院有限公司 Transformer substation foreign matter identification and positioning method and device based on deep learning
CN113569737A (en) * 2021-07-28 2021-10-29 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) Notebook screen defect detection method and medium based on autonomous learning network model


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11017250B2 (en) * 2010-06-07 2021-05-25 Affectiva, Inc. Vehicle manipulation using convolutional image processing
US11210777B2 (en) * 2016-04-28 2021-12-28 Blancco Technology Group IP Oy System and method for detection of mobile device fault conditions

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127780A (en) * 2016-06-28 2016-11-16 华南理工大学 A kind of curved surface defect automatic testing method and device thereof
CN106952250A (en) * 2017-02-28 2017-07-14 北京科技大学 A kind of metal plate and belt detection method of surface flaw and device based on Faster R CNN networks
CN107229930A (en) * 2017-04-28 2017-10-03 北京化工大学 A kind of pointer instrument numerical value intelligent identification Method and device
CN108257114A (en) * 2017-12-29 2018-07-06 天津市万贸科技有限公司 A kind of transmission facility defect inspection method based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks; Shaoqing Ren et al.; arXiv; 2016-01-06; pp. 4-5 *
Research on fine-grained image classification based on deep learning; 唐文博 (Tang Wenbo); China Master's Theses Full-text Database, Information Science and Technology Series; 2019-01-15; Vol. 2019, No. 1; pp. I138-2172 *

Also Published As

Publication number Publication date
CN110175982A (en) 2019-08-27

Similar Documents

Publication Publication Date Title
CN110175982B (en) Defect detection method based on target detection
CN111223088B (en) Casting surface defect identification method based on deep convolutional neural network
CN108562589B (en) Method for detecting surface defects of magnetic circuit material
CN113160192B (en) Visual sense-based snow pressing vehicle appearance defect detection method and device under complex background
CN106875381B (en) Mobile phone shell defect detection method based on deep learning
CN108509950B (en) Railway contact net support number plate detection and identification method based on probability feature weighted fusion
CN110598698B (en) Natural scene text detection method and system based on adaptive regional suggestion network
CN110298227B (en) Vehicle detection method in unmanned aerial vehicle aerial image based on deep learning
CN112508857B (en) Aluminum product surface defect detection method based on improved Cascade R-CNN
CN112037219A (en) Metal surface defect detection method based on two-stage convolution neural network
CN108846831B (en) Band steel surface defect classification method based on combination of statistical characteristics and image characteristics
CN115601355A (en) Method and device for detecting and classifying product surface defects and storage medium
CN113393426A (en) Method for detecting surface defects of rolled steel plate
CN111612747A (en) Method and system for rapidly detecting surface cracks of product
CN113298809B (en) Composite material ultrasonic image defect detection method based on deep learning and superpixel segmentation
CN110751619A (en) Insulator defect detection method
CN115272204A (en) Bearing surface scratch detection method based on machine vision
CN111027538A (en) Container detection method based on instance segmentation model
CN115439458A (en) Industrial image defect target detection algorithm based on depth map attention
CN116205876A (en) Unsupervised notebook appearance defect detection method based on multi-scale standardized flow
CN110363196B (en) Method for accurately recognizing characters of inclined text
CN112017154A (en) Ray defect detection method based on Mask R-CNN model
CN109615610B (en) Medical band-aid flaw detection method based on YOLO v2-tiny
CN110889418A (en) Gas contour identification method
CN114078106A (en) Defect detection method based on improved Faster R-CNN

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 2022-07-07

Address after: 310015 No. 51, Huzhou street, Hangzhou, Zhejiang

Patentee after: Zhejiang University City College

Address before: 310015 No. 51 Huzhou Street, Gongshu District, Hangzhou, Zhejiang

Patentee before: Zhejiang University City College