CN112907510A - Surface defect detection method - Google Patents
Surface defect detection method
- Publication number
- CN112907510A (application CN202110051929.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- beta
- defect
- haar
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/8851—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/047—Probabilistic or stochastic networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/8851—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
- G01N2021/8854—Grading and classifying of flaws
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/8851—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
- G01N2021/8887—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges based on image processing techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30164—Workpiece; Machine component
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
A surface defect detection method collects images of the defect location with a camera and then detects and classifies the defect image by fusing Adaboost and DCNN features, even when defect-image training samples are insufficient. This markedly improves the recognition accuracy of surface defects and provides a basis for surface defect detection on complex irregular objects.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a surface defect detection method.
Background
Machine vision technology has replaced the human eye in many corners of society and has thoroughly changed people's daily environment. Machine vision inspection combines machine vision with automation technology and is widely applied to product defect detection in manufacturing, such as assembly-process detection and positioning, package inspection, appearance quality inspection, and goods or fruit sorting in logistics, and it can replace manual labor to complete these tasks quickly and accurately.
The invention mainly addresses situations where surface defect detection is difficult, such as crack detection on large workpieces, defect detection on aircraft skin, and detection of corroded screw surfaces. Traditional manual inspection has severe limitations in practice: it depends on subjective human evaluation and, being influenced by the inspector's mood and attention as well as by lighting conditions, it is unstable, unreliable, and unquantifiable, introducing many sources of instability into product quality control.
Among related prior art, application No. 201910264717.X uses image preprocessing and a PixelNet network to segment the defect image but does not identify the defect type on the defect surface; application No. 201810820348.3 introduces an attention module into the convolution module to improve detection accuracy but increases training difficulty. The present invention therefore mainly provides a surface defect detection method for the case where crack-image training samples are insufficient, effectively improves the recognition accuracy of surface defects, and provides a basis for surface crack detection on complex irregular objects.
Disclosure of Invention
The invention aims to provide a surface defect detection method: images of the defect location are collected by a camera; the acquired image is then segmented, and the segmented sub-images are fed into a DCNN (deep convolutional neural network) and an Adaboost network, which output DCNN features and Adaboost features respectively; finally the features are normalized, fused, and classified with a classifier, which outputs the type of the surface defect and the probability that the defect belongs to that type.
As a further improvement of the above scheme:
Preferably, the camera is fixed directly above the object, angled downwards at 30° to the vertical.
Preferably, the step of segmenting the acquired image is:
S1: because the target surface defect is generally conspicuous, first preprocess the image with an edge operator to detect the edges of suspected defect regions; the edge operator is one or more of the Canny, Laplacian, Prewitt, and Sobel operators;
S2: then apply morphological operations to the image, adding or removing pixels to grow or shrink the edge regions; the morphological operations include dilation and erosion;
S3: finally, partition the image into blocks, fitting a circumscribed rectangle around each edge region.
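The segmentation steps S1-S3 can be illustrated with a small NumPy-only sketch; the use of the Sobel operator, the threshold value, and the 3 × 3 structuring element are assumptions for illustration, not taken from the patent:

```python
import numpy as np

def segment_defect_regions(img, thresh=1.0):
    """Sketch of S1-S3: Sobel edge detection, dilation, bounding-box blocking.
    `img` is a 2D float array; returns (row0, row1, col0, col1) of the fitted
    circumscribed rectangle in gradient coordinates, or None if no edge pixels
    survive the threshold."""
    # S1: Sobel gradient magnitude (one of the edge operators named above)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    edges = np.hypot(gx, gy) > thresh
    # S2: morphological dilation with a 3x3 structuring element (grow edges)
    dil = np.zeros_like(edges)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            dil |= np.roll(np.roll(edges, di, 0), dj, 1)
    # S3: fit the circumscribed rectangle around the surviving edge pixels
    ys, xs = np.nonzero(dil)
    if len(ys) == 0:
        return None
    return (ys.min(), ys.max() + 1, xs.min(), xs.max() + 1)
```

In practice a library implementation (e.g. OpenCV's Canny, dilation, and bounding-rectangle routines) would replace these hand-rolled loops.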
Preferably, the DCNN network adopts a six-layer convolutional network structure; each layer of convolution network structure comprises convolution kernel size, convolution kernel number, an activation function and a pooling layer.
Preferably, the specific structure of each layer in the convolutional network is: the input image is 128 × 48 × 1, the first layer outputs 124 × 44 × 32, the second 62 × 22 × 32, the third 58 × 18 × 32, the fourth 29 × 9 × 32, the fifth 27 × 7 × 32, the sixth 13 × 3 × 32, and finally a fully connected layer outputs the 128-dimensional feature vector α.
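The quoted layer sizes are consistent with valid (no-padding) convolutions each followed by 2 × 2 max pooling; the sketch below reproduces them. The 5 × 5, 5 × 5, 3 × 3 kernel sequence is partly inferred, since the text only states the first layer's kernel size explicitly:

```python
def conv_out(h, w, k):
    # a "valid" convolution with a k x k kernel shrinks each side by k - 1
    return h - k + 1, w - k + 1

def pool_out(h, w, s=2):
    # 2 x 2 max pooling with stride 2 halves each side (floor division)
    return h // s, w // s

def dcnn_shapes():
    """Reproduce the spatial sizes quoted in the text, alternating a
    convolution (kernel sizes 5, 5, 3) with a 2 x 2 max-pooling step."""
    h, w = 128, 48
    shapes = [(h, w)]
    for k in (5, 5, 3):
        h, w = conv_out(h, w, k); shapes.append((h, w))
        h, w = pool_out(h, w);    shapes.append((h, w))
    return shapes
```

Running `dcnn_shapes()` yields exactly the sequence 128 × 48 → 124 × 44 → 62 × 22 → 58 × 18 → 29 × 9 → 27 × 7 → 13 × 3 stated above.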
Preferably, the Adaboost network uses the rectangle feature set (Haar-like) to build a strong classifier from weak classifiers. The input image is ANDed with each Haar-like weak classifier, and running it against 128 Haar-like weak classifiers yields a 128-dimensional feature vector β. Each dimension takes the value 1 for certain classes (for example, 1 for a normal surface, 0 otherwise), so each class is described by a 128-dimensional 0/1 vector. Training produces the prototype vectors β1 (normal surface), β2 (crack surface), β3 (corroded surface), and β4 (unexpected damage surface). For an image to be inspected with feature vector β, the Euclidean distance between β and each βi is computed; when the distance is below the threshold Ti, the image is assigned to class i.
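The prototype comparison can be sketched in a few lines; the class names and the toy 4-dimensional vectors in the usage example are hypothetical stand-ins for the trained 128-dimensional βi:

```python
import math

def classify(beta, prototypes, thresholds):
    """Assign feature vector `beta` to the class i whose prototype beta_i is
    nearest in Euclidean distance, provided that distance is below T_i;
    otherwise report the image as unclassified."""
    best = None
    for name, proto in prototypes.items():
        d = math.dist(beta, proto)
        if d < thresholds[name] and (best is None or d < best[1]):
            best = (name, d)
    return best[0] if best else "unknown"
```

For example, with prototypes `{"normal": [1, 1, 0, 0], "crack": [1, 0, 1, 0]}` and thresholds of 1.0, the vector `[1, 1, 0, 0]` is classified as "normal", while `[0, 0, 0, 1]` falls outside every threshold.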
Preferably, a feature rectangle in the Haar-like feature template is described by a tuple, so a particular Haar-like feature template can be expressed as r = (x, y, w, h, ω), where x and y are the coordinates of the top-left vertex of the black area of the feature rectangle, w and h are the width and height of the feature rectangle, and ω is the weight of the rectangle's pixel values in the computation.
The image to be inspected is ANDed with a Haar-like feature template, and an integral operation is then applied to the processed image. The integral image value I_int(x, y) at image coordinate (x, y) equals the sum of all pixels of the original image in the rectangle above and to the left, i.e. I_int(x, y) = Σ_{x'≤x, y'≤y} I(x', y'). Running this against the set of Haar-like weak classifiers yields the 128-dimensional feature vector β.
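The integral image defined above can be built in a single pass with the standard running-sum recurrence; a pure-Python sketch over a list-of-lists image:

```python
def integral_image(img):
    """I_int(x, y) = sum of all pixels I(x', y') with x' <= x and y' <= y,
    built in one pass: keep a running row sum and add the cell above."""
    h, w = len(img), len(img[0])
    I = [[0] * w for _ in range(h)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            I[y][x] = row + (I[y - 1][x] if y > 0 else 0)
    return I
```

For the 2 × 2 image [[1, 2], [3, 4]] this produces [[1, 3], [4, 10]], the bottom-right entry being the sum of all four pixels.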
Preferably, the surface-defect features are normalized, fused, and classified with a classifier, which outputs the type of the surface defect and the probability that it belongs to that type. First the feature vector α of the DCNN (deep convolutional neural network) and the feature vector β of the Adaboost network are normalized to α' and β', so that |α'| = 1 and |β'| = 1; the fused feature vector is C = {α', β'}. The fused vector C is then classified to decide which type of defect is present and with what probability, using either a support vector machine (SVM) or a probabilistic neural network.
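The normalization and fusion step amounts to scaling each vector to unit Euclidean length and concatenating; a minimal sketch (the toy 2-dimensional inputs in the test stand in for the 128-dimensional α and β):

```python
import math

def fuse(alpha, beta):
    """Normalize alpha and beta to unit length (so |alpha'| = 1 and
    |beta'| = 1) and concatenate them into the fused vector C = {alpha', beta'}."""
    def unit(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]
    return unit(alpha) + unit(beta)
```

The fused C would then be handed to the SVM or probabilistic-neural-network classifier.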
Preferably, during training, when the training samples for defect detection are insufficient, the defect image is scaled, rotated, and tilted to obtain new defect patterns, and each tested sample is added to the training set, solving the problem of insufficient image training samples.
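A sketch of the augmentation idea on a plain list-of-lists image; the 90° rotations and the 2× nearest-neighbour scaling below are cheap stand-ins for the arbitrary rotation, tilting, and scaling the text describes:

```python
def augment(img):
    """Produce extra training samples from one defect image: three 90-degree
    rotations plus a 2x nearest-neighbour upscale."""
    out = []
    r = img
    for _ in range(3):
        # rotate 90 degrees clockwise: reverse the rows, then transpose
        r = [list(row) for row in zip(*r[::-1])]
        out.append(r)
    # duplicate every row and every pixel for a 2x upscale
    out.append([[px for px in row for _ in range(2)]
                for row in img for _ in range(2)])
    return out
```

Each call turns one labelled defect image into four additional samples that share its label.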
Compared with the prior art, the invention has the following beneficial effects:
When defect-image training samples are insufficient, detection and classification are performed by fusing Adaboost and DCNN. When training samples for defect detection run short, the defect image is scaled, rotated, and tilted to obtain new defect patterns, and each tested sample is added to the training set. This solves the shortage of image training samples, markedly improves the recognition accuracy of surface defects, and provides a basis for surface defect detection on complex irregular objects.
Drawings
FIG. 1 is a flow chart of a surface defect detection method.
Fig. 2 is a normal surface view.
Fig. 3 is a crack surface view.
FIG. 4 is a corrosion surface diagram.
Fig. 5 is an unexpected loss surface view.
FIG. 6 is a flow chart of target image segmentation.
Fig. 7 is a diagram illustrating a structure of a DCNN network.
Fig. 8 shows Adaboost weak classifier diagrams (a)-(d).
Fig. 9 shows Adaboost weak classifier diagrams (e)-(h).
Fig. 10 shows Adaboost weak classifier diagrams (i)-(l).
FIG. 11 is a surface defect detection network structure training diagram based on Adaboost and DCNN fusion.
Detailed Description
The invention is further described with reference to the following drawings and detailed description.
As shown in fig. 1, the surface defect detection method acquires an image of the defect with a camera; the acquired image is then segmented, the segmented sub-images are fed into a DCNN (deep convolutional neural network) and an Adaboost network, which output DCNN features and Adaboost features respectively, and finally the features are normalized, fused, and classified with a classifier, which outputs the type of the surface defect and the probability that it belongs to that type.
The camera is fixed directly above the object, pointing downwards at 30° to the vertical.
The step of segmenting the acquired image is as follows:
S1: because the target surface defect is generally conspicuous, first preprocess the image with an edge operator to detect the edges of suspected defect regions; the edge operator is one or more of the Canny, Laplacian, Prewitt, and Sobel operators;
S2: then apply morphological operations to the image, adding or removing pixels to grow or shrink the edge regions; the morphological operations include dilation and erosion;
S3: finally, partition the image into blocks, fitting a circumscribed rectangle around each edge region.
The DCNN adopts a six-layer convolutional network structure; each layer comprises the convolution kernel size, the number of convolution kernels, an activation function, and a pooling layer.
The specific structure of each layer in the convolutional network is: the input image is 128 × 48 × 1, the first layer outputs 124 × 44 × 32, the second 62 × 22 × 32, the third 58 × 18 × 32, the fourth 29 × 9 × 32, the fifth 27 × 7 × 32, the sixth 13 × 3 × 32, and finally a fully connected layer outputs the 128-dimensional feature vector α.
The Adaboost network uses the rectangle feature set (Haar-like) to build a strong classifier from weak classifiers. The input image is ANDed with each Haar-like weak classifier, and running it against 128 Haar-like weak classifiers yields a 128-dimensional feature vector β. Each dimension takes the value 1 for certain classes (for example, 1 for a normal surface and a crack surface, 0 otherwise), so each class is described by a 128-dimensional 0/1 vector. Training produces the prototype vectors β1 (normal surface), β2 (crack surface), β3 (corroded surface), and β4 (unexpected damage surface). For an image to be inspected with feature vector β, the Euclidean distance between β and each βi is computed; when the distance is below the threshold Ti, the image is assigned to class i.
A feature rectangle in the Haar-like feature template is described by a tuple, so a particular Haar-like feature template can be expressed as r = (x, y, w, h, ω), where x and y are the coordinates of the top-left vertex of the black area of the feature rectangle, w and h are the width and height of the feature rectangle, and ω is the weight of the rectangle's pixel values, set according to a specified calculation method.
The image to be inspected is ANDed with the Haar-like feature template, and an integral operation is then applied to the processed image. The integral image value I_int(x, y) at image coordinate (x, y) equals the sum of all pixels of the original image in the rectangle above and to the left, i.e. I_int(x, y) = Σ_{x'≤x, y'≤y} I(x', y'). Running this against the set of Haar-like weak classifiers yields the 128-dimensional feature vector β.
The surface-defect features are normalized, fused, and classified with a classifier, which outputs the type of the surface defect and the probability that it belongs to that type. First the feature vector α of the DCNN (deep convolutional neural network) and the feature vector β of the Adaboost network are normalized to α' and β', so that |α'| = 1 and |β'| = 1; the fused feature vector is C = {α', β'}. The fused vector C is then classified to decide which type of defect is present and with what probability, using either a support vector machine (SVM) or a probabilistic neural network.
When the training samples for defect detection are insufficient, operations such as scaling, rotation, and tilting can be applied to the defect image region to augment the training samples.
The invention provides a surface defect detection method, which comprises the following steps:
A1: the camera is fixed directly above the object, pointing downwards at 30° to the vertical, which enlarges the inspected area of the target surface;
A2: to speed up subsequent detection and to classify the various defects, image blocks are extracted from the target image by image segmentation;
A3: the DCNN adopts a six-layer convolutional network structure; each layer comprises the convolution kernel size, the number of convolution kernels, an activation function, and a pooling layer;
A4: the Adaboost network uses the rectangle feature set (Haar-like) to build a strong classifier from weak classifiers;
A5: the features are normalized, fused, and classified with a classifier, which outputs the type of the surface defect and the probability that it belongs to that type.
In figs. 2 to 5, fig. 2 is a normal picture; fig. 3 is a crack image, where the crack area is small and hard to distinguish with the naked eye, though its detail differs noticeably from the surroundings; fig. 4 is a corroded screw, which is clearly distinguishable from its surroundings; fig. 5 is unexpected damage, whose image may occupy few pixels but differs strongly from its surroundings.
FIG. 6 is the image segmentation flow chart. Because the target surface defect is generally conspicuous, the image is first preprocessed with an edge operator to detect the edges of suspected defect regions; the edge operator is one or more of the Laplacian, Prewitt, and Sobel operators. Morphological operations, comprising dilation and erosion, then add or remove pixels to grow or shrink the edge regions. Finally the image is partitioned into blocks by fitting a circumscribed rectangle around each region.
Fig. 7 is a structure diagram of the DCNN, which adopts a six-layer convolutional network structure; each layer comprises the convolution kernel size, the number of convolution kernels, an activation function, and a pooling layer. The layer sizes are: input image 128 × 48 × 1, first layer output 124 × 44 × 32, second 62 × 22 × 32, third 58 × 18 × 32, fourth 29 × 9 × 32, fifth 27 × 7 × 32, sixth 13 × 3 × 32, followed by a fully connected layer that outputs the 128-dimensional feature vector α.
The specific structure is shown in fig. 7: the segmented image is rescaled (reduced or enlarged) to 128 × 48 × 1 and then fed to the convolution layers; each layer comprises the convolution kernel size, the number of convolution kernels, an activation function, and a pooling layer. The first layer has a 5 × 5 convolution kernel, 32 kernels, the ReLU activation function, and a MaxPooling layer; the stride is the number of pixels the window jumps while scanning the original image, so a 2 × 2 stride means scanning every other pixel.
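The stride-2 max-pooling step described here can be sketched directly; a pure-Python version, assuming a single-channel list-of-lists image:

```python
def max_pool(img, k=2, stride=2):
    """2x2 max pooling with stride 2, as in each DCNN layer: the window jumps
    `stride` pixels between evaluations, so every other pixel starts a window."""
    h, w = len(img), len(img[0])
    return [[max(img[y + dy][x + dx] for dy in range(k) for dx in range(k))
             for x in range(0, w - k + 1, stride)]
            for y in range(0, h - k + 1, stride)]
```

On a 4 × 4 input this produces a 2 × 2 output, matching the halving of spatial size between consecutive layer shapes quoted above.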
Figs. 8-10 show the Adaboost weak classifier set. The Adaboost network uses the rectangle feature set (Haar-like) to build a strong classifier from weak classifiers; the input image is ANDed with each Haar-like weak classifier, the image is assigned to a class when the computed distance falls below the threshold T, and running it against the Haar-like weak classifier set yields the 128-dimensional feature vector β.
A feature rectangle in the Haar-like feature template is described by a tuple, so a particular Haar-like feature template can be expressed as r = (x, y, w, h, ω), where x and y are the coordinates of the top-left vertex of the black area of the feature rectangle, w and h are the width and height of the feature rectangle, and ω is the weight of the rectangle's pixel values, set according to a specified calculation method.
The image to be inspected is ANDed with the Haar-like feature template, and an integral operation is then applied to the processed image. The integral image value I_int(x, y) at image coordinate (x, y) equals the sum of all pixels of the original image in the rectangle above and to the left, i.e. I_int(x, y) = Σ_{x'≤x, y'≤y} I(x', y'). Running this against the set of Haar-like weak classifiers yields the 128-dimensional feature vector β.
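Once the integral image is available, any rectangle sum, and hence a Haar-like feature value, costs only four lookups. A sketch follows; the two-rectangle template and the +1/-1 weights are illustrative assumptions, not the patent's specific 128-template set:

```python
def integral_image(img):
    # running-sum construction of I_int(x, y)
    h, w = len(img), len(img[0])
    I = [[0] * w for _ in range(h)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            I[y][x] = row + (I[y - 1][x] if y > 0 else 0)
    return I

def rect_sum(I, x, y, w, h):
    """Sum of the w x h rectangle whose top-left corner is (x, y), from four
    integral-image lookups (out-of-range lookups count as 0)."""
    def at(yy, xx):
        return I[yy][xx] if yy >= 0 and xx >= 0 else 0
    return (at(y + h - 1, x + w - 1) - at(y - 1, x + w - 1)
            - at(y + h - 1, x - 1) + at(y - 1, x - 1))

def haar_two_rect(I, x, y, w, h):
    """Two-rectangle Haar-like feature: left (black, weight +1) half minus
    right (white, weight -1) half of a 2w x h window."""
    return rect_sum(I, x, y, w, h) - rect_sum(I, x + w, y, w, h)
```

A weak classifier then compares such a feature value against its learned threshold, which is how the crack, corrosion, and damage patterns match the differently shaped black-area templates described below.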
As shown, within the Haar-like feature templates the crack pattern best matches the thin black-area templates of figs. 8 and 9, the corrosion pattern best matches the templates of fig. 10, and unexpected damage best matches the large black-area templates of figs. 8 and 9.
The last step normalizes and fuses the features and classifies them with a classifier, outputting the type of the surface defect and the probability that it belongs to that type. First the DCNN feature vector α and the Adaboost feature vector β are normalized to α' and β'; the fused feature vector C = {α', β'} is then classified to decide which type of defect is present and with what probability, using an SVM (support vector machine) or a probabilistic neural network.
During training, defect-image scaling, rotation, and tilting are used to enlarge the set of training pictures, and each tested sample is added to the training set, solving the shortage of image training samples. As shown in fig. 11, the circle marks a defective region; during training this region is copied to other parts of the image through scaling, rotation, tilting, and similar operations, improving recognition accuracy for small targets.
The above embodiments are only preferred embodiments of the present invention, and the scope of the claims should not be limited by them; modifications, equivalent variations, and improvements made within the claims of the present invention still fall within its scope.
Claims (9)
1. A surface defect detection method, characterized in that images of the defect location are collected by a camera; the acquired image is then segmented, the segmented sub-images are fed into a DCNN (deep convolutional neural network) and an Adaboost network, which output DCNN features and Adaboost features respectively, and finally the features are normalized, fused, and classified with a classifier, which outputs the type of the surface defect and the probability that it belongs to that type.
2. A surface defect inspection method according to claim 1, wherein said camera is fixed directly above the object, said camera being directed downwardly at an angle of 30 ° to the vertical.
3. A method as claimed in claim 1, wherein the step of segmenting the acquired image comprises:
S1: first preprocess the image with an edge operator to detect the edges of suspected defect regions; the edge operator is one or more of the Canny, Laplacian, Prewitt, and Sobel operators;
S2: then apply morphological operations to the image, adding or removing pixels to grow or shrink the edge regions; the morphological operations include dilation and erosion;
S3: finally, partition the image into blocks, fitting a circumscribed rectangle around each edge region.
4. The method of claim 1, wherein the DCNN network employs a six-layer convolutional network structure; each layer of convolution network structure comprises convolution kernel size, convolution kernel number, an activation function and a pooling layer.
5. The method of claim 4, wherein the specific structure of each layer in the convolutional network is: the input image is 128 × 48 × 1, the first layer outputs 124 × 44 × 32, the second 62 × 22 × 32, the third 58 × 18 × 32, the fourth 29 × 9 × 32, the fifth 27 × 7 × 32, the sixth 13 × 3 × 32, and finally a fully connected layer outputs the 128-dimensional feature vector α.
6. The surface defect detection method of claim 1, wherein the Adaboost network uses the rectangle feature set (Haar-like) to build a strong classifier from weak classifiers; the input image is ANDed with each Haar-like weak classifier, and running it against 128 Haar-like weak classifiers yields a 128-dimensional feature vector β; each dimension takes the value 1 for certain classes (for example, 1 for a normal surface and a crack surface, 0 otherwise), so each class is described by a 128-dimensional 0/1 vector; training produces the prototype vectors β1 (normal surface), β2 (crack surface), β3 (corroded surface), and β4 (unexpected damage surface); for an image to be inspected with feature vector β, the Euclidean distance between β and each βi is computed, and when the distance is below the threshold Ti, the image is assigned to class i.
7. The method as claimed in claim 6, wherein a feature rectangle in the Haar-like feature template is described by a tuple, so a particular Haar-like feature template can be expressed as r = (x, y, w, h, ω), where x and y are the coordinates of the top-left vertex of the black area of the feature rectangle, w and h are the width and height of the feature rectangle, and ω is the weight of the rectangle's pixel values in the computation; the image to be inspected is ANDed with the Haar-like feature template, an integral operation is applied to the processed image, the integral image value I_int(x, y) at image coordinate (x, y) equals the sum of all pixels of the original image in the rectangle above and to the left, i.e. I_int(x, y) = Σ_{x'≤x, y'≤y} I(x', y'), and running this against the set of Haar-like weak classifiers yields the 128-dimensional feature vector β.
8. The method of claim 1, wherein the features are normalized, fused and classified by a classifier, which outputs the type of the surface defect and the probability that the defect belongs to that type. First, the feature vector α of the DCNN (deep convolutional neural network) and the feature vector β of the Adaboost network are normalized to α' and β', so that |α'| = 1 and |β'| = 1 after normalization. The fused feature vector is C = {α', β'}. The fused vector C is then classified to determine which type of defect is present and the corresponding probability, using a support vector machine (SVM) or a probabilistic neural network.
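Normalizing both vectors to unit length before concatenation keeps either branch from dominating the fused feature by sheer magnitude. A sketch of the fusion step only, with toy 2-dimensional stand-ins for the 128-dimensional vectors (the downstream SVM or probabilistic neural network is not shown):

```python
import math

def unit_normalize(v):
    # Scale v so that its Euclidean norm is 1 (assumes v is not all zeros)
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

# Toy stand-ins for the DCNN feature alpha and the Adaboost feature beta
alpha = [3.0, 4.0]
beta = [1.0, 1.0]
alpha_p = unit_normalize(alpha)   # |alpha'| = 1
beta_p = unit_normalize(beta)     # |beta'| = 1
C = alpha_p + beta_p              # fused vector C = {alpha', beta'}
print(C)                          # [0.6, 0.8, ~0.707, ~0.707]
```

In the full method, C would be 256-dimensional (128 + 128) and fed to the SVM or probabilistic neural network for the final defect-type decision.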
9. The method according to any one of claims 1 to 8, wherein, during training, when the training samples for defect detection are insufficient, new defect images are generated by scaling, rotating and tilting the existing defect images, and the samples from each test are added to the training set, thereby alleviating the shortage of training images.
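The claim lists scaling, rotation and tilting as augmentations. The sketch below shows only the rotation/flip part of such augmentation on a plain 2-D pixel grid (scaling and shear/tilt would normally be done with an image library such as OpenCV or Pillow, which the claim does not name):

```python
def rot90(img):
    # Rotate a 2-D pixel grid 90 degrees clockwise
    return [list(row) for row in zip(*img[::-1])]

def flip_h(img):
    # Mirror the grid left-to-right
    return [row[::-1] for row in img]

def augment(img):
    """Generate rotated and mirrored variants of one defect image."""
    variants, cur = [], img
    for _ in range(4):
        cur = rot90(cur)
        variants.append(cur)
        variants.append(flip_h(cur))
    return variants

sample = [[0, 1],
          [2, 3]]
print(len(augment(sample)))   # 8 variants from a single labeled sample
```

For symmetric defects some variants coincide, so a real pipeline might deduplicate; the point is simply that one labeled image yields several geometrically transformed training samples.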
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110051929.7A CN112907510B (en) | 2021-01-15 | 2021-01-15 | Surface defect detection method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112907510A true CN112907510A (en) | 2021-06-04 |
CN112907510B CN112907510B (en) | 2023-07-07 |
Family
ID=76113664
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110051929.7A Active CN112907510B (en) | 2021-01-15 | 2021-01-15 | Surface defect detection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112907510B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117611587A (en) * | 2024-01-23 | 2024-02-27 | 赣州泰鑫磁性材料有限公司 | Rare earth alloy material detection system and method based on artificial intelligence |
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008151471A1 (en) * | 2007-06-15 | 2008-12-18 | Tsinghua University | A robust precise eye positioning method in complicated background image |
CN102147866A (en) * | 2011-04-20 | 2011-08-10 | 上海交通大学 | Target identification method based on training Adaboost and support vector machine |
CN102855501A (en) * | 2012-07-26 | 2013-01-02 | 北京锐安科技有限公司 | Multi-direction object image recognition method |
CN103093250A (en) * | 2013-02-22 | 2013-05-08 | 福建师范大学 | Adaboost face detection method based on new Haar- like feature |
CN103646251A (en) * | 2013-09-14 | 2014-03-19 | 江南大学 | Apple postharvest field classification detection method and system based on embedded technology |
CN106446784A (en) * | 2016-08-30 | 2017-02-22 | 东软集团股份有限公司 | Image detection method and apparatus |
CN108133231A (en) * | 2017-12-14 | 2018-06-08 | 江苏大学 | A kind of real-time vehicle detection method of dimension self-adaption |
CN109580656A (en) * | 2018-12-24 | 2019-04-05 | 广东华中科技大学工业技术研究院 | Mobile phone light guide panel defect inspection method and system based on changeable weight assembled classifier |
US20190156159A1 (en) * | 2017-11-20 | 2019-05-23 | Kavya Venkata Kota Sai KOPPARAPU | System and method for automatic assessment of cancer |
CN110288013A (en) * | 2019-06-20 | 2019-09-27 | 杭州电子科技大学 | A kind of defective labels recognition methods based on block segmentation and the multiple twin convolutional neural networks of input |
CN110314854A (en) * | 2019-06-06 | 2019-10-11 | 苏州市职业大学 | A kind of device and method of the workpiece sensing sorting of view-based access control model robot |
CN110582748A (en) * | 2017-04-07 | 2019-12-17 | 英特尔公司 | Method and system for boosting deep neural networks for deep learning |
CN111340754A (en) * | 2020-01-18 | 2020-06-26 | 中国人民解放军国防科技大学 | Method for detecting and classifying surface defects based on aircraft skin |
KR20200087297A (en) * | 2018-12-28 | 2020-07-21 | 이화여자대학교 산학협력단 | Defect inspection method and apparatus using image segmentation based on artificial neural network |
WO2020181570A1 (en) * | 2019-03-08 | 2020-09-17 | 上海达显智能科技有限公司 | Intelligent smoke removal device and control method thereof |
CN111833328A (en) * | 2020-07-14 | 2020-10-27 | 汪俊 | Aircraft engine blade surface defect detection method based on deep learning |
Non-Patent Citations (4)
Title |
---|
JIAQIU AI et al.: "Multi-Scale Rotation-Invariant Haar-Like Feature Integrated CNN-Based Ship Detection Algorithm of Multiple-Target Environment in SAR Imagery", IEEE Transactions on Geoscience and Remote Sensing, 26 August 2019 (2019-08-26), page 10070 * |
LI Qian et al.: "Edge tracking algorithm for multiple paper defects" [in Chinese], China Pulp & Paper, vol. 36, no. 8, 27 September 2017 (2017-09-27), pages 43-44 * |
TANG Bo; DAI Chaofan; HUANG Wenhao: "Labeled surface defect detection of steel plates based on convolutional neural networks" [in Chinese], Manufacturing Automation, vol. 42, no. 09, pages 34-40 * |
MIN Yongzhi; CHENG Tiandong; MA Hongfeng: "Rail surface defect recognition method based on multi-feature fusion and the AdaBoost algorithm" [in Chinese], Journal of Railway Science and Engineering, vol. 14, no. 12, pages 2554-2562 * |
Also Published As
Publication number | Publication date |
---|---|
CN112907510B (en) | 2023-07-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110826416B (en) | Bathroom ceramic surface defect detection method and device based on deep learning | |
CN108898610B (en) | Object contour extraction method based on mask-RCNN | |
CN111292305B (en) | Improved YOLO-V3 metal processing surface defect detection method | |
CN115082683B (en) | Injection molding defect detection method based on image processing | |
CN108562589B (en) | Method for detecting surface defects of magnetic circuit material | |
CN109978839B (en) | Method for detecting wafer low-texture defects | |
US9639748B2 (en) | Method for detecting persons using 1D depths and 2D texture | |
CN113592845A (en) | Defect detection method and device for battery coating and storage medium | |
Li et al. | A novel algorithm for defect extraction and classification of mobile phone screen based on machine vision | |
CN113724231B (en) | Industrial defect detection method based on semantic segmentation and target detection fusion model | |
CN112862770B (en) | Defect analysis and diagnosis system, method and device based on artificial intelligence | |
Bong et al. | Vision-based inspection system for leather surface defect detection and classification | |
CN112085024A (en) | Tank surface character recognition method | |
CN113177924A (en) | Industrial production line product flaw detection method | |
CN111259893A (en) | Intelligent tool management method based on deep learning | |
Ko et al. | Defect detection of polycrystalline solar wafers using local binary mean | |
CN114255212A (en) | FPC surface defect detection method and system based on CNN | |
CN115690670A (en) | Intelligent identification method and system for wafer defects | |
CN112200795A (en) | Large intestine endoscope polyp detection method based on deep convolutional network | |
CN111487192A (en) | Machine vision surface defect detection device and method based on artificial intelligence | |
Abdellah et al. | Defect detection and identification in textile fabric by SVM method | |
Muresan et al. | Automatic vision inspection solution for the manufacturing process of automotive components through plastic injection molding | |
CN112907510B (en) | Surface defect detection method | |
CN112069974B (en) | Image recognition method and system for recognizing defects of components | |
CN116523916B (en) | Product surface defect detection method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||