CN112001909B - Powder bed defect visual detection method based on image feature fusion - Google Patents
- Publication number: CN112001909B
- Application number: CN202010868173.0A
- Authority
- CN
- China
- Prior art keywords
- powder bed
- image
- defect
- feature
- powder
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/8851—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/24323—Tree-organised classifiers
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/8851—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
- G01N2021/8883—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges involving the calculation of gauges, generating models
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/8851—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
- G01N2021/8887—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges based on image processing techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30136—Metal
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention provides a powder bed defect visual detection method based on image feature fusion, which constructs a powder bed defect detection algorithm model and applies it to a specific powder-spreading process. The method comprises the following steps: dividing powder bed defects into three different types according to their causes; selecting corresponding feature extraction strategies (scale-space features, texture features and geometric features) according to the defect features of the different types of powder bed; constructing a powder bed defect detection algorithm model with feature fusion; designing the algorithm parameter combination and optimization strategy, optimizing the algorithm parameters, and establishing the final powder bed defect detection algorithm model; and applying the optimal defect detection algorithm model to powder bed images acquired in real time to monitor the quality of the powder-spreading process in real time. The method uses three feature extraction and fusion strategies to establish the powder bed defect detection algorithm model, which can be applied to online quality monitoring of the powder-spreading process to improve powder-spreading quality.
Description
Technical Field
The invention relates to the technical field of metal powder bed fusion, and in particular to a powder bed defect visual detection method based on image feature fusion.
Background
Metal powder bed fusion comprises selective laser melting (SLM) and electron beam selective melting (EBSM), which use a laser beam and an electron beam respectively to scan, layer by layer, a powder bed laid in advance in the forming area, melting and depositing it to form a part; it is one of the metal additive manufacturing technologies widely applied in the aerospace, equipment manufacturing, biomedical and other industries. During fused deposition, powder bed defects make the molten pool unstable during scanning, causing macroscopic part defects such as bending deformation, cracking and balling, as well as internal metallurgical defects such as pores, slag inclusions and lack of fusion, and ultimately affecting part quality. Monitoring powder bed defects in real time during the powder-spreading process is therefore of great significance.
A literature survey shows that powder bed defect detection methods fall mainly into two categories: defect detection based on traditional image processing, and defect detection models that introduce deep learning. Methods based on traditional image processing place high demands on sensor and light source placement, can detect only a limited range of defect types, and adapt and extend poorly to powder bed defects. Existing deep-learning-based powder bed defect detection methods generally do not consider the differences and roles of the different features of powder bed images, leaving room for improvement in algorithm performance.
Disclosure of Invention
The invention aims to provide a powder bed defect visual detection method based on image feature fusion, to remedy the shortcoming that existing deep-learning-based powder bed defect detection methods generally do not consider the differences and roles of the different features of powder bed images. Feature extraction strategies are designed for three different types of powder bed defect features, and powder bed defects are detected by combining them with a defect detection algorithm.
In order to solve the above technical problems, the invention specifically provides a powder bed defect visual detection method based on image feature fusion, comprising the following steps:
s1, dividing the powder bed defects into three different types of powder bed defects according to the forming reasons of the powder bed defects;
s2, determining a feature extraction strategy according to the three different types of powder bed defect features in the step S1, wherein the feature extraction strategy comprises the following steps: extracting scale space features, texture features and geometric features;
s3, establishing a powder bed defect detection algorithm model, and monitoring the quality of the powder laying process by using the powder bed defect detection algorithm model, wherein the method specifically comprises the following substeps:
S31, preprocessing the obtained powder bed defect images, and then dividing all powder bed defect images into two groups in a 7:3 ratio, wherein the first group is the training set used to initially establish the powder bed defect detection algorithm model and the second group is the test set used to test the powder bed defect detection algorithm model;
S32, respectively extracting scale-space features, texture features and geometric features from each powder bed defect image of the training set and the test set based on the bag-of-words model, and constructing three groups of visual dictionaries (one each for the SIFT, GLCM and Hu features) for the training set and the test set from the feature extraction results; the distribution of each word of the visual dictionary in each image is counted to obtain a quantized form of each powder bed defect image represented by visual word histograms H_SIFT, H_GLCM and H_Hu, thereby constructing three groups of visual word histograms for the training set and the test set respectively;
S33, serially fusing the three groups of visual word histograms of the training set and the test set obtained in step S32 to form a fused feature matrix, and reducing the dimensionality of the fused feature matrix through feature selection;
the specific method for carrying out serial fusion on the three groups of visual word histograms comprises the following substeps:
S331, first, extracting the scale-space features of each powder bed defect image with the SIFT algorithm, where c is the category label of the image and i is the image number; each powder bed defect image contains n_i feature points, each of which is a 128-dimensional feature vector, and the SIFT features of all powder bed defect images are collectively denoted F_SIFT;
Secondly, 6 GLCM features are extracted from each powder bed defect image, forming a GLCM feature vector; the GLCM features of all powder bed defect images are collectively denoted F_GLCM;
Then, the 7 invariant moments of each powder bed defect image are calculated to form a 7-dimensional feature vector; the Hu invariant moments of all powder bed defect images are collectively denoted F_Hu;
S332, fusing F_SIFT, F_GLCM and F_Hu in a serial fusion mode; the fused feature matrix is denoted H and expressed as:

H = (F_SIFT, F_GLCM, F_Hu);
S333, performing variance-filtering dimensionality reduction on the fused feature matrix H to obtain the final feature matrix H';
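The variance-filtering step of S333 can be sketched in a few lines of numpy; the matrix shape, the synthetic data and the threshold below are illustrative assumptions, not the patent's values:

```python
import numpy as np

# Hypothetical fused feature matrix H: rows are powder bed images, columns are
# the concatenated histogram features (H_SIFT, H_GLCM, H_Hu side by side).
rng = np.random.default_rng(0)
H = rng.random((20, 10))
H[:, 3] = 0.5          # a constant, uninformative column
H[:, 7] = 0.0          # another zero-variance column

# Variance filtering: keep only columns whose variance exceeds a threshold t.
t = 1e-8
col_var = H.var(axis=0)
H_prime = H[:, col_var > t]

print(H.shape, "->", H_prime.shape)
```

A real run would set the threshold from the training set only, then apply the same column mask to the test set so both share one feature space.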
S34, establishing a preliminary powder bed defect detection algorithm model from the training set image data in combination with a random forest classification algorithm, specifically comprising the following substeps:
S341, repeatedly and randomly drawing m samples with replacement from the training set N to generate new training sample sets;
S342, generating m decision trees from the m sample sets to form a random forest, where each decision tree is constructed as follows:
S3421, selecting the feature H'_j with the minimum value of Gini(N, H'_j) to divide the set N into two subsets N_1 and N_2, where Gini(N, H'_j) is expressed as:

Gini(N, H'_j) = (|N_1|/|N|)Gini(N_1) + (|N_2|/|N|)Gini(N_2), with Gini(N) = 1 - Σ_k p_k²,

where p_k is the proportion of samples of class k in the corresponding set;
S3422, recursively applying step S3421 to the two child nodes N_1 and N_2 until the random forest is generated; the set of m decision trees is expressed as:

{t_1(H'), t_2(H'), t_3(H'), ..., t_m(H')};
S343, obtaining the final classification result by simple majority voting, expressed as:

T(H') = argmax_y Σ_{i=1..m} I(t_i(H') = y),

where I(·) is the indicator function;
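The Gini-based split selection of step S3421 can be illustrated with a small numpy sketch; the toy two-feature data and the median split threshold are assumptions made purely for illustration:

```python
import numpy as np

def gini(y):
    """Gini impurity 1 - sum_k p_k^2 of a label vector y."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def gini_split(N_X, N_y, j, thresh):
    """Gini(N, H'_j): impurity after splitting set N on feature j at thresh,
    weighted by the subset sizes |N1|/|N| and |N2|/|N|."""
    mask = N_X[:, j] <= thresh
    n, n1, n2 = len(N_y), mask.sum(), (~mask).sum()
    if n1 == 0 or n2 == 0:
        return gini(N_y)
    return n1 / n * gini(N_y[mask]) + n2 / n * gini(N_y[~mask])

# Toy data: feature 0 separates the two classes, feature 1 is pure noise.
rng = np.random.default_rng(1)
X = rng.random((100, 2))
y = (X[:, 0] > 0.5).astype(int)

# Step S3421: choose the feature with the minimal Gini(N, H'_j).
scores = [gini_split(X, y, j, np.median(X[:, j])) for j in range(X.shape[1])]
best_j = int(np.argmin(scores))
print("best feature:", best_j)
```

Growing the tree then recurses on the two child subsets (S3422), and the forest votes over the m trees (S343).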
S35, determining the defect detection algorithm parameter combination, performing ten-fold cross-validation 10 times to optimize the random forest algorithm parameters, and selecting the optimal random forest parameters with the mean of the 10 runs' average accuracies as the evaluation index, with the following specific substeps:
S351, randomly dividing the data set N into 10 disjoint subsets; the data set N has 630 training samples, so each subset has 63 training samples, and the subsets are expressed as:

N = {N_1, N_2, N_3, ... N_i}, i = 1, 2, 3, ... 10,
N_i = (H'_i, y_i),

where (H'_i, y_i) denotes the feature matrix and true image categories corresponding to the i-th subset;
S352, each time randomly selecting 1 of the 10 subsets as the test set, taking the other 9 as the training set, and training the random forest classification model with the training set data;
S353, testing on the test set data to obtain the average accuracy of each run, and calculating the mean of the 10 runs' average accuracies, which serves as the true classification rate of the random forest classification model:

Acc_i = (1/|N_i|) Σ I(T_{N(i)}(H'_i) = y_i), Acc = (1/10) Σ_{i=1..10} Acc_i,

where T_{N(i)}(H'_i) denotes the predictions obtained when the i-th subset is selected as the test set, and Acc_i denotes the average accuracy of a single run;
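A minimal sketch of the fold construction and accuracy averaging in S351 to S353, using the 630-sample, 10-subset split stated above; the random placeholder predictions stand in for the trained random forest:

```python
import numpy as np

# Step S351: randomly split 630 samples into 10 disjoint subsets of 63 each
# (indices only; the feature matrix and labels live elsewhere).
n_samples, n_folds = 630, 10
rng = np.random.default_rng(42)
perm = rng.permutation(n_samples)
folds = np.array_split(perm, n_folds)   # subsets N_1 ... N_10

# Steps S352-S353: each subset serves once as the test set; the mean of the
# 10 per-fold accuracies estimates the true classification rate. Random
# 3-class guesses keep the sketch self-contained in place of a real model.
y = rng.integers(0, 3, n_samples)
accs = []
for i in range(n_folds):
    test_idx = folds[i]
    preds = rng.integers(0, 3, len(test_idx))   # placeholder predictions
    accs.append(np.mean(preds == y[test_idx]))
mean_acc = float(np.mean(accs))
print(f"mean accuracy over {n_folds} folds: {mean_acc:.3f}")
```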
On the basis of the optimal random forest parameters, a random forest classification model is established. Let k_1, k_2 and k_3 denote the numbers of bag-of-words clustering centres for the SIFT, GLCM and Hu invariant moment features respectively; k_1, k_2 and k_3 all start at 100, increase in steps of 100 and terminate at 500. All parameter combinations are traversed through the optimal-parameter random forest classification model, and the defect detection algorithm parameter combination is selected with the average accuracy of the algorithm as the evaluation index;
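The traversal of all (k_1, k_2, k_3) combinations can be sketched as a grid search; the scoring function below is a placeholder assumption, since the real criterion is the cross-validated mean accuracy of the rebuilt model:

```python
import itertools

# Parameter grid from S35: k1, k2, k3 (bag-of-words cluster counts for the
# SIFT, GLCM and Hu features) each run from 100 to 500 in steps of 100.
grid = list(itertools.product(range(100, 501, 100), repeat=3))

def evaluate(k1, k2, k3):
    # Placeholder score; a real run would rebuild the visual dictionaries
    # with these sizes and report the cross-validated mean accuracy.
    return -abs(k1 - 300) - abs(k2 - 300) - abs(k3 - 300)

best = max(grid, key=lambda ks: evaluate(*ks))
print("combinations:", len(grid), "best:", best)
```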
S36, according to the result of step S35, taking the defect detection algorithm model established with the optimal defect detection algorithm parameter combination as the optimal defect detection algorithm model;
S37, applying the optimal defect detection algorithm model selected in step S36 to powder bed images acquired in real time, and monitoring the quality of the powder-spreading process in real time.
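The bag-of-words quantization underlying step S32 (assigning each descriptor to its nearest visual word and counting the assignments into a histogram) can be sketched as follows; the dictionary and descriptors are synthetic stand-ins, not real SIFT output:

```python
import numpy as np

# Each SIFT descriptor of an image is assigned to its nearest visual word
# (a k-means cluster centre), and the word counts form the image's H_SIFT.
rng = np.random.default_rng(7)
k1 = 5                                   # visual dictionary size (k1 in S35)
dictionary = rng.random((k1, 128))       # cluster centres from k-means
descriptors = rng.random((40, 128))      # one image's SIFT descriptors

# Nearest-centre assignment by Euclidean distance.
dists = np.linalg.norm(descriptors[:, None, :] - dictionary[None, :, :], axis=2)
words = dists.argmin(axis=1)
H_SIFT = np.bincount(words, minlength=k1)

print("H_SIFT:", H_SIFT, "sum:", H_SIFT.sum())
```

The GLCM and Hu histograms H_GLCM and H_Hu are built the same way from their own dictionaries before serial fusion.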
Preferably, the three different types of powder bed defects described in S1 are stripe-shaped defects, cladding layer locally high defects, and insufficient powder supply defects.
Preferably, the powder bed defect features of the different types described in S2 include stripe-shaped defect features, cladding layer locally high defect features and insufficient powder supply defect features; wherein,
the stripe-shaped defect appears in the image as a defect region whose gray value is lower than that of well-spread regions, with a relatively small defect area;
the cladding layer locally high defect appears in the image as a locally high region whose gray value is higher than that of well-spread regions, with the smallest defect area;
the insufficient powder supply defect appears in the image as a defect region with locally higher and locally lower gray levels, with the largest defect area.
Preferably, the SIFT algorithm step in step S32 includes:
(1) First, the scale space is constructed by filtering with a Gaussian kernel function, expressed as:

G(x, y, σ) = (1 / (2πσ²)) exp(-(x² + y²) / (2σ²));
second, the constructed scale space function is expressed as:
L(x,y,σ)=G(x,y,σ)*I(x,y);
Next, adjacent images in each octave of the Gaussian pyramid are subtracted to obtain the difference-of-Gaussian (DoG) images:

D(x, y, σ) = [G(x, y, kσ) - G(x, y, σ)] * I(x, y) = L(x, y, kσ) - L(x, y, σ),
where I(x, y) denotes the original image and (x, y) the pixel position in the image; G(x, y, σ) is the scale-variable Gaussian function and σ the scale-space factor; k denotes the ratio between two adjacent scale spaces; and * is the convolution operator;
Finally, each pixel of a DoG image is compared with its 26 neighbours (8 in the same scale and 9 in each of the two adjacent scales), so that extreme points are detected in both scale space and the two-dimensional image space;
(2) Curve fitting of the scale-space DoG function with a Taylor series further determines the position and scale of the key points, while key points with low contrast and unstable edge response points are removed;
(3) For the key points detected in the DoG pyramid, the gradient and direction distribution of the pixels in the neighbourhood of the Gaussian pyramid image where the key point lies are computed and collected in a histogram; the gradient magnitude m(x, y) and direction θ(x, y) are expressed as:

m(x, y) = sqrt((L(x+1, y) - L(x-1, y))² + (L(x, y+1) - L(x, y-1))²),
θ(x, y) = arctan((L(x, y+1) - L(x, y-1)) / (L(x+1, y) - L(x-1, y)));
(4) Gradient direction histograms with 8 direction bins are computed over a 4×4 grid of windows in the key point's scale space, so that each key point forms a 4 × 4 × 8 = 128-dimensional feature vector.
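The scale-space and DoG construction of steps (1) and (2) can be sketched with scipy's Gaussian filtering; σ = 1.6 and k = √2 are common SIFT defaults, assumed here for illustration, and the test image is synthetic:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# L(x,y,σ) = G(x,y,σ) * I(x,y) and D(x,y,σ) = L(x,y,kσ) - L(x,y,σ).
rng = np.random.default_rng(3)
I = rng.random((64, 64))                # stand-in powder bed image
sigma, k = 1.6, 2 ** 0.5                # assumed SIFT-style defaults

L1 = gaussian_filter(I, sigma)          # L(x, y, σ)
L2 = gaussian_filter(I, k * sigma)      # L(x, y, kσ)
D = L2 - L1                             # difference-of-Gaussian image

print("DoG shape:", D.shape)
```

Extrema of D over the 26-neighbourhood would then be the candidate key points.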
Preferably, in step S32 the texture features of the powder bed defect image are 6 texture feature values constructed from the gray level co-occurrence matrix: angular second moment, energy, contrast, dissimilarity, homogeneity and correlation, expressed as:

ASM = Σ_a Σ_b P(a, b)², Energy = sqrt(ASM),
Contrast = Σ_a Σ_b (a - b)² P(a, b),
Dissimilarity = Σ_a Σ_b |a - b| P(a, b),
Homogeneity = Σ_a Σ_b P(a, b) / (1 + (a - b)²),
Correlation = Σ_a Σ_b (a - μ_a)(b - μ_b) P(a, b) / (σ_a σ_b),

where P(a, b) is the probability that a pixel A with gray level a co-occurs with a pixel B with gray level b at distance d from A in direction θ; θ takes the values 0°, 45°, 90° and 135°; d takes the value 1; μ_a and μ_b denote the means, and σ_a and σ_b the standard deviations, of the marginal distributions P_a and P_b.
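Three of the six GLCM features can be computed with a short numpy sketch; the 4-level toy image, the single 0° offset and the unnormalised (one-directional) counting are simplifying assumptions:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Grey-level co-occurrence matrix P for offset d=(dx, dy) (θ = 0°,
    d = 1 by default), normalised so its entries sum to 1."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()

# Toy 4-gray-level image standing in for a quantised powder bed patch.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
P = glcm(img)
a, b = np.indices(P.shape)
contrast = np.sum((a - b) ** 2 * P)
asm = np.sum(P ** 2)                      # angular second moment
homogeneity = np.sum(P / (1.0 + (a - b) ** 2))

print(f"contrast={contrast:.3f} ASM={asm:.3f} homogeneity={homogeneity:.3f}")
```

Repeating this for the four directions θ = 0°, 45°, 90°, 135° yields the per-direction feature vectors described in the method.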
Preferably, the geometric features of the powder bed defect image in step S32 are the 7 Hu invariant moment geometric feature values, the first two of which are:

φ1 = η20 + η02,
φ2 = (η20 - η02)² + 4η11²,

with φ3 through φ7 formed analogously from the second- and third-order moments, where η_pq denotes the normalized central moment of order p + q;
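A minimal numpy sketch of the first two Hu invariants, checked for translation invariance on a synthetic defect blob (the image and shift are illustrative assumptions):

```python
import numpy as np

def normalized_central_moments(img):
    """Return a function eta(p, q) giving the normalized central moment
    η_pq = μ_pq / m00^(1 + (p+q)/2) of a grayscale image."""
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    def mu(p, q):
        return ((x - xc) ** p * (y - yc) ** q * img).sum()
    def eta(p, q):
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    return eta

def hu12(img):
    """First two Hu invariants: φ1 = η20 + η02, φ2 = (η20-η02)² + 4η11²."""
    eta = normalized_central_moments(img)
    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2

# Translation invariance check on a synthetic rectangular "defect" blob.
img = np.zeros((32, 32))
img[5:12, 5:15] = 1.0
shifted = np.roll(img, (10, 8), axis=(0, 1))
phi_orig = hu12(img)
phi_shift = hu12(shifted)
print(phi_orig, phi_shift)
```

The remaining five invariants follow the same pattern from the third-order η_pq.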
Preferably, in step S33 a serial feature fusion technique is used for feature fusion, and a filtering-type feature selection algorithm is used for dimensionality reduction.
Preferably, in step S35, the parameters of the visual dictionary size and the random forest classifier in the bag-of-words model are optimized respectively.
Compared with the prior art, the invention has the following beneficial effects:
The invention divides powder bed defects into three different types according to their causes; designs feature extraction strategies (scale-space, texture and geometric feature extraction) for the different types of powder bed defect features; performs feature fusion and selection to initially establish the powder bed defect detection algorithm model; designs the algorithm parameter combination and optimization strategy and establishes the final powder bed defect detection algorithm model from the optimal parameter combination; and monitors the powder-spreading process in real time, acquiring powder bed images and detecting powder bed defects with the defect detection algorithm so as to monitor powder-spreading quality. Compared with existing algorithms, the algorithm is more accurate, its guidance of the powder-spreading process is more complete and precise, and powder bed defects arising during powder spreading can be corrected well in real time.
Drawings
Fig. 1 is a schematic flow chart of a powder bed defect visual detection method based on image feature fusion according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a powder bed defect provided by an embodiment of the present invention;
FIG. 3 is a schematic flow chart of a powder bed defect detection algorithm according to an embodiment of the present invention;
FIG. 4a is a schematic diagram of a striped defect in an embodiment of the present invention;
FIG. 4b is a schematic illustration of a cladding type localized high defect in an embodiment of the present invention; and
FIG. 4c is a schematic diagram of a defect of insufficient powder supply in an embodiment of the invention.
Detailed Description
In order to make the technical problems to be solved, the technical solutions and the advantages clearer, a detailed description is given below with reference to the accompanying drawings and specific embodiments.
Aiming at the defect that the existing powder bed defect detection method based on deep learning does not generally consider the difference and effect of different features of a powder bed image, the invention provides a powder bed defect visual detection method based on image feature fusion.
The algorithm and the working principle of the invention are specifically discussed below in connection with specific embodiments:
the invention provides a powder bed defect visual detection method based on image feature fusion, which can detect the powder bed defect in real time in the powder laying process without any influence on the processing process and relates to the field of metal powder bed fusion. The method comprises the following steps: defining three different categories of powder bed defects for their cause; designing feature extraction strategies according to the defect features of different types of powder beds, wherein the feature extraction strategies comprise: extracting scale space features, texture features and geometric features; performing feature fusion and selection, and initially establishing a powder bed defect detection algorithm model; designing an algorithm parameter combination and an optimizing strategy, and establishing a final powder bed defect detection algorithm model according to the optimal algorithm parameter combination; and monitoring the powder spreading process in real time, acquiring a powder bed image, and realizing the detection of the powder bed defect by combining a defect detection algorithm so as to monitor the quality of the powder spreading process.
As shown in fig. 1, the visual detection method for the defects of the powder bed based on the image feature fusion provided by the embodiment of the invention comprises the following steps:
s1, defining three different types of powder bed defects aiming at the causes of the powder bed defects, wherein in the embodiment, as shown in FIG. 2, the three different types of powder bed defects comprise stripe defects, cladding layer local high defects and powder supply shortage defects.
S2, designing feature extraction strategies for the defect features of the different types of powder bed, including scale-space, texture and geometric feature extraction. In this embodiment the different types of powder bed defect features are as follows: stripe-shaped defects mostly appear in the image as defect regions whose gray value is lower than that of well-spread regions, and their area is generally small; cladding layer locally high defects mostly appear as locally high regions whose gray value is higher than that of well-spread regions, and their area is very small; insufficient powder supply defects appear as regions of locally higher and locally lower gray levels, and their area is generally large.
S3, performing feature fusion and selection, and initially establishing a powder bed defect detection algorithm model; designing an algorithm parameter combination and an optimizing strategy, and establishing a final powder bed defect detection algorithm model according to the optimal algorithm parameter combination; and monitoring the powder spreading process in real time, acquiring a powder bed image, and realizing the detection of the powder bed defect by combining a defect detection algorithm so as to monitor the quality of the powder spreading process.
In this embodiment, as shown in fig. 3, a powder bed defect detection algorithm model is established, and the powder laying process quality is monitored by using the powder bed defect detection algorithm model, specifically:
the invention provides a powder bed defect visual detection method based on image feature fusion, which comprises the following steps of:
s1, dividing the powder bed defects into three different types of powder bed defects according to the forming reasons of the powder bed defects;
s2, determining a feature extraction strategy according to three different types of powder bed defect features, wherein the feature extraction strategy comprises the following steps: extracting scale space features, texture features and geometric features;
s3, establishing a powder bed defect detection algorithm model, and monitoring the quality of the powder laying process by using the powder bed defect detection algorithm model, wherein the method specifically comprises the following substeps:
s31, preprocessing the obtained powder bed defect images, and then dividing all the powder bed defect images into two groups at a ratio of 7:3, wherein the first group is a training group used for establishing the powder bed defect detection algorithm model, and the second group is a testing group used for testing the powder bed defect detection algorithm model;
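The 7:3 split above can be sketched with scikit-learn; this is an illustrative sketch assuming the preprocessed images and labels are held in arrays (`images` and `labels` are hypothetical stand-in names, not from the patent):

```python
from sklearn.model_selection import train_test_split
import numpy as np

# Hypothetical stand-in for the 900 preprocessed defect images (3 classes x 300).
labels = np.repeat([0, 1, 2], 300)   # 0: stripe, 1: cladding local high, 2: powder shortage
images = np.zeros((900, 64, 64))     # placeholder image stack

# Stratified 7:3 split keeps the class balance in both groups.
X_train, X_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.3, stratify=labels, random_state=0)
print(len(X_train), len(X_test))     # 630 training and 270 testing samples
```

The 630 training samples produced here match the training-set size used later in step S351.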
s32, extracting scale space features, texture features and geometric features from each powder bed defect image of the training group and the testing group based on the bag-of-words model, so as to construct three sets of visual dictionaries for the training group and the testing group according to the feature extraction results (one dictionary each for the scale space, texture and geometric features); counting the distribution of each word of the visual dictionary in the image to obtain the quantized form of each picture represented by a visual word histogram, H_SIFT, H_GLCM and H_Hu, thereby constructing three groups of visual word histograms;
s33, respectively carrying out serial fusion on three groups of visual word histograms of the training group and the testing group to form a fused feature matrix, and carrying out dimension reduction treatment on the fused feature matrix through feature selection;
the specific method for carrying out serial fusion on the three groups of visual word histograms is as follows:
s331, firstly, extracting scale space features F_SIFT^{c,i} from each powder bed defect image by adopting the SIFT algorithm, wherein c is the category label of the picture and i is the image number; each image contains a number of feature points, each of which is a 128-dimensional feature vector, and the SIFT features of all images are collectively denoted F_SIFT;
next, 6 GLCM features are extracted in 4 directions of each powder bed defect image, giving a 24-dimensional feature vector; the GLCM features of all images are collectively denoted F_GLCM;
then, the 7 invariant moments of each image are calculated to form a 7-dimensional feature vector; the Hu invariant moments of all images are collectively denoted F_Hu;
s332, fusing F_SIFT, F_GLCM and F_Hu in a serial manner, the fused feature matrix being denoted H and expressed as:

H = (F_SIFT, F_GLCM, F_Hu);
s333, performing variance-filtering dimension reduction on the fused feature matrix H, removing features that contribute nothing to distinguishing samples, to obtain the final feature matrix H';
the specific method for carrying out serial fusion on the obtained characteristics is as follows:
firstly, extracting scale space features F_SIFT^{c,i} from each preprocessed powder bed defect image by adopting the SIFT algorithm, wherein c is the category label of the picture and i is the image number; each image contains a number of feature points, each of which is a 128-dimensional feature vector, and the SIFT features of all images are collectively denoted F_SIFT.
Next, 6 GLCM features are extracted in 4 directions of each image, resulting in a 24-dimensional feature vector; the GLCM features of all images are collectively denoted F_GLCM.
The 6 characteristic values are as follows. The Angular Second Moment (ASM) is a measure of the stability of gray-level variation in the image texture, reflecting the uniformity of the gray distribution and the coarseness of the texture; the larger the ASM value, the more regular the variation of the current image texture.
The Energy (Energy) is the arithmetic square root of the angular second moment, and as with the angular second moment, reflects the uniformity of the image gray scale distribution, with a larger Energy value indicating a more uniform current image texture distribution.
Contrast (Contrast) is a measure of the sharpness of the texture and the depth of the corrugations of an image, with a larger value of Contrast indicating a clearer current image and a deeper texture corrugations.
Dissimilarity (Dissimilarity) is a measure of the degree of difference in the image texture; similar to contrast, the larger the local contrast, the larger the dissimilarity value.
Homogeneity (Homogeneity) is a measure of the local uniformity of the texture of an image, with a larger value for Homogeneity indicating that the local texture of the current image is more uniform and that the texture variation between different regions is smaller.
The Correlation (Correlation) is a measure of the linear dependence of gray levels in the image and reflects the similarity of gray values along the horizontal or vertical direction; a larger correlation value indicates a stronger linear relation among the gray levels of the current image.
Wherein μ_a, μ_b represent the means of P_a, P_b respectively, and σ_a, σ_b represent the standard deviations of P_a, P_b respectively.
Then, the 7 invariant moments of each image are calculated to form a 7-dimensional feature vector; the Hu invariant moments of all images are collectively denoted F_Hu.
For a discrete digital image, the gray value at (x, y) is denoted f(x, y), and its standard moment of order p+q is defined as:

m_pq = Σ_x Σ_y x^p · y^q · f(x, y).
When the image is translated, in order to keep m_pq translation invariant, the position is normalized to obtain the central moment u_pq of order p+q, defined as:

u_pq = Σ_x Σ_y (x − x̄)^p · (y − ȳ)^q · f(x, y),

wherein (x̄, ȳ) represents the barycentric coordinates of the image: x̄ = m_10 / m_00, ȳ = m_01 / m_00.
To keep the central moment invariant after translation and scaling as well, its extent is normalized to obtain the normalized central moment η_pq, as follows:

η_pq = u_pq / u_00^r,

where r = (p + q)/2 + 1 and p + q = 2, 3, 4, ….
Seven invariant moments can be obtained from the second- and third-order normalized central moments, forming a 7-dimensional feature vector that describes the geometric features of the image, as follows:

φ1 = η20 + η02,
φ2 = (η20 − η02)² + 4η11²,
φ3 = (η30 − 3η12)² + (3η21 − η03)²,
φ4 = (η30 + η12)² + (η21 + η03)²,
φ5 = (η30 − 3η12)(η30 + η12)[(η30 + η12)² − 3(η21 + η03)²] + (3η21 − η03)(η21 + η03)[3(η30 + η12)² − (η21 + η03)²],
φ6 = (η20 − η02)[(η30 + η12)² − (η21 + η03)²] + 4η11(η30 + η12)(η21 + η03),
φ7 = (3η21 − η03)(η30 + η12)[(η30 + η12)² − 3(η21 + η03)²] − (η30 − 3η12)(η21 + η03)[3(η30 + η12)² − (η21 + η03)²].
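The moment pipeline above (raw moments, then central moments, then normalized moments, then Hu invariants) can be sketched in NumPy. This sketch computes only φ1 and φ2 and checks translation invariance on a shifted toy blob; it is an illustration, not the patent's implementation:

```python
import numpy as np

def normalized_moments(f):
    """Return eta[(p, q)] for 2 <= p+q <= 3 from a 2-D gray image f."""
    y, x = np.mgrid[0:f.shape[0], 0:f.shape[1]].astype(float)
    m = lambda p, q: np.sum((x ** p) * (y ** q) * f)          # standard moment m_pq
    xb, yb = m(1, 0) / m(0, 0), m(0, 1) / m(0, 0)             # barycentre
    u = lambda p, q: np.sum(((x - xb) ** p) * ((y - yb) ** q) * f)  # central moment
    eta = {}
    for p in range(4):
        for q in range(4 - p):
            if p + q >= 2:
                r = (p + q) / 2 + 1                           # eta_pq = u_pq / u_00^r
                eta[(p, q)] = u(p, q) / (u(0, 0) ** r)
    return eta

def hu_first_two(f):
    e = normalized_moments(f)
    phi1 = e[(2, 0)] + e[(0, 2)]
    phi2 = (e[(2, 0)] - e[(0, 2)]) ** 2 + 4 * e[(1, 1)] ** 2
    return phi1, phi2

img = np.zeros((32, 32)); img[8:20, 10:18] = 1.0        # toy defect blob
shifted = np.roll(np.roll(img, 5, axis=0), 3, axis=1)   # translated copy
print(hu_first_two(img), hu_first_two(shifted))          # nearly identical values
```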
Next, for the F_SIFT, F_GLCM and F_Hu features of all pictures, the K-means clustering algorithm is applied to generate k cluster centers for each. The Euclidean distance is chosen as the evaluation index of feature vector similarity; during the iterations of the K-means algorithm all feature vectors are divided into different classes, and iteration stops when the within-cluster sum of squares is minimal. Each cluster center represents a visual word, so three visual dictionaries are obtained for the three kinds of extracted features. The distribution of each word of the visual dictionary in an image is counted to obtain the quantized form of each picture represented by a visual word histogram, H_SIFT, H_GLCM and H_Hu.
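The dictionary-building step can be sketched with scikit-learn's K-means: cluster all local descriptors, then histogram each image's descriptors over the cluster centers. The descriptors and the dictionary size `k` below are synthetic and illustrative, not the patent's values:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic stand-in for SIFT descriptors: one (n_i, 128) array per image.
per_image = [rng.normal(size=(int(rng.integers(20, 40)), 128)) for _ in range(10)]

k = 8                                        # illustrative dictionary size
all_desc = np.vstack(per_image)
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(all_desc)

def bow_histogram(desc, km, k):
    """Quantize descriptors to visual words and count word occurrences."""
    words = km.predict(desc)
    return np.bincount(words, minlength=k) / len(desc)   # normalized histogram

H = np.array([bow_histogram(d, km, k) for d in per_image])
print(H.shape)   # one k-bin visual-word histogram per image
```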
H_SIFT, H_GLCM and H_Hu are fused in a serial manner; the fused feature matrix is denoted H, and feature selection with a filtering method is applied to H to obtain the final feature matrix H'. Feature selection effectively reduces the feature dimension, improves classification accuracy, and yields a better analysis and interpretation of the underlying meaning of the data. The filtering method is based on the idea of feature ranking: it measures the importance of features only from the data itself, independently of any learning algorithm. For large-scale high-dimensional data sets, the advantage of a filtering algorithm for feature selection is that it is simple and fast to compute, requires no modeling or evaluation of feature subsets, and is independent of the classification algorithm.
Variance filtering is then applied to the fused feature set to remove features that contribute nothing to distinguishing samples: when the variance of a feature is small, the samples show essentially no difference in that feature, so it is of little or no use for discrimination. Chi-square filtering is then applied to the variance-filtered feature set to eliminate redundant, highly correlated feature vectors.
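The two-stage filtering (variance threshold, then chi-square) maps directly onto scikit-learn's feature-selection utilities. A sketch on synthetic non-negative histogram-like features (chi-square filtering requires non-negative inputs); the threshold and `k` are illustrative:

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold, SelectKBest, chi2

rng = np.random.default_rng(1)
X = rng.random((90, 40))          # synthetic fused histogram features, non-negative
X[:, 5] = 0.5                     # a constant feature: zero variance, uninformative
y = rng.integers(0, 3, size=90)   # three defect classes

# Stage 1: drop (near-)constant features that cannot distinguish samples.
vt = VarianceThreshold(threshold=1e-8)
X_vt = vt.fit_transform(X)

# Stage 2: keep the features most associated with the class label.
kb = SelectKBest(chi2, k=20)
X_sel = kb.fit_transform(X_vt, y)
print(X.shape, X_vt.shape, X_sel.shape)
```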
S34, combining a random forest classification algorithm, and establishing a preliminary powder bed defect detection algorithm model by utilizing training group image data, wherein the method specifically comprises the following sub-steps:
s341, repeatedly and randomly drawing m samples with replacement from the training set N to generate new training sample sets;
s342, generating m decision trees for m sample sets to form a random forest, wherein each decision tree is constructed as follows:
s3421, selecting the feature H'_j with the minimum Gini(N, H'_j) value to divide the set N into two subsets N_1 and N_2, where Gini(N, H'_j) is expressed as:

Gini(N, H'_j) = (|N_1| / |N|) · Gini(N_1) + (|N_2| / |N|) · Gini(N_2), with Gini(N_i) = 1 − Σ_k p_k²,

wherein p_k is the proportion of samples of class k in N_i;
s3422, for the two child nodes N_1 and N_2, recursively calling s3421 until the random forest is generated; the set of m decision trees is expressed as:

{t_1(H'), t_2(H'), t_3(H'), …, t_m(H')};
s343, obtaining the final classification result by a simple majority vote, expressed as:

T(H') = arg max_y Σ_{i=1}^{m} I(t_i(H') = y),

wherein I(·) is the indicator function;
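Steps s341 to s343 (bootstrap sampling, per-tree Gini-split training, majority vote) can be sketched with scikit-learn decision trees; a minimal hand-rolled forest on a toy, clearly separable problem (scikit-learn's `RandomForestClassifier` packages the same idea, and the data here is synthetic):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
# Toy two-feature, three-class problem with well-separated clusters.
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in (0.0, 3.0, 6.0)])
y = np.repeat([0, 1, 2], 50)

m = 15
trees = []
for _ in range(m):
    idx = rng.integers(0, len(X), size=len(X))        # bootstrap with replacement
    t = DecisionTreeClassifier(criterion="gini").fit(X[idx], y[idx])
    trees.append(t)

def forest_predict(trees, X):
    """Simple majority vote over the m trees."""
    votes = np.stack([t.predict(X) for t in trees])   # shape (m, n_samples)
    return np.apply_along_axis(
        lambda v: np.bincount(v.astype(int)).argmax(), 0, votes)

pred = forest_predict(trees, X)
print((pred == y).mean())
```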
s35, determining the defect detection algorithm parameter combination, performing ten-fold cross validation 10 times to optimize the random forest algorithm parameters, and selecting the optimal random forest algorithm parameters with the mean of the 10 runs' average accuracies as the evaluation index, the specific substeps being as follows:
s351, randomly dividing the data set N into 10 groups of disjoint subsets, wherein the number of training samples of the data set N is 630, each subset has 63 training samples, and the corresponding subset is expressed as:
N = {N_1, N_2, N_3, …, N_i}, i = 1, 2, 3, … 10,

N_i = (H'_i, y_i),

wherein (H'_i, y_i) represents the feature matrix and the true image categories corresponding to the i-th subset;
s352, randomly selecting 1 from 10 subsets each time as a test set, using the other 9 as training sets, and training a random forest classification model by using training set data;
s353, testing on the test-subset data to obtain the average accuracy of one run; the mean of the average accuracies over the 10 runs is taken as the true classification rate of the random forest classification model and is expressed as:

ACC = (1/10) · Σ_{i=1}^{10} acc_i,

wherein T_N(i)(H'_i) represents the predicted values obtained when the i-th subset is selected as the test set, and acc_i represents the average accuracy of a single run.
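The 10× ten-fold cross-validation is available directly in scikit-learn via `RepeatedStratifiedKFold`; a sketch on a synthetic data set (the real model would be the random forest trained on the selected features H'):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

rng = np.random.default_rng(3)
# Synthetic three-class data standing in for the fused defect features.
X = np.vstack([rng.normal(c, 1.0, size=(60, 5)) for c in (0.0, 4.0, 8.0)])
y = np.repeat([0, 1, 2], 60)

# 10 repeats of stratified 10-fold CV -> 100 fold accuracies in total.
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
clf = RandomForestClassifier(n_estimators=20, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(len(scores), scores.mean())   # mean accuracy over all repeats
```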
A random forest classification model is then established with the optimal random forest algorithm parameters. Let k_1, k_2, k_3 denote the numbers of bag-of-words model cluster centers for the SIFT, GLCM and Hu invariant moment features respectively; the initial value of each parameter is set to 100, the subsequent step to 100 and the termination value to 500. All parameter combinations are traversed and substituted into the optimal-parameter random forest classification model, and the optimal parameter combination is selected with the mean algorithm accuracy as the evaluation index;
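The sweep over k_1, k_2, k_3 (start 100, step 100, end 500) is a plain grid search. Sketched here with a hypothetical `evaluate` stand-in for "build dictionaries of sizes k_1/k_2/k_3, train the random forest, return mean cross-validated accuracy"; the scoring function is a mock, not the patent's:

```python
from itertools import product

grid = range(100, 501, 100)   # candidate dictionary sizes: 100, 200, 300, 400, 500

def evaluate(k1, k2, k3):
    """Hypothetical placeholder for: build the three visual dictionaries,
    train the random forest, and return the mean CV accuracy."""
    return 1.0 - (abs(k1 - 300) + abs(k2 - 200) + abs(k3 - 400)) / 2000.0

# Traverse all 5 x 5 x 5 combinations and keep the best-scoring one.
best = max(product(grid, grid, grid), key=lambda ks: evaluate(*ks))
print(best)   # combination with the highest (mock) accuracy
```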
s36, according to the result of the step S35, using the defect detection algorithm model established by the optimal defect detection algorithm parameter combination as an optimal defect detection algorithm model;
and S37, applying the optimal defect detection algorithm model selected in step S36 to the powder bed image acquired in real time, so as to monitor the quality of the powder bed powder spreading process in real time.
Specific examples:
First, 100 defect images are acquired for each of the three defect types; the 300 original images are then augmented by image enhancement, random rotation and random color transformation, yielding a data set of 900 powder bed defect images with 300 images per defect type. The data set is divided into two groups: the first is the training group, used for establishing the algorithm model, and the second is the testing group, used for testing the detection effect of the algorithm.
And secondly, SIFT features are extracted from each powder bed defect image, and SIFT feature visualizations of three types of defects are shown in fig. 4.
The pink points in the figure represent the scale space feature points detected by the SIFT algorithm. In fig. 4(a) and fig. 4(c), dense feature points gather around the defect area while sparse feature points scatter over the rest of the image; in fig. 4(b), sparse feature points are irregularly distributed over the whole defect image. The main reason is that the defect areas of the stripe-shaped and insufficient-powder-supply defects are relatively concentrated, so a large number of feature points showing a similar pattern are extracted there, whereas the locally high areas of the cladding layer usually occur at the edge of the build and appear as extremely narrow edge shapes in the image, making the scale space features of these areas difficult to extract from the whole defect image.
For each powder bed image, the values of the 6 common statistics (angular second moment, energy, contrast, dissimilarity, homogeneity and correlation) are calculated in the 4 directions 0°, 45°, 90° and 135°, giving a 24-dimensional feature vector. One sample image is selected from each of the three defect types to calculate the GLCM; Table 1 lists the GLCM feature values of the sample images in the four directions.
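A pure-NumPy sketch of computing a GLCM for one direction and two of the six statistics (ASM and contrast); the (row, col) offsets for the four directions with d = 1 are shown, and the tiny 4-level image is a simplified illustration, not the patent's data:

```python
import numpy as np

def glcm(img, offset, levels):
    """Normalized gray level co-occurrence matrix for one (drow, dcol) offset."""
    dr, dc = offset
    P = np.zeros((levels, levels))
    rows, cols = img.shape
    for r in range(max(0, -dr), min(rows, rows - dr)):
        for c in range(max(0, -dc), min(cols, cols - dc)):
            P[img[r, c], img[r + dr, c + dc]] += 1
    return P / P.sum()

# (row, col) pixel steps for the four directions at distance d = 1.
offsets = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])

P = glcm(img, offsets[0], levels=4)
a, b = np.mgrid[0:4, 0:4]
contrast = np.sum((a - b) ** 2 * P)     # texture clarity / groove depth
asm = np.sum(P ** 2)                    # uniformity of the gray distribution
print(contrast, asm)
```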
For each powder bed defect image, 7 invariant moments were calculated, resulting in a 7-dimensional feature vector describing the geometry of the defect. One sample image was selected from the three types of defect images, respectively, and table 2 lists 7 invariant moments of the sample images.
Table 1 GLCM values in 4 directions for three powder bed defect images
Table 2 Hu invariant moment of three powder bed defect images
Then, three groups of visual word histograms of a training group and a testing group are constructed, the three groups of visual word histograms of the training group and the testing group are respectively fused in series, and feature selection is used for carrying out dimension reduction on the fused feature matrix;
and then combining a random forest classification algorithm, and establishing a preliminary powder bed defect detection algorithm model by utilizing training set data. Determining a defect detection algorithm parameter combination, performing defect detection on the image data of the test group by using the established preliminary powder bed defect detection algorithm model, and evaluating the detection effect of the powder bed defect detection algorithm model under the defect detection algorithm parameter combination according to the size of the visual dictionary and the random forest classifier parameters;
repeating iterative operation for all defect detection algorithm parameter combinations, and evaluating the detection effect of the powder bed defect detection algorithm model under all defect detection algorithm parameter combinations according to the size of the visual dictionary and the random forest classifier parameters; confirming that all the defect detection algorithm parameter combinations have completed iterative operation, searching for the defect detection algorithm parameter combination with the best detection result in the test group image data according to the result of the iterative operation, and selecting a defect detection algorithm model established by using the algorithm parameter combination as an optimal defect detection algorithm model.
And finally, in actual production, applying the selected optimal defect detection algorithm model to the powder bed defect image acquired in real time, so as to achieve the purpose of monitoring the quality of the powder laying process in real time. The powder spreading process is improved according to the defect image of the powder bed, and the powder spreading accuracy is increased.
While the foregoing is directed to the preferred embodiments of the present invention, it will be appreciated by those skilled in the art that various modifications and adaptations can be made without departing from the principles of the present invention, and such modifications and adaptations are intended to be comprehended within the scope of the present invention.
Claims (6)
1. The powder bed defect visual detection method based on image feature fusion is characterized by comprising the following steps of:
s1, dividing the powder bed defects into three different types of powder bed defects aiming at the forming reasons of the powder bed defects, wherein the three different types of powder bed defects are respectively stripe defects, cladding layer local high defects and powder supply deficiency defects;
s2, determining a feature extraction strategy according to the three different types of powder bed defect features in the step S1, wherein the feature extraction strategy comprises the following steps: extracting scale space features, texture features and geometric features;
the different classes of powder bed defect features include: stripe-shaped defect characteristics, cladding layer local high defect characteristics and powder supply deficiency defect characteristics; wherein,
the stripe-shaped defect is characterized in that a defect area in an image shows that the gray value of the defect area is lower than that of a good powder spreading area, and the area of the defect is smaller;
the local high defect of the cladding layer is shown in the image that the gray value of the local high region is higher than that of the region with good powder spreading, and the area of the defect is minimum;
the defect of insufficient powder supply shows the phenomenon that the local gray level is higher and the local gray level is lower in the image at the same time, and the area of the defect area is the largest;
s3, establishing a powder bed defect detection algorithm model, and monitoring the quality of the powder laying process by using the powder bed defect detection algorithm model, wherein the method specifically comprises the following substeps:
s31, preprocessing the obtained powder bed defect images, and then dividing all the powder bed defect images into two groups at a ratio of 7:3, wherein the first group is a training group used for establishing the powder bed defect detection algorithm model, and the second group is a testing group used for testing the powder bed defect detection algorithm model;
s32, respectively extracting scale space features, texture features and geometric features from each powder bed defect image of the training group and the testing group based on the bag-of-words model, so as to respectively construct, according to the feature extraction results, three visual dictionaries each for the training group and the testing group, namely a visual dictionary built from the scale space feature extraction results, a visual dictionary built from the texture feature extraction results, and a visual dictionary built from the geometric feature extraction results; counting the distribution of each word of the visual dictionary in the image to obtain the quantized form of each powder bed defect image represented by a visual word histogram, H_SIFT, H_GLCM and H_Hu, thereby constructing three groups of visual word histograms for the training group and the testing group respectively, wherein H_SIFT is the quantized form of each picture represented by a visual word histogram according to the scale space feature extraction result, H_GLCM is the quantized form according to the texture feature extraction result, and H_Hu is the quantized form according to the geometric feature extraction result;
s33, respectively carrying out serial fusion on the three groups of visual word histograms of the training group and the testing group obtained in the step S32 to form a fused feature matrix, and carrying out dimension reduction treatment on the fused feature matrix through feature selection;
the specific method for carrying out serial fusion on the three groups of visual word histograms comprises the following substeps:
s331, firstly, extracting scale space features F_SIFT^{c,i} from each powder bed defect image by adopting the SIFT algorithm, wherein c is the category label of the picture and i is the image number; each powder bed defect image contains a number of feature points, each of which is a 128-dimensional feature vector, and the SIFT features of all powder bed defect images are collectively denoted F_SIFT;
secondly, extracting 6 GLCM features in 4 directions from each powder bed defect image to obtain a 24-dimensional feature vector; the GLCM features of all powder bed defect images are collectively denoted F_GLCM;
then, calculating the 7 invariant moments of each powder bed defect image to form a 7-dimensional feature vector; the Hu invariant moments of all powder bed defect images are collectively denoted F_Hu;
s332, fusing F_SIFT, F_GLCM and F_Hu in a serial manner, the fused feature matrix being denoted H and expressed as:

H = (F_SIFT, F_GLCM, F_Hu);
s333, performing variance filtering dimension reduction treatment on the fused feature matrix H to obtain a final feature matrix H';
s34, combining a random forest classification algorithm, and establishing a preliminary powder bed defect detection algorithm model by utilizing training group image data, wherein the method specifically comprises the following sub-steps:
s341, repeatedly and randomly drawing m samples with replacement from the training set N to generate new training sample sets;
s342, generating m decision trees for m sample sets to form a random forest, wherein each decision tree is constructed as follows:
s3421, selecting the feature H'_j with the minimum Gini(N, H'_j) value to divide the set N into two subsets N_1 and N_2, where Gini(N, H'_j) is expressed as:

Gini(N, H'_j) = (|N_1| / |N|) · Gini(N_1) + (|N_2| / |N|) · Gini(N_2), with Gini(N_i) = 1 − Σ_k p_k²,

wherein p_k is the proportion of samples of class k in N_i;
s3422, for the two child nodes N_1 and N_2, recursively calling step s3421 until the random forest is generated; the set of m decision trees is expressed as:

{t_1(H'), t_2(H'), t_3(H'), …, t_m(H')};
s343, obtaining the final classification result by a simple majority vote, expressed as:

T(H') = arg max_y Σ_{i=1}^{m} I(t_i(H') = y),

wherein I(·) is the indicator function;
s35, determining the defect detection algorithm parameter combination, performing ten-fold cross validation 10 times to optimize the random forest algorithm parameters, and selecting the optimal random forest algorithm parameters with the mean of the 10 runs' average accuracies as the evaluation index, the specific substeps being as follows:
s351, randomly dividing the data set N into 10 groups of disjoint subsets, wherein the number of training samples of the data set N is 630, each subset has 63 training samples, and the corresponding subset is expressed as:
N = {N_1, N_2, N_3, …, N_i}, i = 1, 2, 3, … 10,

N_i = (H'_i, y_i),

wherein (H'_i, y_i) represents the feature matrix and the true image categories corresponding to the i-th subset;
s352, randomly selecting 1 from 10 subsets each time as a test set, using the other 9 as training sets, and training a random forest classification model by using training set data;
s353, testing on the test-subset data to obtain the average accuracy of one run; the mean of the average accuracies over the 10 runs is taken as the true classification rate of the random forest classification model and is expressed as:

ACC = (1/10) · Σ_{i=1}^{10} acc_i,

wherein T_N(i)(H'_i) represents the predicted values obtained when the i-th subset is selected as the test set, and acc_i represents the average accuracy of a single run;
establishing a random forest classification model with the optimal random forest algorithm parameters, letting k_1, k_2, k_3 denote the numbers of bag-of-words model cluster centers for the SIFT, GLCM and Hu invariant moment features respectively, setting the initial value of each parameter to 100, the subsequent step to 100 and the termination value to 500, traversing all parameter combinations, substituting them into the optimal-parameter random forest classification model, and selecting the defect detection algorithm parameter combination with the mean algorithm accuracy as the evaluation index;
s36, according to the result of the step S35, using the defect detection algorithm model established by the optimal defect detection algorithm parameter combination as an optimal defect detection algorithm model;
and S37, applying the optimal defect detection algorithm model selected in step S36 to the powder bed image acquired in real time, so as to monitor the quality of the powder bed powder spreading process in real time.
2. The visual inspection method of powder bed defects based on image feature fusion according to claim 1, wherein the SIFT algorithm step in step S32 specifically comprises:
(1) Firstly, filtering is performed with a Gaussian kernel function to construct the scale space, the Gaussian kernel function being expressed as:

G(x, y, σ) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²));
second, the constructed scale space function is expressed as:
L(x,y,σ)=G(x,y,σ)*I(x,y);
next, subtracting the upper layer image and the lower layer image in each group of the Gaussian pyramid to obtain a Gaussian difference DoG image, wherein the Gaussian difference DoG image is expressed as:
D(x,y,σ)=[G(x,y,kσ)-G(x,y,σ)]*I(x,y)
=L(x,y,kσ)-L(x,y,σ)
wherein I(x, y) represents the original image and (x, y) the pixel position in the image; G(x, y, σ) is the scale-variable Gaussian function, with σ the scale space factor; k represents the scale ratio between two adjacent scale spaces; * is the convolution operator;
finally, each pixel of the Gaussian difference image is compared with its 26 neighboring points (8 in the same scale image and 9 in each of the two adjacent scale images) to ensure that extreme points are detected in both scale space and the two-dimensional image space;
(2) Performing curve fitting on the DoG function of the scale space by using a Taylor series, thereby further determining the position and the scale of the key points, and simultaneously removing the key points with low contrast and unstable edge response points;
(3) For the key points detected in the DoG pyramid, the gradient and direction distribution of the pixels in the neighborhood of the Gaussian pyramid image where each key point is located are computed and accumulated in a histogram; the gradient magnitude m(x, y) and direction θ(x, y) are expressed as:

m(x, y) = sqrt( (L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))² ),

θ(x, y) = arctan( (L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)) );
(4) In the scale space of each key point, a gradient direction histogram with 8 directions is computed in each cell of a 4×4 window, so that each key point forms a 4 × 4 × 8 = 128-dimensional feature vector.
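The scale-space construction of claim 2 (sampled Gaussian kernel G, blurred images L, their difference D) can be sketched in NumPy, with blurring done by direct 2-D convolution of the sampled kernel; a small illustrative sketch, not a full SIFT pyramid:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Sampled G(x, y, sigma) = exp(-(x^2 + y^2) / (2 sigma^2)) / (2 pi sigma^2)."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    return g / g.sum()                       # normalize so blurring preserves the mean

def blur(img, sigma, radius=4):
    """L(x, y, sigma) = G * I via padded direct convolution."""
    g = gaussian_kernel(sigma, radius)
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            out += g[dy, dx] * padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

k = np.sqrt(2)                               # scale ratio between adjacent layers
sigma = 1.6
img = np.random.default_rng(4).random((16, 16))
D = blur(img, k * sigma) - blur(img, sigma)  # D(x, y, sigma) = L(k*sigma) - L(sigma)
print(D.shape)
```

On a constant image the two blurred layers are identical, so the DoG response is zero, which is the sanity check used below.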
3. The visual inspection method of powder bed defects based on image feature fusion according to claim 1, wherein the texture feature extraction of the powder bed defect image in step S32 uses the gray level co-occurrence matrix to construct 6 texture feature values, respectively expressed as:

ASM = Σ_a Σ_b P(a, b)²,
Energy = sqrt(ASM),
Contrast = Σ_a Σ_b (a − b)² · P(a, b),
Dissimilarity = Σ_a Σ_b |a − b| · P(a, b),
Homogeneity = Σ_a Σ_b P(a, b) / (1 + (a − b)²),
Correlation = Σ_a Σ_b (a − μ_a)(b − μ_b) · P(a, b) / (σ_a · σ_b),

wherein the 6 characteristic values are the angular second moment, energy, contrast, dissimilarity, homogeneity and correlation respectively; P(a, b) is the probability that a pixel A with gray level a and the pixel B located at distance d from A in direction θ with gray level b occur simultaneously; θ takes the values 0°, 45°, 90° and 135°; d takes the value 1; μ_a, μ_b represent the means of P_a, P_b respectively, and σ_a, σ_b represent the standard deviations of P_a, P_b respectively.
4. The visual inspection method of powder bed defects based on image feature fusion according to claim 1, wherein the geometric feature extraction of the powder bed defect image in step S32 adopts the 7 Hu invariant moment geometric feature values, respectively expressed as:

φ1 = η20 + η02,
φ2 = (η20 − η02)² + 4η11²,
φ3 = (η30 − 3η12)² + (3η21 − η03)²,
φ4 = (η30 + η12)² + (η21 + η03)²,
φ5 = (η30 − 3η12)(η30 + η12)[(η30 + η12)² − 3(η21 + η03)²] + (3η21 − η03)(η21 + η03)[3(η30 + η12)² − (η21 + η03)²],
φ6 = (η20 − η02)[(η30 + η12)² − (η21 + η03)²] + 4η11(η30 + η12)(η21 + η03),
φ7 = (3η21 − η03)(η30 + η12)[(η30 + η12)² − 3(η21 + η03)²] − (η30 − 3η12)(η21 + η03)[3(η30 + η12)² − (η21 + η03)²],

wherein η_pq represents the normalized central moment of order p+q.
5. The visual inspection method of powder bed defects based on image feature fusion according to claim 1, wherein in step S33, feature fusion is performed by using a serial feature fusion technique, and dimension reduction is performed by using a filter feature selection algorithm.
6. The visual inspection method of powder bed defects based on image feature fusion according to claim 1, wherein in step S35, parameters of a random forest classifier and a visual dictionary size in a bag-of-words model are optimized respectively.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010868173.0A CN112001909B (en) | 2020-08-26 | 2020-08-26 | Powder bed defect visual detection method based on image feature fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112001909A CN112001909A (en) | 2020-11-27 |
CN112001909B true CN112001909B (en) | 2023-11-24 |
Family
ID=73471042
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010868173.0A Active CN112001909B (en) | 2020-08-26 | 2020-08-26 | Powder bed defect visual detection method based on image feature fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112001909B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112950601B (en) * | 2021-03-11 | 2024-01-09 | 成都微识医疗设备有限公司 | Picture screening method, system and storage medium for esophageal cancer model training |
CN113344872A (en) * | 2021-06-01 | 2021-09-03 | 上海大学 | Segment code liquid crystal display defect detection method based on machine vision |
CN113537413B (en) * | 2021-09-15 | 2022-01-07 | 常州微亿智造科技有限公司 | Clustering method for part defect detection interval of feature selection and combination optimization algorithm |
CN114494254B (en) * | 2022-04-14 | 2022-07-05 | 科大智能物联技术股份有限公司 | GLCM and CNN-Transformer fusion-based product appearance defect classification method and storage medium |
CN114782425B (en) * | 2022-06-17 | 2022-09-02 | 江苏宜臻纺织科技有限公司 | Spooling process parameter control method and artificial intelligence system in textile production process |
CN114897908B (en) * | 2022-07-14 | 2022-09-16 | 托伦斯半导体设备启东有限公司 | Machine vision-based method and system for analyzing defects of selective laser powder spreading sintering surface |
CN116984628B (en) * | 2023-09-28 | 2023-12-29 | 西安空天机电智能制造有限公司 | Powder spreading defect detection method based on laser feature fusion imaging |
CN117853453A (en) * | 2024-01-10 | 2024-04-09 | 苏州矽行半导体技术有限公司 | Defect filtering method based on gradient lifting tree |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103593670A (en) * | 2013-10-14 | 2014-02-19 | 浙江工业大学 | Copper sheet and strip surface defect detection method based on online sequential extreme learning machine |
CN106651856A (en) * | 2016-12-31 | 2017-05-10 | 湖南文理学院 | Detection method for foamed nickel surface defects |
CN107341499A (en) * | 2017-05-26 | 2017-11-10 | 昆明理工大学 | Fabric defect detection and classification method based on non-formaldehyde finishing and ELM |
KR20170127269A (en) * | 2016-05-11 | 2017-11-21 | 한국과학기술원 | Method and apparatus for detecting and classifying surface defect of image |
CN108765412A (en) * | 2018-06-08 | 2018-11-06 | 湖北工业大学 | Steel strip surface defect classification method |
CN109872303A (en) * | 2019-01-16 | 2019-06-11 | 北京交通大学 | Surface defect visual detection method, device and electronic equipment |
CN111965197A (en) * | 2020-07-23 | 2020-11-20 | 广东工业大学 | Defect classification method based on multi-feature fusion |
CN112070727A (en) * | 2020-08-21 | 2020-12-11 | 电子科技大学 | Metal surface defect detection method based on machine learning |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8457414B2 (en) * | 2009-08-03 | 2013-06-04 | National Instruments Corporation | Detection of textural defects using a one class support vector machine |
Application Events
- 2020-08-26: Application CN202010868173.0A filed in China; granted as CN112001909B (legal status: Active)
Non-Patent Citations (6)
Title |
---|
Anomaly detection and classification in a laser powder bed additive manufacturing process using a trained computer vision algorithm; SCIME L et al.; Additive Manufacturing; Full text *
In Situ Process Monitoring for Laser-Powder Bed Fusion using Convolutional Neural Networks and Infrared Tomography; Hamed Elwarfalli et al.; 2019 IEEE National Aerospace and Electronics Conference (NAECON); Full text *
Research on fabric defect detection methods based on gray-level co-occurrence matrix and visual information; 闵信军; China Master's Theses Full-text Database; Full text *
Image classification method using a spatial-pyramid BoW model; 林椹尠 et al.; Journal of Xi'an University of Posts and Telecommunications; Full text *
Research on KELM-based optical filter defect recognition using texture features and Hu invariant moments; 孙枭文; Journal of Gansu Sciences; Full text *
Texture image classification method fusing multiple features with random forest; 陈静 et al.; Transducer and Microsystem Technologies; Full text *
Also Published As
Publication number | Publication date |
---|---|
CN112001909A (en) | 2020-11-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112001909B (en) | Powder bed defect visual detection method based on image feature fusion | |
CN108562589B (en) | Method for detecting surface defects of magnetic circuit material | |
Lian et al. | Deep-learning-based small surface defect detection via an exaggerated local variation-based generative adversarial network | |
CN111582294B (en) | Method for constructing convolutional neural network model for surface defect detection and application thereof | |
CN104331699B (en) | Method for fast search and comparison of planarized three-dimensional point clouds | |
US20060029257A1 (en) | Apparatus for determining a surface condition of an object | |
CN111402197B (en) | Detection method for colored fabric cut-parts defect area | |
CN115082467A (en) | Building material welding surface defect detection method based on computer vision | |
CN107622277B (en) | Bayesian classifier-based complex curved surface defect classification method | |
CN110544233B (en) | Depth image quality evaluation method based on face recognition application | |
Zhang et al. | Zju-leaper: A benchmark dataset for fabric defect detection and a comparative study | |
CN115797354B (en) | Method for detecting appearance defects of laser welding seam | |
CN1322471C (en) | Comparing patterns | |
CN111965197B (en) | Defect classification method based on multi-feature fusion | |
CN108985337A (en) | Product surface scratch detection method based on image deep learning | |
CN113516619B (en) | Product surface flaw identification method based on image processing technology | |
CN112017204A (en) | Tool state image classification method based on edge marker graph neural network | |
CN117392097A (en) | Additive manufacturing process defect detection method and system based on improved YOLOv8 algorithm | |
CN111783798A (en) | Saliency feature-based mask generation method for simulating incomplete point cloud | |
CN118176522A (en) | Method and system for generating segmentation mask | |
Dong et al. | Fusing multilevel deep features for fabric defect detection based NTV-RPCA | |
CN113838040A (en) | Detection method for defect area of color texture fabric | |
CN113689360B (en) | Image restoration method based on generation countermeasure network | |
CN115797649A (en) | Crack extraction method under complex background | |
Zhu et al. | An identification method of cashmere and wool by two-feature fusion
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||