CN112001909A - Powder bed defect visual detection method based on image feature fusion

Info

Publication number: CN112001909A (granted as CN112001909B)
Application number: CN202010868173.0A
Authority: CN (China)
Prior art keywords: powder bed, image, defect, defect detection, powder
Legal status: Granted; active
Other languages: Chinese (zh)
Inventors: 陈哲涵, 师彬彬
Current and original assignee: University of Science and Technology Beijing (USTB)
Application filed by University of Science and Technology Beijing (USTB); priority to CN202010868173.0A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/8851Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/24323Tree-organised classifiers
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/8851Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N2021/8883Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges involving the calculation of gauges, generating models
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/8851Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N2021/8887Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges based on image processing techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30136Metal
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing


Abstract

The invention provides a powder bed defect visual detection method based on image feature fusion, which constructs a powder bed defect detection algorithm model and applies it to a specific powder spreading process. The method comprises the following steps: dividing powder bed defects into three different classes according to their causes; selecting a corresponding feature extraction strategy for each class of defect characteristics, namely scale-space features, texture features and geometric features; constructing a feature-fused powder bed defect detection algorithm model; designing the algorithm parameter combinations and an optimization strategy, optimizing the algorithm parameters, and determining the final powder bed defect detection algorithm model; and applying the optimal defect detection algorithm model to powder bed images acquired in real time, so that the quality of the powder spreading process is monitored in real time. By selecting three feature extraction and fusion strategies, the method establishes a powder bed defect detection algorithm model that can be applied to online quality monitoring of the powder spreading process, thereby improving powder spreading quality.

Description

Powder bed defect visual detection method based on image feature fusion
Technical Field
The invention relates to the technical field of metal powder bed fusion, and in particular to a powder bed defect visual detection method based on image feature fusion.
Background
Metal powder bed fusion technologies include selective laser melting (SLM) and electron beam selective melting (EBSM), in which a laser beam or an electron beam scans, layer by layer, a powder bed spread in advance over the forming area, melting and depositing it to form a part. It is one of the metal additive manufacturing processes widely applied in aerospace, equipment manufacturing, biomedicine and other industries. During fused deposition, defects in the powder bed destabilize the melt pool during scanning, which leads to macroscopic part defects such as warping, cracking and balling and to internal metallurgical defects such as pores, slag inclusions and lack of fusion, ultimately degrading part quality. Real-time monitoring of powder bed defects during the powder spreading process is therefore of great significance.
Literature search and analysis show that existing powder bed defect detection methods fall into two categories: defect detection based on traditional image processing, and defect detection models that introduce deep learning. Defect detection based on traditional image processing places high demands on the positions of the sensor and the light source, can only detect a limited set of defect types, and adapts and extends poorly to powder bed defects. Existing deep-learning-based powder bed defect detection methods generally do not consider the differences and roles of the different features of powder bed images, so there is still room for improving the algorithm performance.
Disclosure of Invention
The invention aims to provide a powder bed defect visual detection method based on image feature fusion that addresses the fact that existing deep-learning-based powder bed defect detection methods usually do not consider the differences and roles of the different features of a powder bed image. A feature extraction strategy is designed for three different classes of powder bed defect characteristics, defect detection on the powder bed is realized in combination with a defect detection algorithm, and the quality of the powder bed spreading process is monitored in real time.
In order to solve the technical problem, the invention specifically provides a powder bed defect visual detection method based on image feature fusion, which comprises the following steps:
S1, dividing powder bed defects into three different classes according to their causes;
S2, determining a feature extraction strategy for each of the three defect classes in step S1, comprising: scale-space feature extraction, texture feature extraction and geometric feature extraction;
S3, establishing a powder bed defect detection algorithm model and using it to monitor the quality of the powder spreading process, which specifically comprises the following substeps:
S31, preprocessing the acquired powder bed defect images, and then dividing all powder bed defect images into two groups at a ratio of 7:3, the first group being a training group used to preliminarily establish the powder bed defect detection algorithm model, and the second group being a test group used to test the powder bed defect detection algorithm model;
S32, extracting the scale-space features, texture features and geometric features of each powder bed defect image in the training group and the test group based on the bag-of-words model, and constructing from the extraction results three visual dictionaries (one each for the SIFT, GLCM and Hu features) for the training group and the test group respectively; counting the distribution of each word of the visual dictionaries in each image to obtain, for every powder bed defect image, the quantized visual-word histogram representations H_SIFT, H_GLCM and H_Hu, thereby constructing three groups of visual-word histograms for the training group and the test group respectively;
S33, serially fusing the three groups of visual-word histograms of the training group and the test group obtained in step S32 to form a fused feature matrix, and reducing the dimension of the fused feature matrix through feature selection;
the serial fusion of the three groups of visual-word histograms comprises the following substeps:
S331, first, the SIFT algorithm is used to extract the scale-space features F_SIFT^(c,i) of each powder bed defect image, where c is the category label of the image and i is the image number; each powder bed defect image contains n_(c,i) feature points, and each feature point is a 128-dimensional feature vector, so the SIFT features of all powder bed defect images are expressed as:

F_SIFT = { F_SIFT^(c,i) };

secondly, 6 GLCM features are extracted in 4 directions from each powder bed defect image to obtain a 24-dimensional feature vector F_GLCM^(c,i), and the GLCM features of all powder bed defect images are expressed as:

F_GLCM = { F_GLCM^(c,i) };

then, the 7 invariant moments of each powder bed defect image are calculated to form a 7-dimensional feature vector F_Hu^(c,i), and the Hu invariant moments of all powder bed defect images are expressed as:

F_Hu = { F_Hu^(c,i) };

S332, F_SIFT, F_GLCM and F_Hu are fused in a serial fusion manner, and the fused feature matrix, denoted H, is expressed as:

H = (F_SIFT, F_GLCM, F_Hu);
S333, variance-filtering dimension reduction is performed on the fused feature matrix H to obtain the final feature matrix H′;
S34, a preliminary powder bed defect detection algorithm model is established by combining the random forest classification algorithm with the training-group image data, specifically through the following substeps:
S341, m samples are repeatedly and randomly drawn with replacement from the training set N to generate new training sample sets;
S342, m decision trees are generated from the m sample sets to form a random forest, where each decision tree is constructed as follows:
S3421, the feature H′_j with the minimum value of Gini(N, H′_j) is selected and the set N is divided into two subsets N1 and N2, where Gini(N, H′_j) is expressed as:

Gini(N, H′_j) = (|N1| / |N|) · Gini(N1) + (|N2| / |N|) · Gini(N2);

S3422, step S3421 is recursively called for the two child nodes N1 and N2 until the random forest is generated; the set of m decision trees is represented as:

{ t1(H′), t2(H′), t3(H′), …, tm(H′) };

S343, the final classification result is obtained by simple majority voting, expressed as:

T(H′) = argmax_y Σ_{i=1}^{m} I( t_i(H′) = y ),

where I(·) is the indicator function.
S35, the defect detection algorithm parameter combination is determined: ten-fold cross-validation is performed 10 times to optimize the random forest algorithm parameters, and the mean of the average accuracies of the 10 runs is used as the evaluation index for selecting the optimal random forest parameters, with the following substeps:
S351, the data set N is randomly divided into 10 disjoint subsets; the data set N contains 630 training samples, so each subset has 63 training samples, and the subsets are expressed as:

N = {N1, N2, N3, … Ni}, i = 1, 2, 3, … 10,
Ni = (H′_i, y_i),

where (H′_i, y_i) denotes the feature matrix and the true image categories corresponding to the ith subset;
S352, 1 of the 10 subsets is randomly selected as the test set each time, the other 9 subsets are used as the training set, and a random forest classification model is trained with the training-set data;
S353, the test-set data are used for testing to obtain the average accuracy of that run, and the mean of the average accuracies over the 10 runs is computed as the true classification rate of the random forest classification model; the average accuracy of a single run and its mean are expressed as:

acc_i = (1 / 63) Σ I( T_N(i)(H′_i) = y_i )  (sum over the samples of the ith subset),
acc = (1 / 10) Σ_{i=1}^{10} acc_i,

where T_N(i)(H′_i) denotes the predictions obtained when the ith subset is selected as the test set, and acc_i denotes the average accuracy of a single run;
a random forest classification model is then established with the optimal random forest parameters; let k1, k2 and k3 denote the numbers of clustering centers of the bag-of-words model for the SIFT descriptors, the GLCM features and the Hu invariant moments respectively; the initial value of each parameter is set to 100, the step size to 100 and the final value to 500; all parameter combinations are traversed and fed into the optimal-parameter random forest classification model, and the defect detection algorithm parameter combination is selected with the average accuracy of the algorithm as the evaluation index;
S36, according to the result of step S35, the defect detection algorithm model established with the optimal defect detection algorithm parameter combination is taken as the optimal defect detection algorithm model;
S37, the optimal defect detection algorithm model selected in step S36 is applied to powder bed images acquired in real time, so that the quality of the powder bed spreading process is monitored in real time.
Preferably, the three different classes of powder bed defects in S1 are streak defects, locally over-high cladding layer defects and insufficient powder supply defects.
Preferably, the defect characteristics of the different powder bed defect classes in S2 include streak defect characteristics, locally over-high cladding layer defect characteristics and insufficient powder supply defect characteristics, wherein:
the streak defect appears in the image as a defect region whose gray value is lower than that of the well-spread powder region, and its area is relatively small;
the locally over-high cladding layer defect appears in the image as a locally raised region whose gray value is higher than that of the well-spread powder region, and its area is the smallest;
the insufficient powder supply defect appears in the image as a defect region in which locally higher and lower gray values occur simultaneously, and its area is the largest.
Preferably, the SIFT algorithm in step S32 comprises:
(1) first, filtering with a Gaussian kernel function to construct the scale space, the Gaussian kernel function being expressed as:

G(x, y, σ) = (1 / (2πσ²)) · exp( −(x² + y²) / (2σ²) );

secondly, the constructed scale-space function is expressed as:

L(x, y, σ) = G(x, y, σ) * I(x, y);

then, within each octave of the Gaussian pyramid, adjacent upper and lower layer images are subtracted to obtain the difference-of-Gaussian (DoG) images, expressed as:

D(x, y, σ) = [G(x, y, kσ) − G(x, y, σ)] * I(x, y) = L(x, y, kσ) − L(x, y, σ),

where I(x, y) denotes the original image, (x, y) denotes a pixel position in the image, G(x, y, σ) is the scale-variable Gaussian function, σ is the scale-space factor, k is the ratio between two adjacent scales, and * is the convolution operator;
finally, each pixel of a difference-of-Gaussian image is compared with its 26 neighbouring points so that extreme points are detected in both the scale space and the two-dimensional image space;
(2) curve fitting is performed on the DoG function in scale space with a Taylor series to further determine the positions and scales of the keypoints, while keypoints with low contrast and unstable edge response points are removed;
(3) the gradient and orientation distribution of the pixels in the neighbourhood of each keypoint detected in the DoG pyramid are computed on the corresponding Gaussian pyramid image and collected in a histogram, the gradient magnitude m(x, y) and orientation θ(x, y) being expressed as:

m(x, y) = sqrt( (L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))² ),
θ(x, y) = arctan( (L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)) );

(4) in the scale space of each keypoint, an 8-direction gradient histogram is computed in each cell of a 4 × 4 window and the accumulated value of each gradient direction is recorded, so that each keypoint forms a 4 × 4 × 8 = 128-dimensional feature vector.
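For reference, the following is a minimal sketch of this scale-space feature extraction using the OpenCV SIFT implementation (OpenCV 4.4 or later is assumed; the file name and grayscale loading are illustrative, not taken from the patent):

```python
import cv2

# Minimal sketch: extract 128-dimensional SIFT descriptors from one powder bed
# layer image. The image path is a placeholder.
image = cv2.imread("powder_bed_layer.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()                              # DoG keypoint detector + descriptor
keypoints, descriptors = sift.detectAndCompute(image, None)

# Each row of `descriptors` is one 128-dimensional feature vector, matching the
# 4 x 4 x 8 orientation-histogram layout described above.
print(len(keypoints), descriptors.shape)
```

In the bag-of-words step of S32, these descriptors are the vectors from which the SIFT visual dictionary is clustered.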
Preferably, the texture feature extraction of the powder bed defect images in step S32 uses the gray-level co-occurrence matrix to construct 6 texture feature values, expressed respectively as:

ASM = Σ_a Σ_b P(a, b)²,
Energy = sqrt( Σ_a Σ_b P(a, b)² ),
Contrast = Σ_a Σ_b (a − b)² · P(a, b),
Dissimilarity = Σ_a Σ_b |a − b| · P(a, b),
Homogeneity = Σ_a Σ_b P(a, b) / (1 + (a − b)²),
Correlation = Σ_a Σ_b (a − μ_a)(b − μ_b) · P(a, b) / (σ_a σ_b),

where the 6 feature values are the angular second moment, energy, contrast, dissimilarity, homogeneity and correlation respectively; P(a, b) is the probability that a pixel B with gray level b appears at distance d and direction θ from a pixel A with gray level a; θ takes the values 0°, 45°, 90° and 135°; d takes the value 1; μ_a and μ_b denote the means of P_a and P_b respectively, and σ_a and σ_b denote the standard deviations of P_a and P_b respectively.
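A corresponding sketch using scikit-image is given below (version 0.19 or later is assumed for the graycomatrix/graycoprops names; the quantization to 256 gray levels is also an assumption). The six properties match those listed above:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_image):
    """Sketch: 6 GLCM statistics x 4 directions -> 24-dimensional vector."""
    # Distance d = 1 and directions 0, 45, 90 and 135 degrees, as in the text.
    glcm = graycomatrix(
        gray_image,
        distances=[1],
        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
        levels=256,            # assumes an 8-bit grayscale image
        symmetric=True,
        normed=True,
    )
    props = ["ASM", "energy", "contrast", "dissimilarity", "homogeneity", "correlation"]
    # graycoprops returns one value per (distance, angle) pair for each property.
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])
```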
Preferably, the geometric feature extraction of the powder bed defect images in step S32 uses the 7 geometric feature values of the Hu invariant moments, expressed respectively as:

φ1 = η20 + η02,
φ2 = (η20 − η02)² + 4η11²,
φ3 = (η30 − 3η12)² + (3η21 − η03)²,
φ4 = (η30 + η12)² + (η21 + η03)²,
φ5 = (η30 − 3η12)(η30 + η12)[(η30 + η12)² − 3(η21 + η03)²] + (3η21 − η03)(η21 + η03)[3(η30 + η12)² − (η21 + η03)²],
φ6 = (η20 − η02)[(η30 + η12)² − (η21 + η03)²] + 4η11(η30 + η12)(η21 + η03),
φ7 = (3η21 − η03)(η30 + η12)[(η30 + η12)² − 3(η21 + η03)²] − (η30 − 3η12)(η21 + η03)[3(η30 + η12)² − (η21 + η03)²],

where η_pq denotes the normalized central moment of order p + q.
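A minimal sketch of this geometric feature extraction with OpenCV is shown below; the log-scaling of the raw values is a common practical convention and an assumption here, not part of the patent:

```python
import cv2
import numpy as np

def hu_moments(gray_image):
    """Sketch: compute the 7 Hu invariant moments of a powder bed image."""
    m = cv2.moments(gray_image)          # raw, central and normalized moments
    hu = cv2.HuMoments(m).flatten()      # the 7 invariants phi_1 ... phi_7
    # Log-scaling (an assumption) keeps the values, which span many orders of
    # magnitude, on a comparable scale for later clustering/classification.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
```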
preferably, in step S33, a serial feature fusion technique is used for feature fusion, and a filtering feature selection algorithm is used for dimension reduction.
Preferably, in step S35, the visual dictionary size and the random forest classifier parameters in the bag-of-words model are optimized separately.
Compared with the prior art, the invention has the following beneficial effects:
The invention divides powder bed defects into three different classes according to their causes; designs a feature extraction strategy for the different defect classes, comprising scale-space feature extraction, texture feature extraction and geometric feature extraction; performs feature fusion and selection to preliminarily establish a powder bed defect detection algorithm model; designs the algorithm parameter combinations and an optimization strategy, and establishes the final powder bed defect detection algorithm model from the optimal parameter combination; and monitors the powder spreading process in real time, acquires powder bed images, and realizes powder bed defect detection with the defect detection algorithm so as to monitor the quality of the powder spreading process. Compared with existing algorithms, the proposed algorithm is more accurate, provides more complete and precise guidance for the powder spreading process, and allows powder bed defects occurring during powder spreading to be corrected in real time.
Drawings
Fig. 1 is a schematic flow chart of a powder bed defect visual inspection method based on image feature fusion according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a powder bed defect provided by an embodiment of the present invention;
FIG. 3 is a schematic flow chart of a powder bed defect detection algorithm provided by an embodiment of the present invention;
FIG. 4a is a schematic diagram of a streak type defect in an embodiment of the present invention;
FIG. 4b is a schematic view of a cladding-type local high defect in an embodiment of the present invention; and
FIG. 4c is a schematic diagram of a powder starvation defect in an embodiment of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the following detailed description is given with reference to the accompanying drawings and specific embodiments.
The invention provides a powder bed defect visual detection method based on image feature fusion, aiming at the deficiency that existing deep-learning-based powder bed defect detection methods generally do not consider the differences and roles of the different features of powder bed images.
The algorithm and the working principle of the present invention are specifically discussed below with reference to specific embodiments:
the invention provides a visual detection method for powder bed defects based on image feature fusion, which can be used for detecting the powder bed defects in real time in the powder paving process without influencing the processing process, and relates to the field of metal powder bed fusion. The method comprises the following steps: defining three different classes of powder bed defects for their cause; designing a characteristic extraction strategy according to the defect characteristics of different types of powder beds, wherein the characteristic extraction strategy comprises the following steps: extracting scale space features, extracting texture features and extracting geometric features; performing feature fusion and selection, and initially establishing a powder bed defect detection algorithm model; designing an algorithm parameter combination and an optimization strategy, and establishing a final powder bed defect detection algorithm model according to the optimal algorithm parameter combination; and monitoring the powder paving process in real time, acquiring a powder bed image, and realizing powder bed defect detection by combining a defect detection algorithm so as to monitor the quality of the powder paving process.
As shown in fig. 1, the visual inspection method for powder bed defects based on image feature fusion provided by the embodiment of the invention includes the following steps:
s1, defining three different types of powder bed defects according to the cause of the powder bed defect, wherein in the embodiment, as shown in fig. 2, the three different types of powder bed defects include a stripe-shaped defect, a locally higher cladding layer defect, and an insufficient powder supply defect.
S2, designing a feature extraction strategy according to the defect features of the powder beds in different classes, wherein the feature extraction strategy comprises the following steps: extracting scale space features, extracting texture features and extracting geometric features; the different classes of powder bed defect characteristics in this example include: the stripe-shaped defects are mostly expressed in the image that the gray value of the defect area is lower than that of the good powder paving area, and the area of the defects is generally smaller; the local high defect of the cladding layer is frequently represented in an image, the gray value of the local high region is higher than that of the powder-paving good region, and the areas of the defects are very small; the defect of insufficient powder supply shows the phenomenon that local gray scale is higher and lower simultaneously in the defect area of the image, and the area of the defect area is large generally.
S3, performing feature fusion and selection, and primarily establishing a powder bed defect detection algorithm model; designing an algorithm parameter combination and an optimization strategy, and establishing a final powder bed defect detection algorithm model according to the optimal algorithm parameter combination; and monitoring the powder paving process in real time, acquiring a powder bed image, and realizing powder bed defect detection by combining a defect detection algorithm so as to monitor the quality of the powder paving process.
In this embodiment, as shown in FIG. 3, a powder bed defect detection algorithm model is established and used to monitor the quality of the powder spreading process, specifically as follows.
The powder bed defect visual detection method based on image feature fusion provided by the invention comprises the following steps:
S1, dividing powder bed defects into three different classes according to their causes;
S2, determining a feature extraction strategy for the three defect classes, comprising: scale-space feature extraction, texture feature extraction and geometric feature extraction;
S3, establishing a powder bed defect detection algorithm model and using it to monitor the quality of the powder spreading process, which specifically comprises the following substeps:
S31, preprocessing the acquired powder bed defect images, and then dividing all powder bed defect images into two groups at a ratio of 7:3, the first group being a training group used to establish the powder bed defect detection algorithm model, and the second group being a test group used to test the powder bed defect detection algorithm model;
S32, extracting the scale-space features, texture features and geometric features of each powder bed defect image in the training group and the test group based on the bag-of-words model, and constructing from the extraction results three visual dictionaries (one each for the SIFT, GLCM and Hu features) for the training group and the test group; counting the distribution of each word of the visual dictionaries in each image to obtain the quantized visual-word histogram representations H_SIFT, H_GLCM and H_Hu of every image, thereby constructing three groups of visual-word histograms;
S33, serially fusing the three groups of visual-word histograms of the training group and the test group respectively to form a fused feature matrix, and reducing the dimension of the fused feature matrix through feature selection;
the three groups of visual-word histograms are serially fused as follows:
S331, first, the SIFT algorithm is used to extract the scale-space features F_SIFT^(c,i) of each powder bed defect image, where c is the category label of the image and i is the image number; each image contains n_(c,i) feature points, each of which is a 128-dimensional feature vector, so the SIFT features of all images are expressed as:

F_SIFT = { F_SIFT^(c,i) };

secondly, 6 GLCM features are extracted in 4 directions from each powder bed defect image to obtain a 24-dimensional feature vector F_GLCM^(c,i);
then, the 7 invariant moments of each image are calculated to form a 7-dimensional feature vector F_Hu^(c,i), and the Hu invariant moments of all images are expressed as:

F_Hu = { F_Hu^(c,i) };

S332, F_SIFT, F_GLCM and F_Hu are fused in a serial fusion manner, and the fused feature matrix, denoted H, is expressed as:

H = (F_SIFT, F_GLCM, F_Hu);
S333, variance-filtering dimension reduction is performed on the fused feature matrix H, eliminating features that contribute nothing to distinguishing the samples, to obtain the final feature matrix H′.
The obtained features are serially fused as follows.
First, the SIFT algorithm is used to extract the scale-space features F_SIFT^(c,i) of each preprocessed powder bed defect image, where c is the category label of the image and i is the image number. Each image contains n_(c,i) feature points, each of which is a 128-dimensional feature vector, so the SIFT features of all images can be expressed as:

F_SIFT = { F_SIFT^(c,i) }.

Secondly, 6 GLCM features are extracted in 4 directions from each image to obtain a 24-dimensional feature vector F_GLCM^(c,i), and the GLCM features of all images can be expressed as:

F_GLCM = { F_GLCM^(c,i) }.
the 6 characteristic values are respectively angle second moment, energy, contrast, dissimilarity, homogeneity and correlation, the Angle Second Moment (ASM) is a measure for the gray level change stability degree of the image texture, the image gray level distribution uniformity degree and the texture thickness degree are reflected, and the larger the energy value is, the larger the current image texture presents regular change.
Figure BDA0002650375470000146
Energy is the arithmetic square root of the angular second moment, and like the angular second moment, reflects the uniformity of the gray level distribution of the image, and a larger Energy value indicates that the texture distribution of the current image is more uniform.
Figure BDA0002650375470000147
Contrast (Contrast) is a measure of the sharpness of texture and the depth of ravines in an image, and a larger value of Contrast indicates that the sharper the current image is, the deeper the texture ravines are.
Figure BDA0002650375470000148
Dissimilarity (dissimilarity) is a measure of how different the image textures differ, and similar to contrast, the greater the value of local contrast, the greater the value of dissimilarity.
Figure BDA0002650375470000149
Homogeneity (Homogeneity) is a measure of the local uniformity of the image texture, and a larger value of Homogeneity indicates that the local texture of the current image is more uniform and the texture variation between different regions is smaller.
Figure BDA0002650375470000151
The Correlation (Correlation) is a measure of a linear relation to the image gray scale, and reflects the similarity of the image gray scale values in the horizontal or vertical direction, and the larger the value of the Correlation, the more uniform the gray scale distribution of the current image is.
Figure BDA0002650375470000152
Wherein, mua、μbRespectively represent Pa、PbMean value of (a)a、σbRespectively represent Pa、PbStandard deviation of (2).
Then, the 7 invariant moments of each image are calculated to form a 7-dimensional feature vector F_Hu^(c,i), and the Hu invariant moments of all images can be expressed as:

F_Hu = { F_Hu^(c,i) }.

For a discrete digital image with gray value f(x, y) at position (x, y), the standard moment of order p + q is defined as:

m_pq = Σ_x Σ_y x^p · y^q · f(x, y).

To keep m_pq unchanged when the image is translated, the position is normalized, and the central moment u_pq of order p + q is defined as:

u_pq = Σ_x Σ_y (x − x̄)^p · (y − ȳ)^q · f(x, y),

where (x̄, ȳ) are the coordinates of the center of gravity of the image:

x̄ = m10 / m00, ȳ = m01 / m00.

To keep the central moments unchanged under scaling as well as translation, their magnitude is normalized, giving the normalized central moment η_pq:

η_pq = u_pq / (u_00)^r,

where r = (p + q) / 2 + 1 and p + q = 2, 3, 4 ….

From the normalized central moments of the second and third order, 7 invariant moments can be obtained, forming a 7-dimensional feature vector that describes the geometric features of the image:

φ1 = η20 + η02,
φ2 = (η20 − η02)² + 4η11²,
φ3 = (η30 − 3η12)² + (3η21 − η03)²,
φ4 = (η30 + η12)² + (η21 + η03)²,
φ5 = (η30 − 3η12)(η30 + η12)[(η30 + η12)² − 3(η21 + η03)²] + (3η21 − η03)(η21 + η03)[3(η30 + η12)² − (η21 + η03)²],
φ6 = (η20 − η02)[(η30 + η12)² − (η21 + η03)²] + 4η11(η30 + η12)(η21 + η03),
φ7 = (3η21 − η03)(η30 + η12)[(η30 + η12)² − 3(η21 + η03)²] − (η30 − 3η12)(η21 + η03)[3(η30 + η12)² − (η21 + η03)²].

Next, for the F_SIFT, F_GLCM and F_Hu features of all images, K clustering centers are generated with the K-means clustering algorithm. The Euclidean distance is selected as the evaluation index of feature-vector similarity; all feature vectors are divided into different classes during the iterations of the K-means algorithm, and the iteration stops when the within-cluster sum of squares of each class is minimal. Each cluster center represents one visual word, so three visual dictionaries are obtained from the three groups of extracted features. The distribution of each word of the visual dictionaries in each image is then counted, giving the quantized visual-word histogram representations H_SIFT, H_GLCM and H_Hu of every image.
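The following sketch illustrates this dictionary construction and histogram quantization with scikit-learn's KMeans (the use of scikit-learn, the histogram normalization and the function names are assumptions; the parameter k corresponds to the dictionary sizes k1, k2 and k3 discussed later):

```python
import numpy as np
from sklearn.cluster import KMeans

def build_dictionary(descriptor_list, k):
    """Sketch: cluster all descriptors of the training images into k visual words."""
    stacked = np.vstack(descriptor_list)          # (total_points, dim)
    return KMeans(n_clusters=k, random_state=0).fit(stacked)

def word_histogram(descriptors, dictionary):
    """Quantize one image's descriptors into a normalized visual-word histogram."""
    k = dictionary.n_clusters
    words = dictionary.predict(descriptors)       # nearest cluster centre per point
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / max(hist.sum(), 1.0)            # normalization is an assumption
```

One such dictionary and histogram set would be built for each of the three feature types (SIFT, GLCM, Hu).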
H_SIFT, H_GLCM and H_Hu are fused in a serial fusion manner, the fused feature matrix is denoted H, and feature selection is performed on H with a filter-type method to obtain the final feature matrix H′. Feature selection not only effectively reduces the feature dimension and improves classification accuracy, but also yields a better analysis and interpretation of the underlying meaning of the data. The filter method is based on the idea of feature ranking: it measures the importance of the features only from the data itself and is independent of any learning algorithm. For large-scale high-dimensional data sets, feature selection with a filter algorithm has the advantages of simple and fast computation; the whole process requires no modeling or evaluation of feature subsets and does not depend on the classification algorithm.
Variance filtering is then applied to the fused feature set to remove features that contribute nothing to distinguishing the samples: when the variance of a feature is small, the samples show essentially no difference in that feature, so the feature contributes little or nothing to distinguishing them. Chi-square filtering is then applied to the variance-filtered feature set to remove redundant, highly correlated feature vectors.
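A sketch of this serial fusion and filter-style selection, assuming scikit-learn and illustrative thresholds (both the variance threshold and the number of retained features are assumptions), might look as follows:

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold, SelectKBest, chi2

def fuse_and_select(h_sift, h_glcm, h_hu, y, n_keep=200):
    """Sketch: serial fusion of the three histogram sets, then filter-style selection."""
    fused = np.hstack([h_sift, h_glcm, h_hu])     # H = (F_SIFT, F_GLCM, F_Hu)

    # Variance filtering: drop near-constant columns that cannot separate samples.
    fused = VarianceThreshold(threshold=1e-4).fit_transform(fused)   # threshold assumed

    # Chi-square filtering: drop columns weakly related to the class labels.
    # Histogram counts are non-negative, which chi2 requires.
    selector = SelectKBest(chi2, k=min(n_keep, fused.shape[1]))
    return selector.fit_transform(fused, y)
```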
S34, a preliminary powder bed defect detection algorithm model is established by combining the random forest classification algorithm with the training-group image data, specifically through the following substeps:
S341, m samples are repeatedly and randomly drawn with replacement from the training set N to generate new training sample sets;
S342, m decision trees are generated from the m sample sets to form a random forest, where each decision tree is constructed as follows:
S3421, the feature H′_j with the minimum value of Gini(N, H′_j) is selected and the set N is divided into two subsets N1 and N2, where Gini(N, H′_j) is expressed as:

Gini(N, H′_j) = (|N1| / |N|) · Gini(N1) + (|N2| / |N|) · Gini(N2);

S3422, step S3421 is recursively called for the two child nodes N1 and N2 until the random forest is generated; the set of m decision trees is represented as:

{ t1(H′), t2(H′), t3(H′), …, tm(H′) };

S343, the final classification result is obtained by simple majority voting, expressed as:

T(H′) = argmax_y Σ_{i=1}^{m} I( t_i(H′) = y ),

where I(·) is the indicator function.
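In practice, steps S341 to S343 correspond to what an off-the-shelf random forest implementation performs internally; a sketch with scikit-learn (the library choice and the number of trees are assumptions) is:

```python
from sklearn.ensemble import RandomForestClassifier

def train_defect_classifier(h_train, y_train, n_trees=100):
    """Sketch: bootstrap sampling, Gini splitting and majority voting are all
    handled internally by scikit-learn's RandomForestClassifier."""
    model = RandomForestClassifier(
        n_estimators=n_trees,    # m decision trees; the default here is an assumption
        criterion="gini",        # Gini-index splitting as in step S3421
        bootstrap=True,          # sampling with replacement as in step S341
        random_state=0,
    )
    return model.fit(h_train, y_train)

# Usage (names assumed): clf = train_defect_classifier(H_train, y_train)
# clf.predict(H_test) returns the majority-vote class for each test image.
```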
S35, the defect detection algorithm parameter combination is determined: ten-fold cross-validation is performed 10 times to optimize the random forest algorithm parameters, and the mean of the average accuracies of the 10 runs is used as the evaluation index for selecting the optimal random forest parameters, with the following substeps:
S351, the data set N is randomly divided into 10 disjoint subsets; the data set N contains 630 training samples, so each subset has 63 training samples, and the subsets are expressed as:

N = {N1, N2, N3, … Ni}, i = 1, 2, 3, … 10,
Ni = (H′_i, y_i),

where (H′_i, y_i) denotes the feature matrix and the true image categories corresponding to the ith subset;
S352, 1 of the 10 subsets is randomly selected as the test set each time, the other 9 subsets are used as the training set, and a random forest classification model is trained with the training-set data;
S353, the test-set data are used for testing to obtain the average accuracy of that run, and the mean of the average accuracies over the 10 runs is computed as the true classification rate of the random forest classification model; the average accuracy of a single run and its mean are expressed as:

acc_i = (1 / 63) Σ I( T_N(i)(H′_i) = y_i )  (sum over the samples of the ith subset),
acc = (1 / 10) Σ_{i=1}^{10} acc_i,

where T_N(i)(H′_i) denotes the predictions obtained when the ith subset is selected as the test set, and acc_i denotes the average accuracy of a single run.
A random forest classification model is then established with the optimal random forest parameters. Let k1, k2 and k3 denote the numbers of clustering centers of the bag-of-words model for the SIFT descriptors, the GLCM features and the Hu invariant moments respectively; the initial value of each parameter is set to 100, the step size to 100 and the final value to 500. All parameter combinations are traversed and fed into the optimal-parameter random forest classification model, and the optimal parameter combination is selected with the average accuracy of the algorithm as the evaluation index;
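A sketch of this parameter optimization with scikit-learn is given below. It reads "10-time ten-fold cross-validation" as ten-fold cross-validation repeated 10 times (set n_repeats=1 if a single ten-fold pass is meant), and the helper build_features(k1, k2, k3), which would rebuild the fused histograms for given dictionary sizes, is a hypothetical placeholder:

```python
import itertools
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

def mean_cv_accuracy(features, labels, n_trees=100):
    """Sketch: ten-fold cross-validation repeated 10 times; the mean accuracy
    is the evaluation index described above."""
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
    clf = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    return cross_val_score(clf, features, labels, cv=cv, scoring="accuracy").mean()

def search_dictionary_sizes(build_features, labels):
    """Grid search over the three bag-of-words dictionary sizes k1, k2, k3."""
    grid = range(100, 501, 100)              # 100, 200, ..., 500 as in the text
    return max(
        itertools.product(grid, grid, grid),
        key=lambda ks: mean_cv_accuracy(build_features(*ks), labels),
    )
```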
S36, according to the result of step S35, the defect detection algorithm model established with the optimal defect detection algorithm parameter combination is taken as the optimal defect detection algorithm model;
S37, the optimal defect detection algorithm model selected in step S36 is applied to powder bed images acquired in real time, achieving real-time monitoring of the quality of the powder bed spreading process.
A specific embodiment is as follows:
First, 100 defect images are collected for each of the three defect classes, and the 300 original images are then processed with image enhancement, random rotation and random color conversion to obtain a data set of 900 powder bed defect images, 300 images per defect class. The data set is divided into two groups: the first group is the training group, used to establish and train the algorithm, and the second group is the test group, used to test the detection performance of the algorithm.
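A sketch of this data preparation, with assumed augmentation parameters (rotation range, contrast/brightness factors) and a stratified 7:3 split via scikit-learn, is:

```python
import cv2
import numpy as np
from sklearn.model_selection import train_test_split

def augment(image, rng):
    """Sketch of the augmentations named in the text: enhancement, random
    rotation and random colour/brightness conversion (parameters assumed)."""
    h, w = image.shape[:2]
    angle = rng.uniform(-15, 15)
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    out = cv2.warpAffine(image, rot, (w, h), borderMode=cv2.BORDER_REFLECT)
    alpha, beta = rng.uniform(0.8, 1.2), rng.uniform(-20, 20)   # contrast / brightness
    return cv2.convertScaleAbs(out, alpha=alpha, beta=beta)

# images, labels: the augmented 900-image data set; a 7:3 split as in step S31.
# rng = np.random.default_rng(0)
# train_imgs, test_imgs, y_train, y_test = train_test_split(
#     images, labels, test_size=0.3, stratify=labels, random_state=0)
```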
SIFT features are extracted from each powder bed defect image; the visualized SIFT features of the three defect classes are shown in FIGS. 4a to 4c.
The pink points in the figures are the scale-space feature points detected by the SIFT algorithm. In FIG. 4a and FIG. 4c, dense feature points gather around the defect region while sparse feature points scatter over the area outside the defect; in FIG. 4b, sparse feature points are distributed irregularly over the whole defect image. The main reason for this is that the defect regions of the streak and insufficient-powder-supply defects are relatively concentrated, so a large number of feature points are extracted within the defect regions and show similar patterns, whereas the locally over-high cladding layer defect often appears at the edge of the build and shows up in the image as an extremely narrow build edge, so the scale-space features of the locally raised cladding region are difficult to extract from the whole defect image.
For each powder bed image, the values of the 6 common statistics (angular second moment, energy, contrast, dissimilarity, homogeneity and correlation) are calculated in the 4 directions 0°, 45°, 90° and 135°, giving a 24-dimensional feature vector. One sample image is selected from each of the three defect classes to calculate the GLCM, and the GLCM feature values of the sample images in the four directions are listed in Table 1.
For each powder bed defect image, the 7 invariant moments are calculated, giving a 7-dimensional feature vector describing the geometric features of the defect. One sample image is selected from each of the three defect classes, and its 7 invariant moments are listed in Table 2.
Table 1. GLCM values in the 4 directions for the three classes of powder bed defect images (values given in the original patent figure).
Table 2. Hu invariant moments of the three classes of powder bed defect images (values given in the original patent figure).
Then, three groups of visual-word histograms are constructed for the training group and the test group, the three groups of histograms are serially fused, and the fused feature matrix is reduced in dimension through feature selection.
Next, a preliminary powder bed defect detection algorithm model is established from the training-group data in combination with the random forest classification algorithm. The defect detection algorithm parameter combination is determined, defect detection is performed on the test-group image data with the established preliminary model, and the detection performance of the powder bed defect detection algorithm model under that parameter combination (visual dictionary sizes and random forest classifier parameters) is evaluated.
The iteration is repeated over all defect detection algorithm parameter combinations, and the detection performance of the powder bed defect detection algorithm model under every combination of visual dictionary sizes and random forest classifier parameters is evaluated. Once all parameter combinations have been iterated, the parameter combination giving the best detection result on the test-group image data is identified from the iteration results, and the defect detection algorithm model established with that parameter combination is selected as the optimal defect detection algorithm model.
Finally, in actual production, the selected optimal defect detection algorithm model is applied to powder bed defect images acquired in real time to monitor the quality of the powder spreading process in real time. The powder spreading process is improved according to the powder bed defect images, and the powder spreading accuracy is improved.
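To illustrate this online use, a sketch of a per-layer monitoring loop is given below; the camera interface, the feature-extraction helper and the class indices are all hypothetical placeholders, not part of the patent:

```python
import cv2

DEFECT_CLASSES = {0: "streak defect",
                  1: "cladding layer locally too high",
                  2: "insufficient powder supply"}   # index assignment is assumed

def monitor_layer(camera, model, extract_features):
    """Sketch: classify each newly spread powder layer as it is imaged."""
    frame = camera.capture()                              # assumed camera API
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    features = extract_features(gray)                     # fused SIFT/GLCM/Hu histogram vector
    label = int(model.predict(features.reshape(1, -1))[0])
    print(f"Predicted powder bed condition: {DEFECT_CLASSES.get(label, label)}")
    return label
```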
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (8)

1. A powder bed defect visual detection method based on image feature fusion, characterized by comprising the following steps:
S1, dividing powder bed defects into three different classes according to their causes;
S2, determining a feature extraction strategy for each of the three defect classes in step S1, comprising: scale-space feature extraction, texture feature extraction and geometric feature extraction;
S3, establishing a powder bed defect detection algorithm model and using it to monitor the quality of the powder spreading process, which specifically comprises the following substeps:
S31, preprocessing the acquired powder bed defect images, and then dividing all powder bed defect images into two groups at a ratio of 7:3, the first group being a training group used to establish the powder bed defect detection algorithm model, and the second group being a test group used to test the powder bed defect detection algorithm model;
S32, extracting the scale-space features, texture features and geometric features of each powder bed defect image in the training group and the test group based on the bag-of-words model, and constructing from the extraction results three visual dictionaries (one each for the SIFT, GLCM and Hu features) for the training group and the test group respectively; counting the distribution of each word of the visual dictionaries in each image to obtain the quantized visual-word histogram representations H_SIFT, H_GLCM and H_Hu of every powder bed defect image, thereby constructing three groups of visual-word histograms for the training group and the test group respectively;
S33, serially fusing the three groups of visual-word histograms of the training group and the test group obtained in step S32 to form a fused feature matrix, and reducing the dimension of the fused feature matrix through feature selection;
the serial fusion of the three groups of visual-word histograms comprises the following substeps:
S331, first, the SIFT algorithm is used to extract the scale-space features F_SIFT^(c,i) of each powder bed defect image, where c is the category label of the image and i is the image number; each powder bed defect image contains n_(c,i) feature points, and each feature point is a 128-dimensional feature vector, so the SIFT features of all powder bed defect images are expressed as:

F_SIFT = { F_SIFT^(c,i) };

secondly, 6 GLCM features are extracted in 4 directions from each powder bed defect image to obtain a 24-dimensional feature vector F_GLCM^(c,i), and the GLCM features of all powder bed defect images are expressed as:

F_GLCM = { F_GLCM^(c,i) };

then, the 7 invariant moments of each powder bed defect image are calculated to form a 7-dimensional feature vector F_Hu^(c,i), and the Hu invariant moments of all powder bed defect images are expressed as:

F_Hu = { F_Hu^(c,i) };

S332, F_SIFT, F_GLCM and F_Hu are fused in a serial fusion manner, and the fused feature matrix, denoted H, is expressed as:

H = (F_SIFT, F_GLCM, F_Hu);
S333, variance-filtering dimension reduction is performed on the fused feature matrix H to obtain the final feature matrix H′;
S34, a preliminary powder bed defect detection algorithm model is established by combining the random forest classification algorithm with the training-group image data, specifically through the following substeps:
S341, m samples are repeatedly and randomly drawn with replacement from the training set N to generate new training sample sets;
S342, m decision trees are generated from the m sample sets to form a random forest, where each decision tree is constructed as follows:
S3421, the feature H′_j with the minimum value of Gini(N, H′_j) is selected and the set N is divided into two subsets N1 and N2, where Gini(N, H′_j) is expressed as:

Gini(N, H′_j) = (|N1| / |N|) · Gini(N1) + (|N2| / |N|) · Gini(N2);

S3422, step S3421 is recursively called for the two child nodes N1 and N2 until the random forest is generated; the set of m decision trees is represented as:

{ t1(H′), t2(H′), t3(H′), …, tm(H′) };

S343, the final classification result is obtained by simple majority voting, expressed as:

T(H′) = argmax_y Σ_{i=1}^{m} I( t_i(H′) = y ),

where I(·) is the indicator function;
S35, the defect detection algorithm parameter combination is determined: ten-fold cross-validation is performed 10 times to optimize the random forest algorithm parameters, and the mean of the average accuracies of the 10 runs is used as the evaluation index for selecting the optimal random forest parameters, with the following substeps:
S351, the data set N is randomly divided into 10 disjoint subsets; the data set N contains 630 training samples, so each subset has 63 training samples, and the subsets are expressed as:

N = {N1, N2, N3, … Ni}, i = 1, 2, 3, … 10,
Ni = (H′_i, y_i),

where (H′_i, y_i) denotes the feature matrix and the true image categories corresponding to the ith subset;
S352, 1 of the 10 subsets is randomly selected as the test set each time, the other 9 subsets are used as the training set, and a random forest classification model is trained with the training-set data;
S353, the test-set data are used for testing to obtain the average accuracy of that run, and the mean of the average accuracies over the 10 runs is computed as the true classification rate of the random forest classification model; the average accuracy of a single run and its mean are expressed as:

acc_i = (1 / 63) Σ I( T_N(i)(H′_i) = y_i )  (sum over the samples of the ith subset),
acc = (1 / 10) Σ_{i=1}^{10} acc_i,

where T_N(i)(H′_i) denotes the predictions obtained when the ith subset is selected as the test set, and acc_i denotes the average accuracy of a single run;
a random forest classification model is then established with the optimal random forest parameters; let k1, k2 and k3 denote the numbers of clustering centers of the bag-of-words model for the SIFT descriptors, the GLCM features and the Hu invariant moments respectively; the initial value of each parameter is set to 100, the step size to 100 and the final value to 500; all parameter combinations are traversed and fed into the optimal-parameter random forest classification model, and the defect detection algorithm parameter combination is selected with the average accuracy of the algorithm as the evaluation index;
S36, according to the result of step S35, the defect detection algorithm model established with the optimal defect detection algorithm parameter combination is taken as the optimal defect detection algorithm model;
S37, the optimal defect detection algorithm model selected in step S36 is applied to powder bed images acquired in real time, and the quality of the powder bed spreading process is monitored in real time.
2. The powder bed defect visual detection method based on image feature fusion according to claim 1, characterized in that the three different classes of powder bed defects in S1 are streak defects, locally over-high cladding layer defects and insufficient powder supply defects.
3. The powder bed defect visual detection method based on image feature fusion according to claim 1, characterized in that the defect characteristics of the different powder bed defect classes in S2 comprise: streak defect characteristics, locally over-high cladding layer defect characteristics and insufficient powder supply defect characteristics; wherein:
the streak defect appears in the image as a defect region whose gray value is lower than that of the well-spread powder region, and its area is relatively small;
the locally over-high cladding layer defect appears in the image as a locally raised region whose gray value is higher than that of the well-spread powder region, and its area is the smallest;
the insufficient powder supply defect appears in the image as a defect region in which locally higher and lower gray values occur simultaneously, and its area is the largest.
4. The visual inspection method for powder bed defects based on image feature fusion according to claim 1, wherein the SIFT algorithm in the step S32 specifically comprises the following steps:
(1) firstly, a Gaussian kernel function is adopted for filtering to construct a scale space, and the Gaussian kernel function is expressed as:
G(xi, yi, σ) = (1/(2πσ²)) · exp(−((x − xi)² + (y − yi)²)/(2σ²));
secondly, the constructed scale-space function is expressed as:
L(x,y,σ)=G(x,y,σ)*I(x,y);
and then, within each octave of the Gaussian pyramid, subtracting adjacent layers to obtain the difference-of-Gaussian (DoG) images, wherein a DoG image is expressed as:
D(x,y,σ)=[G(x,y,kσ)-G(x,y,σ)]*I(x,y)
=L(x,y,kσ)-L(x,y,σ)
wherein I(x, y) represents the original image and (x, y) represents a pixel position in the image; G(xi, yi, σ) is the scale-variable Gaussian function and σ is the scale space factor; k represents the scale ratio between two adjacent scale spaces; and * is the convolution operator;
finally, each pixel of the difference-of-Gaussian image is compared with its 26 neighbors (8 neighbors at the same scale and 9 in each of the two adjacent scales) to ensure that extreme points are detected in both the scale space and the two-dimensional image space;
(2) fitting the scale-space DoG function with a Taylor series to refine the position and scale of each key point, while removing key points with low contrast and unstable edge response points;
(3) for each key point detected in the DoG pyramid, using a histogram to count the gradient magnitude and orientation distribution of pixels in its neighborhood on the corresponding Gaussian pyramid image, wherein the gradient magnitude m(x, y) and orientation θ(x, y) are expressed as:
m(x, y) = sqrt([L(x+1, y) − L(x−1, y)]² + [L(x, y+1) − L(x, y−1)]²),
θ(x, y) = arctan([L(x, y+1) − L(x, y−1)] / [L(x+1, y) − L(x−1, y)]);
(4) in the scale space of each key point, an 8-direction gradient histogram is computed in each of 4 × 4 sub-windows and the accumulated value of each gradient direction is recorded, so that each key point forms a 4 × 4 × 8 = 128-dimensional feature vector.
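As a non-binding illustration of how the SIFT descriptors of this claim could be pooled into a fixed-length image feature by the bag-of-words model recited in claim 1, the following Python sketch assumes opencv-python (cv2.SIFT_create) and scikit-learn; the dictionary size k1 = 100 and the function names are illustrative assumptions.

    # Sketch only: SIFT descriptors pooled into a bag-of-words histogram.
    # Assumes opencv-python (>= 4.4) and scikit-learn are installed.
    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    def sift_descriptors(gray_images):
        """Return the 128-D SIFT descriptors of every image (one array per image)."""
        sift = cv2.SIFT_create()
        all_desc = []
        for img in gray_images:
            _, desc = sift.detectAndCompute(img, None)
            all_desc.append(desc if desc is not None else np.empty((0, 128), np.float32))
        return all_desc

    def bow_histograms(per_image_desc, k1=100):
        """Cluster all descriptors into k1 visual words and build one histogram per image."""
        stacked = np.vstack([d for d in per_image_desc if len(d) > 0])
        codebook = KMeans(n_clusters=k1, n_init=10, random_state=0).fit(stacked)
        hists = np.zeros((len(per_image_desc), k1))
        for i, desc in enumerate(per_image_desc):
            if len(desc) > 0:
                words = codebook.predict(desc)
                hists[i] = np.bincount(words, minlength=k1) / len(words)
        return hists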
5. The visual powder bed defect detection method based on image feature fusion according to claim 1, wherein the texture feature extraction of the powder bed defect image in step S32 adopts a gray level co-occurrence matrix to construct 6 texture feature values, which are respectively expressed as:
ASM = Σa Σb P(a, b)²,
ENE = sqrt(ASM),
CON = Σa Σb (a − b)² · P(a, b),
DIS = Σa Σb |a − b| · P(a, b),
HOM = Σa Σb P(a, b) / (1 + (a − b)²),
COR = Σa Σb (a − μa)(b − μb) · P(a, b) / (σa · σb),
wherein the 6 feature values are respectively the angular second moment, energy, contrast, dissimilarity, homogeneity and correlation; P(a, b) is the probability that a pixel B with gray level b appears at distance d and direction θ from a pixel A with gray level a; θ takes the values 0°, 45°, 90° and 135°; d takes the value 1; μa and μb respectively represent the means of Pa and Pb, and σa and σb respectively represent the standard deviations of Pa and Pb.
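For illustration, the six co-occurrence features of this claim could be computed with scikit-image (version 0.19 or later, which exposes graycomatrix/graycoprops); the function name glcm_features is an illustrative assumption, and the six property names map directly onto the angular second moment, energy, contrast, dissimilarity, homogeneity and correlation listed above.

    # Sketch only: the six GLCM texture features of a grayscale powder bed image.
    # Assumes scikit-image >= 0.19 and a single-channel uint8 image.
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(gray_img, distances=(1,), angles=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
        # Co-occurrence matrix for d = 1 and theta = 0, 45, 90, 135 degrees, normalized.
        glcm = graycomatrix(gray_img, distances=distances, angles=angles,
                            levels=256, symmetric=True, normed=True)
        props = ["ASM", "energy", "contrast", "dissimilarity", "homogeneity", "correlation"]
        # Average each property over the four directions to obtain the 6 values.
        return np.array([graycoprops(glcm, p).mean() for p in props])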
6. The visual inspection method for powder bed defects based on image feature fusion according to claim 1, characterized in that the geometric feature extraction of the powder bed defect image in step S32 adopts the 7 Hu invariant moments as geometric feature values, which are respectively expressed as:
φ1 = η20 + η02,
φ2 = (η20 − η02)² + 4η11²,
φ3 = (η30 − 3η12)² + (3η21 − η03)²,
φ4 = (η30 + η12)² + (η21 + η03)²,
φ5 = (η30 − 3η12)(η30 + η12)[(η30 + η12)² − 3(η21 + η03)²] + (3η21 − η03)(η21 + η03)[3(η30 + η12)² − (η21 + η03)²],
φ6 = (η20 − η02)[(η30 + η12)² − (η21 + η03)²] + 4η11(η30 + η12)(η21 + η03),
φ7 = (3η21 − η03)(η30 + η12)[(η30 + η12)² − 3(η21 + η03)²] − (η30 − 3η12)(η21 + η03)[3(η30 + η12)² − (η21 + η03)²],
wherein ηpq represents the normalized central moment of order p + q.
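For illustration, the seven Hu invariant moments of this claim could be obtained with OpenCV as sketched below; the log-scaling step and the function name hu_moments are illustrative assumptions (a log transform is commonly applied because the raw invariants span many orders of magnitude).

    # Sketch only: the 7 Hu invariant moments of a single-channel defect image.
    # Assumes opencv-python and a uint8 (binary or grayscale) input image.
    import cv2
    import numpy as np

    def hu_moments(gray_img, log_scale=True):
        m = cv2.moments(gray_img)          # raw, central and normalized central moments
        hu = cv2.HuMoments(m).flatten()    # the 7 invariants phi1..phi7
        if log_scale:
            # Optional compression of the value range (assumption, not part of the claim).
            hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
        return hu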
7. The visual powder bed defect detection method based on image feature fusion as claimed in claim 1, wherein in step S33, feature fusion is performed by using a serial feature fusion technique, and dimension reduction is performed by using a filtering feature selection algorithm.
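As a non-binding sketch of the serial fusion and filter-type selection recited in this claim, the following Python code concatenates the three feature blocks column-wise and then keeps the highest-scoring columns; the ANOVA F-score (f_classif) is only one possible filter criterion, since the claim does not fix a specific one, and the function name fuse_and_select is an illustrative assumption.

    # Sketch only: serial feature fusion (column-wise concatenation) followed by a
    # filter-type feature selection step for dimension reduction.
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif

    def fuse_and_select(sift_hist, glcm_feats, hu_feats, labels, k_keep=50):
        fused = np.hstack([sift_hist, glcm_feats, hu_feats])     # serial fusion
        selector = SelectKBest(score_func=f_classif, k=min(k_keep, fused.shape[1]))
        reduced = selector.fit_transform(fused, labels)          # filter-based reduction
        return reduced, selector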
8. The visual powder bed defect detection method based on image feature fusion of claim 1, wherein in step S35, the visual dictionary size of the bag-of-words model and the random forest classifier parameters are optimized separately.
CN202010868173.0A 2020-08-26 2020-08-26 Powder bed defect visual detection method based on image feature fusion Active CN112001909B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010868173.0A CN112001909B (en) 2020-08-26 2020-08-26 Powder bed defect visual detection method based on image feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010868173.0A CN112001909B (en) 2020-08-26 2020-08-26 Powder bed defect visual detection method based on image feature fusion

Publications (2)

Publication Number Publication Date
CN112001909A true CN112001909A (en) 2020-11-27
CN112001909B CN112001909B (en) 2023-11-24

Family

ID=73471042

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010868173.0A Active CN112001909B (en) 2020-08-26 2020-08-26 Powder bed defect visual detection method based on image feature fusion

Country Status (1)

Country Link
CN (1) CN112001909B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110026804A1 (en) * 2009-08-03 2011-02-03 Sina Jahanbin Detection of Textural Defects Using a One Class Support Vector Machine
CN103593670A (en) * 2013-10-14 2014-02-19 浙江工业大学 Copper sheet and strip surface defect detection method based on-line sequential extreme learning machine
KR20170127269A (en) * 2016-05-11 2017-11-21 한국과학기술원 Method and apparatus for detecting and classifying surface defect of image
CN106651856A (en) * 2016-12-31 2017-05-10 湖南文理学院 Detection method for foamed nickel surface defects
CN107341499A (en) * 2017-05-26 2017-11-10 昆明理工大学 It is a kind of based on non-formaldehyde finishing and ELM fabric defect detection and sorting technique
CN108765412A (en) * 2018-06-08 2018-11-06 湖北工业大学 A kind of steel strip surface defect sorting technique
CN109872303A (en) * 2019-01-16 2019-06-11 北京交通大学 Surface defect visible detection method, device and electronic equipment
CN111965197A (en) * 2020-07-23 2020-11-20 广东工业大学 Defect classification method based on multi-feature fusion
CN112070727A (en) * 2020-08-21 2020-12-11 电子科技大学 Metal surface defect detection method based on machine learning

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
HAMED ELWARFALLI ET AL.: "In Situ Process Monitoring for Laser-Powder Bed Fusion using Convolutional Neural Networks and Infrared Tomography", 《2019 IEEE NATIONAL AEROSPACE AND ELECTRONICS CONFERENCE (NAECON)》 *
SCIME L ET AL.: "Anomaly detection and classification in a laser powder bed additive manufacturing process using a trained computer vision algorithm", 《ADDITIVE MANUFACTURING》 *
孙枭文: "基于纹理特征和Hu不变矩的KELM滤光片缺陷识别研究", 《甘肃科学学报》 *
林椹尠等: "基于空间金字塔的BoW模型图像分类方法", 《西安邮电大学学报》 *
闵信军: "基于灰度共生矩阵和视觉信息的布匹瑕疵检测方法研究", 《中国优秀硕士学位论文全文数据库》 *
陈静等: "融合多特征与随机森林的纹理图像分类方法", 《传感器与微系统》 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112950601A (en) * 2021-03-11 2021-06-11 成都微识医疗设备有限公司 Method, system and storage medium for screening pictures for esophageal cancer model training
CN112950601B (en) * 2021-03-11 2024-01-09 成都微识医疗设备有限公司 Picture screening method, system and storage medium for esophageal cancer model training
CN113344872A (en) * 2021-06-01 2021-09-03 上海大学 Segment code liquid crystal display defect detection method based on machine vision
CN113537413A (en) * 2021-09-15 2021-10-22 常州微亿智造科技有限公司 Clustering method for part defect detection interval of feature selection and combination optimization algorithm
CN113537413B (en) * 2021-09-15 2022-01-07 常州微亿智造科技有限公司 Clustering method for part defect detection interval of feature selection and combination optimization algorithm
CN114494254A (en) * 2022-04-14 2022-05-13 科大智能物联技术股份有限公司 Product appearance defect classification method based on fusion of GLCM and CNN-Transformer and storage medium
CN114782425A (en) * 2022-06-17 2022-07-22 江苏宜臻纺织科技有限公司 Spooling process parameter control method and artificial intelligence system in textile production process
CN114897908A (en) * 2022-07-14 2022-08-12 托伦斯半导体设备启东有限公司 Machine vision-based method and system for analyzing defects of selective laser powder spreading sintering surface
CN116984628A (en) * 2023-09-28 2023-11-03 西安空天机电智能制造有限公司 Powder spreading defect detection method based on laser feature fusion imaging
CN116984628B (en) * 2023-09-28 2023-12-29 西安空天机电智能制造有限公司 Powder spreading defect detection method based on laser feature fusion imaging
CN117853453A (en) * 2024-01-10 2024-04-09 苏州矽行半导体技术有限公司 Defect filtering method based on gradient lifting tree

Also Published As

Publication number Publication date
CN112001909B (en) 2023-11-24

Similar Documents

Publication Publication Date Title
CN112001909B (en) Powder bed defect visual detection method based on image feature fusion
CN108562589B (en) Method for detecting surface defects of magnetic circuit material
Geirhos et al. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness
France et al. A new approach to automated pollen analysis
CN111582294B (en) Method for constructing convolutional neural network model for surface defect detection and application thereof
CN104331699B (en) A kind of method that three-dimensional point cloud planarization fast search compares
CN116188475B (en) Intelligent control method, system and medium for automatic optical detection of appearance defects
CN115797354B (en) Method for detecting appearance defects of laser welding seam
CN1322471C (en) Comparing patterns
CN108985337A (en) A kind of product surface scratch detection method based on picture depth study
Zhang et al. Zju-leaper: A benchmark dataset for fabric defect detection and a comparative study
CN111965197B (en) Defect classification method based on multi-feature fusion
CN113516619B (en) Product surface flaw identification method based on image processing technology
CN114820471A (en) Visual inspection method for surface defects of intelligent manufacturing microscopic structure
CN110210415A (en) Vehicle-mounted laser point cloud roadmarking recognition methods based on graph structure
CN114998103A (en) Point cloud cultural relic fragment three-dimensional virtual splicing method based on twin network
Jin et al. End Image Defect Detection of Float Glass Based on Faster Region-Based Convolutional Neural Network.
CN117392097A (en) Additive manufacturing process defect detection method and system based on improved YOLOv8 algorithm
CN117011274A (en) Automatic glass bottle detection system and method thereof
CN116539619A (en) Product defect detection method, system, device and storage medium
Rill-García et al. Syncrack: Improving Pavement and Concrete Crack Detection Through Synthetic Data Generation
CN113283495B (en) Aggregate particle grading method and device
Zulkarnain et al. Table information extraction using data augmentation on deep learning and image processing
Shevlyakov et al. Recognition of the MNIST-dataset with skeletonized images
Ma et al. Visual detection of cells in brain tissue slice for patch clamp system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant