CN117392097A - Additive manufacturing process defect detection method and system based on improved YOLOv8 algorithm - Google Patents


Info

Publication number
CN117392097A
CN117392097A
Authority
CN
China
Prior art keywords
defect
defect detection
frame
model
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311404001.8A
Other languages
Chinese (zh)
Inventor
李霁
王玮
Current Assignee
Southeast University
Original Assignee
Southeast University
Priority date
Filing date
Publication date
Application filed by Southeast University
Priority to CN202311404001.8A
Publication of CN117392097A
Legal status: Pending (Current)

Classifications

    • G06T 7/0004 Industrial image inspection
    • G06N 3/045 Combinations of networks
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G06N 3/09 Supervised learning
    • G06V 10/454 Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G06V 10/764 Recognition using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V 10/776 Validation; Performance evaluation
    • G06T 2207/10024 Color image
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06V 2201/07 Target detection
    • Y02P 90/30 Computing systems specially adapted for manufacturing


Abstract

The invention discloses a defect detection method and system for the material extrusion process of additive manufacturing based on an improved YOLOv8 algorithm. The method comprises the following steps: collecting material extrusion defect images by capturing an image of each layer surface of the printed product, preprocessing the collected images, and generating an original defect image data set for the material extrusion process; performing data enhancement and diffusion-model image supplementation on the original defect image data set, and labeling every resulting defect image with its corresponding defect type to generate a material extrusion process defect data set; constructing an improved defect detection model and a defect detection quantitative evaluation algorithm; and deploying the improved defect detection model on terminal equipment, and carrying out defect identification and quantitative analysis of the printed product according to the improved defect detection model and the defect detection quantitative evaluation algorithm. The method and system improve defect detection efficiency and accuracy.

Description

Additive manufacturing process defect detection method and system based on improved YOLOv8 algorithm
Technical Field
The invention belongs to the technical field of defect detection, and particularly relates to a defect detection method and system for an extrusion process of an additive manufacturing material based on an improved YOLOv8 algorithm.
Background
Additive manufacturing is a complex manufacturing technology integrating computer science, material science, optoelectronic technology, numerical control technology, automation technology, mechanical engineering and other fields; by constructing three-dimensional objects through layer-by-layer stacking of material, it offers the advantage of personalized customization. In the material extrusion printing process, problems such as print-head movement errors, uneven distribution of the temperature and stress fields, and irregular material discharge often occur. These problems cause common defects such as warping, stringing, nozzle clogging, cracks, nozzle scratches, pores and collapse, which reduce the yield and product quality of additive manufacturing.
Currently, the common methods for addressing this problem are as follows. Traditional manual inspection detects defects mainly by sight, hearing and touch, but it is limited in the defect sizes it can resolve, prone to operator fatigue, and influenced by subjective factors, so the false detection rate is high and detection precision and speed are difficult to guarantee. Conventional nondestructive inspection techniques such as magnetic particle inspection, eddy current inspection, ultrasonic inspection, penetrant inspection and radiographic inspection are in wide use; although these methods offer high accuracy and strong defect quantification capability, in-situ detection is difficult to achieve with them and their cost is high, so they have certain limitations.
Disclosure of Invention
Aiming at the difficulty of defect detection in existing additive manufacturing, the invention provides a quantitative defect detection method and system based on an improved YOLOv8 algorithm, which accurately detect in real time whether defects exist in the material extrusion printing process, their types and other information, thereby providing a reference for the quality of printed products and a technical basis for subsequent printer repair.
In order to achieve the aim of the invention, the invention adopts the following scheme: an additive manufacturing material extrusion process defect detection method based on an improved YOLOv8 algorithm, comprising:
s1, collecting material extrusion defect images, collecting images of each layer of surface of a printed product, and performing image preprocessing on the collected images to generate an original defect image data set;
s2, constructing a material extrusion defect data set, carrying out data enhancement and diffusion model image supplementation on the original defect image data set, and carrying out corresponding defect category labeling on each defect image after the data enhancement and diffusion model image supplementation to generate a defect data set;
s3, constructing an improved defect detection model and constructing a defect detection quantitative evaluation algorithm;
And S4, deploying the improved defect detection model in terminal equipment, and carrying out defect identification and quantitative analysis on the printed product according to the improved defect detection model and a defect detection quantitative evaluation algorithm.
The defect detection model constructed in the step S3 includes introducing an attention mechanism module in the backbone network, where the attention mechanism module is configured to improve the attention of the model to the target location, and a calculation process of the attention mechanism module includes: coordinate coding, space transformation and weighted fusion; the attention mechanism module calculates a similarity matrix by utilizing the position coding vector of the coordinate information, and then performs weighted fusion on the feature map and the similarity matrix to obtain final feature representation;
for a given input x, the attention mechanism module pools each channel along the horizontal and vertical coordinates using two pooling kernels of spatial extent (H, 1) and (1, W) respectively. The output of the c-th channel at height h is:
z^h_c(h) = (1/W) Σ_{0≤i<W} x_c(h, i)
where z^h_c(h) denotes the feature generated at height h, and x_c(h, i) denotes the elements of the input x in channel c at height h across the width W. Similarly, the output of the c-th channel at width w is:
z^w_c(w) = (1/H) Σ_{0≤j<H} x_c(j, w)
where z^w_c(w) denotes the feature generated at width w, and x_c(j, w) denotes the elements of the input x in channel c at width w across the height H. These two transforms finally generate feature maps of sizes C×H×1 and C×1×W respectively;
then the feature information of the two directions is concatenated and fused, and input into a 1×1 convolutional transform F₁; the specific formula is as follows:
f = δ(F₁([z^h, z^w]))
where δ is a nonlinear activation function used for the activation operation and f is an intermediate feature containing horizontal and vertical spatial information;
f is then divided into two independent features f^h and f^w; finally, along the spatial dimension, two 1×1 convolutional transforms F_h and F_w are used to transform f^h and f^w so that their dimensions are consistent with the input x, and combined with the Sigmoid activation function σ the final attention vectors g^h and g^w are obtained; the whole process formula is as follows:
g^h = σ(F_h(f^h))
g^w = σ(F_w(f^w))
finally, the output attention vectors g^h and g^w are applied as weights to the input, and the final output Y of the attention mechanism module is expressed as:
y_c(i, j) = x_c(i, j) × g^h_c(i) × g^w_c(j)
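The directional pooling, shared 1×1 transform, and weighted fusion described above can be sketched in NumPy. This is a minimal illustration, not the patent's exact configuration: the reduced channel count Cr, the choice of ReLU for δ, and the weight shapes are assumptions (a 1×1 convolution over a one-pixel-wide strip reduces to a matrix multiply over the channel axis, which is what is used here).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def coordinate_attention(x, w1, wh, ww):
    """Sketch of the coordinate-attention forward pass.

    x  : input feature map, shape (C, H, W)
    w1 : weights of the shared 1x1 transform F1, shape (Cr, C)
    wh : weights of F_h, shape (C, Cr)
    ww : weights of F_w, shape (C, Cr)
    """
    C, H, W = x.shape
    # Directional pooling: (H, 1) kernel -> z^h, (1, W) kernel -> z^w
    zh = x.mean(axis=2)                   # (C, H): average over width
    zw = x.mean(axis=1)                   # (C, W): average over height
    # Concatenate along the spatial axis, apply shared transform F1 + delta (ReLU here)
    f = np.concatenate([zh, zw], axis=1)  # (C, H + W)
    f = np.maximum(w1 @ f, 0.0)           # (Cr, H + W)
    fh, fw = f[:, :H], f[:, H:]           # split back into f^h, f^w
    # Per-direction transforms + Sigmoid -> attention vectors g^h, g^w
    gh = sigmoid(wh @ fh)                 # (C, H)
    gw = sigmoid(ww @ fw)                 # (C, W)
    # Weighted fusion: y_c(i, j) = x_c(i, j) * g^h_c(i) * g^w_c(j)
    return x * gh[:, :, None] * gw[:, None, :]
```

Because both attention vectors pass through a Sigmoid, every output element is the input scaled by a factor strictly between 0 and 1, which is what lets the module emphasize or suppress positions along each axis independently.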
The loss function of the defect detection model constructed in the step S3 is:
L = L_IOU + L_dis + L_asp = 1 - IOU + ρ²(b, b^gt) / (w_c² + h_c²) + ρ²(w, w^gt) / w_c² + ρ²(h, h^gt) / h_c²
where w denotes the width of the predicted frame, h the height of the predicted frame, and b the center point coordinates of the predicted frame; L_IOU denotes the IOU loss function; L_dis denotes the center-distance loss function; L_asp denotes the width-height loss function; IOU denotes the intersection-over-union, an index measuring the degree of overlap between the predicted frame and the real frame in a target detection algorithm; ρ denotes the Euclidean distance; b^gt denotes the center point of the target frame; w^gt denotes the width of the target frame; h^gt denotes the height of the target frame; w_c denotes the width of the minimum enclosing frame of the target frame and the predicted frame; and h_c denotes the height of the minimum enclosing frame of the target frame and the predicted frame.
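A minimal pure-Python sketch of this combined loss (IOU term, center-distance term, and width-height term) follows; the (center-x, center-y, width, height) box format is an assumption for illustration.

```python
def eiou_loss(pred, target):
    """Combined box loss L = L_IOU + L_dis + L_asp for two boxes
    given as (cx, cy, w, h) tuples."""
    px, py, pw, ph = pred
    tx, ty, tw, th = target
    # Corner coordinates of both boxes
    p1, p2 = (px - pw / 2, py - ph / 2), (px + pw / 2, py + ph / 2)
    t1, t2 = (tx - tw / 2, ty - th / 2), (tx + tw / 2, ty + th / 2)
    # Intersection-over-union
    iw = max(0.0, min(p2[0], t2[0]) - max(p1[0], t1[0]))
    ih = max(0.0, min(p2[1], t2[1]) - max(p1[1], t1[1]))
    inter = iw * ih
    union = pw * ph + tw * th - inter
    iou = inter / max(union, 1e-12)
    # Width w_c and height h_c of the minimum enclosing box
    wc = max(p2[0], t2[0]) - min(p1[0], t1[0])
    hc = max(p2[1], t2[1]) - min(p1[1], t1[1])
    l_iou = 1.0 - iou
    l_dis = ((px - tx) ** 2 + (py - ty) ** 2) / (wc ** 2 + hc ** 2)
    l_asp = (pw - tw) ** 2 / wc ** 2 + (ph - th) ** 2 / hc ** 2
    return l_iou + l_dis + l_asp
```

For identical predicted and target boxes all three terms vanish, and each term grows with a different kind of mismatch (overlap, center offset, and width/height difference), which is why separating them speeds up box regression relative to a plain IOU loss.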
The process defects include: nozzle scratches, holes, over-extrusion of material, and material impurities. In the step S2, the data enhancement and diffusion-model image supplementation performed on the original defect image data set specifically includes: enhancing randomly selected defect images with one or more of exposure change, color change, flipping, cropping, mosaic collage, random occlusion and grid segmentation; and training a characteristic LoRA model for each defect class using the diffusion model, with a plurality of defect images generated for each class through its LoRA model.
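A few of the listed augmentations reduce to simple array operations. The sketch below applies exposure change, random flips, and cutout-style random occlusion to an image; the probability values and patch size are illustrative assumptions.

```python
import random
import numpy as np

def augment(img, rng=None):
    """Apply random exposure change, flips, and one occluded patch
    to an (H, W, 3) uint8 image."""
    rng = rng or random.Random()
    out = img.astype(np.float32)
    # Exposure change: random brightness scaling
    out = np.clip(out * rng.uniform(0.6, 1.4), 0, 255)
    # Random horizontal / vertical flips
    if rng.random() < 0.5:
        out = out[:, ::-1]
    if rng.random() < 0.5:
        out = out[::-1, :]
    # Random occlusion: zero out one patch (cutout-style)
    h, w = out.shape[:2]
    ph, pw = max(1, h // 4), max(1, w // 4)
    y0, x0 = rng.randrange(h - ph + 1), rng.randrange(w - pw + 1)
    out[y0:y0 + ph, x0:x0 + pw] = 0
    return out.astype(np.uint8)
```

Each call produces a differently transformed copy of the same defect image, which is how a small original data set is expanded before the diffusion-model supplementation step.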
The process of performing quantitative analysis based on the defect detection quantitative evaluation algorithm constructed in the step S4 includes: while the printer prints, a camera placed above the printing platform and perpendicular to the printing plane captures each layer; by measuring the distance between the printed product and the camera, the correspondence between image pixels and the actual defect size is calculated, and one or more items of defect quantification information are obtained: the total number, size, area ratio and damage degree of the defects.
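The pixel-to-physical-size conversion can be sketched with a pinhole-camera model. The function names and the pixel-pitch/focal-length parameters below are assumptions for illustration; the patent only specifies that the camera-to-product distance is measured and the pixel-to-size correspondence derived from it.

```python
def mm_per_pixel(distance_mm, focal_mm, pixel_pitch_mm):
    """Pinhole-camera scale: physical size covered by one pixel at the
    measured camera-to-product distance."""
    return pixel_pitch_mm * distance_mm / focal_mm

def quantify_defects(defect_boxes_px, image_shape_px, scale_mm):
    """Summarize detections: total count, physical box sizes, and the
    fraction of the imaged layer covered by defects.

    defect_boxes_px : list of (width_px, height_px) detection boxes
    image_shape_px  : (height_px, width_px) of the captured layer image
    scale_mm        : mm per pixel, from mm_per_pixel()
    """
    h_px, w_px = image_shape_px
    sizes_mm = [(w * scale_mm, h * scale_mm) for (w, h) in defect_boxes_px]
    defect_area_px = sum(w * h for (w, h) in defect_boxes_px)
    return {
        "total": len(defect_boxes_px),
        "sizes_mm": sizes_mm,
        "area_ratio": defect_area_px / (h_px * w_px),
    }
```

For example, with a 4 mm focal length, 2 µm pixel pitch, and the camera 200 mm above the platform, each pixel covers 0.1 mm, so a 10 × 20 px detection corresponds to a 1 mm × 2 mm defect.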
An additive manufacturing material extrusion process defect detection system based on a modified YOLOv8 algorithm, the defect detection system comprising:
the original defect image data set generation module is used for collecting material extrusion defect images, collecting images on the surface of each layer of printed product, and carrying out image preprocessing on the collected images to generate an original defect image data set;
the defect data set generation module is used for constructing a material extrusion defect data set, carrying out data enhancement and diffusion model image supplementation on the original defect image data set, and carrying out corresponding defect category labeling on each defect image after the data enhancement and diffusion model image supplementation to generate a defect data set;
The defect detection model and defect detection quantitative evaluation algorithm construction module is used for constructing an improved defect detection model and constructing a defect detection quantitative evaluation algorithm;
and the defect identification and quantitative analysis module is used for deploying the improved defect detection model to the terminal equipment and carrying out defect identification and quantitative analysis on the printed product according to the improved defect detection model and the defect detection quantitative evaluation algorithm.
The constructed defect detection model comprises the following steps: introducing an attention mechanism module in a backbone network, wherein the attention mechanism module is used for improving the attention of a model to a target position, and the calculation process of the attention mechanism module comprises the following steps: coordinate coding, space transformation and weighted fusion; the attention mechanism module calculates a similarity matrix by utilizing the position coding vector of the coordinate information, and then performs weighted fusion on the feature map and the similarity matrix to obtain final feature representation;
for a given input x, the attention mechanism module pools each channel along the horizontal and vertical coordinates using two pooling kernels of spatial extent (H, 1) and (1, W) respectively; the output of the c-th channel at height h is:
z^h_c(h) = (1/W) Σ_{0≤i<W} x_c(h, i)
and the output of the c-th channel at width w is:
z^w_c(w) = (1/H) Σ_{0≤j<H} x_c(j, w)
these two transforms finally generate feature maps of sizes C×H×1 and C×1×W respectively;
then the characteristic information of two dimensions is connected and fused, and then a convolution transformation function F of 1 multiplied by 1 is input 1 The specific formula is as follows:
f=δ(F 1 ([z h ,z w ]))
where δ is a nonlinear activation function for the activation operation and f is an intermediate feature containing lateral and longitudinal spatial information;
then dividing f into two independent features f h And f w Finally, along the spatial dimension, two 1×1 convolution transforms F are used, respectively h And F w Will f h And f w Feature transformation is performed so that its dimension is consistent with the input X, and the final attention vector g is obtained in combination with the activation function Sigmod h And g w The whole process formula is as follows:
g h =σ(F h (f h ))
g w =σ(F w (f w ))
finally, the output attention vector g h And g w The final output attention block Y expression of the attention mechanism module is as follows:
the constructed defect detection model has the loss function:
L = L_IOU + L_dis + L_asp = 1 - IOU + ρ²(b, b^gt) / (w_c² + h_c²) + ρ²(w, w^gt) / w_c² + ρ²(h, h^gt) / h_c²
where w denotes the width of the predicted frame, h the height of the predicted frame, and b the center point coordinates of the predicted frame; L_IOU denotes the IOU loss function; L_dis denotes the center-distance loss function; L_asp denotes the width-height loss function; IOU denotes the intersection-over-union, an index measuring the degree of overlap between the predicted frame and the real frame in a target detection algorithm; ρ denotes the Euclidean distance; b^gt denotes the center point of the target frame; w^gt denotes the width of the target frame; h^gt denotes the height of the target frame; w_c denotes the width of the minimum enclosing frame of the target frame and the predicted frame; and h_c denotes the height of the minimum enclosing frame of the target frame and the predicted frame.
The process defects include: nozzle scratches, holes, over-extrusion of material, and material impurities. In the step S2, the data enhancement and diffusion-model image supplementation performed on the original defect image data set specifically includes: enhancing randomly selected defect images with one or more of exposure change, color change, flipping, cropping, mosaic collage, random occlusion and grid segmentation; and training a characteristic LoRA model for each defect class using the diffusion model, with a plurality of defect images generated for each class through its LoRA model.
The process of carrying out quantitative analysis based on the constructed defect detection quantitative evaluation algorithm includes: while the printer prints, a camera placed above the printing platform and perpendicular to the printing plane captures each layer; by measuring the distance between the printed product and the camera, the correspondence between image pixels and the actual defect size is calculated, and one or more items of defect quantification information are obtained: the total number, size, area ratio and damage degree of the defects.
Compared with the prior art, the invention has at least the following advantages: (1) the method constructs an extrusion-molding printing defect data set for model training, solving the problem of online defect detection in the extrusion-molding 3D printing process and facilitating intelligent defect detection and detailed defect diagnosis during material extrusion printing; (2) the target detection algorithm used in the method avoids the shortcomings of manual inspection and conventional nondestructive testing, improves defect detection efficiency, and raises the degree of intelligent integration of print inspection; (3) the method can perform real-time, layer-by-layer in-situ detection during material extrusion printing, record the corresponding defect quantification information, carry out a comprehensive product quality analysis after printing, and store the analysis results locally; (4) the invention is based on the YOLOv8 network and, by adding the CA module and improving the original model loss function, can more accurately detect and identify defect images of products printed by the material extrusion process; (5) the defect quantitative detection and evaluation system provided by the invention converts image pixels into actual physical dimensions, finally realizing automatic defect detection of the printed product and visualization of basic defect information, and providing a reference basis for print quality evaluation and printer maintenance.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments or the description of the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments described in the present invention, and other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for visual inspection of additive manufacturing defects based on an improved YOLO network in an exemplary embodiment of the invention;
FIG. 2 is a schematic diagram of operations performed to capture images of printing defects in an exemplary embodiment of the present invention;
FIG. 3 is a diagram of sample patterns of the four common defect types of the material extrusion 3D printing process used for defect detection in an exemplary embodiment of the present invention: nozzle scratches, holes, over-extrusion of material, and material impurities;
FIG. 4 is a diagram showing the main method of augmenting and data enhancing data set data in an exemplary embodiment of the invention;
FIG. 5 is a representation of file content data detailing defect image annotation in an exemplary embodiment of the present invention;
FIG. 6 is a graph comparing the performance of YOLOv8 with other target detection models in an exemplary embodiment of the invention;
FIG. 7 is a diagram of the original YOLOv8 network model;
FIG. 8 is a diagram of a CA module in an exemplary embodiment of the invention;
FIG. 9 is a block diagram of a YOLOv8 backbone network with CA added in an exemplary embodiment of the invention;
FIG. 10 is a schematic diagram of detection with the improved YOLOv8 model deployed on a Raspberry Pi edge device in an exemplary embodiment of the invention;
FIG. 11 is a flowchart of performing quantitative defect detection verification in an exemplary embodiment of the present invention;
FIG. 12 is a diagram of a defect after performing defect quantification detection in accordance with an exemplary embodiment of the present invention;
FIG. 13 is a schematic diagram of a print product quality inspection report automatically output after printing according to an exemplary embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following detailed description of the embodiments of the present invention will be given with reference to the accompanying drawings. Examples of these preferred embodiments are illustrated in the accompanying drawings. The embodiments of the invention shown in the drawings and described in accordance with the drawings are merely exemplary and the invention is not limited to these embodiments.
One aspect of the present invention provides a method for detecting defects in an extrusion process of an additive manufacturing material based on an improved YOLOv8 algorithm, comprising:
S1, collecting material extrusion defect images, collecting images of each layer of surface of a printed product, and performing image preprocessing on the collected images to generate an original defect image data set;
s2, constructing a material extrusion defect data set, carrying out data enhancement and diffusion model image supplementation on the original defect image data set, and carrying out corresponding defect category labeling on each defect image after the data enhancement and diffusion model image supplementation to generate a defect data set;
s3, constructing an improved defect detection model and constructing a defect detection quantitative evaluation algorithm;
and S4, deploying the improved defect detection model in terminal equipment, and carrying out defect identification and quantitative analysis on the printed product according to the improved defect detection model and a defect detection quantitative evaluation algorithm.
In one embodiment, the defect detection model constructed in the step S3 includes an attention mechanism module in the backbone network, where the attention mechanism module is configured to improve the attention of the model to the target location, and the calculation process of the attention mechanism module includes: coordinate coding, space transformation and weighted fusion; the attention mechanism module calculates a similarity matrix by utilizing the position coding vector of the coordinate information, and then performs weighted fusion on the feature map and the similarity matrix to obtain final feature representation;
For a given input x, the attention mechanism module pools each channel along the horizontal and vertical coordinates using two pooling kernels of spatial extent (H, 1) and (1, W) respectively; the output of the c-th channel at height h is:
z^h_c(h) = (1/W) Σ_{0≤i<W} x_c(h, i)
and the output of the c-th channel at width w is:
z^w_c(w) = (1/H) Σ_{0≤j<H} x_c(j, w)
these two transforms finally generate feature maps of sizes C×H×1 and C×1×W respectively;
then the feature information of the two directions is concatenated and fused, and input into a 1×1 convolutional transform F₁; the specific formula is as follows:
f = δ(F₁([z^h, z^w]))
where δ is a nonlinear activation function used for the activation operation and f is an intermediate feature containing horizontal and vertical spatial information;
f is then divided into two independent features f^h and f^w; finally, along the spatial dimension, two 1×1 convolutional transforms F_h and F_w are used to transform f^h and f^w so that their dimensions are consistent with the input x, and combined with the Sigmoid activation function σ the final attention vectors g^h and g^w are obtained; the whole process formula is as follows:
g^h = σ(F_h(f^h))
g^w = σ(F_w(f^w))
finally, the output attention vectors g^h and g^w are applied as weights to the input, and the final output Y of the attention mechanism module is expressed as:
y_c(i, j) = x_c(i, j) × g^h_c(i) × g^w_c(j)
In one embodiment, the loss function of the defect detection model constructed in the step S3 is:
L = L_IOU + L_dis + L_asp = 1 - IOU + ρ²(b, b^gt) / (w_c² + h_c²) + ρ²(w, w^gt) / w_c² + ρ²(h, h^gt) / h_c²
where w denotes the width of the predicted frame, h the height of the predicted frame, and b the center point coordinates of the predicted frame; L_IOU denotes the IOU loss function; L_dis denotes the center-distance loss function; L_asp denotes the width-height loss function; IOU denotes the intersection-over-union, an index measuring the degree of overlap between the predicted frame and the real frame in a target detection algorithm; ρ denotes the Euclidean distance; b^gt denotes the center point of the target frame; w^gt denotes the width of the target frame; h^gt denotes the height of the target frame; w_c denotes the width of the minimum enclosing frame of the target frame and the predicted frame; and h_c denotes the height of the minimum enclosing frame of the target frame and the predicted frame.
In one embodiment, the process defects include: needle scratch, pores, excessive material extrusion, and material doping. In step S2, data enhancement and diffusion model image supplementation are performed on the original defect image dataset, specifically: image enhancement is applied to randomly selected defect images using one or more of exposure change, color change, flipping, cropping, mosaic collage, random occlusion, and grid segmentation; a feature LORA model is trained for each defect class using the diffusion model, and each class generates a number of defect images with its LORA model.
In one embodiment, the quantitative analysis based on the defect detection quantitative evaluation algorithm constructed in step S4 proceeds as follows: while the printer prints, a camera is placed above the printing platform, perpendicular to the printing plane; by measuring the distance between the printed product and the camera, the correspondence between image pixels and the actual size of the defects is computed, yielding one or more items of defect quantification information: total number of defects, size, area ratio, and damage degree.
Another aspect of the present invention provides an additive manufacturing material extrusion process defect detection system based on a modified YOLOv8 algorithm, the defect detection system comprising:
the original defect image data set generation module is used for collecting material extrusion defect images, collecting images on the surface of each layer of printed product, and carrying out image preprocessing on the collected images to generate an original defect image data set;
the defect data set generation module is used for constructing a material extrusion defect data set, carrying out data enhancement and diffusion model image supplementation on the original defect image data set, and carrying out corresponding defect category labeling on each defect image after the data enhancement and diffusion model image supplementation to generate a defect data set;
The defect detection model and defect detection quantitative evaluation algorithm construction module is used for constructing an improved defect detection model and constructing a defect detection quantitative evaluation algorithm;
and the defect identification and quantitative analysis module is used for deploying the improved defect detection model to the terminal equipment and carrying out defect identification and quantitative analysis on the printed product according to the improved defect detection model and the defect detection quantitative evaluation algorithm.
In one embodiment, the constructed defect detection model includes: introducing an attention mechanism module in a backbone network, wherein the attention mechanism module is used for improving the attention of a model to a target position, and the calculation process of the attention mechanism module comprises the following steps: coordinate coding, space transformation and weighted fusion; the attention mechanism module calculates a similarity matrix by utilizing the position coding vector of the coordinate information, and then performs weighted fusion on the feature map and the similarity matrix to obtain final feature representation;
for a given input X, the attention mechanism module encodes each channel along the horizontal and vertical coordinates using pooling kernels of spatial extent (H, 1) and (1, W), respectively; the output of the c-th channel at height h is:

z_c^h(h) = (1/W) Σ_{0≤i<W} x_c(h, i)

The output of the c-th channel at width w is:

z_c^w(w) = (1/H) Σ_{0≤j<H} x_c(j, w)
finally, feature maps of size C×H×1 and C×1×W are generated respectively;
the feature information of the two dimensions is then concatenated and fed into a 1×1 convolution transform F_1; the specific formula is as follows:
f = δ(F_1([z^h, z^w]))
where δ is a nonlinear activation function and f is an intermediate feature containing horizontal and vertical spatial information;
f is then split into two separate features f^h and f^w; finally, along the spatial dimension, two 1×1 convolution transforms F_h and F_w map f^h and f^w back to the channel dimension of the input X, and the Sigmoid activation function σ yields the final attention vectors g^h and g^w; the formulas for the whole process are as follows:
g^h = σ(F_h(f^h))
g^w = σ(F_w(f^w))
finally, the output attention vectors g^h and g^w weight the input; the final output Y of the attention mechanism module is expressed as:
y_c(i, j) = x_c(i, j) × g_c^h(i) × g_c^w(j)
In one embodiment, the loss function of the constructed defect detection model is:

L_EIOU = L_IOU + L_dis + L_asp = 1 − IOU + ρ²(b, b^gt) / ((w^c)² + (h^c)²) + ρ²(w, w^gt) / (w^c)² + ρ²(h, h^gt) / (h^c)²

where w represents the width of the prediction box, h its height, and b its center-point coordinates; L_IOU is the IOU loss term; L_dis the center-distance loss term; L_asp the width-height loss term; IOU is the intersection-over-union, the index used in object detection to measure the overlap between the predicted box and the ground-truth box; ρ is the Euclidean distance between the two boxes; b^gt is the center point of the target box; w^gt the width of the target box; h^gt the height of the target box; w^c the width of the smallest box enclosing the target and predicted boxes; and h^c the height of that smallest enclosing box.
In one embodiment, the process defects include: needle scratch, pores, excessive material extrusion, and material doping. In step S2, data enhancement and diffusion model image supplementation are performed on the original defect image dataset, specifically: image enhancement is applied to randomly selected defect images using one or more of exposure change, color change, flipping, cropping, mosaic collage, random occlusion, and grid segmentation; a feature LORA model is trained for each defect class using the diffusion model, and each class generates a number of defect images with its LORA model.
In one embodiment, the quantitative analysis based on the constructed defect detection quantitative evaluation algorithm proceeds as follows: while the printer prints, a camera is placed above the printing platform, perpendicular to the printing plane; by measuring the distance between the printed product and the camera, the correspondence between image pixels and the actual size of the defects is computed, yielding one or more items of defect quantification information: total number of defects, size, area ratio, and damage degree.
The invention provides an additive manufacturing material extrusion process defect detection method based on an improved YOLOv8 algorithm, wherein a typical implementation process comprises the following steps:
S1, collecting defect images for the four common types of material extrusion process defects: 1849 surface defect images of printed products are acquired; 74 images with problems such as excessive defects or abnormal exposure are removed; and the original defect dataset is obtained by cropping, denoising, and histogram equalization of the images;
S2, producing the material extrusion defect dataset: data enhancement and diffusion model image expansion are applied to the collected original defect dataset, expanding the total dataset to 3550 images; finally, each defect image is labeled with its defect class, completing dataset production and dataset division;
S3, determining the base detection model: current mainstream industrial defect detection models are compared; 7 detection models are trained, their detection performance and detection time are compared, and the better-performing YOLOv8 model is selected as the base model for model training, testing, and tuning;
S4, improving the detection model: the original YOLOv8 regression loss function CIOU is replaced with the EIOU loss function, and an attention mechanism is introduced into the backbone network to complete the model improvement;
S5, designing quantitative defect evaluation criteria: a detection method is designed to display the total number of defects, their size, area ratio, damage degree, and the overall condition of the product; defect quantification information is shown in the detection image, and the overall defects of the product are visualized after printing is complete;
S6, deploying the model on edge equipment for detection: the improved defect detection model is deployed on a Raspberry Pi 4B terminal device, and defect identification and quantitative analysis are performed on the printed product.
In steps S1 and S2, the method for creating the defect data set of the printed product is as follows:
S1.1, all defect images described in step S1 are captured with a Hikvision CMOS industrial camera MV-CA-10GM/GC, with an industrial light source providing illumination in darker stages.
S1.2, after shooting is complete, the defect images are cropped using Adobe Photoshop, ensuring that the printed product occupies more than 70% of the whole image; the resolution of each cropped defect image exceeds 850×850;
S1.3, after cropping is complete, Gaussian filtering is applied to denoise the images, with a 7×7 convolution kernel and a standard deviation of 1.5 as the Gaussian filter parameters; after denoising, histogram equalization is performed to enhance image contrast;
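The denoising step above fixes the Gaussian filter at a 7×7 kernel with σ = 1.5. A minimal NumPy sketch of constructing that kernel (the patent does not specify the filtering implementation, so this only illustrates the stated parameters):

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.5):
    """Build a normalized 2-D Gaussian kernel (size x size)."""
    ax = np.arange(size) - (size - 1) / 2.0      # symmetric coordinates, e.g. -3..3
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()                           # normalize so filtering preserves brightness

kernel = gaussian_kernel(7, 1.5)                 # convolve this with the image to denoise
```

In practice a library routine (e.g. an OpenCV-style Gaussian blur with the same kernel size and σ) would be used; the explicit kernel shows exactly what those two parameters control.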
S2.1, image augmentation is applied to all defect images obtained in step S1 to expand the dataset and balance the number of samples per defect class; specifically, randomly selected defect images are augmented with one or more of exposure change, color change, flipping, cropping, mosaic collage, random occlusion, and grid segmentation; a feature LORA model is trained for each defect class using a diffusion model, 100 defect images are generated per class with its LORA model, and 3550 defect dataset images are finally obtained;
S2.2, the expanded defect dataset is labeled: image defects are annotated with the labeling software LabelImg, producing a txt data file for each defect image that records the defect class and the coordinates of the defect's upper-left and lower-right corners; once labeling is finished, the defect dataset is complete;
S2.3, the produced material extrusion defect dataset is divided 8:1:1 into a training set, a validation set, and a test set.
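The 8:1:1 division can be sketched as a shuffled split over the image file list; the file names and fixed seed below are illustrative, not from the patent:

```python
import random

def split_dataset(paths, ratios=(0.8, 0.1, 0.1), seed=42):
    """Shuffle image paths and split them 8:1:1 into train/val/test."""
    rng = random.Random(seed)                    # fixed seed for a reproducible split
    paths = list(paths)
    rng.shuffle(paths)
    n = len(paths)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return (paths[:n_train],
            paths[n_train:n_train + n_val],
            paths[n_train + n_val:])

# With the 3550-image dataset described above, this gives 2840 / 355 / 355 images.
train, val, test = split_dataset([f"img_{i:04d}.png" for i in range(3550)])
```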
In steps S1 and S2, the defect types include pores, needle scratch, excessive accumulation, and material doping.
In step S4, the specific way to improve the YOLOv8 network model is as follows:
(1) The original YOLOv8 regression loss function consists of two parts: a CIOU loss function and a DFL loss function. CIOU captures the difference between the predicted box and the target box well but converges slowly, so the method replaces the CIOU loss function with EIOU.
(2) A CA module is added between the last SPPF module and the C2f module in the original YOLOv8 backbone network; the CA module adaptively selects and adjusts the feature weights of different channels, expressing the input data better and improving the model's attention to key features.
In step S4, the EIOU loss function measures similarity by computing the distances between the center points, widths, and heights of the boxes, handling rotation, occlusion, and misalignment between boxes better and improving detection accuracy.
In step S4, the CA module improves on SE and CBAM: SE considers only inter-channel information and ignores spatial information, while in CBAM each location covers only a local region of the original image after the convolution and downsampling operations. CA average-pools in the horizontal and vertical directions, encodes the spatial information, and weight-fuses it along the channel dimension, improving the model's attention to the target position.
In step S5, the quantitative detection and evaluation computes the correspondence between image pixels and actual defect size by measuring the distance between the printed product and the camera; the designed detection method then computes the total number of defects, their size, area ratio, damage degree, and the overall condition of the product, displays this quantification information in the defect detection image, and visualizes the overall defects of the product after printing is complete.
The hardware for the final detection in step S6 comprises: a printing system with a material extrusion printer, an air compressor, and a dispenser; and a detection system with a camera, an industrial light source, a Raspberry Pi 4B terminal hosting the improved model, a display, a mouse, and a keyboard.
The invention provides an additive manufacturing material extrusion process defect detection method based on an improved YOLOv8 algorithm; another typical implementation process comprises the following steps:
S1, collecting material extrusion defect images: the material extrusion printer comprises the printer, an air compressor, and a dispenser. During printing, an industrial camera captures an image of each printed layer surface, and an image dataset covering four defect types (needle scratch, pores, excessive accumulation, and material doping) is constructed: 1849 defect images of printed products are collected in total, and 74 problem images (excessive defects, abnormal exposure, and the like) are removed. Adobe Photoshop is then used to crop the images so that the printed product occupies more than 70% of the whole image, with each cropped defect image exceeding 850×850 in resolution. Gaussian filtering (7×7 convolution kernel, standard deviation 1.5) then removes signal noise introduced by the industrial camera, and finally histogram equalization adjusts the image gray levels to make the defects clearly visible, yielding the original defect dataset;
S2.1, image augmentation is applied to all defect images obtained in step S1 to expand the dataset and balance the number of samples per defect class; specifically, randomly selected defect images are augmented with one or more of exposure change, color change, flipping, cropping, mosaic collage, random occlusion, and grid segmentation; a feature LORA model is trained for each defect class in a diffusion model, 100 defect images are generated per class with the corresponding LORA model, and 3550 defect dataset images are finally obtained;
S2.2, the expanded defect dataset is labeled: image defects are annotated with LabelImg, producing a txt data file for each defect image and completing the defect dataset;
S2.3, the produced material extrusion defect dataset is divided 8:1:1 into a training set, a validation set, and a test set.
S3, selecting among current mainstream object detection algorithms: Faster R-CNN, Cascade R-CNN, YOLOv5, YOLOv6, YOLOv7, PP-YOLOv, and YOLOv8 are each trained for 300 rounds on the dataset produced in S2, with learning rate LR1 = 0.001, batch size = 4, num_workers = 2, and the Adam optimizer. After training, model performance is compared on the test set, and the better-performing YOLOv8 model is selected as the base model according to the result data. Since a transfer-learned model usually generalizes better in the feature extraction stage, the YOLOv8 base model pre-trained on the COCO dataset is chosen as the base model.
S4.1, the YOLOv8 model treats all features as equally important during training and cannot concentrate attention on the key region of a target, leading to inaccurate target localization. The invention therefore adds Coordinate Attention (CA) to improve the model's attention to the target position. The CA module's computation comprises three main steps: coordinate coding, spatial transformation, and weighted fusion. CA computes a similarity matrix from the position-coding vector of the coordinate information, then weight-fuses the feature map with the similarity matrix to obtain the final feature representation. For a given input x, CA encodes each channel along the horizontal and vertical coordinates using pooling kernels of spatial extent (H, 1) and (1, W), respectively. The output of the c-th channel at height h is:

z_c^h(h) = (1/W) Σ_{0≤i<W} x_c(h, i)

The output of the c-th channel at width w is:

z_c^w(w) = (1/H) Σ_{0≤j<H} x_c(j, w)
Finally, feature maps of size C×H×1 and C×1×W are generated respectively. The feature information of the two dimensions is then concatenated and fed into a 1×1 convolution transform F_1; the specific formula is:

f = δ(F_1([z^h, z^w]))

δ is a nonlinear activation function and f is an intermediate feature containing horizontal and vertical spatial information. f is then split into two separate features f^h and f^w; finally, along the spatial dimension, two 1×1 convolution transforms F_h and F_w map f^h and f^w back to the channel dimension of the input X, and the Sigmoid activation function σ yields the final attention vectors g^h and g^w. The formulas for the whole process are:

g^h = σ(F_h(f^h))
g^w = σ(F_w(f^w))

Finally, the output attention vectors g^h and g^w weight the input; the final output Y of CA is expressed as:

y_c(i, j) = x_c(i, j) × g_c^h(i) × g_c^w(j)
Compared with other attention mechanisms, CA is more computationally efficient: its attention weights are computed with simple convolution operations, which greatly improves the model's computational efficiency and training speed. CA also better captures relationships between different positions; this position information helps tasks such as image classification, object detection, and segmentation, and using CA improves the model's accuracy and generalization.
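The CA forward pass described above can be sketched as a minimal NumPy computation over a single feature map. The weight matrices w1, wh, ww stand in for the learned 1×1 convolutions (random values here, purely illustrative), δ is taken as ReLU, and the shapes follow the pooling/concat/split/sigmoid sequence in the text:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def coordinate_attention(x, w1, wh, ww):
    """Minimal CA forward pass. x: (C, H, W); w1: (Cr, C) channel-reduction
    1x1 conv; wh, ww: (C, Cr) expansion 1x1 convs. delta = ReLU."""
    C, H, W = x.shape
    z_h = x.mean(axis=2)                      # (C, H): average-pool along width
    z_w = x.mean(axis=1)                      # (C, W): average-pool along height
    z = np.concatenate([z_h, z_w], axis=1)    # (C, H+W): concat both directions
    f = np.maximum(w1 @ z, 0.0)               # (Cr, H+W): 1x1 conv + ReLU
    f_h, f_w = f[:, :H], f[:, H:]             # split back into two directions
    g_h = sigmoid(wh @ f_h)                   # (C, H) attention vector
    g_w = sigmoid(ww @ f_w)                   # (C, W) attention vector
    # y_c(i, j) = x_c(i, j) * g_h_c(i) * g_w_c(j), via broadcasting
    return x * g_h[:, :, None] * g_w[:, None, :]

rng = np.random.default_rng(0)
C, H, W, Cr = 8, 16, 16, 4
x = rng.standard_normal((C, H, W))
y = coordinate_attention(x, rng.standard_normal((Cr, C)),
                         rng.standard_normal((C, Cr)),
                         rng.standard_normal((C, Cr)))
```

Because both gates lie in (0, 1), the output preserves the input's shape while attenuating each position by its horizontal and vertical attention weights.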
S4.2, the IOU (Intersection over Union) value in an object detection task describes the position of a predicted box well. The YOLOv8 regression loss function has two parts: the first computes the difference between the predicted box and the target box with the CIOU (Complete-IOU) loss function, and the second uses the DFL (Distribution Focal Loss) loss function to assist CIOU so the detection network focuses quickly on the target position. The CIOU formula adds an extra term to penalize aspect-ratio variation, but it does not take into account the direction between the ground-truth box and the prediction box, which slows convergence. The invention uses a loss function improved on the basis of CIOU, dividing the IOU loss into three parts that measure the similarity of the predicted and ground-truth boxes through the distances between their center points, widths, and heights respectively; this handles rotation, occlusion, and misalignment between boxes better and improves detection accuracy. The specific formula of the improved loss function is:

L_EIOU = L_IOU + L_dis + L_asp = 1 − IOU + ρ²(b, b^gt) / ((w^c)² + (h^c)²) + ρ²(w, w^gt) / (w^c)² + ρ²(h, h^gt) / (h^c)²

The parameters are as follows: w is the width of a box, h its height, and b its center-point coordinates; correspondingly, w, w^gt, and w^c are the widths of the predicted box, the target box, and the smallest box enclosing both, respectively (and similarly for h and b). The improved loss function learns the shape and size of the bounding box better, improves detection accuracy, makes the model stand out on small and overlapping targets, and helps raise the model's detection recall.
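The three-part improved (EIOU) loss described above can be sketched as a plain-Python function; the corner-format box layout (x1, y1, x2, y2) and variable names are illustrative assumptions, not from the patent:

```python
def eiou_loss(pred, target):
    """EIOU loss for axis-aligned boxes (x1, y1, x2, y2):
    L = (1 - IOU) + center_dist^2 / enclosing_diag^2 + dw^2/cw^2 + dh^2/ch^2."""
    px1, py1, px2, py2 = pred
    tx1, ty1, tx2, ty2 = target
    # IOU term
    iw = max(0.0, min(px2, tx2) - max(px1, tx1))
    ih = max(0.0, min(py2, ty2) - max(py1, ty1))
    inter = iw * ih
    area_p = (px2 - px1) * (py2 - py1)
    area_t = (tx2 - tx1) * (ty2 - ty1)
    iou = inter / (area_p + area_t - inter)
    # smallest enclosing box of the two boxes
    cw = max(px2, tx2) - min(px1, tx1)
    ch = max(py2, ty2) - min(py1, ty1)
    # center-distance term (normalized by the enclosing box diagonal)
    d2 = ((px1 + px2) / 2 - (tx1 + tx2) / 2) ** 2 \
       + ((py1 + py2) / 2 - (ty1 + ty2) / 2) ** 2
    l_dis = d2 / (cw ** 2 + ch ** 2)
    # width and height terms
    l_asp = ((px2 - px1) - (tx2 - tx1)) ** 2 / cw ** 2 \
          + ((py2 - py1) - (ty2 - ty1)) ** 2 / ch ** 2
    return (1 - iou) + l_dis + l_asp
```

For identical boxes all three terms vanish, and the loss grows as the boxes drift apart or their widths/heights diverge.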
S5, an object detection algorithm only yields the class and position of a defect, not a specific quantifiable defect size. This patent therefore designs a method that finds the pixel-to-physical-size ratio from the pixel information of the detected target box, obtains the corresponding defect size, and marks the physical size of each defect in millimetres next to its defect class. This is achieved by establishing a proportional relationship between pixel size and physical size, as follows:
Taking the build platform of the 3D printer as reference, the physical size per pixel (P) is obtained by measuring the actual physical width (W) of the printing platform and its corresponding pixel width (w) in the captured image, i.e. P = W / w. When a defect is detected, the upper-left and lower-right coordinates of its position are passed as parameters; subtracting the upper-left coordinates from the lower-right coordinates gives the pixel width and height of the target defect, and multiplying these by (P) gives the actual width and height of the defect in the current printed layer; multiplying actual width by height gives the defect area. If the defect area exceeds one quarter of the layer's printed area, the defect is defined as a special defect, recording the severity of the layer's defects.
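The pixel-to-physical conversion and the one-quarter-area rule above can be sketched directly; the platform width, pixel width, and layer area in the example are made-up values for illustration:

```python
def defect_metrics(box_px, platform_width_mm, platform_width_px, layer_area_mm2):
    """Convert a detected box (x1, y1, x2, y2) in pixels to physical size.
    P = physical platform width / platform width in pixels (mm per pixel)."""
    p = platform_width_mm / platform_width_px
    x1, y1, x2, y2 = box_px
    width_mm = (x2 - x1) * p                    # lower-right minus upper-left, scaled
    height_mm = (y2 - y1) * p
    area_mm2 = width_mm * height_mm
    special = area_mm2 > layer_area_mm2 / 4.0   # rule from the text: > 1/4 of the layer
    return width_mm, height_mm, area_mm2, special

# Example: 200 mm platform spanning 2000 px -> P = 0.1 mm/px;
# a 200x150 px defect box on a layer whose printed area is 1000 mm^2.
w_mm, h_mm, a_mm2, special = defect_metrics((100, 100, 300, 250), 200.0, 2000, 1000.0)
```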
The Matplotlib library in Python is used to tally the total number of defects and to show the number of special defects and the total defect-area ratio for each image in which defects were detected. After printing, the defect data are statistically analyzed automatically to complete the quality inspection image of a single printed product:
S6, the trained final YOLOv8 model is deployed on a Raspberry Pi terminal together with the quantitative detection and evaluation algorithm for the defect detection stage. While the printer prints, a camera is placed above the printing platform, perpendicular to the printing plane, with the platform 20 cm from the camera; from this the quantitative calculation formula and the correspondence between image pixels and actual physical size are obtained. After the printer finishes a layer, the needle moves away from the surface of the printed product, the camera captures defect image information of the printed surface, and the system automatically judges whether the product surface has defects; if so, a quantitative defect detection information graph is generated automatically and stored on the Raspberry Pi. After detection finishes, printing of the next layer continues; when printing is finally complete, a product defect overview quality inspection report chart is output from all detected defect information.
The following describes the embodiments of the present invention in further detail.
As shown in fig. 1, the present invention proposes a computer vision inspection method for material extrusion 3D printing defects based on an improved YOLOv8 target inspection network, the method comprising the steps of:
An additive manufacturing printing system and defect image acquisition system are built, and the defect dataset acquisition task is completed with an industrial camera; specifically, defect images are acquired under industrial light source illumination, 1849 defect images in total. After removing inferior images, 1775 defect images of the solid propellant remain. The collected defect images are then cleaned using image processing, image labeling, data enhancement, and histogram equalization, successfully producing a defect dataset for the additive manufacturing material extrusion process. The dataset covers four defect types and contains 3550 pictures; sample pictures of the four defect types are shown in fig. 3. Data enhancement, also called image enhancement, is a method for expanding the dataset and improving model performance, applied as shown in fig. 4. The defect images are labeled with LabelImg; labeling result data are shown in fig. 5. The total defect count is 9310; detailed data are in table 1:
TABLE 1 defect counts for each class
The four defect samples are produced as follows. Needle scratch defects have many causes: when the printing surface is uneven or the printer errs, the needle scratches and damages the product surface, so samples of this defect can be obtained by manually scratching the surface of the printed product. Pores may come from the needle moving too fast, leaving the fill incomplete, or from uneven mixing of the syringe material: the material in the syringe may contain air bubbles, which form pores where they sit when the material is extruded. Such defect samples are produced simply by mixing the material in the needle tube insufficiently and deliberately introducing some bubbles. Excessive accumulation mainly comes from a mismatch between the needle travel speed and the printer's discharge rate, or possibly from a loosely fixed machine or syringe: when the syringe is pressurized and extruding during printing, slight needle movement distributes material unevenly on the product surface. Such defect samples are produced simply by greatly reducing the needle travel speed for a period during printing. Material doping defects come from inherent impurities on the platform or in the needle tube, leaving impurities in the formed product; inks of various colors are applied to the printed sample surface to represent material doping. The dataset is then divided 8:1:1 into a training set, a validation set, and a test set. Current mainstream object detection algorithms are trained on the collected dataset; the specific performance data of each model are shown in fig. 6, from which the advantage of the YOLOv8 algorithm adopted by the invention is clearly seen.
Firstly, constructing a Yolov8 original model, wherein the original Yolov8 model is shown in fig. 7, and the whole model framework mainly comprises:
The CBS convolution module consists of a Conv convolution layer, a BN normalization layer, and a SiLU activation function connected in series; SiLU is more stable than ReLU and helps model convergence. The module's main function is to extract picture features and perform downsampling.
The C2f module mainly comprises a Conv convolution layer, a Split layer and a series of Bottleneck layers, and the Bottleneck can greatly reduce parameters and reduce calculation amount. The C2f has the function of ensuring the light weight of the YOLOv8 and simultaneously improving the feature extraction and the perception receptive field, so that the model can better understand objects and scenes with different scales.
The SPPF module consists of a CBS convolution module and three serial maximum pooling layers, and the original feature map and the feature map obtained by carrying out maximum pooling every time are spliced, so that feature fusion is realized. The SPPF module mainly aims to fuse characteristic diagrams of different receptive fields in a mode that a plurality of small-size pooling cores are connected in parallel and are connected in cascade, and meanwhile, the running speed is further improved.
The Head detection module adopts a decoupled head structure (Decoupled-Head), separating the classification and detection heads; the number of channels of the regression head becomes 4×reg_max, with reg_max defaulting to 16.
The overall structure of the CA module is shown in fig. 8; Coordinate Attention (CA) is added before the SPPF layer of the backbone network to improve the model's attention to the target position, and the backbone network after adding CA is shown in fig. 9.
After the backbone model is built, the network is trained with the following hyperparameter settings. Every 32 images form 1 batch, 89 batches in total, and 89 batches form one epoch; the model is trained for 300 epochs, the input pictures are normalized to 640×640, and the number of worker threads is set to 8. The initial learning rate is 0.001 with a warm-up strategy, which uses a smaller learning rate at the start of training and helps the model update weights more stably in the early stage; the final learning rate is the initial learning rate multiplied by 0.0001, and this small final learning rate helps the model converge to an optimal solution. Adam is selected as the optimizer, helping the model learn faster and more accurately. The IOU threshold is set to 0.5.
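The batch and learning-rate arithmetic in this paragraph can be checked directly, assuming the 89 batches correspond to a 2840-image training split (80% of the 3550 images):

```python
import math

total_train_images = 2840          # assumed: 80% of the 3550-image dataset
batch_size = 32
batches_per_epoch = math.ceil(total_train_images / batch_size)  # 2840 / 32 -> 89

lr0 = 0.001                        # initial learning rate
lrf = 0.0001                       # final-LR factor: final LR = lr0 * lrf
final_lr = lr0 * lrf               # 1e-7
```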
In order to evaluate the detection performance of the model, precision (Precision), recall (Recall), average Precision mean (meanAveragePrecision, mAP) and F1 score are selected as evaluation indexes, and the corresponding calculation formulas of the indexes are as follows:
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
mAP = (1/N) × Σ AP_i (the mean of the average precision AP over all N defect classes)
F1 = 2 × Precision × Recall / (Precision + Recall)
The model judges whether a detection is successful by computing the intersection over union between the predicted frame and the real frame. If a real frame exists at the target position and the model outputs a correct predicted frame, that detection is a correct positive sample, called TP; if the model outputs a predicted frame at a position where no real frame exists, that detection is a wrong positive sample, called FP; if a real frame exists at a position but the model fails to output a correct predicted frame there, that real frame is a wrong negative sample, called FN; if a position has no real frame and the model outputs no predicted frame there, it is a correct negative sample, called TN.
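The TP/FP/FN bookkeeping above can be illustrated with a small greedy matcher. This is a simplification of a full mAP evaluation (which also sorts predictions by confidence and accumulates a precision-recall curve); boxes in (x1, y1, x2, y2) format are assumed.

```python
def iou(a, b):
    # Intersection over union of two axis-aligned boxes (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def match(preds, gts, thr=0.5):
    # Greedy matching: each ground-truth box may be claimed by one prediction.
    tp, fp, used = 0, 0, set()
    for p in preds:
        best, best_j = 0.0, -1
        for j, g in enumerate(gts):
            if j in used:
                continue
            v = iou(p, g)
            if v > best:
                best, best_j = v, j
        if best >= thr:
            tp += 1            # predicted frame matches a real frame: TP
            used.add(best_j)
        else:
            fp += 1            # predicted frame with no real frame: FP
    fn = len(gts) - len(used)  # real frames the model missed: FN
    return tp, fp, fn

def prf1(tp, fp, fn):
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```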
After training, the model with the minimum loss-function value is selected for prediction on the test set. The precision of the improved model on the four defect classes is: needle scratch 87.3%, pore 88.1%, excessive accumulation 94.2%, material doping 92.8%; the recall is: needle scratch 93.3%, pore 88.7%, excessive accumulation 78.6%, material doping 86.4%; the AP50 values are: needle scratch 95.7%, pore 93.0%, excessive accumulation 84.5%, material doping 93.8%. The average detection time per picture is 13.9 ms, and the mAP50 over all defect classes is 91.7%. The results of the ablation experiments for the improved model are shown in table 2.
Table 2 Performance comparison of ablation experiments on the improved YOLOv8 model
Compared with the initial YOLOv8 model, the final improved model improves the precision of every defect class, with the largest gain of 9.5% on the material-doping class; its mAP50 is 9.0% higher than that of the original YOLOv8 algorithm. In conclusion, the final improved model performs better in both detection accuracy and detection time, and can meet the online detection requirements of the material extrusion 3D printing process.
The invention deploys the improved YOLOv8 model on a Raspberry Pi 4B and performs quantitative detection evaluation with the quantitative detection algorithm; the detection hardware principle is shown in fig. 10, and the quantitative defect detection verification flow in fig. 11. First the printer is started to print the product. After one layer is finished, to prevent the extrusion head's X axis from blocking the view of the printer platform, the extrusion head is first moved to the coordinate origin and the printer platform is then moved to coordinate 200, the corresponding G-code being G28 X Z and G1 Y200. The Raspberry Pi then controls the camera to capture a single frame. If no defect is detected, the printed product is directly output as defect-free; if defects are found, the quantitative evaluation flow is executed: the numerical indexes of each defect are calculated and marked on the image, and finally a defect quantization information picture is output and saved to a local folder, completing the quantitative defect detection task. A single quantitative detection result is shown in fig. 12.
The quantitative detection method designed by the invention calculates specific defect information for each picture in detail: TD (Total Defects) is the total number of defects in one picture; SDN (Severe Defects Number) is the number of severe defects, i.e. defects whose area is greater than one quarter of the overall product area; DAR (Defects Area Ratio) is the ratio of the sum of all defect areas to the total printed-product area. After the whole product is printed, the system automatically generates a histogram summarizing the whole printing process of the product, as shown in fig. 13.
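The three indices can be computed directly from the detected bounding boxes. This is a sketch: the quarter-area severity threshold follows the SDN definition above, and axis-aligned pixel-space boxes with a known product area (obtained from the pixel-to-size calibration) are assumed.

```python
def quantify_defects(boxes, product_area, severe_ratio=0.25):
    # boxes: list of defect boxes (x1, y1, x2, y2) in pixels.
    # product_area: printed-product area in the same pixel units.
    areas = [(x2 - x1) * (y2 - y1) for x1, y1, x2, y2 in boxes]
    td = len(boxes)                                            # TD: total defects
    sdn = sum(a > severe_ratio * product_area for a in areas)  # SDN: severe defects
    dar = sum(areas) / product_area                            # DAR: defect area ratio
    return td, sdn, dar
```

For example, two defects of 100 and 3600 px² on a 10000 px² product give TD = 2, SDN = 1 (only the larger exceeds a quarter of the product area) and DAR = 0.37.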
It should be understood that although the present disclosure describes embodiments, not every embodiment is provided with a separate technical solution, and this description is for clarity only, and those skilled in the art should consider the disclosure as a whole, and the technical solutions of the embodiments may be combined appropriately to form other embodiments that can be understood by those skilled in the art.

Claims (10)

1. An additive manufacturing material extrusion process defect detection method based on an improved YOLOv8 algorithm is characterized by comprising the following steps:
S1, collecting material extrusion defect images, collecting an image of each layer surface of the printed product, and performing image preprocessing on the collected images to generate an original defect image data set;
S2, constructing a material extrusion defect data set, carrying out data enhancement and diffusion model image supplementation on the original defect image data set, and carrying out corresponding defect category labeling on each defect image after the data enhancement and diffusion model image supplementation to generate a defect data set;
S3, constructing an improved defect detection model and constructing a defect detection quantitative evaluation algorithm;
and S4, deploying the improved defect detection model in terminal equipment, and carrying out defect identification and quantitative analysis on the printed product according to the improved defect detection model and a defect detection quantitative evaluation algorithm.
2. The method according to claim 1, wherein the defect detection model constructed in step S3 introduces an attention mechanism module in the backbone network to improve the model's attention to the target position; the calculation process of the attention mechanism module comprises: coordinate coding, spatial transformation and weighted fusion; the attention mechanism module calculates a similarity matrix using the position coding vector of the coordinate information, and then performs weighted fusion of the feature map with the similarity matrix to obtain the final feature representation;
For a given input x, the attention mechanism module encodes each channel along the horizontal and vertical coordinates using two pooling kernels of spatial extent (H, 1) and (1, W) respectively; the output of the c-th channel at vertical height h is:

z_c^h(h) = (1/W) × Σ_{0≤i<W} x_c(h, i)

wherein z_c^h(h) denotes the feature map generated at vertical height h, and x_c(h, i) denotes the elements of the input x in channel c at height h over the width range (1, W). Similarly, the output of the c-th channel at horizontal width w is:

z_c^w(w) = (1/H) × Σ_{0≤j<H} x_c(j, w)

wherein z_c^w(w) denotes the feature map generated at horizontal width w, and x_c(j, w) denotes the elements of the input x in channel c at width w over the height range (H, 1). The two formulas finally generate feature maps of sizes C × H × 1 and C × 1 × W respectively;
Then the feature information of the two dimensions is concatenated and fused, and input into a 1 × 1 convolution transformation function F_1; the specific formula is:

f = δ(F_1([z^h, z^w]))

where δ is a nonlinear activation function and f is an intermediate feature containing horizontal and vertical spatial information;

f is then split into two independent features f^h and f^w; finally, along the spatial dimension, two 1 × 1 convolution transformations F_h and F_w respectively transform f^h and f^w so that their dimensions are consistent with the input x, and the final attention vectors g^h and g^w are obtained in combination with the Sigmoid activation function σ; the whole process is:

g^h = σ(F_h(f^h))
g^w = σ(F_w(f^w))

Finally, the attention vectors g^h and g^w are applied to the input; the final output Y of the attention mechanism module is:

y_c(i, j) = x_c(i, j) × g_c^h(i) × g_c^w(j)
3. The method for detecting additive manufacturing material extrusion process defects based on the improved YOLOv8 algorithm according to claim 1, wherein the loss function of the defect detection model constructed in step S3 is:

L_EIoU = L_IOU + L_dis + L_asp = 1 − IOU + ρ²(b, b_gt)/(w_c² + h_c²) + ρ²(w, w_gt)/w_c² + ρ²(h, h_gt)/h_c²

wherein w denotes the width of the predicted frame, h its height and b its center point coordinates; L_IOU denotes the IOU loss function; L_dis denotes the center-distance loss function; L_asp denotes the aspect (length-width) loss function; IOU denotes the intersection over union, an index measuring the overlap between the predicted frame and the real frame in a target detection algorithm; ρ denotes the Euclidean distance; b_gt denotes the center point of the target frame; w_gt the width of the target frame; h_gt the height of the target frame; w_c denotes the width of the minimum enclosing frame of the target frame and the predicted frame; h_c the height of that minimum enclosing frame.
4. The method for detecting additive manufacturing material extrusion process defects based on the improved YOLOv8 algorithm according to claim 1, wherein the process defects comprise: needle scratch, pore, excessive material accumulation and material doping; in step S2, performing data enhancement and diffusion-model image supplementation on the original defect image data set specifically comprises: performing image enhancement on randomly selected defect images using one or more of exposure change, color change, flipping, cropping, mosaic collage, random occlusion and grid segmentation; and training a characteristic LoRA model for each defect class using the diffusion model, each defect class generating a plurality of defect images using its LoRA model.
5. The method for detecting additive manufacturing material extrusion process defects based on the improved YOLOv8 algorithm according to claim 4, wherein performing quantitative analysis in step S4 based on the constructed defect detection quantitative evaluation algorithm comprises: placing a camera above the printing platform, perpendicular to the printing plane, while the printer prints; and calculating the correspondence between image pixels and the actual size of defects by measuring the distance between the printed product and the camera, so as to obtain one or more items of defect quantification information among the total number of defects, defect size, defect area ratio and damage degree.
6. An additive manufacturing material extrusion process defect detection system based on the improved YOLOv8 algorithm, for implementing the method of any one of claims 1-5, said defect detection system comprising:
the original defect image data set generation module is used for collecting material extrusion defect images, collecting images on the surface of each layer of printed product, and carrying out image preprocessing on the collected images to generate an original defect image data set;
the defect data set generation module is used for constructing a material extrusion defect data set, carrying out data enhancement and diffusion model image supplementation on the original defect image data set, and carrying out corresponding defect category labeling on each defect image after the data enhancement and diffusion model image supplementation to generate a defect data set;
the defect detection model and defect detection quantitative evaluation algorithm construction module is used for constructing an improved defect detection model and constructing a defect detection quantitative evaluation algorithm;
and the defect identification and quantitative analysis module is used for deploying the improved defect detection model to the terminal equipment and carrying out defect identification and quantitative analysis on the printed product according to the improved defect detection model and the defect detection quantitative evaluation algorithm.
7. The additive manufacturing material extrusion process defect detection system based on the improved YOLOv8 algorithm of claim 6, wherein the constructed defect detection model introduces an attention mechanism module in the backbone network to improve the model's attention to the target position; the calculation process of the attention mechanism module comprises: coordinate coding, spatial transformation and weighted fusion; the attention mechanism module calculates a similarity matrix using the position coding vector of the coordinate information, and then performs weighted fusion of the feature map with the similarity matrix to obtain the final feature representation;
for a given input x, the attention mechanism module encodes each channel along the horizontal and vertical coordinates using two pooling kernels of spatial extent (H, 1) and (1, W) respectively; the output of the c-th channel at height h is:

z_c^h(h) = (1/W) × Σ_{0≤i<W} x_c(h, i)

the output of the c-th channel at width w is:

z_c^w(w) = (1/H) × Σ_{0≤j<H} x_c(j, w)

finally generating feature maps of sizes C × H × 1 and C × 1 × W respectively;
then the feature information of the two dimensions is concatenated and fused, and input into a 1 × 1 convolution transformation function F_1; the specific formula is:

f = δ(F_1([z^h, z^w]))

where δ is a nonlinear activation function and f is an intermediate feature containing horizontal and vertical spatial information;

f is then split into two independent features f^h and f^w; finally, along the spatial dimension, two 1 × 1 convolution transformations F_h and F_w respectively transform f^h and f^w so that their dimensions are consistent with the input x, and the final attention vectors g^h and g^w are obtained in combination with the Sigmoid activation function σ; the whole process is:

g^h = σ(F_h(f^h))
g^w = σ(F_w(f^w))

finally, the attention vectors g^h and g^w are applied to the input; the final output Y of the attention mechanism module is:

y_c(i, j) = x_c(i, j) × g_c^h(i) × g_c^w(j)
8. The additive manufacturing material extrusion process defect detection system based on the improved YOLOv8 algorithm of claim 6, wherein the loss function of the constructed defect detection model is:

L_EIoU = L_IOU + L_dis + L_asp = 1 − IOU + ρ²(b, b_gt)/(w_c² + h_c²) + ρ²(w, w_gt)/w_c² + ρ²(h, h_gt)/h_c²

wherein w denotes the width of the predicted frame, h its height and b its center point coordinates; L_IOU denotes the IOU loss function; L_dis denotes the center-distance loss function; L_asp denotes the aspect (length-width) loss function; IOU denotes the intersection over union, an index measuring the overlap between the predicted frame and the real frame in a target detection algorithm; ρ denotes the Euclidean distance; b_gt denotes the center point of the target frame; w_gt the width of the target frame; h_gt the height of the target frame; w_c denotes the width of the minimum enclosing frame of the target frame and the predicted frame; h_c the height of that minimum enclosing frame.
9. The additive manufacturing material extrusion process defect detection system based on the improved YOLOv8 algorithm of claim 6, wherein the process defects comprise: needle scratch, pore, excessive material accumulation and material doping; performing data enhancement and diffusion-model image supplementation on the original defect image data set specifically comprises: performing image enhancement on randomly selected defect images using one or more of exposure change, color change, flipping, cropping, mosaic collage, random occlusion and grid segmentation; and training a characteristic LoRA model for each defect class using the diffusion model, each defect class generating a plurality of defect images using its LoRA model.
10. The additive manufacturing material extrusion process defect detection system based on the improved YOLOv8 algorithm of claim 9, wherein performing quantitative analysis based on the constructed defect detection quantitative evaluation algorithm comprises: placing a camera above the printing platform, perpendicular to the printing plane, while the printer prints; and calculating the correspondence between image pixels and the actual size of defects by measuring the distance between the printed product and the camera, so as to obtain one or more items of defect quantification information among the total number of defects, defect size, defect area ratio and damage degree.
CN202311404001.8A 2023-10-27 2023-10-27 Additive manufacturing process defect detection method and system based on improved YOLOv8 algorithm Pending CN117392097A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311404001.8A CN117392097A (en) 2023-10-27 2023-10-27 Additive manufacturing process defect detection method and system based on improved YOLOv8 algorithm

Publications (1)

Publication Number Publication Date
CN117392097A true CN117392097A (en) 2024-01-12

Family

ID=89468179

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311404001.8A Pending CN117392097A (en) 2023-10-27 2023-10-27 Additive manufacturing process defect detection method and system based on improved YOLOv8 algorithm

Country Status (1)

Country Link
CN (1) CN117392097A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117649351A (en) * 2024-01-30 2024-03-05 武汉大学 Diffusion model-based industrial defect image simulation method and device
CN117649351B (en) * 2024-01-30 2024-04-19 武汉大学 Diffusion model-based industrial defect image simulation method and device

Similar Documents

Publication Publication Date Title
CN111223088B (en) Casting surface defect identification method based on deep convolutional neural network
CN115082467B (en) Building material welding surface defect detection method based on computer vision
CN109961049B (en) Cigarette brand identification method under complex scene
CN110473173A (en) A kind of defect inspection method based on deep learning semantic segmentation
CN111310558A (en) Pavement disease intelligent extraction method based on deep learning and image processing method
CN111402226A (en) Surface defect detection method based on cascade convolution neural network
CN110110646A (en) A kind of images of gestures extraction method of key frame based on deep learning
CN112233067A (en) Hot rolled steel coil end face quality detection method and system
CN112070727B (en) Metal surface defect detection method based on machine learning
CN109284779A (en) Object detecting method based on the full convolutional network of depth
CN117392097A (en) Additive manufacturing process defect detection method and system based on improved YOLOv8 algorithm
CN112001909A (en) Powder bed defect visual detection method based on image feature fusion
CN109085178A (en) A kind of accurate on-line monitoring method of defect fingerprint and feedback strategy for increasing material manufacturing
CN115601332A (en) Embedded fingerprint module appearance detection method based on semantic segmentation
CN114926407A (en) Steel surface defect detection system based on deep learning
Chen et al. X-ray of tire defects detection via modified faster R-CNN
CN115147363A (en) Image defect detection and classification method and system based on deep learning algorithm
CN117455917B (en) Establishment of false alarm library of etched lead frame and false alarm on-line judging and screening method
CN108242061B (en) Supermarket shopping cart hand identification method based on Sobel operator
CN114037684A (en) Defect detection method based on yolov5 and attention mechanism model
CN117197146A (en) Automatic identification method for internal defects of castings
CN112883797A (en) Tobacco shred sundry detection method based on Yolo V3 model
CN115115578B (en) Defect detection method and system in additive manufacturing process
CN111627018B (en) Steel plate surface defect classification method based on double-flow neural network model
CN115456944A (en) Ceramic substrate appearance defect intelligent identification method based on target detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination