CN116958086A - Metal surface defect detection method and system with enhanced feature fusion capability - Google Patents

Metal surface defect detection method and system with enhanced feature fusion capability

Info

Publication number
CN116958086A
CN116958086A (application number CN202310904939.XA)
Authority
CN
China
Prior art keywords
detection model
model
defect
detection
metal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310904939.XA
Other languages
Chinese (zh)
Other versions
CN116958086B (en)
Inventor
周锋
陈帅庭
高淦
葛晓乐
王如刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yancheng Institute of Technology
Yancheng Institute of Technology Technology Transfer Center Co Ltd
Original Assignee
Yancheng Institute of Technology
Yancheng Institute of Technology Technology Transfer Center Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yancheng Institute of Technology, Yancheng Institute of Technology Technology Transfer Center Co Ltd filed Critical Yancheng Institute of Technology
Priority to CN202310904939.XA priority Critical patent/CN116958086B/en
Publication of CN116958086A publication Critical patent/CN116958086A/en
Application granted granted Critical
Publication of CN116958086B publication Critical patent/CN116958086B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 Systems specially adapted for particular applications
    • G01N 21/88 Investigating the presence of flaws or contamination
    • G01N 21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/096 Transfer learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/54 Extraction of image or video features relating to texture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/766 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using regression, e.g. by projecting features on hyperplanes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/70 Labelling scene content, e.g. deriving syntactic or semantic representations
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 Systems specially adapted for particular applications
    • G01N 21/88 Investigating the presence of flaws or contamination
    • G01N 21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N 2021/8887 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges based on image processing techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30136 Metal
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Analytical Chemistry (AREA)
  • Signal Processing (AREA)
  • Biochemistry (AREA)
  • Immunology (AREA)
  • Chemical & Material Sciences (AREA)
  • Pathology (AREA)
  • Quality & Reliability (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a metal surface defect detection method and system with enhanced feature fusion capability. The method comprises the following steps: constructing a data set marked with the surface defects of a preset sample metal according to the defect information of the preset sample metal; acquiring an original YOLOv7-tiny detection model and replacing all ELAN-T modules in the original YOLOv7-tiny detection model with C5New3 modules to obtain a first detection model; adding an add module with adaptive weighted feature fusion capability into the model network of the first detection model to obtain a second detection model; replacing the bounding box regression loss function of the second detection model to obtain a third detection model; performing iterative training on the third detection model with the data set and, after the third detection model converges, selecting the optimal detection model as the final metal surface defect detection model; and detecting surface defects of the metal to be detected with the metal surface defect detection model. The method realizes fast, accurate and robust metal surface defect detection.

Description

Metal surface defect detection method and system with enhanced feature fusion capability
Technical Field
The invention relates to the technical field of metal detection, in particular to a method and a system for detecting metal surface defects with enhanced feature fusion capability.
Background
At present, metal is a vital industrial raw material widely used in manufacturing machinery, aerospace, automobiles, national defense, light industry and other fields. However, in actual industrial production, factors such as raw material quality, the production environment, equipment and human error often cause defects on the metal surface, including weld marks, water stains, punching holes and impurities, and the product surface is easily damaged during production, so identifying surface defects is important. Detecting metal surface defects is therefore of great significance for preventing the unnecessary economic losses and casualties that can result from supplying such defective products to other industries.
For decades, researchers have studied how to classify and detect defects efficiently. Besides manual visual inspection, conventional image processing techniques are widely used for defect detection. Manual visual inspection requires long hours of high-intensity work, and recognition quality and efficiency inevitably decline. Conventional image processing techniques rely on carefully designed hand-crafted features to classify surface defects, and this step improves surface defect detection performance to some extent. However, such hand-crafted features are extremely sensitive to light source intensity and to environmental factors such as differing backgrounds, which can lead to poor robustness and generalization of the defect detection method. Deep learning overcomes this drawback: convolutional neural networks (CNNs) remove the need for manual feature extraction by automatically capturing deep semantic features, so CNN-based methods are more robust and generalize better than traditional methods, and convolutional neural networks have become a very important tool in industry. Deep-learning-based target detection algorithms fall into two classes: two-stage and single-stage. A two-stage network first generates region proposals and then classifies them, and its detection accuracy is high; common two-stage target detection algorithms include R-CNN and Fast R-CNN. A single-stage model performs classification and regression directly, so it detects quickly but with lower accuracy, especially for small or overlapping targets; common single-stage object detection algorithms include SSD and YOLO.
Defect detection has a very wide range of applications. However, the defect detection task still has limitations, and directly using an existing target detection model may not handle certain types of defects well. For metal surface defect detection, the nature of metal surfaces creates specific challenges. The first challenge is the variation in defect shape and scale: metal surface defects range from small to large and the intra-class differences are excessive, which makes small defects hard to detect and also makes relatively large defects hard to detect because the shape of the same defect varies. The second challenge is detection efficiency, which is also critical for industrial application: the detection accuracy of the model must be improved as much as possible while still meeting real-time detection requirements.
Disclosure of Invention
Aiming at the problems described above, the invention provides a method and a system for detecting metal surface defects with enhanced feature fusion capability, which solve the problems in the background art that tiny defects are difficult to detect, that relatively large defects are difficult to detect because the shape of the same defect varies, and that model detection accuracy is low.
A metal surface defect detection method with enhanced feature fusion capability comprises the following steps:
constructing a data set marked with the surface defects of the preset sample metal according to the defect information of the preset sample metal;
acquiring an original YOLOv7-tiny detection model, replacing all ELAN-T modules in the original YOLOv7-tiny detection model with C5New3 modules, and acquiring a first detection model;
adding an add module with self-adaptive weighting feature fusion capability into a model network of the first detection model to obtain a second detection model;
replacing the bounding box regression loss function of the second detection model to obtain a third detection model;
performing iterative training on the third detection model by utilizing the data set, and selecting an optimal detection model as a final metal surface defect detection model after the third detection model converges;
and detecting surface defects of the metal to be detected by using the metal surface defect detection model.
Preferably, the constructing the data set marked with the surface defects of the preset sample metal according to the defect information of the preset sample metal includes:
acquiring a surface defect detection image of a preset sample metal, adjusting the image size of the surface defect detection image to be a preset size, and acquiring an adjusted surface defect detection image;
acquiring defect information of a preset sample metal according to the adjusted surface defect detection image;
generating a plurality of defect labels based on defect information of preset sample metals and associating each defect label with a corresponding target defect feature;
and constructing a data set marked with the preset sample metal surface defects according to each defect label and the associated target defect characteristics.
Preferably, the obtaining the original YOLOv7-tiny detection model and replacing all ELAN-T modules in the original YOLOv7-tiny detection model with C5New3 modules, obtaining the first detection model includes:
acquiring a first convolution distribution of an ELAN-T module in an original YOLOv7-tiny detection model, and respectively determining the numbers and arrangement modes of 1×1 convolutions and 3×3 convolutions according to the first convolution distribution;
generating a convolution optimization scheme according to the numbers and arrangement modes of the 1×1 convolutions and 3×3 convolutions of the ELAN-T module;
increasing the respective numbers of 1×1 convolutions and 3×3 convolutions of the ELAN-T module according to the convolution optimization scheme to construct a C5New3 module;
and replacing all ELAN-T modules in the original YOLOv7-tiny detection model with C5New3 modules to obtain a first detection model.
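The module replacement itself can be done generically over a PyTorch model graph. The sketch below is illustrative only: it assumes the YOLOv7-tiny model is an ordinary nn.Module hierarchy, and the names ELANT, C5New3 and replace_modules are hypothetical stand-ins rather than identifiers from the official YOLOv7 code.

```python
import torch.nn as nn

def replace_modules(model: nn.Module, old_cls, make_new):
    """Recursively replace every submodule of type `old_cls` with a module
    built by `make_new(old_module)`. Returns the number of replacements."""
    count = 0
    for name, child in model.named_children():
        if isinstance(child, old_cls):
            setattr(model, name, make_new(child))
            count += 1
        else:
            count += replace_modules(child, old_cls, make_new)
    return count

# Hypothetical usage: swap every ELAN-T block for a C5New3 block with the
# same input/output channel widths (ELANT and C5New3 are assumed classes).
# n_swapped = replace_modules(yolov7_tiny, ELANT,
#                             lambda m: C5New3(m.in_channels, m.out_channels))
```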
Preferably, the adding an add module with adaptive weighted feature fusion capability to the model network of the first detection model, to obtain a second detection model, includes:
Configuring module running logic of an add module with self-adaptive weighting characteristic fusion capability;
and calling a model network of the first detection model, and adding the configured add module with the self-adaptive weighting characteristic fusion capability into the model network to obtain a second detection model.
Preferably, the module running logic of the add module with the adaptive weighted feature fusion capability comprises:
acquiring network output of a trunk feature extraction network of a first detection model, and acquiring a plurality of first feature tensors with different resolutions according to the network output;
setting a weight value for each first feature tensor based on the resolution of the feature tensor by a Sigmoid function;
performing product operation on each first characteristic tensor and the weight value of the characteristic tensor, obtaining a plurality of second characteristic tensors, and performing summation operation on the plurality of first characteristic tensors and the plurality of second characteristic tensors;
and outputting the summation result of the plurality of first characteristic tensors and the second characteristic tensors as a module logic of an add module with adaptive weighting characteristic fusion capability.
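Expressed in code, the module logic described above might look like the following PyTorch sketch. It assumes one learnable scalar weight per input feature tensor, squashed to (0, 1) with a Sigmoid, and sums each weighted tensor with its original; the class name AWFPAdd and the per-tensor scalar weighting are interpretation choices, not details confirmed by the text.

```python
import torch
import torch.nn as nn

class AWFPAdd(nn.Module):
    """Adaptive weighted feature fusion (illustrative sketch).

    Each incoming feature tensor x_i is scaled by sigmoid(w_i), where w_i is
    a learnable scalar, and the weighted tensor is summed with the original:
        y_i = x_i + sigmoid(w_i) * x_i
    """
    def __init__(self, num_inputs: int = 3):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(num_inputs))

    def forward(self, features):
        # `features` is a list/tuple of tensors of different resolutions.
        gates = torch.sigmoid(self.weights)          # values in (0, 1)
        return [x + g * x for x, g in zip(features, gates)]

# Example with three backbone outputs of different resolutions.
if __name__ == "__main__":
    feats = [torch.randn(1, 128, s, s) for s in (80, 40, 20)]
    fused = AWFPAdd(num_inputs=3)(feats)
    print([f.shape for f in fused])
```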
Preferably, the replacing the bounding box regression loss function of the second detection model to obtain a third detection model includes:
Replacing the original bounding box regression loss function CIoU in the second detection model by using the loss function Focal-SIoU;
and acquiring a third detection model according to the replaced second detection model.
Preferably, the iterative training is performed on the third detection model by using the data set, and after the third detection model converges, an optimal detection model is selected as a final metal surface defect detection model, including:
performing first iterative training on the third detection model by using the data set, taking the model weights at the point where the loss value stops decreasing in each training run as the optimal weights of that run, and acquiring the current model;
performing a plurality of second iterative training on the optimal weight of the current model by adopting a transfer learning mode to obtain a plurality of trained current models;
obtaining the model detection precision of each trained current model through a preset evaluation index;
and selecting the target model with highest model precision as a final metal surface defect detection model.
Preferably, the detecting the surface defect of the metal to be detected by using the metal surface defect detection model includes:
acquiring an area image with defects of metal to be detected, performing pixel optimization and denoising pretreatment on the area image, and acquiring a pretreated area image;
detecting texture characteristics of the preprocessed region image by using a metal surface defect detection model to obtain a detection result;
determining texture defect parameters of the region images according to the detection results, and determining defect types corresponding to each region image according to the texture defect parameters;
and generating an overall defect report of the metal to be detected based on the defect type corresponding to each area image, and uploading the overall defect report to a terminal server.
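A rough illustration of this detection flow is given below: a defect region image is denoised and resized (a simple stand-in for the pixel optimization and denoising pretreatment), the trained detector is applied, and the per-region defect types are collected into an overall report. The OpenCV-based preprocessing, the assumed model output format and all function names are illustrative assumptions.

```python
import json
import cv2
import torch

def preprocess_region(path, size=640):
    """Load a region image, denoise it and resize it to the model input size."""
    img = cv2.imread(path)
    img = cv2.fastNlMeansDenoisingColored(img, None, 10, 10, 7, 21)
    img = cv2.resize(img, (size, size))
    tensor = torch.from_numpy(img).permute(2, 0, 1).float() / 255.0
    return tensor.unsqueeze(0)          # shape (1, 3, size, size)

def detect_regions(model, region_paths, class_names, device="cpu"):
    """Run the surface-defect detector over region images and build a report."""
    model.eval()
    report = []
    with torch.no_grad():
        for path in region_paths:
            x = preprocess_region(path).to(device)
            preds = model(x)            # assumed output: (N, 6) boxes as
                                        # [x1, y1, x2, y2, score, class_id]
            types = sorted({class_names[int(c)] for c in preds[:, 5]})
            report.append({"image": path, "defect_types": types})
    return json.dumps(report, indent=2)  # overall defect report, e.g. for upload
```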
Preferably, the preset evaluation index includes: recall, precision, average precision value, and number of pictures processed per second.
A metal surface defect detection system with enhanced feature fusion capability, the system comprising:
the construction module is used for constructing a data set marked with the surface defects of the preset sample metal according to the defect information of the preset sample metal;
the first replacing module is used for acquiring an original YOLOv7-tiny detection model, replacing all ELAN-T modules in the original YOLOv7-tiny detection model with C5New3 modules and acquiring a first detection model;
the adding module is used for adding an add module with self-adaptive weighting characteristic fusion capability into the model network of the first detection model to obtain a second detection model;
the second replacing module is used for replacing the bounding box regression loss function of the second detection model to obtain a third detection model;
the selecting module is used for carrying out iterative training on the third detection model by utilizing the data set, and selecting the optimal detection model as a final metal surface defect detection model after the third detection model converges;
and the detection module is used for detecting the surface defects of the metal to be detected by using the metal surface defect detection model.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description and drawings.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention.
FIG. 1 is a flowchart of a method for detecting defects on a metal surface with enhanced feature fusion capability according to the present invention;
FIG. 2 is another workflow diagram of a method for detecting defects on a metal surface with enhanced feature fusion capability according to the present invention;
FIG. 3 is a further flowchart of a method for detecting defects on a metal surface with enhanced feature fusion capability according to the present invention;
FIG. 4 is a flowchart of a method for detecting defects on a metal surface with enhanced feature fusion capability according to the present invention;
FIG. 5 is a schematic structural diagram of a C5New3 module in an embodiment of a method for detecting defects on a metal surface with enhanced feature fusion capability according to the present invention;
FIG. 6 is a flow chart of a design calculation of an add module with adaptive weighted feature fusion capability in an embodiment of a method for detecting defects on a metal surface with enhanced feature fusion capability according to the present invention;
FIG. 7 is a schematic diagram of a third improved detection model in an embodiment of a method for detecting defects on a metal surface with enhanced feature fusion according to the present invention;
FIG. 8 is a dataset image of an embodiment of a method for detecting defects on a metal surface with enhanced feature fusion capabilities according to the present invention;
FIG. 9 is a histogram of AP values of different design modules on different defect categories in an embodiment of a metal surface defect detection method with enhanced feature fusion capability according to the present invention;
FIG. 10 is a comparison of the third detection model and several recent defect detection algorithms on the data set in an embodiment of a metal surface defect detection method with enhanced feature fusion capability according to the present invention;
FIG. 11 is a graph comparing the detection results of baseline and an improved model on a dataset in an embodiment of a method for detecting defects on a metal surface with enhanced feature fusion capability provided by the present invention.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
At present, metal is a vital industrial raw material widely used in manufacturing machinery, aerospace, automobiles, national defense, light industry and other fields. However, in actual industrial production, factors such as raw material quality, the production environment, equipment and human error often cause defects on the metal surface, including weld marks, water stains, punching holes and impurities, and the product surface is easily damaged during production, so identifying surface defects is important. Detecting metal surface defects is therefore of great significance for preventing the unnecessary economic losses and casualties that can result from supplying such defective products to other industries.
For decades, researchers have studied how to classify and detect defects efficiently. Besides manual visual inspection, conventional image processing techniques are widely used for defect detection. Manual visual inspection requires long hours of high-intensity work, and recognition quality and efficiency inevitably decline. Conventional image processing techniques rely on carefully designed hand-crafted features to classify surface defects, and this step improves surface defect detection performance to some extent. However, such hand-crafted features are extremely sensitive to light source intensity and to environmental factors such as differing backgrounds, which can lead to poor robustness and generalization of the defect detection method. Deep learning overcomes this drawback: convolutional neural networks (CNNs) remove the need for manual feature extraction by automatically capturing deep semantic features, so CNN-based methods are more robust and generalize better than traditional methods, and convolutional neural networks have become a very important tool in industry. Deep-learning-based target detection algorithms fall into two classes: two-stage and single-stage. A two-stage network first generates region proposals and then classifies them, and its detection accuracy is high; common two-stage target detection algorithms include R-CNN and Fast R-CNN. A single-stage model performs classification and regression directly, so it detects quickly but with lower accuracy, especially for small or overlapping targets; common single-stage object detection algorithms include SSD and YOLO.
Defect detection has a very wide range of applications. However, the defect detection task still has limitations, and directly using an existing target detection model may not handle certain types of defects well. For metal surface defect detection, the nature of metal surfaces creates specific challenges. The first challenge is the variation in defect shape and scale: metal surface defects range from small to large and the intra-class differences are excessive, which makes small defects hard to detect and also makes relatively large defects hard to detect because the shape of the same defect varies. The second challenge is detection efficiency, which is also critical for industrial application: the detection accuracy of the model must be improved as much as possible while still meeting real-time detection requirements. In order to solve the above problems, the present embodiment discloses a metal surface defect detection method with enhanced feature fusion capability.
A metal surface defect detection method with enhanced feature fusion capability, as shown in FIG. 1, comprises the following steps:
step S101, constructing a data set marked with the surface defects of the preset sample metal according to the defect information of the preset sample metal;
Step S102, acquiring an original YOLOv7-tiny detection model, and replacing all ELAN-T modules in the original YOLOv7-tiny detection model with C5New3 modules to acquire a first detection model;
step S103, adding an add module with self-adaptive weighting feature fusion capability into a model network of the first detection model to obtain a second detection model;
step S104, replacing the bounding box regression loss function of the second detection model to obtain a third detection model;
step S105, performing iterative training on the third detection model by utilizing the data set, and selecting an optimal detection model as a final metal surface defect detection model after the third detection model converges;
and S106, detecting surface defects of the metal to be detected by using a metal surface defect detection model.
In the present embodiment, the defect information is expressed as defect pattern information of a preset sample metal;
in this embodiment, the data set marked with the preset sample metal surface defect is represented as an image data set of the preset sample metal surface defect with mark for the defect pattern information;
in this embodiment, the ELAN-T module is represented as a module for performing feature extraction and fusion and information interaction;
in this embodiment, the bounding box regression loss function refers to the regression loss function used by the model for bounding box regression during detection.
The working principle of the technical scheme is as follows: constructing a data set marked with the surface defects of the preset sample metal according to the defect information of the preset sample metal; acquiring an original YOLOv7-tiny detection model, replacing all ELAN-T modules in the original YOLOv7-tiny detection model with C5New3 modules, and acquiring a first detection model; adding an add module with adaptive weighted feature fusion capability into the model network of the first detection model to obtain a second detection model; replacing the bounding box regression loss function of the second detection model to obtain a third detection model; performing iterative training on the third detection model by utilizing the data set, and selecting the optimal detection model as the final metal surface defect detection model after the third detection model converges; and detecting surface defects of the metal to be detected by using the metal surface defect detection model.
The beneficial effects of the technical scheme are as follows: the improved YOLOv7-tiny model enhances the model's ability to extract features and to fuse feature information; the C5New3 module is introduced into the network to replace all ELAN-T modules; the add module added to the network part enables the network to adaptively resolve the importance of different features, so that defects from small to large sizes can be detected adaptively; the method also accelerates the convergence rate of the model and relieves the problem of unbalanced numbers of positive and negative samples in this research; and the model with the highest accuracy among the converged models is selected as the final model, realizing fast, accurate and robust metal surface defect detection. The method solves the problems in the prior art that tiny defects are difficult to detect, that relatively large defects are difficult to detect because the shape of the same defect varies, and that model detection accuracy is low.
In this embodiment, after constructing a data set labeled with a surface defect of a preset sample metal according to defect information of the preset sample metal, acquiring an original YOLOv7-tiny detection model, replacing all ELAN-T modules in the original YOLOv7-tiny detection model with C5New3 modules, and before acquiring the first detection model, further including:
acquiring the image data characteristics of each piece of sub data in the data set, determining whether the image data characteristics meet preset requirements, if so, determining that the sub data format is qualified, and if not, determining that the sub data format is unqualified;
acquiring texture images of defect types corresponding to each piece of sub data based on a data set, and dividing the texture images of the defect types corresponding to each piece of sub data into a plurality of equal-area areas;
acquiring boundary point cloud data of each divided area, if the boundary point cloud data is complete data, using the boundary point cloud data as first divided point cloud data of the divided area, and if the boundary point cloud data is incomplete data, using the boundary point cloud data as second divided point cloud data of a target divided area sharing the point cloud data with the divided area;
acquiring high-density point cloud data of each divided area according to the first divided point cloud data or the second divided point cloud data of the divided area;
carrying out data sampling of a plurality of points on the high-density point cloud data of each divided area to obtain a sampling result;
comparing the sampling results of a plurality of points of each divided area, and determining the surface deformation change condition and the deformation rate change condition of each divided area according to the comparison results;
determining the current deformation description characteristics of the defect types corresponding to each sub data according to the surface deformation change condition and the deformation rate change condition of each divided region;
matching the current deformation description characteristic of the defect type corresponding to each sub data with the basic description information of the defect type corresponding to the sub data, determining the matching degree, if the matching degree is larger than or equal to a preset value, confirming that the sub data information is qualified, and if the matching degree is smaller than the preset value, confirming that the sub data information is unqualified.
In the present embodiment, the image data features refer to attributes of each piece of sub-data, such as its image format and resolution;
in this embodiment, the preset requirement is expressed as a format storage requirement of each sub-data in the data set;
in the present embodiment, the boundary point cloud data is represented as point cloud data on a boundary around each divided area.
The beneficial effects of the technical scheme are as follows: the method can ensure the accuracy of the pattern data of each defect in the data set, ensure the stability and reliability of the pattern data storage of each defect, facilitate the subsequent model training and improve the practicability.
In one embodiment, as shown in fig. 2, the constructing a data set labeled with the surface defect of the preset sample metal according to the defect information of the preset sample metal includes:
step S201, obtaining a surface defect detection image of a preset sample metal, adjusting the image size of the surface defect detection image to a preset size, and obtaining an adjusted surface defect detection image;
step S202, acquiring defect information of preset sample metal according to the adjusted surface defect detection image;
step S203, generating a plurality of defect labels based on defect information of preset sample metals and associating each defect label with a corresponding target defect characteristic;
step S204, a data set marked with the preset sample metal surface defects is constructed according to each defect label and the associated target defect characteristics.
In this embodiment, the preset size may be 640 x 640;
in this embodiment, the defect label is used to represent a preset label corresponding to a defect type corresponding to the defect information;
In this embodiment, the target defect feature is represented as a defect visual appearance feature of a defect type corresponding to each defect label.
The beneficial effects of the technical scheme are as follows: the accuracy and high quality of the model training sample can be guaranteed by generating the defect label and constructing the data set in association with the related defect characteristics, meanwhile, the detection reliability and efficiency of the metal surface defect detection model are guaranteed, and the overall working efficiency and stability are improved.
In one embodiment, as shown in fig. 3, the obtaining the original YOLOv7-tiny detection model and replacing all ELAN-T modules in the original YOLOv7-tiny detection model with C5New3 modules, obtaining the first detection model includes:
step S301, acquiring a first convolution distribution of an ELAN-T module in an original YOLOv7-tiny detection model, and respectively determining the numbers and arrangement modes of 1×1 convolutions and 3×3 convolutions according to the first convolution distribution;
step S302, generating a convolution optimization scheme according to the numbers and arrangement modes of the 1×1 convolutions and 3×3 convolutions of the ELAN-T module;
step S303, increasing the respective numbers of 1×1 and 3×3 convolutions of the ELAN-T module according to the convolution optimization scheme to construct a C5New3 module;
and S304, replacing all ELAN-T modules in the original YOLOv7-tiny detection model with C5New3 modules to obtain a first detection model.
The beneficial effects of the technical scheme are as follows: on the premise of maintaining a certain inference speed, the ability of the network to extract and fuse defect features is further enhanced; increasing the number of 1×1 convolutions consolidates the information between channels, while increasing the number of 3×3 convolutions further enlarges the receptive field and strengthens the interaction of spatial information and feature fusion, and the nonlinearity of the network is also enhanced to a certain extent.
In one embodiment, the adding an add module with adaptive weighted feature fusion capability to the model network of the first detection model, to obtain a second detection model, includes:
configuring module running logic of an add module with self-adaptive weighting characteristic fusion capability;
and calling a model network of the first detection model, and adding the configured add module with the self-adaptive weighting characteristic fusion capability into the model network to obtain a second detection model.
The beneficial effects of the technical scheme are as follows: compatibility between the add module and the model network can be ensured by carrying out module operation logic configuration, and meanwhile, the add module can be used for more stably assisting the model network in carrying out effective defect feature extraction, so that the practicability is further improved.
In one embodiment, the module execution logic for configuring add modules with adaptive weighted feature fusion capability includes:
acquiring network output of a trunk feature extraction network of a first detection model, and acquiring a plurality of first feature tensors with different resolutions according to the network output;
setting a weight value for each first feature tensor based on the resolution of the feature tensor by a Sigmoid function;
performing product operation on each first characteristic tensor and the weight value of the characteristic tensor, obtaining a plurality of second characteristic tensors, and performing summation operation on the plurality of first characteristic tensors and the plurality of second characteristic tensors;
and outputting the summation result of the plurality of first characteristic tensors and the second characteristic tensors as a module logic of an add module with adaptive weighting characteristic fusion capability.
The beneficial effects of the technical scheme are as follows: the comprehensive detection effect of the model on the micro-defects to the large-defects can be improved, effective characteristic information can be processed and extracted from the defects with different scales, and the practicability is further improved.
In one embodiment, replacing the bounding box regression loss function of the second detection model to obtain a third detection model includes:
Replacing the original bounding box regression loss function CIoU in the second detection model by using the loss function Focal-SIoU;
and acquiring a third detection model according to the replaced second detection model.
The beneficial effects of the technical scheme are as follows: the problem of unbalance of the positive and negative samples can be relieved to a certain extent, so that the final regression result is more accurate.
In one embodiment, the iterative training of the third detection model by using the data set, selecting the best detection model as the final metal surface defect detection model after the third detection model converges, includes:
performing first iterative training on the third detection model by using the data set, taking the model weights at the point where the loss value stops decreasing in each training run as the optimal weights of that run, and acquiring the current model;
performing a plurality of second iterative training on the optimal weight of the current model by adopting a transfer learning mode to obtain a plurality of trained current models;
obtaining the model detection precision of each trained current model through a preset evaluation index;
and selecting the target model with highest model precision as a final metal surface defect detection model.
The beneficial effects of the technical scheme are as follows: the target model with highest model precision is selected as the final metal surface defect detection model, so that the accuracy and reliability of the metal surface defect detection model for subsequent metal surface defect detection can be ensured, the detection precision of the model is ensured, and the overall practicability and stability are improved.
In one embodiment, the detecting the surface defect of the metal to be detected by using the metal surface defect detection model includes:
acquiring an area image with defects of metal to be detected, performing pixel optimization and denoising pretreatment on the area image, and acquiring a pretreated area image;
detecting texture characteristics of the preprocessed region image by using a metal surface defect detection model to obtain a detection result;
determining texture defect parameters of the region images according to the detection results, and determining defect types corresponding to each region image according to the texture defect parameters;
and generating an overall defect report of the metal to be detected based on the defect type corresponding to each area image, and uploading the overall defect report to a terminal server.
The beneficial effects of the technical scheme are as follows: the defect texture type of the metal to be detected can be judged and detected quickly and accurately from the defect region images, improving user experience and working efficiency; furthermore, by generating a defect report and uploading it to the terminal server, the user can intuitively see the defect problems of the metal to be detected and make follow-up decisions, further improving user experience and practicability.
In one embodiment, the preset evaluation index includes: recall, precision, average precision value, and number of pictures processed per second.
In one embodiment, the present embodiment further discloses a metal surface defect detection system with enhanced feature fusion capability, as shown in fig. 4, the system includes:
a construction module 401, configured to construct a dataset labeled with a surface defect of a preset sample metal according to defect information of the preset sample metal;
the first replacing module 402 is configured to obtain an original YOLOv7-tiny detection model, replace all ELAN-T modules in the original YOLOv7-tiny detection model with C5New3 modules, and obtain a first detection model;
an adding module 403, configured to add an add module with adaptive weighted feature fusion capability to a model network of the first detection model, to obtain a second detection model;
a second replacing module 404, configured to replace the bounding box regression loss function of the second detection model to obtain a third detection model;
the selecting module 405 is configured to perform iterative training on the third detection model by using the data set, and select an optimal detection model as a final metal surface defect detection model after the third detection model converges;
And the detection module 406 is configured to detect a surface defect of the metal to be detected by using the metal surface defect detection model.
The working principle and the beneficial effects of the above technical solution are described in the method claims, and are not repeated here.
In one embodiment, the embodiment discloses a rapid metal surface defect detection method with enhanced feature fusion capability, which comprises the following steps:
s1, constructing a data set marked with a metal surface defect to be detected; acquiring a metal surface defect detection data set image; the image resize is 640 x 640, and the acquired data set image is subjected to random overturn, contrast adjustment, clipping, scale transformation and other data enhancement methods to expand the data set size; and adapting the labels in the dataset, and generating corresponding labels according to different image enhancement operations.
S2, the feature extraction capability of the original ELAN-T module is unsatisfactory. Therefore, the ELAN-T module of YOLOv7-tiny is improved into the C5New3 module proposed by the present invention, and all ELAN-T modules in the whole model structure are replaced with C5New3 modules, so as to further enhance the network's ability to extract and fuse defect features while maintaining a certain inference speed.
As shown in fig. 5, the ELAN-T module contains only 5 convolutions, three of which are 1×1 and two of which are 3×3. A 1×1 convolution only performs information interaction and feature fusion between channels and provides no interaction or fusion between adjacent pixels, so only the two 3×3 convolutions can enhance feature fusion between adjacent pixels. Therefore, the numbers of 1×1 and 3×3 convolutions are increased on the basis of the C5 module: the additional 1×1 convolutions consolidate the information between channels, while the additional 3×3 convolutions further enlarge the receptive field and strengthen the interaction and fusion of spatial information. Finally, the features are concatenated through a Concat operation, which also enhances the nonlinearity of the network to a certain extent.
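To make the structural idea concrete, a hedged PyTorch sketch of an ELAN-style block with additional 1×1 and 3×3 convolutions follows. The exact number and arrangement of convolutions in the patented C5New3 module follow fig. 5 and are not fully specified in this text, so the branch layout and channel splits below are illustrative assumptions only.

```python
import torch
import torch.nn as nn

def conv_bn_act(c_in, c_out, k):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, k, padding=k // 2, bias=False),
        nn.BatchNorm2d(c_out),
        nn.SiLU(inplace=True),
    )

class C5New3Sketch(nn.Module):
    """ELAN-like block with extra 1x1 (channel mixing) and 3x3 (receptive
    field / spatial fusion) convolutions; branch outputs are concatenated."""
    def __init__(self, c_in, c_out):
        super().__init__()
        c_mid = c_out // 2
        self.cv1 = conv_bn_act(c_in, c_mid, 1)        # branch 1: 1x1
        self.cv2 = conv_bn_act(c_in, c_mid, 1)        # branch 2: 1x1
        self.cv3 = conv_bn_act(c_mid, c_mid, 3)       # 3x3 chain on branch 2
        self.cv4 = conv_bn_act(c_mid, c_mid, 3)
        self.cv5 = conv_bn_act(c_mid, c_mid, 3)       # extra 3x3 (assumption)
        self.cv6 = conv_bn_act(c_mid, c_mid, 1)       # extra 1x1 (assumption)
        self.out = conv_bn_act(5 * c_mid, c_out, 1)   # fuse concatenated features

    def forward(self, x):
        y1 = self.cv1(x)
        y2 = self.cv2(x)
        y3 = self.cv3(y2)
        y4 = self.cv4(y3)
        y5 = self.cv6(self.cv5(y4))
        return self.out(torch.cat([y1, y2, y3, y4, y5], dim=1))
```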
S3, according to the detection results of the non-optimized model, the detection of indentations (roll_pit) is the least satisfactory among all target types: its AP value is the lowest, the relative area of these targets is small, and there are certain differences within the class. To improve the model's detection under these conditions, an AWFP-Add module is added to the Head part of the network, enhancing the processing and extraction of effective feature information from defects of different scales.
Referring to fig. 6, which shows the calculation flow designed for AWFP-Add, the three feature tensors of different resolutions output by the backbone feature extraction network are multiplied by weights of different magnitudes (the weight values are constrained to (0, 1) by a Sigmoid function); the three new weighted feature tensors are then summed with the original feature tensors, and the result is the output of the whole module.
When the above three steps are completed, we will get an improved complete YOLOv7-tiny model structure, as shown in fig. 7.
S4, to speed up network convergence during training, pay more attention to high-quality examples, and alleviate the imbalance between positive and negative samples to a certain extent so that the final regression result is more accurate, the original CIoU loss function is replaced with the novel Focal-SIoU loss function.
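The precise Focal-SIoU formulation is not reproduced here; a common construction for such losses modulates an SIoU-style regression loss by a focal factor of the IoU, e.g. L = IoU^gamma * L_SIoU, analogously to Focal-EIoU. The sketch below follows that interpretation and should be read as an assumption, with siou_loss left as a plug-in for a full SIoU implementation.

```python
import torch

def focal_iou_weighting(iou: torch.Tensor, base_loss: torch.Tensor,
                        gamma: float = 0.5) -> torch.Tensor:
    """Focal-style modulation of a bounding-box regression loss.

    High-IoU (high-quality) boxes receive larger weights, so training focuses
    on good examples and the effect of easy negatives is reduced:
        L_focal = IoU ** gamma * L_base
    """
    return iou.clamp(min=1e-7) ** gamma * base_loss

# Hypothetical use inside a YOLO-style box loss, assuming `siou_loss(pred, tgt)`
# returns the per-box SIoU regression loss and `box_iou(pred, tgt)` the IoU:
# loss_box = focal_iou_weighting(box_iou(pred, tgt), siou_loss(pred, tgt)).mean()
```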
S5, the detection model is iteratively trained with the data set; over multiple training runs, the model weights at the point where the loss value stops decreasing are kept as the optimal weights of that run; the optimal weights of each run are used to start several further iterations through transfer learning; finally, the detection model with the highest accuracy is selected as the metal surface defect detection model.
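A simplified sketch of this training-and-selection procedure is given below; train_fn and eval_fn are placeholders for the ordinary YOLOv7-tiny training run (keeping the weights at the point the loss stops decreasing) and the mAP evaluation, and all names are illustrative rather than taken from the patent.

```python
import copy

def train_with_restarts(model, train_fn, eval_fn, num_restarts=3):
    """Iterative training with transfer-learning style restarts.

    train_fn(model) -> (best_state_dict, best_loss) for one training run,
    where the weights are taken at the point the loss stops decreasing.
    eval_fn(model)  -> mAP of the model on the validation set.
    The run whose weights give the highest mAP is kept as the final model.
    """
    best_map, best_state = -1.0, None
    state = copy.deepcopy(model.state_dict())
    for run in range(num_restarts):
        model.load_state_dict(state)          # start from previous best weights
        state, _ = train_fn(model)            # one full training run
        model.load_state_dict(state)
        current_map = eval_fn(model)
        print(f"run {run}: mAP = {current_map:.3f}")
        if current_map > best_map:
            best_map, best_state = current_map, copy.deepcopy(state)
    model.load_state_dict(best_state)
    return model, best_map
```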
In this example, the mainstream public dataset GC10-DET is used. GC10-DET contains real commercial metal surface defect detection images covering 10 defect categories, as shown in fig. 8: punching hole (Pu), weld line (Wl), crescent gap (Cg), water spot (Ws), oil spot (Os), silk spot (Ss), inclusion (In), rolled pit (Rp), crease (Cr) and waist folding (Wf); there are 2257 images in total with a resolution of 2048×1000.
The PyTorch deep learning framework was used to train and test the proposed model. The experimental environment was configured as follows: an AMD 15-vCPU processor, an RTX A5000 GPU and 24 GB of memory. The SGD optimizer was used to optimize the model; since larger batch sizes can improve detection performance, a batch size of 32, 500 training epochs and a picture size of 640×640 were used. During training, data enhancement methods such as random flipping, contrast adjustment, cropping and scale transformation were adopted to improve the robustness of the model. To analyze whether each proposed improvement is effective, combination experiments were performed on the improvement strategies to control the variables, as listed below.
1. YOLOv7-tiny with the C5New3 module is called C-YOLO;
2. YOLOv7-tiny with the C5New3 module and the Focal-SIoU loss function is called CF-YOLO;
3. YOLOv7-tiny with the C5New3 module and the AWFP-Add module is called CA-YOLO;
4. YOLOv7-tiny with the C5New3 module, the AWFP-Add module and the Focal-SIoU loss function is called CAF-YOLO.
Recall (R), precision (P), mean average precision (mAP) and the number of pictures processed per second (FPS, frames per second) are taken as the evaluation indexes. The intersection-over-union (IoU) threshold is set to 0.5, and a target is considered successfully detected if its IoU is greater than 0.5. The recall formula is:
R = TP / (TP + FN)
where TP is the number of true positives and FN is the number of false negatives.
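For reference, the sketch below computes the IoU between two boxes and the recall/precision of a set of detections under the IoU > 0.5 matching rule stated above; it assumes axis-aligned boxes in (x1, y1, x2, y2) format.

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def recall_precision(pred_boxes, gt_boxes, thr=0.5):
    """Greedy matching: a ground truth counts as detected if some prediction
    overlaps it with IoU > thr. Returns (recall, precision)."""
    matched, tp = set(), 0
    for p in pred_boxes:
        for i, g in enumerate(gt_boxes):
            if i not in matched and iou(p, g) > thr:
                matched.add(i)
                tp += 1
                break
    fn = len(gt_boxes) - len(matched)
    fp = len(pred_boxes) - tp
    recall = tp / (tp + fn) if gt_boxes else 0.0
    precision = tp / (tp + fp) if pred_boxes else 0.0
    return recall, precision
```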
Under the condition that all hyperparameters are the same, ablation experiments were performed on the improved network models, and the results are shown in the following table:
From the table it can easily be seen that the improvements are effective. The mAP of the original YOLOv7-tiny model is 70.2, and C-YOLO exceeds it by 4.3 mAP. Further, referring to fig. 6, C-YOLO is more accurate than YOLOv7-tiny in all defect categories except Pu. The mAP of CF-YOLO and CA-YOLO reaches 75.3 and 77.1 respectively, 5.1 and 6.9 mAP higher than the original model. CAF-YOLO achieves 81% mAP, the best result. As for specific defect types, referring to fig. 9, CAF-YOLO achieves the highest performance on all defects except Wl, and the comparison with the base model on the PR curve shown in fig. 10 demonstrates that the improvements help identify the various defects. CF-YOLO raises the gain from +4.3 to +5.1 mAP, which shows that the Focal loss + SIoU combination alleviates, to a certain extent, the influence of positive/negative sample imbalance in bounding box regression and the problem of inaccurate regression boxes. By introducing learnable parameters into the add module and combining the adaptive weighted fusion path architecture, the gain increases from +4.3 to +6.9 mAP, reflecting the positive influence of the AWFP-Add module, as shown in fig. 11.
It will be appreciated by those skilled in the art that the first and second aspects of the present application refer to different phases of application.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. The metal surface defect detection method with the enhanced feature fusion capability is characterized by comprising the following steps:
constructing a data set marked with the surface defects of the preset sample metal according to the defect information of the preset sample metal;
acquiring an original YOLOv7-tiny detection model, replacing all ELAN-T modules in the original YOLOv7-tiny detection model with C5New3 modules, and acquiring a first detection model;
adding an add module with self-adaptive weighting feature fusion capability into a model network of the first detection model to obtain a second detection model;
replacing the boundary box regression loss function of the second detection model to obtain a third detection model;
performing iterative training on the third detection model by utilizing the data set, and selecting an optimal detection model as a final metal surface defect detection model after the third detection model converges;
and detecting surface defects of the metal to be detected by using the metal surface defect detection model.
2. The method for detecting surface defects of metal with enhanced feature fusion capability according to claim 1, wherein the constructing a data set labeled with surface defects of a preset sample metal according to defect information of the preset sample metal comprises:
acquiring a surface defect detection image of a preset sample metal, adjusting the image size of the surface defect detection image to be a preset size, and acquiring an adjusted surface defect detection image;
acquiring defect information of a preset sample metal according to the adjusted surface defect detection image;
generating a plurality of defect labels based on the defect information of the preset sample metal, and associating each defect label with a corresponding target defect feature;
and constructing a data set marked with the preset sample metal surface defects according to each defect label and the associated target defect characteristics.
3. The method for detecting metal surface defects with enhanced feature fusion capability according to claim 1, wherein the obtaining the original YOLOv7-tiny detection model and replacing all ELAN-T modules in the original YOLOv7-tiny detection model with C5New3 modules, obtaining the first detection model includes:
acquiring a first convolution distribution of an ELAN-T module in the original YOLOv7-tiny detection model, and respectively determining the quantities and arrangement modes of the 1×1 convolutions and 3×3 convolutions according to the first convolution distribution;
generating a convolution optimization scheme according to the quantities and arrangement modes of the 1×1 convolutions and 3×3 convolutions of the ELAN-T module;
increasing the respective numbers of 1×1 convolutions and 3×3 convolutions of the ELAN-T module according to a convolution optimization scheme to construct a C5New3 module;
and replacing all ELAN-T modules in the original YOLOv7-tiny detection model with C5New3 modules to obtain a first detection model.
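Claim 3 fixes only that the numbers of 1×1 and 3×3 convolutions of the ELAN-T module are increased; it does not spell out the internal layout of C5New3. The PyTorch block below is therefore a hypothetical ELAN-style arrangement that merely illustrates the idea; the branch count, channel widths and activation are assumptions.

```python
import torch
import torch.nn as nn

def conv_bn_act(c_in, c_out, k):
    """k x k convolution followed by batch normalisation and activation."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, k, stride=1, padding=k // 2, bias=False),
        nn.BatchNorm2d(c_out),
        nn.LeakyReLU(0.1, inplace=True),
    )

class C5New3Like(nn.Module):
    """Hypothetical ELAN-style block with additional 1x1 and 3x3 convolutions:
    two 1x1 branches, a chain of three 3x3 convolutions, and a final 1x1
    convolution fusing the concatenation of all intermediate outputs."""
    def __init__(self, c_in, c_out):
        super().__init__()
        hidden = c_out // 2
        self.branch1 = conv_bn_act(c_in, hidden, 1)
        self.branch2 = conv_bn_act(c_in, hidden, 1)
        self.conv3a = conv_bn_act(hidden, hidden, 3)
        self.conv3b = conv_bn_act(hidden, hidden, 3)
        self.conv3c = conv_bn_act(hidden, hidden, 3)   # extra 3x3 stage
        self.fuse = conv_bn_act(hidden * 5, c_out, 1)  # extra 1x1 fusion

    def forward(self, x):
        y1 = self.branch1(x)
        y2 = self.branch2(x)
        y3 = self.conv3a(y2)
        y4 = self.conv3b(y3)
        y5 = self.conv3c(y4)
        return self.fuse(torch.cat([y1, y2, y3, y4, y5], dim=1))
```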
4. The method for detecting metal surface defects with enhanced feature fusion capability according to claim 1, wherein adding an add module with adaptive weighted feature fusion capability to a model network of the first detection model to obtain a second detection model comprises:
configuring the module running logic of an add module with self-adaptive weighting feature fusion capability;
and calling a model network of the first detection model, and adding the configured add module with the self-adaptive weighting characteristic fusion capability into the model network to obtain a second detection model.
5. The method of claim 4, wherein the configuring the module run logic of the add module with adaptive weighted feature fusion capability comprises:
acquiring network output of a trunk feature extraction network of a first detection model, and acquiring a plurality of first feature tensors with different resolutions according to the network output;
setting a weight value for each first feature tensor based on the resolution of the feature tensor by a Sigmoid function;
performing a product operation on each first feature tensor and the weight value of that feature tensor to obtain a plurality of second feature tensors, and performing a summation operation on the plurality of first feature tensors and the plurality of second feature tensors;
and outputting the summation result of the plurality of first feature tensors and second feature tensors as the module running logic of the add module with self-adaptive weighting feature fusion capability.
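A minimal sketch of the module logic of claim 5 is given below. It assumes the first feature tensors have already been brought to a common shape by the surrounding network, and it parameterises the weights as one learnable scalar per branch passed through a Sigmoid, whereas the claim ties the weight values to the tensor resolutions; both simplifications are assumptions.

```python
import torch
import torch.nn as nn

class AWFPAdd(nn.Module):
    """Hypothetical adaptive weighted add module: each input feature tensor is
    scaled by a Sigmoid-activated learnable weight, and the original ("first")
    and scaled ("second") tensors are summed to form the output."""
    def __init__(self, num_inputs):
        super().__init__()
        self.alpha = nn.Parameter(torch.zeros(num_inputs))   # learnable parameters

    def forward(self, feats):
        w = torch.sigmoid(self.alpha)                         # one weight in (0, 1) per branch
        weighted = [w[i] * f for i, f in enumerate(feats)]    # the "second" feature tensors
        return sum(feats) + sum(weighted)                     # sum of first and second tensors

# usage sketch: fusing two feature maps that already share the same shape
fuse = AWFPAdd(num_inputs=2)
out = fuse([torch.randn(1, 128, 40, 40), torch.randn(1, 128, 40, 40)])
```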
6. The method for detecting metal surface defects with enhanced feature fusion capability according to claim 1, wherein replacing the bounding box regression loss function of the second detection model to obtain a third detection model comprises:
replacing the original bounding box regression loss function CIoU in the second detection model with the loss function Focal-SIoU;
and acquiring a third detection model according to the replaced second detection model.
7. The method for detecting metal surface defects with enhanced feature fusion capability according to claim 1, wherein the iterative training of the third detection model by using the data set, and selecting the optimal detection model as the final metal surface defect detection model after the convergence of the third detection model, comprises:
performing a first iterative training on the third detection model by using the data set, taking the model weight obtained when the loss value stops decreasing in each training as the optimal weight of that training, and acquiring the current model;
performing a plurality of second iterative trainings on the optimal weight of the current model by adopting a transfer learning mode, and obtaining a plurality of trained current models;
obtaining the model detection precision of each trained current model through a preset evaluation index;
and selecting the target model with the highest model detection precision as the final metal surface defect detection model.
8. The method for detecting metal surface defects with enhanced feature fusion capability according to claim 1, wherein the step of performing surface defect detection on the metal to be detected by using a metal surface defect detection model comprises the steps of:
acquiring a region image containing defects of the metal to be detected, performing pixel optimization and denoising preprocessing on the region image, and acquiring a preprocessed region image;
detecting texture features of the preprocessed region image by using the metal surface defect detection model to obtain a detection result;
determining texture defect parameters of each region image according to the detection result, and determining the defect type corresponding to each region image according to the texture defect parameters;
and generating an overall defect report of the metal to be detected based on the defect type corresponding to each region image, and uploading the overall defect report to a terminal server.
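By way of illustration only, the preprocessing and inference steps of claim 8 might look like the OpenCV sketch below; the specific denoising and contrast operations, and the `defect_model` inference call, are assumptions rather than the concrete operations fixed by the claim.

```python
import cv2

def preprocess_region(img_bgr, size=640):
    """Hypothetical preprocessing: resize, denoise and normalise contrast."""
    img = cv2.resize(img_bgr, (size, size))
    img = cv2.fastNlMeansDenoisingColored(img, None, 10, 10, 7, 21)  # denoising
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)            # simple stand-in for "pixel optimization"
    return cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)

# `defect_model` stands in for the trained metal surface defect detection model
# of claim 7; its exact inference interface is not specified here.
region = preprocess_region(cv2.imread("region_001.png"))
detections = defect_model(region)            # e.g. a list of (defect_type, confidence, box)
report = {"image": "region_001.png",
          "defect_types": sorted({d[0] for d in detections})}
```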
9. The method for detecting metal surface defects with enhanced feature fusion capability according to claim 7, wherein the preset evaluation index comprises: recall, precision, average precision value, and number of pictures processed per second.
10. A metal surface defect detection system with enhanced feature fusion capability, the system comprising:
the construction module is used for constructing a data set marked with the surface defects of the preset sample metal according to the defect information of the preset sample metal;
the first replacing module is used for acquiring an original YOLOv7-tiny detection model, replacing all ELAN-T modules in the original YOLOv7-tiny detection model with C5New3 modules, and acquiring a first detection model;
the adding module is used for adding an add module with self-adaptive weighting feature fusion capability into the model network of the first detection model to obtain a second detection model;
the second replacing module is used for replacing the bounding box regression loss function of the second detection model to obtain a third detection model;
the selecting module is used for carrying out iterative training on the third detection model by utilizing the data set, and selecting the optimal detection model as a final metal surface defect detection model after the third detection model converges;
and the detection module is used for detecting the surface defects of the metal to be detected by using the metal surface defect detection model.
CN202310904939.XA 2023-07-21 2023-07-21 Metal surface defect detection method and system with enhanced feature fusion capability Active CN116958086B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310904939.XA CN116958086B (en) 2023-07-21 2023-07-21 Metal surface defect detection method and system with enhanced feature fusion capability

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310904939.XA CN116958086B (en) 2023-07-21 2023-07-21 Metal surface defect detection method and system with enhanced feature fusion capability

Publications (2)

Publication Number Publication Date
CN116958086A true CN116958086A (en) 2023-10-27
CN116958086B CN116958086B (en) 2024-04-19

Family

ID=88448751

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310904939.XA Active CN116958086B (en) 2023-07-21 2023-07-21 Metal surface defect detection method and system with enhanced feature fusion capability

Country Status (1)

Country Link
CN (1) CN116958086B (en)

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113052200A (en) * 2020-12-09 2021-06-29 江苏科技大学 Sonar image target detection method based on yolov3 network
CN113920400A (en) * 2021-10-14 2022-01-11 辽宁工程技术大学 Metal surface defect detection method based on improved YOLOv3
WO2022053001A1 (en) * 2020-09-10 2022-03-17 上海航天精密机械研究所 Weld seam internal defect intelligent detection device and method, and medium
CN114548376A (en) * 2022-02-24 2022-05-27 湖南工学院 Intelligent transportation system-oriented vehicle rapid detection network and method
CN114638784A (en) * 2022-02-17 2022-06-17 中南大学 Method and device for detecting surface defects of copper pipe based on FE-YOLO
WO2022160170A1 (en) * 2021-01-28 2022-08-04 东莞职业技术学院 Method and apparatus for detecting metal surface defects
CN114897802A (en) * 2022-04-25 2022-08-12 江苏科技大学 Metal surface defect detection method based on improved fast RCNN algorithm
CN115082855A (en) * 2022-06-20 2022-09-20 安徽工程大学 Pedestrian occlusion detection method based on improved YOLOX algorithm
CN115330729A (en) * 2022-08-16 2022-11-11 盐城工学院 Multi-scale feature attention-fused light-weight strip steel surface defect detection method
CN115880223A (en) * 2022-11-10 2023-03-31 淮阴工学院 Improved YOLOX-based high-reflectivity metal surface defect detection method
CN116152744A (en) * 2023-03-09 2023-05-23 深圳华付技术股份有限公司 Dynamic detection method and device for electric vehicle, computer equipment and storage medium
CN116188419A (en) * 2023-02-21 2023-05-30 浙江理工大学桐乡研究院有限公司 Lightweight cloth flaw detection method capable of being deployed in embedded equipment
CN116228730A (en) * 2023-03-16 2023-06-06 北京信息科技大学 Tablet surface defect detection method and system based on improved YOLOv7
CN116309361A (en) * 2023-02-17 2023-06-23 四川轻化工大学 Light machine vision detection method for permanent magnet surface defects
CN116343144A (en) * 2023-05-24 2023-06-27 武汉纺织大学 Real-time target detection method integrating visual perception and self-adaptive defogging
CN116416613A (en) * 2023-04-13 2023-07-11 广西壮族自治区农业科学院 Citrus fruit identification method and system based on improved YOLO v7

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
HONGZHI BIAN et al.: "Detection Method of Helmet Wearing Based on UAV Images and Yolov7", 2023 IEEE 6th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), pages 1633 - 1640 *
WU RQ et al.: "Enhanced You Only Look Once X for surface defect detection of strip steel", Frontiers in Neurorobotics, vol. 16, pages 1 - 12 *
ZHAO Chunjiang et al.: "Research on automatic identification and counting method of caged chickens/eggs based on improved YOLO v7-tiny", Transactions of the Chinese Society for Agricultural Machinery, pages 1 - 20 *
CHEN Hongcai et al.: "Defect detection method for pharmaceutical glass bottles based on YOLOv3", Packaging Engineering, vol. 41, no. 7, pages 241 - 246 *

Also Published As

Publication number Publication date
CN116958086B (en) 2024-04-19

Similar Documents

Publication Publication Date Title
WO2023077404A1 (en) Defect detection method, apparatus and system
CN111179251B (en) Defect detection system and method based on twin neural network and by utilizing template comparison
Zhu et al. Modified densenet for automatic fabric defect detection with edge computing for minimizing latency
CN108830285B (en) Target detection method for reinforcement learning based on fast-RCNN
CN113592845A (en) Defect detection method and device for battery coating and storage medium
CN111915704A (en) Apple hierarchical identification method based on deep learning
CN105913415A (en) Image sub-pixel edge extraction method having extensive adaptability
CN109544522A (en) A kind of Surface Defects in Steel Plate detection method and system
CN101140216A (en) Gas-liquid two-phase flow type recognition method based on digital graphic processing technique
CN103914708A (en) Food variety detection method and system based on machine vision
CN113012153A (en) Aluminum profile flaw detection method
CN117115147B (en) Textile detection method and system based on machine vision
Cao et al. Balanced multi-scale target score network for ceramic tile surface defect detection
Chen et al. Real-time defect detection of TFT-LCD displays using a lightweight network architecture
Zhang et al. Fabric defect detection based on visual saliency map and SVM
Ma et al. A hierarchical attention detector for bearing surface defect detection
CN116958086B (en) Metal surface defect detection method and system with enhanced feature fusion capability
Wang et al. Optical fiber defect detection method based on DSSD network
Zhu et al. Surface defect detection of sawn timbers based on efficient multilevel feature integration
CN114092396A (en) Method and device for detecting corner collision flaw of packaging box
Zhang et al. IDDM: An incremental dual-network detection model for in-situ inspection of large-scale complex product
Gong et al. Research on surface defects detection method and system in manufacturing processes based on the fusion of multi-scale features and semantic segmentation for intelligent manufacturing
Yi et al. YOLOv7-SiamFF: Industrial defect detection algorithm based on improved YOLOv7
Saberironaghi Deep learning models for defect and anomaly detection on industrial surfaces
Ndukwe et al. Screw Production Optimization Using Artificial Neural Network (ANN) Technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant