CN115063348A - Part surface defect detection method, device, equipment and medium - Google Patents

Part surface defect detection method, device, equipment and medium

Info

Publication number
CN115063348A
CN115063348A
Authority
CN
China
Prior art keywords
processing
adopting
module
target
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210549864.3A
Other languages
Chinese (zh)
Inventor
艾如飞
李才博
吴斌
王迅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhaotong Liangfengtai Information Technology Co ltd
Original Assignee
Zhaotong Liangfengtai Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhaotong Liangfengtai Information Technology Co ltd filed Critical Zhaotong Liangfengtai Information Technology Co ltd
Priority to CN202210549864.3A
Publication of CN115063348A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30136Metal
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30204Marker

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a part surface defect detection method, device, equipment and medium, relating to the technical field of target detection and comprising the following steps: establishing an initial model based on a high-resolution network and training the initial model with training data to obtain a target model; in the target model, connecting feature maps of different resolutions in parallel and adding interaction among the feature maps of different resolutions on the basis of the parallel-connected feature maps; exchanging information in the channel dimension for each pixel point on the feature map by adopting dual-channel processing in the convolution module and the processing module; extracting features from the feature map by adopting separable convolution in the fusion module so as to perform multi-resolution fusion; and acquiring a target image and processing it with the target model to obtain a target result with defect marks, thereby solving the problem of low detection accuracy caused by small-area features being lost through compression when existing image recognition is applied to part surface defects.

Description

Part surface defect detection method, device, equipment and medium
Technical Field
The invention relates to the technical field of target detection, in particular to a method, a device, equipment and a medium for detecting surface defects of parts.
Background
The detection of metal surface defects has long been one of the most common difficulties in industrial production. In the industrial metal production process, image acquisition devices such as cameras acquire image information of the working environment, image processing technology extracts the effective information, and various detections and judgments of metal surface defects are made in place of the human eye, which greatly improves detection efficiency and the level of automation.
The traditional machine-vision-based defect detection process generally comprises image acquisition and preprocessing, defect feature extraction, and identification and classification. However, the defect types are numerous (typical defects include series of metal defects unsuitable for production such as oxidation, peeling, missing coating, watermark, crease, unevenness, rust spots, scratches, crushing, roll marks, bubbles, roll points, pock marks, missing paint, shrinkage cavities, impurities, fibers, paint slag, paint bursting, corrosion, wrinkles, pressed-in foreign matter, black spots, oil spots and color difference), so the computation is complicated.
For surface defect detection there are two main categories of methods: traditional image processing methods and machine learning methods, both of which are built on hand-crafted features and shallow machine learning. Traditional image processing methods use the original attributes reflected by local anomalies to detect and segment defects, and are further divided into structural methods, threshold methods, spectral methods and model-based methods, but their recognition precision is too low, mainly because the image loses the characteristics of tiny areas when it is compressed during conventional image processing, while defects on the metal surface arise more easily in exactly those tiny areas, so a recognition method with higher accuracy is needed.
Disclosure of Invention
In order to overcome the above technical defects, the invention aims to provide a part surface defect detection method, device, equipment and medium, which are used for overcoming the problem of low detection accuracy caused by small-area features being lost through compression when existing image recognition is applied to part surface defects.
The invention discloses a part surface defect detection method, which comprises the following steps:
establishing an initial model based on a high-resolution network, and training the initial model by adopting training data to obtain a target model, wherein the training data comprises a part surface map with defect marks;
performing sampling processing for four times in the target model, performing parallel connection on the feature maps with different resolutions through the processing of a convolution module, a processing module and a fusion module, and adding interaction among the feature maps with different resolutions on the basis of the parallel connection feature maps;
exchanging information in the channel dimension for each pixel point on the feature map by adopting dual-channel processing in the convolution module and the processing module, wherein the dual-channel processing comprises one channel sequentially adopting a first weight matrix, a 3 × 3 depthwise convolution layer and a second weight matrix to process the input features, the result being fused with the input features output by the other channel;
extracting features from the feature map by adopting separable convolution in a fusion module so as to perform multi-resolution fusion;
and acquiring a target image, and processing the target image by adopting the target model to obtain a target result with a defect mark.
Preferably, the first weight matrix is obtained by performing cross-resolution weight calculation on the input features;
and the second weight matrix is obtained by carrying out space weight calculation on the input features.
Preferably, the adding of the interaction between the feature maps with different resolutions on the basis of the feature maps after parallel connection includes:
copying feature maps with different resolutions, and unifying the number of channels by adopting bilinear sampling and single-layer convolution;
reducing the resolution of the channel-unified feature maps by adopting 3 × 3 convolution;
and carrying out additive fusion on the reduced-resolution feature maps so as to add interaction between the feature maps of different resolutions.
Preferably, under the PaddleSeg framework, GPU resources are called to run to train the initial model, and a target model is obtained.
Preferably, before the training of the initial model by using the training data, the method includes:
collecting the surface images of the parts in a lossless compression format, marking the surface images of the parts by adopting an auxiliary marking tool, and generating the surface images of the parts with the defect marks as training data.
Preferably, after the surface map of each part is marked by the auxiliary marking tool, the method comprises the following steps:
and converting the gray mark into a pseudo-color mark by adopting a preset conversion tool.
Preferably, the establishing an initial model based on the high-resolution network, and training the initial model by using training data to obtain a target model, includes:
calling a preset configuration file to configure parameters for the initial model, and adjusting the parameters in the training process until a target model is obtained;
and calling a visual interface to output an output image in the training process so as to carry out real-time monitoring.
The invention also provides a part surface defect detection device, which comprises the following components:
the training module is used for establishing an initial model based on a high-resolution network, and training the initial model by adopting training data to obtain a target model, wherein the training data comprises a part surface map with defect marks;
the processing module is used for carrying out sampling processing for four times in the target model, connecting the feature maps with different resolutions in parallel, and adding interaction among the feature maps with different resolutions on the basis of the feature maps connected in parallel, wherein each stage comprises the processing of the convolution module, the processing module and the fusion module;
exchanging information in the channel dimension for each pixel point on the feature map by adopting dual-channel processing in the convolution module and the processing module, wherein the dual-channel processing comprises one channel sequentially adopting a first weight matrix, a 3 × 3 depthwise convolution layer and a second weight matrix to process the input features, the result being fused with the input features output by the other channel;
extracting features from the feature map by adopting separable convolution in a fusion module so as to perform multi-resolution fusion;
and the identification module is used for acquiring a target image, processing the target image by adopting the target model and acquiring a target result with a defect mark.
The present invention also provides a computer apparatus, comprising:
a memory for storing executable program code; and
a processor for calling the executable program code in the memory and executing the above detection method.
The invention also includes a computer-readable storage medium having stored thereon a computer program,
the computer program realizes the steps of the detection method when being executed by a processor.
After the technical scheme is adopted, compared with the prior art, the method has the following beneficial effects:
the method is based on the HRNet network, and combines double-channel processing, so that the trained target model tends to be light, and meanwhile, a weight matrix is adopted to replace 1 × 1 convolution in the double-channel processing operation, so that the calculation capability of the detection model is improved, the high-resolution expression is kept, and the accuracy is increased, so that the problem of low detection result accuracy caused by the fact that small-area characteristics are lost due to compression in the surface defect of the existing image recognition part is solved.
Drawings
FIG. 1 is a flowchart of a first embodiment of a method for detecting surface defects of a part according to the present invention;
FIG. 2 is a flowchart illustrating an interaction between feature maps with different resolutions added on the basis of feature maps after parallel connection according to a first embodiment of the method for detecting surface defects of parts of the present invention;
FIG. 3 is a schematic diagram showing output dimensions of layers in a target model according to a first embodiment of the method for detecting surface defects of parts of the present invention;
FIG. 4 is a schematic block diagram of a second embodiment of an apparatus for detecting surface defects of parts according to the present invention;
FIG. 5 is a schematic block diagram of an embodiment of the computer device of the present invention.
Reference numerals:
6-part surface defect detection device; 61-a training module; 62-a processing module; 63-an identification module; 7-a computer device; 71-a memory; 72-processor.
Detailed Description
The advantages of the invention are further illustrated in the following description of specific embodiments in conjunction with the accompanying drawings.
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "upon", "when" or "in response to a determination", depending on the context.
In the description of the present invention, it is to be understood that the terms "longitudinal", "lateral", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, and are used merely for convenience of description and for simplicity of description, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed in a particular orientation, and be operated, and thus, are not to be construed as limiting the present invention.
In the description of the present invention, unless otherwise specified and limited, it is to be noted that the terms "mounted," "connected," and "connected" are to be interpreted broadly, and may be, for example, a mechanical connection or an electrical connection, a communication between two elements, a direct connection, or an indirect connection via an intermediate medium, and specific meanings of the terms may be understood by those skilled in the art according to specific situations.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only for facilitating the explanation of the present invention, and have no specific meaning in themselves. Thus, "module" and "component" may be used interchangeably.
The first embodiment is as follows: this embodiment provides a part surface defect detection method. It should be noted that the method introduces a PaddleSeg-based application scenario model that uses the HRNet network to perform multi-scale, high-precision target recognition of part surface defect features, training a high-performance, multi-purpose application scenario model with high recognition accuracy that covers more metal defect types. With reference to fig. 1, the method specifically includes the following steps:
s100: establishing an initial model based on a high-resolution network, and training the initial model by adopting training data to obtain a target model, wherein the training data comprises a part surface map with defect marks;
In the embodiment, GPU resources are called under the PaddleSeg framework to train the initial model and obtain a target model. Specifically, the relevant training operating environment is prepared (Paddle >= 1.7.0, Python >= 3.5+), and a large number of pictures of metal surface defects are obtained as training data, wherein the pictures used as training data contain the existing metal surface defects, so that after training a target model capable of detecting a wide coverage of metal defect types is obtained.
In the above step, before the training of the initial model by using the training data, the method includes:
collecting the surface images of the parts in a lossless compression format, marking the surface images of the parts by adopting an auxiliary marking tool, and generating the surface images of the parts with the defect marks as training data.
Specifically, the part surface maps are labeled. The labeling tool can be the auxiliary labeling tool EISeg (Efficient Interactive Segmentation), an efficient and intelligent interactive segmentation labeling software developed on Paddle based on the RITM and EdgeFlow algorithms. It covers high-quality interactive segmentation models for different domains and reduces the labeling cost. In addition, the labels produced by EISeg can be used to train the other segmentation models provided by PaddleSeg, so that a high-precision model for the metal defect detection scene can be obtained and the whole workflow of a segmentation task, from data labeling to model training and prediction, is opened up.
In the above steps, the labeling process needs attention to the following: the part surface maps use pictures in the PNG lossless compression format, and the labeled categories are the various metal defect features (including but not limited to defects such as oxidation, peeling, missing coating, watermark, crease, unevenness, rust spots, scratches, crushing, roll marks, bubbles, roll points and pock marks). PaddleSeg supports both grayscale labels and pseudo-color labels, but in order to keep the labels uniform, after the part surface maps are marked with the auxiliary labeling tool the grayscale labels are converted into pseudo-color labels with a preset conversion tool. Specifically, unified pseudo-color labels are used so that the labels clearly reflect the defect types; pseudo-color labels can also be converted back into grayscale labels, and either form can be selected according to the specific implementation scenario.
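As an illustrative sketch of such a preset conversion tool (the class ids, palette colors and folder names are assumptions for illustration, and this is not the conversion script bundled with PaddleSeg), a grayscale class-index label PNG can be turned into a palette ("pseudo-color") PNG as follows:

```python
# Illustrative sketch of a gray-to-pseudo-color conversion tool (assumed class
# ids, palette and folder names). Pixel values are kept as class indices; only
# a color palette is attached so the defect types become visually distinct.
from pathlib import Path
from PIL import Image

PALETTE = [0, 0, 0,        # 0: background
           255, 0, 0,      # 1: e.g. rust spot   (assumed mapping)
           0, 255, 0,      # 2: e.g. scratch     (assumed mapping)
           0, 0, 255]      # 3: e.g. crush       (assumed mapping)
PALETTE += [0] * (256 * 3 - len(PALETTE))   # pad to the 768 values PIL expects

def gray_to_pseudocolor(src_dir, dst_dir):
    Path(dst_dir).mkdir(parents=True, exist_ok=True)
    for png in Path(src_dir).glob("*.png"):
        label = Image.open(png).convert("L")   # grayscale class-index map
        label.putpalette(PALETTE)              # attaches palette, mode becomes "P"
        label.save(Path(dst_dir) / png.name)   # class indices are unchanged

if __name__ == "__main__":
    gray_to_pseudocolor("annotations_gray", "annotations_pseudocolor")
```

Because only a palette is attached and the stored class indices are unchanged, the pseudo-color labels remain directly usable as training annotations.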
Specifically, the establishing an initial model based on the high-resolution network, and training the initial model by using training data to obtain a target model includes:
calling a preset configuration file to configure parameters for the initial model, and adjusting the parameters in the training process until a target model is obtained;
calling a visual interface to output an output image in the training process so as to carry out real-time monitoring.
In specific implementation, the configuration parameters are jointly determined by the imported config.py and hrnet.yaml, where the priority of the yaml file is higher than that of config.py; then train.py is called with GPU resources to train the initial model (from the cfg pre-training model), and vis.py (namely the visualization interface) is called with GPU resources to realize training visualization.
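A minimal sketch of this "hrnet.yaml overrides config.py" priority rule is shown below; the parameter names and default values are assumptions for illustration, not the actual PaddleSeg configuration schema.

```python
# Illustrative sketch: yaml settings take priority over the config.py defaults.
import yaml  # PyYAML

CONFIG_PY_DEFAULTS = {      # stands in for values imported from config.py
    "batch_size": 4,
    "learning_rate": 0.01,
    "iters": 10000,
    "num_classes": 2,
}

def load_config(yaml_path="hrnet.yaml"):
    """Merge the yaml settings over the defaults: the yaml file wins."""
    with open(yaml_path, "r", encoding="utf-8") as f:
        overrides = yaml.safe_load(f) or {}
    cfg = dict(CONFIG_PY_DEFAULTS)
    cfg.update(overrides)   # hrnet.yaml has the higher priority
    return cfg

# cfg = load_config()  # the merged cfg would then be consumed by train.py / vis.py
```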
In this embodiment, a high-resolution representation is required for position-sensitive computer vision tasks. HRNet (high-resolution network) maintains a high-resolution representation throughout the recognition process, so introducing the HRNet network into defect detection improves the accuracy of metal surface defect detection. Simply fusing the Shuffle Block from ShuffleNet (a CNN model) with HRNet yields a lightweight HRNet and enlarges the application scenarios of metal surface defect detection, and using a specific weight matrix in HRNet instead of the 1 × 1 convolution further improves the computational efficiency of the metal surface defect detection network.
S200: performing sampling processing for four times in the target model, performing parallel connection on the feature maps with different resolutions through the processing of a convolution module, a processing module and a fusion module, and adding interaction among the feature maps with different resolutions on the basis of the parallel connection feature maps;
In the above step, the target model is formed based on HRNet, which has four parallel branches. The target model constructed based on HRNet in this embodiment includes a Stem part and a Stage part: the Stem part includes convolution modules (one 3 × 3 convolution with a step size of 2 plus dual-channel processing), and the Stage part includes a plurality of processing modules (dual-channel processing) and one fusion module (multi-resolution); refer to the output dimension diagram after the four-stage sampling processing in fig. 3. The sampling operations are taken as 3 × 3 convolutions with a step size of 2, i.e., the network starts from 1/4 of the resolution of the input original image obtained through two 3 × 3 convolutions with step size 2; the feature maps of different resolutions are then connected in parallel, and interaction among the feature maps of different resolutions is added on the basis of the parallel connection, which differs from the conventional approach of first reducing and then increasing the resolution. Specifically, adding the interaction between the feature maps of different resolutions on the basis of the parallel-connected feature maps, referring to fig. 2, includes the following steps:
s210: copying feature maps with different resolutions, and unifying the number of channels by adopting bilinear sampling and single-layer convolution;
specifically, in the above steps, the bilinear upsampling (bilinear sampling) +1 × 1 convolution is adopted to unify the number of channels, so that the bilinear sampling speed is high and the sampling effect is good, and therefore the bilinear sampling method is widely used.
S220: reducing the resolution of the channel-unified feature maps by adopting 3 × 3 convolution;
It should be noted that in this step the strided 3 × 3 convolution is used to reduce the loss of information by means of learning, and conventional max pooling or combined pooling is not adopted.
S230: and carrying out additive fusion on the reduced-resolution feature maps so as to add interaction between the feature maps of different resolutions.
Specifically, in the above steps, the manner of fusing the reduced-resolution feature maps is to use addition, which may be implemented by calling a cv.
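The following is a minimal Paddle-style sketch of this cross-resolution interaction under the assumptions stated in the comments; the branch count, channel numbers and exact place of the element-wise addition are illustrative, not the patent's exact implementation.

```python
# Illustrative Paddle sketch of the S210-S230 cross-resolution interaction:
# copies of the parallel feature maps are channel-unified with bilinear
# sampling + a single 1x1 convolution, their resolution is reduced with a
# strided 3x3 convolution (learned downsampling, no pooling), and the aligned
# maps are fused by element-wise addition.
import paddle
import paddle.nn as nn
import paddle.nn.functional as F

class CrossResolutionInteraction(nn.Layer):
    def __init__(self, in_channels, unified_channels):
        super().__init__()
        # One single-layer (1x1) convolution per branch to unify the channel count.
        self.unify = nn.LayerList(
            [nn.Conv2D(c, unified_channels, 1) for c in in_channels])
        # Strided 3x3 convolution: reduces resolution by learning, not by pooling.
        self.reduce = nn.Conv2D(unified_channels, unified_channels, 3,
                                stride=2, padding=1)

    def forward(self, feats, target_hw):
        aligned = []
        for conv, x in zip(self.unify, feats):
            x = F.interpolate(x, size=target_hw, mode="bilinear",
                              align_corners=False)   # bilinear sampling
            x = conv(x)                               # unify channel number
            aligned.append(self.reduce(x))            # reduce resolution
        out = aligned[0]
        for x in aligned[1:]:
            out = paddle.add(out, x)                  # additive fusion
        return out

# Example with assumed shapes: two branches of 32 and 64 channels fused at
# half of a 64x64 reference resolution.
# fuse = CrossResolutionInteraction(in_channels=[32, 64], unified_channels=32)
# y = fuse([paddle.randn([1, 32, 64, 64]), paddle.randn([1, 64, 32, 32])],
#          target_hw=[64, 64])   # y: [1, 32, 32, 32]
```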
S300: exchanging information in the channel dimension for each pixel point on the feature map by adopting dual-channel processing in the convolution module and the processing module, wherein the dual-channel processing comprises one channel sequentially adopting a first weight matrix, a 3 × 3 depthwise convolution layer and a second weight matrix to process the input features, the result being fused with the input features output by the other channel;
in the above step, the first weight matrix is obtained by performing cross-resolution weight calculation on the input features; and the second weight matrix is obtained by carrying out space weight calculation on the input features.
As a supplementary explanation, the dual-channel processing comprises a processing procedure with two branches: one branch directly outputs the input feature map, while the other branch applies the first weight matrix, a 3 × 3 depthwise convolution and the second weight matrix to the input features; the outputs of the two branches then undergo a concatenation operation followed by a shuffle operation to obtain the final output features. The first and second weight matrices are used instead of the conventional convolution structure, so that the amount of computation is reduced while high precision and high resolution are ensured, the model is lighter, processing efficiency is improved and error is reduced.
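A minimal Paddle-style sketch of such a dual-channel (Shuffle-Block-style) unit is given below. The way the two weight matrices are computed here is an assumption for illustration: the first (cross-resolution) weighting is approximated with per-channel weights derived from global average pooling of the branch itself, and the second with a single-channel spatial weight map, whereas the patent derives the first from a cross-resolution weight calculation over the parallel branches.

```python
# Illustrative Paddle sketch of the dual-channel unit: half of the channels
# pass through untouched, the other half go through weight matrix 1 ->
# 3x3 depthwise convolution -> weight matrix 2, then the halves are
# concatenated and channel-shuffled. Weight computations are approximations.
import paddle
import paddle.nn as nn

def channel_shuffle(x, groups=2):
    n, c, h, w = x.shape
    x = paddle.reshape(x, [n, groups, c // groups, h, w])
    x = paddle.transpose(x, [0, 2, 1, 3, 4])   # swap group and channel axes
    return paddle.reshape(x, [n, c, h, w])

class DualChannelUnit(nn.Layer):
    def __init__(self, channels):
        super().__init__()
        half = channels // 2
        # "First weight matrix": per-channel weights (replaces a 1x1 convolution).
        self.channel_weight = nn.Sequential(
            nn.AdaptiveAvgPool2D(1), nn.Conv2D(half, half, 1), nn.Sigmoid())
        # 3x3 depthwise convolution (groups == channels).
        self.dwconv = nn.Conv2D(half, half, 3, padding=1, groups=half)
        # "Second weight matrix": one spatial weight per pixel.
        self.spatial_weight = nn.Sequential(nn.Conv2D(half, 1, 1), nn.Sigmoid())

    def forward(self, x):
        half = x.shape[1] // 2
        x1, x2 = x[:, :half], x[:, half:]      # the two channels (branches)
        x2 = x2 * self.channel_weight(x2)      # first weight matrix
        x2 = self.dwconv(x2)                   # 3x3 depthwise convolution
        x2 = x2 * self.spatial_weight(x2)      # second weight matrix
        out = paddle.concat([x1, x2], axis=1)  # fuse with the pass-through branch
        return channel_shuffle(out)            # shuffle so the halves mix
```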
S400: extracting features from the feature map by adopting separable convolution in a fusion module so as to perform multi-resolution fusion;
As an illustration, the target model in this embodiment differs from the conventional lightweight HRNet network, whose Stem part includes two 3 × 3 convolutions with a step size of 2 and whose Stage part includes a residual module and a fusion module: by replacing the second 3 × 3 convolution in the Stem and all processing modules with dual-channel processing (Shuffle Block), and by replacing the conventional convolution in the fusion module with separable convolution, the target model focuses more on small-area parts during recognition and the accuracy of the recognition result is improved.
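A minimal Paddle-style sketch of the depthwise-separable convolution used in the fusion module follows; the normalization and activation choices are assumptions for illustration.

```python
# Illustrative Paddle sketch of a separable convolution: a depthwise 3x3
# convolution extracts per-channel spatial features, then a pointwise 1x1
# convolution mixes the channels.
import paddle.nn as nn

class SeparableConv(nn.Layer):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2D(in_ch, in_ch, 3, stride=stride,
                                   padding=1, groups=in_ch)  # one filter per channel
        self.pointwise = nn.Conv2D(in_ch, out_ch, 1)         # mix channels
        self.bn = nn.BatchNorm2D(out_ch)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))
```

Relative to an ordinary 3 × 3 convolution, this factorization reduces the computation by a factor of roughly 1/C_out + 1/9, which is what keeps the fusion module lightweight.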
S500: and acquiring a target image, and processing the target image by adopting the target model to obtain a target result with a defect mark.
In the above step, the target image is an image of the part surface containing the defect to be identified, and the defect mark is output by the target model; the target model is improved on the basis of the HRNet network and carries out multi-scale, high-precision target identification of the part surface defect features to obtain the target result.
The method introduces the PaddleSeg framework based on computer vision technology, which supports training acceleration strategies such as multi-process I/O and multi-card parallelism and can greatly reduce the video memory and memory overhead of a segmentation model, completing image training at lower cost and higher efficiency. Dual-channel processing is fused on the basis of the original HRNet network so that the trained target model tends to be lightweight, and a weight matrix is adopted in place of the 1 × 1 convolution in the dual-channel processing operation, which improves the computing capability of the detection model and increases precision while maintaining the high-resolution performance, thereby solving the problem of low detection accuracy caused by small-area features being lost through compression when existing image recognition is applied to part surface defects.
Example two: the invention also provides a device 6 for detecting surface defects of parts, which is shown in fig. 4 and comprises the following components:
the training module 61 is configured to establish an initial model based on a high-resolution network, train the initial model with training data, and obtain a target model, where the training data includes a surface map of a part with a defect label;
specifically, GPU resources are called to run under a PaddleSeg framework to train the initial model, part surface maps in a lossless compression format are collected, and auxiliary marking tools EISeg are adopted to mark the part surface maps.
The processing module 62 is configured to perform sampling processing four times in the target model, parallel the feature maps with different resolutions through the processing of the convolution module, the processing module and the fusion module, and add interaction between the feature maps with different resolutions on the basis of the parallel feature maps;
exchanging information in the channel dimension for each pixel point on the feature map by adopting dual-channel processing in the convolution module and the processing module, wherein the dual-channel processing comprises one channel sequentially adopting a first weight matrix, a 3 × 3 depthwise convolution layer and a second weight matrix to process the input features, the result being fused with the input features output by the other channel;
extracting features from the feature map by adopting separable convolution in a fusion module so as to perform multi-resolution fusion;
it should be noted that the first weight matrix is obtained by performing cross-resolution weight calculation on the input features; and the second weight matrix is obtained by carrying out space weight calculation on the input features.
Specifically, the target model is formed based on HRNet, which has four parallel branches. The target model constructed based on HRNet in this embodiment includes a Stem part and a Stage part: the Stem part includes convolution modules (one 3 × 3 convolution with a step size of 2 plus dual-channel processing), and the Stage part includes a plurality of processing modules (dual-channel processing) and one fusion module (multi-resolution). Compared with the existing lightweight HRNet network (whose Stem part includes two 3 × 3 convolutions with a step size of 2 and whose Stage part includes a residual module and a fusion module), the second 3 × 3 convolution in the Stem and all processing modules are replaced with dual-channel processing (Shuffle Block), and the conventional convolution in the fusion module is replaced with separable convolution. The dual-channel processing comprises a processing procedure with two branches: one branch directly outputs the input feature map, while the other branch applies the first weight matrix, a 3 × 3 depthwise convolution and the second weight matrix to the input features; the outputs of the two branches then undergo a concatenation operation followed by a shuffle operation to obtain the final output features.
And the identification module 63 is configured to obtain a target image, process the target image by using the target model, and obtain a target result with a defect mark.
In this embodiment, the PaddleSeg framework based on computer vision technology is introduced into the training module 61 to complete the training of the target model; in the target model (which may be stored in the processing module 62), a weight matrix is adopted in place of the convolution in the dual-channel processing operation, on the basis of the existing HRNet network, to maintain the high-resolution representation and increase precision; and the identification module 63 processes the target image based on the operation of the target model in the processing module to obtain a target result with defect marks, so as to overcome the problem of low detection accuracy caused by small-area features being lost through compression when existing image recognition is applied to part surface defects.
Example three:
in order to achieve the above object, the present invention further provides a computer device 7, as shown in fig. 5, the computer device may be a smart phone, a tablet computer, a notebook computer, a desktop computer, etc. executing a program. The computer device of the embodiment at least includes but is not limited to: a memory 71, a processor 72, which may be communicatively coupled to each other via a device bus, as shown in FIG. 5. It should be noted that fig. 5 only shows a computer device with components, but it should be understood that not all of the shown components are required to be implemented, and more or fewer components may be implemented instead.
In this embodiment, the memory 71 may be an internal storage unit of the computer device, such as a hard disk or memory of the computer device. In other embodiments, the memory 71 may also be an external storage device of the computer device, such as a plug-in hard disk equipped on the computer device. In this embodiment, the memory 71 is generally used for storing the operating system and various application software installed on the computer device, such as the program code, training data, and the like of the part surface defect detection method of the embodiment. Further, the memory 71 may also be used to temporarily store various types of data that have been output or are to be output.
Processor 72 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data Processing chip in some embodiments. The processor 72 generally serves to control the overall operation of the computer apparatus. In this embodiment, the processor 72 is configured to run the program code stored in the memory 71 or process data, for example, run a part surface defect detecting apparatus, so as to implement the part surface defect detecting method according to an embodiment.
Example four:
To achieve the above objects, the present invention also provides a computer-readable storage medium, which includes a plurality of storage media such as a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or D memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Programmable Read-Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, a server, an App application store, etc., and on which a computer program is stored that, when executed by the processor 72, implements the corresponding functions. When executed by the processor 72, the computer program stored on the computer-readable storage medium of this embodiment implements the part surface defect detection method of the first embodiment.
It should be noted that the present invention has been described above by way of preferred embodiments and is not limited to these embodiments; those skilled in the art may adapt and modify the above-disclosed embodiments into equivalent embodiments without departing from the scope of the present invention.

Claims (10)

1. A method for detecting surface defects of a part is characterized by comprising the following steps:
establishing an initial model based on a high-resolution network, and training the initial model by adopting training data to obtain a target model, wherein the training data comprises a part surface map with defect marks;
performing sampling processing for four times in the target model, performing parallel connection on the feature maps with different resolutions through the processing of a convolution module, a processing module and a fusion module, and adding interaction among the feature maps with different resolutions on the basis of the parallel connection feature maps;
exchanging information in the channel dimension for each pixel point on the feature map by adopting dual-channel processing in the convolution module and the processing module, wherein the dual-channel processing comprises one channel sequentially adopting a first weight matrix, a 3 × 3 depthwise convolution layer and a second weight matrix to process the input features, the result being fused with the input features output by the other channel;
extracting features from the feature map by adopting separable convolution in a fusion module so as to perform multi-resolution fusion;
and acquiring a target image, and processing the target image by adopting the target model to obtain a target result with a defect mark.
2. The detection method according to claim 1, characterized in that:
the first weight matrix is obtained by carrying out cross resolution weight calculation on the input features;
and the second weight matrix is obtained by carrying out space weight calculation on the input features.
3. The detection method according to claim 1, wherein adding interaction between feature maps with different resolutions on the basis of the feature maps after parallel connection comprises:
copying feature maps with different resolutions, and unifying the number of channels by adopting bilinear sampling and single-layer convolution;
reducing the resolution of the channel-unified feature maps by adopting 3 × 3 convolution;
and carrying out additive fusion on the reduced-resolution feature maps so as to add interaction between the feature maps of different resolutions.
4. The detection method according to claim 1, characterized in that:
and calling GPU resources to run under a PaddleSeg framework to train the initial model, and obtaining a target model.
5. The method of claim 1, wherein before training the initial model with training data, the method comprises:
collecting the surface images of the parts in a lossless compression format, marking the surface images of the parts by adopting an auxiliary marking tool, and generating the surface images of the parts with the defect marks as training data.
6. The inspection method of claim 5, wherein after marking the surface map of each part with the auxiliary marking tool, comprising:
and converting the gray mark into a pseudo-color mark by adopting a preset conversion tool.
7. The detection method according to claim 1, wherein the establishing an initial model based on the high resolution network, and the training of the initial model using training data to obtain a target model comprises:
calling a preset configuration file to configure parameters for the initial model, and adjusting the parameters in the training process until a target model is obtained;
and calling a visual interface to output an output image in the training process so as to carry out real-time monitoring.
8. A part surface defect detection device is characterized by comprising the following components:
the training module is used for establishing an initial model based on a high-resolution network, and training the initial model by adopting training data to obtain a target model, wherein the training data comprises a part surface map with defect marks;
the processing module is used for carrying out sampling processing for four times in the target model, connecting the feature maps with different resolutions in parallel, and adding interaction among the feature maps with different resolutions on the basis of the feature maps connected in parallel, wherein each stage comprises the processing of the convolution module, the processing module and the fusion module;
exchanging information in the channel dimension for each pixel point on the feature map by adopting dual-channel processing in the convolution module and the processing module, wherein the dual-channel processing comprises one channel sequentially adopting a first weight matrix, a 3 × 3 depthwise convolution layer and a second weight matrix to process the input features, the result being fused with the input features output by the other channel;
extracting features from the feature map by adopting separable convolution in a fusion module so as to perform multi-resolution fusion;
and the identification module is used for acquiring a target image, processing the target image by adopting the target model and acquiring a target result with a defect mark.
9. A computer device, characterized by: the computer device includes:
a memory for storing executable program code; and
a processor for invoking the executable program code in the memory to execute the detection method of any one of claims 1 to 7.
10. A computer-readable storage medium having stored thereon a computer program, characterized in that:
the computer program when being executed by a processor realizes the steps of the detection method of any one of claims 1 to 7.
CN202210549864.3A 2022-05-17 2022-05-17 Part surface defect detection method, device, equipment and medium Pending CN115063348A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210549864.3A CN115063348A (en) 2022-05-17 2022-05-17 Part surface defect detection method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210549864.3A CN115063348A (en) 2022-05-17 2022-05-17 Part surface defect detection method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN115063348A true CN115063348A (en) 2022-09-16

Family

ID=83199037

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210549864.3A Pending CN115063348A (en) 2022-05-17 2022-05-17 Part surface defect detection method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN115063348A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117710379A (en) * 2024-02-06 2024-03-15 杭州灵西机器人智能科技有限公司 Nondestructive testing model construction method, nondestructive testing device and medium
CN117710379B (en) * 2024-02-06 2024-05-10 杭州灵西机器人智能科技有限公司 Nondestructive testing model construction method, nondestructive testing device and medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination