CN117115158B - Defect detection method and device based on deep contrast learning

Publication number: CN117115158B (grant); application published as CN117115158A
Application number: CN202311373822.XA
Authority: CN (China)
Original language: Chinese (zh)
Prior art keywords: defect detection, detection model, network, defect, feature
Legal status: Active (granted; the status listed is an assumption, not a legal conclusion)
Inventors: 于洋 (Yu Yang), 黄雪峰 (Huang Xuefeng), 熊海飞 (Xiong Haifei)
Current and original assignee: Shenzhen Xinrun Fulian Digital Technology Co., Ltd.
Application filed by Shenzhen Xinrun Fulian Digital Technology Co., Ltd., with priority to CN202311373822.XA

Classifications

    • G06T7/0002: Image analysis; inspection of images, e.g. flaw detection
    • G06T7/0004: Industrial image inspection
    • G06T2207/20081: Training; learning
    • G06T2207/20084: Artificial neural networks [ANN]
    • G06T2207/20221: Image fusion; image merging
    • G06T2207/30108: Industrial image inspection
    • G06T2207/30164: Workpiece; machine component
    • Y02P90/30: Computing systems specially adapted for manufacturing (enabling technologies with a potential contribution to greenhouse gas emissions mitigation)

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a defect detection method and device based on deep contrast learning. The method comprises the following steps: training a defect detection network with supervision on defect sample data to obtain a first defect detection model; training a feature extraction network without supervision on workpiece structure data to obtain a second defect detection model; fusing the first defect detection model and the second defect detection model to generate a third defect detection model, and training the third defect detection model on the defect sample data to obtain a target defect detection model; and collecting a workpiece picture of a target workpiece and performing defect detection on the workpiece picture with the target defect detection model. Embodiments of the invention solve the technical problem of the high false-detection rate of defect detection methods in the related art and reduce the false-detection rate for workpiece defects.

Description

Defect detection method and device based on deep contrast learning
Technical Field
The invention relates to the field of computers, in particular to a defect detection method and device based on deep contrast learning.
Background
In the related art, defect detection methods based on deep learning or on traditional algorithms rely on recognizing defect features alone, so they cannot truly distinguish defects from product structures, and missed detections and over-detections are unavoidable and often severe. Specifically: (1) existing deep learning algorithms train on a large number of defect pictures so that a deep neural network learns deep and shallow defect features and thereby detects defects, but such a network easily misses defects that resemble the product structure and easily over-detects product structures that resemble defect features; (2) defect detection based on traditional algorithms is generally rule-based, performs poorly under complex conditions, and is difficult to deploy in practice.
For the above problems in the related art, no efficient and accurate solution has yet been found.
Disclosure of Invention
The invention provides a defect detection method and device based on deep contrast learning, which are used for solving the technical problems in the related art.
According to an embodiment of the present invention, there is provided a defect detection method based on deep contrast learning, including: training a defect detection network with supervision on defect sample data to obtain a first defect detection model, wherein the first defect detection model comprises: a first detection head, a first backbone network, and a first feature fusion network; training a feature extraction network without supervision on workpiece structure data to obtain a second defect detection model, wherein the second defect detection model comprises: a second detection head, a second backbone network, and a second feature fusion network; fusing the first defect detection model and the second defect detection model to generate a third defect detection model, and training the third defect detection model on the defect sample data to obtain a target defect detection model; and collecting a workpiece picture of a target workpiece and performing defect detection on the workpiece picture with the target defect detection model.
Optionally, fusing the first defect detection model and the second defect detection model to generate the third defect detection model includes: extracting a first feature processing network from the first defect detection model, extracting a second feature processing network from the second defect detection model, and freezing the network parameters of both, wherein the first feature processing network is the first backbone network connected in series with the first feature fusion network, and the second feature processing network is the second backbone network connected in series with the second feature fusion network; and connecting the first and second feature processing networks in parallel and then to a third detection head through a feature splicing layer, to obtain the third defect detection model, wherein the feature splicing layer splices the features output by the two feature processing networks along the channel dimension and inputs the spliced features into the third detection head.
Optionally, training the feature extraction network without supervision on the workpiece structure data to obtain the second defect detection model includes: acquiring a set of defect-free pictures of the workpiece to be detected and configuring this set as the workpiece structure data; generating incomplete pictures from the defect-free pictures in the workpiece structure data, and creating a self-supervised annotation dataset from the incomplete pictures and the original pictures in the defect-free picture set; and training the feature extraction network with self-supervision on the self-supervised annotation dataset to obtain the second defect detection model.
Optionally, generating the incomplete pictures from the defect-free pictures in the workpiece structure data and creating the self-supervised annotation dataset from the incomplete pictures and the original pictures includes: for each target defect-free picture in the workpiece structure data, cutting out a sub-image block at a random or designated position of the target defect-free picture to obtain an incomplete picture; and creating self-supervised annotation data for the incomplete picture, with the target defect-free picture corresponding to the incomplete picture serving as the label data, to obtain the self-supervised annotation dataset.
Optionally, training the feature extraction network with self-supervision on the self-supervised annotation dataset to obtain the second defect detection model includes: training the feature extraction network with the incomplete pictures as input data and the label data as output data, wherein the self-supervised annotation dataset comprises multiple groups of self-supervised annotation data, each group comprising an incomplete picture and the corresponding label data; and calculating the loss value of the feature extraction network's loss function and, based on the loss value, optimizing the network's parameters with a back-propagation algorithm to obtain the second defect detection model.
Optionally, training the third defect detection model on the defect sample data to obtain the target defect detection model includes: extracting the defect pictures and defect labels from the defect sample data; and training the third detection head of the third defect detection model with supervision, using the defect pictures as input data and the defect labels as output data, to obtain the target defect detection model.
Optionally, performing defect detection on the workpiece picture with the target defect detection model includes: extracting a first feature of the workpiece picture with the first feature processing network and a second feature with the second feature processing network, wherein the first feature characterizes defect information of the target workpiece, the second feature characterizes workpiece structure information of the target workpiece, and the target defect detection model comprises the first feature processing network, the second feature processing network, the feature splicing layer, and the third detection head; splicing the first feature and the second feature into a fused feature with the feature splicing layer; and inputting the fused feature into the third detection head and outputting a defect detection result for the workpiece picture.
According to another embodiment of the present invention, there is provided a defect detection apparatus based on deep contrast learning, including: a first training module, configured to train a defect detection network with supervision on defect sample data to obtain a first defect detection model, wherein the first defect detection model comprises: a first detection head, a first backbone network, and a first feature fusion network; a second training module, configured to train a feature extraction network without supervision on workpiece structure data to obtain a second defect detection model, wherein the second defect detection model comprises: a second detection head, a second backbone network, and a second feature fusion network; a third training module, configured to fuse the first defect detection model and the second defect detection model to generate a third defect detection model, and to train the third defect detection model on the defect sample data to obtain a target defect detection model; and a detection module, configured to collect a workpiece picture of a target workpiece and perform defect detection on the workpiece picture with the target defect detection model.
Optionally, the third training module includes: an extraction unit, configured to extract a first feature processing network from the first defect detection model, extract a second feature processing network from the second defect detection model, and freeze the network parameters of both, wherein the first feature processing network is the first backbone network connected in series with the first feature fusion network, and the second feature processing network is the second backbone network connected in series with the second feature fusion network; and a splicing unit, configured to connect the first and second feature processing networks in parallel and then to a third detection head through a feature splicing layer, to obtain the third defect detection model, wherein the feature splicing layer splices the features output by the two feature processing networks along the channel dimension and inputs the spliced features into the third detection head.
Optionally, the second training module includes: a configuration unit, configured to acquire a set of defect-free pictures of the workpiece to be detected and configure this set as the workpiece structure data; a creation unit, configured to generate incomplete pictures from the defect-free pictures in the workpiece structure data and create a self-supervised annotation dataset from the incomplete pictures and the original pictures in the defect-free picture set; and a training unit, configured to train the feature extraction network with self-supervision on the self-supervised annotation dataset to obtain a second defect detection model.
Optionally, the creation unit includes: a cut-out subunit, configured to cut out, for each target defect-free picture in the workpiece structure data, a sub-image block at a random or designated position of the target defect-free picture to obtain an incomplete picture; and a creation subunit, configured to create self-supervised annotation data for the incomplete picture, with the target defect-free picture corresponding to the incomplete picture serving as the label data, to obtain the self-supervised annotation dataset.
Optionally, the training unit includes: a training subunit, configured to train the feature extraction network with the incomplete pictures as input data and the label data as output data, wherein the self-supervised annotation dataset comprises multiple groups of self-supervised annotation data, each group comprising an incomplete picture and the corresponding label data; and an optimization subunit, configured to calculate the loss value of the feature extraction network's loss function and, based on the loss value, optimize the network's parameters with a back-propagation algorithm to obtain the second defect detection model.
Optionally, the third training module includes: an extraction unit, configured to extract the defect pictures and defect labels from the defect sample data; and a training unit, configured to train the third detection head of the third defect detection model with supervision, using the defect pictures as input data and the defect labels as output data, to obtain the target defect detection model.
Optionally, the detection module includes: an extraction unit, configured to extract a first feature of the workpiece picture with the first feature processing network and a second feature with the second feature processing network, wherein the first feature characterizes defect information of the target workpiece, the second feature characterizes workpiece structure information of the target workpiece, and the target defect detection model comprises the first feature processing network, the second feature processing network, the feature splicing layer, and the third detection head; a splicing unit, configured to splice the first feature and the second feature into a fused feature with the feature splicing layer; and a detection unit, configured to input the fused feature into the third detection head and output a defect detection result for the workpiece picture.
According to a further embodiment of the invention, there is also provided an electronic device comprising a memory in which a computer program is stored and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
According to embodiments of the invention, a defect detection network is trained with supervision on defect sample data to obtain a first defect detection model comprising a first detection head, a first backbone network, and a first feature fusion network; a feature extraction network is trained without supervision on workpiece structure data to obtain a second defect detection model comprising a second detection head, a second backbone network, and a second feature fusion network; the first and second defect detection models are fused to generate a third defect detection model, which is trained on the defect sample data to obtain a target defect detection model; and a workpiece picture of a target workpiece is collected and defect detection is performed on it with the target defect detection model. Contrast learning across multiple models and multiple kinds of samples strengthens the discriminative constraints on defect recognition, yielding a deep learning detection method that truly understands the difference between workpiece structures and defect features. This prevents missed detections and over-detections caused by structural features and reduces their probability, solving the technical problem of the high false-detection rate of defect detection methods in the related art and lowering the false-detection rate for workpiece defects.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention; they do not limit the invention. In the drawings:
FIG. 1 is a block diagram of the hardware architecture of a computer according to an embodiment of the present invention;
FIG. 2 is a flow chart of a depth contrast learning based defect detection method according to an embodiment of the present invention;
FIG. 3 is a training schematic of a third defect detection model in an embodiment of the present invention;
FIG. 4 is a block diagram of a defect detection apparatus based on deep contrast learning according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art may better understand the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. The described embodiments are evidently only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments herein without creative effort shall fall within the scope of protection of the present application. It should be noted that, where no conflict arises, the embodiments of the present application and the features in them may be combined with one another.
It should be noted that the terms "first", "second", and the like in the description, claims, and drawings of the present application are used to distinguish similar objects and not necessarily to describe a particular order or sequence. It is to be understood that data so used may be interchanged where appropriate, so that the embodiments of the present application described herein can be implemented in orders other than those illustrated or described. Furthermore, the terms "comprises", "comprising", and "having", and any variations thereof, are intended to cover a non-exclusive inclusion: a process, method, article, or apparatus comprising a list of steps or elements is not necessarily limited to those expressly listed, but may include other steps or elements not expressly listed or inherent to it.
Example 1
The method embodiment provided in Embodiment 1 of the present application may be executed in a controller, a computer, an industrial robot, or a similar computing device. Taking a computer as an example, FIG. 1 is a block diagram of the hardware structure of a computer according to an embodiment of the present invention. As shown in FIG. 1, the computer may include one or more processors 102 (only one is shown in FIG. 1; the processor 102 may include, but is not limited to, a microprocessor (MCU), a programmable logic device (FPGA), or another processing device) and a memory 104 for storing data, and optionally also a transmission device 106 for communication functions and an input/output device 108. It will be appreciated by those of ordinary skill in the art that the configuration shown in FIG. 1 is merely illustrative and does not limit the configuration of the computer described above. For example, the computer may include more or fewer components than shown in FIG. 1, or have a different configuration.
The memory 104 may be used to store computer programs, for example software programs and modules of application software, such as the computer program corresponding to the defect detection method based on deep contrast learning in an embodiment of the present invention. The processor 102 executes the computer program stored in the memory 104 to perform various functional applications and data processing, that is, to implement the method described above. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102 and connected to the computer via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of such a network include a wireless network provided by the communications provider of the computer. In one example, the transmission device 106 includes a network interface controller (NIC) that can connect to other network devices through a base station to communicate with the internet. In another example, the transmission device 106 may be a radio frequency (RF) module configured to communicate with the internet wirelessly.
In this embodiment, a defect detection method based on deep contrast learning is provided. FIG. 2 is a flowchart of the defect detection method based on deep contrast learning according to an embodiment of the present invention. As shown in FIG. 2, the flow includes the following steps:
Step S202: train a defect detection network with supervision on defect sample data to obtain a first defect detection model, wherein the first defect detection model comprises: a first detection head, a first backbone network, and a first feature fusion network.
The defect detection network of this embodiment takes a sufficient amount of labeled image data containing defects as the defect sample data, with no fewer than 500 pictures per defect type, and is trained in a supervised manner using architectures such as object detection, instance segmentation, or image segmentation. The training process comprises: collecting image data; labeling; data review; supervised model training; and model testing and feedback. After the first defect detection model is trained, the backbone network and feature fusion network (everything except the detection head) are frozen and their parameters extracted for later use.
The backbone network in this embodiment extracts features, the feature fusion network fuses the deep and shallow features extracted by the backbone network, and the detection head processes the fused features according to the specific task to produce the final detection result.
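For concreteness, the sketch below (a PyTorch-style illustration; the embodiment does not name a framework, and every class and function name here is an assumption) shows the backbone, feature fusion network, and detection head composed in series, and the freezing of everything except the detection head after supervised training:

```python
import torch.nn as nn

class DefectDetector(nn.Module):
    """Hypothetical first model: backbone -> feature fusion -> detection head."""

    def __init__(self, backbone: nn.Module, fusion: nn.Module, head: nn.Module):
        super().__init__()
        self.backbone = backbone  # extracts deep and shallow features
        self.fusion = fusion      # fuses the multi-scale features
        self.head = head          # turns fused features into the task output

    def forward(self, x):
        return self.head(self.fusion(self.backbone(x)))

def freeze_feature_processing(model: DefectDetector) -> None:
    """After supervised training, freeze all parts except the detection head."""
    for module in (model.backbone, model.fusion):
        for p in module.parameters():
            p.requires_grad = False
```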
Step S204: train a feature extraction network without supervision on workpiece structure data to obtain a second defect detection model, wherein the second defect detection model comprises: a second detection head, a second backbone network, and a second feature fusion network.
In this embodiment, the first detection head of the first defect detection model detects whether a defect exists in an input workpiece picture and, if so, its type, while the second detection head of the second defect detection model extracts the structural features of the input workpiece picture. Structural features here mean the structural attributes of the workpiece, such as concave-convex structures, perforation structures, and movable-connection structures. Training the feature extraction network strengthens the understanding of structural features by the second backbone network and the second feature fusion network, so that workpiece pictures captured in different placement scenes (position, background, lighting, and so on) do not distort defect recognition and normal workpieces are not identified as defective.
Step S206: fuse the first defect detection model and the second defect detection model to generate a third defect detection model, and train the third defect detection model on the defect sample data to obtain a target defect detection model.
Step S208: collect a workpiece picture of the target workpiece, and perform defect detection on the workpiece picture with the target defect detection model.
Through the above steps, a defect detection network is trained with supervision on defect sample data to obtain a first defect detection model comprising a first detection head, a first backbone network, and a first feature fusion network; a feature extraction network is trained without supervision on workpiece structure data to obtain a second defect detection model comprising a second detection head, a second backbone network, and a second feature fusion network; the first and second defect detection models are fused to generate a third defect detection model, which is trained on the defect sample data to obtain a target defect detection model; and a workpiece picture of a target workpiece is collected and defect detection is performed on it with the target defect detection model. Contrast learning across multiple models and multiple kinds of samples strengthens the discriminative constraints on defect recognition, yielding a deep learning detection method that truly understands the difference between workpiece structures and defect features. This prevents missed detections and over-detections caused by structural features and reduces their probability, solving the technical problem of the high false-detection rate of defect detection methods in the related art and lowering the false-detection rate for workpiece defects.
In one implementation of this embodiment, training the feature extraction network without supervision on the workpiece structure data to obtain the second defect detection model comprises:
S11: acquire a set of defect-free pictures of the workpiece to be detected, and configure this set as the workpiece structure data;
The defect-free picture set comprises multiple defect-free pictures; the features extracted from a defect-free picture are the structural features of the corresponding workpiece.
S12: generate incomplete pictures from the defect-free pictures in the workpiece structure data, and create a self-supervised annotation dataset from the incomplete pictures and the original pictures in the defect-free picture set;
In one example, generating the incomplete pictures and creating the self-supervised annotation dataset comprises: for each target defect-free picture in the workpiece structure data, cutting out a sub-image block at a random or designated position of that picture to obtain an incomplete picture; and creating self-supervised annotation data for the incomplete picture, with the target defect-free picture corresponding to it serving as the label data, to obtain the self-supervised annotation dataset. A minimal sketch of the cut-out step follows.
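This sketch assumes (C, H, W) tensor pictures and a square block; the block size of 64 is illustrative, since the embodiment fixes neither the shape nor the size of the sub-image block:

```python
import random
from typing import Optional, Tuple

import torch

def make_incomplete(picture: torch.Tensor, block: int = 64,
                    position: Optional[Tuple[int, int]] = None):
    """Cut a sub-image block out of a defect-free picture.

    position: optional (top, left) for a designated region; if omitted,
    a random location is used. Returns (incomplete picture, label),
    where the label is the untouched original picture.
    """
    _, h, w = picture.shape
    if position is None:
        top = random.randint(0, h - block)
        left = random.randint(0, w - block)
    else:
        top, left = position
    incomplete = picture.clone()
    incomplete[:, top:top + block, left:left + block] = 0.0  # blank the block
    return incomplete, picture
```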
S13: train the feature extraction network with self-supervision on the self-supervised annotation dataset to obtain the second defect detection model.
In one example, this self-supervised training comprises: training the feature extraction network with the incomplete pictures as input data and the label data as output data, wherein the self-supervised annotation dataset comprises multiple groups of self-supervised annotation data, each group comprising an incomplete picture and its corresponding label data; and calculating the loss value of the feature extraction network's loss function and, based on that value, optimizing the network's parameters with a back-propagation algorithm to obtain the second defect detection model.
The feature extraction network is trained on the obtained self-supervised annotation dataset, where the label for each sample is the original picture from which the sub-image block was cut. During training, the network receives the incomplete picture (with the sub-image block removed) and tries to recover the cut-out part, the corresponding region of the original picture serving as the label data; the loss value of the loss function is computed and the parameters are optimized by back-propagation until the network's ability to recover sub-image blocks exceeds a set threshold.
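A sketch of this self-supervised training loop follows. The mean-squared reconstruction error is an assumption (the embodiment speaks only of "a loss function"), and the stop-when-recovery-exceeds-a-threshold check is reduced to a fixed epoch count for brevity:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train_feature_extractor(net: nn.Module, loader: DataLoader,
                            epochs: int = 50, lr: float = 1e-4) -> nn.Module:
    """Self-supervised training: recover the blanked-out sub-image block."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()  # assumed reconstruction loss
    for _ in range(epochs):
        for incomplete, original in loader:  # pairs from the annotation dataset
            recovered = net(incomplete)          # network restores the cut-out part
            loss = loss_fn(recovered, original)  # original picture as label data
            opt.zero_grad()
            loss.backward()                      # back-propagation
            opt.step()                           # parameter optimization
    return net
```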
The feature extraction network of this embodiment is a self-supervised image generation network based on a vision Transformer. It extracts the appearance and structural features of the product using self-attention and mutual-attention mechanisms, propagating information layer by layer so that the deep layers form an abstract representation of the product structure; through training on large amounts of data, the network learns a compressed, abstract encoding of product structural features, strengthening its ability to recognize them. The training process comprises: collecting defect-free pictures of various workpieces, with clear product structures and a sufficient number of pictures per product type; if a product exceeds a preset size, taking several pictures, numbering them, and ensuring that they cover the product evenly; creating the self-supervised annotation dataset (with the original picture as the label) by cutting out sub-image blocks at random positions and manually cutting out blocks at specific parts of the product structure; training the feature extraction network with self-supervision on the created dataset; and model testing and feedback. After the second defect detection model is trained, the backbone network and feature fusion network (everything except the detection head) are frozen and their parameters extracted for the subsequent training that produces the target defect detection model.
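The embodiment does not disclose the generator's exact architecture. As a rough, hypothetical stand-in, the sketch below uses a plain ViT-style encoder (patch embedding followed by stacked self-attention layers) with a linear pixel decoder; the mutual-attention mechanism mentioned above is omitted, and all sizes are illustrative:

```python
import torch
import torch.nn as nn

class ViTReconstructor(nn.Module):
    """Illustrative self-supervised image generator (not the patent's exact design)."""

    def __init__(self, img: int = 224, patch: int = 16,
                 dim: int = 256, depth: int = 6, heads: int = 8):
        super().__init__()
        self.patch = patch
        n_tokens = (img // patch) ** 2
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)  # self-attention stack
        self.decode = nn.Linear(dim, 3 * patch * patch)     # tokens back to pixels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, _, h, w = x.shape
        tokens = self.embed(x).flatten(2).transpose(1, 2) + self.pos  # (B, N, dim)
        tokens = self.encoder(tokens)
        pixels = self.decode(tokens)  # (B, N, 3 * patch * patch)
        pixels = pixels.view(b, h // self.patch, w // self.patch,
                             3, self.patch, self.patch)
        return pixels.permute(0, 3, 1, 4, 2, 5).reshape(b, 3, h, w)
```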
In this embodiment, fusing the first defect detection model and the second defect detection model to generate the third defect detection model comprises: extracting a first feature processing network from the first defect detection model, extracting a second feature processing network from the second defect detection model, and freezing the network parameters of both, wherein the first feature processing network is the first backbone network connected in series with the first feature fusion network, and the second feature processing network is the second backbone network connected in series with the second feature fusion network; and connecting the first and second feature processing networks in parallel and then to a third detection head through a feature splicing layer, to obtain the third defect detection model, wherein the feature splicing layer splices the features output by the two feature processing networks along the channel dimension and inputs the spliced features into the third detection head.
After freezing, the parameters of the frozen backbone networks and feature fusion networks are no longer updated in subsequent training; only the parameters of the detection head are updated.
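A sketch of the fused third model follows. It assumes the two frozen feature processing branches output feature maps with matching spatial dimensions, so they can be spliced along the channel dimension (dim=1 in the NCHW layout):

```python
import torch
import torch.nn as nn

class FusedDefectDetector(nn.Module):
    """Third model: two frozen branches in parallel, channel concat, new head."""

    def __init__(self, branch_a: nn.Module, branch_b: nn.Module, head: nn.Module):
        super().__init__()
        self.branch_a = branch_a  # frozen backbone + fusion of the first model
        self.branch_b = branch_b  # frozen backbone + fusion of the second model
        self.head = head          # third detection head, the only trainable part

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        fa = self.branch_a(x)               # defect features
        fb = self.branch_b(x)               # workpiece structure features
        fused = torch.cat([fa, fb], dim=1)  # feature splicing layer
        return self.head(fused)
```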
In this embodiment, training the third defect detection model on the defect sample data to obtain the target defect detection model comprises: extracting the defect pictures and defect labels from the defect sample data; and training the third detection head of the third defect detection model with supervision, using the defect pictures as input data and the defect labels as output data, to obtain the target defect detection model.
The two networks frozen in the previous two steps are connected in parallel, the features they output are spliced along the channel dimension, and the spliced features are input into the third detection head network; the network parameters of the third detection head are then trained on the defect sample data while the frozen parameters remain unchanged, giving the final target defect detection network. FIG. 3 is a schematic diagram of training the third defect detection model in an embodiment of the present invention: during training, the supervised defect detection network (the first defect detection model, comprising several DNN (deep neural network) modules I) and the unsupervised feature extraction network (the second defect detection model, comprising several DNN modules II) both keep their parameters frozen, and the features they output are spliced and input into the detection head.
Each defect picture is passed through the supervised defect detection network and the unsupervised feature extraction network respectively; the features output by the two networks are spliced and input into the third detection head, and supervised training is performed using the manually annotated labels in the defect sample data as the target output of the detection head. During the back-propagation (parameter optimization) of this supervised training, the frozen network parameters no longer change, so only the parameters of the detection head are optimized.
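A sketch of this head-only supervised training: handing the optimizer only the head's parameters realizes the requirement that the frozen parameters stay unchanged, and the loss function (which depends on the detection task) is passed in rather than assumed:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train_third_head(model: nn.Module, loader: DataLoader, loss_fn: nn.Module,
                     epochs: int = 30, lr: float = 1e-3) -> nn.Module:
    """Supervised training of the third detection head of the fused model."""
    opt = torch.optim.Adam(model.head.parameters(), lr=lr)  # frozen branches excluded
    model.train()
    for _ in range(epochs):
        for picture, label in loader:  # defect pictures and manual labels
            loss = loss_fn(model(picture), label)
            opt.zero_grad()
            loss.backward()  # gradients flow, but only head parameters are stepped
            opt.step()
    return model
```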
In one implementation scenario of this embodiment, performing defect detection on the workpiece picture with the target defect detection model comprises: extracting a first feature of the workpiece picture with the first feature processing network and a second feature with the second feature processing network, wherein the first feature characterizes defect information of the target workpiece, the second feature characterizes workpiece structure information of the target workpiece, and the target defect detection model comprises the first feature processing network, the second feature processing network, the feature splicing layer, and the third detection head; splicing the first feature and the second feature into a fused feature with the feature splicing layer; and inputting the fused feature into the third detection head and outputting a defect detection result for the workpiece picture.
In the deployment and operation stage, the workpiece picture to be detected is input into the network structure of the target defect detection model, yielding the first feature (defect features) and the second feature (part structure); the two features are spliced; and the spliced features are input into the detection head of the target defect detection model to obtain the final defect detection result.
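A sketch of the deployment stage: one forward pass of the fused model performs the feature extraction, splicing, and detection described above (names follow the earlier sketches and are illustrative):

```python
import torch
import torch.nn as nn

@torch.no_grad()
def detect(model: nn.Module, workpiece_picture: torch.Tensor):
    """Run the target defect detection model on a single workpiece picture.

    Inside the model, one branch yields defect features, the other yields
    part-structure features; they are spliced and fed to the detection head.
    """
    model.eval()
    return model(workpiece_picture.unsqueeze(0))  # add a batch dimension
```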
The scheme of this embodiment emulates the way a person judges whether an ambiguous feature is a defect: when an indistinguishable feature is encountered, it is compared with a picture of a normal, defect-free product to decide whether the feature is a defect or part of the product structure. The embodiment thus provides a new workpiece structural feature extraction network and combines it with a deep learning defect detection network to detect defects without increasing deployment inference time, addressing existing pain points such as missed detection and over-detection.
From the description of the above embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus the necessary general-purpose hardware platform, or by hardware, though in many cases the former is preferred. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a controller, a network device, or the like) to perform the methods of the embodiments of the present invention.
Example 2
In this embodiment, a defect detection apparatus based on deep contrast learning is further provided. The apparatus is used to implement the foregoing embodiments and preferred implementations; what has already been described is not repeated. The term "module" as used below may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the following embodiments is preferably implemented in software, implementations in hardware, or in a combination of software and hardware, are also possible and conceivable.
FIG. 4 is a block diagram of a defect detection apparatus based on deep contrast learning according to an embodiment of the present invention. As shown in FIG. 4, the apparatus includes:
a first training module 40, configured to train a defect detection network with supervision on defect sample data to obtain a first defect detection model, wherein the first defect detection model comprises: a first detection head, a first backbone network, and a first feature fusion network;
a second training module 42, configured to train a feature extraction network without supervision on workpiece structure data to obtain a second defect detection model, wherein the second defect detection model comprises: a second detection head, a second backbone network, and a second feature fusion network;
a third training module 44, configured to fuse the first defect detection model and the second defect detection model to generate a third defect detection model, and to train the third defect detection model on the defect sample data to obtain a target defect detection model;
and a detection module 46, configured to collect a workpiece picture of the target workpiece and perform defect detection on the workpiece picture with the target defect detection model.
Optionally, the third training module includes: an extraction unit, configured to extract a first feature processing network from the first defect detection model, extract a second feature processing network from the second defect detection model, and freeze the network parameters of both, wherein the first feature processing network is the first backbone network connected in series with the first feature fusion network, and the second feature processing network is the second backbone network connected in series with the second feature fusion network; and a splicing unit, configured to connect the first and second feature processing networks in parallel and then to a third detection head through a feature splicing layer, to obtain the third defect detection model, wherein the feature splicing layer splices the features output by the two feature processing networks along the channel dimension and inputs the spliced features into the third detection head.
Optionally, the second training module includes: a configuration unit, configured to acquire a set of defect-free pictures of the workpiece to be detected and configure this set as the workpiece structure data; a creation unit, configured to generate incomplete pictures from the defect-free pictures in the workpiece structure data and create a self-supervised annotation dataset from the incomplete pictures and the original pictures in the defect-free picture set; and a training unit, configured to train the feature extraction network with self-supervision on the self-supervised annotation dataset to obtain a second defect detection model.
Optionally, the creation unit includes: a cut-out subunit, configured to cut out, for each target defect-free picture in the workpiece structure data, a sub-image block at a random or designated position of the target defect-free picture to obtain an incomplete picture; and a creation subunit, configured to create self-supervised annotation data for the incomplete picture, with the target defect-free picture corresponding to the incomplete picture serving as the label data, to obtain the self-supervised annotation dataset.
Optionally, the training unit includes: a training subunit, configured to train the feature extraction network with the incomplete pictures as input data and the label data as output data, wherein the self-supervised annotation dataset comprises multiple groups of self-supervised annotation data, each group comprising an incomplete picture and the corresponding label data; and an optimization subunit, configured to calculate the loss value of the feature extraction network's loss function and, based on the loss value, optimize the network's parameters with a back-propagation algorithm to obtain the second defect detection model.
Optionally, the third training module includes: an extraction unit, configured to extract the defect pictures and defect labels from the defect sample data; and a training unit, configured to train the third detection head of the third defect detection model with supervision, using the defect pictures as input data and the defect labels as output data, to obtain the target defect detection model.
Optionally, the detection module includes: an extraction unit, configured to extract a first feature of the workpiece picture with the first feature processing network and a second feature with the second feature processing network, wherein the first feature characterizes defect information of the target workpiece, the second feature characterizes workpiece structure information of the target workpiece, and the target defect detection model comprises the first feature processing network, the second feature processing network, the feature splicing layer, and the third detection head; a splicing unit, configured to splice the first feature and the second feature into a fused feature with the feature splicing layer; and a detection unit, configured to input the fused feature into the third detection head and output a defect detection result for the workpiece picture.
It should be noted that each of the above modules may be implemented by software or by hardware; in the latter case, the modules may, but need not, all be located in the same processor, or may be distributed across different processors in any combination.
Example 3
An embodiment of the invention also provides a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
Alternatively, in this embodiment, the storage medium may be configured to store a computer program for executing the following steps:
S1: train a defect detection network with supervision on defect sample data to obtain a first defect detection model, wherein the first defect detection model comprises: a first detection head, a first backbone network, and a first feature fusion network;
S2: train a feature extraction network without supervision on workpiece structure data to obtain a second defect detection model, wherein the second defect detection model comprises: a second detection head, a second backbone network, and a second feature fusion network;
S3: fuse the first defect detection model and the second defect detection model to generate a third defect detection model, and train the third defect detection model on the defect sample data to obtain a target defect detection model;
S4: collect a workpiece picture of the target workpiece, and perform defect detection on the workpiece picture with the target defect detection model.
Alternatively, in this embodiment, the storage medium may include, but is not limited to: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or any other medium capable of storing a computer program.
An embodiment of the invention also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Alternatively, in this embodiment, the above processor may be configured to execute, by means of a computer program, the following steps:
S1: train a defect detection network with supervision on defect sample data to obtain a first defect detection model, wherein the first defect detection model comprises: a first detection head, a first backbone network, and a first feature fusion network;
S2: train a feature extraction network without supervision on workpiece structure data to obtain a second defect detection model, wherein the second defect detection model comprises: a second detection head, a second backbone network, and a second feature fusion network;
S3: fuse the first defect detection model and the second defect detection model to generate a third defect detection model, and train the third defect detection model on the defect sample data to obtain a target defect detection model;
S4: collect a workpiece picture of the target workpiece, and perform defect detection on the workpiece picture with the target defect detection model.
Alternatively, for specific examples in this embodiment, reference may be made to the examples described in the foregoing embodiments and optional implementations; they are not repeated here.
The above embodiment numbers of the present application are for description only and do not indicate that one embodiment is better or worse than another.
In the foregoing embodiments of the present application, each embodiment is described with its own emphasis; for any part not detailed in one embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technical content may be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the division into units is only a division by logical function, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, units, or modules, and may be electrical or take other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present application may be integrated in one processing unit, or each unit may exist physically alone, or two or more units may be integrated in one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a controller, a network device, or the like) to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage media include: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or any other medium capable of storing program code.
The above is merely a preferred embodiment of the present application. It should be noted that those of ordinary skill in the art may make several improvements and modifications without departing from the principles of the present application, and such improvements and modifications shall also be regarded as falling within the scope of protection of the present application.

Claims (7)

1. The defect detection method based on deep contrast learning is characterized by comprising the following steps of:
a first defect detection model is obtained by adopting a defect sample data supervised training defect detection network, wherein the first defect detection model comprises: a first detection head, a first backbone network, and a first feature fusion network;
adopting an unsupervised training feature extraction network of the workpiece structure data to obtain a second defect detection model, wherein the second defect detection model comprises: the second detection head of the second defect detection model is used for refining the structural characteristics of the input workpiece picture, the structural characteristics refer to the structural attribute characteristics of the workpiece, and the first detection head of the first defect detection model is used for detecting whether defects exist in the input workpiece picture and the types of the defects;
Generating a third defect detection model by adopting the first defect detection model and the second defect detection model in a fusion way, and training the third defect detection model by adopting the defect sample data to obtain a target defect detection model;
collecting a workpiece picture of a target workpiece, and carrying out defect detection on the workpiece picture by adopting the target defect detection model;
wherein performing unsupervised training on the feature extraction network with the workpiece structure data to obtain the second defect detection model comprises: acquiring a defect-free picture set of a workpiece to be detected, and configuring the defect-free picture set as the workpiece structure data; generating incomplete pictures from the defect-free pictures in the workpiece structure data, and creating a self-supervised annotation data set from the incomplete pictures and the original pictures in the defect-free picture set; and performing self-supervised training on the feature extraction network with the self-supervised annotation data set to obtain the second defect detection model;
wherein generating the incomplete pictures from the defect-free pictures in the workpiece structure data, and creating the self-supervised annotation data set from the incomplete pictures and the original pictures in the defect-free picture set, comprises: for each target defect-free picture in the workpiece structure data, removing a sub-image block of the target defect-free picture at a random position or a designated position to obtain an incomplete picture; and creating self-supervised annotation data for each incomplete picture by taking the target defect-free picture corresponding to that incomplete picture as label data, to obtain the self-supervised annotation data set;
wherein fusing the first defect detection model and the second defect detection model to generate the third defect detection model comprises: extracting a first feature processing network from the first defect detection model, extracting a second feature processing network from the second defect detection model, and freezing the network parameters of the first feature processing network and the second feature processing network, wherein the first feature processing network is formed by connecting the first backbone network and the first feature fusion network in series, and the second feature processing network is formed by connecting the second backbone network and the second feature fusion network in series; and connecting the first feature processing network and the second feature processing network in parallel and then connecting them to a third detection head through a feature splicing layer to obtain the third defect detection model, wherein the feature splicing layer is used for splicing the features output by the first feature processing network and the second feature processing network along the channel dimension and inputting the spliced features into the third detection head.
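By way of illustration only, the following is a minimal PyTorch-style sketch of the model fusion described in claim 1; the framework choice and all class and variable names (FeatureProcessingNetwork, ThirdDefectDetectionModel, and so on) are assumptions for readability, not part of the claimed method.

import torch
import torch.nn as nn

class FeatureProcessingNetwork(nn.Module):
    # A backbone and a feature fusion network connected in series.
    def __init__(self, backbone: nn.Module, fusion: nn.Module):
        super().__init__()
        self.backbone = backbone
        self.fusion = fusion

    def forward(self, x):
        return self.fusion(self.backbone(x))

class ThirdDefectDetectionModel(nn.Module):
    # Two frozen feature processing networks in parallel, joined by a
    # channel-wise splice and followed by a new, trainable detection head.
    def __init__(self, first_net, second_net, third_head):
        super().__init__()
        for p in list(first_net.parameters()) + list(second_net.parameters()):
            p.requires_grad = False  # freeze both pretrained networks
        self.first_net = first_net    # from the supervised first model
        self.second_net = second_net  # from the self-supervised second model
        self.third_head = third_head  # the only trainable part

    def forward(self, x):
        f1 = self.first_net(x)   # defect-oriented features
        f2 = self.second_net(x)  # workpiece-structure features
        # Assumes both feature maps share batch and spatial dimensions.
        fused = torch.cat([f1, f2], dim=1)  # splice along the channel dimension
        return self.third_head(fused)

Because the two feature processing networks are frozen here, only the third detection head is updated in the subsequent supervised training stage of the claims.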
2. The method of claim 1, wherein performing self-supervised training on the feature extraction network with the self-supervised annotation data set to obtain the second defect detection model comprises:
training the feature extraction network by taking the incomplete pictures as input data and the label data as output data, wherein the self-supervised annotation data set comprises a plurality of groups of self-supervised annotation data, and each group of self-supervised annotation data comprises an incomplete picture and its corresponding label data;
and calculating a loss value of a loss function of the feature extraction network, and optimizing the parameters of the feature extraction network with a back propagation algorithm based on the loss value, to obtain the second defect detection model.
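A sketch of this self-supervised stage under one plausible reading, namely that the network is trained to reconstruct the original picture from the incomplete picture, follows; the claim fixes neither the loss function nor the framework, so the MSE reconstruction loss and the PyTorch API used here are assumptions.

import random
import torch
import torch.nn as nn

def make_incomplete(picture: torch.Tensor, patch: int = 32) -> torch.Tensor:
    # Remove a sub-image block at a random position (claim 1) by zeroing it.
    # Assumes the picture (C, H, W) is at least `patch` pixels per side.
    _, h, w = picture.shape
    top = random.randint(0, h - patch)
    left = random.randint(0, w - patch)
    incomplete = picture.clone()
    incomplete[:, top:top + patch, left:left + patch] = 0.0
    return incomplete

def self_supervised_step(feature_net, second_head, originals, optimizer):
    # One training step: incomplete pictures as input, originals as labels.
    incompletes = torch.stack([make_incomplete(img) for img in originals])
    optimizer.zero_grad()
    reconstruction = second_head(feature_net(incompletes))
    loss = nn.functional.mse_loss(reconstruction, originals)
    loss.backward()   # back propagation of the loss value (claim 2)
    optimizer.step()  # parameter optimization
    return loss.item()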
3. The method of claim 1, wherein training the third defect detection model using the defect sample data to obtain the target defect detection model comprises:
extracting a defect picture and a corresponding defect label from the defect sample data;
and performing supervised training on the third detection head of the third defect detection model by taking the defect picture as input data of the third defect detection model and the defect label as output data, to obtain the target defect detection model.
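The fine-tuning of claim 3 can be pictured as standard supervised training in which only the third detection head receives gradient updates, the two feature processing networks having been frozen during fusion; the sketch below assumes a classification-style head and PyTorch, neither of which is mandated by the claim.

import torch

def train_third_head(model, loader, epochs: int, lr: float = 1e-3):
    # Only parameters left trainable (the third detection head) are optimized.
    params = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.Adam(params, lr=lr)
    criterion = torch.nn.CrossEntropyLoss()  # illustrative defect-type loss
    for _ in range(epochs):
        for defect_picture, defect_label in loader:
            optimizer.zero_grad()
            prediction = model(defect_picture)  # forward pass of third model
            loss = criterion(prediction, defect_label)
            loss.backward()
            optimizer.step()
    return model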
4. The method of claim 1, wherein performing defect detection on the workpiece picture using the target defect detection model comprises:
extracting a first feature of the workpiece picture using the first feature processing network, and extracting a second feature of the workpiece picture using the second feature processing network, wherein the first feature is used for representing defect information of the target workpiece, the second feature is used for representing workpiece structure information of the target workpiece, and the target defect detection model comprises the first feature processing network, the second feature processing network, the feature splicing layer, and the third detection head;
splicing the first feature and the second feature into a fused feature using the feature splicing layer;
and inputting the fused feature into the third detection head, and outputting a defect detection result for the workpiece picture.
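Read procedurally, claim 4 is a single forward pass through the fused model; a minimal sketch, again with assumed PyTorch names, is:

import torch

@torch.no_grad()
def detect(first_net, second_net, third_head, workpiece_picture):
    x = workpiece_picture.unsqueeze(0)  # add a batch dimension
    f1 = first_net(x)                   # defect information features
    f2 = second_net(x)                  # workpiece structure features
    fused = torch.cat([f1, f2], dim=1)  # feature splicing layer (channels)
    return third_head(fused)            # defect detection result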
5. A defect detection device based on deep contrast learning, characterized by comprising:
a first training module, configured to perform supervised training on a defect detection network with defect sample data to obtain a first defect detection model, wherein the first defect detection model comprises: a first detection head, a first backbone network, and a first feature fusion network;
a second training module, configured to perform unsupervised training on a feature extraction network with workpiece structure data to obtain a second defect detection model, wherein the second defect detection model comprises: a second detection head, a second backbone network, and a second feature fusion network; the second detection head of the second defect detection model is used for extracting structural features of an input workpiece picture, where the structural features are the structural attribute features of the workpiece, and the first detection head of the first defect detection model is used for detecting whether a defect exists in an input workpiece picture and the type of the defect;
a third training module, configured to fuse the first defect detection model and the second defect detection model to generate a third defect detection model, and to train the third defect detection model with the defect sample data to obtain a target defect detection model;
a detection module, configured to acquire a workpiece picture of a target workpiece and perform defect detection on the workpiece picture using the target defect detection model;
wherein the second training module comprises: a configuration unit, configured to acquire a defect-free picture set of a workpiece to be detected and configure the defect-free picture set as the workpiece structure data; a creation unit, configured to generate incomplete pictures from the defect-free pictures in the workpiece structure data and create a self-supervised annotation data set from the incomplete pictures and the original pictures in the defect-free picture set; and a training unit, configured to perform self-supervised training on the feature extraction network with the self-supervised annotation data set to obtain the second defect detection model;
wherein the creation unit comprises: a removal subunit, configured to remove, for each target defect-free picture in the workpiece structure data, a sub-image block of the target defect-free picture at a random position or a designated position to obtain an incomplete picture; and a creation subunit, configured to create self-supervised annotation data for each incomplete picture by taking the target defect-free picture corresponding to that incomplete picture as label data, to obtain the self-supervised annotation data set;
wherein the third training module comprises: an extraction unit, configured to extract a first feature processing network from the first defect detection model, extract a second feature processing network from the second defect detection model, and freeze the network parameters of the first feature processing network and the second feature processing network, wherein the first feature processing network is formed by connecting the first backbone network and the first feature fusion network in series, and the second feature processing network is formed by connecting the second backbone network and the second feature fusion network in series; and a splicing unit, configured to connect the first feature processing network and the second feature processing network in parallel and then connect them to a third detection head through a feature splicing layer to obtain the third defect detection model, wherein the feature splicing layer is used for splicing the features output by the first feature processing network and the second feature processing network along the channel dimension and inputting the spliced features into the third detection head.
6. A storage medium storing a computer program, wherein the computer program, when executed, performs the steps of the method of any one of claims 1 to 4.
7. An electronic device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with each other through the communication bus; wherein:
a memory, configured to store a computer program;
a processor, configured to perform the steps of the method of any one of claims 1 to 4 by running the computer program stored in the memory.
CN202311373822.XA 2023-10-23 2023-10-23 Defect detection method and device based on deep contrast learning Active CN117115158B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311373822.XA CN117115158B (en) 2023-10-23 2023-10-23 Defect detection method and device based on deep contrast learning

Publications (2)

Publication Number Publication Date
CN117115158A (en) 2023-11-24
CN117115158B (en) 2024-02-02

Family

ID=88800535

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311373822.XA Active CN117115158B (en) 2023-10-23 2023-10-23 Defect detection method and device based on deep contrast learning

Country Status (1)

Country Link
CN (1) CN117115158B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI780881B (en) * 2021-08-27 2022-10-11 緯創資通股份有限公司 Method for establishing defect detection model and electronic apparatus

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111754513A (en) * 2020-08-07 2020-10-09 腾讯科技(深圳)有限公司 Product surface defect segmentation method, defect segmentation model learning method and device
CN112184667A (en) * 2020-09-28 2021-01-05 京东方科技集团股份有限公司 Defect detection and repair method, device and storage medium
CN116648731A (en) * 2023-02-12 2023-08-25 香港应用科技研究院有限公司 System and method for classifying and locating defects using self-supervised pre-training

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Contrastive Learning for Fault Detection and Diagnostics in the Context of Changing Operating Conditions and Novel Fault Types; Rombach, Katharina et al.; Sensors; pp. 1-10 *
Self-supervised defect detection method based on magnetic flux leakage internal inspection; Liu Jinhai et al.; Chinese Journal of Scientific Instrument; pp. 1-5 *

Also Published As

Publication number Publication date
CN117115158A (en) 2023-11-24

Similar Documents

Publication Publication Date Title
CN108562589B (en) Method for detecting surface defects of magnetic circuit material
CN108492291B (en) CNN segmentation-based solar photovoltaic silicon wafer defect detection system and method
CN111382785B (en) GAN network model and method for realizing automatic cleaning and auxiliary marking of samples
CN110928862A (en) Data cleaning method, data cleaning apparatus, and computer storage medium
CN111860353A (en) Video behavior prediction method, device and medium based on double-flow neural network
CN112037222B (en) Automatic updating method and system of neural network model
CN115131283A (en) Defect detection and model training method, device, equipment and medium for target object
CN110209819A (en) File classification method, device, equipment and medium
CN112487913A (en) Labeling method and device based on neural network and electronic equipment
CN111428664A (en) Real-time multi-person posture estimation method based on artificial intelligence deep learning technology for computer vision
CN115239644A (en) Concrete defect identification method and device, computer equipment and storage medium
CN114724246B (en) Dangerous behavior identification method and device
CN114266894A (en) Image segmentation method and device, electronic equipment and storage medium
CN116245876A (en) Defect detection method, device, electronic apparatus, storage medium, and program product
KR102325347B1 (en) Apparatus and method of defect classification based on machine-learning
CN115471681A (en) Image recognition method, device and storage medium
CN113780484B (en) Industrial product defect detection method and device
CN108664906B (en) Method for detecting content in fire scene based on convolutional network
CN117115158B (en) Defect detection method and device based on deep contrast learning
CN116934195A (en) Commodity information checking method and device, electronic equipment and storage medium
CN116580232A (en) Automatic image labeling method and system and electronic equipment
CN116110005A (en) Crowd behavior attribute counting method, system and product
CN111191584A (en) Face recognition method and device
CN116206334A (en) Wild animal identification method and device
CN115578362A (en) Defect detection method and device for electrode coating, electronic device and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant