CN111681228A - Flaw detection model, training method, detection method, apparatus, device, and medium - Google Patents
Info
- Publication number
- CN111681228A (application number CN202010520818.1A)
- Authority
- CN
- China
- Prior art keywords
- detection model
- picture
- training set
- flaw
- training
- Prior art date
- 2020-06-09
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
- G06T7/0008—Industrial image inspection checking presence/absence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
Abstract
The application provides a flaw detection model, a training method, a detection method, an apparatus, a device, and a medium. The flaw detection model comprises: a neural network for extracting features from an input picture and outputting a feature map; and a weight layer for performing element-wise (dot) multiplication with the feature map output by the neural network, so as to increase the weight of the positions where flaws exist in that feature map, and for outputting the weight-adjusted final feature map. The weight layer is a matrix of the same size as the feature map output by the neural network, and its matrix values vary with the matrix values of that feature map. In the final feature map obtained in this way, the position corresponding to the maximum value of the matrix is, with high probability, the position of the flaw in the picture. Flaw position detection is thereby achieved, to a certain extent, within the flaw detection process, which addresses the problem that current flaw detection does not reflect the position of a flaw in the picture.
Description
Technical Field
The application relates to the technical field of computers, and in particular to a flaw detection model, a training method, a detection method, an apparatus, a device, and a medium.
Background
In industry, flaws of various types are inevitably produced on manufactured products because of raw materials, rolling equipment, processes, and other factors. Such flaws not only affect the appearance of the products but also degrade properties such as corrosion resistance, wear resistance, and fatigue strength, causing substantial economic losses to enterprises. Flaw detection has therefore long been a major concern in industry.
Classification algorithms are currently a common approach to flaw detection in industry: a neural network judges whether a product picture contains flaws and returns a result such as qualified, unqualified, or undetermined. However, conventional classification gives only a qualitative description of the picture and does not reflect the position of the flaw within it.
Disclosure of Invention
An object of the embodiments of the present application is to provide a flaw detection model, a training method, a detection method, an apparatus, a device, and a medium, so as to solve the problem that the position of a flaw in a picture is not reflected in current flaw detection.
An embodiment of the application provides a flaw detection model, comprising: a neural network for extracting features from an input picture and outputting a feature map; and a weight layer for performing element-wise (dot) multiplication with the feature map output by the neural network, so as to increase the weight of the positions where flaws exist in that feature map, and for outputting the weight-adjusted final feature map. The weight layer is a matrix of the same size as the feature map output by the neural network, and its matrix values vary with the matrix values of that feature map.
In this flaw detection model, a weight layer follows the neural network and is multiplied element-wise with the feature map output by the neural network, thereby increasing the weight of the positions where flaws exist in that feature map. In the resulting final feature map, the position corresponding to the maximum value of the matrix is, with high probability, the position of the flaw in the picture. Flaw position detection is thereby achieved, to a certain extent, within the flaw detection process, addressing the problem that current flaw detection does not reflect the position of a flaw in the picture.
An embodiment of the application further provides a training method for the flaw detection model, comprising: processing training set pictures with the flaw detection model, the training set pictures including flaw-free training set pictures and defective training set pictures, the defective training set pictures being labeled with flaw positions; inputting the final feature map output by the flaw detection model for each training set picture into a preset loss function; judging whether the loss value of the loss function has converged; if not, performing back-propagation according to the loss value, updating the parameters of the neural network and the matrix values of the weight layer in the flaw detection model, and repeating the above process; and if it has converged, ending the training to obtain the trained flaw detection model.
In this implementation, the flaw detection model is trained with flaw-free training set pictures and with defective training set pictures labeled with flaw positions. This effectively trains the neural network that performs the classification and, at the same time, trains the weight layer to obtain a suitable weight-layer matrix, so that the trained flaw detection model can accurately judge whether a picture contains a flaw and quickly yield the position of the flaw in the picture.
Further, the loss function is a classification loss function plus a flaw position loss function.
The trained flaw detection model includes a neural network capable of flaw classification and a weight layer that strengthens the weight of flaw positions. In this implementation, the accuracy of picture classification is assessed through the classification loss function and the accuracy of flaw localization is assessed through the flaw position loss function; whether the flaw detection model needs further iteration is then decided from both aspects, making the trained flaw detection model more accurate and usable.
Further, the classification loss function is a cross entropy function or a softmax function.
Further, the flaw position loss function is L = ∑_{n∈N} (T_n(p1) − I_n(p2))², where I_n denotes the n-th defective training set picture, T_n denotes the final feature map corresponding to the n-th defective training set picture, T_n(p1) is the position of the maximum value in the matrix corresponding to that final feature map, I_n(p2) is the flaw position labeled in the defective training set picture, and N is the total number of defective training set pictures.
In this implementation, the position regarded as the flaw in the final feature map is compared with the flaw position labeled in the corresponding defective training set picture to obtain the corresponding deviation. In theory, the larger the absolute value of T_n(p1) − I_n(p2), the worse the flaw detection model performs.
An embodiment of the application further provides a flaw detection method, comprising: inputting the picture to be detected into the above flaw detection model to obtain a final feature map; inputting the final feature map into a preset classifier to obtain a classification result for the picture to be detected; and, when the classification result indicates that a flaw exists, outputting the position of the maximum value in the matrix corresponding to the final feature map.
In this implementation, after the picture to be detected has been passed through the flaw detection model to obtain the final feature map, the picture can be classified by a preset classifier to determine whether it contains a flaw. When a flaw exists, the position of the maximum value in the matrix corresponding to the final feature map can be output as the flaw position, so that the method not only detects whether the product has a flaw but also determines where the flaw is, improving the detection effect.
An embodiment of the application further provides a training apparatus for the flaw detection model, comprising: a first processing module, a first input module, a judging module, and a back-propagation module. The first processing module is configured to process training set pictures with the flaw detection model, the training set pictures including flaw-free training set pictures and defective training set pictures labeled with flaw positions. The first input module is configured to input the final feature map output by the flaw detection model for each training set picture into a preset loss function. The judging module is configured to judge whether the loss value of the loss function has converged and to end the training when it has. The back-propagation module is configured, when the loss has not converged, to perform back-propagation according to the loss value, update the parameters of the neural network and the matrix values of the weight layer in the flaw detection model, and have the processing module, the input module, and the judging module repeat the above process in sequence.
In this implementation, the flaw detection model is trained with flaw-free training set pictures and with defective training set pictures labeled with flaw positions, which effectively trains the classifying neural network and, at the same time, trains the weight layer to obtain a suitable weight-layer matrix, so that the trained flaw detection model can accurately judge whether a picture contains a flaw and quickly yield the position of the flaw in the picture.
An embodiment of the application further provides a flaw detection apparatus, comprising: a second input module, a second processing module, and a classification module. The second input module is configured to input the picture to be detected into the flaw detection model; the second processing module is configured to obtain the final feature map through the flaw detection model; the classification module is configured to input the final feature map into a preset classifier to obtain a classification result for the picture to be detected; and the second processing module is further configured to output the position of the maximum value in the matrix corresponding to the final feature map when the classification result indicates that a flaw exists.
The above apparatus can detect whether a product has a flaw and also determine the flaw position, improving the detection effect.
An embodiment of the present application further provides an electronic device, comprising: a data input/output interface, a processor, a memory, and a communication bus. The communication bus is used to realize connection and communication among the data input/output interface, the processor, and the memory; the data input/output interface is used to acquire training set pictures or the picture to be detected; and the processor is configured to execute one or more programs stored in the memory to implement any of the above training methods for the flaw detection model, or to implement the above flaw detection method.
The present application further provides a readable storage medium storing one or more programs, where the one or more programs are executable by one or more processors to implement any of the above training methods for the flaw detection model, or to implement the above flaw detection method.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the embodiments are briefly described below. It should be understood that the following drawings show only some embodiments of the present application and therefore should not be regarded as limiting its scope; those skilled in the art can obtain other related drawings from these drawings without inventive effort.
Fig. 1 is a schematic structural diagram of a flaw detection model according to an embodiment of the present disclosure;
Fig. 2 is a schematic diagram illustrating the training process of the flaw detection model according to an embodiment of the present disclosure;
Fig. 3 is a schematic diagram illustrating the flaw detection process according to an embodiment of the present disclosure;
Fig. 4 is a schematic structural diagram of a training apparatus for the flaw detection model according to an embodiment of the present disclosure;
Fig. 5 is a schematic structural diagram of a flaw detection apparatus according to an embodiment of the present disclosure;
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below with reference to the accompanying drawings.
Embodiment 1:
An embodiment of the application provides a flaw detection model which, as shown in Fig. 1, comprises a neural network and a weight layer. Specifically:
The weight layer follows the neural network and is a matrix of the same size as the feature map output by the neural network. It should be understood that, since the structure of the neural network used in a given application is fixed, the size of the feature map it outputs is correspondingly fixed, and the matrix size of the weight layer can therefore be determined.
It should be understood that, when the neural network processes a picture, the feature map it outputs is essentially an n × n matrix (n being an integer greater than 0). Therefore, after the neural network has extracted features from a picture and output the feature map, the feature map can be multiplied element-wise by a weight-layer matrix of the same n × n size. This adjusts the weights of the feature map output by the neural network and increases the weight of the positions where flaws exist, yielding the final feature map. In the final feature map, the position of the maximum value in the matrix it represents usually corresponds to the position of the flaw in the picture, so that position can be taken as the flaw position in the picture.
It should be noted that, in the embodiments of the present application, the weight-layer matrix is not a matrix with fixed values: its matrix values are associated with the matrix values of the feature map output by the neural network, and when those feature-map values change, the weight-layer values change accordingly, thereby increasing the weight of the positions where flaws exist in the feature map output by the neural network.
It should be understood that the neural network in the embodiments of the present application may be any neural network capable of detecting whether a flaw exists in a picture, for example a CNN (convolutional neural network).
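For illustration, a minimal PyTorch sketch of this structure is given below. It assumes the weight layer is realized as a small learnable convolution followed by a sigmoid that maps the backbone's own feature map to a same-sized weight matrix (one possible reading of "matrix values that vary with the feature-map values"); the backbone, the layer sizes, and the 1 × 512 × 512 grayscale input are illustrative assumptions, not details fixed by this application.

```python
import torch
import torch.nn as nn

class FlawDetectionModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Backbone: reduces a 1 x 512 x 512 grayscale picture to a single-channel
        # n x n feature map (here n = 32); any classification backbone could be used.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, stride=2, padding=1),
        )
        # Weight layer: produces a matrix of the same size as the feature map whose
        # values depend on the feature map itself; its parameters are trained
        # together with the backbone.
        self.weight_layer = nn.Sequential(
            nn.Conv2d(1, 1, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, picture: torch.Tensor) -> torch.Tensor:
        feature_map = self.backbone(picture)       # (B, 1, n, n)
        weights = self.weight_layer(feature_map)   # same size as the feature map
        return feature_map * weights               # element-wise (dot) product
```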
In the embodiments of the present application, before the flaw detection model is used, it needs to be trained so that the neural network classifies well and the weight layer better increases the weight of the positions where flaws exist in the feature map output by the neural network. Referring to Fig. 2, an embodiment of the present application provides a training method for the flaw detection model, comprising:
S201: Process the training set pictures with the flaw detection model provided in this embodiment of the application.
It should be noted that the training set pictures described in the embodiments of the present application include flaw-free training set pictures and defective training set pictures. The defective training set pictures need to be labeled with the flaw positions, and both the flaw-free and the defective training set pictures need a qualitative label, i.e., whether the training set picture contains a flaw.
It should also be noted that, in the embodiments of the present application, the training set pictures may be preprocessed so that they share the same size (e.g., 512 × 512) and are grayscale pictures, which facilitates processing by the flaw detection model.
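A possible preprocessing helper is sketched below; the 512 × 512 size follows the example above, while the use of OpenCV and the scaling of pixel values to [0, 1] are assumptions.

```python
import cv2
import numpy as np

def preprocess(path: str, size: int = 512) -> np.ndarray:
    """Load a training set picture as a fixed-size grayscale array."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)  # read as a grayscale picture
    img = cv2.resize(img, (size, size))           # unify the picture size
    return img.astype(np.float32) / 255.0         # scale pixel values to [0, 1]
```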
S202: Input the final feature map output by the flaw detection model for each training set picture into a preset loss function.
In one possible implementation of the embodiments of the present application, a conventional loss function (such as a cross-entropy function) may be used as the loss function.
However, considering that the flaw detection model in the embodiments of the present application must not only detect whether a product picture contains a flaw but also detect the flaw position in a defective picture, another possible implementation adopts a classification loss function plus a flaw position loss function as the loss function.
For example, the classification loss function may be a cross-entropy function, a softmax function, or the like; such conventional classification loss functions characterize well the classification reliability of the flaw detection model on the input picture.
As for the flaw position loss function, the accuracy with which the flaw detection model locates the flaw in the input picture can be assessed from the deviation between the flaw position detected by the model and the position labeled in the picture.
To this end, in one possible example of the embodiments of the present application, the flaw position loss function may be set to L = ∑_{n∈N} (T_n(p1) − I_n(p2))², where:
L is the loss value of the flaw position loss function; I_n denotes the n-th defective training set picture; T_n denotes the final feature map corresponding to the n-th defective training set picture; T_n(p1) is the position of the maximum value in the matrix corresponding to that final feature map; I_n(p2) is the flaw position labeled in the defective training set picture; N is the total number of defective training set pictures; and ∑_{n∈N} (T_n(p1) − I_n(p2))² denotes the sum of (T_n(p1) − I_n(p2))² over all N defective training set pictures.
It should be understood that the flaw position loss function in the above example is only one possibility for the embodiments of the present application and is not the only choice; for instance, L = ∑_{n∈N} |T_n(p1) − I_n(p2)| may also be used, where |T_n(p1) − I_n(p2)| is the distance between T_n(p1) and I_n(p2).
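The combined loss can be sketched as follows. The application defines T_n(p1) as the arg-max position of the final feature map; because a hard arg-max does not propagate gradients, the sketch substitutes a soft-argmax (softmax-weighted expected coordinates), which is an implementation assumption rather than something prescribed here, and the classifier producing `logits` is likewise assumed (see the training sketch after step S205).

```python
import torch
import torch.nn.functional as F

def soft_argmax_2d(feature_map: torch.Tensor) -> torch.Tensor:
    """Differentiable stand-in for T_n(p1): expected (row, col) of the maximum."""
    b, _, h, w = feature_map.shape
    probs = F.softmax(feature_map.view(b, -1), dim=1).view(b, h, w)
    rows = torch.arange(h, dtype=feature_map.dtype, device=feature_map.device)
    cols = torch.arange(w, dtype=feature_map.dtype, device=feature_map.device)
    exp_row = (probs.sum(dim=2) * rows).sum(dim=1)   # marginal over columns
    exp_col = (probs.sum(dim=1) * cols).sum(dim=1)   # marginal over rows
    return torch.stack([exp_row, exp_col], dim=1)    # (B, 2)

def total_loss(logits, labels, final_maps, flaw_positions, has_flaw):
    # logits: (B, 2) classifier output; labels: (B,) with 0 = flaw-free, 1 = flawed
    # final_maps: (B, 1, n, n); flaw_positions: (B, 2) labelled (row, col) on the map
    cls_loss = F.cross_entropy(logits, labels)           # classification loss
    pred_pos = soft_argmax_2d(final_maps)
    mask = has_flaw.float().unsqueeze(1)                 # position term only for flawed pictures
    pos_loss = ((pred_pos - flaw_positions) ** 2 * mask).sum()
    return cls_loss + pos_loss
```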
S203: Judge whether the loss value of the loss function has converged; if not, go to step S204; if so, go to step S205.
In the embodiments of the present application, the quality of the current model can be judged through the loss function: when the loss value output by the loss function has converged, the flaw detection model can be considered trained and put into practical use.
S204: Perform back-propagation according to the loss value, update the parameters of the neural network and the matrix values of the weight layer in the flaw detection model, and go to step S201.
S205: End the training.
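An illustrative training loop covering S201-S205 is sketched below. It assumes the `FlawDetectionModel` and `total_loss` sketches above, a simple classifier such as `torch.nn.Linear(32 * 32, 2)` applied to the flattened final feature map, and a DataLoader yielding `(picture, label, flaw_position, has_flaw)` batches; the convergence test of S203 is simplified to a threshold on the change of the epoch loss.

```python
import torch

def train(model, classifier, loader, epochs=100, tol=1e-4, lr=1e-3):
    params = list(model.parameters()) + list(classifier.parameters())
    optimizer = torch.optim.Adam(params, lr=lr)
    prev_epoch_loss = None
    for _ in range(epochs):
        epoch_loss = 0.0
        for picture, label, flaw_pos, has_flaw in loader:
            final_map = model(picture)                    # S201: final feature map
            logits = classifier(final_map.flatten(1))     # preset classifier on the map
            loss = total_loss(logits, label, final_map,
                              flaw_pos, has_flaw)         # S202: preset loss function
            optimizer.zero_grad()
            loss.backward()                               # S204: back-propagation
            optimizer.step()                              # update network and weight layer
            epoch_loss += loss.item()
        if prev_epoch_loss is not None and abs(prev_epoch_loss - epoch_loss) < tol:
            break                                         # S203/S205: loss has converged
        prev_epoch_loss = epoch_loss
    return model, classifier
```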
After the flaw detection model has been trained, the trained flaw detection model is obtained and can be applied in industry to detect flaws in products. As shown in Fig. 3, the flaw detection process includes:
S301: Input the picture to be detected into the trained flaw detection model to obtain the final feature map.
S302: Input the final feature map into a preset classifier to obtain the classification result for the picture to be detected.
In the embodiments of the present application, the classifier may be a common classifier, such as a logistic classifier or a softmax classifier.
S303: When the classification result indicates that a flaw exists, output the position of the maximum value in the matrix corresponding to the final feature map.
In the embodiments of the present application, when the classification result indicates that a flaw exists, the position of the maximum value in the matrix corresponding to the final feature map can be output as the flaw position.
To improve the presentation of the output, when the classification result indicates that a flaw exists, the position of the maximum value in the picture to be detected can also be obtained by mapping from the position of the maximum value in the matrix corresponding to the final feature map, and a marker (such as an arrow) indicating that position can then be generated in the picture to be detected.
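A sketch of S301-S303 under the same assumptions is given below; mapping a feature-map cell back to picture coordinates assumes the backbone downsamples the 512 × 512 picture uniformly to an n × n map.

```python
import torch

def detect(model, classifier, picture, img_size=512):
    """Return the flaw position in pixel coordinates, or None if no flaw is found."""
    with torch.no_grad():
        final_map = model(picture.unsqueeze(0))           # S301: (1, 1, n, n)
        logits = classifier(final_map.flatten(1))         # S302: classify the picture
        if logits.argmax(dim=1).item() == 0:              # 0 = flaw-free
            return None
        n = final_map.shape[-1]
        idx = final_map.view(-1).argmax().item()          # S303: maximum of the matrix
        row, col = divmod(idx, n)
        scale = img_size / n                              # map cell -> pixel block
        return int((row + 0.5) * scale), int((col + 0.5) * scale)
```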
It should be noted that the solution of the embodiment of the present application can be applied to various defect detection scenarios, for example, defect detection of magnetic materials.
In the solution provided by the embodiments of the present application, a weight layer follows the neural network and is multiplied element-wise with the feature map output by the neural network, thereby increasing the weight of the positions where flaws exist in that feature map. In the resulting final feature map, the position corresponding to the maximum value of the matrix is, with high probability, the position of the flaw in the picture. Flaw position detection is thereby achieved, to a certain extent, within the flaw detection process, addressing the problem that current flaw detection does not reflect the position of a flaw in the picture.
Embodiment 2:
Based on the same inventive concept, the embodiments of the present application also provide a training apparatus for the flaw detection model and a flaw detection apparatus. Referring to Figs. 4 and 5, Fig. 4 shows the training apparatus 100 corresponding to the method of Fig. 2, and Fig. 5 shows the flaw detection apparatus 200 corresponding to the method of Fig. 3. It should be understood that the specific functions of the training apparatus 100 and the flaw detection apparatus 200 can be found in the description above; detailed description is appropriately omitted here to avoid redundancy. The training apparatus 100 and the flaw detection apparatus 200 each include at least one software functional module that can be stored in memory in the form of software or firmware or solidified in the operating system of the apparatus. Specifically:
Referring to Fig. 4, the training apparatus 100 includes: a first processing module 101, a first input module 102, a judging module 103, and a back-propagation module 104. Specifically:
The first processing module 101 is configured to process training set pictures with the flaw detection model provided in Embodiment 1, the training set pictures including flaw-free training set pictures and defective training set pictures labeled with flaw positions;
the first input module 102 is configured to input the final feature map output by the flaw detection model for each training set picture into a preset loss function;
the judging module 103 is configured to judge whether the loss value of the loss function has converged and to end the training when it has;
and the back-propagation module 104 is configured, when the loss has not converged, to perform back-propagation according to the loss value, update the parameters of the neural network and the matrix values of the weight layer in the flaw detection model, and have the processing module, the input module, and the judging module 103 repeat the above process in sequence.
In one possible implementation of the embodiments of the present application, the loss function is a classification loss function plus a flaw position loss function.
In the above possible implementation, the classification loss function is a cross-entropy function or a softmax function, and the flaw position loss function is L = ∑_{n∈N} (T_n(p1) − I_n(p2))², where I_n denotes the n-th defective training set picture, T_n denotes the final feature map corresponding to the n-th defective training set picture, T_n(p1) is the position of the maximum value in the matrix corresponding to that final feature map, I_n(p2) is the flaw position labeled in the defective training set picture, and N is the total number of defective training set pictures.
As shown in Fig. 5, the flaw detection apparatus 200 includes: a second input module 201, a second processing module 202, and a classification module 203. Specifically:
The second input module 201 is configured to input the picture to be detected into the flaw detection model provided in Embodiment 1;
the second processing module 202 is configured to obtain the final feature map through the flaw detection model;
the classification module 203 is configured to input the final feature map into a preset classifier to obtain the classification result for the picture to be detected;
and the second processing module 202 is further configured to output the position of the maximum value in the matrix corresponding to the final feature map when the classification result indicates that a flaw exists.
It should be understood that, for brevity, contents already described in the other embodiments are not repeated in this embodiment.
Embodiment 3:
the present embodiment provides an electronic device, which is shown in fig. 6 and includes a data input/output interface 601, a processor 602, a memory 603, and a communication bus 604. Wherein:
the data input/output interface 601 is used for acquiring a training set picture or a picture to be detected.
The communication bus 604 is used to realize connection communication among the data input/output interface 601, the processor 602, and the memory 603.
The processor 602 is configured to execute one or more programs stored in the memory 603 to implement the training method for the flaw detection model shown in Fig. 2 of Embodiment 1, or to implement the flaw detection process using the flaw detection model shown in Fig. 3 of Embodiment 1.
It will be appreciated that the configuration shown in Fig. 6 is merely illustrative; the electronic device may include more or fewer components than shown, or have a different configuration, and may for example also have a communication interface, a display screen, and the like.
The present embodiment further provides a readable storage medium, such as a floppy disk, an optical disc, a hard disk, a flash memory, an SD (Secure Digital) card, an MMC (Multimedia Card) card, etc., in which one or more programs implementing the above steps are stored. The one or more programs can be executed by one or more processors to implement the training method for the flaw detection model in Embodiment 1 or Embodiment 2, or to implement the flaw detection process using the flaw detection model in Embodiment 1 or Embodiment 2. Details are not repeated here.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
In this context, a plurality means two or more.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.
Claims (10)
1. A flaw detection model, comprising:
a neural network for extracting features from an input picture and outputting a feature map; and
a weight layer for performing element-wise (dot) multiplication with the feature map output by the neural network to increase the weight of the positions where flaws exist in that feature map, and for outputting the weight-adjusted final feature map;
wherein the weight layer is a matrix of the same size as the feature map output by the neural network, and the matrix values of the weight layer vary with the matrix values of that feature map.
2. A training method for a flaw detection model, comprising:
processing training set pictures using the flaw detection model of claim 1, the training set pictures including flaw-free training set pictures and defective training set pictures, the defective training set pictures being labeled with flaw positions;
inputting the final feature map output by the flaw detection model for each training set picture into a preset loss function;
judging whether the loss value of the loss function has converged;
if not, performing back-propagation according to the loss value, updating the parameters of the neural network and the matrix values of the weight layer in the flaw detection model, and repeating the above process;
and if it has converged, ending the training to obtain the trained flaw detection model.
3. The training method of claim 2, wherein the loss function is a classification loss function plus a flaw position loss function.
4. A training method as claimed in claim 3, wherein the classification loss function is a cross-entropy function or a softmax function.
5. The training method of claim 3 or 4, wherein the flaw position loss function is L = ∑_{n∈N} (T_n(p1) − I_n(p2))²;
wherein I_n denotes the n-th defective training set picture, T_n denotes the final feature map corresponding to the n-th defective training set picture, T_n(p1) is the position of the maximum value in the matrix corresponding to that final feature map, I_n(p2) is the flaw position labeled in the defective training set picture, and N is the total number of defective training set pictures.
6. A flaw detection method, comprising:
inputting the picture to be detected into the flaw detection model of claim 1 to obtain a final feature map;
inputting the final feature map into a preset classifier to obtain a classification result for the picture to be detected;
and, when the classification result indicates that a flaw exists, outputting the position of the maximum value in the matrix corresponding to the final feature map.
7. A training apparatus for a flaw detection model, comprising: a first processing module, a first input module, a judging module, and a back-propagation module;
wherein the first processing module is configured to process training set pictures using the flaw detection model of claim 1, the training set pictures including flaw-free training set pictures and defective training set pictures, the defective training set pictures being labeled with flaw positions;
the first input module is configured to input the final feature map output by the flaw detection model for each training set picture into a preset loss function;
the judging module is configured to judge whether the loss value of the loss function has converged and to end the training when it has;
and the back-propagation module is configured, when the loss has not converged, to perform back-propagation according to the loss value, update the parameters of the neural network and the matrix values of the weight layer in the flaw detection model, and have the processing module, the input module, and the judging module repeat the above process in sequence.
8. A flaw detection apparatus, comprising: a second input module, a second processing module, and a classification module;
wherein the second input module is configured to input the picture to be detected into the flaw detection model of claim 1;
the second processing module is configured to obtain the final feature map through the flaw detection model;
the classification module is configured to input the final feature map into a preset classifier to obtain a classification result for the picture to be detected;
and the second processing module is further configured to output the position of the maximum value in the matrix corresponding to the final feature map when the classification result indicates that a flaw exists.
9. An electronic device comprising a data input/output interface, a processor, a memory, and a communication bus;
the communication bus is used for realizing the connection communication among the data input/output interface, the processor and the memory;
the data input/output interface is used for acquiring a training set picture or acquiring a picture to be detected;
the processor is configured to execute one or more programs stored in the memory to implement the training method for the flaw detection model according to any one of claims 2-5, or to implement the flaw detection method according to claim 6.
10. A readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the training method for the flaw detection model according to any one of claims 2-5, or to implement the flaw detection method according to claim 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010520818.1A CN111681228A (en) | 2020-06-09 | 2020-06-09 | Flaw detection model, training method, detection method, apparatus, device, and medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111681228A true CN111681228A (en) | 2020-09-18 |
Family
ID=72454436
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010520818.1A Pending CN111681228A (en) | 2020-06-09 | 2020-06-09 | Flaw detection model, training method, detection method, apparatus, device, and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111681228A (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107330405A (en) * | 2017-06-30 | 2017-11-07 | 上海海事大学 | Remote sensing images Aircraft Target Recognition based on convolutional neural networks |
CN110298387A (en) * | 2019-06-10 | 2019-10-01 | 天津大学 | Incorporate the deep neural network object detection method of Pixel-level attention mechanism |
Non-Patent Citations (1)
Title |
---|
武玉伟 (Wu Yuwei), ed.: "Fundamentals and Applications of Deep Learning" (《深度学习基础与应用》), Beijing Institute of Technology Press, 30 April 2020 *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112129784A (en) * | 2020-09-23 | 2020-12-25 | 创新奇智(青岛)科技有限公司 | Visual detection device, visual detection method, electronic device, and storage medium |
WO2023019636A1 (en) * | 2021-08-18 | 2023-02-23 | 浙江工商大学 | Defect point identification method based on deep learning network |
US11615523B2 (en) | 2021-08-18 | 2023-03-28 | Zhejiang Gongshang University | Methods for recognizing small targets based on deep learning networks |
CN114820618A (en) * | 2022-06-29 | 2022-07-29 | 心鉴智控(深圳)科技有限公司 | Defect detection model training method, device, equipment and storage medium |
CN114820618B (en) * | 2022-06-29 | 2022-09-13 | 心鉴智控(深圳)科技有限公司 | Defect detection model training method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20200918 |