CN111861966A - Model training method and device and defect detection method and device - Google Patents


Info

Publication number
CN111861966A
Authority
CN
China
Prior art keywords
area
label
training
defect
detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910312755.8A
Other languages
Chinese (zh)
Other versions
CN111861966B (English)
Inventor
Chen Jiawei (陈佳伟)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Priority application: CN201910312755.8A
PCT application: PCT/CN2020/085205 (published as WO2020211823A1)
Published as CN111861966A; application granted and published as CN111861966B
Legal status: Active


Classifications

    • G06T7/00 Image analysis (G — Physics; G06 — Computing; G06T — Image data processing or generation, in general)
    • G06T7/0008 Industrial image inspection checking presence/absence
    • G06T7/77 Determining position or orientation of objects or cameras using statistical methods
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30124 Fabrics; Textile; Paper

Abstract

The application provides a model training method and apparatus. The method includes: acquiring multiple frames of labeled training samples, where the labels include at least a first label and a second label, the first label recording that a marked first region in a training sample is a region prone to false detection, and the second label recording that a marked second region in a training sample is a region containing a defect; and training a detection model for defect detection using each labeled training sample together with the position information of the first region and the second region in each training sample. By adding labels for regions prone to false detection, the model can strengthen its learning of the features of such regions during training according to the added labels and positions, reducing the detection model's false detection rate and improving its detection accuracy.

Description

Model training method and device and defect detection method and device
Technical Field
The present application relates to the field of computer technologies, and in particular, to a model training method and apparatus, and a defect detection method and apparatus.
Background
In industrial production, defect detection (ASI) is required for manufactured products. For example, after cloth is produced in the textile industry, it must be inspected for flaws or defects so that they can be repaired promptly, improving the quality of the cloth. Although many traditional defect detection methods exist, they either detect poorly or are computationally expensive and inefficient.
At present, deep-learning-based neural networks are used for defect detection, but their false detection rate is still high.
Disclosure of Invention
In view of the above, the present application provides a defect detection method and apparatus to solve the problem of the high false detection rate in the related art.
According to a first aspect of embodiments of the present application, there is provided a model training method, the method including:
acquiring multiple frames of labeled training samples, where the labels include at least a first label and a second label, the first label being used to record that a marked first region in a training sample is a region prone to false detection, and the second label being used to record that a marked second region in a training sample is a region containing a defect;
and training a detection model for defect detection using each labeled training sample together with the position information of the first region and the second region in each training sample.
According to a second aspect of embodiments of the present application, there is provided a model training apparatus, the apparatus comprising:
an acquisition module, configured to acquire multiple frames of labeled training samples, where the labels include at least a first label and a second label, the first label being used to record that a marked first region in a training sample is a region prone to false detection, and the second label being used to record that a marked second region in a training sample is a region containing a defect;
and a training module, configured to train a detection model for defect detection using each labeled training sample together with the position information of the first region and the second region in each training sample.
By applying the embodiments of the present application, labels for regions prone to false detection are added when the detection model is trained, so that learning of the features of such regions is strengthened according to the added labels and position information, thereby reducing the model's false detection rate and improving its detection accuracy.
According to a third aspect of embodiments of the present application, there is provided a defect detection method applying the detection model according to the first aspect, the method including:
inputting a target image to be detected into the detection model, and detecting the target image with the detection model to obtain a target defect probability value for each pixel in the target image;
and determining, according to the target defect probability value of each pixel, whether a defect region exists in the target image to be detected.
According to a fourth aspect of embodiments of the present application, there is provided a defect detection apparatus applying the detection model according to the first aspect, the apparatus comprising:
a detection module, configured to input a target image to be detected into the detection model so that the detection model detects the target image and obtains a target defect probability value for each pixel in the target image;
and a determining module, configured to determine, according to the target defect probability value of each pixel, whether a defect region exists in the target image to be detected.
By applying the embodiments of the present application, the defect detection model is trained on samples to which labels for regions prone to false detection have been added. These labels and the corresponding position information strengthen the learning of the features of such regions, so defect detection performed with this model has a low false detection rate and high detection accuracy.
According to a fifth aspect of the embodiments of the present application, there is provided a method for acquiring a defect detection sample, the method including:
detecting the defect regions present in each acquired training sample;
if the detected defect regions include a region that does not overlap any second region marked with a second label, determining that region to be a region prone to false detection and marking it with a first label, thereby obtaining a training sample carrying both the first label and the second label;
where each acquired training sample carries a second label, the second label being used to record that a marked second region in the training sample is a region containing a defect.
According to a sixth aspect of embodiments of the present application, there is provided an apparatus for obtaining a defect inspection sample, the apparatus including:
a detection module, configured to detect the defect regions present in each acquired training sample;
a labeling module, configured to determine, if the detected defect regions include a region that does not overlap any second region marked with a second label, that the region is prone to false detection, and to mark it with a first label, thereby obtaining a training sample carrying both the first label and the second label;
where each acquired training sample carries a second label, the second label being used to record that a marked second region in the training sample is a region containing a defect.
By applying the embodiments of the present application, regions prone to false detection are obtained automatically by comparing detected defect regions with the annotated defect regions and are then labeled. Compared with manual annotation, this is more accurate and saves labor cost.
Drawings
Fig. 1A is a diagram of a chopstick to be detected according to an exemplary embodiment;
FIG. 1B is a defect map of a chopstick obtained with the related art, shown according to an exemplary embodiment of the present application;
FIG. 2A is a flow chart of an embodiment of a model training method shown herein according to an exemplary embodiment;
FIG. 2B is a schematic diagram illustrating a cloth defect type according to the embodiment of FIG. 2A;
FIG. 2C is a schematic diagram of a model training architecture according to the embodiment of FIG. 2A;
FIG. 2D is a schematic diagram of another model training architecture according to the embodiment of FIG. 2A;
FIG. 3A is a flow chart illustrating an embodiment of a defect detection method according to an exemplary embodiment of the present application;
FIG. 3B is a schematic diagram of a defect detection structure according to the embodiment shown in FIG. 3A;
FIG. 3C is a defect view of a chopstick shown in the embodiment of FIG. 3A;
FIG. 3D is a diagram of a piece of cloth to be inspected according to the embodiment shown in FIG. 3A;
FIG. 3E is a defect diagram of a cloth according to the embodiment of FIG. 3A;
FIG. 4 is a flowchart illustrating an embodiment of a method for obtaining a defect inspection sample according to an exemplary embodiment of the present application;
FIG. 5 is a diagram illustrating a hardware configuration of an electronic device according to an exemplary embodiment of the present application;
FIG. 6 is a block diagram illustrating an embodiment of a model training apparatus according to an exemplary embodiment of the present application;
FIG. 7 is a block diagram illustrating an embodiment of a defect detection apparatus according to an exemplary embodiment of the present application;
fig. 8 is a block diagram illustrating an embodiment of a defect inspection sample acquiring apparatus according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "when", "while", or "in response to determining", depending on the context.
The training process of currently used deep-learning neural networks is as follows: acquire sample images of a target object, mark the defective and non-defective regions in each sample image with different labels, and then train a neural network model with the sample images and their labels. However, detection results obtained with such models suffer from overfitting, resulting in a high false detection rate.
In one example, fig. 1A shows a target object to be inspected. Fig. 1A is input into a trained network model, which detects it; the defect map determined from the detection result is fig. 1B. In fig. 1B, black areas are non-defective and white areas are defective, but the white areas enclosed by dotted lines (region 1, region 2, region 3, region 4, and region 5) have all been falsely detected as defective.
To solve the above problems, the present application provides a model training method: multiple frames of labeled training samples are acquired, where the labels include at least a first label and a second label, the first label recording that a marked first region in a training sample is a region prone to false detection, and the second label recording that a marked second region is a defective region; a detection model for defect detection is then trained using each labeled training sample together with the position information of the first and second regions in each sample.
Based on this, labels for regions prone to false detection are added when the detection model is trained, so that learning of the features of such regions is strengthened according to the added labels and position information, reducing the model's false detection rate and improving its detection accuracy.
The model training method proposed in the present application is explained in detail below with specific embodiments.
Fig. 2A is a flowchart illustrating an embodiment of a model training method according to an exemplary embodiment of the present application, where the model training method includes the following steps:
step 201: the method comprises the steps of obtaining a multi-frame training sample with labels, wherein the labels at least comprise a first label and a second label, the first label is used for recording a first marked area in the training sample as an easily-false-detected area, and the second label is used for recording a second marked area in the training sample as an area with defects.
For example, training samples may be selected or captured according to actual detection requirements, and objects in each training sample belong to the same category.
Assuming the trained model is used to detect defects on the surface of cloth, multiple frames of cloth training samples can be obtained. As shown in fig. 2B, the black-circled area in panel (a) shows a wrong-yarn defect, in panel (b) a broken-needle defect, in panel (c) an open-width-line defect, and in panel (d) a hole defect. Assuming the trained model is used to detect defects (e.g., pits) on the surface of chopsticks, multiple frames of chopstick training samples can be obtained.
In an embodiment, for the labeling of the first label, the defect regions present in each training sample may be detected; if the detected defect regions include a region that does not overlap any second region marked with a second label, that region is determined to be prone to false detection and is marked with a first label, yielding a training sample carrying both the first and second labels.
The labeling process of the second label for marking the defect area can be implemented by related technologies, and will not be described in detail herein.
It is worth noting that the labeled training sample may further include a third label for recording a third area of the training sample that is labeled as an area where no defect exists.
The specific forms of the first, second, and third labels are not limited, as long as the three kinds of regions can be distinguished.
In an exemplary scenario, again as shown in fig. 1B, the white areas circled with dotted lines (regions 1 through 5) are regions prone to false detection marked with the first label, the remaining white areas are defective regions marked with the second label, and the black areas are non-defective regions marked with the third label.
It will be appreciated by those skilled in the art that different labels may further be used for the defect regions to distinguish the defect types.
Step 202: and training a detection model for detecting the defects by using the training samples with the labels, the position information of the first region and the position information of the second region in the training samples.
In an embodiment, as shown in fig. 2C, when the detection model is a single model, it may be trained by inputting each labeled training sample, together with the position information of the first, second, and third regions in each sample, into a designated model training engine, which then uses these inputs to train the detection model for defect detection.
For example, during training by the model training engine, the weight of the first label is greater than the weights of the second and third labels, so that during gradient backpropagation more capacity is devoted to learning the features of regions prone to false detection, reducing the false detection rate.
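The label weighting described above can be sketched as a pixel-wise weighted cross-entropy loss. This is an illustrative sketch, not the patent's implementation; the label codes and weight values are assumptions:

```python
import numpy as np

# Assumed label codes: 0 = no defect (third label),
# 1 = defect (second label), 2 = prone to false detection (first label).
# The first label gets the largest weight, as the text suggests.
LABEL_WEIGHTS = {0: 1.0, 1: 1.0, 2: 3.0}

def weighted_pixel_loss(pred_prob, label_map):
    """Weighted binary cross-entropy over a per-pixel label map.

    pred_prob: (H, W) array of predicted defect probabilities.
    label_map: (H, W) integer array with the codes above. Pixels with
    the first label count as 'no defect' ground truth but contribute
    more to the loss, forcing the model to learn their features.
    """
    eps = 1e-7
    p = np.clip(pred_prob, eps, 1 - eps)
    target = (label_map == 1).astype(float)  # only second-label pixels are defects
    weights = np.vectorize(LABEL_WEIGHTS.get)(label_map)
    bce = -(target * np.log(p) + (1 - target) * np.log(1 - p))
    return float((weights * bce).mean())
```

With this weighting, a false positive on a first-label pixel costs three times as much as the same error elsewhere, which is one simple way to realize the "more capacity for false-detection-prone regions" idea.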
In another embodiment, as shown in fig. 2D, when the detection model includes a first sub-detection model and a second sub-detection model, the first sub-detection model may be trained by inputting each labeled training sample and the position information of the first region in each sample into a designated model training engine; the second sub-detection model may be trained by inputting each labeled training sample and the position information of the second and third regions in each sample into the designated model training engine.
The first sub-detection model outputs, for each pixel in a training sample, the probability that any defect there is easy to distinguish: the larger the value, the easier the defect is to distinguish; the smaller the value, the harder. The second sub-detection model outputs, for each pixel, the probability that a defect exists: the larger the value, the more likely a defect.
For example, the first sub-detection model may be trained with a deep-learning method, and the network structure it adopts may be a convolutional neural network comprising convolutional layers, pooling layers, BN (batch normalization) layers, fully connected layers, and other computational layers. The second sub-detection model may be obtained by the same method as the first, or by a different one; the present application does not limit this.
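The layer types just named can be illustrated with a minimal shape-propagation helper. The layer sizes below are hypothetical, chosen purely to show how such a convolution / BN / pooling stack transforms an input image:

```python
# Illustrative layer list of the kinds the description names;
# this tracks shapes only, it performs no actual learning.
LAYERS = [
    ("conv", {"out_channels": 16, "kernel": 3, "stride": 1, "pad": 1}),
    ("bn",   {}),
    ("pool", {"kernel": 2, "stride": 2}),
    ("conv", {"out_channels": 32, "kernel": 3, "stride": 1, "pad": 1}),
    ("bn",   {}),
    ("pool", {"kernel": 2, "stride": 2}),
]

def output_shape(h, w, c, layers=LAYERS):
    """Propagate an input shape (h, w, c) through the layer list."""
    for kind, p in layers:
        if kind == "conv":
            h = (h + 2 * p["pad"] - p["kernel"]) // p["stride"] + 1
            w = (w + 2 * p["pad"] - p["kernel"]) // p["stride"] + 1
            c = p["out_channels"]
        elif kind == "pool":
            h = (h - p["kernel"]) // p["stride"] + 1
            w = (w - p["kernel"]) // p["stride"] + 1
        # "bn" normalizes activations but leaves the shape unchanged
    return h, w, c
```

For a per-pixel probability output, such a backbone would be followed by upsampling or a per-pixel head; the sketch stops at the feature extractor.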
In the embodiments of the present application, multiple frames of labeled training samples are acquired, the labels including at least a first label and a second label: the first label records that a marked first region in a training sample is a region prone to false detection, and the second label records that a marked second region is a region containing a defect. A detection model for defect detection is then trained using each labeled training sample together with the position information of the first and second regions in each sample.
Based on this, labels for regions prone to false detection are added when the detection model is trained, so that learning of the features of such regions is strengthened according to the added labels and position information, reducing the model's false detection rate and improving its detection accuracy.
Fig. 3A is a flowchart of an embodiment of a defect detection method according to an exemplary embodiment of the present application, where the present embodiment is based on the embodiment shown in fig. 2A, and the defect detection is implemented by applying the detection model, and the defect detection method includes the following steps:
step 301: and inputting the target image to be detected into a detection model, and detecting the target image to be detected by the detection model to obtain the target defect probability value of each pixel point in the target image to be detected.
In one embodiment, as shown in fig. 3B, when the detection model includes a first sub-detection model and a second sub-detection model, the target image to be detected is input into the first sub-detection model, which detects it and outputs a first candidate probability value for each pixel, representing the probability that any defect there is easy to distinguish. Meanwhile, the target image is input into the second sub-detection model, which detects it and outputs a second candidate probability value for each pixel, representing the probability that a defect exists. The first and second candidate probability values of each pixel are then fused to obtain that pixel's target defect probability value.
The output of the first sub-detection model represents, for each pixel of the target image, the probability that any defect is easy to distinguish: the larger the value, the easier the defect is to distinguish. The output of the second sub-detection model represents the probability that a defect exists: the larger the value, the more likely a defect. The two detection results therefore need to be fused to obtain the target defect probability of each pixel.
For the process of fusing the first candidate probability value and the second candidate probability value of each pixel point, two fusion modes are described in detail below, and of course, other fusion modes may also be adopted, which is not limited in the present application.
The first fusion mode: for each pixel point, the mean value of the first candidate probability value and the second candidate probability value of the pixel point can be used as the target defect probability value of the pixel point. The expression is as follows:
target defect probability value = 0.5 × (first candidate probability value + second candidate probability value)
The second fusion mode is as follows: for each pixel point, if the first candidate probability value of the pixel point is smaller than a preset value, the first candidate probability value of the pixel point is used as the target defect probability value of the pixel point, and if the first candidate probability value of the pixel point is not smaller than the preset value, the second candidate probability value of the pixel point is used as the target defect probability value of the pixel point. The expression is as follows:
target defect probability value = first candidate probability value, if first candidate probability value < preset value; otherwise, second candidate probability value
If a pixel's first candidate probability value is smaller than the preset value, any defect there is hard to distinguish, and the pixel's second candidate probability value is unreliable (it may well be a false detection), so the first candidate probability value is chosen as the pixel's target defect probability value. If the first candidate probability value is not smaller than the preset value, the defect is easy to distinguish and the second candidate probability value is reliable, so the second candidate probability value is chosen instead.
Assuming that the preset value is 0.5, if the first candidate probability value is less than 0.5, the first candidate probability value is taken as the target defect probability value, and if the first candidate probability value is greater than or equal to 0.5, the second candidate probability value is taken as the target defect probability value.
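The two fusion modes can be written compactly as follows. This is an illustrative sketch; the function names and the default threshold of 0.5 are assumptions consistent with the example above:

```python
def fuse_mean(p1, p2):
    """First fusion mode: average the two candidate probabilities."""
    return 0.5 * (p1 + p2)

def fuse_gated(p1, p2, threshold=0.5):
    """Second fusion mode: if the 'easy to distinguish' probability p1
    is below the preset threshold, the defect is hard to distinguish and
    p2 is unreliable, so keep p1; otherwise trust p2."""
    return p1 if p1 < threshold else p2
```

The gated mode thus lets the first sub-detection model veto uncertain outputs of the second, which is the mechanism that suppresses false detections.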
Based on the above, the target defect probability value of each pixel in the target image is obtained by combining the first and second sub-detection models. Because the first sub-detection model detects whether the defect at each pixel is easy to distinguish, fusing its result with that of the second sub-detection model prevents pixels whose defects are hard to distinguish from being falsely detected as defective.
Step 302: and determining whether the target image to be detected has a defect area or not according to the target defect probability value of each pixel point in the target image to be detected.
In an embodiment, the target defect probability values of the pixels in the target image can be converted into a defect map, which makes defect regions easier to judge. The conversion may be: determine the pixel gray value mapped from each pixel's target defect probability value, and generate the defect map corresponding to the target image from these gray values.
The target defect probability value of a pixel ranges from 0 to 1, while the defect map uses pixel gray values from 0 to 255, so each probability value must be converted into the gray value used by the defect map. The mapping from probability values to gray values can be set in advance from practical experience; the larger the target defect probability value, the larger the mapped gray value.
For example, the area formed by the pixel points in the defect map whose pixel gray values are greater than the preset value may be determined as the area with the defect.
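A minimal sketch of the probability-to-gray conversion and thresholding described here; the linear mapping and the preset gray value of 128 are illustrative assumptions (the text only requires the mapping to be monotonically increasing):

```python
import numpy as np

def prob_to_gray(prob_map):
    """Map per-pixel defect probabilities in [0, 1] to gray values in
    [0, 255]. A linear mapping is assumed for illustration; any
    monotonically increasing mapping would satisfy the description."""
    return np.clip(np.round(prob_map * 255), 0, 255).astype(np.uint8)

def defect_mask(gray_map, gray_threshold=128):
    """Pixels whose gray value exceeds the preset value form the
    defective region of the defect map."""
    return gray_map > gray_threshold
```

Connected pixels that survive the mask would then be grouped into the defect regions shown as white areas in figs. 3C and 3E.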
Taking the defect of the detected chopsticks as an example, inputting the above-mentioned map to be detected 1A into a detection model, detecting the map 1A by the detection model to obtain the target defect probability value of each pixel point in the map 1A, and further converting the target defect probability value of each pixel point in the map to be detected into a defect map, such as the defect map shown in fig. 3C, wherein a black area is a non-defective area, a white area is a defective area, and compared with the defect map 1B obtained by the above-mentioned related technology, the false detection area in the map 1B is filtered by the detection model.
Taking cloth defect detection as an example, the cloth image to be detected shown in fig. 3D is input into the detection model, and the detection model detects the image 3D to obtain the target defect probability value of each pixel point in the image 3D. These target defect probability values are then converted into a defect map, such as the cloth defect map shown in fig. 3E, in which the black area is a defect-free area and the white area is a defect area.
In this embodiment, the model for defect detection is trained from training samples to which labels of easily-false-detected regions have been added. These labels and the position information of the easily-false-detected regions enhance the learning of the features of such regions, so that defect detection implemented with the detection model has a low false detection rate and high detection accuracy.
Fig. 4 is a flowchart illustrating an embodiment of a method for acquiring a defect inspection sample according to an exemplary embodiment of the present application, where the method for acquiring a defect inspection sample includes the following steps:
Step 401: detecting the defect area existing in each acquired training sample.
Each acquired training sample carries a second label, and the second label is used for recording that the marked second area in the training sample is an area with a defect. For the description of the training samples, reference may be made to the related description in step 201, which is not repeated here.
It will be understood by those skilled in the art that the labeling process of the second label can be implemented by the related art, and is not described in detail here.
It should be noted that the detection of the defect region in the training sample may be performed by a detection model used in the related art, or by a conventional detection algorithm, so that the detected defect region can be compared with the marked defect region to obtain the easily-false-detected region.
Step 402: if the detected defect region contains a region that does not overlap with the second region marked by the second label, determining that region to be an easily-false-detected region and marking it with a first label, so as to obtain a training sample with both the first label and the second label.
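The comparison in step 402 can be sketched as follows; the rectangular (x1, y1, x2, y2) region format is an assumption made for illustration, since the embodiment does not fix how region positions are represented.

```python
def boxes_overlap(a, b):
    """Axis-aligned overlap test; each region is (x1, y1, x2, y2)."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def find_easily_false_detected(detected_areas, second_label_areas):
    """Detected defect areas that overlap no second-label (marked defect)
    area are treated as easily-false-detected areas, which would then be
    marked with the first label."""
    return [det for det in detected_areas
            if not any(boxes_overlap(det, marked) for marked in second_label_areas)]
```

A detected area that coincides with a marked defect is kept out of the result; only the non-overlapping detections, i.e. the false alarms of the baseline detector, are returned for first-label marking.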
In an embodiment, each training sample obtained may further carry a third label, and the third label is used to record that the labeled third area in the training sample is an area without defects.
It will be understood by those skilled in the art that the labeling process of the third label can also be implemented by the related art, and is not described in detail here.
In this embodiment, since the easily-false-detected region is obtained automatically by comparing the detected defect region with the marked defect region, marking the easily-false-detected region in this way is more accurate than manual marking and saves labor cost.
Fig. 5 is a hardware block diagram of an electronic device according to an exemplary embodiment of the present application, where the electronic device includes: a communication interface 501, a processor 502, a machine-readable storage medium 503, and a bus 504; wherein the communication interface 501, the processor 502 and the machine-readable storage medium 503 are in communication with each other via a bus 504. The processor 502 may execute the above-described method for generating a detection model by reading and executing machine executable instructions in the machine readable storage medium 503 corresponding to the control logic of the method for generating a detection model, and the specific contents of the method are described in the above embodiments and will not be described herein again.
The machine-readable storage medium 503 referred to herein may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions and data. For example, the machine-readable storage medium may be volatile memory, non-volatile memory, or a similar storage medium. In particular, the machine-readable storage medium 503 may be a RAM (Random Access Memory), a flash memory, a storage drive (e.g., a hard disk drive), any type of storage disk (e.g., an optical disk, a DVD, etc.), a similar storage medium, or a combination thereof.
FIG. 6 is a block diagram of an embodiment of a model training apparatus according to an exemplary embodiment of the present application, the model training apparatus comprising:
the acquiring module 610 is configured to acquire multiple frames of labeled training samples, where the labels at least include a first label and a second label, the first label is used to record that the marked first area in the training sample is an easily-false-detected area, and the second label is used to record that the marked second area in the training sample is an area with a defect;
and the training module 620 is configured to train a detection model for detecting defects by using each labeled training sample and the position information of the first region and of the second region in each training sample.
In an alternative implementation, the tag may further include a third tag; the third label is used for recording a marked third area in the training sample as an area without defects;
the training module 620 is specifically configured to input each labeled training sample and the position information of the first region, the position information of the second region, and the position information of the third region in each training sample to a specified model training engine, so that the model training engine trains a detection model for detecting a defect by using the input labeled training sample, the position information of the first region, the position information of the second region, and the position information of the third region.
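As a sketch of how the three kinds of region positions might be turned into per-pixel supervision for the model training engine: the rectangular (x1, y1, x2, y2) region format and the per-pixel target masks are assumptions made for illustration, since the embodiment only states that the position information is fed to the engine.

```python
import numpy as np

def build_target_masks(shape, first_areas, second_areas, third_areas):
    """Build per-pixel supervision masks from the three label types:
    first label  -> easily-false-detected areas,
    second label -> defect areas,
    third label  -> defect-free areas."""
    easy_false = np.zeros(shape, dtype=np.float32)
    defect = np.zeros(shape, dtype=np.float32)
    normal = np.zeros(shape, dtype=np.float32)
    for x1, y1, x2, y2 in first_areas:
        easy_false[y1:y2, x1:x2] = 1.0
    for x1, y1, x2, y2 in second_areas:
        defect[y1:y2, x1:x2] = 1.0
    for x1, y1, x2, y2 in third_areas:
        normal[y1:y2, x1:x2] = 1.0
    return easy_false, defect, normal
```

Masks of this form let a training engine penalize defect responses inside the easily-false-detected areas while rewarding them inside the marked defect areas.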
In an alternative implementation, the detection model includes a first sub-detection model and a second sub-detection model;
the training module 620 is further specifically configured to input each labeled training sample and the position information of the first region in each training sample into a specified model training engine, so that the model training engine trains the first sub-detection model by using the input labeled training samples and the position information of the first region; and to input each labeled training sample and the position information of the second region and of the third region in each training sample into the specified model training engine, so that the model training engine trains the second sub-detection model by using the input labeled training samples, the position information of the second region and the position information of the third region.
In an optional implementation manner, the obtaining module 610 is specifically configured to detect, for each training sample, the defect region existing in the training sample; and, if the detected defect region contains a region that does not overlap with the second region marked by the second label, to determine that region to be an easily-false-detected region and mark it with a first label, so as to obtain a training sample with both the first label and the second label.
Fig. 7 is a block diagram of an embodiment of a defect detection apparatus according to an exemplary embodiment of the present application, the defect detection apparatus including:
the detection module 710 is configured to input a target image to be detected into a detection model, so that the detection model detects the target image to be detected, and a target defect probability value of each pixel point in the target image to be detected is obtained;
the determining module 720 is configured to determine whether a defect region exists in the target image to be detected according to the target defect probability value of each pixel point in the target image to be detected.
In an alternative implementation, the detection model includes a first sub-detection model and a second sub-detection model;
the detection module 710 is specifically configured to input the target image to be detected into the first sub-detection model, so that the first sub-detection model detects the target image and obtains a first candidate probability value of each pixel point, where the first candidate probability value indicates the probability of an easily-distinguished defect; to input the target image to be detected into the second sub-detection model, so that the second sub-detection model detects the target image and obtains a second candidate probability value of each pixel point, where the second candidate probability value indicates the probability of a defect; and to fuse the first candidate probability value and the second candidate probability value of each pixel point to obtain the target defect probability value of each pixel point.
In an optional implementation manner, when fusing the first and second candidate probability values, the detection module 710 is specifically configured to take, for each pixel point, the mean of the first candidate probability value and the second candidate probability value of the pixel point as the target defect probability value of the pixel point.
In an optional implementation manner, when fusing the first and second candidate probability values, the detection module 710 is specifically configured to, for each pixel point: if the first candidate probability value of the pixel point is smaller than a preset value, take the first candidate probability value as the target defect probability value of the pixel point; otherwise, take the second candidate probability value as the target defect probability value of the pixel point.
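The two fusion strategies just described can be sketched as follows; the default preset value of 0.5 is an assumed placeholder.

```python
import numpy as np

def fuse_mean(p1, p2):
    """Per-pixel mean of the two candidate probability values."""
    return (np.asarray(p1) + np.asarray(p2)) / 2.0

def fuse_switch(p1, p2, preset=0.5):
    """Where the first candidate value (easily-distinguished defects)
    is below the preset value, keep it; otherwise fall back to the
    second candidate value."""
    p1, p2 = np.asarray(p1), np.asarray(p2)
    return np.where(p1 < preset, p1, p2)
```

The switching variant lets the first sub-detection model veto obvious non-defects, while uncertain pixels defer to the second sub-detection model.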
Fig. 8 is a block diagram of an embodiment of an apparatus for acquiring a defect inspection sample according to an exemplary embodiment of the present application, where the apparatus includes:
a detecting module 810, configured to detect, for each acquired training sample, a defect region existing in the training sample;
A labeling module 820, configured to determine, if an area that does not overlap with a second area marked with a second label exists in the detected defect area, that the area is an area easy to be falsely detected, and label the area with the first label to obtain a training sample with the first label and the second label;
Each acquired training sample carries a second label, and the second label is used for recording that the marked second area in the training sample is an area with a defect.
In an optional implementation manner, each acquired training sample may further carry a third label, where the third label is used to record that the labeled third area in the training sample is an area where no defect exists.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (13)

1. A method of model training, the method comprising:
acquiring a plurality of frames of training samples with labels, wherein the labels at least comprise a first label and a second label, the first label is used for recording a first marked area in the training samples as an easily-false-detected area, and the second label is used for recording a second marked area in the training samples as an area with defects;
and training a detection model for detecting the defects by using the training samples with the labels, the position information of the first region and the position information of the second region in the training samples.
2. The method of claim 1, wherein the tag further comprises a third tag; the third label is used for recording a marked third area in the training sample as an area without defects;
wherein training a detection model for detecting the defects by using each labeled training sample and the position information of the first region and of the second region in each training sample comprises:
inputting each labeled training sample and the position information of the first area, the position information of the second area and the position information of the third area in each training sample into a specified model training engine, so that the model training engine trains a detection model for detecting the defects by using the input labeled training samples, the position information of the first area, the position information of the second area and the position information of the third area.
3. The method of claim 2, wherein the detection model comprises a first sub-detection model and a second sub-detection model;
wherein training a detection model for detecting the defects by using each labeled training sample and the position information of the first region and of the second region in each training sample comprises:
inputting each labeled training sample and the position information of the first area in each training sample to a specified model training engine, so that the model training engine trains a first sub-detection model by using the input labeled training sample and the position information of the first area;
and inputting each labeled training sample and the position information of the second area and the position information of the third area in each training sample into the specified model training engine, so that the model training engine trains a second sub-detection model by using the input labeled training samples, the position information of the second area and the position information of the third area.
4. The method of claim 1, wherein obtaining a plurality of frames of tagged training samples comprises:
detecting a defect area existing in each training sample;
if a region that does not overlap with the second region marked with the second label exists in the detected defect region, determining the region to be an easily-false-detected region, and marking the region with a first label to obtain a training sample with the first label and the second label.
5. A method for obtaining a defect inspection sample, the method comprising:
detecting a defect area existing in each acquired training sample;
if an area that does not overlap with the second area marked with the second label exists in the detected defect area, determining the area to be an easily-false-detected area, and marking the area with a first label to obtain a training sample with the first label and the second label;
wherein each acquired training sample carries a second label, and the second label is used for recording that the marked second area in the training sample is an area with a defect.
6. The method of claim 5, wherein each training sample is further provided with a third label, and the third label is used to record the marked third area in the training sample as an area without defects.
7. A defect detection method using the detection model of claim 1, wherein the method comprises:
inputting a target image to be detected into a detection model, and detecting the target image to be detected by the detection model to obtain a target defect probability value of each pixel point in the target image to be detected;
and determining whether a defect area exists in the target image to be detected according to the target defect probability value of each pixel point in the target image to be detected.
8. The method of claim 7, wherein the detection model comprises a first sub-detection model and a second sub-detection model;
inputting a target image to be detected into a detection model, comprising:
inputting the target image to be detected into a first sub-detection model, and detecting the target image to be detected by the first sub-detection model to obtain a first candidate probability value of each pixel point in the target image to be detected, wherein the first candidate probability value is used for representing the probability of easily distinguishing defects;
inputting the target image to be detected into a second sub-detection model, and detecting the target image to be detected by the second sub-detection model to obtain a second candidate probability value of each pixel point in the target image to be detected, wherein the second candidate probability value is used for representing the probability of defects;
and fusing the first candidate probability value and the second candidate probability value of each pixel point to obtain the target defect probability value of each pixel point.
9. The method of claim 8, wherein fusing the first candidate probability value and the second candidate probability value for each pixel point comprises:
taking, for each pixel point, the mean value of the first candidate probability value and the second candidate probability value of the pixel point as the target defect probability value of the pixel point.
10. The method of claim 8, wherein fusing the first candidate probability value and the second candidate probability value for each pixel point comprises:
for each pixel point, if the first candidate probability value of the pixel point is smaller than a preset value, taking the first candidate probability value of the pixel point as the target defect probability value of the pixel point;
and if the first candidate probability value of the pixel point is not smaller than the preset value, taking the second candidate probability value of the pixel point as the target defect probability value of the pixel point.
11. A model training apparatus, the apparatus comprising:
the device comprises an acquisition module, a detection module and a processing module, wherein the acquisition module is used for acquiring a plurality of frames of training samples with labels, the labels at least comprise a first label and a second label, the first label is used for recording a marked first area in the training samples as an area easy to falsely detect, and the second label is used for recording a marked second area in the training samples as an area with defects;
and a training module, configured to train a detection model for detecting the defects by using each labeled training sample and the position information of the first region and of the second region in each training sample.
12. An apparatus for obtaining a defect inspection sample, the apparatus comprising:
the detection module is used for detecting a defect area in each acquired training sample;
a labeling module, configured to determine, if an area that does not overlap with the second area marked with the second label exists in the detected defect area, that the area is an easily-false-detected area, and to mark the area with a first label to obtain a training sample with the first label and the second label;
wherein each acquired training sample carries a second label, and the second label is used for recording that the marked second area in the training sample is an area with a defect.
13. A defect inspection apparatus using the inspection model of claim 1, wherein the apparatus comprises:
the detection module is used for inputting the target image to be detected into a detection model, so that the detection model detects the target image to be detected, and the target defect probability value of each pixel point in the target image to be detected is obtained;
and a determining module, configured to determine whether a defect area exists in the target image to be detected according to the target defect probability value of each pixel point in the target image to be detected.
CN201910312755.8A 2019-04-18 2019-04-18 Model training method and device and defect detection method and device Active CN111861966B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910312755.8A CN111861966B (en) 2019-04-18 2019-04-18 Model training method and device and defect detection method and device
PCT/CN2020/085205 WO2020211823A1 (en) 2019-04-18 2020-04-16 Model training method and device, and defect detection method and device

Publications (2)

Publication Number Publication Date
CN111861966A true CN111861966A (en) 2020-10-30
CN111861966B CN111861966B (en) 2023-10-27

Family

ID=72838053

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910312755.8A Active CN111861966B (en) 2019-04-18 2019-04-18 Model training method and device and defect detection method and device

Country Status (2)

Country Link
CN (1) CN111861966B (en)
WO (1) WO2020211823A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112651941A (en) * 2020-12-25 2021-04-13 北京巅峰科技有限公司 Vehicle defect identification method and device, electronic device and storage medium
CN113706462A (en) * 2021-07-21 2021-11-26 南京旭锐软件科技有限公司 Product surface defect detection method, device, equipment and storage medium
CN117282687A (en) * 2023-10-18 2023-12-26 广州市普理司科技有限公司 Automatic mark picking and supplementing control system for visual inspection of printed matter

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103473925A (en) * 2013-08-28 2013-12-25 惠州市德赛工业发展有限公司 Verification method of road vehicle detection system
CN104156734A (en) * 2014-08-19 2014-11-19 中国地质大学(武汉) Fully-autonomous on-line study method based on random fern classifier
CN106548155A (en) * 2016-10-28 2017-03-29 安徽四创电子股份有限公司 A kind of detection method of license plate based on depth belief network
CN107871134A (en) * 2016-09-23 2018-04-03 北京眼神科技有限公司 A kind of method for detecting human face and device
CN107886133A (en) * 2017-11-29 2018-04-06 南京市测绘勘察研究院股份有限公司 A kind of underground piping defect inspection method based on deep learning
CN107966447A (en) * 2017-11-14 2018-04-27 浙江大学 A kind of Surface Flaw Detection method based on convolutional neural networks
CN108562589A (en) * 2018-03-30 2018-09-21 慧泉智能科技(苏州)有限公司 A method of magnetic circuit material surface defect is detected
CN108921111A (en) * 2018-07-06 2018-11-30 南京旷云科技有限公司 Object detection post-processing approach and corresponding intrument
US20190012579A1 (en) * 2017-07-10 2019-01-10 Fanuc Corporation Machine learning device, inspection device and machine learning method
CN109410190A (en) * 2018-10-15 2019-03-01 广东电网有限责任公司 Shaft tower based on High Resolution Remote Sensing Satellites image falls disconnected detection model training method
CN109522968A (en) * 2018-11-29 2019-03-26 济南浪潮高新科技投资发展有限公司 A kind of focal zone detection method and system based on serial double Task Networks
CN109558902A (en) * 2018-11-20 2019-04-02 成都通甲优博科技有限责任公司 A kind of fast target detection method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106875381B (en) * 2017-01-17 2020-04-28 同济大学 Mobile phone shell defect detection method based on deep learning
US10977562B2 (en) * 2017-08-07 2021-04-13 International Business Machines Corporation Filter for harmful training samples in active learning systems
CN109146873B (en) * 2018-09-04 2020-12-29 凌云光技术股份有限公司 Learning-based intelligent detection method and device for defects of display screen
CN109389160A (en) * 2018-09-27 2019-02-26 南京理工大学 Electric insulation terminal defect inspection method based on deep learning


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FANG Xin; SHI Zheng: "Wafer defect detection and classification algorithm based on convolutional neural network", Computer Engineering, no. 08 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112784997A (en) * 2021-01-22 2021-05-11 北京百度网讯科技有限公司 Annotation rechecking method, device, equipment, storage medium and program product
CN112784997B (en) * 2021-01-22 2023-11-10 北京百度网讯科技有限公司 Annotation rechecking method, device, equipment, storage medium and program product

Also Published As

Publication number Publication date
CN111861966B (en) 2023-10-27
WO2020211823A1 (en) 2020-10-22

Similar Documents

Publication Publication Date Title
CN111861966B (en) Model training method and device and defect detection method and device
CN110276754B (en) Surface defect detection method, terminal device and storage medium
KR101917000B1 (en) Methods and systems for inspecting goods
KR20190063839A (en) Method and System for Machine Vision based Quality Inspection using Deep Learning in Manufacturing Process
CN113344857B (en) Defect detection network training method, defect detection method and storage medium
CN110135225B (en) Sample labeling method and computer storage medium
KR20210150970A (en) Detecting defects in semiconductor specimens using weak labeling
CN111126393A (en) Vehicle appearance refitting judgment method and device, computer equipment and storage medium
CN114037672A (en) Image defect detection method and device and computer readable medium
CN111753877B (en) Product quality detection method based on deep neural network migration learning
CN113850749A (en) Method for training defect detector
Yao et al. A feature memory rearrangement network for visual inspection of textured surface defects toward edge intelligent manufacturing
CN113836850A (en) Model obtaining method, system and device, medium and product defect detection method
CN114299040A (en) Ceramic tile flaw detection method and device and electronic equipment
CN108508023A (en) The defect detecting system of end puller bolt is contacted in a kind of railway contact line
CN114972880A (en) Label identification method and device, electronic equipment and storage medium
CN111311545A (en) Container detection method, device and computer readable storage medium
CN115019133A (en) Method and system for detecting weak target in image based on self-training and label anti-noise
CN111311546A (en) Container detection method, device and computer readable storage medium
CN114663687A (en) Model training method, target recognition method, device, equipment and storage medium
Sruthy et al. Car damage identification and categorization using various transfer learning models
JP3322958B2 (en) Print inspection equipment
CN113744252A (en) Method, apparatus, storage medium and program product for marking and detecting defects
CN116629270B (en) Subjective question scoring method and device based on examination big data and text semantics
CN110533657A (en) A kind of liquid crystal display appearance detecting method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant