CN110599453A - Panel defect detection method and device based on image fusion and equipment terminal - Google Patents

Panel defect detection method and device based on image fusion and equipment terminal

Info

Publication number
CN110599453A
CN110599453A
Authority
CN
China
Prior art keywords
sample
image
panel
images
defect
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910731007.3A
Other languages
Chinese (zh)
Inventor
梁勇
张胜森
郑增强
吴川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Jingce Electronic Group Co Ltd
Wuhan Jingli Electronic Technology Co Ltd
Original Assignee
Wuhan Jingce Electronic Group Co Ltd
Wuhan Jingli Electronic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Jingce Electronic Group Co Ltd, Wuhan Jingli Electronic Technology Co Ltd filed Critical Wuhan Jingce Electronic Group Co Ltd
Priority to CN201910731007.3A
Publication of CN110599453A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0004: Industrial image inspection
    • G06T 7/0008: Industrial image inspection checking presence/absence
    • G06T 7/10: Segmentation; Edge detection
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/20212: Image combination
    • G06T 2207/20221: Image fusion; Image merging
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30108: Industrial image inspection

Abstract

The invention discloses a panel defect detection method, device and equipment terminal based on image fusion. The method comprises the following steps: acquiring sample images to form a training set, and marking the defects on each sample image to obtain sample labels; randomly selecting a plurality of sample pairs from the training set, fusing the two sample images in each sample pair and their corresponding sample labels to obtain a training image and a training label, and inputting them into a deep learning model to train it; acquiring panel images to be detected, arbitrarily selecting two of them for fusion, and inputting the resulting predicted image into the trained deep learning model for defect detection. By image fusion, the defect features of two sample images are combined into one fused image; after the model is trained with such fused images, it can detect two panel images at once, thereby doubling the inference speed.

Description

Panel defect detection method and device based on image fusion and equipment terminal
Technical Field
The invention belongs to the technical field of automatic defect detection, and particularly relates to a panel defect detection method and device based on image fusion and an equipment terminal.
Background
In recent years, classification, detection, and segmentation methods based on deep learning have been used increasingly in the field of panel defect detection and are gaining favor with manufacturers. The general workflow of applying a deep neural network to defect detection is as follows: first, sample images are acquired and their defects are marked; the labeled samples are then used to train a neural network so that it learns the sample features; finally, the trained network is used to predict the images to be detected.
In the panel inspection field, deep learning methods such as object detection and semantic segmentation face strict real-time requirements at inference time; for example, an image with a resolution of 3240 × 1920 may need to be processed within 0.5 s. Although a high-performance GPU can meet this requirement, it also increases equipment cost. How to increase the inference rate while reducing cost is therefore an urgent problem for deep-learning-based panel defect detection.
Disclosure of Invention
In view of at least one deficiency or improvement requirement of the prior art, the invention provides a panel defect detection method, device and equipment terminal based on image fusion. The defect features of two sample images are combined into one fused image through image fusion, and the fused images are used to train a deep learning model; the trained model can then detect two panel images at once, doubling the inference speed under unchanged hardware conditions, or reducing the hardware performance requirements at the same inference speed to save cost. The aim is to solve the problem that existing deep-learning-based defect detection methods cannot balance inference efficiency and equipment cost.
To achieve the above object, according to one aspect of the present invention, there is provided a panel defect detecting method based on image fusion, including the steps of:
s1: acquiring sample images to form a training set, and marking the defects on each sample image to obtain a sample label;
s2: randomly selecting a plurality of sample pairs from the training set, and fusing two sample images in each sample pair to obtain a training image; fusing sample labels corresponding to the two sample images to obtain a training label; inputting the training images and the training labels into a deep learning model to train the deep learning model;
s3: acquiring panel images to be detected, arbitrarily selecting two of them for fusion, and inputting the resulting fused image (the predicted image) into the trained deep learning model for defect detection.
Preferably, in the panel defect detection method, when a defect is detected from the predicted image in step S3, two panel images corresponding to the predicted image are input to the deep learning model and detected.
Preferably, in the panel defect detecting method, in the step S2, when the two sample images in each sample pair and the corresponding sample labels are fused, the weight value of any one of the sample images in each sample pair satisfies the beta distribution.
Preferably, in the panel defect detection method, in step S3, when the panel images to be detected are fused, the weight values of the two panel images are each 0.5.
Preferably, the panel defect detecting method further includes, before step S1, the steps of:
s0: and performing enhancement processing on the acquired panel image, and cutting the enhanced panel image into a plurality of sample images with preset pixel sizes.
Preferably, in the panel defect detecting method, in step S2, the sample label includes a defect location and a defect type, and different defect types are marked with different colors for easy distinction.
According to a second aspect of the invention, there is also provided a panel defect detection device based on image fusion, which comprises a labeling unit and a fusion unit;
the marking unit is used for acquiring sample images to form a training set, and marking the defects on each sample image to obtain a sample label;
the fusion unit is used for randomly selecting a plurality of sample pairs from the training set and fusing two sample images in each sample pair to obtain a training image; fusing sample labels corresponding to the two sample images to obtain a training label; inputting the training images and the training labels into a deep learning model to train the deep learning model;
the fusion unit is also used for acquiring panel images to be detected, arbitrarily selecting two of them for fusion, and inputting the obtained predicted image into the trained deep learning model for defect detection.
Preferably, in the panel defect detecting apparatus based on image fusion, the fusion means is further configured to input two panel images corresponding to the predicted image into the deep learning model for detection when a defect is detected from the predicted image.
Preferably, in the panel defect detecting apparatus based on image fusion, when two sample images in each sample pair and corresponding sample labels thereof are fused, the weight value given to any one sample image in each sample pair by the fusing unit satisfies the beta distribution.
Preferably, in the image fusion-based panel defect detection apparatus, when the panel images to be detected are fused, the fusing unit assigns a weight value of 0.5 to the two panel images.
Preferably, the sample label of the panel defect detecting apparatus based on image fusion includes a defect position and a defect type, and different defect types are marked with different colors for easy distinction.
Preferably, the panel defect detecting device based on image fusion further comprises a preprocessing unit and a cutting unit;
the preprocessing unit is used for acquiring a panel image and performing enhancement processing to improve the contrast;
the clipping unit is used for clipping the enhanced panel image into a plurality of sample images with preset pixel size.
According to a third aspect of the present invention, there is also provided a device terminal comprising at least one processing unit, and at least one memory unit, wherein the memory unit stores a computer program that, when executed by the processing unit, causes the processing unit to perform the steps of any of the above-mentioned methods.
According to a fourth aspect of the present invention, there is also provided a computer-readable medium storing a computer program executable by a terminal device, the computer program, when run on the terminal device, causing the terminal device to perform the steps of any of the methods described above.
In general, compared with the prior art, the above technical solution contemplated by the present invention can achieve the following beneficial effects:
(1) According to the panel defect detection method, device and equipment terminal based on image fusion, the defect features of two sample images are combined into one fused image through image fusion, and the fused images are used to train a deep learning model. The trained model can detect two panel images at once, doubling the inference speed under unchanged hardware conditions, or reducing the hardware performance requirements at the same inference speed to save cost.
(2) According to the panel defect detection method, device and equipment terminal based on image fusion, when image fusion is carried out, the weight of the first sample image in each sample pair is a random variable that obeys a beta distribution. Setting the weight randomly prevents the deep learning model from over-fitting and, most importantly, increases its generalization ability, so that the trained model can adapt to panel images with different defect features and different contrasts.
Drawings
FIG. 1 is a flowchart of a panel defect detection method based on image fusion according to an embodiment of the present invention;
FIG. 2 is a block diagram of a deep learning model provided by an embodiment of the present invention;
fig. 3 is a logic block diagram of a panel defect detecting apparatus based on image fusion according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
Fig. 1 is a flowchart of a panel defect detection method based on image fusion, as shown in fig. 1, the method includes the following steps:
s1: acquiring sample images to form a training set, and marking the defects on each sample image to obtain a sample label;
in this embodiment, a plurality of panel images acquired by a camera are collected and preprocessed. The purpose of preprocessing is to enhance image features and improve contrast so that the defect features in the images become more obvious and prominent. The enhancement method is not specifically limited in this embodiment and may be implemented with Gaussian filtering, mean filtering, high-pass filtering, or other commonly used filtering methods.
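For illustration only (not part of the original disclosure), the preprocessing step could be sketched as follows in Python with OpenCV; the specific filter (an unsharp-mask style high-pass boost) and its parameters are assumptions chosen for the sketch, since the embodiment leaves the enhancement method open:

```python
import cv2
import numpy as np

def enhance_panel_image(img: np.ndarray) -> np.ndarray:
    """Boost the contrast of a grayscale panel image so faint defects stand out.

    Unsharp masking: a Gaussian-blurred copy estimates the low frequencies,
    the difference is the high-pass detail, and the detail is added back with
    a gain. This is only one of the filtering options mentioned above.
    """
    blurred = cv2.GaussianBlur(img, (0, 0), sigmaX=5)        # low-pass estimate
    high_pass = cv2.subtract(img, blurred)                   # detail layer
    return cv2.addWeighted(img, 1.0, high_pass, 1.5, 0)      # img + 1.5 * detail
```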
After preprocessing, each panel image is cut into a plurality of sample images of K × K pixels. Because the resolution of a panel image acquired by the camera is high, and only a partial area of one panel image contains defects, this embodiment cuts the complete panel image into several relatively small sample images; in this embodiment, K is 512.
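A minimal sketch of the cropping step (illustrative only; the patent does not specify how image borders are handled, so edge tiles that would extend past the border are simply discarded here):

```python
import numpy as np

def crop_into_tiles(panel: np.ndarray, k: int = 512) -> list:
    """Cut a full-resolution panel image into k x k sample images."""
    h, w = panel.shape[:2]
    tiles = []
    for top in range(0, h - k + 1, k):
        for left in range(0, w - k + 1, k):
            tiles.append(panel[top:top + k, left:left + k].copy())
    return tiles
```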
After cutting, the labeling tool labelme is used to perform pixel-level labeling of the defect features in each sample image to generate a sample label. In this embodiment, the sample label includes the defect position and the defect type, and different defect types are marked with different colors for easy distinction.
S2: randomly selecting a plurality of sample pairs from a training set, respectively fusing two sample images in each sample pair and corresponding sample labels thereof to obtain a training image and a training label, and inputting the training image and the training label into a deep learning model to train the deep learning model;
in this embodiment, a plurality of sample pairs are randomly selected from the large number of sample images in the training set, each sample pair containing two sample images. Two sample images are randomly selected from the training set to form a sample pair; the random selection rule is not particularly limited in this embodiment and can, for example, be implemented with a randperm function. The two sample images in each sample pair are then fused according to a certain weight ratio to obtain a training image, and the sample labels corresponding to the two sample images are fused with the same weight ratio to obtain a training label. In this embodiment, the sample images are labeled first to obtain their respective sample labels, and the two sample images in each sample pair and their corresponding sample labels are then fused. In fact, another approach is also possible: instead of labeling each sample image separately, the training image is labeled directly after it has been obtained, which saves the label-fusion step; however, the contrast of the fused sample image is reduced to some extent, so the defect features are less obvious and the accuracy of the defect labels may be affected.
The image fusion method used here was originally a data augmentation method; this embodiment adopts it to construct new training samples and sample labels. During fusion, the two sample images in each sample pair are blended according to a weight ratio, as follows:

x̃ = λ · x_i + (1 − λ) · x_j

where x̃ denotes the fused image; x_i denotes the first sample image of the sample pair; x_j denotes the second sample image; λ denotes the weight of the first sample image; and (1 − λ) denotes the weight of the second sample image.
in this embodiment, the weight value λ of the first sample image is a random variable, but the weight value of the first sample image in each sample pair obeys beta distribution, a probability density function of the beta distribution has two parameters α, β >0, and in this embodiment, α ═ β is taken. The random setting of the weight value lambda can prevent the deep learning model from being over-fitted, and most importantly, the generalization capability of the deep learning model can be increased, so that the trained model can adapt to various panel images with different defect characteristics and different contrasts. In the field of panel detection, the gray scales of panel images shot by different cameras or panel images shot by the same camera under different environments are different, so that the contrast of defect characteristics is different; obviously, the deep learning model has relatively poor adaptability to low-contrast defect characteristics, and a prediction process may not give appropriate output; in the embodiment, the weighted value lambda is randomly set, so that a low contrast can be randomly given to any defect feature, and after the model is trained by using the fusion image generated in the way, the adaptability of the deep learning model to any defect type with low contrast can be obviously improved.
The training images and training labels generated from the sample pairs are input into the constructed deep learning model to train it. The type of deep learning model is not specifically limited in this embodiment; models such as ResNet, R-CNN, Faster R-CNN, YOLO, and DeepLab v3+ may be selected. In this embodiment, the DeepLab v3+ model is used. As shown in fig. 2, the model contains an encoder and a decoder: the encoder extracts feature values from the image, and the decoder maps the feature values to the corresponding segmented image. The DeepLab v3+ model is trained iteratively with the training images and training labels of the sample pairs; after a certain number of training iterations, its inference accuracy is verified on a validation set, and training ends when the mean intersection-over-union (mIoU) on the validation set no longer improves.
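The stopping criterion ("train until the validation mIoU no longer improves") could look like the sketch below; the mIoU computation over integer label maps and the patience value are illustrative assumptions, and `train_one_epoch` / `evaluate` stand in for whatever training and validation routines the chosen model uses:

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Mean intersection-over-union over the classes present in prediction or target."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0

def train_until_miou_plateaus(train_one_epoch, evaluate, patience: int = 5) -> float:
    """Keep training until validation mIoU has not improved for `patience` epochs."""
    best_miou, epochs_without_gain = 0.0, 0
    while epochs_without_gain < patience:
        train_one_epoch()
        miou = evaluate()
        if miou > best_miou:
            best_miou, epochs_without_gain = miou, 0
        else:
            epochs_without_gain += 1
    return best_miou
```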
In this embodiment, the image fusion method creates more training samples, so the model is less prone to over-fitting during training and the number of panel images in the training set can be reduced. For panel-inspection scenarios in which defective samples are rare, the trained deep learning model can detect two samples at a time during inference, doubling the inference rate; alternatively, at a constant inference rate, a lower-performance GPU can be used to save cost.
S3: acquiring panel images to be detected, arbitrarily selecting two of them for fusion, and inputting the resulting predicted image into the trained deep learning model for defect detection.
When the trained deep learning model is used to detect panel images, two panel images to be detected are first selected arbitrarily and preprocessed as described in step S1 (details are not repeated here). After preprocessing, the two panel images are fused; the weight of each panel image is preferably set to 0.5, because when it is unknown whether the image to be predicted contains defects, or how strong their contrast is, weighting the two images equally is the reasonable choice. The fused predicted image is then input into the trained deep learning model for defect detection. Since the trained model detects two panel images at once, the inference speed is doubled.
When a defect is detected from the predicted image, the two panel images corresponding to the predicted image are input into the deep learning model and each is inferred separately, so as to determine which panel carries the defect. In the application scenario of panel defect detection, most samples are defect-free, so the number of secondary re-judgments is small and inference efficiency is not noticeably affected.
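The inference flow of step S3, including the secondary re-judgment, could be sketched as follows (illustrative only; `model` is assumed to be a callable that maps an image to a segmentation mask in which non-zero pixels mark defects):

```python
import numpy as np

def detect_pair(model, panel_a: np.ndarray, panel_b: np.ndarray) -> dict:
    """Fuse two preprocessed panel images with equal 0.5 weights, run one inference
    pass, and re-judge the individual panels only if the fused prediction shows a defect.
    """
    fused = 0.5 * panel_a.astype(np.float32) + 0.5 * panel_b.astype(np.float32)
    fused_mask = model(fused)

    if not fused_mask.any():            # most panels are defect-free: one pass suffices
        return {"panel_a": None, "panel_b": None}

    # A defect appears in the fused image: infer each panel separately
    # to determine which of the two actually carries it.
    return {"panel_a": model(panel_a), "panel_b": model(panel_b)}
```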
In order to implement the panel defect detection method, this embodiment further provides a panel defect detection apparatus based on image fusion. Fig. 3 is a logic block diagram of the panel defect detection apparatus provided in this embodiment; as shown in fig. 3, the detection apparatus includes a preprocessing unit, a cutting unit, a labeling unit, and a fusion unit;
the preprocessing unit is used for acquiring a panel image acquired by the camera and performing enhancement processing to improve the contrast; the method of enhancement processing is not specifically limited in this embodiment, and may be implemented by gaussian filtering, mean filtering, high-pass filtering, or other commonly used filtering methods.
The cutting unit is used for cutting the enhanced panel image into a plurality of sample images with preset pixel size; in this embodiment, a complete panel image is cropped into multiple sample images with 512 × 512 pixels, which facilitates selection of defect features from the sample images and reduces the processing difficulty of the deep learning model.
The marking unit is used for marking the defects on each sample image to obtain a sample label; in the embodiment, a labeling tool labelme is adopted to perform pixel-level labeling on defect characteristics in each sample image to generate a sample label; the sample label includes a defect location and a defect type, with different defect types being marked with different colors for easy differentiation.
The fusion unit is used for randomly selecting a plurality of sample pairs from the training set, respectively fusing two sample images in each sample pair and corresponding sample labels thereof to obtain a training image and a training label, and inputting the training image and the training label into the deep learning model so as to train the deep learning model;
in this embodiment, the fusion unit randomly selects two sample images from the training set to form a sample pair; the random selection rule is not specifically limited and can, for example, be implemented with a randperm function. The fusion unit then fuses the two sample images in each sample pair according to a certain weight ratio to obtain a training image, and fuses the sample labels corresponding to the two sample images with the same weight ratio to obtain a training label. During fusion, the two sample images in each sample pair are blended according to the following formula:
x̃ = λ · x_i + (1 − λ) · x_j

where x̃ denotes the fused image; x_i denotes the first sample image of the sample pair; x_j denotes the second sample image; λ denotes the weight of the first sample image; and (1 − λ) denotes the weight of the second sample image.
in this embodiment, the weight λ of the first sample image is a random variable that obeys a beta distribution; the probability density function of the beta distribution has two parameters α, β > 0, and this embodiment takes α = β. Setting the weight λ randomly prevents the deep learning model from over-fitting and, most importantly, increases its generalization ability, so that the trained model can adapt to panel images with different defect features and different contrasts.
And inputting the training images and the training labels generated by the sample pairs into the constructed deep learning model so as to train the deep learning model.
When the trained deep learning model is used to detect panel images, the preprocessing unit arbitrarily selects two panel images to be detected and preprocesses them. After preprocessing, the fusion unit fuses the two panel images, preferably with a weight of 0.5 for each, because when it is unknown whether the image to be predicted contains defects, or how strong their contrast is, weighting the two images equally is the most reasonable choice. The fused predicted image is input into the trained deep learning model for defect detection. Since the trained model detects two panel images at once, the inference speed is doubled.
When a defect is detected from the predicted image, the fusion unit is further used to input the two panel images corresponding to the predicted image into the deep learning model and to infer on each of them separately, so as to determine which panel carries the defect.
The present embodiment also provides an apparatus terminal, which includes at least one processor and at least one memory, where the memory stores a computer program, and when the computer program is executed by the processor, the processor is enabled to execute the steps of the method. The type of processor and memory are not particularly limited, for example: the processor may be a microprocessor, digital information processor, on-chip programmable logic system, or the like; the memory may be volatile memory, non-volatile memory, a combination thereof, or the like.
The present embodiment also provides a computer-readable medium storing a computer program executable by a terminal device; when the computer program runs on the terminal device, it causes the terminal device to execute the steps of the above method. Types of computer-readable media include, but are not limited to, storage media such as SD cards, USB flash drives, fixed hard disks, and removable hard disks.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A panel defect detection method based on image fusion is characterized by comprising the following steps:
s1: acquiring sample images to form a training set, and marking the defects on each sample image to obtain corresponding sample labels;
s2: randomly selecting a plurality of sample pairs from a training set, and respectively fusing two sample images in each sample pair and sample labels corresponding to the two sample images to obtain a training image and a training label, and inputting the training image and the training label into a deep learning model so as to train the deep learning model;
s3: and acquiring a panel image to be detected, arbitrarily selecting two panel images for fusion, and inputting the acquired predicted image into a trained deep learning model for defect detection.
2. The panel defect detecting method according to claim 1, wherein in step S3, when a defect is detected from the predicted image, two panel images corresponding to the predicted image are input to the deep learning model and detected, respectively.
3. The method for detecting panel defects according to claim 1 or 2, wherein in step S2, when two sample images in each sample pair and their corresponding sample labels are fused, the weight value of any one sample image in each sample pair satisfies a beta distribution.
4. The method for detecting defects in a panel according to claim 3, wherein in step S3, when the panel images to be detected are merged, the weight value of the two panel images is 0.5.
5. The panel defect detecting method of claim 1, wherein in step S2, the sample label includes a defect location and a defect type, and different defect types are marked with different colors for easy distinction.
6. A panel defect detection device based on image fusion is characterized by comprising a labeling unit and a fusion unit;
the marking unit is used for acquiring sample images to form a training set, and marking the defects on each sample image to obtain a corresponding sample label;
the fusion unit is used for randomly selecting a plurality of sample pairs from a training set, and fusing two sample images in each sample pair and sample labels corresponding to the two sample images to obtain a training image and a training label which are input into a deep learning model so as to train the deep learning model;
the fusion unit is also used for acquiring panel images to be detected, arbitrarily selecting two of them for fusion, and inputting the obtained predicted image into a trained deep learning model for defect detection.
7. The apparatus according to claim 6, wherein the fusion unit is further configured to input two panel images corresponding to the predicted image into the deep learning model for detection when a defect is detected from the predicted image.
8. The image fusion-based panel defect detecting apparatus according to claim 6 or 7, wherein when two sample images in each sample pair and their corresponding sample labels are fused, the weight value given to any one sample image in each sample pair by the fusing unit satisfies a beta distribution.
9. The image fusion-based panel defect detecting apparatus according to claim 8, wherein the fusion means assigns a weight value of 0.5 to each of the two panel images when fusing the panel images to be detected.
10. A device terminal, characterized by comprising at least one processing unit and at least one memory unit, wherein the memory unit stores a computer program which, when executed by the processing unit, causes the processing unit to carry out the steps of the method according to any one of claims 1 to 5.
CN201910731007.3A 2019-08-08 2019-08-08 Panel defect detection method and device based on image fusion and equipment terminal Pending CN110599453A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910731007.3A CN110599453A (en) 2019-08-08 2019-08-08 Panel defect detection method and device based on image fusion and equipment terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910731007.3A CN110599453A (en) 2019-08-08 2019-08-08 Panel defect detection method and device based on image fusion and equipment terminal

Publications (1)

Publication Number Publication Date
CN110599453A true CN110599453A (en) 2019-12-20

Family

ID=68853755

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910731007.3A Pending CN110599453A (en) 2019-08-08 2019-08-08 Panel defect detection method and device based on image fusion and equipment terminal

Country Status (1)

Country Link
CN (1) CN110599453A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109840537A (en) * 2017-11-29 2019-06-04 南京大学 A kind of image multitask classification method based on cross validation's neural network
CN109118482A (en) * 2018-08-07 2019-01-01 腾讯科技(深圳)有限公司 A kind of panel defect analysis method, device and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Hongyi Zhang et al., "mixup: Beyond Empirical Risk Minimization", arXiv:1710.09412v2 *
Zhi Zhang et al., "Bag of Freebies for Training Object Detection Neural Networks", arXiv:1902.04103v3 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111199531A (en) * 2019-12-27 2020-05-26 中国民航大学 Interactive data expansion method based on Poisson image fusion and image stylization
CN111199531B (en) * 2019-12-27 2023-05-12 中国民航大学 Interactive data expansion method based on Poisson image fusion and image stylization
CN113268914A (en) * 2020-02-14 2021-08-17 聚积科技股份有限公司 Method for establishing LED screen adjustment standard judgment model
CN111681220A (en) * 2020-06-04 2020-09-18 阿丘机器人科技(苏州)有限公司 Defect detection model construction method, device and system and storage medium
CN111681220B (en) * 2020-06-04 2024-02-13 阿丘机器人科技(苏州)有限公司 Method, device, system and storage medium for constructing defect detection model
CN113095400A (en) * 2021-04-09 2021-07-09 安徽芯纪元科技有限公司 Deep learning model training method for machine vision defect detection
CN115345321A (en) * 2022-10-19 2022-11-15 小米汽车科技有限公司 Data augmentation method, data augmentation device, electronic device, and storage medium

Similar Documents

Publication Publication Date Title
CN111784685B (en) Power transmission line defect image identification method based on cloud edge cooperative detection
CN110599453A (en) Panel defect detection method and device based on image fusion and equipment terminal
CN109671058B (en) Defect detection method and system for large-resolution image
CN112132156A (en) Multi-depth feature fusion image saliency target detection method and system
CN110516514B (en) Modeling method and device of target detection model
CN110910343A (en) Method and device for detecting pavement cracks and computer equipment
CN107808126A (en) Vehicle retrieval method and device
CN109726678B (en) License plate recognition method and related device
CN110827312A (en) Learning method based on cooperative visual attention neural network
CN116310785B (en) Unmanned aerial vehicle image pavement disease detection method based on YOLO v4
CN111951154B (en) Picture generation method and device containing background and medium
CN114399644A (en) Target detection method and device based on small sample
CN112784724A (en) Vehicle lane change detection method, device, equipment and storage medium
CN114781514A (en) Floater target detection method and system integrating attention mechanism
CN117011563B (en) Road damage inspection cross-domain detection method and system based on semi-supervised federal learning
CN115131283A (en) Defect detection and model training method, device, equipment and medium for target object
CN114743102A (en) Furniture board oriented flaw detection method, system and device
CN112232368A (en) Target recognition model training method, target recognition method and related device thereof
CN111435445A (en) Training method and device of character recognition model and character recognition method and device
CN111191482B (en) Brake lamp identification method and device and electronic equipment
CN112784675B (en) Target detection method and device, storage medium and terminal
CN113537037A (en) Pavement disease identification method, system, electronic device and storage medium
CN114155551A (en) Improved pedestrian detection method and device based on YOLOv3 under complex environment
CN113505702A (en) Pavement disease identification method and system based on double neural network optimization
CN113780287A (en) Optimal selection method and system for multi-depth learning model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191220