CN115409823A - Solar screen defect detection method based on generative adversarial network - Google Patents

Solar screen defect detection method based on generative adversarial network

Info

Publication number
CN115409823A
Authority
CN
China
Prior art keywords
image
defect
generator
qualified
discriminator
Prior art date
Legal status
Pending
Application number
CN202211083396.1A
Other languages
Chinese (zh)
Inventor
唐昆
彭琳和
潘淼
李佳旺
蔡文浩
朱勇建
张明军
毛聪
胡永乐
Current Assignee
Changsha University of Science and Technology
Original Assignee
Changsha University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Changsha University of Science and Technology
Priority to CN202211083396.1A
Publication of CN115409823A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • G06T7/001Industrial image inspection using an image reference approach
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/088Non-supervised learning, e.g. competitive learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02EREDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
    • Y02E10/00Energy generation through renewable energy sources
    • Y02E10/50Photovoltaic [PV] energy

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Geometry (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a solar halftone defect detection method based on a generative adversarial network. Qualified images are used as a defect-free data set and real defect samples are used as a test set; artificial defect images are randomly generated from the qualified images to expand the defect samples, and the artificial defect images are fed into a generator as the actual training images. The generator produces reconstructed images, which are passed together with defect-free samples to a discriminator for training, with the generator and discriminator trained against each other. The trained discriminator identifies whether an image under test is defective, and the generator reconstructs the image under test so that the gray-level difference between the reconstructed image and the image under test can be used to locate the defect, thereby achieving defect detection of the solar screen printing plate. The proposed detection method effectively alleviates the shortage of real defect samples and the time cost of manual labeling, and improves the generalization of the model to unknown defect types, thereby improving detection efficiency and accuracy and reducing the missed-detection rate.

Description

Solar screen defect detection method based on generative adversarial network
Technical Field
The invention belongs to the field of defect detection, and particularly relates to a solar screen defect detection method based on a generative adversarial network.
Background
Clean energy such as solar energy is being used ever more widely. Solar photovoltaic power generation converts light energy directly into electric energy without a thermal process and is one of the main ways of exploiting solar energy, and the solar cell is the key component of a photovoltaic power generation system. Mass production of solar cells mostly uses the solar screen printing plate as a mold, and the quality of the screen is an important factor affecting the photoelectric conversion efficiency and service life of the solar cells.
At present, defect detection of the solar screen printing plate relies mainly on human visual inspection, which has low accuracy and efficiency and high cost. With machine vision technology, detection and identification of solar screen defects can be completed using an industrial camera and computer software. However, traditional machine vision inspection places high demands on the detection environment and suffers from low fault tolerance, poor compatibility, and a high missed-detection rate for unknown defects, which reduces the accuracy of the detection results and thus the reliability of the product. Defect detection methods based on deep learning achieve higher detection accuracy and adapt to more complex detection environments, but their network models generally require a large number of labeled data sets for training, and labeling consumes considerable manpower and time. It is therefore necessary to improve existing solar halftone defect detection methods and raise detection accuracy and working efficiency, so as to ensure the production quality of solar screens and cells and reduce production cost.
Disclosure of Invention
The invention aims to provide a solar halftone defect detection method based on a generative adversarial network (GAN), to solve the problems of a high missed-detection rate and of low efficiency in constructing training data sets and in detection that affect existing solar halftone defect detection methods.
The technical scheme adopted by the invention for solving the technical problems is as follows:
with reference to fig. 1 to 4, the method comprises the following steps:
step 1, collecting gray level images of the solar screen printing plate, wherein the resolution of each image is set to be a fixed value.
Step 2, screening the images from step 1: qualified screen images are selected for image preprocessing to eliminate noise interference, the processed qualified images x are taken as the defect-free data set, and the real defect images are taken as the test set.
Step 3, adding artificial random defects to the qualified images x and expanding the data volume: each qualified image x automatically generates a plurality of artificial defect images x~ with random defects through an artificial defect module, and the number of random defects is 4.
Further, the specific process by which the artificial defect module generates an artificial defect image x~ is as follows:
Step 3.1, four groups of cut boxes are generated at random; their lengths and widths are selected randomly, and the length and width ranges of the four groups are not all the same.
Step 3.2, each cut box is cut from a random position in the qualified image x, scaled by a random factor, and pasted to an arbitrary position of the qualified image x, thereby generating the artificial defect image x~.
Step 3.3, steps 3.1 and 3.2 are repeated multiple times for each qualified image x to generate multiple artificial defect images x~, thereby expanding the amount of training data.
Step 4, the artificial defect images x~ generated in step 3 are used as the actual training images to construct the actual training data set; the artificial defect images x~ in the actual training data set are input into the generator G of the GAN network model to train the generator G, and the qualified image x is compared with the output of the generator G; the discriminator is trained with the qualified images x and the reconstructed images X generated by the generator G, and the generator G and the discriminator D are trained against each other, so that the artificial defect image x~ is restored to a reconstructed image X highly similar to the qualified image x and the discriminator D acquires the ability to distinguish defective from non-defective images.
Further, the network structure of the generator G of the GAN network model is based on the U-Net network structure and comprises 1 encoding path, a plurality of residual modules and 1 decoding path; the encoding path consists of 6 convolutional layers and 5 down-sampling layers, each residual module comprises 2 convolutional layers, and the decoding path consists of 6 deconvolution layers and 5 up-sampling layers; the network structure of the discriminator D of the GAN network model consists of 6 convolutional layers and 5 down-sampling layers.
Further, the output layer of the generator G uses a Tanh activation function; apart from the output layer, every convolution layer and deconvolution layer in the encoding-decoding path of the generator G uses BatchNorm normalization and a Mish nonlinear activation function, while the convolution layers of the residual modules use BatchNorm normalization and a LeakyReLU nonlinear activation function.
Further, the output layer of the discriminator D uses a Sigmoid activation function; apart from the output layer, every convolution layer of the discriminator D uses BatchNorm normalization and a Mish nonlinear activation function.
Further, the specific process of GAN network model training is as follows:
step 4.1, inputting an artificial defect image x ~ Training generator G network, artificial defect image x ~ And outputting the characteristics to a residual error module through an encoding path, returning to an image space through a decoding path, and outputting a reconstructed image X.
Step 4.2, the feature information of each down-sampling layer in the encoding path is connected to the corresponding up-sampling layer in the decoding path (skip connections), so that the generator G retains more image feature information.
Step 4.3, a generation loss function L_G is established from the reconstructed image X output in step 4.1 and the qualified image x, so that the reconstructed image X becomes closer to the qualified image x.
Step 4.4, the reconstructed images X and the qualified images x are input to train the discriminator D network.
Step 4.5, the trained GAN network is fine-tuned; fine-tuning uses the Adam optimizer with the initial learning rate set to 0.008, the learning rate is halved after every 100 training rounds, and the number of residual modules can be adjusted according to the required network depth.
Furthermore, in order to reconstruct the qualified image x to the maximum extent, an adversarial loss function L_adv is established by minimizing the output of the generator G and maximizing the output of the discriminator D, which ensures that the reconstructed image X generated by the generator G is close to the qualified image x and that the discriminator D can distinguish the reconstructed image X from the qualified image x.
Further, in step 4.3, to ensure that the generator G sufficiently captures the pixel distribution of the qualified image, the L1 norm is used to calculate the distance between corresponding pixels of the qualified image x and the reconstructed image X.
Further, the overall loss function L_total of the GAN network model is a weighted combination of the two loss functions L_adv and L_G; the network parameters of the generator G and the discriminator D are adjusted according to the value of the overall loss function L_total, which improves the efficiency and accuracy of model training.
Step 5, identifying and locating the defects of the screen printing plate.
Further, in step 5, the specific steps by which the trained generator G and discriminator D identify and locate defects are as follows:
and 5.1, carrying out image-level defect judgment on the image z to be detected by the discriminator D, and if the judgment result is 'defective', sending the image z to be detected into the generator G.
And 5.2, the generator G generates a reconstructed image Z through the image Z to be detected, gray value subtraction is carried out on pixel points of the image Z to be detected and the reconstructed image Z, n gray difference areas formed by adjacent points with gray difference can be obtained, and each gray difference area comprises m areas with the same gray difference.
Step 5.3, from the result calculated in step 5.2, the ratio of the average gray difference of each gray-difference region to the full gray value 255, and the ratio of the area of the largest identical-gray-difference sub-region within that region to the area of the region, are computed; the two ratios are then weighted and summed to obtain a defect confidence value T_n.
Step 5.4, a threshold of 0.6 is set for the defect confidence value T_n obtained in step 5.3; a gray-difference region whose defect confidence value T_n is greater than 0.6 is judged to be a defect region and a bounding box is drawn, completing defect identification and localization.
Compared with the prior art, the invention has the beneficial effects that:
(1) The model provided by the invention is an unsupervised learning model: training samples do not need manually annotated defect labels, and only a small number of real defect samples and defect-free samples are required; the defect samples needed for training can be expanded automatically through artificial defects, while the real defect samples are used for testing, which effectively solves the problems of an insufficient number of real defect samples and the long time needed for manual labeling; in addition, the trained generator and discriminator networks are used directly for defect identification and localization, so defect detection is completed by training only one model; through these improvements, the defect detection method provided by the invention reduces labor cost and improves detection efficiency;
(2) The model provided by the invention uses artificial defects, and their randomness improves the generalization of the model to unknown defect types; at the same time, the generator uses a multi-layer network structure, and as the generator network deepens, the degree of overfitting of the discriminator is greatly reduced, which improves the accuracy of defect identification and reduces the missed-detection rate.
Drawings
FIG. 1 illustrates the steps of the solar halftone defect detection method based on a generative adversarial network according to the present invention;
FIG. 2 is a diagram of the deep convolutional generative adversarial network according to the present invention;
FIG. 3 is a block diagram of the residual module of the present invention;
FIG. 4 shows the specific steps of the solar screen defect identification and localization according to the present invention.
Detailed Description
Referring to fig. 1 to 4, a solar halftone defect detection method based on a generative adversarial network comprises the following steps:
Step 1, gray-level images of the solar screen printing plate are collected, with the resolution of each image set to 512x512.
Step 2, the images from step 1 are screened: qualified screen images are selected for image preprocessing to eliminate noise interference, the processed qualified images x are taken as the defect-free data set, and the real defect images are taken as the test set.
Step 3, artificial random defects are added to the qualified images x and the data volume is expanded: each qualified image x automatically generates a plurality of artificial defect images x~ with random defects through an artificial defect module, and the number of random defects is 4.
Further, the specific process by which the artificial defect module generates an artificial defect image x~ is as follows:
Step 3.1, four groups of cut boxes are generated at random, the maximum length and width values (in pixels) of the four groups being (10, 3), (5, 4), (20, 5) and (12, 12).
Step 3.2, each cut box is cut from a random position in the qualified image x, scaled by a random factor, and pasted to an arbitrary position of the qualified image x, thereby generating the artificial defect image x~.
Step 3.3, steps 3.1 and 3.2 are repeated multiple times for each qualified image x to generate multiple artificial defect images x~, thereby expanding the amount of training data.
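The following NumPy/OpenCV sketch illustrates the cut-and-paste augmentation of steps 3.1 to 3.3 under the four maximum cut-box sizes given above; the scaling range of 0.5 to 2.0 and the nearest-neighbour interpolation are assumed choices of this illustration, since the patent only states that the scaling factor and positions are random.

import cv2
import numpy as np

MAX_SIZES = [(10, 3), (5, 4), (20, 5), (12, 12)]  # max (height, width) of the four cut boxes, in pixels

def make_artificial_defect(ok_img, rng=np.random.default_rng()):
    """Cut four random patches from a qualified image x, rescale each by a random
    factor and paste it back at a random position, producing an artificial defect image x~."""
    img = ok_img.copy()
    H, W = img.shape
    for max_h, max_w in MAX_SIZES:
        h = int(rng.integers(1, max_h + 1))
        w = int(rng.integers(1, max_w + 1))
        y = int(rng.integers(0, H - h))
        x = int(rng.integers(0, W - w))
        patch = ok_img[y:y + h, x:x + w]
        scale = rng.uniform(0.5, 2.0)                       # assumed scaling range
        nh, nw = max(1, int(h * scale)), max(1, int(w * scale))
        patch = cv2.resize(patch, (nw, nh), interpolation=cv2.INTER_NEAREST)
        py = int(rng.integers(0, H - nh))
        px = int(rng.integers(0, W - nw))
        img[py:py + nh, px:px + nw] = patch                 # paste -> one random defect
    return img

In step 3.3 this function would simply be called several times per qualified image to grow the training set.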
Step 4, the artificial defect images x~ generated in step 3 are used as the actual training images to construct the actual training data set; the artificial defect images x~ in the actual training data set are input into the generator G of the GAN network model to train the generator G, and the qualified image x is compared with the output of the generator G; the discriminator is trained with the qualified images x and the reconstructed images X generated by the generator G, and the generator G and the discriminator D are trained against each other, so that the artificial defect image x~ is restored to a reconstructed image X highly similar to the qualified image x and the discriminator D acquires the ability to distinguish defective from non-defective images.
Further, the network structure of the generator G of the GAN network model is based on the U-Net network structure and comprises 1 encoding path, a plurality of residual modules and 1 decoding path; the encoding path consists of 6 convolutional layers and 5 down-sampling layers, each residual module comprises 2 convolutional layers, and the decoding path consists of 6 deconvolution layers and 5 up-sampling layers; the network structure of the discriminator D of the GAN network model consists of 6 convolutional layers and 5 down-sampling layers.
Further, the output layer of the generator G uses a Tanh activation function; apart from the output layer, every convolution layer and deconvolution layer in the encoding-decoding path of the generator G uses BatchNorm normalization and a Mish nonlinear activation function, while the convolution layers of the residual modules use BatchNorm normalization and a LeakyReLU nonlinear activation function.
Further, the output layer of the discriminator D uses a Sigmoid activation function; apart from the output layer, every convolution layer of the discriminator D uses BatchNorm normalization and a Mish nonlinear activation function.
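A compact PyTorch sketch of a generator and discriminator consistent with the description above is given below; the channel widths, kernel sizes, and the collapsing of each down-sampling step into a single stride-2 convolution (rather than separate convolution and down-sampling layers) are simplifications assumed for this illustration, not details fixed by the patent.

import torch
import torch.nn as nn

def cbm(c_in, c_out):
    # Conv + BatchNorm + Mish, used along the encoding path (output layers excluded)
    return nn.Sequential(nn.Conv2d(c_in, c_out, 4, 2, 1),
                         nn.BatchNorm2d(c_out), nn.Mish())

class ResBlock(nn.Module):
    # residual module: 2 conv layers with BatchNorm and LeakyReLU
    def __init__(self, c):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c, c, 3, 1, 1), nn.BatchNorm2d(c), nn.LeakyReLU(0.2),
            nn.Conv2d(c, c, 3, 1, 1), nn.BatchNorm2d(c), nn.LeakyReLU(0.2))

    def forward(self, x):
        return x + self.body(x)

class Generator(nn.Module):
    # U-Net style: encoding path -> residual modules -> decoding path with skip connections
    def __init__(self, n_res=10):
        super().__init__()
        chs = [1, 32, 64, 128, 256, 512]                      # assumed channel widths
        self.enc = nn.ModuleList([cbm(chs[i], chs[i + 1]) for i in range(5)])
        self.res = nn.Sequential(*[ResBlock(chs[-1]) for _ in range(n_res)])
        self.dec = nn.ModuleList([
            nn.Sequential(nn.ConvTranspose2d(chs[i + 1] * (1 if i == 4 else 2), chs[i], 4, 2, 1),
                          nn.BatchNorm2d(chs[i]), nn.Mish())
            for i in range(4, 0, -1)])
        self.out = nn.Sequential(nn.ConvTranspose2d(chs[1] * 2, 1, 4, 2, 1), nn.Tanh())  # Tanh output layer

    def forward(self, x):
        skips = []
        for enc in self.enc:
            x = enc(x)
            skips.append(x)
        x = self.res(x)
        for dec, skip in zip(self.dec, skips[-2::-1]):        # U-Net skip connections
            x = torch.cat([dec(x), skip], dim=1)
        return self.out(x)

class Discriminator(nn.Module):
    # image-level classifier: Conv/BatchNorm/Mish stages, Sigmoid output layer
    def __init__(self):
        super().__init__()
        chs = [1, 32, 64, 128, 256, 512]
        self.features = nn.Sequential(*[cbm(chs[i], chs[i + 1]) for i in range(5)])
        self.out = nn.Sequential(nn.Conv2d(chs[-1], 1, 16), nn.Flatten(), nn.Sigmoid())

    def forward(self, x):
        return self.out(self.features(x))

# shape check (commented): a 1x1x512x512 input is reconstructed at the same resolution
# g, d = Generator(), Discriminator()
# x = torch.randn(1, 1, 512, 512); print(g(x).shape, d(x).shape)  # -> (1,1,512,512), (1,1)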
Further, with reference to fig. 2, the specific process of GAN network model training is as follows:
step 4.1, inputting an artificial defect image x ~ Training generator G network, artificial defect image x ~ And outputting the characteristics to a residual error module through an encoding path, returning to an image space through a decoding path, and outputting a reconstructed image X.
Step 4.2, the feature information of each down-sampling layer in the encoding path is connected to the corresponding up-sampling layer in the decoding path (skip connections), so that the generator G retains more image feature information.
Step 4.3, a generation loss function L_G is established from the reconstructed image X output in step 4.1 and the qualified image x, so that the reconstructed image X becomes closer to the qualified image x.
Step 4.4, the reconstructed images X and the qualified images x are input to train the discriminator D network.
Step 4.5, the trained GAN network is fine-tuned; fine-tuning uses the Adam optimizer with the initial learning rate set to 0.008, the learning rate is halved after every 100 training rounds, and the number of residual modules is set to 10.
Furthermore, in order to reconstruct the qualified image x to the maximum extent, an adversarial loss function L_adv is established by minimizing the output of the generator G and maximizing the output of the discriminator D, ensuring that the reconstructed image X generated by the generator G is close to the qualified image x and that the discriminator D can distinguish the reconstructed image X from the qualified image x, as shown in formula (1):

L_adv = min_G max_D { E_(x∼Px)[ log D(x) ] + E[ log(1 - D(X)) ] }    (1)

where Px denotes the feature distribution of the qualified images, D(x) denotes the discrimination result for the qualified image x, and D(X) denotes the discrimination result for the reconstructed image X.
Further, in step 4.3, to ensure that the generator G sufficiently captures the pixel distribution of the qualified image, the L1 norm is used to calculate the distance between corresponding pixels of the qualified image x and the reconstructed image X, as shown in formula (2):

L_G = || x - X ||_1    (2)

Further, the overall loss function L_total of the GAN network model is a weighted combination of the two loss functions L_adv and L_G; the network parameters of the generator G and the discriminator D are adjusted according to the value of the overall loss function L_total, which improves the efficiency and accuracy of model training, as shown in formula (3):

L_total = λ1 · L_adv + λ2 · L_G    (3)

where λ1 and λ2 are adjustable weight parameters; the influence of each loss term on the model can be adjusted according to the training effect.
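The training procedure of steps 4.1 to 4.5 with the losses of formulas (1) to (3) can be sketched as follows; the loss weights lam_adv and lam_rec, the use of binary cross-entropy for the adversarial term, and the data-loader interface are assumptions of this sketch rather than details fixed by the patent.

import torch
import torch.nn as nn
import torch.nn.functional as F

def train(G, D, loader, epochs=300, lam_adv=1.0, lam_rec=10.0, device="cuda"):
    """loader yields (x_ok, x_art): a qualified image x and its artificial defect image x~,
    both assumed normalized to [-1, 1] to match the Tanh output of G."""
    G, D = G.to(device), D.to(device)
    opt_g = torch.optim.Adam(G.parameters(), lr=0.008)          # initial learning rate 0.008
    opt_d = torch.optim.Adam(D.parameters(), lr=0.008)
    sch_g = torch.optim.lr_scheduler.StepLR(opt_g, step_size=100, gamma=0.5)  # halve every 100 rounds
    sch_d = torch.optim.lr_scheduler.StepLR(opt_d, step_size=100, gamma=0.5)
    bce = nn.BCELoss()
    for epoch in range(epochs):
        for x_ok, x_art in loader:
            x_ok, x_art = x_ok.to(device), x_art.to(device)
            x_rec = G(x_art)                                    # reconstructed image X

            # discriminator D: qualified image -> real, reconstructed image -> fake
            d_real, d_fake = D(x_ok), D(x_rec.detach())
            loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
            opt_d.zero_grad()
            loss_d.backward()
            opt_d.step()

            # generator G: fool D (adversarial term) and match pixels (L1 term), cf. formula (3)
            d_fake = D(x_rec)
            loss_adv = bce(d_fake, torch.ones_like(d_fake))
            loss_rec = F.l1_loss(x_rec, x_ok)                   # L1 distance of formula (2)
            loss_g = lam_adv * loss_adv + lam_rec * loss_rec
            opt_g.zero_grad()
            loss_g.backward()
            opt_g.step()
        sch_g.step()
        sch_d.step()
    return G, D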
Step 5, the defects of the screen printing plate are identified and located.
Further, in step 5, the specific steps by which the trained generator G and discriminator D identify and locate defects are as follows:
Step 5.1, the discriminator D performs image-level defect judgment on the image z to be detected; if the judgment result is 'defective', the image z to be detected is sent to the generator G.
Step 5.2, the generator G generates a reconstructed image Z from the image z to be detected, and the gray values of corresponding pixels of the image z to be detected and the reconstructed image Z are subtracted; this yields n gray-difference regions, each composed of adjacent pixels with a gray difference, and each gray-difference region contains m sub-regions of identical gray difference, as shown in formulas (4) and (5):

S_n = f( | Z(i, j) - z(i, j) | )    (4)

s_nm = g( | Z(i, j) - z(i, j) | )    (5)

where S_n denotes the area of the n-th gray-difference region, s_nm denotes the area of the m-th identical-gray-difference sub-region within the n-th gray-difference region, Z(i, j) and z(i, j) denote the gray values of the corresponding pixel (i, j) in the reconstructed image Z and in the image z to be detected respectively, f(·) denotes the area of the region formed by adjacent pixels with a gray difference, and g(·) denotes the area of the region formed by pixels with the same gray difference.
Step 5.3, from the result calculated in step 5.2, the ratio of the average gray difference of each gray-difference region to the full gray value 255, and the ratio of the area of the largest identical-gray-difference sub-region within that region to the area of the region, are computed; the two ratios are then weighted and summed to obtain a defect confidence value T_n, as shown in formula (6):

T_n = α · ( d_n / 255 ) + β · ( max_m s_nm / S_n )    (6)

where d_n denotes the average gray difference of the n-th gray-difference region, and α and β are adjustable weight parameters.
Step 5.4, a threshold of 0.6 is set for the defect confidence value T_n obtained in step 5.3; a gray-difference region whose defect confidence value T_n is greater than 0.6 is judged to be a defect region and a bounding box is drawn around it, completing defect identification and localization.
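A sketch of the gray-difference analysis of steps 5.2 to 5.4 is given below, assuming the discriminator has already judged the image z as defective in step 5.1; the connected-component labelling via scipy.ndimage, the equal weights alpha and beta, and the treatment of s_nm as the pixel count per identical difference value (ignoring connectivity inside the sub-region) are assumptions of this illustration.

import numpy as np
from scipy import ndimage

def locate_defects(z, Z, alpha=0.5, beta=0.5, threshold=0.6):
    """z: image to be detected, Z: its reconstruction by G (both uint8, same shape).
    Returns bounding boxes (y0, x0, y1, x1) of regions judged defective."""
    diff = np.abs(z.astype(np.int16) - Z.astype(np.int16))        # pixel-wise gray difference
    labels, n = ndimage.label(diff > 0)                           # n gray-difference regions S_n
    boxes = []
    for k in range(1, n + 1):
        region = labels == k
        S_n = region.sum()                                        # area of the n-th region
        vals = diff[region]
        mean_diff = vals.mean()                                   # average gray difference d_n
        # area of the largest identical-gray-difference sub-region, cf. formula (5)
        s_max = max((vals == v).sum() for v in np.unique(vals))
        T = alpha * (mean_diff / 255.0) + beta * (s_max / S_n)    # confidence value, formula (6)
        if T > threshold:                                         # defect region if T > 0.6
            ys, xs = np.nonzero(region)
            boxes.append((ys.min(), xs.min(), ys.max(), xs.max()))  # positioning box
    return boxes

The returned boxes can then be drawn on the original image, for example with cv2.rectangle, to complete the localization of step 5.4.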
The foregoing detailed description is to be understood as illustrative only and not as limiting the scope of the invention, which is defined by the appended claims; various equivalent modifications will become apparent to those skilled in the art upon reading the present disclosure.

Claims (4)

1. A solar halftone defect detection method based on a generative adversarial network, characterized by comprising the following steps:
step 1, collecting gray-level images of a solar screen printing plate, wherein the resolution of each image is set to a fixed value;
step 2, screening the images from step 1, selecting qualified screen images for image preprocessing to eliminate noise interference, taking the processed qualified images x as a defect-free data set, and taking real defect images as a test set;
step 3, adding artificial random defects to the qualified images x and expanding the data volume, wherein each qualified image x automatically generates a plurality of artificial defect images x~ with random defects through an artificial defect module, and the number of random defects is 4;
step 4, taking the artificial defect images x~ generated in step 3 as the actual training images and constructing an actual training data set; inputting the artificial defect images x~ in the actual training data set into a generator G of a GAN network model to train the generator G, and comparing the qualified image x with the output of the generator G; training a discriminator with the qualified images x and the reconstructed images X generated by the generator G, and training the generator G and the discriminator D against each other, so that the artificial defect image x~ is restored to a reconstructed image X highly similar to the qualified image x and the discriminator D acquires the ability to distinguish defective from non-defective images;
the network structure of the generator G of the GAN network model is based on the U-Net network structure and comprises 1 encoding path, a plurality of residual modules and 1 decoding path; the encoding path consists of 6 convolutional layers and 5 down-sampling layers, each residual module comprises 2 convolutional layers, and the decoding path consists of 6 deconvolution layers and 5 up-sampling layers; the network structure of the discriminator D of the GAN network model consists of 6 convolutional layers and 5 down-sampling layers;
the output layer of the generator G uses a Tanh activation function; apart from the output layer, every convolution layer and deconvolution layer in the encoding-decoding path of the generator G uses BatchNorm normalization and a Mish nonlinear activation function, and the convolution layers of the residual modules use BatchNorm normalization and a LeakyReLU nonlinear activation function;
the output layer of the discriminator D uses a Sigmoid activation function; apart from the output layer, every convolution layer of the discriminator D uses BatchNorm normalization and a Mish nonlinear activation function;
step 5, identifying and locating the defects of the screen printing plate.
2. The solar halftone defect detection method based on a generative adversarial network as claimed in claim 1, characterized in that:
in step 3, the specific process by which the artificial defect module generates an artificial defect image x~ is as follows:
step 3.1, randomly generating four groups of cut boxes, whose lengths and widths are randomly selected such that the length and width ranges of the groups are not all identical;
step 3.2, cutting each cut box from a random position in the qualified image x, scaling it by a random factor, and pasting it to an arbitrary position of the qualified image x, thereby generating the artificial defect image x~;
step 3.3, repeating steps 3.1 and 3.2 multiple times for each qualified image x to generate multiple artificial defect images x~, thereby expanding the amount of training data.
3. The solar halftone defect detection method based on a generative adversarial network as claimed in claim 1, characterized in that:
in step 4, the specific process of training the GAN network model is as follows:
step 4.1, inputting an artificial defect image x~ to train the generator G network; the artificial defect image x~ passes through the encoding path, its features are output to the residual modules, the decoding path maps them back to image space, and a reconstructed image X is output;
step 4.2, connecting the feature information of each down-sampling layer of the encoding path to the corresponding up-sampling layer of the decoding path, so that the generator G retains more image feature information;
step 4.3, establishing a generation loss function L_G from the reconstructed image X output in step 4.1 and the qualified image x, so that the reconstructed image X becomes closer to the qualified image x;
step 4.4, inputting the reconstructed images X and the qualified images x to train the discriminator D network;
step 4.5, fine-tuning the trained GAN network; fine-tuning uses the Adam optimizer with the initial learning rate set to 0.008, the learning rate is halved after every 100 training rounds, and the number of residual modules can be adjusted according to the required network depth;
in order for the model to reconstruct the qualified image x to the maximum extent, an adversarial loss function L_adv is established by minimizing the output of the generator G and maximizing the output of the discriminator D, ensuring that the reconstructed image X generated by the generator G is close to the qualified image x and that the discriminator D can distinguish the reconstructed image X from the qualified image x;
in step 4.3, to ensure that the generator G sufficiently captures the pixel distribution of the qualified image, the L1 norm is used to calculate the distance between corresponding pixels of the qualified image x and the reconstructed image X;
the overall loss function L_total of the GAN network model is a weighted combination of the two loss functions L_adv and L_G, and the network parameters of the generator G and the discriminator D are adjusted according to the value of the overall loss function L_total, thereby improving the efficiency and accuracy of model training.
4. The solar halftone defect detection method based on a generative adversarial network as claimed in claim 1, characterized in that:
in step 5, the specific steps by which the trained generator G and discriminator D identify and locate defects are as follows:
step 5.1, the discriminator D performs image-level defect judgment on the image z to be detected, and if the judgment result is 'defective', the image z to be detected is sent to the generator G;
step 5.2, the generator G generates a reconstructed image Z from the image z to be detected, and the gray values of corresponding pixels of the image z to be detected and the reconstructed image Z are subtracted, yielding n gray-difference regions composed of adjacent pixels with a gray difference, each gray-difference region containing m sub-regions of identical gray difference;
step 5.3, from the result calculated in step 5.2, computing the ratio of the average gray difference of each gray-difference region to the full gray value 255 and the ratio of the area of the largest identical-gray-difference sub-region within that region to the area of the region, and weighting and summing the two ratios to obtain a defect confidence value T_n;
step 5.4, setting a threshold of 0.6 for the defect confidence value T_n obtained in step 5.3; judging a gray-difference region whose defect confidence value T_n is greater than 0.6 to be a defect region and drawing a bounding box, thereby completing defect identification and localization.
CN202211083396.1A 2022-09-06 2022-09-06 Solar screen defect detection method based on generative countermeasure network Pending CN115409823A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211083396.1A CN115409823A (en) 2022-09-06 2022-09-06 Solar screen defect detection method based on generative countermeasure network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211083396.1A CN115409823A (en) 2022-09-06 2022-09-06 Solar screen defect detection method based on generative countermeasure network

Publications (1)

Publication Number Publication Date
CN115409823A 2022-11-29

Family

ID=84164738

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211083396.1A Pending CN115409823A (en) 2022-09-06 2022-09-06 Solar screen defect detection method based on generative countermeasure network

Country Status (1)

Country Link
CN (1) CN115409823A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116167923A (en) * 2023-04-26 2023-05-26 无锡日联科技股份有限公司 Sample expansion method and sample expansion device for x-ray image
CN116883417A (en) * 2023-09-08 2023-10-13 武汉东方骏驰精密制造有限公司 Workpiece quality inspection method and device based on machine vision
CN116883417B (en) * 2023-09-08 2023-12-05 武汉东方骏驰精密制造有限公司 Workpiece quality inspection method and device based on machine vision
CN117710371A (en) * 2024-02-05 2024-03-15 成都数之联科技股份有限公司 Method, device, equipment and storage medium for expanding defect sample
CN117710371B (en) * 2024-02-05 2024-04-26 成都数之联科技股份有限公司 Method, device, equipment and storage medium for expanding defect sample

Similar Documents

Publication Publication Date Title
CN115409823A (en) Solar screen defect detection method based on generative countermeasure network
CN107274393B (en) Monocrystaline silicon solar cell piece detection method of surface flaw based on grid line detection
CN109741320A (en) A kind of wind electricity blade fault detection method based on Aerial Images
CN113436169A (en) Industrial equipment surface crack detection method and system based on semi-supervised semantic segmentation
CN115994325B (en) Fan icing power generation data enhancement method based on TimeGAN deep learning method
CN114973032B (en) Deep convolutional neural network-based photovoltaic panel hot spot detection method and device
CN114021741A (en) Photovoltaic cell panel inspection method based on deep learning
CN115861263A (en) Insulator defect image detection method based on improved YOLOv5 network
CN114419014A (en) Surface defect detection method based on feature reconstruction
CN112365468A (en) AA-gate-Unet-based offshore wind power tower coating defect detection method
Umar et al. Deep Learning Approaches for Crack Detection in Solar PV Panels
CN114117886A (en) Water depth inversion method for multispectral remote sensing
CN115240069A (en) Real-time obstacle detection method in full-fog scene
CN116703885A (en) Swin transducer-based surface defect detection method and system
CN115082798A (en) Power transmission line pin defect detection method based on dynamic receptive field
CN112734732B (en) Railway tunnel cable leakage clamp detection method based on improved SSD algorithm
CN112381794B (en) Printing defect detection method based on deep convolution generation network
CN116704444A (en) Video abnormal event detection method based on cascade attention U-Net
CN114980723A (en) Fault prediction method and system for cross-working-condition chip mounter suction nozzle
CN116342496A (en) Abnormal object detection method and system for intelligent inspection
CN113192018B (en) Water-cooled wall surface defect video identification method based on fast segmentation convolutional neural network
Jiang et al. An enhancement generative adversarial networks based on feature moving for solar panel defect identification
CN113409237A (en) Novel solar cell panel hot spot detection method based on YOLOv3
CN117372720B (en) Unsupervised anomaly detection method based on multi-feature cross mask repair
CN115294392B (en) Visible light remote sensing image cloud removal method and system based on network model generation

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination