CN116703812A - Deep learning-based photovoltaic module crack detection method and system - Google Patents


Info

Publication number
CN116703812A
CN116703812A (application CN202211347309.9A)
Authority
CN
China
Prior art keywords
image
module
photovoltaic module
crack detection
edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211347309.9A
Other languages
Chinese (zh)
Inventor
李东挥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Datang Chifeng New Energy Co ltd
Original Assignee
Datang Chifeng New Energy Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Datang Chifeng New Energy Co ltd filed Critical Datang Chifeng New Energy Co ltd
Priority to CN202211347309.9A priority Critical patent/CN116703812A/en
Publication of CN116703812A publication Critical patent/CN116703812A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • G06T7/0008Industrial image inspection checking presence/absence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02EREDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
    • Y02E10/00Energy generation through renewable energy sources
    • Y02E10/50Photovoltaic [PV] energy

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a deep learning-based photovoltaic module crack detection method comprising the following steps: S1: collecting a photovoltaic module picture data set; S2: acquiring an edge detection image; S3: marking cracks of the photovoltaic cell assembly and dividing the marked images into a training set and a verification set according to a preset proportion; S4: carrying out data enhancement on the training set and verification set created in step S3; S5: building an improved Faster R-CNN model in which a SpotFPN multi-scale feature learning module is added to the feature extraction network; S6: training the improved Faster R-CNN model to obtain a photovoltaic module crack detection model. The advantages are: introducing high-low threshold contrast increases the chance of locating crack regions; amplifying the photovoltaic module data with the FNS style transfer model increases the stability and prediction accuracy of the neural network and avoids overfitting during identification; and combining SpotFPN improves the feature extraction effect and thereby the crack detection accuracy of the photovoltaic module.

Description

Deep learning-based photovoltaic module crack detection method and system
Technical field:
the invention belongs to the technical field of crack detection, and particularly relates to a deep learning-based photovoltaic module crack detection method and system.
Background art:
With the continuous development of the global economy and the rising level of productivity, the demand for energy from countries around the world keeps growing. Traditional energy sources can hardly satisfy this growing demand, and sustainable energy development has become an important topic of the 21st century. Photovoltaic power generation is a major application of solar energy, in which solar photovoltaic cells convert sunlight into electrical energy. Crack defects on the surface of solar cells directly affect the power generation efficiency and service life of the cells, and most of them are invisible defects that are difficult to identify with the naked eye, so a rapid and effective crack detection method for solar cell modules is an important step in the sustainable development of the photovoltaic industry. With the rapid development of computer science and hardware, it has become possible to acquire, store and inspect solar cell images in real time, and deep learning-based inspection of these images can improve detection efficiency and save labor cost.
In the prior art, Li Mengyuan proposed a method for identifying surface crack defect images of solar cells based on a deep convolutional belief network, but because the data set is small the model tends to overfit and crack defects cannot be reliably distinguished; Yan Weixin verified the applicability of the Faster R-CNN network structure to defect localization, but the average detection accuracy is only 69%, which cannot satisfy practical application.
Summary of the invention:
The invention aims to provide a photovoltaic module crack detection method that effectively avoids overfitting and achieves high detection accuracy.
A further object of the invention is to provide a photovoltaic module crack detection system for operating the above method.
The technical scheme of the invention relates to a deep learning-based photovoltaic module crack detection method, which comprises the following steps of:
s1: collecting a photovoltaic module picture data set;
s2: preprocessing the image obtained in the step S1 to obtain an edge detection image;
s3: marking cracks of the photovoltaic cell assembly in the edge detection image obtained in the step S2, and creating a training set and a verification set according to a preset proportion by the marked image;
s4: carrying out data enhancement on the training set and the verification set created in the step S3;
s5: building an improved Faster R-CNN model, wherein the improved Faster R-CNN model is added with a SpotFPN multi-scale feature learning module in a feature extraction network;
s6: training the improved Faster R-CNN model constructed in the step S5 by adopting a training set and a verification set enhanced by data in the step S4 to obtain a trained model weight, and loading the trained model weight into the improved Faster R-CNN model to obtain a crack detection model of the photovoltaic module;
s7: and (3) performing crack detection on the photovoltaic module image by using the photovoltaic crack detection model obtained in the step (S6), and marking the position where the crack defect is detected.
Preferably, the preprocessing of the image in step S2 includes the steps of:
s21: graying the color image;
s22: Gaussian filtering and smoothing, specifically, performing high-low threshold comparison on the image grayed in S21, and then carrying out Gaussian filtering with an improved two-dimensional Gaussian filter to obtain a high signal-to-noise ratio image;
s23: calculating the gradient direction and the amplitude of the pixel point filtered in the step S22, and obtaining an edge candidate point by using a first-order differential operator;
s24: comparing the gray value of each edge candidate point with those of its adjacent pixel points in the same gradient direction, and retaining the candidate point if its gray value is the largest;
s25: performing double-threshold detection on the pixel points processed by the S24 to obtain edge pixel points of the image;
s26: and connecting all the edge pixel points to form an edge detection image.
Preferably, the data amplification in step S4 is implemented by generating images in batches with an FNS style transfer model, which includes Image Transform Net and VGG16.
Preferably, in step S23, the gradient direction and magnitude are calculated using a 3×3 neighborhood.
Preferably, the ratio of the high threshold T2 to the low threshold T1 set in step S25 is 2:1 or 3:1.
The technical scheme of the invention relates to a deep learning-based photovoltaic module crack detection system, which comprises a data acquisition module, an image preprocessing module, a data amplification module, an improved Faster R-CNN module and a photovoltaic module crack detection module;
the image preprocessing module comprises a graying module, a Gaussian filtering module and an edge processing module; the graying module performs graying on the image acquired by the data acquisition module, the Gaussian filtering module forms the edge pixel point candidate region, and the edge processing module forms the edge image.
The invention has the advantages that: in the edge detection process, high-low threshold contrast is introduced and two-dimensional Gaussian filtering is performed, which enhances the likelihood of locating crack regions; the photovoltaic module data are amplified with the FNS style transfer model, which increases the number of training and verification samples for the Faster R-CNN, enhances the stability and prediction accuracy of the neural network, and avoids overfitting during identification; and combining SpotFPN improves the feature extraction effect and thereby the crack detection accuracy of the photovoltaic module.
Description of the drawings:
fig. 1 is a flowchart of a method for detecting cracks in a photovoltaic module according to embodiment 1 of the present invention.
Fig. 2 is a crack diagram of the photovoltaic module in example 1 of the present invention.
Fig. 3 is a high-low threshold comparison chart in step S22 of embodiment 1 of the present invention.
FIG. 4 is a data amplification model diagram of example 1 of the present invention.
Fig. 5 is a residual block structure diagram of embodiment 1 of the present invention.
Fig. 6 is a SpotFPN framework of embodiment 1 of the present invention.
Detailed description of the embodiments:
the invention will be described in further detail by way of examples with reference to the accompanying drawings.
Example 1:
as shown in fig. 1, a method for detecting cracks of a photovoltaic module based on deep learning comprises the following steps:
s1: preparing a data set, and collecting and arranging crack images of the photovoltaic cell assembly shot by the unmanned aerial vehicle in a natural environment;
s2: preprocessing the image obtained in the step S1 to obtain an edge detection image;
s3: marking cracks of the photovoltaic cell assembly in the edge detection image obtained in the step S2, and creating a training set and a verification set according to a preset proportion by the marked image;
s4: carrying out data enhancement on the training set and the verification set created in the step S3;
s5: building an improved Faster R-CNN model, and adding a SpotFPN multi-scale feature learning module into a feature extraction network by the improved Faster R-CNN model;
s6: training the improved Faster R-CNN model constructed in the step S5 by adopting a training set and a verification set enhanced by data in the step S4 to obtain a trained model weight, and loading the trained model weight into the improved Faster R-CNN model to obtain a crack detection model of the photovoltaic module;
s7: and (3) performing crack detection on the photovoltaic module image by using the photovoltaic crack detection model obtained in the step (S6), and marking the position where the crack defect is detected.
Specifically, as shown in fig. 2, two common crack defect types of the photovoltaic module are shown in the diagram, and fig. 2 (a) and 2 (b) are respectively mesh cracks and linear cracks, and in this embodiment, recognition of the cracks and judgment of the crack types are realized through deep learning.
In this embodiment, the image collected by the unmanned aerial vehicle is a visible light image of the photovoltaic module, and the preprocessing of the image in step S2 includes graying processing, gaussian filtering processing and edge processing, specifically:
s21: graying a color image under visible light;
A color image is composed of three RGB channels; the averaging method takes the mean of the three components to obtain a grayscale image. After graying, the amount of image data and the subsequent computation are reduced.
S22: Gaussian filtering and smoothing, namely performing high-low threshold comparison on the grayed image and then carrying out Gaussian filtering with an improved two-dimensional Gaussian filter to obtain a high signal-to-noise ratio image;
Since both edges and noise are high-frequency signals, it is difficult to separate them during filtering. As shown in fig. 3, when the high-threshold image is compared with the low-threshold image, the edge information of the low-threshold image is displayed completely, while the high-threshold image contains a number of gaps; the positions of these gaps are where cracks may exist, so comparing the high- and low-threshold images enhances the recognizability of cracks.
After the high-low threshold comparison, two-dimensional gaussian filtering is performed, and it is known that the two-dimensional gaussian function G (x, y) can be decomposed to obtain:
G(x,y)=G(x)G(y)
The two-dimensional Gaussian filter is thus converted into two one-dimensional Gaussian filters in the x and y directions, which are convolved with the image f(x, y) in turn to obtain the smoothed output f(x, y) * G(x) * G(y).
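As an illustration of this separability (a minimal sketch, not the patent's implementation; the kernel radius and sigma are assumed values), the two-dimensional smoothing can be computed as two one-dimensional convolutions:

```python
import numpy as np
from scipy.ndimage import convolve1d

def gaussian_kernel_1d(sigma: float, radius: int) -> np.ndarray:
    """Sampled one-dimensional Gaussian G(x), normalized to sum to 1."""
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return kernel / kernel.sum()

def separable_gaussian_smooth(image: np.ndarray, sigma: float = 1.4) -> np.ndarray:
    """Smooth f(x, y) with G(x, y) = G(x)G(y): filter along x, then along y."""
    kernel = gaussian_kernel_1d(sigma, radius=int(3 * sigma))
    smoothed = convolve1d(image.astype(np.float64), kernel, axis=1)  # convolve with G(x)
    smoothed = convolve1d(smoothed, kernel, axis=0)                  # convolve with G(y)
    return smoothed
```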
s23: calculating the gradient direction and the amplitude of the pixel point filtered in the step S22, and obtaining an edge candidate point by using a first-order differential operator;
order the
A (i, j), wherein alpha (i, j) is the gradient amplitude and the gradient direction corresponding to the (i, j) position; in this embodiment, the gradient magnitude and direction of the image are calculated using a 3×3 neighborhood, and the gradient direction is divided into: 0 °, 45 °, 90 °, and 135 °.
Accordingly, the first-order differential operator used in the present embodiment is as follows:
P_x(i, j) = G(i, j+1) - G(i, j-1)
P_y(i, j) = G(i+1, j) - G(i-1, j)
P_45°(i, j) = G(i-1, j+1) - G(i+1, j-1)
P_135°(i, j) = G(i+1, j+1) - G(i-1, j-1)
e for introducing a first-order differential operator into (1-1) x (i,j)、E y In (i, j), we obtain
Finally, the magnitude of the gradient at any point of the image f (x, y) is calculated.
S24: comparing the gray value of each edge candidate point with those of its adjacent pixel points in the same gradient direction, and retaining the candidate point if its gray value is the largest;
s25: performing double-threshold detection on the pixel points processed by the S24 to obtain edge pixel points of the image;
After non-maximum suppression in step S24, an image formed by the local maxima of the gradient is obtained, represented as a number of discrete points; double-threshold detection is then used to connect the points that actually belong to edges while removing isolated noise points.
Two thresholds are given manually: a low threshold T1 and a high threshold T2. Pixels above the high threshold are strong edges, pixels below the low threshold are not edges, and pixels in between are weak edges. The thresholds T2 and T1 are selected in a ratio of 2:1 or 3:1.
Points below the low threshold are discarded and assigned 0; points above the high threshold are immediately marked as determined edge points and assigned 1 or 255; points between the two thresholds are decided using the 8-connected region: when a weak edge pixel lies in the 8-neighborhood of a strong edge, it becomes a strong edge and is likewise assigned 1 or 255.
S26: and connecting all the edge pixel points to form an edge detection image.
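For reference, a minimal sketch of the S21-S26 preprocessing chain is given below. It is an approximation under stated assumptions: OpenCV's Canny routine bundles the gradient computation, non-maximum suppression and double-threshold linking of S23-S26, the high/low-threshold contrast of S22 is emulated by producing edge maps at two threshold pairs, and the kernel size, sigma and threshold values are illustrative.

```python
import cv2
import numpy as np

def edge_detection_image(bgr_image: np.ndarray, low_t: int = 50, high_t: int = 150):
    """Approximate S21-S26 and return (edge image, candidate crack regions)."""
    # S21: graying by averaging the three colour channels.
    gray = np.mean(bgr_image.astype(np.float32), axis=2).astype(np.uint8)
    # S22: Gaussian smoothing for a higher signal-to-noise ratio.
    smooth = cv2.GaussianBlur(gray, (5, 5), sigmaX=1.4)
    # S22 (high/low-threshold contrast): edge maps at a loose and a strict threshold pair;
    # pixels that appear only in the loose map hint at possible crack regions.
    loose = cv2.Canny(smooth, low_t, high_t)
    strict = cv2.Canny(smooth, 2 * low_t, 2 * high_t)
    candidate_regions = cv2.subtract(loose, strict)
    # S23-S26: gradient, non-maximum suppression and double-threshold linking
    # (all bundled inside cv2.Canny) give the final edge detection image.
    edges = cv2.Canny(smooth, low_t, high_t)
    return edges, candidate_regions
```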
After the edge detection image is obtained, the crack regions are manually annotated at pixel level with Photoshop on the images in the dataset, marking cracks as white and the rest as black; the annotated images are then divided into a training set and a verification set at a preset proportion of 4:1.
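A minimal sketch of the 4:1 division described above (function name and random seed are assumptions):

```python
import random

def split_dataset(image_paths, ratio: float = 0.8, seed: int = 42):
    """Shuffle the annotated images and split them 4:1 into training and verification sets."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    cut = int(len(paths) * ratio)
    return paths[:cut], paths[cut:]

train_set, val_set = split_dataset(["img_%04d.png" % i for i in range(100)])
```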
In order to improve the robustness of model identification and avoid the overfitting caused by a single small data set, the data set is amplified. Specifically, as shown in fig. 4, the data amplification in step S4 is realized by generating images in batches with an FNS style transfer model, which includes Image Transform Net and VGG16.
The FNS model consists of an image transformation network (Image Transform Net) and a loss network that is used to define the loss function.
The image transformation network is a deep residual convolutional network that converts an input image x into an output image and is trained with stochastic gradient descent. The network body includes two downsampling convolutional layers with stride 2, five residual blocks and two upsampling convolutional layers with stride 1/2; the first and last layers use 9×9 convolution kernels, and the other layers use 3×3 kernels. The structure of each residual block is shown in fig. 5 and includes two 3×3 convolutional layers, batch normalization layers and an activation function layer.
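The residual block of fig. 5 could be sketched in PyTorch as follows (the channel count of 128 is an assumption, not stated in the patent):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Residual block of the image transformation network: two 3x3 convolutions,
    each followed by batch normalization, with a ReLU in between (fig. 5)."""

    def __init__(self, channels: int = 128):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return x + out  # skip connection
```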
Downsampling followed by upsampling reduces the computational cost and enlarges the effective receptive field, so that each pixel in the output corresponds to a large effective receptive field in the input.
To overcome the shortcomings of per-pixel loss, a pre-trained classification network is used to define the feature loss and the style loss as the loss function.
The VGG16 in this embodiment is formed by stacking small convolution kernels, small pooling kernels and ReLU activations; the specific forward process is as follows:
(1) The input image of size 224×224×3 is convolved twice with 64 3×3 convolution kernels with stride 1; after ReLU activation the output size is 224×224×64;
(2) Max pooling with a 2×2 filter and stride 2 halves the image size; the pooled size is 112×112×64;
(3) Two convolutions with 128 3×3 convolution kernels, ReLU activated; the size becomes 112×112×128;
(4) The size becomes 56×56×128 after max pooling;
(5) Three convolutions with 256 3×3 convolution kernels, ReLU activated; the size becomes 56×56×256;
(6) Max pooling; the size becomes 28×28×256;
(7) Three convolutions with 512 3×3 convolution kernels, ReLU activated; the size becomes 28×28×512; after max pooling the size becomes 14×14×512;
(8) Three convolutions with 512 3×3 convolution kernels, ReLU activated; the size stays 14×14×512; after max pooling the size becomes 7×7×512;
(9) The data are flattened into a one-dimensional vector of 7 × 7 × 512 = 25088;
(10) After two 1×1×4096 fully connected layers and one 1×1×1000 fully connected layer (the fully connected layers are activated by ReLU), 1000 prediction results are finally output through softmax.
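The layer sequence (1)-(10) can be mirrored by the following PyTorch sketch; it is an illustrative reconstruction of a standard VGG16 rather than the exact loss network used in the embodiment:

```python
import torch
import torch.nn as nn

def vgg_block(in_ch, out_ch, convs):
    """`convs` 3x3 convolutions (stride 1, padding 1) with ReLU, then 2x2 max pooling."""
    layers = []
    for i in range(convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return layers

# Steps (1)-(10): 224x224x3 -> ... -> 7x7x512 -> 25088 -> 4096 -> 4096 -> 1000.
vgg16 = nn.Sequential(
    *vgg_block(3, 64, 2), *vgg_block(64, 128, 2), *vgg_block(128, 256, 3),
    *vgg_block(256, 512, 3), *vgg_block(512, 512, 3),
    nn.Flatten(),                        # 7 * 7 * 512 = 25088
    nn.Linear(25088, 4096), nn.ReLU(inplace=True),
    nn.Linear(4096, 4096), nn.ReLU(inplace=True),
    nn.Linear(4096, 1000), nn.Softmax(dim=1),
)

x = torch.randn(1, 3, 224, 224)
print(vgg16(x).shape)  # torch.Size([1, 1000])
```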
After the data set is amplified, an improved Faster R-CNN model is built on the PyTorch framework; the improved Faster R-CNN model consists of a feature extraction network, an RPN, a region-of-interest pooling layer and a classification-regression layer, and a SpotFPN multi-scale feature learning module is added to the feature extraction network.
As shown in fig. 6, SpotFPN adds an intermediate layer between each backbone level and the corresponding top-down level of the feature pyramid, which improves the feature extraction of the backbone network, doubles the sampling range of the FPN, and improves detection precision while keeping the number of weights at the same level.
In this embodiment, the main task of SpotFPN is to detect individual cracks; it detects single cracks at a relatively high speed and matches well the data set produced by style transfer, i.e. the crack shapes generated by style transfer, which makes it convenient for SpotFPN to extract features and perform fast, robust detection.
Specifically, spotFPN is composed of a lateral attention mechanism, a backbone network feature extraction layer, and a dual feature sampling 3 part.
(1) Lateral attention mechanism: the lateral spatial attention mechanism of SpotFPN places a squeeze-and-excitation network (SENet) at the top layer {C5} of the backbone network, which communicates with the top layer {P5} of the output feature pyramid. SENet automatically learns the importance of each channel and accordingly strengthens the features of important channels and weakens those of unimportant channels. SENet is applied to the top layer {C5} of the backbone network, and its output is added to and fused with the output features of the intermediate layer {M5} to form the top layer {P5} of the output feature pyramid.
(2) Backbone network feature extraction layer: the backbone network feature extraction layer obtains features directly from the backbone network, which reduces the information loss of features in lateral propagation and in the fusion of different features. Firstly, the intermediate-layer feature pyramid {M2, M3, M4, M5}, i.e. the backbone network feature extraction layer, is constructed by one convolution of the backbone features {C2, C3, C4, C5}; then the lateral convolutions are fused with the upsampled {P5} to generate the output-layer feature pyramid {P2, P3, P4}. The lateral connections use 1×1 convolution kernels; the input channels from the backbone ResNet to the feature pyramid are 256, 512, 1024 and 2048 respectively, and the number of channels inside the feature pyramid is 256.
Using a 1×1 convolution kernel to adjust the number of channels may lose part of the features and reduce recognition accuracy. In the deeper layers of the backbone network the feature map keeps shrinking while the number of channels keeps growing, so features of small targets are difficult to learn in the topmost layer {C5}; small targets are therefore detected in the intermediate layers {M2, M3, M4} that are laterally connected to the lower layers of the backbone, and large targets are detected in the top layers.
In this embodiment, SpotFPN is improved by enlarging the feature sampling range in the intermediate-layer feature pyramid, so that more features are fed to the predictor to improve the accuracy of crack detection; at the same time more features of small targets can be extracted, which works well for detecting small cracks.
(3) Dual feature sampling: both the intermediate-layer feature pyramid and the output-layer feature pyramid are sampled to obtain more feature information. SpotFPN fuses the output-layer feature {P5} with the intermediate layers M{M2, M3, M4, M5} to form the final output feature pyramid network P{P2, P3, P4, P5}. RoI pooling samples at both the M and P levels to obtain the RoI features of each level, and classification and regression functions are connected to the M-layer and P-layer features to generate the auxiliary loss and the main loss; the total loss is the sum of these two partial losses, and both losses are used in the training and testing phases.
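A schematic PyTorch sketch of the SpotFPN structure described in (1)-(3) above: SENet channel attention on {C5}, 1×1 lateral convolutions from 256/512/1024/2048 channels down to 256, and top-down fusion building the intermediate pyramid M and the output pyramid P. Class names, the SE reduction ratio and the smoothing convolutions are assumptions for illustration, not the patent's exact implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SEBlock(nn.Module):
    """Squeeze-and-excitation channel attention applied to the top backbone layer C5."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))      # squeeze: global average pooling
        return x * w[:, :, None, None]       # excite: per-channel reweighting

class SpotFPNSketch(nn.Module):
    """Lateral 1x1 convolutions (256/512/1024/2048 -> 256 channels), SE attention on C5,
    and top-down fusion producing the intermediate pyramid M and the output pyramid P."""
    def __init__(self, in_channels=(256, 512, 1024, 2048), out_channels: int = 256):
        super().__init__()
        self.lateral = nn.ModuleList(nn.Conv2d(c, out_channels, 1) for c in in_channels)
        self.se = SEBlock(in_channels[-1])
        self.smooth = nn.ModuleList(nn.Conv2d(out_channels, out_channels, 3, padding=1)
                                    for _ in in_channels)

    def forward(self, c2, c3, c4, c5):
        m5 = self.lateral[3](c5)
        p5 = m5 + self.lateral[3](self.se(c5))   # fuse the SE-attended top layer into P5
        m = [m5]
        for lat, c in zip(self.lateral[2::-1], (c4, c3, c2)):
            top_down = F.interpolate(m[-1], scale_factor=2, mode="nearest")
            m.append(lat(c) + top_down)          # intermediate pyramid M4, M3, M2
        m2, m3, m4 = m[3], m[2], m[1]
        p2, p3, p4, p5 = [self.smooth[i](t) for i, t in enumerate((m2, m3, m4, p5))]
        # Dual feature sampling: RoI features are pooled from both the M and the P levels.
        return (m2, m3, m4, m5), (p2, p3, p4, p5)
```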
S6: training the improved Faster R-CNN model constructed in the step S5 by adopting a training set and a verification set enhanced by data in the step S4 to obtain a trained model weight, and loading the trained model weight into the improved Faster R-CNN model to obtain a crack detection model of the photovoltaic module;
To make the model more stable, weights are used to balance the auxiliary loss L_auxiliary generated by the double sampling and the main loss L_original. Formally, the final loss function L is expressed as:
L = α·L_auxiliary + β·L_original
L_original = L_P-cls(P_p, t*) + θ·L_P-loc(d_p, b*)
L_auxiliary = L_M-cls(P_m, t*) + θ·L_M-loc(d_m, b*)
wherein α and β are the loss weights of the auxiliary loss and the main loss, respectively, used to balance the two losses; θ balances the weights of the classification loss and the localization loss, taking 0 or 1; L_M is the auxiliary loss function on the intermediate layers {M2, M3, M4, M5}; L_P is the main loss function on the output layers {P2, P3, P4, P5}; L_cls and L_loc are the classification loss and regression loss, the same as in the Faster R-CNN model; P_m, d_m and P_p, d_p are the classification predictions and position predictions of the intermediate layers and the feature output layers, respectively; t* and b* are the target label and the position information.
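A minimal sketch of the weighted total loss; the default values of α, β and θ are assumptions:

```python
def total_loss(cls_loss_p, loc_loss_p, cls_loss_m, loc_loss_m,
               alpha: float = 0.5, beta: float = 1.0, theta: float = 1.0):
    """Weighted sum of the main loss (output pyramid P) and the auxiliary loss
    (intermediate pyramid M): L = alpha * L_auxiliary + beta * L_original."""
    l_original = cls_loss_p + theta * loc_loss_p    # L_P-cls + theta * L_P-loc
    l_auxiliary = cls_loss_m + theta * loc_loss_m   # L_M-cls + theta * L_M-loc
    return alpha * l_auxiliary + beta * l_original
```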
S7: and (3) performing crack detection on the fusion image of the photovoltaic module by using the photovoltaic module crack detection model obtained in the step (S6), and marking the position where the crack defect is detected.
In this example, by introducing the improved SpotFPN, the crack detection rate and accuracy are both improved, and the comparison of the above methods is shown in table 1:
table 1 improved SpotFPN test results comparison
Method             Accuracy/%   Accuracy/%   Mean absolute error/%   Average detection speed/s
Improved SpotFPN   87.572       98.956       2.87                    1.113
SpotFPN            83.181       98.081       2.56                    1.71
FPN                82.022       97.001       4.35                    2.83
AugFPN             86.569       98.326       1.97                    4.22
Example 2:
a photovoltaic module crack detection system based on deep learning comprises a data acquisition module, an image preprocessing module, a data amplification module, an improved fast R-CNN module and a photovoltaic module crack detection module;
the data acquisition module acquires the photovoltaic module image and transmits the photovoltaic module image to the image preprocessing module;
the image preprocessing module preprocesses the image acquired by the data acquisition module and comprises a graying module, a Gaussian filtering module and an edge processing module; the graying module performs graying on the image acquired by the data acquisition module, the Gaussian filtering module forms the edge pixel point candidate region, and the edge processing module forms the edge image from the edge pixel point candidate region.
The data amplification module amplifies the edge-processed images; in this embodiment an FNS style transfer network is used to amplify the photovoltaic crack images;
the improved Faster R-CNN module comprises a training module and a testing module; the training module uses the training set to train the built improved Faster R-CNN network; the test module tests the trained Faster R-CNN network by using the test set; and finally, detecting cracks of the photovoltaic module by using the trained fast R-CNN network.
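Since SpotFPN is not available in a public library, the following sketch uses torchvision's stock fasterrcnn_resnet50_fpn as a stand-in to illustrate the training and detection workflow of the improved Faster R-CNN module; the function names, learning rate and score threshold are assumptions:

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Two classes: background + crack.
model = fasterrcnn_resnet50_fpn(weights=None, num_classes=2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9, weight_decay=5e-4)

def train_one_epoch(model, loader, optimizer, device="cuda"):
    model.to(device).train()
    for images, targets in loader:            # targets: list of dicts with "boxes", "labels"
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        loss_dict = model(images, targets)    # classification and regression losses
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

@torch.no_grad()
def detect_cracks(model, image, score_threshold=0.5, device="cuda"):
    model.to(device).eval()
    output = model([image.to(device)])[0]     # dict with "boxes", "labels", "scores"
    keep = output["scores"] >= score_threshold
    return output["boxes"][keep], output["scores"][keep]
```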

Claims (7)

1. The deep learning-based photovoltaic module crack detection method is characterized by comprising the following steps of:
s1: collecting a photovoltaic module picture data set;
s2: preprocessing the image obtained in the step S1 to obtain an edge detection image;
s3: marking cracks of the photovoltaic cell assembly in the edge detection image obtained in the step S2, and creating a training set and a verification set according to a preset proportion by the marked image;
s4: carrying out data enhancement on the training set and the verification set created in the step S3;
s5: building an improved Faster R-CNN model, wherein the improved Faster R-CNN model consists of a feature extraction network, an RPN, a region of interest pooling layer and a classification regression layer; adding a SpotFPN multi-scale feature learning module into a feature extraction network;
s6: training the improved Faster R-CNN model constructed in the step S5 by adopting a training set and a verification set enhanced by data in the step S4 to obtain a trained model weight, and loading the trained model weight into the improved Faster R-CNN model to obtain a crack detection model of the photovoltaic module;
s7: and (3) performing crack detection on the photovoltaic module image by using the photovoltaic crack detection model obtained in the step (S6), and marking the position where the crack defect is detected.
2. The deep learning-based photovoltaic module crack detection method according to claim 1, wherein the preprocessing of the image in step S2 comprises the steps of:
s21: graying the color image;
s22: Gaussian filtering and smoothing, specifically, performing high-low threshold comparison on the image grayed in S21, and then carrying out Gaussian filtering with an improved two-dimensional Gaussian filter to obtain a high signal-to-noise ratio image;
s23: calculating the gradient direction and the amplitude of the pixel point filtered in the step S22, and obtaining an edge candidate point by using a first-order differential operator;
s24: comparing the gray value of each edge candidate point with those of its adjacent pixel points in the same gradient direction, and retaining the candidate point if its gray value is the largest;
s25: performing double-threshold detection on the pixel points processed by the S24 to obtain edge pixel points of the image;
s26: and connecting all the edge pixel points to form an edge detection image.
3. The deep learning-based photovoltaic module crack detection method according to claim 1 or 2, wherein the data amplification in step S4 is implemented by generating images in batches with an FNS style transfer model, and the FNS style transfer model includes Image Transform Net and VGG16.
4. The method for detecting cracks of a photovoltaic module based on deep learning according to claim 2, wherein in step S23, gradient directions and magnitudes are calculated using a 3×3 neighborhood.
5. The deep learning-based photovoltaic module crack detection method according to claim 4, wherein the gradient direction is divided into: 0 °, 45 °, 90 °, and 135 °.
6. The deep learning-based photovoltaic module crack detection method according to claim 2 or 4, wherein the ratio of the high threshold T2 to the low threshold T1 set in step S25 is 2:1 or 3:1.
7. The photovoltaic module crack detection system based on deep learning is characterized by comprising a data acquisition module, an image preprocessing module, a data amplification module, an improved Faster R-CNN module and a photovoltaic module crack detection module;
the image preprocessing module comprises a graying module, a Gaussian filtering module and an edge processing module; the graying module performs graying on the image acquired by the data acquisition module, the Gaussian filtering module forms the edge pixel point candidate region, and the edge processing module forms the edge image.
CN202211347309.9A 2022-10-31 2022-10-31 Deep learning-based photovoltaic module crack detection method and system Pending CN116703812A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211347309.9A CN116703812A (en) 2022-10-31 2022-10-31 Deep learning-based photovoltaic module crack detection method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211347309.9A CN116703812A (en) 2022-10-31 2022-10-31 Deep learning-based photovoltaic module crack detection method and system

Publications (1)

Publication Number Publication Date
CN116703812A true CN116703812A (en) 2023-09-05

Family

ID=87826387

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211347309.9A Pending CN116703812A (en) 2022-10-31 2022-10-31 Deep learning-based photovoltaic module crack detection method and system

Country Status (1)

Country Link
CN (1) CN116703812A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117036348A (en) * 2023-10-08 2023-11-10 中国石油大学(华东) Metal fatigue crack detection method based on image processing and crack recognition model
CN117541623A (en) * 2023-11-23 2024-02-09 中国水产科学研究院黑龙江水产研究所 Fish shoal activity track monitoring system
CN118399888A (en) * 2024-06-27 2024-07-26 沛煜光电科技(上海)有限公司 Off-line type photovoltaic module EL comprehensive visual defect detection system and method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107767369A (en) * 2017-09-27 2018-03-06 杭州迈锐钶科技有限公司 A kind of the defects of buret detection method and device
CN112164038A (en) * 2020-09-16 2021-01-01 上海电力大学 Photovoltaic hot spot detection method based on deep convolutional neural network

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107767369A (en) * 2017-09-27 2018-03-06 杭州迈锐钶科技有限公司 A kind of the defects of buret detection method and device
CN112164038A (en) * 2020-09-16 2021-01-01 上海电力大学 Photovoltaic hot spot detection method based on deep convolutional neural network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JUSTIN JOHNSON, ALEXANDRE ALAHI, LI FEI-FEI: "Perceptual Losses for Real-Time Style Transfer and Super-Resolution", arXiv:1603.08155v1 [cs.CV], 27 March 2016, pages 1-18 *
包煦康 (Bao Xukang): "Design and Implementation of a Machine Vision-Based Appearance Defect Detection System for Pipe Joint Forgings", China Master's Theses Full-text Database, Engineering Science and Technology I, 15 May 2022, page 1 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117036348A (en) * 2023-10-08 2023-11-10 中国石油大学(华东) Metal fatigue crack detection method based on image processing and crack recognition model
CN117036348B (en) * 2023-10-08 2024-01-09 中国石油大学(华东) Metal fatigue crack detection method based on image processing and crack recognition model
CN117541623A (en) * 2023-11-23 2024-02-09 中国水产科学研究院黑龙江水产研究所 Fish shoal activity track monitoring system
CN117541623B (en) * 2023-11-23 2024-06-07 中国水产科学研究院黑龙江水产研究所 Fish shoal activity track monitoring system
CN118399888A (en) * 2024-06-27 2024-07-26 沛煜光电科技(上海)有限公司 Off-line type photovoltaic module EL comprehensive visual defect detection system and method

Similar Documents

Publication Publication Date Title
CN108961235B (en) Defective insulator identification method based on YOLOv3 network and particle filter algorithm
CN116703812A (en) Deep learning-based photovoltaic module crack detection method and system
CN109086824B (en) Seabed substrate sonar image classification method based on convolutional neural network
CN109146784B (en) Image super-resolution reconstruction method based on multi-scale generation countermeasure network
CN105956582B (en) A kind of face identification system based on three-dimensional data
CN111325748B (en) Infrared thermal image nondestructive testing method based on convolutional neural network
CN105069807B (en) A kind of stamped workpieces defect inspection method based on image procossing
CN113538433A (en) Mechanical casting defect detection method and system based on artificial intelligence
CN112967243A (en) Deep learning chip packaging crack defect detection method based on YOLO
CN107633520A (en) A kind of super-resolution image method for evaluating quality based on depth residual error network
CN110992275A (en) Refined single image rain removing method based on generation countermeasure network
CN109978848B (en) Method for detecting hard exudation in fundus image based on multi-light-source color constancy model
CN104299232B (en) SAR image segmentation method based on self-adaptive window directionlet domain and improved FCM
CN111080574A (en) Fabric defect detection method based on information entropy and visual attention mechanism
CN100433795C (en) Method for image noise reduction based on transforming domain mathematics morphology
CN102542543A (en) Block similarity-based interactive image segmenting method
CN116258664A (en) Deep learning-based intelligent defect detection method for photovoltaic cell
CN107463895A (en) Weak and small damage target detection method based on neighborhood vector PCA
CN112669249A (en) Infrared and visible light image fusion method combining improved NSCT (non-subsampled Contourlet transform) transformation and deep learning
CN117218101A (en) Composite wind power blade defect detection method based on semantic segmentation
CN116012687A (en) Image interaction fusion method for identifying tread defects of wheel set
CN115829942A (en) Electronic circuit defect detection method based on non-negative constraint sparse self-encoder
CN113361407B (en) PCANet-based spatial spectrum feature combined hyperspectral sea ice image classification method
CN106407975A (en) Multi-dimensional layered object detection method based on space-spectrum constraint
CN116596922B (en) Production quality detection method of solar water heater

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination