CN118154600A - String drop-out detection method and device of photovoltaic power generation system, electronic device, and medium

Publication number: CN118154600A
Authority: CN (China)
Legal status: Pending
Application number: CN202410575320.3A
Original language: Chinese (zh)
Inventors: 洪流, 柴东元, 李小飞, 杨昌东
Assignee: Snegrid Electric Technology Co ltd
Application filed by Snegrid Electric Technology Co ltd
Priority to CN202410575320.3A
Abstract

The invention discloses a string drop-out detection method, device, electronic device, and medium for a photovoltaic power generation system. The method comprises: acquiring a first image to be detected and inputting it into a trained target detection network to obtain a first output result; when the label boxes in the first output result satisfy a first set condition, inputting the first output result into a trained feature extraction network to obtain a second output result, where the second output result includes a first feature and at least one group of second features, the first feature characterizes the features extracted by the feature extraction network within the region covered by the label box, and each group of second features characterizes the features extracted outside the label box; and determining, based on the second output result, that a component with a string drop-out exists in the first image. The method improves both the efficiency and the accuracy of string drop-out detection.

Description

String drop-out detection method and device of photovoltaic power generation system, electronic device, and medium
Technical Field
The invention relates to the technical field of photovoltaic power generation, and in particular to a string drop-out detection method and device for a photovoltaic power generation system, an electronic device, and a medium.
Background
With advances in science and technology, the cost of photovoltaic power generation has gradually decreased and the scale of photovoltaic power stations has grown. At the same time, more and more faults occur during station operation, and among them one fault has a particularly large impact on energy yield: the string drop-out, in which a string of components is disconnected and stops generating power. In the related art, most string drop-outs are located by operation and maintenance personnel who inspect the site with a multimeter; such manual on-site detection is costly and inefficient.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a string drop-out detection method, device, electronic device, and medium for a photovoltaic power generation system that can improve the efficiency of string drop-out detection.
A string drop-out detection method for a photovoltaic power generation system comprises the following steps:
Acquiring a first image to be detected, and inputting the first image into a trained target detection network to obtain a first output result, where the label boxes in the first output result mark at least one target component in each rack in the first image, a target component being a component whose brightness is higher than that of the other components;
When the label boxes in the first output result satisfy a first set condition, inputting the first output result into a trained feature extraction network to obtain a second output result, where the second output result includes a first feature and at least one group of second features, the first feature characterizes the features extracted by the feature extraction network within the region covered by the label box, and each group of second features characterizes the features extracted in a region outside the label box;
Determining, based on the second output result, that a component with a string drop-out exists in the first image.
In the above scheme, the label boxes in the first output result include a first label box and/or a second label box; the first label box marks a single target component in a rack, and the second label box marks consecutive target components in a rack.
In the above scheme, after obtaining the first output result, the method further includes: if the label boxes in the first output result satisfy a second set condition, determining, according to the first output result, that a component with a string drop-out exists in the first image.
In the above scheme, determining that the label boxes in the first output result satisfy the second set condition includes: determining that the second set condition is satisfied when the first output result contains both a first label box and a second label box.
In the above scheme, determining that the label boxes in the first output result satisfy the first set condition includes: determining that the first set condition is satisfied when the first output result does not contain a first label box and a second label box at the same time.
In the above scheme, determining, based on the second output result, that a component with a string drop-out exists in the first image includes: determining at least one group of feature similarities according to the second output result, where a group of feature similarities characterizes the similarity between the first feature and one group of second features; and determining that the component inside the label box has a string drop-out when one group of feature similarities is smaller than a first set value.
In the above scheme, determining, according to the first output result, that a component with a string drop-out exists in the first image includes: determining that the components inside the second label box have a string drop-out when the number of first label boxes is larger than a second set value and the first label boxes intersect the second label box in the first output result.
A string drop-out detection device for a photovoltaic power generation system comprises:
a first detection module configured to acquire a first image to be detected and input the first image into a trained target detection network to obtain a first output result, where the label boxes in the first output result mark at least one target component in each rack in the first image, a target component being a component whose brightness is higher than that of the other components;
a second detection module configured to input the first output result into a trained feature extraction network to obtain a second output result when the label boxes in the first output result satisfy a first set condition, where the second output result includes a first feature and at least one group of second features, the first feature characterizes the features extracted within the region covered by the label box, and each group of second features characterizes the features extracted in a region outside the label box; and
a determining module configured to determine, based on the second output result, that a component with a string drop-out exists in the first image.
An electronic device comprises a memory and a processor. The memory stores a computer program, and the processor, when executing the computer program, implements the steps of the above string drop-out detection method for a photovoltaic power generation system.
A computer-readable storage medium stores a computer program which, when executed by a processor, performs the steps of the above string drop-out detection method for a photovoltaic power generation system.
According to the string drop-out detection method, device, electronic device, and medium for a photovoltaic power generation system, the trained target detection network marks the target components in each rack in the first image with label boxes. When the first set condition is satisfied, the first image containing the label boxes is input into the trained feature extraction network, which extracts a first feature and at least one group of second features from the first image, and the components with a string drop-out in the first image are determined from these features. Because the target detection network and the feature extraction network examine the image at two different levels, multi-scale target detection is achieved, and both the efficiency and the accuracy of string drop-out detection are improved.
Drawings
FIG. 1 is a flow chart of a string drop-out detection method of a photovoltaic power generation system according to one embodiment;
FIG. 2 is a schematic diagram of labeling a first image using first label boxes in one embodiment;
FIG. 3 is a schematic diagram of labeling a first image using a second label box in one embodiment;
FIG. 4 is a flowchart showing a specific implementation of step S103 in one embodiment;
FIG. 5 is a block diagram of a string drop-out detection device of a photovoltaic power generation system according to one embodiment.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, the application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are for illustration only and are not intended to limit the scope of the application.
Implementation details of the technical scheme of the embodiment of the present application are described in detail below.
In one embodiment, as shown in FIG. 1, a string drop-out detection method for a photovoltaic power generation system is provided. The method may include the following steps.
Step S101: acquire a first image to be detected, and input the first image into a trained target detection network to obtain a first output result.
The first image is an infrared image, which can be obtained by aerial photography of the photovoltaic power station using an infrared imager mounted on an unmanned aerial vehicle (UAV).
To ensure the quality of the first image, a suitable UAV and infrared imager must be selected so that the flight time and payload of the UAV meet the requirements of the imager, and the imager has sufficiently high resolution and sensitivity. The flight route and altitude of the UAV are planned according to the area and layout of the photovoltaic power station so that all photovoltaic panels are covered while a proper shooting angle and shooting distance are maintained.
It should be noted that a string drop-out means that a connection fault has occurred between components of a string in the photovoltaic power station, so that the string can no longer generate power normally. Such faults reduce the energy yield of the whole station and affect its generation efficiency. Because a string drop-out causes abnormal heating of the affected components, the abnormal regions show a clear temperature difference in an infrared image. Infrared images of the station captured by the UAV can therefore be analyzed to locate the specific components with a string drop-out fault.
After the first image is acquired, it is analyzed by the trained target detection network, which locates and marks abnormal-brightness regions in the first image, i.e., regions whose brightness is higher than that of a component in the normal state.
While processing the first image, the target detection network compares the brightness of the components. In practice, the brightness of one component can be compared with that of the other components in the same rack, or with that of the other components in the first image, so as to determine the target components, which are then marked with label boxes. A target component here is a component whose brightness is higher than that of the other components in the same rack.
After the first image has been processed by the target detection network, a first output result containing the label boxes is obtained.
It will be appreciated that the heating signature of a dropped-out component is clear in infrared imaging, that is, the dropped-out component appears brighter than a normal component. The target detection network therefore screens out the regions of the first image where a drop-out may exist, and these regions are analyzed further.
The rack mentioned here is the structure that supports and mounts the photovoltaic components. Components are typically mounted on racks that are in turn fixed to the ground or another support structure, so that the components are securely held in place. A rack usually carries several components connected in series to form a string, and the racks of a station together form component arrays, where one array may contain several strings.
In one embodiment, the target detection network marks the target components in the first image with a first label box and/or a second label box, so the label boxes in the first output result include a first label box and/or a second label box. Assuming pixel brightness is measured as a gray-scale value, brightness is maximal at a gray-scale value of 255 (white) and minimal at 0 (black). FIG. 2 shows a first image containing 15 components in which single target components are marked with first label boxes; each first label box therefore represents a separate, abnormally bright component in a rack. FIG. 3 shows a first image in which consecutive target components within one string are marked with a second label box; the 5 components of each vertical column in FIG. 3 form one string, so FIG. 3 contains 3 strings, and consecutive target components are adjacent within one string. Thus a first label box contains exactly one target component, while a second label box contains at least two.
In practice, besides the first and second label boxes, a third label box can also be used to mark the racks, so that the position and extent of each rack of the photovoltaic power station can be clearly identified and distinguished.
Since racks are an indispensable part of a photovoltaic power station, the output result of the target detection network always contains third label boxes marking the racks, whereas single target components and consecutive target components do not necessarily exist. The label boxes contained in the first output result can therefore be divided into the following cases:
(1) Only the third label box is contained. In this case, it is generally possible to determine that there is no run-out of the components of the photovoltaic power plant, indicating that there is no region of abnormal brightness in the first image.
(2) The method comprises a first labeling frame and a third labeling frame. In this case, it is explained that the luminance of a certain component in the photovoltaic power plant is higher than the luminance of the peripheral component, wherein the first label frame provides the luminance abnormality of the individual component in each rack, and the third label frame provides the position and range information of the rack in the photovoltaic power plant.
(3) The method comprises a second labeling frame and a third labeling frame. In this case, it is explained that the brightness of the continuous component in the photovoltaic power plant is higher than the brightness of the peripheral component, wherein the second label frame provides abnormal brightness of the continuous component in each rack, and the third label frame provides position and range information of the rack in the photovoltaic power plant.
(4) The method comprises a first labeling frame, a second labeling frame and a third labeling frame. In this case, it is stated that the photovoltaic power plant has a certain component with higher brightness than the peripheral component, and also has consecutive components with higher brightness than the peripheral component, wherein the first label frame provides the brightness abnormality of the individual component in each rack, the second label frame provides the brightness abnormality of the consecutive component in each rack, and the third label frame provides the position and range information of the rack in the photovoltaic power plant.
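The four cases above can be told apart purely from the classes of the detected boxes. A minimal sketch, assuming the class ids from the data-preparation step of the training section below (0 = single target component, 1 = consecutive target components, 2 = rack); the function name is hypothetical:

```python
# Sketch: distinguishing the four label-box cases of the first output
# result. class_ids is the list of class ids of all detected boxes,
# with 0 = first label box (single target component),
# 1 = second label box (consecutive target components),
# 2 = third label box (rack).

def classify_output(class_ids):
    has_first = 0 in class_ids
    has_second = 1 in class_ids
    if not has_first and not has_second:
        return "case 1: racks only, no anomaly"
    if has_first and not has_second:
        return "case 2: single-component anomalies"
    if not has_first and has_second:
        return "case 3: consecutive-component anomalies"
    return "case 4: both kinds of anomaly"
```

Cases 2 and 3 are the ones routed to the feature extraction network by the first set condition; case 4 is handled directly under the second set condition.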
Step S102: when the label boxes in the first output result satisfy the first set condition, input the first output result into the trained feature extraction network to obtain a second output result.
When the label boxes of the first output result satisfy the first set condition, the first output result, i.e., the first image containing the label boxes, is analyzed further by the trained feature extraction network to obtain a second output result.
It should be noted that the first output result produced by the target detection network may contain missed detections. The first set condition therefore defines the situations in which missed detections are likely, and when it is satisfied, the first image is analyzed further with the feature extraction network to improve the accuracy of string drop-out detection.
The feature extraction network can capture rich feature information in the image, including local structure, texture, and illumination, so rich feature representations can be extracted from the first image. Verifying the first output result with the feature extraction network reduces the possibility of missed detections and improves the accuracy of string drop-out detection.
The feature extraction network may be a Vision Transformer (ViT) or a convolutional neural network such as a Residual Network (ResNet). While processing the first output result, the feature extraction network performs convolution operations on it to capture the features of the different regions of the first image, including a first feature and at least one group of second features. The first feature is the feature extracted inside the label box, and a group of second features is the feature extracted from one string region outside the label box.
In practice, the first output result can be input into the feature extraction network directly, that is, the whole first image containing the label boxes is input, and the network extracts the first feature and at least one group of second features from it. Alternatively, the image region covered by the label box and the images of the other string regions can be cropped out and input separately: analyzing the label-box region yields the first feature, and analyzing each of the other string regions yields one group of second features.
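The second input mode above (crop first, extract features per region) can be sketched as follows. This is a toy illustration, not the patent's implementation: the image is a plain 2D list of gray values, the box layout `(row0, row1, col0, col1)` is an assumption, and mean brightness stands in for the feature a ViT/ResNet would produce from each crop:

```python
# Crop the label-box region and one string region outside it, then
# compute a toy "feature" for each. A real system would feed each crop
# to a trained feature extraction network instead of mean_brightness.

def crop(image, box):
    r0, r1, c0, c1 = box          # half-open (row0, row1, col0, col1)
    return [row[c0:c1] for row in image[r0:r1]]

def mean_brightness(region):
    pixels = [p for row in region for p in row]
    return sum(pixels) / len(pixels)

image = [[10, 10, 200, 200],
         [10, 10, 200, 200]]      # right half abnormally bright
label_box = (0, 2, 2, 4)          # region covered by the label box
other_region = (0, 2, 0, 2)       # a string region outside the box

first_feature = mean_brightness(crop(image, label_box))      # 200.0
second_feature = mean_brightness(crop(image, other_region))  # 10.0
```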
In one embodiment, the first set condition is that the first image does not contain the first label box and the second label box at the same time. Thus, when the first output result shows that the first image contains first label boxes but no second label box, or a second label box but no first label boxes, the label boxes in the first output result satisfy the first set condition.
It will be appreciated that when the first output result does not contain both kinds of label box, detection may be incomplete: an inconspicuous single target component may have been missed, the boundary of a region of consecutive target components may be inaccurate, or the complex features and context of the abnormal-brightness region may not have been fully captured and understood. The feature extraction network is therefore used to analyze these regions more deeply.
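The first set condition as stated in this embodiment reduces to a single boolean check. A minimal sketch, again using the hypothetical class-id convention 0 = first label box, 1 = second label box:

```python
# First set condition: the first output result does NOT contain first
# label boxes (class 0) and second label boxes (class 1) at the same
# time; only then is the feature extraction network invoked.

def meets_first_condition(class_ids):
    return not (0 in class_ids and 1 in class_ids)
```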
Step S103: determine, based on the second output result, that a component with a string drop-out exists in the first image.
Here, based on the second output result, the first feature is compared with at least one group of second features, so that the similarity between the marked region and the surrounding regions can be further verified and any missed target component can be found.
In this embodiment, processing the first image with both the target detection network and the feature extraction network makes full use of the target's appearance features and of rich feature representations, so that the presence of a string drop-out in the first image can be judged more comprehensively. The target detection network provides the position and coarse appearance of the target, while the feature extraction network provides a finer feature representation; missed detections can then be found on top of the predicted target regions, which improves the accuracy of string drop-out detection.
In one embodiment, as shown in FIG. 4, step S103 may be implemented as follows.
Step S401: determine at least one group of feature similarities according to the second output result.
Step S402: when one group of feature similarities is smaller than the first set value, determine that the component inside the label box has a string drop-out.
Specifically, the first feature is compared with each group of second features, and at least one group of feature similarities is calculated, where one group of feature similarities is the similarity between the first feature and one group of second features.
In one implementation, the feature similarity may be the cosine similarity, which is measured by the angle between two feature vectors and is calculated as:

cos(θ) = (A · B) / (‖A‖ ‖B‖)

where A denotes the first feature, B denotes a group of second features, ‖A‖ denotes the modulus of the first feature, and ‖B‖ denotes the modulus of the group of second features. The cosine similarity ranges from -1 to 1; the closer the value is to 1, the more similar the two feature vectors are, and the closer it is to -1, the less similar they are.
In practice, besides the cosine similarity, the feature similarity may also be the Euclidean distance, the Manhattan distance, the Pearson correlation coefficient, and so on; all of these measure the similarity between feature vectors.
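The cosine-similarity formula above can be written directly from its definition; a minimal stdlib-only sketch (a real system would compute this with numpy or torch over the extracted feature vectors):

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = (A . B) / (||A|| ||B||)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))   # identical direction -> 1.0
print(cosine_similarity([1.0, 0.0], [-1.0, 0.0]))  # opposite direction -> -1.0
```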
After the feature similarity between the first feature and each group of second features has been calculated, if one group of feature similarities is smaller than the first set value, the difference between the first feature and that group of second features is considered too large. This means the region covered by the label box has unique features clearly different from the other regions of the image, so there is no missed detection, and it can be judged that the components in the label-box region have a string drop-out.
If every group of feature similarities is greater than or equal to the first set value, the difference between the first feature and the second features is considered small. This may mean that the target detection network failed to detect the region of the target component in the first image accurately and a detection may have been missed; in that case the first output result cannot be used directly as the final detection result.
It should be noted that the first set value is an empirical value; it may be set to 0.6 and, in practice, adjusted based on experience.
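Steps S401 and S402 then amount to a threshold test over the per-group similarities. A minimal sketch, using the empirical default of 0.6 mentioned above; the function name is hypothetical:

```python
# S401/S402: the label-box region is judged to contain a string
# drop-out as soon as one group of feature similarities (first feature
# vs. one group of second features) falls below the first set value.

FIRST_SET_VALUE = 0.6  # empirical default from the description

def has_string_dropout(feature_similarities):
    """feature_similarities: similarity of the first feature to each
    group of second features (one value per string region)."""
    return any(sim < FIRST_SET_VALUE for sim in feature_similarities)
```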
In one embodiment, after the first output result has been obtained, if the label boxes in the first output result satisfy the second set condition, it is determined according to the first output result that a component with a string drop-out exists in the first image.
It can be appreciated that in some cases the target detection network identifies the first image accurately, so the first output result is distinctive and discriminative enough for the drop-out region to be judged reliably from it alone; the second set condition defines the situations in which the first output result is suitable for this direct judgment.
In one embodiment, the second set condition is that the first image contains both a first label box and a second label box. Thus, when the first output result shows that the first image contains first label boxes and a second label box, the label boxes in the first output result satisfy the second set condition.
It can be understood that the simultaneous presence of first and second label boxes in the first image indicates that the marked regions are strongly characteristic and discriminative; the brightness anomalies in this case tend to be pronounced and closely related to a string drop-out, so the label boxes of the first output result can be used directly to determine the dropped-out components. It also shows that the target detection network can effectively identify and distinguish single and consecutive target components, so no additional processing by the feature extraction network is needed.
When the second set condition is satisfied, the positions and distribution of the label boxes in the first output result are analyzed to locate the drop-out region in the first image.
In one embodiment, it is first determined whether the number of first label boxes exceeds a second set value, which may be set to 4. A large number of first label boxes can mean that several single components inside the second label box have abnormal brightness, which may be a sign of a string drop-out.
In addition, it is determined from the distribution of the boxes whether the first label boxes intersect the second label box. Such an intersection indicates that a component in the second label box is also covered by a first label box, i.e., that a component has been judged abnormally bright more than once, which may indicate a string drop-out.
Accordingly, when the number of first label boxes is larger than the second set value and the first label boxes intersect the second label box, it is determined that the components in the second label box have a string drop-out; that is, the region covered by the second label box is the drop-out region.
The second set value is also an empirical value and may be set based on practical experience.
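The second-condition judgment above combines a count check with a box-intersection check. A minimal sketch under the assumption that boxes are axis-aligned `(x0, y0, x1, y1)` tuples (the patent does not fix a box representation); two such boxes intersect exactly when they overlap on both axes:

```python
SECOND_SET_VALUE = 4  # empirical default from the description

def boxes_intersect(a, b):
    # Axis-aligned rectangles (x0, y0, x1, y1) overlap on both axes.
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def dropout_in_second_box(first_boxes, second_box):
    # Drop-out in the second label box: more than SECOND_SET_VALUE
    # first label boxes, at least one of which intersects it.
    return (len(first_boxes) > SECOND_SET_VALUE
            and any(boxes_intersect(fb, second_box) for fb in first_boxes))
```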
The training process of the target detection network is described in detail below. It can be divided into the following stages: data preparation, data preprocessing, data set division, model selection and initialization, loss function definition, model training, model verification and parameter tuning, and model testing and evaluation. Each stage is described below.
(1) Data preparation: first, a labeled data set needs to be prepared, including labeled infrared images and the corresponding labeling information. In this embodiment, labeling the obtained infrared images of the photovoltaic power station includes: marking each module whose brightness is obviously higher than that of surrounding modules with a first labeling frame, with corresponding class 0; marking each continuous run of modules brighter than surrounding modules with a second labeling frame, with corresponding class 1; and marking each bracket in the image with a third labeling frame, with corresponding class 2, so that the distribution of components on each bracket can be determined.
(2) Data preprocessing: the prepared data set is preprocessed, including operations such as image scaling, cropping and data enhancement, to facilitate training of the target detection network.
(3) Data set division: the data set is divided into a training set, a validation set and a test set, typically in a fixed proportion, to facilitate monitoring the training process and evaluating model performance.
(4) Model selection and initialization: a suitable target detection network model is selected; in this embodiment, the YOLO real-time target detection network is used. The parameters of the model are then initialized.
(5) Loss function definition: a loss function of the target detection network is defined.
(6) Model training: the target detection network is trained using the training set, and the network parameters are updated through a back-propagation algorithm, so that the network gradually learns the target detection and classification tasks.
(7) Model verification and parameter tuning: the trained model is verified using the validation set, and the parameters of the model are adjusted according to the verification results to improve model performance.
(8) Model testing and evaluation: finally, the trained model is tested using the test set, and its performance and generalization capability are evaluated.
Through the above stages, training of the target detection network is completed.
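The three-class labeling scheme in step (1) can be illustrated with a small example. The YOLO-style label line "class x_center y_center width height" (normalized coordinates) and all coordinate values below are hypothetical, chosen only to show how the three classes coexist in one image's annotation.

```python
# Hypothetical YOLO-format labels for one infrared image. Class ids follow
# the embodiment: 0 = single bright module, 1 = continuous run of bright
# modules, 2 = bracket. All coordinates are invented for illustration.
labels = [
    (0, 0.12, 0.30, 0.04, 0.08),  # single module brighter than neighbours
    (1, 0.50, 0.30, 0.30, 0.08),  # continuous run of bright modules
    (2, 0.50, 0.32, 0.95, 0.12),  # the bracket containing them
]
lines = ["{} {:.2f} {:.2f} {:.2f} {:.2f}".format(*row) for row in labels]
print("\n".join(lines))
```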
The training process of the feature extraction network is described in detail below.
First, infrared images without dropped strings are screened from the infrared images of the photovoltaic power station for training. Images corresponding to the components on the same bracket are extracted from these infrared images; if the number of components on a bracket is more than 20, the corresponding image is taken as a target. The set of all targets in one infrared image forms a group, and the groups are placed into a data set.
The feature extraction network is trained using this data set. During training, the images corresponding to two targets in the same group are randomly drawn from the data set, a random sample pair is constructed from the two images, and the paired samples are input into the feature extraction network to obtain feature vectors A and B, whose cosine similarity is then calculated. If the sample pair is a positive sample, the label value for the computed cosine similarity is 1; otherwise it is 0. This label is used in back-propagation to train the feature extraction network and adjust its parameters, so that the network can better distinguish positive samples from negative samples, improving the accuracy and performance of feature extraction.
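The similarity computation in this training step can be sketched as follows. This is an illustrative implementation, not the patented network: the feature vectors are assumed to be plain 1-D lists, and the squared-error loss on the similarity is an assumption (the patent does not specify the loss function used for back-propagation).

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two 1-D feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def pair_loss(sim, label):
    """Hypothetical loss: label is 1 for a positive pair, 0 for a negative
    pair; a squared error on the similarity could drive back-propagation."""
    return (sim - label) ** 2

A = [1.0, 0.0, 1.0]
B = [1.0, 0.0, 1.0]
sim = cosine_similarity(A, B)
print(round(sim, 3), round(pair_loss(sim, 1), 6))  # 1.0 0.0
```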
When constructing a sample pair, the r channel of the first image is shifted as a whole by an increase or decrease of (m ± n), where m is a random integer from 0 to 10 and n is a random integer from 0 to 3, and the g and b channels are randomly shifted by ±n. If the second image is processed in the same manner, the two images form a random positive sample pair. If instead the r channel of the second image is shifted as a whole by an increase or decrease of (j ± k), where j is a random integer from 30 to 50 and k is a random integer from 0 to 3, and its g and b channels are randomly shifted by ±k, the two images form a random negative sample pair.
The random positive and negative sample pairs constructed in this way ensure both stable training of the feature extraction network and sufficient randomness.
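The sample-pair construction can be sketched as below. This is a hedged illustration, not the patented code: an image is modeled as a list of per-pixel (r, g, b) tuples, and clipping to [0, 255] is an added assumption. The shift ranges follow the description (m in [0, 10] and n in [0, 3] for positive pairs; j in [30, 50] and k in [0, 3] for the second image of a negative pair).

```python
import random

def shift_channels(image, r_shift, other_shift):
    """Shift the r channel of every pixel by r_shift and the g and b
    channels randomly by +/- other_shift, clipping to [0, 255]."""
    out = []
    for (r, g, b) in image:
        out.append((max(0, min(255, r + r_shift)),
                    max(0, min(255, g + random.choice((-other_shift, other_shift)))),
                    max(0, min(255, b + random.choice((-other_shift, other_shift))))))
    return out

def make_positive_pair(img1, img2):
    # Both images receive the same small overall r shift of +/-(m +/- n).
    m, n = random.randint(0, 10), random.randint(0, 3)
    shift = random.choice((-1, 1)) * (m + random.choice((-n, n)))
    return shift_channels(img1, shift, n), shift_channels(img2, shift, n)

def make_negative_pair(img1, img2):
    # The first image gets the small shift; the second gets a much larger
    # shift of +/-(j +/- k), making the pair dissimilar.
    m, n = random.randint(0, 10), random.randint(0, 3)
    j, k = random.randint(30, 50), random.randint(0, 3)
    a = shift_channels(img1, random.choice((-1, 1)) * (m + random.choice((-n, n))), n)
    b = shift_channels(img2, random.choice((-1, 1)) * (j + random.choice((-k, k))), k)
    return a, b

img = [(120, 110, 100)] * 4
pos_a, pos_b = make_positive_pair(img, img)
neg_a, neg_b = make_negative_pair(img, img)
```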
In the above embodiment, the brightness-abnormal region in the image is identified by the target detection network; the feature vector corresponding to the brightness-abnormal region and the feature vectors corresponding to the other regions outside it are then extracted by the feature extraction network, and the string-dropped components in the image can be determined from these two types of feature vectors. This improves the efficiency of string drop detection. Moreover, performing string drop detection by combining the target detection network and the feature extraction network makes full use of the information in the image, improving the accuracy of string drop detection.
In one embodiment, a string drop detection device of a photovoltaic power generation system is provided, and referring to fig. 5, a string drop detection device 500 of the photovoltaic power generation system may include: a first detection module 501, a second detection module 502 and a determination module 503.
The first detection module 501 is configured to obtain a first image to be detected, and input the first image to a trained target detection network to obtain a first output result; the labeling frame in the first output result is used for labeling at least one target component in each bracket in the first image; the brightness of the target component is higher than that of other components. The second detection module 502 is configured to input the first output result to the trained feature extraction network to obtain a second output result when it is determined that the labeling frame in the first output result meets the first setting condition; the second output result includes the first feature and at least one set of second features; the first feature is used for representing the features extracted by the feature extraction network in the region where the labeling frame is located; the second feature is used for representing the features extracted by the feature extraction network in the area outside the labeling frame. The determining module 503 is configured to determine, based on the second output result, that a string-dropped component is present in the first image.
In one embodiment, the annotation boxes in the first output result comprise a first annotation box and/or a second annotation box; the first labeling frame is used for labeling single target components in the same bracket; the second labeling frame is used for labeling the continuous target components in the same bracket.
In one embodiment, the determining module 503 is further configured to determine, according to the first output result, that a string-dropped component exists in the first image if it is determined that the labeling frame in the first output result meets the second setting condition.
In one embodiment, the determining module 503 is configured to determine that, in a case where the first output result includes the first label frame and the second label frame, the label frame in the first output result meets the second setting condition.
In one embodiment, the second detection module 502 is configured to determine that the label frame in the first output result meets the first setting condition when the first output result does not include the first label frame and the second label frame at the same time.
In one embodiment, the determining module 503 is configured to determine at least one set of feature similarities according to the second output result; the feature similarity is used to characterize the similarity between the first feature and a set of second features; and in a case where a set of feature similarities is smaller than the first set value, it is determined that the component located in the labeling frame has a dropped string.
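The decision made by the determining module can be sketched as follows. This is an illustrative example: the function name and the default threshold of 0.8 are hypothetical stand-ins for the first set value, which the patent leaves as a configurable parameter.

```python
# Illustrative sketch of the determining module's decision: one similarity
# value per group of second features, compared against the first set value.

def has_string_drop(feature_similarities, first_set_value=0.8):
    """Report a dropped string if any group's similarity between the
    labeled-region feature and an outside-region feature group falls
    below the set value."""
    return any(sim < first_set_value for sim in feature_similarities)

print(has_string_drop([0.95, 0.91, 0.40]))  # True: one region differs sharply
print(has_string_drop([0.95, 0.91, 0.90]))  # False
```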
In one embodiment, the determining module 503 is configured to determine that, in a case where it is determined that the number of the first label frames is greater than the second set value and that there is an intersection between the first label frame and the second label frame in the first output result, a component located in the second label frame has a string drop.
For specific limitations of the string drop detection device of the photovoltaic power generation system, reference may be made to the limitations of the string drop detection method of the photovoltaic power generation system above, which are not repeated here. The above modules in the string drop detection device 500 of the photovoltaic power generation system may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware in, or independent of, a processor in the computer device, or may be stored in software form in a memory in the computer device, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, an electronic device is provided, including a memory storing a computer program and a processor that, when executing the computer program, implements the string drop detection method of the photovoltaic power generation system.
In one embodiment, a computer storage medium is provided, having stored thereon a computer program which, when executed by a processor, implements the string drop detection method of the photovoltaic power generation system.
It should be noted that the logic and/or steps represented in the flowcharts or otherwise described herein may be considered an ordered listing of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus or device, such as a computer-based system, a processor-containing system, or another system that can fetch the instructions from the instruction execution system, apparatus or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate or transport the program for use by or in connection with the instruction execution system, apparatus or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program is printed, as the program can be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of the following techniques, well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present invention, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise.
While embodiments of the present invention have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the invention, and that changes, modifications, substitutions and variations may be made to the above embodiments by those of ordinary skill in the art within the scope of the invention.

Claims (10)

1. A method for detecting a falling string of a photovoltaic power generation system, characterized by comprising the following steps:
Acquiring a first image to be detected, and inputting the first image into a trained target detection network to obtain a first output result; the labeling frame in the first output result is used for labeling at least one target component in each bracket in the first image; the brightness of the target component is higher than that of other components;
Under the condition that the labeling frame in the first output result meets a first setting condition, inputting the first output result into a trained feature extraction network to obtain a second output result; the second output result includes a first feature and at least one set of second features; the first features are used for representing the features extracted by the feature extraction network in the region where the annotation frame is located; the second feature is used for representing the feature extracted by the feature extraction network in the area outside the annotation frame;
based on the second output result, determining that a component with a string falling exists in the first image.
2. The method for detecting the falling string of the photovoltaic power generation system according to claim 1, wherein the labeling frame in the first output result comprises a first labeling frame and/or a second labeling frame; the first labeling frame is used for labeling single target components in the same bracket; the second labeling frame is used for labeling continuous target components in the same bracket.
3. The method for detecting the falling string of the photovoltaic power generation system according to claim 2, wherein after obtaining the first output result, the method further comprises:
and if the labeling frame in the first output result meets the second setting condition, determining that a component with a string falling exists in the first image according to the first output result.
3. The method for detecting a falling string of a photovoltaic power generation system according to claim 2, wherein determining that the labeling frame in the first output result satisfies a second setting condition comprises:
and determining that the labeling frame in the first output result meets a second setting condition under the condition that the first output result comprises the first labeling frame and the second labeling frame.
5. The method for detecting a falling string of a photovoltaic power generation system according to claim 2, wherein determining that the labeling frame in the first output result satisfies a first setting condition comprises:
and under the condition that the first output result does not contain the first annotation frame and the second annotation frame at the same time, determining that the annotation frame in the first output result meets a first setting condition.
6. The method for detecting a falling string of a photovoltaic power generation system according to claim 1, wherein the determining, based on the second output result, that a component with a falling string exists in the first image comprises:
Determining at least one set of feature similarities according to the second output result; the feature similarity is used to characterize similarity between the first feature and a set of the second features;
And under the condition that the set of feature similarities is smaller than a first set value, determining that the component located in the labeling frame has a falling string.
7. The method for detecting a falling string of a photovoltaic power generation system according to claim 3, wherein the determining, according to the first output result, that a component with a falling string exists in the first image comprises:
And determining that the component located in the second labeling frame has a falling string in a case where it is determined that the number of the first labeling frames is greater than a second set value and an intersection exists between the first labeling frame and the second labeling frame in the first output result.
8. A falling string detection device of a photovoltaic power generation system, characterized by comprising:
The first detection module is used for acquiring a first image to be detected, inputting the first image into a trained target detection network and obtaining a first output result; the labeling frame in the first output result is used for labeling at least one target component in each bracket in the first image; the brightness of the target component is higher than that of other components;
The second detection module is used for inputting the first output result to the trained feature extraction network to obtain a second output result under the condition that the labeling frame in the first output result meets the first setting condition; the second output result includes the first feature and at least one set of second features; the first features are used for representing the features extracted by the feature extraction network in the region where the annotation frame is located; the second feature is used for representing the features extracted by the feature extraction network in the area outside the labeling frame;
And the determining module is used for determining that a component with a string falling exists in the first image based on the second output result.
9. An electronic device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method for detecting a string drop of a photovoltaic power generation system according to any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method for detecting a falling string of a photovoltaic power generation system according to any one of claims 1 to 7.
CN202410575320.3A 2024-05-10 2024-05-10 String falling detection method and device of photovoltaic power generation system, electronic equipment and medium Pending CN118154600A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410575320.3A CN118154600A (en) 2024-05-10 2024-05-10 String falling detection method and device of photovoltaic power generation system, electronic equipment and medium

Publications (1)

Publication Number Publication Date
CN118154600A true CN118154600A (en) 2024-06-07

Family

ID=91287195

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410575320.3A Pending CN118154600A (en) 2024-05-10 2024-05-10 String falling detection method and device of photovoltaic power generation system, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN118154600A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110569837A (en) * 2018-08-31 2019-12-13 阿里巴巴集团控股有限公司 Method and device for optimizing damage detection result
CN111815564A (en) * 2020-06-09 2020-10-23 浙江华睿科技有限公司 Method and device for detecting silk ingots and silk ingot sorting system
CN114529817A (en) * 2022-02-21 2022-05-24 东南大学 Unmanned aerial vehicle photovoltaic fault diagnosis and positioning method based on attention neural network
CN114926395A (en) * 2022-04-12 2022-08-19 尚特杰电力科技有限公司 Photovoltaic panel infrared image string drop detection method and system
CN115082455A (en) * 2022-07-27 2022-09-20 中国科学技术大学 Photovoltaic assembly positioning and defect detecting method in infrared image based on deep learning
US20230267599A1 (en) * 2022-02-24 2023-08-24 Samsung Display Co., Ltd. System and method for defect detection
CN116645315A (en) * 2022-02-24 2023-08-25 三星显示有限公司 System and method for defect detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination