CN114972997B - Tracking type photovoltaic power generation optimization method based on all-sky image 3D cloud layer reconstruction


Info

Publication number: CN114972997B
Application number: CN202210625073.4A
Authority: CN (China)
Prior art keywords: camera, sky, image, cloud, power generation
Legal status: Active (granted)
Other versions: CN114972997A
Original language: Chinese (zh)
Inventors: 宋华婷, 张岗, 邹纪明, 邓建华, 王刚, 谢涛, 周黄河, 李晓, 王平, 田浩东, 王一飞, 陈军, 张睿
Assignees: Lingyang Technology Hangzhou Co ltd; Zhongmin New Energy Ningxia Yanchi Photoelectric Energy Co ltd
Application filed by Lingyang Technology Hangzhou Co ltd and Zhongmin New Energy Ningxia Yanchi Photoelectric Energy Co ltd
Publication of application: CN114972997A; publication of grant: CN114972997B

Classifications

    • G06V20/10: Image or video recognition of terrestrial scenes
    • G06N3/045: Neural network architectures; combinations of networks
    • G06V10/462: Salient features, e.g. scale invariant feature transform [SIFT]
    • G06V10/75: Organisation of image matching processes, e.g. coarse-fine approaches
    • G06V10/82: Image or video recognition using neural networks
    • H02S50/00: Monitoring or testing of PV systems
    • Y04S10/50: Systems or methods supporting power network operation or management, involving load-side end user applications


Abstract

The invention discloses a tracking type photovoltaic power generation optimization method based on all-sky image 3D cloud layer reconstruction, comprising the following steps: (1) building intelligent weather stations; (2) calibrating the fisheye cameras in the laboratory; (3) designing and deploying the intelligent weather station array; (4) calibrating the azimuth of each fisheye camera in the field; (5) acquiring all-sky images; (6) identifying and segmenting cloud regions; (7) identifying cloud heights layer by layer; (8) verifying the cloud height identification results; (9) reconstructing the cloud layers in 3D, layer by layer; (10) optimizing the angle of the tracking photovoltaic panels according to the 3D cloud reconstruction result. Based on the deployment and data processing of an array of all-sky cameras, the invention simultaneously achieves high accuracy, high real-time performance and high economy.

Description

Tracking type photovoltaic power generation optimization method based on all-sky image 3D cloud layer reconstruction
Technical Field
The invention belongs to the technical field of photovoltaic power generation, and particularly relates to a tracking photovoltaic power generation optimization method based on all-sky image 3D cloud layer reconstruction.
Background
Real-time identification, analysis and prediction of cloud conditions is of great significance to many socioeconomic activities. In the tracking photovoltaic power generation industry in particular, accurately and promptly modeling the relationship between cloud conditions, the solar position and the photovoltaic panel angle can significantly increase generated power and optimize the generation curve.
Traditional cloud identification methods, such as satellite cloud imagery, ground-based ceilometers and ground-based lidar scanning, all have limitations in practice and cannot simultaneously satisfy the requirements of identification accuracy, processing speed and economy.
Chinese patent publication CN111652126A discloses an irradiance inversion method based on satellite cloud images, comprising the following steps: acquiring satellite cloud image data; processing the images to obtain a cloud index; establishing a clear-sky model and deriving a radiation attenuation index from the model and the measured irradiance; establishing a mathematical relationship between the cloud index and the radiation attenuation index; and computing regional irradiance data from that relationship. The method can substitute for the data of ground-based radiation stations, saving their high construction cost and the labor cost of data quality control, and it reflects the changes of the radiation field across a whole region more intuitively.
However, the satellite cloud imagery method has low accuracy at small geographic scales. The ground-based ceilometer method is costly; ground-based lidar scanning equipment is expensive and complex to deploy, and because it must sweep the sky angle by angle it cannot meet the requirement of real-time identification.
A traditional ground-based binocular all-sky cloud imaging system can meet the real-time and economy requirements, but its identification accuracy, the most important property in cloud identification, is often poor. A ground-based binocular 3D imaging system has the following characteristics:
The farther apart the two cameras are placed on the ground, the better the 3D reconstruction of high-altitude clouds. The disadvantages are: 1. low-altitude clouds are reconstructed poorly; 2. the farther apart the cameras, the more the two sky images captured at the same moment differ, making image matching for 3D reconstruction harder and slower.
The closer together the two cameras are placed, the better the 3D reconstruction of low-altitude clouds, but the worse the reconstruction of high-altitude clouds; at the same time, because the two simultaneously captured all-sky images are highly similar, reconstruction accuracy generally drops with cameras of ordinary performance.
Therefore, a new 3D cloud reconstruction method is needed to optimize tracking photovoltaic power generation while simultaneously satisfying high accuracy, high real-time performance and high economy.
Disclosure of Invention
The invention provides a tracking type photovoltaic power generation optimization method based on all-sky image 3D cloud layer reconstruction which, based on the deployment and data processing of an array of all-sky cameras, simultaneously achieves high accuracy, high real-time performance and high economy.
A tracking type photovoltaic power generation optimization method based on all-sky image 3D cloud layer reconstruction comprises the following steps:
(1) Building intelligent weather stations, each comprising an anemometer, a wind vane, an all-sky camera and an irradiance meter;
(2) Using a fisheye camera as the all-sky camera and calibrating it in the laboratory;
(3) According to the layout of the tracking photovoltaic equipment, deploying n intelligent weather stations on the ground in an array, such that every row of photovoltaic panels lies within wireless communication range of an intelligent weather station;
(4) Calibrating in the field the horizontal position, center-point orientation and camera angle of every fisheye camera;
(5) Acquiring all-sky images with the fisheye cameras and storing them in a time-series database in the intelligent weather station for subsequent processing;
(6) Identifying the cloud regions in each image with a convolutional deep neural network model, and removing large blue-sky regions with an image segmentation algorithm;
(7) Forming C(n,2) all-sky camera pairs from the n intelligent weather stations, ordering the pairs by baseline from small to large, and identifying cloud heights layer by layer, from low to high, from the corresponding all-sky image pairs;
(8) Verifying the cloud height identification results: if the error at some specific height exceeds a threshold, returning to step (4) and adding an all-sky camera pair at the corresponding ground position; if all errors are below the threshold, executing step (9);
(9) Reconstructing the cloud layers in 3D from the images captured by the camera pairs obtained in step (7) and the precise distances between the cameras;
(10) Feeding the 3D cloud reconstruction result into the tracking photovoltaic power generation system and simulating the occlusion and light-scattering relationship between the sun and the clouds; then optimizing the tracking photovoltaic panel angle according to the ratio of direct solar irradiance to scattered and diffusely reflected irradiance in the simulation result.
Further, in step (2), the fisheye camera is calibrated in the laboratory with the Scaramuzza model: 6 to 30 pictures of the same black-and-white checkerboard are taken at different angles, and the characteristic parameters of the camera are then determined with the corner detection algorithm provided by OpenCV;
If the calibrated parameters differ little between cameras, no per-camera correction is applied and the same set of correction parameters is used for all cameras; if the parameters differ between cameras by more than 2%, each camera must be parameter-corrected before deployment.
In step (4), the horizontal position of each fisheye camera is corrected with the GPS positioning system mounted on the intelligent weather station; the center point of the fisheye camera is oriented towards the zenith, and this orientation is corrected with a level mounted on the intelligent weather station during field deployment; the camera angle is first determined approximately with a compass and then further corrected with a sun position recognition algorithm.
The sun position recognition algorithm specifically comprises:
A circle recognition module identifies the shape of the sun in the field all-sky image to locate the center of the sun; a color recognition module analyzes the color values at the sun's position in the image to confirm the characteristic range of the sun's color, characterized by high blue and green channel values and high brightness and saturation; finally, the position of the sun's center in the image is confirmed more precisely;
The differences between several theoretical solar azimuths and the actual solar azimuths identified by the sun position recognition algorithm are averaged, and the camera angle is corrected by that average.
In step (5), the fisheye camera captures all-sky images at 1944×1944 resolution, with a capture interval of five minutes.
In step (7), before identifying cloud heights layer by layer, similar pixels between each image pair must be densely matched, as follows:
First, the SIFT algorithm automatically identifies, in both images, groups of feature points that have local character and are invariant to rotation, scaling and brightness changes; second, a K-D tree algorithm matches the feature point groups found in each image;
Before searching, the feature point set is built into a K-D tree data structure according to the feature point vectors; the feature point closest in descriptor distance is then found by searching that previously built structure.
The correspondence between the baseline of an all-sky camera pair and the cloud height is: h_i ≈ 10 × d_i, where d_i is the distance between the i-th all-sky camera pair and h_i is the corresponding cloud height.
In step (9), the layer-by-layer 3D cloud reconstruction process comprises:
inputting the images shot by a paired pair of all-sky cameras together with the precise distance between the cameras, and matching the same feature pixels between the images by algorithm to complete the 3D parallax model;
then directly solving, through trigonometric relations, the real-world coordinate position of every pixel in the image; these coordinates carry the position and height information of the cloud layer. The transform is:
[x_w, y_w, z_w]^T = R_l · [x_l, y_l, z_l]^T + T_l
where x_w, y_w, z_w represent the real-world coordinates of an image pixel referenced to the overall geographic position; x_l, y_l, z_l represent the real-world coordinates referenced to the position of the first all-sky camera, obtained from the parallax model using the camera intrinsics R and T determined in the earlier indoor calibration; and R_l and T_l are the orientation and rotation-angle data of the all-sky camera from its field deployment;
and reconstructing one set of 3D results per image pair within its cloud height range; after the 3D reconstruction results of all image pairs are obtained, the final result is the equal-weight average of each coordinate point.
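The triangulation and camera-to-world transform described above can be illustrated with a minimal pure-Python sketch. The triangulation helper assumes an idealized two-camera geometry (the feature lies in the vertical plane through both cameras, between them horizontally); all function names are illustrative, not from the patent:

```python
import math

def cloud_height_from_elevations(baseline_m, elev1_deg, elev2_deg):
    """Triangulate the height of one matched cloud feature from the elevation
    angles seen by two cameras separated by baseline_m on the ground:
    h = b * tan(e1) * tan(e2) / (tan(e1) + tan(e2))."""
    t1 = math.tan(math.radians(elev1_deg))
    t2 = math.tan(math.radians(elev2_deg))
    return baseline_m * t1 * t2 / (t1 + t2)

def camera_to_world(p_cam, R_l, T_l):
    """Apply the rigid transform [xw, yw, zw] = R_l @ [xl, yl, zl] + T_l that
    moves first-camera coordinates into global geographic coordinates."""
    return [sum(R_l[i][j] * p_cam[j] for j in range(3)) + T_l[i]
            for i in range(3)]

def equal_weight_average(points):
    """Equal-weight fusion of the 3D results obtained from all image pairs."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(3)]
```

For example, two cameras 300 m apart that both see the same cloud feature at 45° elevation place it at a height of 150 m.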
In step (10), the specific logic for optimizing the tracking photovoltaic panel angle is:
When direct solar irradiance is high: the tracking photovoltaic power generation system commands the panel angle controller to set the panel perpendicular to the sun direction, receiving the maximum direct solar radiation;
When ambient scattered and diffusely reflected irradiance is high: the tracking photovoltaic power generation system determines the optimal panel angle with a machine learning model applied to the simulation result, and sends the result as a command to the on-site panel angle controller.
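The dispatch between the two cases can be sketched as below. The 0.6 direct-fraction threshold and the horizontal fallback angle are assumptions of this sketch, standing in for the simulation and the machine-learning model named in the patent:

```python
def panel_command(direct_wm2, diffuse_wm2, sun_azimuth_deg, sun_elevation_deg,
                  direct_fraction_threshold=0.6):
    """Illustrative dispatch for step (10). When direct irradiance dominates,
    point the panel normal at the sun (tilt = 90 - solar elevation);
    otherwise fall back to a horizontal panel as a stand-in for the
    model-determined diffuse-optimal angle."""
    total = direct_wm2 + diffuse_wm2
    if total > 0 and direct_wm2 / total >= direct_fraction_threshold:
        return {"mode": "direct", "azimuth_deg": sun_azimuth_deg,
                "tilt_deg": 90.0 - sun_elevation_deg}
    return {"mode": "diffuse", "azimuth_deg": sun_azimuth_deg, "tilt_deg": 0.0}
```

With the sun at 60° elevation and 800 W/m² direct against 100 W/m² diffuse, the rule tilts the panel 30° from horizontal towards the sun; with mostly diffuse light it keeps the panel flat.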
Compared with the prior art, the invention has the following beneficial effects:
1. The invention adopts an array deployment design; by greatly increasing the amount of image data, ordinary consumer fisheye cameras can meet the accuracy requirement.
2. The invention segments the cloud region with a convolutional deep neural network model; feeding only the model-segmented cloud into the subsequent algorithms significantly improves identification accuracy.
3. Based on the choice of horizontal distances between the all-sky cameras, cloud height modeling proceeds layer by layer, improving accuracy.
4. The high-accuracy 3D cloud reconstruction finally obtained not only provides on-site cloud height, cloud coverage, cloud boundaries and similar information, but also the relationship between the photovoltaic panels, the clouds and the solar position, providing important guidance for better optimizing the tracking photovoltaic panel angle.
Drawings
Fig. 1 is a specific flowchart of a tracking type photovoltaic power generation optimization method based on the reconstruction of an all-sky image 3D cloud layer;
FIG. 2 is a schematic diagram of laboratory calibration of a fisheye camera in an embodiment of the invention;
FIG. 3 is a schematic diagram of an array arrangement of intelligent weather stations on the ground in an embodiment of the invention;
FIG. 4 is a schematic view of a sun position recognition image according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the full sky cloud layer range division in an embodiment of the present invention;
FIG. 6 is a schematic diagram of an array type all-sky camera according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating a result of dense matching of pixels between all-sky images according to an embodiment of the present invention;
fig. 8 is a schematic diagram of 3D parallax modeling in an embodiment of the present invention;
fig. 9 is a schematic diagram of layer-by-layer cloud layer 3D reconstruction in an embodiment of the present invention.
Detailed Description
The invention will be described in further detail with reference to the drawings and examples, it being noted that the examples described below are intended to facilitate the understanding of the invention and are not intended to limit the invention in any way.
As shown in fig. 1, a tracking photovoltaic power generation optimization method based on all-sky image 3D cloud layer reconstruction includes the following steps:
Step 1, preparing the intelligent weather station (SWM).
The intelligent weather station comprises an anemometer, a wind vane, an all-sky camera, an irradiance meter, etc.
Step 2, calibrating the parameters of the fisheye camera.
Unlike a traditional pinhole lens, the fisheye camera used in the all-sky camera system has the advantage of an extremely wide field of view. To obtain that ultra-wide field, however, the fisheye image also exhibits significant distortion. To restore the cloud accurately in the later steps, the characteristic parameters of the fisheye camera must first be measured very precisely.
Typically the manufacturer provides characteristic lens parameters, but for a particular camera, especially a fisheye camera, such parameters are often not accurate enough, and for manufacturing reasons each individual camera may also differ. To achieve high-precision 3D cloud reconstruction, the method therefore calibrates each fisheye camera additionally.
Calibration is done in the laboratory with the Scaramuzza model. As shown in fig. 2, 6 to 30 images of the same black-and-white checkerboard are taken at different angles, and the characteristic parameters of the camera are determined with the corner detection algorithm provided by OpenCV. If the calibrated parameters differ little between cameras, no per-camera correction is needed and the same set of correction parameters is used; if the parameters differ substantially between cameras (by more than 2%), each camera must be parameter-corrected before deployment.
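The core of the Scaramuzza model is a polynomial that maps a pixel's distance from the distortion center to a viewing ray. A minimal pure-Python sketch of that back-projection follows; estimating the polynomial coefficients from the 6 to 30 checkerboard shots (e.g. with OpenCV's corner detector) is omitted, and all names and values here are illustrative:

```python
import math

def scaramuzza_ray(u, v, coeffs, cx, cy):
    """Back-project pixel (u, v) to a unit viewing ray with the Scaramuzza
    omnidirectional model: the ray is (x, y, f(rho)), where (x, y) is the
    pixel offset from the distortion center (cx, cy), rho = |(x, y)|, and
    f(rho) = a0 + a2*rho**2 + a3*rho**3 + a4*rho**4 (no linear term)."""
    x, y = u - cx, v - cy
    rho = math.hypot(x, y)
    a0, a2, a3, a4 = coeffs
    z = a0 + a2 * rho**2 + a3 * rho**3 + a4 * rho**4
    n = math.sqrt(x * x + y * y + z * z)
    return (x / n, y / n, z / n)
```

With only a0 nonzero the model reduces to a pinhole of focal length a0; real fisheye lenses need the higher-order terms fitted per camera, which is what the 2% cross-camera difference check above is guarding.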
Step 3, designing the array positions of the all-sky cameras.
To satisfy the identification and 3D reconstruction accuracy requirements for high, medium and low clouds, camera placement within the array is optimized so as to obtain all-sky image pairs at long, medium and short baselines.
Besides acquiring all-sky images, the intelligent weather station hosting the camera is also responsible for subsequently sending angle optimization commands to the tracking photovoltaic panels. With its present computing capability, one intelligent weather station can serve roughly 100 rows of tracking photovoltaic equipment, and its position is chosen at the center of the panels it serves to provide a better wireless communication signal.
As shown in fig. 3, the final position design of the full-sky camera array will satisfy the following conditions:
1. The whole array is arranged as a matrix; 2. every row of photovoltaic panels in the field lies within wireless communication range of an intelligent weather station; 3. subject to the two conditions above, as few intelligent weather stations as possible are deployed, to save purchase and operating costs.
Step 4, calibrating the camera positions and azimuths in the field.
The accuracy of the camera orientation also directly affects the accuracy of subsequent 3D reconstruction. In the azimuth calibration step, the operating standard must be strictly followed until the preset criterion is reached. Specifically:
1) The center point orientation of the camera will be pointed at the zenith, which will be corrected by a level gauge equipped above the intelligent weather station at the time of deployment in the field.
2) The position of the camera on the horizontal plane will be determined by a GPS positioning system equipped on the intelligent weather station.
3) The uniformity of the cameras' angular orientation in the horizontal plane strongly affects the subsequent multi-image pixel matching algorithm. Traditional camera angle correction methods include the compass method, landmark image recognition and the like. In the proposed method, the compass method first determines the approximate lens angle, and a self-developed sun position recognition algorithm then further corrects the camera angle.
The core of the sun position recognition algorithm consists of two modules: a circle recognition module and a color recognition module.
The circle recognition module locates the center of the sun by recognizing the sun's shape in the field all-sky image. Because the sun in the image is affected by fisheye distortion and cloud occlusion, the polygon-judgement threshold in the algorithm is iteratively adjusted during optimization so that irregular circles can be recognized.
The color recognition module confirms the characteristic range of the sun's color by analyzing the color values at the sun's position in the image, characterized by high blue and green channel values and high brightness and saturation. Finally, the position of the sun's center in the image is confirmed more precisely.
Averaging the differences between several theoretical solar azimuths and the actual solar azimuths computed by the image recognition algorithm yields a more accurate actual camera angle, as shown in fig. 4. The camera angle can then be adjusted at the field end, or corrected at the later algorithm end. After the initial angle correction is completed, additional corrections are made at regular intervals to guard against field conditions that may change the angle.
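The averaging step can be sketched as follows. Using a circular mean, rather than a plain arithmetic mean, is this sketch's own safeguard for azimuth pairs straddling 0°/360°, not wording from the patent:

```python
import math

def camera_yaw_correction(theoretical_az_deg, observed_az_deg):
    """Average the (theoretical - observed) solar azimuth differences over
    several images with a circular mean, so that angle pairs wrapping past
    360 degrees do not cancel incorrectly. Returns the correction in degrees."""
    s = c = 0.0
    for t, o in zip(theoretical_az_deg, observed_az_deg):
        d = math.radians(t - o)
        s += math.sin(d)
        c += math.cos(d)
    return math.degrees(math.atan2(s, c))
```

For example, a theoretical azimuth of 5° against an observed 355° correctly yields a +10° correction instead of the -350° a naive subtraction would give.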
Step 5, acquiring all-sky images.
All-sky images are acquired by a dedicated 185°×360° fisheye camera and stored in a time-series database in the on-site intelligent weather station (SWM) for subsequent processing. For a conventional cloud identification task, ordinary 540p image quality is basically sufficient, but the 3D cloud reconstruction task requires higher image resolution to meet the accuracy requirement. The method captures all-sky images at 1944×1944 resolution, which satisfies the accuracy requirements of all algorithms in the subsequent tasks. An image is taken every five minutes and stored in png format. When necessary, files are compressed before transmission to save traffic on the 5G network.
Step 6, identifying and segmenting cloud regions.
If the sky area and the cloud range are not separated before 3D cloud reconstruction, unclear cloud boundaries, sky misclassified as cloud and similar problems easily arise. The invention therefore adds a convolutional deep neural network model to segment the cloud range. Feeding only the model-segmented cloud into the subsequent algorithms significantly improves identification accuracy.
As shown in fig. 5, if a large blue-sky area is present in the input image during 3D restoration of the cloud, it must first be removed with an image segmentation algorithm: in computer graphics the blue sky is a uniformly textured low-frequency region, and its overly similar features easily introduce mismatches during dense multi-image pixel matching.
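A classical stand-in for the blue-sky removal is a red-to-blue ratio threshold: clear sky is strongly blue (low R/B), while cloud is near-gray (R close to B). The patent's actual segmentation uses the convolutional network described above, so the function below, with its assumed threshold, is only an illustrative baseline:

```python
def is_sky_pixel(r, g, b, ratio_threshold=0.6):
    """Classify a pixel as clear blue sky when its red-to-blue ratio is low.
    The 0.6 threshold is an assumption for illustration, not from the patent."""
    return b > 0 and r / b < ratio_threshold

def remove_sky(pixels):
    """Keep only cloud pixels from a list of (r, g, b) tuples."""
    return [p for p in pixels if not is_sky_pixel(*p)]
```

A saturated blue pixel such as (60, 120, 200) is dropped as sky, while a gray cloud pixel such as (220, 220, 225) is kept.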
Step 7, identifying cloud heights layer by layer.
Determining the cloud height also strongly affects the 3D cloud reconstruction. In general, even a conventional binocular all-sky imaging system can identify the cloud accurately in the horizontal XY coordinates, but in the Z direction perpendicular to the ground the identification error is often high. The method therefore focuses on the technical difficulty of identifying cloud height with a ground-based all-sky camera system.
According to industry experience and practice, the ground-based camera baseline d and the optimal cloud height identification height h satisfy the approximate relation:
h=10*d
Cloud heights are generally distributed between 1500 and 7500 meters, so the invention deploys all-sky cameras with a nearest spacing of 150 meters and a farthest spacing of 750 meters, which covers the optimization of the common cloud height situations.
A conventional binocular camera system acquires only one pair of all-sky images at a time for the subsequent 3D modeling. In the invention, the array all-sky camera system provides up to C(n,2) combinations at the same moment. As shown in fig. 6, an array of 16 all-sky cameras provides 120 pairs of all-sky images in total. Compared with a traditional imaging system, the number of images obtained grows as O(n²), which significantly alleviates the shortage of image data during modeling.
Meanwhile, based on the chosen horizontal distances between the all-sky cameras, cloud height modeling proceeds layer by layer to improve accuracy. Because different camera baselines have optimal modeling heights for different cloud heights, ordering the camera baselines from small to large lets the method identify cloud heights layer by layer, from low to high, from the all-sky image pairs. The correspondence is: h_i ≈ 10 × d_i.
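The pairing and layering logic can be sketched in a few lines; the station coordinates and the 4×4 grid used in the example are illustrative:

```python
from itertools import combinations
import math

def layered_camera_pairs(stations):
    """All C(n,2) pairings of the station positions (x, y) in meters, sorted
    by baseline so cloud layers are identified from low to high, with each
    pair's target height from the rule of thumb h ~= 10 * d."""
    pairs = []
    for (i, a), (j, b) in combinations(enumerate(stations), 2):
        d = math.hypot(a[0] - b[0], a[1] - b[1])
        pairs.append({"stations": (i, j), "baseline_m": d,
                      "cloud_height_m": 10 * d})
    return sorted(pairs, key=lambda p: p["baseline_m"])
```

A 4×4 grid of 16 stations at 150 m spacing yields exactly the 120 pairs mentioned above, with the shortest 150 m baselines targeting clouds near 1500 m.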
Dense matching of similar pixels between each image pair is required before cloud-height identification. The algorithm combines SIFT with a K-D tree search, completing pixel-to-pixel matching efficiently and accurately. The resulting pixel correspondences, together with the horizontal distance between the paired cameras and trigonometric relations, then yield the height of each matched pixel point.
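The trigonometric step can be sketched as follows. This is a simplified 2D sketch, assuming both cameras and the cloud point lie in one vertical plane (a real deployment would generalise this to full 3D geometry):

```python
import math

def cloud_height(d_m: float, elev_near_deg: float, elev_far_deg: float) -> float:
    """Height of a cloud point above the camera plane, from two elevation angles.

    d_m: horizontal distance between the paired cameras.
    elev_near_deg: elevation seen by the camera nearer the point's ground track.
    elev_far_deg: elevation seen by the farther camera (the smaller angle).
    """
    t_near = math.tan(math.radians(elev_near_deg))
    t_far = math.tan(math.radians(elev_far_deg))
    # With ground offsets x and x + d from the two cameras:
    #   h = x * t_near  and  h = (x + d) * t_far  =>  solve for h.
    return d_m * t_near * t_far / (t_near - t_far)

# A cloud at 1500 m seen from cameras 150 m apart (ground offsets 850 m / 1000 m):
e_near = math.degrees(math.atan(1500 / 850))
e_far = math.degrees(math.atan(1500 / 1000))
print(round(cloud_height(150, e_near, e_far)))  # 1500
```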
The input to the algorithm that detects matching points between all-sky images is two photographs of the photovoltaic plant taken at the same moment. As shown in fig. 7, the matching process consists of two main steps:
First, the SIFT algorithm automatically identifies, in both images, groups of feature points that have local character and are invariant to rotation, scale change, and brightness change. SIFT also remains stable under viewpoint change, affine transformation, and image noise; its feature descriptors carry high information content and strong distinctiveness, so they can be matched accurately within large amounts of feature data. The SIFT algorithm is also comparatively fast and can meet near-real-time requirements.
Second, a K-D tree algorithm matches the feature-point groups found in each image in the previous step. Before the search, the feature-point set is organised into a K-D tree data structure keyed on the feature vectors; when looking for the feature point at the smallest metric distance from a query point, the search runs against this pre-built structure, reducing the search complexity.
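A minimal pure-Python sketch of this second step, nearest-neighbour search over descriptor vectors with a K-D tree. A real deployment would use, e.g., OpenCV's FLANN-based matcher on 128-dimensional SIFT descriptors; the 2-D points below are illustrative stand-ins:

```python
def build_kdtree(points, depth=0):
    """Recursively build a K-D tree: median split on a cycling axis."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid],
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def nearest(node, target, depth=0, best=None):
    """Return (point, squared_distance) of the stored point closest to target."""
    if node is None:
        return best
    d2 = sum((a - b) ** 2 for a, b in zip(node["point"], target))
    if best is None or d2 < best[1]:
        best = (node["point"], d2)
    axis = depth % len(target)
    diff = target[axis] - node["point"][axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, target, depth + 1, best)
    if diff * diff < best[1]:  # the other half-space may still hold a closer point
        best = nearest(far, target, depth + 1, best)
    return best

descriptors = [(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)]
tree = build_kdtree(descriptors)
print(nearest(tree, (9, 2))[0])  # (8, 1)
```

Building the tree once and querying it repeatedly is what reduces the matching cost relative to a brute-force scan over all descriptors.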
Step 8: identifying and verifying the cloud height.
After the cloud-height identification step, the result can be verified by other methods, such as conventional cloud-height measurement, lidar scanning, or unmanned-aerial-vehicle measurement (these methods achieve high cloud-height accuracy but have drawbacks in other respects). A conventional binocular all-sky imaging system has an error of about 10% in cloud-height measurement; theoretical and practical verification shows that the array all-sky imaging method of the invention reaches an error level of about 3% for each cloud-height layer.
If the verification step finds a larger error for clouds at a specific height, for example clouds above 5 km, corresponding all-sky camera pairings (spacing greater than 500 m) can be added on the ground. The whole array all-sky camera apparatus is designed as an adaptive system, so that recognition of the problematic cloud layer can be improved.
Step 9: layer-by-layer 3D reconstruction of the cloud layer.
After the cloud-height identification passes accuracy verification, the final 3D cloud reconstruction can begin.
A traditional cloud 3D restoration algorithm first requires the circular, distorted fisheye image to be de-distorted before processing. In the embodiment of the invention, cloud reconstruction is carried out directly with a proprietary fisheye-image 3D reconstruction algorithm. To restore the on-site cloud conditions accurately, the algorithm places high demands on the precision of the input data. The required data include: GPS information for each camera in the array all-sky camera system; the high-resolution all-sky images taken by the fisheye cameras; the precise capture time of each image (kept synchronised by the network module of the intelligent weather station); and detailed fisheye lens parameters (calibrated in an indoor laboratory).
Given the images taken by paired all-sky cameras and the precise distance between the cameras, 3D parallax modeling is completed by algorithmically comparing identical feature pixels between the images, as shown in fig. 8. With accurate camera position and image information, the real-world coordinate position of each pixel point in the image can be obtained directly through analytic trigonometric relations; in the method of the invention, these coordinates accurately reflect the position and height of the cloud layer. The specific formula is as follows:
where x_w, y_w, z_w denote the real-world coordinates of a pixel point referenced to the global geographic position; x_l, y_l, z_l denote the real-world coordinates referenced to the position of the first all-sky camera; R and T are built-in parameters of the all-sky camera, obtained by prior indoor calibration; and R_l and T_l are the orientation and rotation-angle data of the all-sky camera as deployed in the field.
Since each image pair reconstructs one set of 3D results within its cloud-height range, once the 3D reconstruction results of all image pairs have been obtained, the final value at each coordinate point is computed as the equal-weight average of the individual estimates.
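The equal-weight fusion step can be sketched as:

```python
# Equal-weight fusion: every image pair contributes one 3D estimate for a
# cloud point; the final coordinate is the plain (unweighted) mean.

def fuse_equal_weight(estimates):
    """estimates: list of (x, y, z) tuples for the same reconstructed point."""
    n = len(estimates)
    return tuple(sum(e[i] for e in estimates) / n for i in range(3))

print(fuse_equal_weight([(0.0, 0.0, 1500.0), (2.0, 2.0, 1600.0)]))
# (1.0, 1.0, 1550.0)
```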
Step 10: result simulation with the tracking photovoltaic digital-twin project.
The 3D cloud information obtained above is fed into a proprietary tracking-photovoltaic digital-twin system, which simulates the occlusion and light-scattering relationship between the sun and the cloud layer through physical modeling. The simulation achieves irradiance errors below 5% at a spatial resolution as fine as 5 m, and outputs the direct, scattered, and diffuse-reflected irradiance of the current environment.
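The core of the occlusion step is projecting each reconstructed cloud point onto the ground along the sun direction. A sketch under assumptions not given in the text: flat terrain, and azimuth measured clockwise from north:

```python
import math

def cloud_shadow_offset(h_m: float, sun_elev_deg: float, sun_azim_deg: float):
    """Ground displacement (east, north) of the shadow cast by a point at height h_m."""
    reach = h_m / math.tan(math.radians(sun_elev_deg))  # horizontal shadow reach
    az = math.radians(sun_azim_deg)
    # The shadow falls on the side opposite the sun's azimuth.
    return (-reach * math.sin(az), -reach * math.cos(az))

# Sun due south (azimuth 180 deg) at 45 deg elevation: a cloud point at 1500 m
# shades the ground about 1500 m to its north.
east, north = cloud_shadow_offset(1500, 45, 180)
```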
The angle of the tracking photovoltaic panel is then optimized according to the ratio of direct solar irradiance to scattered and diffuse-reflected irradiance in the simulation result. The specific optimization logic is as follows:
When direct solar irradiance dominates: the tracking photovoltaic optimization system sends an instruction to the panel-angle controller so that the panel is perpendicular to the sun direction, receiving direct solar radiation to the maximum extent.
When ambient scattered and diffuse-reflected irradiance dominates: the intelligent weather station determines the optimal tracking-panel angle through a machine-learning model, based on the simulated values of direct, scattered, and diffuse-reflected irradiance, and dispatches the result as commands to the on-site panel-angle controllers.
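The two-branch dispatch above can be sketched as follows. The direct-fraction threshold and the flat-tilt fallback are illustrative assumptions standing in for the machine-learning model of the second branch:

```python
def tracker_command(direct_wm2: float, diffuse_wm2: float,
                    sun_elevation_deg: float,
                    direct_fraction_threshold: float = 0.6) -> dict:
    """Pick a panel command from simulated irradiance components (sketch)."""
    total = direct_wm2 + diffuse_wm2
    if total > 0 and direct_wm2 / total >= direct_fraction_threshold:
        # Direct-dominated: panel normal points at the sun, i.e. tilt = 90 - elevation.
        return {"mode": "track_sun", "tilt_deg": 90.0 - sun_elevation_deg}
    # Diffuse-dominated: the patent uses a learned model here; a flat panel that
    # sees the whole sky dome is a common stand-in heuristic.
    return {"mode": "diffuse_optimal", "tilt_deg": 0.0}

print(tracker_command(800, 100, 60))  # {'mode': 'track_sun', 'tilt_deg': 30.0}
```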
The high-precision 3D cloud reconstruction finally obtained by the method provides on-site information such as cloud height, cloud coverage, and cloud boundaries, as well as the geometric relationship among the photovoltaic panels, the cloud layer, and the sun position, giving important guidance for optimizing the angle of the tracking photovoltaic panel.
In practical application, compared with photovoltaic plants already running a conventional tracking algorithm, using the information obtained by the array all-sky imaging system to guide the downstream model yields an additional power gain of at least 2% under most weather conditions, and the generation power curve can be further optimized to meet grid-connection requirements.
The foregoing embodiments describe the technical solution and advantages of the present invention in detail. It should be understood that they are merely illustrative and are not intended to limit the invention; any modifications, additions, or equivalent substitutions made within the scope of the principles of the invention fall within the scope of protection of the invention.

Claims (8)

1. A tracking photovoltaic power generation optimization method based on all-sky-image 3D cloud-layer reconstruction, characterized by comprising the following steps:
(1) An intelligent weather station is built, the intelligent weather station comprising an anemometer, a wind vane, an all-sky camera, and an irradiance meter;
(2) The full-sky camera adopts a fisheye camera, and the fisheye camera is calibrated in a laboratory;
(3) According to the arrangement condition of the tracking type photovoltaic power generation equipment, n intelligent weather stations are arranged on the ground in an array mode, and each row of photovoltaic panels is located in the wireless communication signal range of the intelligent weather stations;
(4) Performing field calibration on the horizontal positions, the central point orientations and the camera angles of all the fisheye cameras;
(5) The fish-eye camera is utilized to collect all-sky images, and the collected images are stored in a time sequence database in the intelligent weather station for subsequent processing;
(6) Identifying a cloud layer range in an image by using a convolutional deep neural network model, and removing a large-area blue sky region by using an image segmentation algorithm;
(7) According to the arrangement of the n intelligent weather stations, C(n,2) all-sky camera pairs are formed; the spacings of the camera pairs are arranged from small to large, and the cloud-layer heights are identified layer by layer from low to high from the corresponding all-sky image pairs;
(8) Verifying the cloud-height identification result; if the error for a certain specific cloud height is above a threshold, returning to step (4) and adding an all-sky camera pairing at the corresponding ground spacing; if all errors are below the threshold, executing step (9);
(9) Performing layer-by-layer 3D reconstruction of the cloud layer using the images taken by the all-sky camera pairs obtained in step (7) and the precise distances between the cameras; the process of layer-by-layer 3D reconstruction of the cloud layer is as follows:
inputting images shot by paired all-sky cameras and the accurate distance between the cameras, and comparing the same characteristic pixel points between the images through an algorithm to complete 3D parallax modeling;
directly solving the real-world coordinate position of each pixel point in the image by formula through trigonometric relations, the coordinates reflecting the position and height information of the cloud layer; the formula is as follows:
wherein x_w, y_w, z_w denote the real-world coordinates of a pixel point in the image referenced to the global geographic position; x_l, y_l, z_l denote the real-world coordinates referenced to the position of the first all-sky camera; R and T are built-in parameters of the all-sky camera, obtained through prior indoor calibration; and R_l and T_l are the orientation and rotation-angle data of the all-sky camera as deployed in the field;
reconstructing a group of 3D results in each cloud height range of each image pair, and calculating the average value of each coordinate point by adopting an equal weight method as a final result after obtaining the 3D reconstruction results of all the image pairs;
(10) Inputting the cloud layer 3D reconstruction result into a tracking type photovoltaic power generation system, and simulating the shielding and light scattering relationship between the sun and the cloud layer; and optimizing the angle of the tracking type photovoltaic power generation plate according to the proportion of the direct solar irradiance to the scattered and diffuse reflection irradiance in the simulation result.
2. The tracking type photovoltaic power generation optimization method based on the 3D cloud layer reconstruction of the all-sky image according to claim 1, wherein in the step (2), when a fisheye camera is calibrated in a laboratory, a Scaramuzza model is adopted, 6 to 30 pictures of the same black-and-white chessboard image are shot at different angles, and the characteristic parameters of the camera are then determined through a corner detection algorithm provided by OpenCV software;
If the characteristic parameters obtained by calibrating the cameras differ only slightly, the cameras are not corrected individually and share the same set of correction parameters; if the parameter difference between cameras exceeds 2%, parameter correction is required for each camera before deployment.
3. The tracking type photovoltaic power generation optimization method based on the all-sky image 3D cloud cover reconstruction according to claim 1, wherein in the step (4), the horizontal position of the fisheye camera is corrected by a GPS positioning system arranged on an intelligent weather station; the center point orientation of the fish-eye camera takes the zenith as a pointing direction, and the pointing direction is corrected by a level gauge arranged on the intelligent weather station when the fish-eye camera is deployed in the field; the camera angle is determined by a compass method, and then a sun position recognition algorithm is applied to further correct the camera angle.
4. The tracking photovoltaic power generation optimization method based on the all-sky image 3D cloud layer reconstruction of claim 3, wherein the solar position recognition algorithm is specifically:
The method comprises the steps that a circular recognition module is utilized to recognize the shape of the sun in an on-site all-sky image so as to judge the position of the center of the sun; the color recognition module is used for analyzing the color value of the sun position in the image to confirm the characteristic range of the sun color, and the characteristic range is characterized in that the blue and green channel values are high, and the brightness and saturation are high; finally, the center position of the sun is further accurately confirmed on the image;
And averaging the differences of a plurality of groups of theoretical solar azimuth angles and actual solar azimuth angles identified by the solar position identification algorithm, and correcting the angle of the camera according to the average value.
5. The tracking photovoltaic power generation optimization method based on the 3D cloud layer reconstruction of the all-sky image according to claim 1, wherein in the step (5), the fisheye camera selects 1944×1944 resolution to shoot the all-sky image, and the shooting interval of the image is five minutes.
6. The tracking type photovoltaic power generation optimization method based on the all-sky image 3D cloud layer reconstruction according to claim 1, wherein in the step (7), before the cloud layer height is identified layer by layer, similar pixel dense matching between image pairs is needed, and the specific process is as follows:
firstly, automatically identifying a characteristic point group which has local characteristics and has invariance to rotation, scale scaling and brightness change in two images by adopting a SIFT algorithm; secondly, matching the feature point groups of the respective images found in the previous step by adopting a K-D tree algorithm;
before searching the feature-point group, the feature-point set is constructed into a K-D tree data structure according to the feature-point vector information; when searching for the feature point at the smallest metric distance from a given feature point, the search is carried out on the previously constructed data structure.
7. The tracking type photovoltaic power generation optimization method based on the all-sky image 3D cloud layer reconstruction according to claim 1, wherein in the step (7), the correspondence between the spacing of an all-sky camera pair and the cloud-layer height is: h_i ≈ 10 × d_i, where d_i is the spacing of the i-th all-sky camera pair and h_i is the corresponding cloud-layer height.
8. The tracking photovoltaic power generation optimization method based on the all-sky image 3D cloud layer reconstruction according to claim 1, wherein in the step (10), specific logic for optimizing the angle of the tracking photovoltaic power generation panel is as follows:
cases where the direct solar irradiance is high: the tracking type photovoltaic power generation system sends an instruction to the photovoltaic panel angle controller, so that the photovoltaic panel angle is perpendicular to the sun direction, and direct solar radiation is received to the maximum extent;
high ambient scattering, diffuse reflection irradiance: and the tracking type photovoltaic power generation system determines the optimal angle of the tracking type photovoltaic panel through a machine learning model according to the simulation result, and sends the result to the on-site photovoltaic panel angle controller in a command form.
CN202210625073.4A 2022-06-02 2022-06-02 Tracking type photovoltaic power generation optimization method based on all-sky image 3D cloud layer reconstruction Active CN114972997B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210625073.4A CN114972997B (en) 2022-06-02 2022-06-02 Tracking type photovoltaic power generation optimization method based on all-sky image 3D cloud layer reconstruction

Publications (2)

Publication Number Publication Date
CN114972997A CN114972997A (en) 2022-08-30
CN114972997B true CN114972997B (en) 2024-05-24

Family

ID=82959018


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017155421A1 (en) * 2016-03-07 2017-09-14 Centro De Investigação Em Energia Ren - State Grid, S.A Method and system for forecasting the power output of a group of photovoltaic power plants and managing the integration of said power output into a power grid
CN112801184A (en) * 2021-01-28 2021-05-14 江苏中信博新能源科技股份有限公司 Cloud tracking method, system and device
CN113936031A (en) * 2021-10-15 2022-01-14 威海若维信息科技有限公司 Cloud shadow track prediction method based on machine vision

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11423610B2 (en) * 2019-11-26 2022-08-23 Applied Research Associates, Inc. Large-scale environment-modeling with geometric optimization


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Ultra-short-term photovoltaic power forecasting method based on cloud [...]; Yu Guangzheng et al.; Proceedings of the CSEE; 2021-10-20; Vol. 41 (No. 20); full text *
Research on cloud image acquisition and segmentation for photovoltaic power forecasting; Wu Yingdong; Mu Qingping; Dong Fei; Hou Beiping; Huang Jun; Journal of Zhejiang University of Science and Technology; 2018-10-30 (No. 05); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant