CN114972997A - Tracking type photovoltaic power generation optimization method based on 3D cloud layer reconstruction of all-sky image - Google Patents

Tracking type photovoltaic power generation optimization method based on 3D cloud layer reconstruction of all-sky image

Info

Publication number
CN114972997A
CN114972997A
Authority
CN
China
Prior art keywords
camera
sky
cloud
image
power generation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210625073.4A
Other languages
Chinese (zh)
Inventor
宋华婷
张岗
邹纪明
邓建华
王刚
谢涛
周黄河
李晓
王平
田浩东
王一飞
陈军
张睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lingyang Technology Hangzhou Co ltd
Zhongmin New Energy Ningxia Yanchi Photoelectric Energy Co ltd
Original Assignee
Lingyang Technology Hangzhou Co ltd
Zhongmin New Energy Ningxia Yanchi Photoelectric Energy Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lingyang Technology Hangzhou Co ltd, Zhongmin New Energy Ningxia Yanchi Photoelectric Energy Co ltd filed Critical Lingyang Technology Hangzhou Co ltd
Priority to CN202210625073.4A priority Critical patent/CN114972997A/en
Publication of CN114972997A publication Critical patent/CN114972997A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • HELECTRICITY
    • H02GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02SGENERATION OF ELECTRIC POWER BY CONVERSION OF INFRARED RADIATION, VISIBLE LIGHT OR ULTRAVIOLET LIGHT, e.g. USING PHOTOVOLTAIC [PV] MODULES
    • H02S50/00Monitoring or testing of PV systems, e.g. load balancing or fault identification
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04SSYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Abstract

The invention discloses a tracking type photovoltaic power generation optimization method based on 3D cloud layer reconstruction from all-sky images, which comprises the following steps: (1) building intelligent weather stations; (2) carrying out laboratory calibration of the fisheye cameras; (3) designing and deploying the intelligent weather station array; (4) calibrating the orientation of the fisheye cameras in the field; (5) collecting all-sky images; (6) identifying and dividing cloud layer regions; (7) identifying cloud heights layer by layer; (8) verifying the cloud height identification results; (9) reconstructing the cloud layers in 3D, layer by layer; (10) optimizing the angle of the tracking photovoltaic panels according to the cloud layer 3D reconstruction result. Based on the deployment and data processing of an array of all-sky camera devices, the invention simultaneously meets the requirements of high precision, high real-time performance and high economy.

Description

Tracking type photovoltaic power generation optimization method based on 3D cloud layer reconstruction of all-sky image
Technical Field
The invention belongs to the technical field of photovoltaic power generation, and particularly relates to a tracking type photovoltaic power generation optimization method based on 3D cloud layer reconstruction of an all-sky image.
Background
Real-time identification, analysis and prediction of cloud conditions are of great significance to many socioeconomic activities. In the tracking photovoltaic power generation industry in particular, accurately and promptly modeling the cloud conditions and the angular relationship between the sun position and the photovoltaic panels can remarkably increase photovoltaic power output and optimize the generation curve.
Traditional cloud identification methods, such as satellite meteorological cloud imagery, ground-based ceilometers and ground-based lidar scanning, all have limitations in practical application and cannot simultaneously meet the requirements on identification precision, processing speed and economy.
Chinese patent publication No. CN111652126A discloses a radiation inversion method based on satellite cloud images, which includes: acquiring satellite cloud image data; processing the satellite cloud images to obtain a cloud index; establishing a clear-sky model and obtaining a radiation attenuation index from the clear-sky model and measured radiation values; establishing a mathematical relationship between the cloud index and the radiation attenuation index; and calculating regional radiation data from that relationship. The method can replace the data of a ground-based radiation station, saving the high cost of building such a station and the labor cost of data quality control, and it reflects the variation of the radiation field over the whole region more intuitively.
However, satellite meteorological cloud imagery is not accurate for small-scale geographic areas. In addition, ground-based ceilometers are costly, and ground-based lidar scanning uses expensive equipment, is complex to deploy, and scans the whole sky angle by angle, so it cannot meet the requirement of real-time identification.
A traditional binocular ground-based all-sky cloud imaging system can meet the real-time and economy requirements, but its identification precision, the most important factor in cloud identification, is often poor. A ground-based binocular 3D imaging system has the following characteristics:
The longer the horizontal ground distance between the two cameras, the better the 3D reconstruction of high-altitude cloud layers. The disadvantages are: 1. low-altitude cloud layers are poorly reconstructed; 2. the farther apart the cameras are, the greater the difference between the two sky images taken at the same moment, so the image matching algorithm used for 3D reconstruction becomes more difficult and slower.
Conversely, the closer together the two cameras are on the ground, the better the 3D reconstruction of low-altitude clouds but the poorer the reconstruction of high-altitude clouds; moreover, because the two all-sky images captured at the same moment are then highly similar, ordinary camera performance leads to reduced 3D reconstruction accuracy.
Therefore, a new 3D cloud reconstruction method is urgently needed to optimize tracking photovoltaic power generation while simultaneously meeting the requirements of high precision, high real-time performance and high economy.
Disclosure of Invention
The invention provides a tracking type photovoltaic power generation optimization method based on all-sky image 3D cloud layer reconstruction, which can simultaneously meet the conditions of high precision, high real-time performance and high economy based on the deployment and data processing of an array type all-sky camera device.
A tracking type photovoltaic power generation optimization method based on 3D cloud layer reconstruction of an all-sky image comprises the following steps:
(1) building intelligent weather stations, each comprising a wind vane, an anemometer, an all-sky camera and an irradiance meter;
(2) using a fisheye camera as the all-sky camera, and carrying out laboratory calibration of the fisheye camera;
(3) according to the arrangement condition of the tracking type photovoltaic power generation equipment, n intelligent weather stations are arranged on the ground in an array mode, and each row of photovoltaic panels is located in the range of wireless communication signals of the intelligent weather stations;
(4) calibrating the horizontal positions, the central point orientations and the camera angles of all the fisheye cameras on the spot;
(5) using a fisheye camera to collect all-sky images, and storing the collected images in a time sequence database in the intelligent weather station for subsequent processing;
(6) identifying the cloud layer range in the image with a convolutional deep neural network model, and removing large blue-sky areas with an image segmentation algorithm;
(7) according to the arrangement of the n intelligent weather stations, forming C(n,2) all-sky camera pairs; ordering the camera pairs by spacing from small to large, and identifying the cloud layer heights layer by layer from low to high from the corresponding all-sky image pairs;
(8) verifying the cloud height identification result, if the error of the cloud height of a certain specific height is higher than a threshold value, returning to the step (4), and adding all-sky camera pairing at the position of the corresponding ground distance; if the errors are all lower than the threshold value, executing the step (9);
(9) 3D reconstructing the cloud layer by utilizing the images shot by the all-sky cameras obtained in the step (7) and the accurate distance between the cameras;
(10) inputting a cloud layer 3D reconstruction result into a tracking type photovoltaic power generation system, and simulating the shielding and light scattering relation between the sun and the cloud layer; and optimizing the angle of the tracking type photovoltaic power generation panel according to the ratio of the direct solar irradiance to the environmental scattering and diffuse reflection irradiance in the simulation result.
Further, in step (2), the laboratory calibration of the fisheye camera uses the Scaramuzza model: 6 to 30 pictures of the same black-and-white checkerboard are taken at different angles, and the characteristic parameters of the camera are then determined with the corner detection algorithm provided by OpenCV;
if the camera characteristic parameters obtained through calibration differ little between cameras, the cameras are not corrected individually and the same set of correction parameters is used; if the parameter difference between cameras exceeds 2%, parameter correction is required before each camera is deployed.
In step (4), the horizontal position of the fisheye camera is corrected with the GPS positioning system equipped on the intelligent weather station; the center point of the fisheye camera faces the zenith, and this orientation is corrected with the spirit level mounted on top of the intelligent weather station when it is deployed in the field; the camera angle is first determined approximately with a compass and then further corrected with a sun position recognition algorithm.
The sun position identification algorithm specifically comprises the following steps:
identifying the shape of the sun in the field all-sky image by using a circular identification module to judge the position of the center of the sun; analyzing the color value of the sun position in the image by using a color identification module, and determining the characteristic range of the sun color, wherein the characteristic range is characterized in that the blue and green channel values are high, and the brightness and the saturation are high; finally, the position of the center of the sun is further accurately confirmed on the image;
and averaging the difference values of the plurality of groups of theoretical solar azimuth angles and actual solar azimuth angles identified by the solar position identification algorithm, and correcting the camera angle according to the average value.
In step (5), the fisheye camera captures the all-sky image at 1944 × 1944 resolution, with an image capture interval of five minutes.
In the step (7), before the cloud layer height is identified layer by layer, the dense matching of similar pixels between image pairs needs to be performed, and the specific process is as follows:
first, the SIFT algorithm is used to automatically identify groups of local feature points in the two images, which are invariant to rotation, scale and brightness changes; second, the feature point groups found in the two images are matched with a K-D tree algorithm;
before the search, a K-D tree data structure is built over the feature point set according to the feature vectors, and the feature point with the closest feature-space distance is then found by querying this pre-built structure.
The corresponding relationship between the spacing of an all-sky camera pair and the cloud layer height is: h_i ≈ 10 × d_i, where d_i is the ground distance of the i-th all-sky camera pair and h_i is the corresponding optimal cloud height.
In the step (9), the process of performing layer-by-layer 3D reconstruction on the cloud layer is as follows:
inputting images shot by paired all-sky cameras and the accurate distance between the cameras, and comparing the same characteristic pixel points between the images through an algorithm to complete 3D parallax modeling;
directly solving the real-world coordinate position of each pixel point in the image through trigonometric relationships, wherein these coordinates reflect the position and height information of the cloud layer; the formula is as follows:
[Formula image, Figure BDA0003676793570000041: transformation from the camera-frame coordinates (x_l, y_l, z_l) to the world coordinates (x_w, y_w, z_w) using R, T, R_l and T_l]
x_w, y_w, z_w respectively represent the real-world coordinates of a pixel point referenced to the overall geographic position; x_l, y_l and z_l respectively represent the real-world coordinates referenced to the position of the first all-sky camera; R and T are built-in parameters of the all-sky camera, obtained through the earlier indoor calibration; R_l and T_l are the orientation and rotation-angle data of the all-sky camera as deployed in the field;
each image pair reconstructs one set of 3D results within its cloud height range; after the 3D reconstruction results of all image pairs are obtained, the positions of the coordinate points are averaged with equal weights to give the final result.
In the step (10), the specific logic for optimizing the angle of the tracking photovoltaic power generation panel is as follows:
case of high direct solar irradiance: the tracking type photovoltaic power generation system sends an instruction to the photovoltaic panel angle controller, so that the angle of the photovoltaic panel is perpendicular to the direction of the sun, and direct solar radiation is received to the maximum extent;
case of high ambient scattered and diffuse-reflected irradiance: the tracking photovoltaic power generation system determines the optimized angle of the tracking photovoltaic panel with a machine learning model according to the simulation results, and sends the result to the on-site photovoltaic panel angle controllers as commands.
Compared with the prior art, the invention has the following beneficial effects:
1. The invention adopts an array-type deployment design; by greatly increasing the amount of image data, ordinary consumer-grade fisheye cameras can meet the precision requirement.
2. The invention uses a convolutional deep neural network model to delimit the cloud layer range, and only the cloud regions delimited by the model are passed to the subsequent algorithms, which significantly improves identification accuracy.
3. When modeling cloud height, the method selects all-sky camera pairs by horizontal distance and models layer by layer, which improves accuracy.
4. The high-precision 3D cloud layer reconstruction result finally obtained by the method not only can provide information such as on-site cloud height, cloud coverage rate and cloud boundary, but also can provide the relation among the photovoltaic power generation panel, the cloud layer and the position of the sun, and provides important guidance information for better optimizing the angle of the tracking type photovoltaic power generation panel.
Drawings
Fig. 1 is a specific flowchart of a tracking-type photovoltaic power generation optimization method based on 3D cloud reconstruction of an all-sky image according to the present invention;
FIG. 2 is a schematic diagram of a laboratory calibration of a fisheye camera in an embodiment of the invention;
FIG. 3 is a schematic diagram of an array arrangement of smart weather stations on the ground according to an embodiment of the present invention;
FIG. 4 is a schematic view of an image for identifying the position of the sun according to an embodiment of the present invention;
FIG. 5 is a schematic diagram illustrating the division of a full sky cloud range in an embodiment of the present invention;
FIG. 6 is a schematic view of an array sky camera arrangement according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating a result of dense pixel matching between sky images according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of 3D disparity modeling according to an embodiment of the present invention;
fig. 9 is a schematic diagram of layer-by-layer cloud layer 3D reconstruction in the embodiment of the present invention.
Detailed Description
The invention will be described in further detail below with reference to the drawings and examples, which are intended to facilitate the understanding of the invention without limiting it in any way.
As shown in fig. 1, a tracking type photovoltaic power generation optimization method based on 3D cloud reconstruction of an all-sky image includes the following steps:
step 1, preparing the intelligent weather station SWM.
The intelligent weather station comprises a wind vane, an anemometer, an all-sky camera, an irradiance meter and the like.
And 2, calibrating parameters of the fisheye camera.
Unlike a traditional pinhole lens, the fisheye camera used by the all-sky camera system has the advantage of a very wide field of view. At the same time, to obtain this ultra-wide field of view, the fisheye image also exhibits significant distortion. In order to restore the cloud layer more accurately in the subsequent steps, the characteristic parameters of the fisheye camera must first be measured very accurately.
Camera manufacturers usually provide characteristic parameters for the lens, but such parameters are often not accurate enough for special cameras such as fisheye cameras, and due to manufacturing variation the parameters of individual cameras may also differ. Therefore, to achieve high-precision 3D reconstruction of the cloud layer, the method requires additional calibration of the fisheye cameras.
This calibration is done in a laboratory using the Scaramuzza model. As shown in fig. 2, 6 to 30 pictures of the same black-and-white checkerboard are taken at different angles, and the camera characteristic parameters are then determined with the corner detection algorithm provided by OpenCV. If the calibrated parameters differ little between cameras, individual correction can be skipped later and the same set of correction parameters used for all cameras; if the parameter difference between cameras is large (more than 2%), each camera must have its parameters corrected before deployment.
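As an illustration of this step, the sketch below runs checkerboard corner detection and a fisheye calibration over a folder of calibration shots. It is a minimal sketch, not the patented procedure: the patent specifies the Scaramuzza model, while OpenCV's built-in cv2.fisheye module (Kannala-Brandt model) is used here only as a readily available stand-in, and the board size and image folder are assumptions.

```python
# Hypothetical sketch of the checkerboard-based fisheye calibration step.
import glob
import cv2
import numpy as np

BOARD = (9, 6)                      # inner corners of the checkerboard (assumed)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-6)

# 3D coordinates of the checkerboard corners in the board plane
objp = np.zeros((1, BOARD[0] * BOARD[1], 3), np.float32)
objp[0, :, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)

obj_points, img_points, img_size = [], [], None
for path in glob.glob("calib_shots/*.png"):      # 6-30 shots at different angles
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if not found:
        continue
    corners = cv2.cornerSubPix(gray, corners, (5, 5), (-1, -1), criteria)
    obj_points.append(objp)
    img_points.append(corners.reshape(1, -1, 2))
    img_size = gray.shape[::-1]

K, D = np.zeros((3, 3)), np.zeros((4, 1))
rms, K, D, _, _ = cv2.fisheye.calibrate(
    obj_points, img_points, img_size, K, D,
    flags=cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC | cv2.fisheye.CALIB_FIX_SKEW,
    criteria=criteria)
print("RMS reprojection error:", rms)
print("K =", K, "\nD =", D.ravel())
```

The printed RMS reprojection error and parameter values give one way to compare how much individual cameras differ before deciding whether a shared correction-parameter set is acceptable.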
And 3, designing the array position of the all-sky camera.
In order to simultaneously meet the identification and 3D reconstruction accuracy requirements for high-, medium- and low-altitude cloud layers, the camera positions within the array are chosen so that all-sky image pairs are available at long, medium and short spacings.
Besides collecting all-sky images, the intelligent weather station hosting the camera device is also responsible for sending optimized-angle instructions to the tracking photovoltaic panels. Given the computing power of current intelligent weather stations, one station can serve about 100 rows of tracking photovoltaic equipment at the same time, and it is placed near the center of the photovoltaic panels it serves to provide better wireless communication signals.
As shown in fig. 3, the final position design of the all-sky camera array will satisfy the following conditions:
1. the stations are arranged overall in a matrix; 2. every row of photovoltaic panels on site is within the wireless communication range of an intelligent weather station; 3. subject to the first two conditions, as few intelligent weather stations as possible are deployed, to save purchase and operating costs (a minimal placement sketch follows below).
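To make conditions 2 and 3 concrete, the sketch below greedily picks station sites on a candidate grid so that every panel row stays within an assumed wireless radius while using as few stations as possible. The radius, the candidate grid, the greedy strategy and the example coordinates are illustrative assumptions, not values from the patent.

```python
# Minimal sketch of the array-placement check described above.
import numpy as np

def plan_stations(row_xy, radius=300.0):
    """Greedy set-cover over a candidate grid; radius in metres (assumed)."""
    rows = np.asarray(row_xy, float)
    xs = np.arange(rows[:, 0].min(), rows[:, 0].max() + radius, radius)
    ys = np.arange(rows[:, 1].min(), rows[:, 1].max() + radius, radius)
    candidates = np.array([(x, y) for x in xs for y in ys])
    uncovered = set(range(len(rows)))
    stations = []
    while uncovered:
        best, best_cov = None, set()
        for c in candidates:                       # candidate covering most rows
            cov = {i for i in uncovered
                   if np.hypot(*(rows[i] - c)) <= radius}
            if len(cov) > len(best_cov):
                best, best_cov = c, cov
        stations.append(tuple(best))
        uncovered -= best_cov
    return stations

# example panel-row coordinates in metres (illustrative)
print(plan_stations([(0, 0), (0, 50), (250, 0), (500, 400)]))
```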
And 4, calibrating the position and the direction of the camera on the spot.
The accuracy of the camera orientation also directly affects the subsequent 3D reconstruction accuracy. Therefore, in the calibration process of the orientation, the operation specification must be strictly observed until the preset standard is reached. The method specifically comprises the following steps:
1) the center point orientation of the camera will be pointed at the zenith point, which will be corrected by the level provided above the smart weather station when deployed in the field.
2) The position of the camera on the horizontal plane will be determined by the GPS positioning system equipped on the smart weather station.
3) The uniform angular orientation of the cameras in the horizontal plane has a significant influence on the subsequent multi-image pixel matching algorithm. Traditional camera angle correction methods include the compass method, landmark image recognition and the like. In the proposed method, the approximate lens angle is first determined with a compass, and a self-developed sun position recognition algorithm is then applied to further correct the camera angle.
A core module of the sun position identification algorithm is mainly divided into two parts, namely a circular identification module and a color identification module.
The circle identification module identifies the shape of the sun in the scene all-sky image to judge the position of the center of the sun. Because the sun in the image is influenced by distortion from a fisheye camera and cloud layer shielding, the optimization process needs to iteratively adjust a polygon judgment threshold value in an algorithm so as to achieve the capability of identifying irregular circles.
The color identification module can determine the characteristic range of the sun color through the color numerical analysis of the sun position in the image, and is characterized in that the blue and green channel numerical values are high, and the brightness and the saturation are high. And finally, the position of the center of the sun is further accurately confirmed on the image.
By averaging the differences between several theoretical solar azimuth angles and the actual solar azimuths identified by the image recognition algorithm, a more accurate actual camera angle can be obtained, as shown in fig. 4. The camera angle can then be adjusted on site or corrected later on the algorithm side. After the initial angle correction is completed, additional corrections are carried out at regular intervals in case something in the field changes the camera angle.
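The following sketch illustrates the two-module idea in hedged form: a Hough circle pass proposes solar-disc candidates, a colour check with assumed thresholds confirms the "high blue/green channel, high brightness" signature described above, and the offsets between detected and theoretical azimuths are averaged to estimate the camera yaw correction (the conversion from a detected pixel to an azimuth is assumed to happen elsewhere).

```python
# Hedged sketch of the sun-position recognition and yaw-offset averaging.
import cv2
import numpy as np

def detect_sun_center(bgr):
    gray = cv2.medianBlur(cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY), 5)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=200,
                               param1=120, param2=30, minRadius=5, maxRadius=120)
    if circles is None:
        return None
    for x, y, r in np.round(circles[0]).astype(int):
        patch = bgr[max(y - r, 0):y + r, max(x - r, 0):x + r]
        if patch.size == 0:
            continue
        b_mean, g_mean, _ = patch.reshape(-1, 3).mean(axis=0)
        # assumed thresholds for the "high blue/green, high brightness" cue
        if b_mean > 200 and g_mean > 200 and patch.max() > 250:
            return int(x), int(y)
    return None

def camera_yaw_offset(detected_azimuths, theoretical_azimuths):
    """Average (detected - theoretical) azimuth over several timestamps;
    wrap-around near 0/360 degrees is ignored for brevity."""
    diffs = [d - t for d, t in zip(detected_azimuths, theoretical_azimuths)
             if d is not None]
    return float(np.mean(diffs)) if diffs else 0.0
```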
And 5, collecting the all-sky image.
All-sky images are acquired by a dedicated fisheye camera with a 185° × 360° field of view, and the results are stored in a time-series database in the on-site intelligent weather station (SWM) for subsequent processing. For conventional cloud recognition tasks, ordinary 540p image quality is basically sufficient, but the cloud layer 3D reconstruction task requires a higher image resolution to meet the accuracy requirement. In this method, the all-sky image is captured at 1944 × 1944 resolution, which satisfies the accuracy requirements of every algorithm in the subsequent tasks. Images are taken every five minutes and stored in PNG format; if necessary, the files are compressed during transmission to reduce traffic over the 5G network.
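A minimal acquisition loop consistent with these settings might look as follows; the device index, output folder and the use of a plain directory instead of the station's time-series database are assumptions.

```python
# Minimal acquisition sketch (device index and storage path are assumptions).
import os
import time
import datetime
import cv2

os.makedirs("allsky", exist_ok=True)
cap = cv2.VideoCapture(0)                        # assumed camera device index
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1944)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1944)

while True:
    ok, frame = cap.read()
    if ok:
        stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
        cv2.imwrite(f"allsky/{stamp}.png", frame)    # lossless PNG, as in the text
    time.sleep(300)                                  # five-minute interval
```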
And 6, identifying and dividing cloud layer areas.
If the sky region and the cloud layer range are not separated before the cloud 3D reconstruction is carried out, problems such as unclear cloud boundaries and sky/cloud misclassification easily occur. Therefore, the invention adds a convolutional deep neural network model to delimit the cloud layer range; only the cloud regions delimited by the model are passed to the subsequent algorithms, which significantly improves identification accuracy.
As shown in fig. 5, when performing 3D reconstruction of the cloud layers, any large blue-sky area in the input image must first be removed by an image segmentation algorithm: the blue sky is a uniformly textured, low-frequency region in computer-graphics terms, so its overly similar features easily introduce false matches during dense multi-image pixel matching.
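The patent performs this segmentation with a convolutional neural network; the sketch below uses the classic red/blue-ratio threshold only as a simple stand-in to show how a large blue-sky region can be masked out before dense matching (the 0.72 threshold and file names are assumptions).

```python
# Simple stand-in for blue-sky removal; the patented method uses a CNN instead.
import cv2
import numpy as np

def sky_mask(bgr, ratio_thresh=0.72):
    """Return a boolean mask that is True where the pixel looks like clear sky."""
    b = bgr[..., 0].astype(np.float32) + 1e-6
    r = bgr[..., 2].astype(np.float32)
    return (r / b) < ratio_thresh        # clear sky: blue strongly dominates red

img = cv2.imread("allsky/sample.png")
cloud_only = img.copy()
cloud_only[sky_mask(img)] = 0            # zero out blue-sky pixels
cv2.imwrite("allsky/sample_cloud_only.png", cloud_only)
```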
And 7, identifying the cloud heights layer by layer.
The determination of cloud height also has a significant impact on the 3D reconstruction of the cloud layer. In general, even a conventional binocular all-sky imaging system can identify clouds accurately in the horizontal XY coordinates, but its error is often large along the Z axis perpendicular to the ground. The technical difficulty this method chiefly addresses is therefore cloud height identification with a ground-based all-sky camera system.
According to industry experience and practice, the ground spacing d of a camera pair and the cloud height h it identifies best have the following relationship:
h=10*d
Cloud layers are generally distributed at heights of roughly 1500 to 7500 meters, so the closest all-sky camera pair deployed in the invention is spaced 150 meters apart, and the most widely spaced pairs are far enough apart that the optimization covers the full range of cloud height situations.
A traditional binocular camera system can acquire only one pair of all-sky images at a time for subsequent 3D modeling. In the invention, the array-type all-sky camera system provides C(n,2) combinations at the same moment. As shown in fig. 6, an array of 16 all-sky cameras provides a total of 120 all-sky image pairs. Compared with conventional imaging systems, the number of available image pairs grows on the order of O(n^2), which significantly alleviates the shortage of image data in the modeling process.
Meanwhile, when cloud height is modeled, accuracy is improved by modeling layer by layer based on the horizontal spacing chosen for the all-sky camera pairs. Because different camera spacings are optimal for modeling different cloud heights, the camera pairs are ordered by spacing from small to large and the corresponding all-sky image pairs are used to identify the cloud layer heights layer by layer from low to high. The corresponding relationship is: h_i ≈ 10 × d_i.
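The pairing logic can be sketched directly from this relationship: form all C(n,2) camera pairs, sort them by ground spacing, and tag each pair with its best-suited cloud height via h ≈ 10·d. The station names and coordinates below are illustrative.

```python
# Sketch of the layer-by-layer pairing logic based on h ≈ 10·d.
from itertools import combinations
import math

def pair_height_bands(stations):
    """stations: dict name -> (x, y) in metres. Returns pairs sorted by spacing."""
    pairs = []
    for (a, pa), (b, pb) in combinations(stations.items(), 2):
        d = math.dist(pa, pb)
        pairs.append({"pair": (a, b), "spacing_m": d, "optimal_height_m": 10 * d})
    return sorted(pairs, key=lambda p: p["spacing_m"])

stations = {"SWM1": (0, 0), "SWM2": (150, 0), "SWM3": (0, 400), "SWM4": (450, 600)}
for p in pair_height_bands(stations):
    print(p)
```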
Cloud height identification must be preceded by dense matching of corresponding pixels between the image pairs. This stage uses the SIFT and K-D tree algorithms, which complete the pixel matching efficiently and accurately. The obtained pixel correspondences, together with the horizontal distance between the paired cameras and trigonometric relationships, then give the height of each matched point.
The matching-point detection algorithm takes as input two all-sky photos of the photovoltaic plant taken at the same moment. As shown in fig. 7, the matching process is divided into two main steps:
firstly, a SIFT algorithm is adopted to automatically identify a characteristic point group with local characteristics in two images, and the characteristic point group has invariance to rotation, scale scaling and brightness change. Meanwhile, the method has better stability in the face of visual angle change, affine transformation and image noise, the information quantity of feature description expression is high, the discrimination is relatively large, and more accurate matching can be realized in a large amount of feature data. Meanwhile, the SIFT algorithm is relatively high in speed and can meet the requirement of real-time level.
Second, the feature point groups found in the two images are matched with a K-D tree algorithm: before the search, a K-D tree is built over the feature point set according to the feature vectors, and the feature point with the closest feature-space distance is then found by querying this pre-built structure, which reduces the complexity of the search.
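A minimal version of this two-step matching with OpenCV is sketched below; the FLANN KD-tree index implements the K-D tree search, and the 0.7 Lowe ratio test and file names are assumptions rather than values from the patent.

```python
# SIFT keypoints matched with a FLANN KD-tree index (sketch).
import cv2

img1 = cv2.imread("allsky/SWM1_120000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("allsky/SWM2_120000.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

FLANN_INDEX_KDTREE = 1
matcher = cv2.FlannBasedMatcher({"algorithm": FLANN_INDEX_KDTREE, "trees": 5},
                                {"checks": 64})
matches = matcher.knnMatch(des1, des2, k=2)

good = []
for pair in matches:
    # Lowe's ratio test (0.7 is an assumed value)
    if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
        good.append(pair[0])

pts = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]
print(f"{len(pts)} matched pixel pairs")
```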
And 8, identifying and verifying the cloud height.
After the cloud height identification step, the result can be verified by other means, such as traditional ceilometer measurement, lidar scanning or drone measurement (these methods achieve higher cloud-height accuracy but have drawbacks in other respects). A traditional binocular all-sky imaging system has an error of about 10% in cloud height measurement; through theoretical and practical verification, the array-type all-sky imaging method achieves an average error level of about 3% for each cloud layer.
If the verification step finds a large error at a specific cloud height, for example for cloud layers above 5 kilometers, additional all-sky camera pairings at the corresponding ground spacing (more than 500 meters in this example) can be added, so the whole array-type all-sky camera installation is designed as an adaptive system that better improves recognition at that specific cloud height.
And 9, reconstructing the cloud layer by layer in a 3D manner.
After the accuracy verification of cloud height identification is passed, the final cloud layer 3D reconstruction work can be started.
Traditional cloud layer 3D restoration algorithms first require the circular, distorted fisheye image to be undistorted before processing. In this embodiment, a self-developed fisheye-image 3D reconstruction algorithm reconstructs the cloud layer directly. To restore the on-site cloud conditions accurately, the algorithm places high demands on input data precision. The required data include: GPS information for each camera in the array-type all-sky camera system, the high-resolution all-sky images shot by the fisheye cameras, the exact capture time of each image (guaranteed by the network time synchronization of the intelligent weather station), and the detailed lens parameters of the fisheye cameras (obtained from the indoor laboratory calibration).
By inputting the images shot by paired all-sky cameras and the exact distance between the cameras, the algorithm compares identical feature pixels between the images to complete the 3D parallax modeling, as shown in fig. 8. With accurate camera position and image information, the real-world coordinate position of every pixel in the image can be solved analytically through trigonometric relationships; in this method these coordinates accurately reflect the position and height of the cloud layer. The specific formula is as follows:
[Formula image, Figure BDA0003676793570000111: transformation from the camera-frame coordinates (x_l, y_l, z_l) to the world coordinates (x_w, y_w, z_w) using R, T, R_l and T_l]
In the formula, x_w, y_w, z_w respectively represent the real-world coordinates of a pixel point referenced to the overall geographic position; x_l, y_l and z_l respectively represent the real-world coordinates referenced to the position of the first all-sky camera; R and T are built-in parameters of the all-sky camera, obtained through the earlier indoor calibration; R_l and T_l are the orientation and rotation-angle data of the all-sky camera as deployed in the field.
Since each image pair can reconstruct a group of 3D results in each cloud height range, after the 3D reconstruction results of all the image pairs are obtained, the average value of each coordinate point position is calculated by using the equal weight method as the final result.
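Because the reconstruction formula itself is reproduced only as an image, the sketch below shows a deliberately simplified planar stand-in for the idea: a matched cloud feature seen at two elevation angles from cameras a known baseline apart yields a height estimate, and the estimates from all image pairs are then combined with equal weights as described above. The angles and baselines are illustrative numbers, not data from the patent.

```python
# Simplified planar triangulation plus equal-weight averaging (sketch).
import math
import statistics

def cloud_height(baseline_m, alpha_a_deg, alpha_b_deg):
    """Two-ray triangulation in a vertical plane; angles must differ."""
    cot_a = 1.0 / math.tan(math.radians(alpha_a_deg))
    cot_b = 1.0 / math.tan(math.radians(alpha_b_deg))
    return baseline_m / abs(cot_a - cot_b)

# one estimate per camera pair for the same cloud feature (illustrative values)
estimates = [
    cloud_height(150, 63.0, 59.0),
    cloud_height(300, 66.0, 57.0),
    cloud_height(450, 68.0, 55.0),
]
print([round(h) for h in estimates])
print("equal-weight mean height:", round(statistics.mean(estimates)), "m")
```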
And step 10, performing result simulation by using a tracking type photovoltaic power generation digital twin project.
By feeding the previously acquired 3D cloud information into a self-developed digital twin system of the tracking photovoltaic plant, the occlusion and light-scattering relationship between the sun and the cloud layer can be simulated through physical modeling. The simulation achieves irradiance values with an error below 5% down to the minimum 5 m level, and outputs the direct, scattered and diffuse-reflected irradiance of the current environment.
And optimizing the angle of the tracking type photovoltaic power generation panel according to the ratio of the direct solar irradiance to the environmental scattering and diffuse reflection irradiance in the simulation result. The specific optimization logic is as follows:
case of high direct solar irradiance: the tracking type photovoltaic power generation optimization system sends an instruction to the photovoltaic panel angle controller, so that the angle of the photovoltaic panel is perpendicular to the direction of the sun, and direct solar radiation is received to the maximum extent.
Case of high ambient scattered and diffuse-reflected irradiance: the intelligent weather station determines the optimized angle of the tracking photovoltaic panel with a machine learning model, based on the simulated values of direct solar irradiance, ambient scattered irradiance and diffuse-reflected irradiance, and sends the result to the on-site photovoltaic panel angle controllers as commands.
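The dispatch logic of these two cases can be sketched as a small decision function; the ratio threshold of 1.0 and the stand-in for the machine learning model are assumptions, not values from the patent.

```python
# Sketch of the panel-angle dispatch logic for the two irradiance cases.
def flat_bias(direct, diffuse, az, el):
    """Toy stand-in for the machine-learning model mentioned in the text."""
    return 180.0, 10.0   # nearly horizontal tilt under an overcast sky (assumed)

def choose_panel_command(direct_wm2, diffuse_wm2, sun_azimuth, sun_elevation,
                         diffuse_model=flat_bias):
    if diffuse_wm2 == 0 or direct_wm2 / diffuse_wm2 > 1.0:   # assumed threshold
        # direct irradiance dominates: face the sun squarely
        return {"mode": "track_sun",
                "azimuth": sun_azimuth, "tilt": 90 - sun_elevation}
    # scattered/diffuse irradiance dominates: defer to the learned model
    azimuth, tilt = diffuse_model(direct_wm2, diffuse_wm2, sun_azimuth, sun_elevation)
    return {"mode": "diffuse_optimal", "azimuth": azimuth, "tilt": tilt}

print(choose_panel_command(120.0, 380.0, 145.0, 38.0))   # overcast case
print(choose_panel_command(820.0, 110.0, 145.0, 38.0))   # clear-sky case
```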
The high-precision 3D cloud layer reconstruction result finally obtained by the method not only can provide information such as on-site cloud height, cloud coverage rate and cloud boundary, but also can provide the relation among the photovoltaic power generation panel, the cloud layer and the position of the sun, and provides important guidance information for better optimizing the angle of the tracking type photovoltaic power generation panel.
In practical application, when the information obtained by the array-type all-sky camera system is used to guide the downstream model, an additional power gain of at least 2% is obtained under most weather conditions compared with a traditional tracking algorithm, and the generation power curve can be optimized according to grid-connection requirements.
The embodiments described above are intended to illustrate the technical solutions and advantages of the present invention, and it should be understood that the above-mentioned embodiments are only specific embodiments of the present invention, and are not intended to limit the present invention, and any modifications, additions and equivalents made within the scope of the principles of the present invention should be included in the scope of the present invention.

Claims (9)

1. A tracking type photovoltaic power generation optimization method based on 3D cloud layer reconstruction of an all-sky image is characterized by comprising the following steps:
(1) building intelligent weather stations, each comprising a wind vane, an anemometer, an all-sky camera and an irradiance meter;
(2) using a fisheye camera as the all-sky camera, and carrying out laboratory calibration of the fisheye camera;
(3) according to the arrangement condition of the tracking type photovoltaic power generation equipment, n intelligent weather stations are arranged on the ground in an array mode, and each row of photovoltaic panels is located in the range of wireless communication signals of the intelligent weather stations;
(4) calibrating the horizontal positions, the central point orientations and the camera angles of all the fisheye cameras on the spot;
(5) using a fisheye camera to collect all-sky images, and storing the collected images in a time sequence database in the intelligent weather station for subsequent processing;
(6) identifying the cloud layer range in the image with a convolutional deep neural network model, and removing large blue-sky areas with an image segmentation algorithm;
(7) according to the arrangement of the n intelligent weather stations, forming C(n,2) all-sky camera pairs; ordering the camera pairs by spacing from small to large, and identifying the cloud layer heights layer by layer from low to high from the corresponding all-sky image pairs;
(8) verifying the cloud height identification result, if the error of the cloud height of a certain specific height is higher than a threshold value, returning to the step (4), and adding all-sky camera pairing at the position of the corresponding ground distance; if the errors are all lower than the threshold value, executing the step (9);
(9) 3D reconstructing the cloud layer by utilizing the images shot by the all-sky cameras obtained in the step (7) and the accurate distance between the cameras;
(10) inputting a cloud layer 3D reconstruction result into a tracking type photovoltaic power generation system, and simulating the shielding and light scattering relation between the sun and the cloud layer; and optimizing the angle of the tracking type photovoltaic power generation panel according to the ratio of the direct solar irradiance to the environmental scattering and diffuse reflection irradiance in the simulation result.
2. The tracking type photovoltaic power generation optimization method based on 3D cloud layer reconstruction of the all-sky image according to claim 1, wherein in step (2), the laboratory calibration of the fisheye camera uses the Scaramuzza model: 6 to 30 pictures of the same black-and-white checkerboard are taken at different angles, and the characteristic parameters of the camera are then determined with the corner detection algorithm provided by OpenCV;
if the camera characteristic parameters obtained through calibration differ little between cameras, the cameras are not corrected individually and the same set of correction parameters is used; if the parameter difference between cameras exceeds 2%, parameter correction is required before each camera is deployed.
3. The method for optimizing tracking photovoltaic power generation based on 3D cloud layer reconstruction of the all-sky image according to claim 1, wherein in step (4), the horizontal position of the fisheye camera is corrected with the GPS positioning system equipped on the intelligent weather station; the center point of the fisheye camera faces the zenith, and this orientation is corrected with the spirit level mounted on top of the intelligent weather station when it is deployed in the field; the camera angle is first determined approximately with a compass and then further corrected with a sun position recognition algorithm.
4. The method of claim 3, wherein the sun position identification algorithm is specifically:
identifying the shape of the sun in the field all-sky image by using a circular identification module to judge the position of the center of the sun; analyzing the color value of the sun position in the image by using a color identification module, and determining the characteristic range of the sun color, wherein the characteristic range is characterized in that the blue and green channel values are high, and the brightness and the saturation are high; finally, the position of the center of the sun is further accurately confirmed on the image;
and averaging the difference values of the plurality of groups of theoretical solar azimuth angles and actual solar azimuth angles identified by the solar position identification algorithm, and correcting the camera angle according to the average value.
5. The method of claim 1, wherein in the step (5), the fisheye camera takes 1944 × 1944 resolution to capture the all-sky image, and the image capture interval is five minutes.
6. The tracking photovoltaic power generation optimization method based on full-sky image 3D cloud layer reconstruction as claimed in claim 1, wherein in step (7), before the cloud layer height is identified layer by layer, dense matching of similar pixels between image pairs is required, and the specific process is as follows:
firstly, the SIFT algorithm is used to automatically identify groups of local feature points in the two images, which are invariant to rotation, scale and brightness changes; secondly, the feature point groups found in the two images are matched with a K-D tree algorithm;
before the search, a K-D tree data structure is built over the feature point set according to the feature vectors, and the feature point with the closest feature-space distance is then found by querying this pre-built structure.
7. The method for optimizing tracking-type photovoltaic power generation based on 3D cloud layer reconstruction of an all-sky image according to claim 1, wherein in step (7), the corresponding relationship between the spacing of an all-sky camera pair and the cloud layer height is: h_i ≈ 10 × d_i, where d_i is the ground distance of the i-th all-sky camera pair and h_i is the corresponding optimal cloud height.
8. The method for optimizing tracking-type photovoltaic power generation based on full-sky image 3D cloud layer reconstruction as claimed in claim 1, wherein in step (9), the process of performing layer-by-layer 3D cloud layer reconstruction is as follows:
inputting images shot by paired all-sky cameras and the accurate distance between the cameras, and comparing the same characteristic pixel points between the images through an algorithm to complete 3D parallax modeling;
directly solving the real-world coordinate position of each pixel point in the image through trigonometric relationships, wherein these coordinates reflect the position and height information of the cloud layer; the formula is as follows:
[Formula image, Figure FDA0003676793560000031: transformation from the camera-frame coordinates (x_l, y_l, z_l) to the world coordinates (x_w, y_w, z_w) using R, T, R_l and T_l]
in the formula, x_w, y_w, z_w respectively represent the real-world coordinates of a pixel point referenced to the overall geographic position; x_l, y_l and z_l respectively represent the real-world coordinates referenced to the position of the first all-sky camera; R and T are built-in parameters of the all-sky camera, obtained through the earlier indoor calibration; R_l and T_l are the orientation and rotation-angle data of the all-sky camera as deployed in the field;
and reconstructing a group of 3D results of each image pair in each cloud height range, and calculating the average value of each coordinate point position by adopting an equal weight method after obtaining the 3D reconstruction results of all the image pairs as a final result.
9. The method for optimizing tracking photovoltaic power generation based on 3D cloud layer reconstruction of all-sky image according to claim 1, wherein in the step (10), the specific logic for optimizing the angle of the tracking photovoltaic power generation panel is as follows:
case of high direct solar irradiance: the tracking type photovoltaic power generation system sends an instruction to the photovoltaic panel angle controller, so that the angle of the photovoltaic panel is perpendicular to the direction of the sun, and direct solar radiation is received to the maximum extent;
case of high ambient scattered and diffuse-reflected irradiance: the tracking photovoltaic power generation system determines the optimized angle of the tracking photovoltaic panel with a machine learning model according to the simulation results, and sends the result to the on-site photovoltaic panel angle controllers as commands.
CN202210625073.4A 2022-06-02 2022-06-02 Tracking type photovoltaic power generation optimization method based on 3D cloud layer reconstruction of all-sky image Pending CN114972997A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210625073.4A CN114972997A (en) 2022-06-02 2022-06-02 Tracking type photovoltaic power generation optimization method based on 3D cloud layer reconstruction of all-sky image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210625073.4A CN114972997A (en) 2022-06-02 2022-06-02 Tracking type photovoltaic power generation optimization method based on 3D cloud layer reconstruction of all-sky image

Publications (1)

Publication Number Publication Date
CN114972997A true CN114972997A (en) 2022-08-30

Family

ID=82959018

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210625073.4A Pending CN114972997A (en) 2022-06-02 2022-06-02 Tracking type photovoltaic power generation optimization method based on 3D cloud layer reconstruction of all-sky image

Country Status (1)

Country Link
CN (1) CN114972997A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017155421A1 (en) * 2016-03-07 2017-09-14 Centro De Investigação Em Energia Ren - State Grid, S.A Method and system for forecasting the power output of a group of photovoltaic power plants and managing the integration of said power output into a power grid
CN112801184A (en) * 2021-01-28 2021-05-14 江苏中信博新能源科技股份有限公司 Cloud tracking method, system and device
US20210158609A1 (en) * 2019-11-26 2021-05-27 Applied Research Associates, Inc. Large-scale environment-modeling with geometric optimization
CN113936031A (en) * 2021-10-15 2022-01-14 威海若维信息科技有限公司 Cloud shadow track prediction method based on machine vision

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017155421A1 (en) * 2016-03-07 2017-09-14 Centro De Investigação Em Energia Ren - State Grid, S.A Method and system for forecasting the power output of a group of photovoltaic power plants and managing the integration of said power output into a power grid
US20210158609A1 (en) * 2019-11-26 2021-05-27 Applied Research Associates, Inc. Large-scale environment-modeling with geometric optimization
CN112801184A (en) * 2021-01-28 2021-05-14 江苏中信博新能源科技股份有限公司 Cloud tracking method, system and device
CN113936031A (en) * 2021-10-15 2022-01-14 威海若维信息科技有限公司 Cloud shadow track prediction method based on machine vision

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
余光正 (YU Guangzheng) et al.: "基于云 超短期光伏功率预测方法" [Ultra-short-term photovoltaic power prediction method based on cloud ...], 中国电机工程学报 (Proceedings of the CSEE), vol. 41, no. 20, 20 October 2021 *
吴颖东 (WU Yingdong); 穆清萍 (MU Qingping); 董霏 (DONG Fei); 侯北平 (HOU Beiping); 黄俊 (HUANG Jun): "面向光伏发电功率预报的云层图像采集与分割研究" [Research on cloud image acquisition and segmentation for photovoltaic power forecasting], 浙江科技学院学报 (Journal of Zhejiang University of Science and Technology), no. 05, 30 October 2018 *

Similar Documents

Publication Publication Date Title
CN110570466B (en) Method and device for generating three-dimensional live-action point cloud model
CN111629193B (en) Live-action three-dimensional reconstruction method and system
CN105160702B (en) The stereopsis dense Stereo Matching method and system aided in based on LiDAR point cloud
CN114758252B (en) Image-based distributed photovoltaic roof resource segmentation and extraction method and system
CN105243637A (en) Panorama image stitching method based on three-dimensional laser point cloud
CN115937288A (en) Three-dimensional scene model construction method for transformer substation
CN112947526B (en) Unmanned aerial vehicle autonomous landing method and system
CN111723464A (en) Typhoon elliptic wind field parametric simulation method based on remote sensing image characteristics
CN107767454A (en) A kind of three-dimensional mobile fast modeling method of outdoor scene, apparatus and system
CN113971768A (en) Unmanned aerial vehicle-based three-dimensional dynamic detection method for power transmission line illegal building
CN113298947A (en) Multi-source data fusion-based three-dimensional modeling method medium and system for transformer substation
CN111247564A (en) Method for constructing digital earth surface model, processing equipment and system
CN114463521B (en) Building target point cloud rapid generation method for air-ground image data fusion
CN114998545A (en) Three-dimensional modeling shadow recognition system based on deep learning
CN115359130A (en) Radar and camera combined calibration method and device, electronic equipment and storage medium
CN112529498B (en) Warehouse logistics management method and system
CN117115359B (en) Multi-view power grid three-dimensional space data reconstruction method based on depth map fusion
CN113936031A (en) Cloud shadow track prediction method based on machine vision
CN116740288B (en) Three-dimensional reconstruction method integrating laser radar and oblique photography
CN117392237A (en) Robust laser radar-camera self-calibration method
CN116824079A (en) Three-dimensional entity model construction method and device based on full-information photogrammetry
CN116704112A (en) 3D scanning system for object reconstruction
CN114972997A (en) Tracking type photovoltaic power generation optimization method based on 3D cloud layer reconstruction of all-sky image
CN114782357A (en) Self-adaptive segmentation system and method for transformer substation scene
CN114332364A (en) Three-dimensional cloud scene modeling and visualization method based on foundation cloud picture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination