WO2024037123A1 - A full-field refined DNI prediction method - Google Patents

A full-field refined DNI prediction method

Info

Publication number
WO2024037123A1
WO2024037123A1 · PCT/CN2023/098238
Authority
WO
WIPO (PCT)
Prior art keywords
cloud
dni
shadow
point
image
Application number
PCT/CN2023/098238
Other languages
English (en)
French (fr)
Inventor
代增丽
王仁宝
谢宇
宋秀鹏
李涛
韩兆辉
王东祥
江宇
Original Assignee
山东电力建设第三工程有限公司
Application filed by 山东电力建设第三工程有限公司
Publication of WO2024037123A1 publication Critical patent/WO2024037123A1/zh

Classifications

    • G06T 7/00 — Image analysis (G: Physics; G06: Computing, calculating or counting; G06T: Image data processing or generation, in general)
    • G06N 20/00 — Machine learning
    • G06T 7/507 — Depth or shape recovery from shading
    • G06T 7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 2207/10024 — Color image (image acquisition modality)
    • G06T 2207/20081 — Training; learning (special algorithmic details)
    • Y02A 90/10 — Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Definitions

  • The present invention relates to the technical field of tower solar thermal power stations, and specifically to a full-field refined DNI prediction method.
  • A tower solar thermal power generation system uses heliostats that track the sun in real time to reflect sunlight onto the absorber panels of the heat-absorption tower, heating the thermal medium in the absorber to generate electricity.
  • The main component of the reflected sunlight is direct normal irradiance (DNI). Sudden changes in DNI affect the reliability and generation efficiency of a solar thermal power station, and occlusion of the sun by clouds is the largest influencing factor. It is therefore necessary to predict cloud cover and, from it, the DNI variation over the heliostat-field area.
  • The existing technology generally predicts the field-average DNI and then operates all heliostats uniformly before clouds arrive; for example, part of the heliostats across the whole field are uniformly stopped from reflecting sunlight onto the absorber.
  • The invention patent with publication number CN114021442A discloses a DNI prediction method for tower solar thermal power stations designed on this basis; the method comprises five steps: image formatting, image cutting, cloud detection, training a VGG-16 convolutional neural network to identify cloud transmittance, and predicting DNI half an hour ahead.
  • That solution applies this type of neural network to ultra-short-term solar power prediction for the first time, classifies clouds more finely, and uses measured DNI sequences to determine cloud occlusion, effectively avoiding confusion between the solar halo and thin clouds. By predicting DNI changes in advance, it can give guidance on the number of heliostats to keep in operation, preventing a sudden departure of clouds from causing a surge of mirror-field energy that would shock the absorber, thus helping to extend the absorber's service life.
  • The purpose of the present invention is to provide a full-field refined DNI prediction method to solve the problems raised in the background art above.
  • A first object of the present invention is a full-field refined DNI prediction method that uses at least two all-sky imagers to determine the actual position of clouds (as opposed to their image position), then determines the shadow position from the sun angle, and determines cloud thickness from the imaging brightness of the clouds, from which the DNI value is predicted; the method specifically comprises the following steps:
  • S1, cloud identification: accurately identify clouds in the images of the all-sky imagers;
  • S2, cloud image-speed calculation: use the Farneback algorithm to calculate the speed magnitude and direction of each cloud pixel;
  • S3, actual cloud position calculation: taking the coordinate system of one all-sky imager as the reference, determine the actual cloud position from the distance relations between a specified point and the two all-sky imagers;
  • S4, actual cloud/shadow speed calculation: the image speed of a point on the cloud is known from step S2; by identifying the same point on the cloud, step S3 gives the coordinates of that point at two different times, and the shadow speed is shown to equal the cloud speed, yielding the actual cloud/shadow speed;
  • S5, shadow position prediction: predict the shadow position after a period of time from the coordinate changes of shadow points over different time intervals, and thus determine which heliostats will be occluded by the shadow;
  • S6, cloud thickness extraction: use machine learning to fit the collected red-blue ratio, cloud-sun image distance, and solar elevation angle data, obtaining the functional relationship between cloud thickness and these quantities; once fitted, the model can be used to predict cloud thickness;
  • S7, DNI mapping: use machine learning to fit cloud thickness and solar elevation angle against DNI values measured with a radiometer; once fitted, the model can be used to predict DNI;
  • S8, DNI prediction: use the shadow position predicted in step S5, the cloud thickness (or red-blue ratio, cloud-sun image distance, and solar elevation angle) obtained in step S6, and the mapping obtained in step S7 to predict the DNI value at the current shadow position.
  • In S1, the specific method for accurately identifying clouds in the all-sky imager images is:
  • In an all-sky image, blue sky shows a large blue-channel gray value and a small red-channel gray value; thick clouds show similar blue- and red-channel gray values; thin clouds usually lie between the two. Whether a region is thin cloud, thick cloud, or blue sky can therefore be judged from its different behavior in the red and blue channels.
  • A threshold test on the channel ratio is used, with three thresholds set in advance.
  • A red-blue ratio below the first threshold is taken as blue sky; between the first and second thresholds, thin cloud; above the second threshold, thick cloud.
  • A three-channel mean above the third threshold is taken as the sun (before background subtraction; after subtraction this case no longer arises). The three thresholds can be determined statistically from collected sky data, with thick and thin clouds labeled by human calibration.
  • Cloud identification methods include, but are not limited to, channel-ratio thresholding, machine learning, and deep learning, and multiple methods may be combined.
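The channel-ratio threshold test above can be sketched as follows. The threshold values P1, P2, P3 are hypothetical placeholders, since the patent determines them statistically from collected sky data:

```python
import numpy as np

# Minimal sketch of the channel-ratio threshold test.
P1, P2, P3 = 0.70, 0.95, 238  # hypothetical thresholds (patent: fit from sky data)

def classify_pixel(r, g, b):
    """Classify one RGB pixel as 'sun', 'blue_sky', 'thin_cloud' or 'thick_cloud'."""
    if (r + g + b) / 3.0 > P3:            # three-channel mean above third threshold
        return "sun"                      # (only relevant before background subtraction)
    ratio = r / b if b > 0 else np.inf    # red-blue ratio
    if ratio < P1:
        return "blue_sky"
    elif ratio < P2:
        return "thin_cloud"
    return "thick_cloud"
```

Applied pixel-wise to the all-sky image, this yields the blue-sky / thin-cloud / thick-cloud mask; the patent notes the same decision can instead be made by a trained classifier.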
  • In S2, the Farneback algorithm calculates the speed magnitude and direction of each cloud pixel as follows: the image is first converted to gray scale by transforming it into the HSV color space and using the value dimension V = max(R, G, B) as the gray-level information, where R, G, and B are the brightness values of red, green, and blue in the RGB color space.
  • The gray value of a pixel is then regarded as a function of two variables: a local coordinate system is built around the pixel of interest and the function is expanded quadratically as f(x, y) = f(x) = xᵀAx + bᵀx + c, where x is a two-dimensional column vector, A is a 2×2 symmetric matrix, b is a 2×1 matrix, f(x) is equivalent to f(x, y) and represents the gray value of the pixel, and c is the constant term.
  • If the pixel moves by a displacement d, A is unchanged, and the expansions before and after the change are:
  • f₁(x) = xᵀAx + b₁ᵀx + c₁;
  • f₂(x) = xᵀAx + b₂ᵀx + c₂;
  • where b₁ and b₂ are the 2×1 matrices before and after the change, and c₁ and c₂ the corresponding constant terms; this yields the constraint Ad = Δb with Δb = −(b₂ − b₁)/2, and minimizing the objective ‖Ad − Δb‖² gives the displacement d, which divided by the elapsed time is the velocity vector.
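The displacement solve at the core of this step follows from the expansions above: f₂(x) = f₁(x − d) implies b₂ = b₁ − 2Ad, hence Ad = −(b₂ − b₁)/2. A minimal single-window sketch (a production implementation would use a dense estimator such as OpenCV's `calcOpticalFlowFarneback`):

```python
import numpy as np

# Given the local quadratic expansions f1(x) = x^T A x + b1^T x + c1 and
# f2(x) = x^T A x + b2^T x + c2 before/after motion, the displacement d
# satisfies A d = -(b2 - b1)/2.

def displacement(A, b1, b2):
    """Solve A d = -(b2 - b1)/2 for the 2-vector displacement d (least squares)."""
    delta_b = -0.5 * (b2 - b1)
    d, *_ = np.linalg.lstsq(A, delta_b, rcond=None)
    return d

# Toy check: shift the expansion of f1 by a known d and recover it.
A = np.array([[2.0, 0.5], [0.5, 1.0]])    # symmetric 2x2
b1 = np.array([1.0, -1.0])
d_true = np.array([3.0, -2.0])
b2 = b1 - 2.0 * A @ d_true                # b2 = b1 - 2 A d for f2(x) = f1(x - d)
print(displacement(A, b1, b2))            # approximately [3, -2]
```

Dividing the recovered displacement by the frame interval gives the per-pixel image velocity used in the later steps.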
  • In S3, the specific algorithm for calculating the actual cloud position is as follows: both all-sky imagers carry fisheye cameras, named camera 1 and camera 2.
  • Taking the camera-1 coordinate system as reference, the coordinates of camera 2 are (x_cam2, y_cam2, 0); a specified point (x, y, z) in the camera-1 system is then (x − x_cam2, y − y_cam2, z) in the camera-2 system.
  • The point (x, y, z) is projected into camera 1, where u and v are the horizontal and vertical image coordinates of camera 1, f_x and f_y are the focal lengths of the camera in the x and y directions (the same model of all-sky imager is used, so these two parameters are identical for both imagers), and d is the distance between camera 1 and the point (x, y, z).
  • Likewise, the point is projected into camera 2, where u₂ and v₂ are the horizontal and vertical image coordinates of camera 2, f_x and f_y are as above (identical for the two all-sky imagers), and d₂ is the distance between camera 2 and the point (x, y, z).
  • The position is then solved iteratively. The convergence criterion is the difference between the cloud heights z computed at the positions of the two all-sky imagers for the current value of d; when this difference is small enough, the iteration stops. The threshold is set by the required cloud-position accuracy (for example, for a cloud-height error below 10 meters, the threshold can be set to 10 meters). The coordinates computed at convergence are the actual cloud-position coordinates of the corresponding point.
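A minimal sketch of recovering a cloud point from one pixel in each imager. It assumes the unified-sphere fisheye model suggested by the surrounding equations (u = f_x·x/(ξd + z)), with identical f_x, f_y, ξ for both imagers; all numeric parameters are stand-ins. Rather than reproducing the patent's iteration verbatim, the sketch back-projects each pixel to a unit viewing ray and triangulates by the closest-point midpoint; the residual height difference z₁ − z₂ plays the role of the convergence criterion:

```python
import numpy as np

FX = FY = 400.0   # assumed focal lengths (identical for both imagers)
XI = 0.9          # assumed camera-centre to sphere-centre distance

def project(p, cam=(0.0, 0.0)):
    """Pixel coordinates of point p seen from a camera at (cam_x, cam_y, 0)."""
    x, y, z = p[0] - cam[0], p[1] - cam[1], p[2]
    d = np.sqrt(x * x + y * y + z * z)       # camera-to-point distance
    D = XI * d + z
    return FX * x / D, FY * y / D

def backproject_unit(u, v):
    """Unit viewing ray for pixel (u, v) under the unified-sphere model."""
    a, b = u / FX, v / FY
    rho2 = a * a + b * b
    # D/d solves (rho2+1)(D/d)^2 - 2*xi*(D/d) + (xi^2 - 1) = 0; positive root
    qd = (XI + np.sqrt(XI * XI - (rho2 + 1.0) * (XI * XI - 1.0))) / (rho2 + 1.0)
    return np.array([a * qd, b * qd, qd - XI])  # unit length by construction

def cloud_point(px1, px2, cam2):
    """Triangulate the cloud point from one pixel in each imager."""
    q1 = backproject_unit(*px1)
    q2 = backproject_unit(*px2)
    c2 = np.array([cam2[0], cam2[1], 0.0])
    b = q1 @ q2                              # cos of angle between the rays
    d1, e = q1 @ -c2, q2 @ -c2
    denom = 1.0 - b * b                      # rays assumed not parallel
    t = (b * e - d1) / denom                 # distance along camera-1 ray
    s = (e - b * d1) / denom                 # distance along camera-2 ray
    # |(t*q1)[2] - (c2 + s*q2)[2]| is the height mismatch the patent iterates on
    return 0.5 * (t * q1 + c2 + s * q2)      # midpoint of closest approach
```

With more than two imagers, pairs can be triangulated independently and the results averaged, as the description suggests.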
  • In S4, the specific method for calculating the coordinates of the same cloud point at two different times via step S3 is as follows:
  • The image speed of a point on the cloud is known from step S2, so the image position of that point at the next instant can be predicted; the cloud pixels at the corresponding image positions of the two all-sky imagers at the next instant are therefore the same point as at the previous instant.
  • Step S3 then gives the coordinates of this same cloud point at the two times, (x₁, y₁, z₁) and (x₂, y₂, z₂).
  • The cloud height generally does not change, so the three components of the cloud velocity are v_x = (x₂ − x₁)/Δt, v_y = (y₂ − y₁)/Δt, and v_z = 0, where Δt is the time difference between the two instants.
  • At the next instant the cloud point is at (x₂, y₂, z₂), and the corresponding shadow point on the ground is found by projecting it along the sun direction onto the plane z = 0; since z₁ = z₂, the shadow moves at the same speed as the cloud.
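The ground-shadow projection can be sketched as follows. The sign convention (shadow displaced horizontally away from the sun's azimuth θ by z/tan φ) is our assumption, since the patent's explicit expression is not reproduced on this page:

```python
import math

# Project a cloud point to its ground shadow along the sun direction.
# theta: sun azimuth from due north (x = east, y = north); phi: elevation.

def shadow_point(x, y, z, theta, phi):
    """Ground (z = 0) shadow of a cloud point (x, y, z)."""
    r = z / math.tan(phi)                  # horizontal offset length
    return (x - r * math.sin(theta),       # east component
            y - r * math.cos(theta))       # north component
```

Because the offset depends only on the (constant) cloud height z and the sun angles, two cloud positions at successive instants give shadow points with identical displacement, which is the claimed equality of shadow speed and cloud speed.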
  • In S6, the red-blue ratio and the cloud-sun image distance can be obtained from the image data;
  • the solar elevation angle can be computed from the time;
  • the cloud-thickness data can be obtained from satellite cloud images;
  • the fitting may use machine learning methods including, but not limited to, support vector machines, random forests, and artificial neural networks.
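A minimal sketch of the S6 fit on synthetic data. Purely for brevity it uses a linear least-squares model in place of the SVM / random-forest / ANN regressors the patent names; the plumbing (features in, thickness target out, then prediction) is the same for any of them:

```python
import numpy as np

# Features per sample: red-blue ratio, cloud-sun image distance (pixels),
# solar elevation (radians). Target: cloud thickness from satellite imagery.
# All data below are synthetic stand-ins.
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.uniform(0.6, 1.1, n),      # red-blue ratio
    rng.uniform(0, 500, n),        # cloud-sun image distance
    rng.uniform(0.2, 1.4, n),      # solar elevation angle
])
true_w = np.array([900.0, -0.5, 120.0])
thickness = X @ true_w + 50.0 + rng.normal(0, 5.0, n)  # synthetic "satellite" target

A = np.column_stack([X, np.ones(n)])     # add intercept column
w, *_ = np.linalg.lstsq(A, thickness, rcond=None)

def predict_thickness(rb_ratio, dist, elevation):
    """Predict cloud thickness from the fitted model."""
    return np.array([rb_ratio, dist, elevation, 1.0]) @ w
```

The S7 DNI mapping has the identical shape, with (cloud thickness, solar elevation) as features and radiometer DNI as the target, or, if S6 is skipped, the three raw features fitted directly to DNI.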
  • step S6 can be omitted.
  • A second object of the present invention is a prediction-method execution platform device comprising a processor, a memory, and a computer program stored in the memory and run on the processor; when executing the computer program, the processor implements the steps of the above full-field refined DNI prediction method.
  • A third object of the present invention is a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above full-field refined DNI prediction method.
  • This full-field refined DNI prediction method can accurately predict the DNI changes at each specific location in the mirror field.
  • Figure 1 is a flow chart of an exemplary overall method in the present invention
  • Figure 2 is an exemplary overall method flow diagram after omitting the cloud thickness extraction step in the present invention
  • Figure 3 is a structural diagram of an exemplary electronic computer platform device in the present invention.
  • This embodiment provides a full-field refined DNI prediction method that uses at least two all-sky imagers to determine the actual position of clouds (as opposed to their image position), determines the shadow position from the sun angle, and determines cloud thickness from the imaging brightness of the clouds, from which the DNI value is predicted; the specific steps S1-S8 are as set out above.
  • Steps S2, S3, and S6 can be performed simultaneously without conflict; step S4 builds on steps S2 and S3, and step S5 on step S4; step S7 can build on step S6.
  • If step S6 is omitted, step S7 can instead build directly on step S1.
  • In S1, the clouds are identified from their different behavior in the red and blue channels, as described above; the most common and simple approach is threshold segmentation, and the segmentation method differs according to how the red and blue channels are combined.
  • In the channel-ratio threshold test, a red-blue ratio below the first threshold p₁ is taken as blue sky; between p₁ and the second threshold p₂, thin cloud; and above p₂, thick cloud.
  • A three-channel mean above the third threshold (e.g. 238) indicates the sun (before background subtraction; after subtraction this case no longer arises). The three thresholds can be determined statistically from collected sky data, with thick and thin clouds labeled by human calibration.
  • Cloud identification methods include, but are not limited to, channel-ratio thresholding, machine learning, and deep learning, and multiple methods may be combined.
  • In step S4, the coordinates of the same cloud point at two different times are calculated via step S3 as described above, giving (x₁, y₁, z₁) and (x₂, y₂, z₂) and hence the cloud velocity.
  • It is further proven that the shadow speed equals the cloud speed: with the sun's azimuth θ (from due north) and elevation φ (from the horizontal) known, the straight line through a cloud point along the sun direction is intersected with the ground plane z = 0 to give the shadow point.
  • At the next instant the cloud point is at (x₂, y₂, z₂), with a corresponding shadow point on the ground; since z₁ = z₂, the shadow speed equals the cloud speed (the change of the sun angle is neglected for this short-term prediction).
  • In S5, the shadow position after a period of time is predicted from the current shadow position and the cloud/shadow speed, and the heliostats that will fall under the shadow are determined.
  • In step S6, cloud thickness extraction: a rough cloud thickness is already given in step S1, but it is not accurate enough. In fact, the cloud thickness depends not only on the red-blue ratio of step S1 but also on the cloud-sun image distance and the solar elevation angle; these data can therefore be collected and fitted to obtain the functional relationship between cloud thickness and the red-blue ratio, the cloud-sun image distance, and the solar elevation angle.
  • The red-blue ratio and the cloud-sun image distance can be obtained from the image data; the solar elevation angle can be computed from the time; cloud-thickness data can be obtained from satellite cloud images.
  • The fitting may use machine learning methods including, but not limited to, support vector machines, random forests, and artificial neural networks.
  • Step S6 can be omitted, as shown in Figure 2.
  • This embodiment also proposes an alternative 1 to the main solution: the all-sky imagers can be replaced by multiple ordinary pinhole cameras covering the whole sky; pinhole cameras deployed in a staggered layout can capture the same cloud, from which the cloud position can be determined.
  • This embodiment also proposes an alternative 2 to the main solution: the image coordinates of the all-sky imager are converted to pinhole-camera coordinates, where ξ is the distance between the camera center and the sphere center in the projection model; back-projecting and converting to a pinhole camera then yields the pixel coordinates.
  • This embodiment also provides a prediction-method execution platform device, which includes a processor, a memory, and a computer program stored in the memory and run on the processor.
  • The processor includes one or more processing cores and is connected to the memory through a bus; the memory is used to store program instructions, and when the processor executes them, it implements the above full-field refined DNI prediction method.
  • The memory can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • The present invention also provides a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the steps of the above full-field refined DNI prediction method are implemented.
  • the present invention also provides a computer program product containing instructions that, when run on a computer, causes the computer to perform the steps of the above-mentioned full-field refined DNI prediction method.
  • the process of implementing all or part of the steps of the above embodiments can be completed by hardware, or can be completed by instructing the relevant hardware through a program.
  • the program can be stored in a computer-readable storage medium.
  • the storage medium can be read-only memory, magnetic disk or optical disk, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)
  • Photovoltaic Devices (AREA)

Abstract

A full-field refined DNI prediction method. At least two all-sky imagers are used to determine the actual position of clouds, and the shadow position is then determined from the sun angle; the cloud thickness is determined from the imaging brightness of the clouds, from which the DNI value is predicted. The method specifically comprises the following steps: cloud identification, cloud image-speed calculation, actual cloud position calculation, actual cloud/shadow speed calculation, shadow position prediction, cloud thickness extraction, DNI mapping, and DNI prediction. By using at least two all-sky imagers or pinhole cameras for DNI prediction, the method can accurately predict the DNI change at every specific location in the heliostat field and improve power generation efficiency.

Description

A full-field refined DNI prediction method
Technical field
The present invention relates to the technical field of tower solar thermal power stations, and specifically to a full-field refined DNI prediction method.
Background art
A tower solar thermal power generation system uses heliostats that track the sun in real time to reflect sunlight onto the absorber panels of the heat-absorption tower, heating the thermal medium in the absorber to generate electricity. The main component of the reflected sunlight is direct normal irradiance (DNI). Sudden changes in DNI affect the reliability and generation efficiency of a solar thermal power station, and occlusion of the sun by clouds is the largest influencing factor. It is therefore necessary to predict cloud cover and, from it, the DNI variation over the heliostat-field area. The existing technology generally predicts the field-average DNI and then operates all heliostats uniformly before clouds arrive; for example, part of the heliostats across the whole field are uniformly stopped from reflecting sunlight onto the absorber. The invention patent with publication number CN114021442A discloses a DNI prediction method for tower solar thermal power stations designed on this basis; the method comprises five steps: image formatting, image cutting, cloud detection, training a VGG-16 convolutional neural network to identify cloud transmittance, and predicting DNI half an hour ahead. That solution applies this type of neural network to ultra-short-term solar power prediction for the first time, classifies clouds more finely, and uses measured DNI sequences to determine cloud occlusion, effectively avoiding confusion between the solar halo and thin clouds. By predicting DNI changes in advance, it can give guidance on the number of heliostats to keep in operation, preventing a sudden departure of clouds from causing a surge of mirror-field energy that would shock the absorber, thus helping to extend the absorber's service life.
However, in most cases, having the whole field of heliostats reduce the projected sunlight before clouds arrive involves a large number of unnecessary operations and lowers generation efficiency. If the DNI at the location of each individual heliostat could be predicted accurately, heliostats could be operated selectively, while those in unshaded regions could continue reflecting sunlight to generate power with fewer operations. In view of this, we propose a full-field refined DNI prediction method.
Summary of the invention
The purpose of the present invention is to provide a full-field refined DNI prediction method to solve the problems raised in the background art above.
To this end, a first object of the present invention is a full-field refined DNI prediction method that uses at least two all-sky imagers to determine the actual position of clouds (as opposed to their image position), then determines the shadow position from the sun angle, and determines cloud thickness from the imaging brightness of the clouds, from which the DNI value is predicted; the method specifically comprises the following steps:
S1. Cloud identification: accurately identify clouds in the images of the all-sky imagers;
S2. Cloud image-speed calculation: use the Farneback algorithm to calculate the speed magnitude and direction of each cloud pixel;
S3. Actual cloud position calculation: taking the coordinate system of one all-sky imager as the reference, determine the actual cloud position from the distance relations between a specified point and the two all-sky imagers;
S4. Actual cloud/shadow speed calculation: the image speed of a point on the cloud is known from step S2; by identifying the same point on the cloud, step S3 gives the coordinates of that point at two different times, and the shadow speed is shown to equal the cloud speed, yielding the actual cloud/shadow speed;
S5. Shadow position prediction: predict the shadow position after a period of time from the coordinate changes of shadow points over different time intervals, and thus determine which heliostats will be occluded by the shadow;
S6. Cloud thickness extraction: use machine learning to fit the collected red-blue ratio, cloud-sun image distance, and solar elevation angle data, obtaining the functional relationship between cloud thickness and these quantities; once fitted, the model can be used to predict cloud thickness;
S7. DNI mapping: use machine learning to fit cloud thickness and solar elevation angle against DNI values measured with a radiometer; once fitted, the model can be used to predict DNI;
S8. DNI prediction: use the shadow position predicted in step S5, the cloud thickness (or red-blue ratio, cloud-sun image distance, and solar elevation angle) obtained in step S6, and the mapping obtained in step S7 to predict the DNI value at the current shadow position.
As a further improvement of this technical solution, in S1 cloud identification, the specific method for accurately identifying clouds in the all-sky imager images is:
First, in an all-sky image the blue sky shows a large blue-channel gray value and a small red-channel gray value; thick clouds show similar blue- and red-channel gray values; thin clouds usually lie in between. Whether a region is thin cloud, thick cloud, or blue sky can therefore be judged from its different behavior in the red and blue channels.
Second, a threshold test on the channel ratio is used, with three preset thresholds: a red-blue ratio below the first threshold is taken as blue sky; between the first and second thresholds, thin cloud; above the second threshold, thick cloud; and a three-channel mean above the third threshold is taken as the sun (before background subtraction; after subtraction this case no longer arises). The three thresholds can be determined statistically from collected sky data, with thick and thin clouds labeled by human calibration.
Cloud identification methods include, but are not limited to, channel-ratio thresholding, machine learning, and deep learning, and multiple methods may be combined.
In addition, clear-sky background fitting must be considered: background subtraction is used for cloud detection in the sun region, to prevent the area near the sun in the image from being identified as cloud.
As a further improvement of this technical solution, in S2 cloud image-speed calculation, the Farneback algorithm calculates the speed magnitude and direction of each cloud pixel as follows:
First, the image is converted to gray scale: it is linearly transformed into the HSV color space, and the value dimension V of that space is used as the gray-level information, i.e.
V = max(R, G, B);
where R, G, and B are the brightness values of red, green, and blue in the RGB color space.
Then, the gray value of an image pixel is regarded as a function f(x, y) of two variables. A local coordinate system centered on the pixel of interest is constructed and the function is expanded quadratically:
f(x, y) = f(x) = xᵀAx + bᵀx + c;
where x is a two-dimensional column vector, A is a 2×2 symmetric matrix, b is a 2×1 matrix, f(x) is equivalent to f(x, y) and represents the gray value of the pixel, and c is the constant term of the quadratic expansion. If the pixel moves, the whole polynomial changes; let the displacement be d. Since A is unchanged by the displacement, the expansions before and after the change are
f₁(x) = xᵀAx + b₁ᵀx + c₁;
f₂(x) = xᵀAx + b₂ᵀx + c₂;
where b₁ and b₂ are the 2×1 matrices before and after the change, and c₁ and c₂ the corresponding constant terms.
This yields the constraint Ad = Δb, where Δb = −(b₂ − b₁)/2.
Finally, the objective function ‖Ad − Δb‖² is established; minimizing it gives the displacement d, and dividing d by the elapsed time yields the velocity vector.
As a further improvement of this technical solution, in S3 actual cloud position calculation, the specific algorithm is as follows:
Assume both all-sky imagers carry fisheye cameras, named camera 1 and camera 2. Taking the camera-1 coordinate system as reference, the coordinates of camera 2 are (x_cam2, y_cam2, 0); a specified point (x, y, z) in the camera-1 system is then (x − x_cam2, y − y_cam2, z) in the camera-2 system.
The point (x, y, z) is projected into camera 1 as:
u = f_x·x/(ξd + z), v = f_y·y/(ξd + z);
where u and v are the horizontal and vertical image coordinates of camera 1, f_x and f_y are the focal lengths of the camera in the x and y directions (the same model of all-sky imager is used, so these two parameters are identical for both imagers), and d is the distance between camera 1 and the point (x, y, z).
Likewise, the point (x, y, z) is projected into camera 2 as:
u₂ = f_x·(x − x_cam2)/(ξd₂ + z), v₂ = f_y·(y − y_cam2)/(ξd₂ + z);
where u₂ and v₂ are the horizontal and vertical image coordinates of camera 2, f_x and f_y are the focal lengths in the x and y directions (identical for the two all-sky imagers), and d₂ is the distance between camera 2 and the point (x, y, z).
If the distances from the point to the two cameras are much larger than the distance between the cameras, one may take d ≈ d₂, and the corresponding relations follow for each camera.
An iterative solution then proceeds as follows:
Let D = ξd + z and D₂ = ξd₂ + z; taking x = u·D/f_x and y = v·D/f_y (and likewise x − x_cam2 = u₂·D₂/f_x, y − y_cam2 = v₂·D₂/f_y), one obtains:
(D₂ − z)² = ξ²[(x − x_cam2)² + (y − y_cam2)² + z²];
z² − 2zD₂ + D₂² = ξ²(x − x_cam2)² + ξ²(y − y_cam2)² + ξ²z²;
(1 − ξ²)z² − 2zD₂ + D₂² − ξ²(x − x_cam2)² − ξ²(y − y_cam2)² = 0;
If ξ² > 1, z is positive only when the negative sign is taken; if ξ² < 1, the positive sign would give z > D₂, which is clearly wrong, so the negative sign must again be taken. Hence, for ξ² ≠ 1:
z = [D₂ − √(D₂² − (1 − ξ²)(D₂² − ξ²(x − x_cam2)² − ξ²(y − y_cam2)²))] / (1 − ξ²);
If ξ² = 1, then:
−2zD₂ + D₂² − ξ²(x − x_cam2)² − ξ²(y − y_cam2)² = 0;
that is:
z = [D₂² − ξ²(x − x_cam2)² − ξ²(y − y_cam2)²] / (2D₂);
Similarly, a corresponding expression for z is obtained from the camera-1 equations.
Substituting the values of D_iter1, x_iter1, y_iter1, and D_2,iter1 into the above expressions for z and averaging gives z_iter1.
As a further improvement of this technical solution, in S3 actual cloud position calculation, the specific algorithm further includes:
Taking the more general case ξ² ≠ 1 as an example, the subsequent iterates follow from the preceding calculation; in each following iteration the corresponding relations are satisfied.
The convergence criterion is the difference between the cloud heights z computed at the positions of the two all-sky imagers for the current value of d; when this difference is small enough, the iteration stops. The threshold is determined by the required cloud-position accuracy (for example, for a cloud-height error below 10 meters, the threshold can be set to 10 meters). The coordinates computed at convergence are the actual cloud-position coordinates of the corresponding point.
As a further improvement of this technical solution, in S4 actual cloud/shadow speed calculation, the specific method for computing the coordinates of the same cloud point at two different times via step S3 is as follows:
First, the image speed of a point on the cloud is known from step S2, so the image position of that point at the next instant can be predicted; the cloud pixels at the corresponding image positions of the two all-sky imagers at the next instant are therefore the same point as at the previous instant.
Step S3 then gives the coordinates of this same cloud point at the two times, (x₁, y₁, z₁) and (x₂, y₂, z₂). The cloud height generally does not change, so the three components of the cloud velocity are v_x = (x₂ − x₁)/Δt, v_y = (y₂ − y₁)/Δt, and v_z = 0;
where Δt is the time difference between the two instants.
As a further improvement, in S4 it is proven that the shadow speed equals the cloud speed, as follows:
First, the sun angle can be calculated (this is explained in detail in the technical literature and is not covered here). Let the known angle between the sun and due north be θ and the angle with the horizontal be φ. The shadow point on the ground of a cloud point (x₁, y₁, z₁) is then the intersection with the plane z = 0 of the straight line through (x₁, y₁, z₁) making angle θ with due north and angle φ with the horizontal. Taking the positive x-axis as due east and the positive y-axis as due north, the line equation and the resulting shadow-point coordinates follow.
At the next instant the cloud point is at (x₂, y₂, z₂), with a corresponding shadow point on the ground.
Since z₁ = z₂, the cloud's shadow speed equals the cloud speed (because this is a short-term prediction, the change of the sun angle is neglected in this calculation).
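The sun angles θ and φ used above can be computed from time and latitude with standard formulas; the patent defers to the literature, so the sketch below uses the commonly cited Cooper declination approximation and the spherical-triangle relations (an independent, widely used choice, not taken from the patent):

```python
import math

def solar_declination(day_of_year):
    """Approximate solar declination in degrees (Cooper's equation)."""
    return 23.45 * math.sin(math.radians(360.0 / 365.0 * (284 + day_of_year)))

def solar_angles(lat_deg, decl_deg, hour_angle_deg):
    """Return (azimuth from due north in degrees, elevation in degrees)."""
    lat, dec, h = map(math.radians, (lat_deg, decl_deg, hour_angle_deg))
    sin_el = (math.sin(lat) * math.sin(dec)
              + math.cos(lat) * math.cos(dec) * math.cos(h))
    el = math.asin(sin_el)
    # East-north decomposition of the sun direction; azimuth measured from north
    az = math.atan2(-math.cos(dec) * math.sin(h),
                    math.sin(dec) * math.cos(lat)
                    - math.cos(dec) * math.sin(lat) * math.cos(h))
    return math.degrees(az) % 360.0, math.degrees(el)

# Example: equinox (declination ~ 0) at solar noon (hour angle 0), latitude 40 N:
# the sun is due south (azimuth 180) at elevation 90 - 40 = 50 degrees.
```

The hour angle is 15° per hour from solar noon; in an operational system these angles would come from a full solar-position algorithm with time-equation and longitude corrections.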
As a further improvement of this technical solution, in S5 shadow position prediction, the specific algorithm is: given the current coordinates of a shadow point, its position after a time interval Δt₂ is obtained by advancing it at the cloud/shadow speed.
The shadow position after a period of time can thus be predicted, and it can then be anticipated which heliostats under the shadow will be occluded.
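Step S5 can be sketched as follows; the heliostat layout, the footprint radius, and the circular-footprint occlusion test are illustrative simplifications of "determining which heliostats fall under the shadow":

```python
import numpy as np

def predict_shadow(points, velocity, dt2):
    """Shadow points (N, 2) advanced by the cloud/shadow velocity (2,) over dt2."""
    return points + dt2 * np.asarray(velocity)

def occluded_heliostats(heliostats, shadow_pts, radius):
    """Indices of heliostats within `radius` of any predicted shadow point."""
    d = np.linalg.norm(heliostats[:, None, :] - shadow_pts[None, :, :], axis=2)
    return np.where((d <= radius).any(axis=1))[0]

# Synthetic example: one shadow point drifting east at 10 m/s for 60 s.
shadow_now = np.array([[0.0, 0.0]])
shadow_later = predict_shadow(shadow_now, (10.0, 0.0), 60.0)  # -> [[600, 0]]
field = np.array([[595.0, 5.0], [0.0, 0.0], [900.0, 0.0]])    # heliostat positions
hit = occluded_heliostats(field, shadow_later, radius=20.0)   # only the first one
```

Only the flagged heliostats then need to be operated; the rest keep reflecting sunlight, which is the efficiency gain the method claims over field-wide operation.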
As a further improvement of this technical solution, in S6 cloud thickness extraction, the red-blue ratio and the cloud-sun image distance can both be obtained from the image data; the solar elevation angle can be calculated from the time; and the cloud-thickness data can be obtained from satellite cloud images.
The fitting may use machine learning methods including, but not limited to, support vector machines, random forests, and artificial neural networks.
As a further improvement, in S7 DNI mapping, machine learning can instead be used to fit the red-blue ratio, cloud-sun image distance, and solar elevation angle directly to the DNI value; once trained, this model predicts DNI without the cloud thickness, so step S6 can be omitted.
A second object of the present invention is a prediction-method execution platform device comprising a processor, a memory, and a computer program stored in the memory and run on the processor; when executing the computer program, the processor implements the steps of the above full-field refined DNI prediction method.
A third object of the present invention is a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above full-field refined DNI prediction method.
Compared with the prior art, the beneficial effects of the present invention are:
1. In this full-field refined DNI prediction method, to address the problem that reducing the projected sunlight across the whole heliostat field before clouds arrive involves many unnecessary operations and lowers generation efficiency, at least two all-sky imagers or pinhole cameras are used; clouds are accurately identified with the three-channel threshold segmentation method; the Farneback algorithm computes the speed magnitude and direction of each cloud pixel; the actual cloud position is computed from the coordinate systems of the two all-sky imagers; the actual cloud/shadow speed is then computed, the shadow position predicted, and the heliostats that will be occluded determined; cloud thickness is then extracted and the DNI fitted, achieving the final DNI prediction. The overall method is clear and its prediction accuracy is high.
2. The method can accurately predict the DNI change at every specific location in the heliostat field. During operation of a tower solar thermal station, only the heliostats in regions of sharp DNI change need to be operated to avoid damaging the absorber, while the other heliostats keep working normally, improving generation efficiency. This effectively solves the problem of existing methods, in which the predicted DNI is the field average and the whole field of heliostats must be operated, reducing generation efficiency.
Brief description of the drawings
Figure 1 is a block flow diagram of an exemplary overall method of the present invention;
Figure 2 is a block flow diagram of an exemplary overall method of the present invention with the cloud thickness extraction step omitted;
Figure 3 is a structural diagram of an exemplary electronic computer platform device of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person of ordinary skill in the art from these embodiments without creative effort fall within the scope of protection of the present invention.
Embodiment 1
As shown in Figures 1-3, this embodiment provides a full-field refined DNI prediction method that uses at least two all-sky imagers to determine the actual position of clouds (as opposed to their image position), then determines the shadow position from the sun angle, and determines cloud thickness from the imaging brightness of the clouds, from which the DNI value is predicted; the method specifically comprises the following steps:
S1. Cloud identification: accurately identify clouds in the images of the all-sky imagers;
S2. Cloud image-speed calculation: use the Farneback algorithm to calculate the speed magnitude and direction of each cloud pixel;
S3. Actual cloud position calculation: taking the coordinate system of one all-sky imager as the reference, determine the actual cloud position from the distance relations between a specified point and the two all-sky imagers;
S4. Actual cloud/shadow speed calculation: the image speed of a point on the cloud is known from step S2; by identifying the same point on the cloud, step S3 gives the coordinates of that point at two different times, and the shadow speed is shown to equal the cloud speed, yielding the actual cloud/shadow speed;
S5. Shadow position prediction: predict the shadow position after a period of time from the coordinate changes of shadow points over different time intervals, and thus determine which heliostats will be occluded by the shadow;
S6. Cloud thickness extraction: use machine learning to fit the collected red-blue ratio, cloud-sun image distance, and solar elevation angle data, obtaining the functional relationship between cloud thickness and these quantities; once fitted, the model can be used to predict cloud thickness;
S7. DNI mapping: use machine learning to fit cloud thickness and solar elevation angle against DNI values measured with a radiometer; once fitted, the model can be used to predict DNI;
S8. DNI prediction: use the shadow position predicted in step S5, the cloud thickness (or red-blue ratio, cloud-sun image distance, and solar elevation angle) obtained in step S6, and the mapping obtained in step S7 to predict the DNI value at the current shadow position.
It is worth noting that steps S2, S3, and S6 can be performed simultaneously without conflict; step S4 builds on steps S2 and S3, and step S5 on step S4; step S7 can build on step S6, and if step S6 is omitted, step S7 can instead build directly on step S1.
In this embodiment, in S1 cloud identification, the specific method for accurately identifying clouds in the all-sky imager images is:
First, in an all-sky image the blue sky shows a large blue-channel gray value and a small red-channel gray value; thick clouds show similar blue- and red-channel gray values; thin clouds usually lie in between, so thin cloud, thick cloud, and blue sky can be distinguished by their different behavior in the red and blue channels. The most common and simple approach is threshold segmentation, and the segmentation method differs according to how the red and blue channels are combined.
Second, a threshold test on the channel ratio is used, with three preset thresholds: a red-blue ratio below the first threshold p₁ is taken as blue sky; between p₁ and the second threshold p₂, thin cloud; above p₂, thick cloud; and a three-channel mean above the third threshold (e.g. 238) is taken as the sun (before background subtraction; after subtraction this case no longer arises). The three thresholds can be determined statistically from collected sky data, with thick and thin clouds labeled by human calibration.
Cloud identification methods include, but are not limited to, channel-ratio thresholding, machine learning, and deep learning, and multiple methods may be combined.
In addition, clear-sky background fitting must be considered: background subtraction is used for cloud detection in the sun region, to prevent the area near the sun in the image from being identified as cloud. The sun background can be learned from collected clear-sky image data with an artificial neural network; in actual use, the model first generates a clear-sky image, which is then subtracted from the actual image.
The area near the sun in an image is easily identified as cloud, so sun-background subtraction must be performed before cloud identification to improve the subsequent identification accuracy.
In this embodiment, in S2 cloud image-speed calculation, the Farneback algorithm calculates the speed magnitude and direction of each cloud pixel as follows:
First, the image is converted to gray scale: it is linearly transformed into the HSV color space, and the value dimension V of that space is used as the gray-level information, i.e.
V = max(R, G, B);
where R, G, and B are the brightness values of red, green, and blue in the RGB color space.
Then, the gray value of an image pixel is regarded as a function f(x, y) of two variables. A local coordinate system centered on the pixel of interest is constructed and the function is expanded quadratically:
f(x, y) = f(x) = xᵀAx + bᵀx + c;
where x is a two-dimensional column vector, A is a 2×2 symmetric matrix, b is a 2×1 matrix, f(x) is equivalent to f(x, y) and represents the gray value of the pixel, and c is the constant term of the quadratic expansion. If the pixel moves, the whole polynomial changes; let the displacement be d. Since A is unchanged by the displacement, the expansions before and after the change are
f₁(x) = xᵀAx + b₁ᵀx + c₁;
f₂(x) = xᵀAx + b₂ᵀx + c₂;
where b₁ and b₂ are the 2×1 matrices before and after the change, and c₁ and c₂ the corresponding constant terms.
This yields the constraint Ad = Δb, where Δb = −(b₂ − b₁)/2.
Finally, the objective function ‖Ad − Δb‖² is established; minimizing it gives the displacement d, and dividing d by the elapsed time yields the velocity vector.
In this embodiment, in the S3 actual cloud-position calculation, the specific algorithm is as follows:
Assume that both all-sky imagers carry fisheye cameras, named camera 1 and camera 2; taking the camera-1 coordinate system as reference, camera 2 is located at (x_cam2, y_cam2, 0). A given point (x, y, z) in the camera-1 frame is then (x − x_cam2, y − y_cam2, z) in the camera-2 frame;
The point (x, y, z) projects into camera 1 as:
u = f_x·x/(ξd + z), v = f_y·y/(ξd + z);
where u and v are the image abscissa and ordinate of camera 1, f_x and f_y are the focal lengths in the x and y directions (identical for the two all-sky imagers, since they are of the same model), ξ is the fisheye-model parameter (the distance between the camera center and the sphere center), and d is the distance from camera 1 to the point (x, y, z);
Meanwhile, the point (x, y, z) projects into camera 2 as:
u₂ = f_x·(x − x_cam2)/(ξd₂ + z), v₂ = f_y·(y − y_cam2)/(ξd₂ + z);
where u₂ and v₂ are the image abscissa and ordinate of camera 2, f_x and f_y are the focal lengths in the x and y directions (the same for both all-sky imagers), and d₂ is the distance from camera 2 to the point (x, y, z); hence:
u/u₂ = x·(ξd₂ + z) / [(x − x_cam2)·(ξd + z)];
If the distances from the point to the two cameras are much larger than the distance between the cameras, one may take d ≈ d₂, so that:
x = u·x_cam2/(u − u₂);
and likewise:
y = v·y_cam2/(v − v₂);
The solution can then be refined iteratively; the specific procedure is:
Let D = ξd + z and D₂ = ξd₂ + z; take as initial values:
x_iter1 = u·x_cam2/(u − u₂);
y_iter1 = v·y_cam2/(v − v₂);
D_iter1 = f_x·x_iter1/u and D₂_iter1 = f_x·(x_iter1 − x_cam2)/u₂;
From D₂ = ξd₂ + z it follows that:
(D₂ − z)² = ξ²[(x − x_cam2)² + (y − y_cam2)² + z²];
z² − 2zD₂ + D₂² = ξ²(x − x_cam2)² + ξ²(y − y_cam2)² + ξ²z²;
(1 − ξ²)z² − 2zD₂ + D₂² − ξ²(x − x_cam2)² − ξ²(y − y_cam2)² = 0;
If ξ² > 1, z is positive only with the minus sign; if ξ² < 1, the plus sign would give z > D₂, which is clearly wrong, so the minus sign must again be taken. Hence, for ξ² ≠ 1:
z = {D₂ − √[D₂² − (1 − ξ²)·(D₂² − ξ²(x − x_cam2)² − ξ²(y − y_cam2)²)]} / (1 − ξ²);
If ξ² = 1, then:
−2zD₂ + D₂² − ξ²(x − x_cam2)² − ξ²(y − y_cam2)² = 0;
that is:
z = [D₂² − (x − x_cam2)² − (y − y_cam2)²] / (2D₂);
Similarly, the camera-1 equation gives:
z = {D − √[D² − (1 − ξ²)·(D² − ξ²x² − ξ²y²)]} / (1 − ξ²) for ξ² ≠ 1, and z = (D² − x² − y²)/(2D) for ξ² = 1;
Substituting the values of D_iter1, x_iter1, y_iter1 and D₂_iter1 into the above expressions for z and averaging the two results gives z_iter1. Further, taking the more general case ξ² ≠ 1 as an example, the preceding computation also yields:
d_iter1 = √(x_iter1² + y_iter1² + z_iter1²) and d₂_iter1 = √[(x_iter1 − x_cam2)² + (y_iter1 − y_cam2)² + z_iter1²];
In the next iteration step:
D_iter2 = ξ·d_iter1 + z_iter1 and D₂_iter2 = ξ·d₂_iter1 + z_iter1;
That is, the subsequent iterations satisfy:
x_iter(k+1) = u·D_iter(k+1)/f_x;
y_iter(k+1) = v·D_iter(k+1)/f_y;
z_iter(k+1) = the average of the camera-1 and camera-2 expressions for z, evaluated with D_iter(k+1), D₂_iter(k+1), x_iter(k+1) and y_iter(k+1);
d_iter(k+1) = √(x_iter(k+1)² + y_iter(k+1)² + z_iter(k+1)²);
d₂_iter(k+1) = √[(x_iter(k+1) − x_cam2)² + (y_iter(k+1) − y_cam2)² + z_iter(k+1)²];
The convergence criterion is:
|z_cam1 − z_cam2|;
which expresses, for the current value of d, the difference between the cloud heights z computed from the positions of the two all-sky imagers. Iteration stops when this quantity is small enough; the threshold is set by the required cloud-position accuracy (for example, if the cloud-height error must be below 10 meters, the threshold can be set to 10 meters). The coordinates obtained when the iteration converges are the actual cloud-position coordinates of the point.
Moreover, it should be noted that if there are more than two all-sky imagers, any two of them can be used as above, and the results of the several pairings averaged.
Likewise, in practice, using more all-sky imagers (two or more) increases the prediction accuracy but also the cost, so users can choose the number of imagers according to their needs and budget.
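The two-imager position recovery above can be sketched numerically. This is an illustrative sketch under assumed parameters (f_x = f_y = 600, ξ = 0.9, camera-2 offset (200, 50)); it replaces the patent's fixed-point iteration with an equivalent one-dimensional search on the cloud height z along the camera-1 ray, with the bracketing search playing the role of the |z_cam1 − z_cam2| convergence test.

```python
import numpy as np

FX = FY = 600.0   # assumed identical focal lengths of both imagers
XI = 0.9          # assumed unified-sphere model parameter ξ

def project(p, cam=(0.0, 0.0)):
    """Fisheye projection u = fx*x/(ξd+z), v = fy*y/(ξd+z)."""
    x, y, z = p[0] - cam[0], p[1] - cam[1], p[2]
    d = np.sqrt(x * x + y * y + z * z)
    return FX * x / (XI * d + z), FY * y / (XI * d + z)

def back_ray(u, v):
    """Ray direction (x/z, y/z) seen by camera 1 for pixel (u, v)."""
    mx, my = u / FX, v / FY
    r2 = mx * mx + my * my
    eta = (XI + np.sqrt(1.0 + (1.0 - XI ** 2) * r2)) / (1.0 + r2)
    return mx * eta / (eta - XI), my * eta / (eta - XI)

def solve_cloud_point(uv1, uv2, cam2, z_lo=100.0, z_hi=20000.0, tol=1e-6):
    """Scan the height z along the camera-1 ray until the camera-2
    projection matches uv2 (bisection; assumes the bracket [z_lo, z_hi]
    contains a sign change of the u2 residual)."""
    rx, ry = back_ray(*uv1)
    def residual(z):
        u2, _ = project((rx * z, ry * z, z), cam2)
        return u2 - uv2[0]
    lo, hi = z_lo, z_hi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if residual(lo) * residual(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    z = 0.5 * (lo + hi)
    return rx * z, ry * z, z
```

A round trip (project a known cloud point into both cameras, then recover it) checks the geometry.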
In this embodiment, in the S4 cloud/shadow actual-velocity calculation, the specific method of computing via step S3 the coordinates of the same cloud point at two different times is as follows:
First, step S2 gives the image velocity of a point on the cloud, so the image position of that point at the next instant can be predicted; the cloud pixels at the corresponding image positions of the two all-sky imagers at the next instant are therefore the same point as at the previous instant;
Then step S3 yields the coordinates of that same cloud point at the two instants, (x₁, y₁, z₁) and (x₂, y₂, z₂). The cloud height generally does not change, so the three components of the cloud velocity are:
v_x = (x₂ − x₁)/Δt, v_y = (y₂ − y₁)/Δt, v_z = 0;
where Δt is the time difference between the two instants.
Further, the shadow velocity is shown to equal the cloud velocity, as follows:
First, the sun angles can be computed (this is mature prior art, explained in detail in the relevant technical literature and not described here). Let the known angle between the sun and due north be θ, and the angle with the horizontal be φ. The ground shadow of a cloud point (x₁, y₁, z₁) is then the intersection with the plane z = 0 of the line through (x₁, y₁, z₁) that makes angle θ with due north and angle φ with the horizontal. Taking the positive x-axis as due east and the positive y-axis as due north, the line is:
(x − x₁)/(sinθ·cosφ) = (y − y₁)/(cosθ·cosφ) = (z − z₁)/sinφ;
so the shadow point on the ground is:
(x₁ − z₁·cotφ·sinθ, y₁ − z₁·cotφ·cosθ, 0);
At the next instant the cloud point is at (x₂, y₂, z₂), and the corresponding ground shadow point is:
(x₂ − z₂·cotφ·sinθ, y₂ − z₂·cotφ·cosθ, 0);
Since z₁ = z₂, the shadow velocity of the cloud equals the cloud velocity (the sun angles are treated as constant in this calculation, because the prediction is short-term).
In this embodiment, in the S5 shadow-position prediction, the specific algorithm is:
Let the current coordinates of the shadow point be:
(x_s, y_s, 0) = (x₁ − z₁·cotφ·sinθ, y₁ − z₁·cotφ·cosθ, 0);
Then after a period Δt₂ the shadow point is at:
(x_s + v_x·Δt₂, y_s + v_y·Δt₂, 0);
The shadow position after the period can thus be predicted, and in turn which heliostats under the shadow will be shaded.
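The S4/S5 geometry above can be sketched in a few lines. This is an illustrative sketch, not part of the patent; the function and variable names are invented, and angles are taken in radians with x east, y north, θ the sun azimuth from due north and φ the sun elevation.

```python
import numpy as np

def shadow_point(cloud_xyz, theta, phi):
    """Ground shadow (z = 0) of a cloud point: drop z1*cot(phi) of
    horizontal offset in the direction away from the sun azimuth."""
    x, y, z = cloud_xyz
    h = z / np.tan(phi)                      # horizontal offset magnitude
    return x - h * np.sin(theta), y - h * np.cos(theta)

def predict_shadow(cloud_xyz, v_cloud, theta, phi, dt2):
    """Advance the shadow by the cloud velocity: for constant cloud height
    and fixed short-term sun angles, shadow velocity equals cloud velocity."""
    sx, sy = shadow_point(cloud_xyz, theta, phi)
    return sx + v_cloud[0] * dt2, sy + v_cloud[1] * dt2
```

For example, a cloud 1000 m up with the sun due north at 45° elevation casts its shadow 1000 m due south of the cloud, and the shadow then drifts with the cloud velocity.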
In this embodiment, in the S6 cloud-thickness extraction: step S1 already gives a rough cloud thickness, but not an accurate one. In fact, the cloud thickness depends not only on the red-blue ratio of step S1 but also on the cloud-to-sun image distance and the solar elevation angle. These data can therefore be collected and fitted to obtain the functional relation between cloud thickness and the red-blue ratio, the cloud-to-sun image distance and the solar elevation angle;
The red-blue ratio and the cloud-to-sun image distance are both obtained from the image data; the solar elevation angle can be computed from the time; cloud-thickness data can be obtained from satellite cloud imagery;
Meanwhile, the fitting method may be any machine-learning method, including but not limited to support vector machines, random forests and artificial neural networks.
Furthermore, in the S7 DNI mapping, a machine-learning method can also be used to fit the red-blue ratio, the cloud-to-sun image distance and the solar elevation angle directly to the DNI value; once the model is trained it can predict DNI, in which case the cloud thickness need not be predicted and step S6 can be omitted, as shown in Fig. 2.
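The fitting step can be sketched with any regressor. The sketch below is illustrative only: it substitutes ordinary least squares (with an intercept) for the SVM/random-forest/ANN fits named in the text, and the feature ranges and coefficients are synthetic stand-ins for real imager, time and satellite data.

```python
import numpy as np

# Features per sample: red-blue ratio, cloud-to-sun image distance,
# solar elevation angle; target: cloud thickness (from satellite imagery).
rng = np.random.default_rng(0)
X = rng.uniform([0.4, 0.0, 0.1], [1.2, 500.0, 1.5], size=(200, 3))
w_true = np.array([800.0, -0.5, -120.0])          # synthetic coefficients
thickness = X @ w_true + 50.0                     # noiseless synthetic "truth"

A = np.hstack([X, np.ones((200, 1))])             # append an intercept column
w_fit, *_ = np.linalg.lstsq(A, thickness, rcond=None)

def predict_thickness(ratio, dist, elev):
    """Predict cloud thickness from the three fitted features."""
    return np.array([ratio, dist, elev, 1.0]) @ w_fit
```

The same shape of model, with DNI as the target, serves for the S7 mapping (with or without the intermediate cloud-thickness step).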
Embodiment 2
Building on embodiment 1, this embodiment proposes alternative 1 to the main scheme, specifically:
First, the all-sky imager can be replaced by several ordinary pinhole cameras covering the whole sky; staggered pinhole cameras can photograph the same cloud and thus determine its position.
The method for determining the cloud position with two pinhole cameras is as follows:
Consider two pinhole cameras that can photograph the same cloud, with identical shooting angles but different positions; let camera 1 be at (0, 0) and camera 2 at (x_cam2, y_cam2). Then for camera 1:
u = f_x·x/z, v = f_y·y/z;
For camera 2:
u₂ = f_x·(x − x_cam2)/z, v₂ = f_y·(y − y_cam2)/z;
Hence:
u − u₂ = f_x·x_cam2/z;
and likewise:
v − v₂ = f_y·y_cam2/z;
Then one obtains:
z = f_x·x_cam2/(u − u₂) = f_y·y_cam2/(v − v₂);
and therefore:
x = u·z/f_x, y = v·z/f_y;
The other steps are the same as the main scheme of embodiment 1.
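The two-camera pinhole triangulation above can be sketched directly. This is an illustrative sketch with an assumed shared focal length (f_x = f_y = 800) and an invented camera-2 offset; the disparity u − u₂ gives the height z, and x, y follow by back-substitution.

```python
import numpy as np

FX = FY = 800.0   # assumed shared focal lengths of the two pinhole cameras

def pinhole(p, cam=(0.0, 0.0)):
    """u = fx*(x - xc)/z, v = fy*(y - yc)/z for identically oriented cameras."""
    x, y, z = p
    return FX * (x - cam[0]) / z, FY * (y - cam[1]) / z

def cloud_from_two_views(uv1, uv2, cam2):
    """Subtracting the two projections gives u - u2 = fx*xcam2/z, so z
    follows from the disparity; x and y follow from camera 1's equations."""
    (u, v), (u2, v2) = uv1, uv2
    z = FX * cam2[0] / (u - u2)      # equivalently FY*ycam2/(v - v2)
    return u * z / FX, v * z / FY, z
```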
Embodiment 3
Building on embodiment 2, this embodiment proposes alternative 2 to the main scheme, specifically:
The image coordinates of the all-sky imager are converted into pinhole-camera coordinates, after which the solution proceeds as in alternative 1 of embodiment 2. The coordinate conversion is as follows:
Suppose a point in the all-sky imager coordinate system is (x, y, z), with pixel coordinates (u, v); the projection formula is:
u = f_x·x/(ξd + z), v = f_y·y/(ξd + z), where d = √(x² + y² + z²);
where ξ is the distance between the camera center and the sphere center; the back-projection (onto the unit sphere) is then:
(x_s, y_s, z_s) = (η·m_x, η·m_y, η − ξ);
Here:
m_x = u/f_x, m_y = v/f_y, r² = m_x² + m_y², η = [ξ + √(1 + (1 − ξ²)·r²)] / (1 + r²);
Converted to a pinhole camera, the pixel coordinates are:
u′ = f_x·x_s/z_s = f_x·η·m_x/(η − ξ), v′ = f_y·y_s/z_s = f_y·η·m_y/(η − ξ);
The other steps are the same as the main scheme of embodiment 1 and alternative 1 of embodiment 2.
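The fisheye-to-pinhole conversion can be checked with a round trip: project a 3D point through the all-sky model, convert the pixel, and compare against a direct pinhole projection with the same focal lengths. The sketch assumes f_x = f_y = 600 and ξ = 0.8 as placeholder calibration values.

```python
import numpy as np

FX = FY = 600.0
XI = 0.8          # assumed ξ: distance between camera center and sphere center

def fisheye_project(p):
    """All-sky imager projection u = fx*x/(ξd+z), v = fy*y/(ξd+z)."""
    x, y, z = p
    d = np.sqrt(x * x + y * y + z * z)
    return FX * x / (XI * d + z), FY * y / (XI * d + z)

def to_pinhole_pixels(u, v):
    """Back-project the fisheye pixel onto the unit sphere, then reproject
    through a pinhole with the same fx, fy: u' = fx*x/z, v' = fy*y/z."""
    mx, my = u / FX, v / FY
    r2 = mx * mx + my * my
    eta = (XI + np.sqrt(1.0 + (1.0 - XI ** 2) * r2)) / (1.0 + r2)
    xs, ys, zs = eta * mx, eta * my, eta - XI    # point on the unit sphere
    return FX * xs / zs, FY * ys / zs
```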
As shown in Fig. 3, this embodiment also provides a platform apparatus for running the prediction method, comprising a processor, a memory, and a computer program stored in the memory and running on the processor.
The processor contains one or more processing cores and is connected to the memory via a bus; the memory stores program instructions, and the processor implements the above full-field refined DNI prediction method when executing them.
Optionally, the memory may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disc.
In addition, the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above full-field refined DNI prediction method.
Optionally, the present invention further provides a computer program product containing instructions which, when run on a computer, causes the computer to execute the steps of the full-field refined DNI prediction method of the above aspects.
Those of ordinary skill in the art will understand that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, which may be a read-only memory, magnetic disk, optical disc or the like.
The foregoing has shown and described the basic principles, main features and advantages of the present invention. Those skilled in the art should understand that the invention is not limited by the above embodiments, which, together with the description, merely illustrate preferred examples and do not limit the invention; various changes and improvements may be made without departing from its spirit and scope, and all such changes and improvements fall within the claimed scope, which is defined by the appended claims and their equivalents.

Claims (10)

  1. A full-field refined DNI prediction method, characterized in that: at least two all-sky imagers are used to determine the actual position of a cloud, and the shadow position is then determined from the sun angles; the cloud thickness is determined from the imaged brightness of the cloud, and the DNI value is predicted therefrom; the method specifically comprises the following steps:
    S1, cloud recognition: accurately recognizing cloud clusters in the images of the all-sky imagers;
    S2, cloud image-velocity calculation: computing the speed and direction of every cloud pixel with the Farneback algorithm;
    S3, actual cloud-position calculation: taking the coordinate system of one of the all-sky imagers as reference, determining the actual cloud position from the distance relations between a given point and the two all-sky imagers;
    S4, cloud/shadow actual-velocity calculation: the image velocity of a point on the cloud is known from step S2; by identifying the same point on the cloud, the coordinates of that point at two different times are computed as in step S3, and the shadow velocity is shown to equal the cloud velocity, yielding the actual cloud/shadow velocity;
    S5, shadow-position prediction: predicting the shadow position some time ahead from the change in shadow-point coordinates over different time intervals, and thereby determining which heliostats under the shadow will be shaded;
    S6, cloud-thickness extraction: fitting the collected red-blue ratio, cloud-to-sun image distance and solar elevation angle data with a machine-learning method to obtain the functional relation between cloud thickness and these quantities; once the fitted model is obtained, it is used to predict cloud thickness;
    S7, DNI mapping: fitting cloud thickness and solar elevation angle with a machine-learning method against DNI values measured with a pyrheliometer; once the fitted model is obtained, it is used to predict DNI;
    S8, DNI prediction: predicting the DNI value at the current shadow position using the shadow position predicted in step S5, the cloud thickness from step S6 (or the red-blue ratio, cloud-to-sun image distance and solar elevation angle), and the mapping obtained in step S7.
  2. The full-field refined DNI prediction method according to claim 1, characterized in that in the S1 cloud recognition, the specific method for accurately recognizing cloud clusters in the all-sky imager images is:
    first, in an all-sky image, blue sky shows a large blue-channel gray value and a small red-channel gray value, thick cloud shows blue- and red-channel gray values that differ little, and thin cloud usually lies between the two, so whether a region is thin cloud, thick cloud or blue sky can be judged from its different behavior in the red and blue channels;
    second, a channel-ratio threshold method is used: three thresholds are set in advance; a pixel whose red-blue ratio is below the first threshold is classified as blue sky, between the first and second thresholds as thin cloud, and above the second threshold as thick cloud, and a pixel whose three-channel mean exceeds the third threshold is classified as sun; the three thresholds can be determined statistically from collected sky data, with thick and thin cloud labeled manually as ground truth;
    meanwhile, cloud-recognition methods include, but are not limited to, the channel-ratio threshold method, machine-learning methods and deep-learning methods, and several methods can be combined;
    in addition, clear-sky background fitting is considered, and background subtraction is applied for cloud detection in the sun region, to avoid the area around the sun in the image being recognized as cloud.
  3. The full-field refined DNI prediction method according to claim 2, characterized in that in the S2 cloud image-velocity calculation, the Farneback algorithm computes the speed and direction of every cloud pixel as follows:
    first, the image is converted to gray scale: the image is linearly transformed into the HSV color space, and the value dimension V of that space is used as the gray-level information, i.e.:
    V = max(R, G, B);
    where R, G and B are the brightness values of the red, green and blue channels of the RGB color space;
    then the gray value of an image pixel is treated as a function f(x, y) of two variables; a local coordinate system centered on the pixel of interest is built and the function is expanded quadratically:
    f(x, y) = f(x) = xᵀAx + bᵀx + c;
    where x is a two-dimensional column vector, A is a 2×2 symmetric matrix, b is a 2×1 matrix, f(x) is equivalent to f(x, y) and represents the gray value of the pixel, and c is the constant term of the quadratic expansion; if the pixel moves, the whole polynomial changes, with displacement d; A is unchanged by the displacement, so before and after the change:
    f₁(x) = xᵀAx + b₁ᵀx + c₁;
    f₂(x) = xᵀAx + b₂ᵀx + c₂;
    where b₁ and b₂ are the 2×1 matrices and c₁ and c₂ the constant terms before and after the change;
    this yields the constraint Ad = Δb, where Δb = (b₁ − b₂)/2;
    finally, the objective function ‖Ad − Δb‖² is built and minimized to solve for the displacement d; dividing d by the time over which the displacement occurred gives the velocity vector.
  4. The full-field refined DNI prediction method according to claim 3, characterized in that in the S3 actual cloud-position calculation, the specific algorithm is:
    assume that both all-sky imagers carry fisheye cameras, named camera 1 and camera 2; taking the camera-1 coordinate system as reference, camera 2 is located at (x_cam2, y_cam2, 0); a given point (x, y, z) in the camera-1 frame is then (x − x_cam2, y − y_cam2, z) in the camera-2 frame;
    the point (x, y, z) projects into camera 1 as:
    u = f_x·x/(ξd + z), v = f_y·y/(ξd + z);
    where u and v are the image abscissa and ordinate of camera 1, f_x and f_y are the focal lengths in the x and y directions, ξ is the fisheye-model parameter, and d is the distance from camera 1 to the point (x, y, z);
    meanwhile, the point (x, y, z) projects into camera 2 as:
    u₂ = f_x·(x − x_cam2)/(ξd₂ + z), v₂ = f_y·(y − y_cam2)/(ξd₂ + z);
    where u₂ and v₂ are the image abscissa and ordinate of camera 2, f_x and f_y are the focal lengths in the x and y directions, and d₂ is the distance from camera 2 to the point (x, y, z); hence:
    u/u₂ = x·(ξd₂ + z) / [(x − x_cam2)·(ξd + z)];
    if the distances from the point to the two cameras are much larger than the distance between the cameras, one may take d ≈ d₂, so that:
    x = u·x_cam2/(u − u₂);
    and likewise:
    y = v·y_cam2/(v − v₂);
    the solution can then be refined iteratively; the specific procedure is:
    let D = ξd + z and D₂ = ξd₂ + z; take as initial values:
    x_iter1 = u·x_cam2/(u − u₂);
    y_iter1 = v·y_cam2/(v − v₂);
    D_iter1 = f_x·x_iter1/u and D₂_iter1 = f_x·(x_iter1 − x_cam2)/u₂;
    from D₂ = ξd₂ + z it follows that:
    (D₂ − z)² = ξ²[(x − x_cam2)² + (y − y_cam2)² + z²];
    z² − 2zD₂ + D₂² = ξ²(x − x_cam2)² + ξ²(y − y_cam2)² + ξ²z²;
    (1 − ξ²)z² − 2zD₂ + D₂² − ξ²(x − x_cam2)² − ξ²(y − y_cam2)² = 0;
    if ξ² > 1, z is positive only with the minus sign; if ξ² < 1, the plus sign would give z > D₂, which is clearly wrong, so the minus sign must again be taken; hence, for ξ² ≠ 1:
    z = {D₂ − √[D₂² − (1 − ξ²)·(D₂² − ξ²(x − x_cam2)² − ξ²(y − y_cam2)²)]} / (1 − ξ²);
    if ξ² = 1, then:
    −2zD₂ + D₂² − ξ²(x − x_cam2)² − ξ²(y − y_cam2)² = 0;
    that is:
    z = [D₂² − (x − x_cam2)² − (y − y_cam2)²] / (2D₂);
    similarly, the camera-1 equation gives:
    z = {D − √[D² − (1 − ξ²)·(D² − ξ²x² − ξ²y²)]} / (1 − ξ²) for ξ² ≠ 1, and z = (D² − x² − y²)/(2D) for ξ² = 1;
    substituting the values of D_iter1, x_iter1, y_iter1 and D₂_iter1 into the above expressions for z and averaging the two results gives z_iter1.
  5. The full-field refined DNI prediction method according to claim 4, characterized in that the specific algorithm of the S3 actual cloud-position calculation further comprises:
    from the preceding computation one also has:
    d_iter1 = √(x_iter1² + y_iter1² + z_iter1²) and d₂_iter1 = √[(x_iter1 − x_cam2)² + (y_iter1 − y_cam2)² + z_iter1²];
    in the next iteration step:
    D_iter2 = ξ·d_iter1 + z_iter1 and D₂_iter2 = ξ·d₂_iter1 + z_iter1;
    that is, the subsequent iterations satisfy:
    x_iter(k+1) = u·D_iter(k+1)/f_x;
    y_iter(k+1) = v·D_iter(k+1)/f_y;
    z_iter(k+1) = the average of the camera-1 and camera-2 expressions for z, evaluated with D_iter(k+1), D₂_iter(k+1), x_iter(k+1) and y_iter(k+1);
    d_iter(k+1) = √(x_iter(k+1)² + y_iter(k+1)² + z_iter(k+1)²);
    d₂_iter(k+1) = √[(x_iter(k+1) − x_cam2)² + (y_iter(k+1) − y_cam2)² + z_iter(k+1)²];
    the convergence criterion is:
    |z_cam1 − z_cam2|;
    which expresses, for the current value of d, the difference between the cloud heights z computed from the positions of the two all-sky imagers; iteration stops when this quantity is small enough, the threshold being set by the required cloud-position accuracy; the coordinates obtained when the iteration converges are the actual cloud-position coordinates of the point.
  6. The full-field refined DNI prediction method according to claim 5, characterized in that in the S4 cloud/shadow actual-velocity calculation, the specific method of computing via step S3 the coordinates of the same cloud point at two different times is:
    first, step S2 gives the image velocity of a point on the cloud, so the image position of that point at the next instant can be predicted; the cloud pixels at the corresponding image positions of the two all-sky imagers at the next instant are therefore the same point as at the previous instant;
    then step S3 yields the coordinates of that same cloud point at the two instants, (x₁, y₁, z₁) and (x₂, y₂, z₂); the cloud height generally does not change, so the three components of the cloud velocity are:
    v_x = (x₂ − x₁)/Δt, v_y = (y₂ − y₁)/Δt, v_z = 0;
    where Δt is the time difference between the two instants.
  7. The full-field refined DNI prediction method according to claim 6, characterized in that in the S4 cloud/shadow actual-velocity calculation, the shadow velocity is shown to equal the cloud velocity as follows:
    first, the sun angles can be computed; let the known angle between the sun and due north be θ, and the angle with the horizontal be φ; the ground shadow of a cloud point (x₁, y₁, z₁) is then the intersection with the plane z = 0 of the line through (x₁, y₁, z₁) that makes angle θ with due north and angle φ with the horizontal; taking the positive x-axis as due east and the positive y-axis as due north, the line is:
    (x − x₁)/(sinθ·cosφ) = (y − y₁)/(cosθ·cosφ) = (z − z₁)/sinφ;
    so the shadow point on the ground is:
    (x₁ − z₁·cotφ·sinθ, y₁ − z₁·cotφ·cosθ, 0);
    at the next instant the cloud point is at (x₂, y₂, z₂), and the corresponding ground shadow point is:
    (x₂ − z₂·cotφ·sinθ, y₂ − z₂·cotφ·cosθ, 0);
    since z₁ = z₂, the shadow velocity of the cloud equals the cloud velocity.
  8. The full-field refined DNI prediction method according to claim 7, characterized in that in the S5 shadow-position prediction, the specific algorithm is:
    let the current coordinates of the shadow point be:
    (x_s, y_s, 0) = (x₁ − z₁·cotφ·sinθ, y₁ − z₁·cotφ·cosθ, 0);
    then after a period Δt₂ the shadow point is at:
    (x_s + v_x·Δt₂, y_s + v_y·Δt₂, 0);
    the shadow position after the period can thus be predicted, and in turn which heliostats under the shadow will be shaded.
  9. The full-field refined DNI prediction method according to claim 8, characterized in that in the S6 cloud-thickness extraction, the red-blue ratio and the cloud-to-sun image distance are both obtained from the image data, the solar elevation angle can be computed from the time, and the cloud-thickness data can be obtained from satellite cloud imagery;
    meanwhile, the fitting method may be any machine-learning method, including but not limited to support vector machines, random forests and artificial neural networks.
  10. The full-field refined DNI prediction method according to claim 9, characterized in that in the S7 DNI mapping, a machine-learning method can also be used to fit the red-blue ratio, the cloud-to-sun image distance and the solar elevation angle directly to obtain the DNI value; once the model is trained it can be used to predict DNI, in which case the cloud thickness need not be predicted and step S6 can be omitted.
PCT/CN2023/098238 2022-08-15 2023-06-05 Full-field refined DNI prediction method WO2024037123A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210976310.1A CN115423758B (zh) 2022-08-15 2022-08-15 Full-field refined DNI prediction method
CN202210976310.1 2022-08-15

Publications (1)

Publication Number Publication Date
WO2024037123A1 true WO2024037123A1 (zh) 2024-02-22

Family

ID=84198711

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/098238 WO2024037123A1 (zh) 2022-08-15 2023-06-05 一种全场精细化dni预测方法

Country Status (2)

Country Link
CN (1) CN115423758B (zh)
WO (1) WO2024037123A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115423758B (zh) * 2022-08-15 2023-07-11 山东电力建设第三工程有限公司 Full-field refined DNI prediction method

Citations (5)

Publication number Priority date Publication date Assignee Title
US20130258068A1 * 2012-03-30 2013-10-03 General Electric Company Methods and Systems for Predicting Cloud Movement
CN107202982A * 2017-05-22 2017-09-26 徐泽宇 Beacon arrangement and image-processing method based on UAV pose calculation
CN108121990A * 2017-11-27 2018-06-05 中国电力科学研究院有限公司 Solar irradiance prediction method and device based on all-sky imaging equipment
CN114021442A * 2021-10-28 2022-02-08 山东电力建设第三工程有限公司 DNI prediction method for a tower-type concentrated solar power station
CN115423758A * 2022-08-15 2022-12-02 山东电力建设第三工程有限公司 Full-field refined DNI prediction method

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
CN103513295B (zh) * 2013-09-25 2016-01-27 青海中控太阳能发电有限公司 Weather monitoring system and method based on multi-camera real-time capture and image processing
CN106779130B (zh) * 2015-11-20 2021-01-15 中国电力科学研究院 Photovoltaic power station radiation prediction method based on all-sky cloud images
CN105787464B (zh) * 2016-03-18 2019-04-09 南京大学 Viewpoint calibration method for large numbers of images in a three-dimensional scene
CN109461180B (zh) * 2018-09-25 2022-08-30 北京理工大学 Deep-learning-based three-dimensional scene reconstruction method
CN110033447B (zh) * 2019-04-12 2022-11-08 东北大学 Point-cloud-based surface-defect detection method for high-speed-rail heavy rails
CN111156998B (zh) * 2019-12-26 2022-04-15 华南理工大学 Mobile-robot localization method based on fusion of RGB-D camera and IMU information
CN112085751B (zh) * 2020-08-06 2024-03-26 浙江工业大学 Cloud-layer height estimation method based on a cloud-image shadow-matching algorithm
CN112734652B (zh) * 2020-12-22 2023-03-31 同济大学 Binocular-vision-based projection correction method for near-infrared blood-vessel images

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
US20130258068A1 * 2012-03-30 2013-10-03 General Electric Company Methods and Systems for Predicting Cloud Movement
CN107202982A * 2017-05-22 2017-09-26 徐泽宇 Beacon arrangement and image-processing method based on UAV pose calculation
CN108121990A * 2017-11-27 2018-06-05 中国电力科学研究院有限公司 Solar irradiance prediction method and device based on all-sky imaging equipment
CN114021442A * 2021-10-28 2022-02-08 山东电力建设第三工程有限公司 DNI prediction method for a tower-type concentrated solar power station
CN115423758A * 2022-08-15 2022-12-02 山东电力建设第三工程有限公司 Full-field refined DNI prediction method

Also Published As

Publication number Publication date
CN115423758B (zh) 2023-07-11
CN115423758A (zh) 2022-12-02


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23854029

Country of ref document: EP

Kind code of ref document: A1