WO2023236886A1 - Cloud occlusion prediction method based on the dense optical flow method - Google Patents

Cloud occlusion prediction method based on the dense optical flow method

Info

Publication number
WO2023236886A1
Authority
WO
WIPO (PCT)
Prior art keywords
cloud
pixel
optical flow
dense optical
image
Prior art date
Application number
PCT/CN2023/098235
Other languages
English (en)
French (fr)
Inventor
代增丽
王仁宝
谢宇
宋秀鹏
李涛
韩兆辉
王东祥
江宇
Original Assignee
山东电力建设第三工程有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 山东电力建设第三工程有限公司 filed Critical 山东电力建设第三工程有限公司
Publication of WO2023236886A1 publication Critical patent/WO2023236886A1/zh

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/269Analysis of motion using gradient-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/292Multi-camera tracking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Definitions

  • the invention relates to the technical field of solar thermal power generation, and in particular to a cloud occlusion prediction method based on the dense optical flow method.
  • Blockage of the sun by clouds affects the stability of a solar thermal power generation system. Because the light collected by the heliostat field changes greatly before and after cloud occlusion, the temperature of the heat absorber also changes greatly, which may damage the absorber and cause production accidents. Predicting the arrival of clouds in advance, and reducing the light reflected onto the absorber in advance, is therefore an effective means of avoiding absorber damage. Meteorological data such as weather forecasts can only predict whether it will be cloudy over a longer period; they cannot determine when clouds will block the sun and interrupt light collection. Occlusion timing is generally predicted through image processing. Existing methods either treat clouds as shape-invariant objects for prediction, or extract cloud feature points for tracking.
  • patent CN111583298A discloses a short-term cloud image tracking method based on the optical flow method, which uses the Lucas-Kanade optical flow method
  • the Lucas-Kanade optical flow method is a sparse optical flow method that first extracts feature points and then tracks those feature points
  • cloud image tracking based on this optical flow method has the following two problems:
  • the present invention provides a cloud occlusion prediction method based on the dense optical flow method.
  • the cloud occlusion time can be better predicted, tracking accuracy can be improved, and newly appearing cloud groups can be tracked continuously.
  • a cloud occlusion prediction method based on the dense optical flow method, including the following steps:
  • Step 1: collect real-time sky video through a video capture device and convert it into a picture sequence;
  • Step 2: preprocess the collected picture sequence to eliminate irrelevant image information and retain only the sky area;
  • Step 3: perform cloud identification and judgment on the obtained sky area;
  • Step 4: perform a dense optical flow calculation on the picture sequence obtained in step 1 to obtain the velocity and direction of each pixel;
  • Step 5: determine the cloud movement area based on the obtained pixel speeds;
  • Step 6: remove abnormal points based on the velocity and direction of the pixels to correct the cloud movement velocity;
  • Step 7: predict the start and end times of cloud occlusion based on the determined cloud movement area and the corrected cloud movement speed.
  • the video capture device includes multiple pinhole cameras.
  • the video collection device is an all-sky imager or an ordinary fisheye camera.
  • the prediction method also includes a coordinate transformation step between steps five and six.
  • the cloud recognition and judgment method includes a threshold judgment method of channel ratio, a machine learning method or a deep learning method.
  • the dense optical flow method is the Farneback algorithm.
  • the Farneback algorithm is used to calculate the speed and direction of each pixel as follows:
  • R, G, and B respectively represent the brightness values of red, green, and blue in the RGB color space
  • x is a two-dimensional column vector
  • A is a 2×2 symmetric matrix
  • b is a 2×1 matrix
  • f(x) is equivalent to f(x,y), and represents the gray value of the pixel
  • b 1 and b 2 respectively represent the 2×1 matrices before and after the change
  • c 1 and c 2 respectively represent the constant terms before and after the change
  • step five is as follows:
  • step six is as follows:
  • step seven is as follows:
  • the departure time t 2 of the cloud pixel is expressed as:
  • the minimum value is the time when the cloud front reaches the sun area
  • the maximum value of the cloud pixel departure time is the predicted time when all clouds leave the sun area.
  • the coordinate transformation method is as follows:
  • d is the distance from point (x, y, z) to the origin of the camera coordinate system, (c x , c y ) is the coordinate of the image center;
  • the cloud occlusion prediction method based on the dense optical flow method provided by the present invention has the following beneficial effects:
  • the present invention can overcome the technical difficulties of continuous tracking and feature point updating as the shape of the cloud continuously changes, can better predict the cloud occlusion time, improves tracking accuracy, and can track new clouds continuously.
  • the present invention uses the dense optical flow method combined with frame extraction for motion detection and speed calculation. It does not need to input feature points and can directly give the speed vector of each pixel of the moving object.
  • the present invention removes abnormal points based on the speed magnitude and direction of pixels to correct the cloud movement speed. It can eliminate abnormal data caused by the movement of other objects, false detections caused by noise in the image itself, false detections caused by light changes, and the like, improving prediction accuracy.
  • the method for calculating the start and end times of cloud occlusion in the present invention does not need to separately calculate reachability and arrival time for each point in the sun area, and can calculate the start and end times at the same time, which greatly reduces the amount of calculation.
  • Figure 1 is a schematic diagram of a cloud occlusion prediction method based on the dense optical flow method disclosed in Embodiment 2 of the present invention
  • Figure 2 is a schematic diagram of speed correction using the sliding window method
  • Figure 3 is a schematic diagram of the predicted time for clouds to reach the sun and the end time.
  • the present invention provides a cloud occlusion prediction method based on the dense optical flow method, as shown in Figure 1.
  • the specific implementation examples are as follows:
  • Step 1 Collect real-time sky video through an all-sky imager or ordinary fisheye camera and convert it into a picture sequence.
  • the image uses RGB color space, that is, red, green, and blue.
  • Step 2 Preprocess the collected image sequence to eliminate irrelevant image information and only retain the sky area.
  • first, use the camera that actually collects cloud images to capture one picture, mark the distant ground scenery and buildings irrelevant to the sky, and generate a mask matrix. Then, when preprocessing the actually collected pictures, the mask matrix is used to remove pixels irrelevant to the sky and convert them into pixels that will not be recognized as clouds, such as clear sky.
  • Step 3 Perform cloud identification and judgment on the obtained sky area.
  • the solar background can be learned through clear sky image data collection and combined with artificial neural network methods.
  • the clear sky image is first generated through the model, and then subtracted from the actual image.
  • the blue sky shows a larger gray value in the blue channel and a smaller gray value in the red channel; thick clouds show similar gray values in the blue and red channels; thin clouds tend to lie between the two. Therefore, whether a pixel is thin cloud, thick cloud or blue sky can be judged from its different behavior in the red and blue channels.
  • the most common and simple approach is the threshold segmentation method, and the segmentation rule differs according to how the red and blue channels are combined.
  • when the red-to-blue ratio is less than p 1 , the pixel is considered blue sky; when it is greater than p 1 and less than p 2 , thin cloud; when it is greater than p 2 , thick cloud; and when the average value of the three channels is greater than 238, the sun (before background subtraction; after subtraction this case is not considered).
  • the three thresholds can be determined statistically by collecting sky data, and the determination of thick clouds and thin clouds is subject to human calibration.
  • Cloud recognition and judgment methods include threshold judgment methods of channel ratios, machine learning methods or deep learning methods, which can be combined with each other. In addition, clear sky background fitting needs to be considered, and background subtraction can be used for cloud detection in solar areas.
  • Step 4 Perform dense optical flow calculation on the image sequence obtained in Step 1 to obtain the velocity and direction of each pixel.
  • Optical flow method is an effective method to detect motion.
  • Using the sparse optical flow method for cloud detection has problems such as the disappearance of feature points due to cloud changes and the difficulty in updating feature points when new clouds appear.
  • the dense optical flow method does not require the input of feature points, and can directly give the velocity vector of each pixel of the moving object.
  • the Farneback algorithm, which is generally recognized as having the best effect, is used here. Based on the same principle, other dense optical flow methods can also be used.
  • Dense optical flow methods have two main disadvantages. First, the amount of calculation is large, and real-time performance is lost if the flow is computed on every frame. Second, although the method gives the velocity vector of every pixel, illumination changes and similar factors cause the velocity vectors to fluctuate and produce noise vectors of near-zero magnitude. The first point is addressed by the frame extraction of step 1: since the pixel speed of clouds in the sky is relatively small, there is no need to perform the dense optical flow calculation on every frame. For the second point, see the movement speed correction in step 6.
  • the Farneback algorithm is used to calculate the velocity and direction of each pixel as follows:
  • R, G, and B respectively represent the brightness values of red, green, and blue in the RGB color space
  • x is a two-dimensional column vector
  • A is a 2×2 symmetric matrix
  • b is a 2×1 matrix
  • f(x) is equivalent to f(x,y), and represents the gray value of the pixel
  • b 1 and b 2 respectively represent the 2×1 matrices before and after the change
  • c 1 and c 2 respectively represent the constant terms before and after the change
  • Step 5 Determine the cloud movement area based on the obtained pixel speed.
  • the confidence that a pixel whose red-to-blue ratio equals the threshold p 1 is cloud is 0.5; a pixel whose red-to-blue ratio is greater than the threshold p 2 is cloud with confidence 1; pixels with a ratio below the threshold p 1 are considered blue sky, and no confidence comparison or calculation is needed.
  • the confidence of points between p 1 and p 2 can be obtained by interpolation; for example, linear interpolation uses the following formula: confidence = 0.5 + 0.5·(r/b − p 1 )/(p 2 − p 1 )
  • let the mean pixel speed be v̄; points with speed greater than q·v̄ are considered moving cloud with confidence 1, points with speed v thre1 have confidence 0; the parameter q is less than 1, its actual value determined from statistics of measured data.
  • the confidence of points with speed between v thre1 and q·v̄ can be calculated by interpolation, for example the linear interpolation: confidence = (v − v thre1 )/(q·v̄ − v thre1 )
  • when the motion detection confidence is high, or the two confidences are close, the overlapping region of cloud detection and motion detection (optical flow method) is taken as the cloud movement area; when the cloud detection confidence is high, the cloud area identified by the cloud detection method is regarded as correct throughout, and the cloud movement is corrected (see step 7). In this way, both pixels falsely detected as clouds and other moving objects falsely detected as clouds can be removed.
  • Step six coordinate transformation.
  • the all-sky imager uses a fisheye camera, and its pixel coordinates are distorted, which is inconvenient for subsequent processing. They need to be converted into ordinary pinhole camera coordinates.
  • ξ is the distance between the camera center and the center of the sphere
  • d is the distance from the point (x, y, z) to the origin of the camera coordinate system
  • (c x , c y ) is the coordinate of the image center
  • f x and f y are respectively the focal lengths of the camera coordinate system in the x and y directions;
  • the back projection is:
  • Step 7 Remove abnormal points based on the velocity and direction of the pixels to correct the cloud movement velocity.
  • the sliding window algorithm is shown in Figure 2.
  • the specific method of velocity correction within the sliding window is as follows:
  • the image here refers to a two-channel pseudo image composed of velocity vectors, the size of which is the same as the image detected by the optical flow method.
  • the first channel represents the x component of the velocity
  • the second channel represents the y component of the velocity. From these two components the velocity magnitude and direction can be determined.
  • the speed magnitudes inside the sliding window that lie within the two thresholds are averaged to obtain v̄, and speeds within the range [(1−m)·v̄, (1+m)·v̄] are retained; m is an empirical parameter tuned so that the detected motion area matches the actual moving cloud area, generally 0.1. The speed magnitude of the pixels in the sliding window other than the retained pixels is set to 0.
  • Step 8 Predict the start and end times of cloud occlusion based on the determined cloud movement area and the corrected cloud movement speed.
  • the coordinates of the center of the sun in the image in the sky are (x 0 ,y 0 ) and the radius is r 0 ; the point on the edge of the solar disk is expressed as (x 0 +r 0 cos ⁇ ,y 0 +r 0 sin ⁇ ) ,0 ⁇ 2 ⁇ ; the coordinates of a specific cloud pixel are (x 1 , y 1 ), and the velocity vector is (u 1 , v 1 ).
  • the departure time t 2 of the cloud pixel is expressed as:
  • the minimum value is the time when the cloud front reaches the sun area
  • the maximum value of the cloud pixel departure time is the predicted time when all clouds leave the sun area.
  • the gray area in the figure represents moving clouds
  • the black area represents clear sky
  • the circle represents the solar area.
  • the front end of the cloud group predicted to reach the solar area is specially marked with light gray.
  • the color depth of the gray area represents the vector direction (excluding the specially marked cloud front). It can be seen from the figure that the direction of cloud movement is basically the same.
  • This embodiment uses multiple pinhole cameras to collect images, and each camera is responsible for an area of the sky.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a cloud occlusion prediction method based on the dense optical flow method, comprising the following steps: collecting real-time sky video with a video capture device and converting it into an image sequence; preprocessing the collected image sequence to eliminate irrelevant image information and retain only the sky region; performing cloud recognition and judgment on the obtained sky region; performing a dense optical flow calculation on the obtained image sequence to obtain the speed and direction of each pixel; determining the cloud movement region from the obtained pixel speeds; removing abnormal points based on the speed and direction of the pixels to correct the cloud movement speed; and predicting the start and end times of cloud occlusion from the determined cloud movement region and the corrected cloud movement speed. By tracking the motion of the cloud as a whole, the disclosed method predicts the cloud occlusion time better, improves tracking accuracy, and tracks newly appearing clouds without interruption.

Description

Cloud occlusion prediction method based on the dense optical flow method
Technical Field
The present invention relates to the technical field of solar thermal power generation, and in particular to a cloud occlusion prediction method based on the dense optical flow method.
Background Art
Occlusion of the sun by clouds affects the stability of a solar thermal power generation system. Because the light collected by the heliostat field changes greatly before and after cloud occlusion, the temperature of the heat absorber also changes greatly, which may damage the absorber and cause production accidents. Predicting the arrival of clouds in advance and reducing the light reflected onto the absorber in advance is therefore an effective means of avoiding absorber damage. Meteorological data such as weather forecasts can only predict whether it will be cloudy over a longer period; they cannot determine when a cloud will block the sun and prevent light collection. Occlusion timing is generally predicted by image processing. Existing methods either treat the cloud as a shape-invariant object for prediction, or extract cloud feature points for tracking.
For example, patent CN111583298A, a short-term cloud image tracking method based on the optical flow method, uses the Lucas-Kanade optical flow method for short-term cloud image tracking. The Lucas-Kanade method is a sparse optical flow method that first extracts feature points and then tracks them. Cloud image tracking based on this optical flow method has the following two problems:
1. During cloud motion the feature points change, and newly appearing clouds introduce new feature points. How and when to update the feature points is a difficult problem, for which that patent gives no solution.
2. Light changes in the sun region generate feature points, and that patent gives no solution to this problem either.
In fact the shape of a cloud changes continuously, and its feature points change with it, so both the centroid method and the feature-point method have certain defects. It is therefore necessary to design a new technical solution to solve the above problems.
Summary of the Invention
To solve the above technical problems, the present invention provides a cloud occlusion prediction method based on the dense optical flow method. By tracking the motion of the cloud as a whole, it can better predict the cloud occlusion time, improve tracking accuracy, and track newly appearing clouds without interruption.
To achieve the above purpose, the technical solution of the present invention is as follows:
A cloud occlusion prediction method based on the dense optical flow method comprises the following steps:
Step 1: collect real-time sky video with a video capture device and convert it into an image sequence;
Step 2: preprocess the collected image sequence to eliminate irrelevant image information and retain only the sky region;
Step 3: perform cloud recognition and judgment on the obtained sky region;
Step 4: perform a dense optical flow calculation on the image sequence obtained in step 1 to obtain the speed and direction of each pixel;
Step 5: determine the cloud movement region from the obtained pixel speeds;
Step 6: remove abnormal points based on the speed and direction of the pixels to correct the cloud movement speed;
Step 7: predict the start and end times of cloud occlusion from the determined cloud movement region and the corrected cloud movement speed.
In the above solution, the video capture device comprises multiple pinhole cameras.
In another technical solution, the video capture device is an all-sky imager or an ordinary fisheye camera, and the prediction method further includes a coordinate transformation step between step 5 and step 6.
In a further technical solution, in step 3 the cloud recognition and judgment method includes a channel-ratio threshold method, a machine learning method or a deep learning method.
In a further technical solution, in step 4 the dense optical flow method is the Farneback algorithm.
In a still further technical solution, the Farneback algorithm calculates the speed and direction of each pixel as follows:
First, the image is converted to grayscale: the image is linearly transformed into the HSV color space, and the value dimension V of that color space is used as the grayscale information:
V = max(R, G, B)
where R, G and B respectively represent the brightness values of red, green and blue in the RGB color space.
Then, the gray value of an image pixel is regarded as a function f(x, y) of two variables. A local coordinate system is built centered on the pixel of interest, and the function is expanded quadratically as:
f(x, y) = f(x) = xᵀAx + bᵀx + c
where x is a two-dimensional column vector, A is a 2×2 symmetric matrix, b is a 2×1 matrix, f(x) is equivalent to f(x, y) and represents the gray value of the pixel, and c represents the constant term of the quadratic expansion. If the pixel moves, the whole polynomial changes, with displacement d; A is unchanged before and after the displacement, so the functions before and after the change are respectively
f1(x) = xᵀAx + b1ᵀx + c1
f2(x) = xᵀAx + b2ᵀx + c2
where b1 and b2 respectively represent the 2×1 matrices before and after the change, and c1 and c2 respectively represent the constant terms before and after the change.
This yields the constraint:
Ad = Δb
where Δb = −(1/2)(b2 − b1).
Finally, the objective function is established:
‖Ad − Δb‖²
Minimizing the objective function solves for the displacement d; dividing d by the time over which the displacement occurred gives the velocity vector.
In a further technical solution, the specific method of step 5 is as follows:
(1) determine the effective speed threshold;
(2) remove cloud motion noise data;
(3) calculate the cloud detection confidence;
(4) calculate the motion confidence;
(5) determine the cloud movement region from the cloud detection and motion detection confidences.
In a further technical solution, the specific method of step 6 is as follows:
(1) determine the speed magnitude range of effective cloud motion;
(2) set speed magnitudes outside the range to zero, then take the average direction of speeds that are similar in magnitude but different in direction;
(3) slide a window over the pseudo-image generated from the velocity vectors of step 4, from left to right and top to bottom, correcting the speeds inside the window in turn.
In a further technical solution, the specific method of step 7 is as follows:
Let the coordinates of the center of the sun in the sky image be (x0, y0) and its radius r0; a point on the edge of the solar disk is expressed as (x0 + r0·cos θ, y0 + r0·sin θ), 0 ≤ θ < 2π; a particular cloud pixel has coordinates (x1, y1) and velocity vector (u1, v1).
First, with Δx = x0 − x1 and Δy = y0 − y1, judge whether (v1·Δx − u1·Δy)² ≤ r0²·(u1² + v1²) is satisfied; if not, the point cannot reach the sun region and no further calculation is needed.
Then, calculate the time t1 at which the cloud pixel reaches the sun, expressed as:
t1 = [(u1·Δx + v1·Δy) − √(r0²·(u1² + v1²) − (v1·Δx − u1·Δy)²)] / (u1² + v1²)
If t1 < 0, the cloud pixel is moving away from the sun and is discarded.
The departure time t2 of the cloud pixel is expressed as:
t2 = [(u1·Δx + v1·Δy) + √(r0²·(u1² + v1²) − (v1·Δx − u1·Δy)²)] / (u1² + v1²)
Finally, after the arrival time of every cloud pixel is determined, the minimum among them is the time at which the cloud front reaches the sun region, and the maximum of the departure times is the predicted time at which all clouds leave the sun region.
In a still further technical solution, the coordinate transformation method is as follows:
First, let a point in the camera coordinate system be (x, y, z) and its pixel coordinates be (u, v); the distance ξ between the camera center and the sphere center, and the focal lengths fx and fy of the camera coordinate system in the x and y directions, are calculated from known coordinate points using:
u = fx·x/(z + ξ·d) + cx,  v = fy·y/(z + ξ·d) + cy
where d = √(x² + y² + z²) is the distance from the point (x, y, z) to the origin of the camera coordinate system, and (cx, cy) are the coordinates of the image center.
Then, for the actually collected images, the actual coordinates of each pixel are calculated through the above two equations, and the image corresponding to a pinhole camera is calculated with:
u′ = fx·x/z + cx,  v′ = fy·y/z + cy
Through the above technical solutions, the cloud occlusion prediction method based on the dense optical flow method provided by the present invention has the following beneficial effects:
1. By tracking the motion of the cloud as a whole, the invention overcomes the technical difficulties of continuous tracking and feature point updating under constantly changing cloud shapes, better predicts the cloud occlusion time, improves tracking accuracy, and tracks newly appearing clouds without interruption.
2. The invention uses the dense optical flow method combined with frame extraction for motion detection and speed calculation; it needs no input feature points and directly gives the velocity vector of every pixel of a moving object.
3. The invention removes abnormal points based on the speed and direction of pixels to correct the cloud movement speed, eliminating abnormal data caused by the movement of other objects, false detections caused by image noise, false detections caused by light changes, and the like, improving prediction accuracy.
4. The method for calculating the start and end times of cloud occlusion does not need to separately compute reachability and arrival time for every point of the sun region, and computes the start and end times simultaneously, greatly reducing the amount of calculation.
Brief Description of the Drawings
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below.
Figure 1 is a schematic diagram of the cloud occlusion prediction method based on the dense optical flow method disclosed in Embodiment 2 of the present invention;
Figure 2 is a schematic diagram of speed correction using the sliding window method;
Figure 3 is a schematic diagram of the predicted time for clouds to reach the sun and the end time.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings.
The present invention provides a cloud occlusion prediction method based on the dense optical flow method, as shown in Figure 1. Specific embodiments are as follows:
Embodiment 1
Step 1: collect real-time sky video with an all-sky imager or an ordinary fisheye camera and convert it into an image sequence.
Video is collected with an all-sky imager or an ordinary fisheye camera, and every frame is extracted from the video. Keeping only one frame per given time interval (for example, 1 second) reduces the amount of calculation; the interval is chosen according to the performance of the computing equipment and the required prediction accuracy, generally between 1 and 60 seconds. The images use the RGB color space, that is, red, green and blue.
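As an illustration of this frame extraction, a minimal OpenCV-based sketch follows; the video path, the interval and the 25 fps fallback are illustrative assumptions, not values from the patent.

```python
import cv2

def extract_frames(video_path, interval_s=1.0):
    # Keep one frame per interval_s seconds of sky video.
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if FPS metadata is missing
    step = max(1, round(fps * interval_s))   # frames skipped between kept images
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(frame)             # BGR image, OpenCV convention
        idx += 1
    cap.release()
    return frames
```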
Step 2: preprocess the collected image sequence to eliminate irrelevant image information and retain only the sky region.
First, capture one picture with the camera that actually collects the cloud images, mark in it the distant ground scenery and buildings irrelevant to the sky, and generate a mask matrix. Then, when preprocessing the actually collected pictures, the mask matrix is used to remove the pixels irrelevant to the sky and convert them into pixels that will not be recognized as cloud, such as clear sky.
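A minimal sketch of this mask-based preprocessing, assuming a hand-labeled binary mask stored as sky_mask.png; the file name and the clear-sky fill color are illustrative choices, not specified by the patent.

```python
import cv2
import numpy as np

# One-time setup: non-sky pixels (ground scenery, buildings) are marked by
# hand in a reference image, giving a binary mask (255 = sky, 0 = irrelevant).
mask = cv2.imread("sky_mask.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name

def apply_sky_mask(img_bgr, mask):
    # Replace non-sky pixels with a clear-sky blue so they are never
    # classified as cloud by the red-to-blue ratio test.
    clear_sky_bgr = np.array([200, 120, 40], dtype=np.uint8)  # strong blue, weak red
    out = img_bgr.copy()
    out[mask == 0] = clear_sky_bgr
    return out
```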
Step 3: perform cloud recognition and judgment on the obtained sky region.
The area around the sun in the image is easily misidentified as cloud, so the solar background must first be subtracted. The solar background can be learned from collected clear-sky image data combined with an artificial neural network method; in actual use, the model first generates a clear-sky image, which is then subtracted from the actual image.
In an all-sky image, blue sky shows a larger gray value in the blue channel and a smaller gray value in the red channel; thick cloud shows similar gray values in the blue and red channels; thin cloud tends to lie between the two. Whether a pixel is thin cloud, thick cloud or blue sky can therefore be judged from its different behavior in the red and blue channels. The most common and simple approach is threshold segmentation, and the segmentation rule differs according to how the red and blue channels are combined. When the red-to-blue ratio is less than p1, the pixel is considered blue sky; when it is greater than p1 and less than p2, thin cloud; when it is greater than p2, thick cloud; and when the mean of the three channels is greater than 238, the sun (before background subtraction; after subtraction this case is not considered). The three thresholds can be determined statistically from collected sky data, and the labeling of thick and thin cloud is based on manual calibration. Cloud recognition and judgment methods include channel-ratio thresholding, machine learning and deep learning methods, which can be combined with one another. Clear-sky background fitting also needs to be considered, and background subtraction can be used for cloud detection in the sun region.
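A minimal sketch of the channel-ratio threshold rule described above; the values of p1 and p2 are placeholders, to be fitted statistically from collected sky data as the text prescribes.

```python
import numpy as np

def classify_sky(img_bgr, p1=0.65, p2=0.80):
    # Label pixels 0 = blue sky, 1 = thin cloud, 2 = thick cloud, 3 = sun,
    # from the red-to-blue ratio; p1 and p2 here are placeholder values.
    b, g, r = [c.astype(np.float32) for c in np.moveaxis(img_bgr, -1, 0)]
    rb = r / (b + 1e-6)                       # red-to-blue ratio
    labels = np.zeros(rb.shape, dtype=np.uint8)
    labels[rb >= p1] = 1                      # thin cloud
    labels[rb >= p2] = 2                      # thick cloud
    labels[(r + g + b) / 3.0 > 238] = 3       # sun (before background subtraction)
    return labels
```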
Step 4: perform a dense optical flow calculation on the image sequence obtained in step 1 to obtain the speed and direction of each pixel.
The optical flow method is an effective way to detect motion. Using sparse optical flow for cloud detection suffers from feature points disappearing as clouds change and from the difficulty of updating feature points when new clouds appear. The dense optical flow method needs no input feature points and directly gives the velocity vector of every pixel of a moving object. The Farneback algorithm, generally recognized as the most effective, is used here; other dense optical flow methods based on the same principle can also be used.
Dense optical flow has two main drawbacks. First, the amount of calculation is large: computing the flow on every frame would destroy real-time performance. Second, although it gives a velocity vector for every pixel, illumination changes and similar factors cause the velocity vectors to fluctuate and produce noise vectors of near-zero magnitude. The first point is addressed by the frame extraction of step 1: because the pixel speed of clouds in the sky is relatively small, there is no need to run the dense optical flow calculation on every frame. The second point is addressed by the motion speed correction (step 7 of this embodiment).
The Farneback algorithm calculates the speed and direction of each pixel as follows:
First, the image is converted to grayscale: the image is linearly transformed into the HSV color space, and the value dimension V of that color space is used as the grayscale information:
V = max(R, G, B)
where R, G and B respectively represent the brightness values of red, green and blue in the RGB color space.
Then, the gray value of an image pixel is regarded as a function f(x, y) of two variables. A local coordinate system is built centered on the pixel of interest, and the function is expanded quadratically as:
f(x, y) = f(x) = xᵀAx + bᵀx + c
where x is a two-dimensional column vector, A is a 2×2 symmetric matrix, b is a 2×1 matrix, f(x) is equivalent to f(x, y) and represents the gray value of the pixel, and c represents the constant term of the quadratic expansion. If the pixel moves, the whole polynomial changes, with displacement d; A is unchanged before and after the displacement, so the functions before and after the change are respectively
f1(x) = xᵀAx + b1ᵀx + c1
f2(x) = xᵀAx + b2ᵀx + c2
where b1 and b2 respectively represent the 2×1 matrices before and after the change, and c1 and c2 respectively represent the constant terms before and after the change.
This yields the constraint:
Ad = Δb
where Δb = −(1/2)(b2 − b1).
Finally, the objective function is established:
‖Ad − Δb‖²
Minimizing the objective function solves for the displacement d; dividing d by the time over which the displacement occurred gives the velocity vector.
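In practice the per-pixel velocity field can be obtained from an off-the-shelf Farneback implementation rather than by hand-coding the polynomial expansion; a sketch using OpenCV's cv2.calcOpticalFlowFarneback is shown below. The pyramid and window parameters are illustrative defaults, not values from the patent.

```python
import cv2
import numpy as np

def dense_flow(prev_bgr, next_bgr, dt_s):
    # Per-pixel velocity in pixels/second between two extracted frames.
    prev_v = np.max(prev_bgr, axis=2)         # V = max(R, G, B), as in the text
    next_v = np.max(next_bgr, axis=2)
    flow = cv2.calcOpticalFlowFarneback(
        prev_v, next_v, None,
        pyr_scale=0.5, levels=3, winsize=15,  # illustrative parameters
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    velocity = flow / dt_s                    # displacement over dt_s -> speed
    speed = np.linalg.norm(velocity, axis=2)
    direction = np.arctan2(velocity[..., 1], velocity[..., 0])
    return velocity, speed, direction
```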
Step 5: determine the cloud movement region from the obtained pixel speeds.
(1) Determine the effective speed threshold:
Collect speeds computed by the dense optical flow method at multiple time points and compare them against objects manually labeled as stationary, to determine a minimum effective speed threshold vthre1; below this magnitude the cloud is considered stationary.
(2) Remove cloud motion noise data:
Set to 0 all speeds of moving pixels whose magnitude is below the minimum effective speed threshold vthre1.
(3) Calculate the cloud detection confidence:
A pixel whose red-to-blue ratio equals the threshold p1 is cloud with confidence 0.5, and a pixel whose ratio is greater than the threshold p2 is cloud with confidence 1; pixels below p1 are considered blue sky and need no confidence comparison or calculation. The confidence of points between p1 and p2 can be obtained by interpolation; for example, linear interpolation uses:
confidence = 0.5 + 0.5·(r/b − p1)/(p2 − p1)
(4) Calculate the motion confidence:
Let the mean pixel speed be v̄. Points with speed greater than q·v̄ are moving cloud with confidence 1, and points with speed vthre1 have confidence 0; the parameter q is less than 1, its actual value determined from statistics of measured data. The confidence of points with speed between vthre1 and q·v̄ can be calculated by interpolation, for example the linear interpolation:
confidence = (v − vthre1)/(q·v̄ − vthre1)
(5) Determine the cloud movement region from the cloud detection and motion detection confidences:
When the motion detection confidence is high, or the two confidences are close, the overlapping region of cloud detection and motion detection (optical flow method) is taken as the cloud movement region; when the cloud detection confidence is high, the cloud region identified by the cloud detection method is regarded as correct throughout, and the cloud motion is corrected (see step 7). In this way, both pixels falsely detected as cloud and other moving objects falsely detected as cloud can be removed.
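A minimal sketch of the two confidence maps and a simple overlap rule, under the linear-interpolation reading given above; the combination threshold and the value of q are illustrative assumptions.

```python
import numpy as np

def cloud_confidence(rb_ratio, p1, p2):
    # 0 below p1, 0.5 at p1, rising linearly to 1 at p2 and above.
    conf = 0.5 + 0.5 * (rb_ratio - p1) / (p2 - p1)
    conf = np.clip(conf, 0.0, 1.0)
    conf[rb_ratio < p1] = 0.0
    return conf

def motion_confidence(speed, v_thre1, q=0.5):
    # 0 at v_thre1, rising linearly to 1 at q times the mean speed (q < 1);
    # q = 0.5 is an illustrative value, to be fitted from measured data.
    v_hi = q * speed.mean()
    conf = (speed - v_thre1) / max(v_hi - v_thre1, 1e-6)
    return np.clip(conf, 0.0, 1.0)

def cloud_motion_region(c_cloud, c_motion, thr=0.5):
    # Overlap of the two detections taken as the cloud movement region
    # (thr is an illustrative cut on both confidence maps).
    return (c_cloud >= thr) & (c_motion >= thr)
```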
Step 6: coordinate transformation.
The all-sky imager uses a fisheye camera whose pixel coordinates are distorted, which is inconvenient for subsequent processing; they need to be converted into ordinary pinhole camera coordinates.
First, let a point in the camera coordinate system be (x, y, z) and its pixel coordinates be (u, v); the projection formula is
u = fx·x/(z + ξ·d) + cx,  v = fy·y/(z + ξ·d) + cy
where ξ is the distance between the camera center and the sphere center, d = √(x² + y² + z²) is the distance from the point (x, y, z) to the origin of the camera coordinate system, (cx, cy) are the coordinates of the image center, and fx and fy are respectively the focal lengths of the camera coordinate system in the x and y directions.
The back projection is
(x, y, z) = η·(mx, my, 1) − (0, 0, ξ)
where mx = (u − cx)/fx, my = (v − cy)/fy, and η = (ξ + √(1 + (1 − ξ²)·(mx² + my²)))/(mx² + my² + 1).
The distance ξ between the camera center and the sphere center and the focal lengths fx and fy are calculated from known coordinate points using the above equations.
Then, for the actually collected images, the actual coordinates of each pixel are calculated through the above two equations, and the image corresponding to a pinhole camera is calculated with:
u′ = fx·x/z + cx,  v′ = fy·y/z + cy
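The formulas above match the form of the unified sphere (omnidirectional) camera model; under that assumption, a sketch of the forward projection, back projection and pinhole reprojection follows, with ξ, fx, fy, cx, cy taken as already calibrated.

```python
import numpy as np

def project_unified(x, y, z, xi, fx, fy, cx, cy):
    # Unified sphere model: camera-frame point -> fisheye pixel (u, v).
    d = np.sqrt(x * x + y * y + z * z)        # distance to the camera origin
    u = fx * x / (z + xi * d) + cx
    v = fy * y / (z + xi * d) + cy
    return u, v

def backproject_unified(u, v, xi, fx, fy, cx, cy):
    # Fisheye pixel -> ray on the unit sphere; the standard inverse of the
    # unified model, assumed here rather than quoted from the patent.
    mx, my = (u - cx) / fx, (v - cy) / fy
    r2 = mx * mx + my * my
    eta = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (r2 + 1.0)
    return eta * mx, eta * my, eta - xi

def to_pinhole(x, y, z, fx, fy, cx, cy):
    # Ordinary pinhole reprojection (valid for points in front of the
    # camera, z > 0), used to build the undistorted image.
    return fx * x / z + cx, fy * y / z + cy
```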
Step 7: remove abnormal points based on the speed and direction of the pixels to correct the cloud movement speed.
In actual detection, most detected motion comes from cloud motion. After the coordinate transformation, the magnitude and direction of the cloud motion speed over the image are essentially uniform, so other motion deviating from the cloud motion vector should be treated as abnormal data (possibly from the motion of other objects, false detections due to image noise, false detections due to light changes, and so on). This approach fails when the sky is cloudless or nearly cloudless, so it must be used together with the cloud detection result. The specific method is as follows:
(1) Determine the speed magnitude range of effective cloud motion, i.e. determine two thresholds, as follows:
 ① collect video data of different cloud motion at multiple times;
 ② compute the speeds with the dense optical flow method;
 ③ remove the speeds of non-cloud regions, cloud speeds in the sun region, and abnormal speeds in other cloud regions;
 ④ take the minimum of the cleaned data as threshold 1 and the maximum as threshold 2.
(2) Set to zero the speed magnitudes below threshold 1 or above threshold 2, then take the average direction of speeds that are similar in magnitude but different in direction.
(3) Slide a window over the pseudo-image generated from the velocity vectors of step 4, from left to right and top to bottom, correcting the speeds inside the window in turn; the sliding window algorithm is illustrated in Figure 2. The specific speed correction inside the window is as follows. The image here is the two-channel pseudo-image formed by the velocity vectors, the same size as the image used for optical flow detection; the first channel represents the x component of the velocity and the second channel the y component, from which the speed magnitude and direction are determined.
 ① Average the speed magnitudes inside the window that lie within the two thresholds to obtain v̄, and retain the speeds within [(1 − m)·v̄, (1 + m)·v̄]; m is an empirical parameter tuned so that the detected motion region matches the actual moving cloud region, generally 0.1. The speed magnitude of the pixels in the window other than the retained ones is set to 0.
 ② Average the directions of the retained pixels to obtain θ̄, and set the direction of all retained pixels to θ̄ and their magnitude to v̄.
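A minimal sketch of the correction inside one window position, under the reconstruction above; the tolerance m = 0.1 follows the text, while the window size in the trailing traversal comment is an illustrative choice.

```python
import numpy as np

def correct_window(vel_win, v_lo, v_hi, m=0.1):
    # Correct one window of the two-channel velocity pseudo-image in place.
    speed = np.linalg.norm(vel_win, axis=2)
    valid = (speed >= v_lo) & (speed <= v_hi)       # inside the two thresholds
    if not valid.any():
        vel_win[:] = 0
        return
    v_mean = speed[valid].mean()                    # window mean speed
    keep = valid & (np.abs(speed - v_mean) <= m * v_mean)
    if not keep.any():
        vel_win[:] = 0
        return
    theta = np.arctan2(vel_win[..., 1][keep], vel_win[..., 0][keep]).mean()
    vel_win[:] = 0                                  # zero everything first
    vel_win[keep] = v_mean * np.array([np.cos(theta), np.sin(theta)])

# Traversal over the pseudo-image, left to right and top to bottom
# (16-pixel windows are an illustrative choice):
# for yy in range(0, vel.shape[0] - 16 + 1, 16):
#     for xx in range(0, vel.shape[1] - 16 + 1, 16):
#         correct_window(vel[yy:yy + 16, xx:xx + 16], v_lo, v_hi)
```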
Step 8: predict the start and end times of cloud occlusion from the determined cloud movement region and the corrected cloud movement speed.
Let the coordinates of the center of the sun in the sky image be (x0, y0) and its radius r0; a point on the edge of the solar disk is expressed as (x0 + r0·cos θ, y0 + r0·sin θ), 0 ≤ θ < 2π; a particular cloud pixel has coordinates (x1, y1) and velocity vector (u1, v1).
The derivation is as follows:
If after time t the pixel arrives exactly at a point on the edge of the sun, then
x1 + u1·t = x0 + r0·cos θ,  y1 + v1·t = y0 + r0·sin θ
Let Δx = x0 − x1 and Δy = y0 − y1; then
u1·t = Δx + r0·cos θ,  v1·t = Δy + r0·sin θ
Eliminating t gives
v1·Δx + v1·r0·cos θ = u1·Δy + u1·r0·sin θ
v1·Δx − u1·Δy = u1·r0·sin θ − v1·r0·cos θ
The right-hand side can be written as r0·√(u1² + v1²)·sin(θ − φ), where cos φ = u1/√(u1² + v1²) and sin φ = v1/√(u1² + v1²), so
sin(θ − φ) = (v1·Δx − u1·Δy)/(r0·√(u1² + v1²))
Since |sin(θ − φ)| ≤ 1, the following condition can be derived:
(v1·Δx − u1·Δy)² ≤ r0²·(u1² + v1²)
When it holds, θ has two solutions; substituting them into
t = (Δx + r0·cos θ)/u1
gives the two times, which written out are the roots of the quadratic
(u1² + v1²)·t² − 2(u1·Δx + v1·Δy)·t + (Δx² + Δy² − r0²) = 0
The concrete calculation is implemented as follows:
First, judge whether (v1·Δx − u1·Δy)² ≤ r0²·(u1² + v1²) is satisfied; if not, the point cannot reach the sun region and no further calculation is needed.
Then, calculate the time t1 at which the cloud pixel reaches the sun, expressed as:
t1 = [(u1·Δx + v1·Δy) − √(r0²·(u1² + v1²) − (v1·Δx − u1·Δy)²)] / (u1² + v1²)
If t1 < 0, the cloud pixel is moving away from the sun and is discarded.
The departure time t2 of the cloud pixel is expressed as:
t2 = [(u1·Δx + v1·Δy) + √(r0²·(u1² + v1²) − (v1·Δx − u1·Δy)²)] / (u1² + v1²)
Finally, after the arrival time of every cloud pixel is determined, the minimum among them is the time at which the cloud front reaches the sun region, and the maximum of the departure times is the predicted time at which all clouds leave the sun region.
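A worked sketch of the reachability test and the arrival/departure times in the closed form derived above:

```python
import numpy as np

def occlusion_times(x1, y1, u1, v1, x0, y0, r0):
    # Arrival/departure times (t1, t2) of one cloud pixel at the solar disk,
    # or None if the disk is unreachable or the pixel moves away from it.
    dx, dy = x0 - x1, y0 - y1
    s2 = u1 * u1 + v1 * v1                    # squared speed
    disc = r0 * r0 * s2 - (v1 * dx - u1 * dy) ** 2
    if s2 == 0 or disc < 0:                   # line of motion misses the disk
        return None
    root = np.sqrt(disc)
    proj = u1 * dx + v1 * dy
    t1 = (proj - root) / s2                   # arrival at the disk edge
    t2 = (proj + root) / s2                   # departure from the disk edge
    if t1 < 0:                                # moving away from the sun
        return None
    return t1, t2

# Over all cloud pixels: min of t1 = occlusion start, max of t2 = occlusion end.
```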
As shown in Figure 3 (already transformed into the pinhole camera coordinate system), the gray region represents moving cloud, the black region represents clear sky, and the circle represents the sun region; the cloud front predicted to reach the sun region is specially marked in light gray. The shade of the gray region encodes the vector direction (excluding the specially marked cloud front); the figure shows that the cloud motion directions are essentially uniform.
Embodiment 2
This embodiment collects images with multiple pinhole cameras, each responsible for one region of the sky.
The specific method differs from Embodiment 1 only in that the coordinate transformation of step 6 is omitted; the remaining steps are the same.
The above description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. The present invention is therefore not limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

  1. A cloud occlusion prediction method based on the dense optical flow method, characterized by comprising the following steps:
    Step 1: collecting real-time sky video with a video capture device and converting it into an image sequence;
    Step 2: preprocessing the collected image sequence to eliminate irrelevant image information and retain only the sky region;
    Step 3: performing cloud recognition and judgment on the obtained sky region;
    Step 4: performing a dense optical flow calculation on the image sequence obtained in step 1 to obtain the speed and direction of each pixel;
    Step 5: determining the cloud movement region from the obtained pixel speeds;
    Step 6: removing abnormal points based on the speed and direction of the pixels to correct the cloud movement speed;
    Step 7: predicting the start and end times of cloud occlusion from the determined cloud movement region and the corrected cloud movement speed.
  2. The cloud occlusion prediction method based on the dense optical flow method according to claim 1, characterized in that the video capture device comprises multiple pinhole cameras.
  3. The cloud occlusion prediction method based on the dense optical flow method according to claim 1, characterized in that the video capture device is an all-sky imager or an ordinary fisheye camera, and the prediction method further comprises a coordinate transformation step between step 5 and step 6.
  4. The cloud occlusion prediction method based on the dense optical flow method according to claim 1, characterized in that in step 3 the cloud recognition and judgment method comprises a channel-ratio threshold method, a machine learning method or a deep learning method.
  5. The cloud occlusion prediction method based on the dense optical flow method according to claim 1, characterized in that in step 4 the dense optical flow method is the Farneback algorithm.
  6. The cloud occlusion prediction method based on the dense optical flow method according to claim 5, characterized in that the Farneback algorithm calculates the speed and direction of each pixel as follows:
    First, the image is converted to grayscale: the image is linearly transformed into the HSV color space, and the value dimension V of that color space is used as the grayscale information:
    V = max(R, G, B)
    where R, G and B respectively represent the brightness values of red, green and blue in the RGB color space;
    Then, the gray value of an image pixel is regarded as a function f(x, y) of two variables; a local coordinate system is built centered on the pixel of interest, and the function is expanded quadratically as:
    f(x, y) = f(x) = xᵀAx + bᵀx + c
    where x is a two-dimensional column vector, A is a 2×2 symmetric matrix, b is a 2×1 matrix, f(x) is equivalent to f(x, y) and represents the gray value of the pixel, and c represents the constant term of the quadratic expansion; if the pixel moves, the whole polynomial changes, with displacement d; A is unchanged before and after the displacement, so the functions before and after the change are respectively
    f1(x) = xᵀAx + b1ᵀx + c1
    f2(x) = xᵀAx + b2ᵀx + c2
    where b1 and b2 respectively represent the 2×1 matrices before and after the change, and c1 and c2 respectively represent the constant terms before and after the change;
    This yields the constraint:
    Ad = Δb
    where Δb = −(1/2)(b2 − b1);
    Finally, the objective function is established:
    ‖Ad − Δb‖²
    Minimizing the objective function solves for the displacement d; dividing d by the time over which the displacement occurred gives the velocity vector.
  7. The cloud occlusion prediction method based on the dense optical flow method according to claim 1, characterized in that the specific method of step 5 is as follows:
    (1) determining the effective speed threshold;
    (2) removing cloud motion noise data;
    (3) calculating the cloud detection confidence;
    (4) calculating the motion confidence;
    (5) determining the cloud movement region from the cloud detection and motion detection confidences.
  8. The cloud occlusion prediction method based on the dense optical flow method according to claim 1, characterized in that the specific method of step 6 is as follows:
    (1) determining the speed magnitude range of effective cloud motion;
    (2) setting speed magnitudes outside the range to zero, then taking the average direction of speeds that are similar in magnitude but different in direction;
    (3) sliding a window over the pseudo-image generated from the velocity vectors calculated in step 4, from left to right and top to bottom, correcting the speeds inside the window in turn.
  9. The cloud occlusion prediction method based on the dense optical flow method according to claim 1, characterized in that the specific method of step 7 is as follows:
    Let the coordinates of the center of the sun in the sky image be (x0, y0) and its radius r0; a point on the edge of the solar disk is expressed as (x0 + r0·cos θ, y0 + r0·sin θ), 0 ≤ θ < 2π; a particular cloud pixel has coordinates (x1, y1) and velocity vector (u1, v1);
    First, with Δx = x0 − x1 and Δy = y0 − y1, judge whether (v1·Δx − u1·Δy)² ≤ r0²·(u1² + v1²) is satisfied; if not, the point cannot reach the sun region and no further calculation is needed;
    Then, calculate the time t1 at which the cloud pixel reaches the sun, expressed as:
    t1 = [(u1·Δx + v1·Δy) − √(r0²·(u1² + v1²) − (v1·Δx − u1·Δy)²)] / (u1² + v1²)
    If t1 < 0, the cloud pixel is moving away from the sun and is discarded;
    The departure time t2 of the cloud pixel is expressed as:
    t2 = [(u1·Δx + v1·Δy) + √(r0²·(u1² + v1²) − (v1·Δx − u1·Δy)²)] / (u1² + v1²)
    Finally, after the arrival time of every cloud pixel is determined, the minimum among them is the time at which the cloud front reaches the sun region, and the maximum of the departure times is the predicted time at which all clouds leave the sun region.
  10. The cloud occlusion prediction method based on the dense optical flow method according to claim 3, characterized in that the coordinate transformation method is as follows:
    First, let a point in the camera coordinate system be (x, y, z) and its pixel coordinates be (u, v); the distance ξ between the camera center and the sphere center, and the focal lengths fx and fy of the camera coordinate system in the x and y directions, are calculated from known coordinate points using:
    u = fx·x/(z + ξ·d) + cx,  v = fy·y/(z + ξ·d) + cy
    where d = √(x² + y² + z²) is the distance from the point (x, y, z) to the origin of the camera coordinate system, and (cx, cy) are the coordinates of the image center;
    Then, for the actually collected images, the actual coordinates of each pixel are calculated through the above two equations, and the image corresponding to a pinhole camera is calculated with:
    u′ = fx·x/z + cx,  v′ = fy·y/z + cy
PCT/CN2023/098235 2022-06-10 2023-06-05 Cloud occlusion prediction method based on the dense optical flow method WO2023236886A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210651845.1A CN115170619B (zh) 2022-06-10 2022-06-10 Cloud occlusion prediction method based on the dense optical flow method
CN202210651845.1 2022-06-10

Publications (1)

Publication Number Publication Date
WO2023236886A1 true WO2023236886A1 (zh) 2023-12-14

Family

ID=83485078

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/098235 WO2023236886A1 (zh) 2022-06-10 2023-06-05 Cloud occlusion prediction method based on the dense optical flow method

Country Status (2)

Country Link
CN (1) CN115170619B (zh)
WO (1) WO2023236886A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115170619B (zh) * 2022-06-10 2023-08-15 山东电力建设第三工程有限公司 Cloud occlusion prediction method based on the dense optical flow method
CN117369026B (zh) * 2023-12-06 2024-03-08 江苏省气象台 Method for real-time, high-precision prediction of cloud cluster residence time

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780540A (zh) * 2016-12-08 2017-05-31 浙江科技学院 Ground-based cloud image tracking and early-warning method for photovoltaic power generation
CN108871290A (zh) * 2018-06-07 2018-11-23 华南理工大学 Visible-light dynamic positioning method based on optical flow detection and Bayesian prediction
CN111583298A (zh) * 2020-04-24 2020-08-25 杭州远鉴信息科技有限公司 Short-term cloud image tracking method based on the optical flow method
CN115170619A (zh) * 2022-06-10 2022-10-11 山东电力建设第三工程有限公司 Cloud occlusion prediction method based on the dense optical flow method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR3047829B1 (fr) * 2016-02-12 2019-05-10 Compagnie Nationale Du Rhone Method for estimating the position of the solar disk in a sky image
US10303942B2 (en) * 2017-02-16 2019-05-28 Siemens Aktiengesellschaft Short term cloud forecast, improved cloud recognition and prediction and uncertainty index estimation

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780540A (zh) * 2016-12-08 2017-05-31 浙江科技学院 Ground-based cloud image tracking and early-warning method for photovoltaic power generation
CN108871290A (zh) * 2018-06-07 2018-11-23 华南理工大学 Visible-light dynamic positioning method based on optical flow detection and Bayesian prediction
CN111583298A (zh) * 2020-04-24 2020-08-25 杭州远鉴信息科技有限公司 Short-term cloud image tracking method based on the optical flow method
CN115170619A (zh) * 2022-06-10 2022-10-11 山东电力建设第三工程有限公司 Cloud occlusion prediction method based on the dense optical flow method

Also Published As

Publication number Publication date
CN115170619B (zh) 2023-08-15
CN115170619A (zh) 2022-10-11

Similar Documents

Publication Publication Date Title
WO2023236886A1 (zh) Cloud occlusion prediction method based on the dense optical flow method
CN103325112B (zh) Fast moving-object detection method for dynamic scenes
CN107833221B (zh) Water leakage detection method based on multi-channel feature fusion and machine learning
WO2019101221A1 (zh) Ship detection method and system based on multidimensional scene features
WO2020098195A1 (zh) Ship identity recognition method based on fusion of AIS and video data
Chang et al. Tracking Multiple People Under Occlusion Using Multiple Cameras.
CN104978567B (zh) Vehicle detection method based on scene classification
CN105046649A (zh) Panorama stitching method for removing moving objects from motion video
CN103761514A (zh) System and method for face recognition based on a wide-angle box camera and multiple dome cameras
CN102750527A (zh) Long-term stable face detection and tracking method and device for bank scenes
CN111914695B (zh) Machine-vision-based tidal bore monitoring method
CN103279791A (zh) Multi-feature-based pedestrian counting method
Labati et al. Weight estimation from frame sequences using computational intelligence techniques
CN114170552A (zh) Real-time early-warning method and system for natural gas leakage based on infrared thermal imaging
CN107862713A (zh) Real-time camera-deflection detection and early-warning method and module for polled conference venues
CN112801184A (zh) Cloud tracking method, system and device
CN114612933B (zh) Monocular social-distance detection and tracking method
CN105869184B (zh) Forest fire smoke image detection method based on path analysis
Wu et al. Video surveillance object recognition based on shape and color features
CN113781523A (zh) Football detection and tracking method and device, electronic device, and storage medium
Zhang et al. Intrahour cloud tracking based on optical flow
CN115512263A (zh) Dynamic visual monitoring method and device for objects falling from heights
CN115690190B (zh) Moving-object detection and localization method based on optical-flow images and pinhole imaging
CN111583298B (zh) Short-term cloud image tracking method based on the optical flow method
WO2023019699A1 (zh) Face recognition method and system for depression-angle faces based on a 3D face model

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23819051

Country of ref document: EP

Kind code of ref document: A1