CN116994065A - Cloud cluster classification and cloud evolution trend prediction method - Google Patents
- Publication number: CN116994065A (application number CN202311120826.7A)
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The application discloses a cloud cluster classification and cloud evolution trend prediction method, comprising the following steps: introducing the motion history image (MHI), acquiring and updating the motion history image, and identifying cloud layers in cloud-image video; classifying different cloud images using an MHI improved with cloud-layer feature points; graying the cloud image, then introducing the gray-level co-occurrence matrix and the inverse difference moment to evaluate whether the classification result is accurate; if so, classifying the cloud clusters by the value computed from the inverse difference moment; if not, updating the locally enhanced motion history image and repeating the process. On the basis of the cloud cluster classification result, a MotionGRU unit is built and embedded between the layers of an existing GRU prediction model, and MotionHighway connections are added between the layers to obtain an enhanced GRU model, which is then used to predict the cloud evolution trend. The application realizes accurate identification and classification of cloud clusters and improves the accuracy of cloud evolution trend prediction.
Description
Technical Field
The application relates to a cloud cluster classification and cloud evolution trend prediction method.
Background
With the large-scale development of solar energy, photovoltaic power generation is growing rapidly and accounts for an increasing proportion of the power grid. However, various meteorological factors, especially the motion and evolution of clouds, cause fluctuations in solar radiation and directly lead to instability in the output power of photovoltaic power stations.
Before photovoltaic power prediction can be carried out, accurately identifying the clouds in ground-based cloud images is the key and foundation of the prediction, and is also a prerequisite for studying cloud classification and cloud evolution. Traditional identification methods rely mainly on manual experience: meteorological staff must manually annotate the position, shape and other information of the clouds according to meteorological knowledge and real-time weather conditions. The workload is heavy, and identification is even more difficult in overcast or rainy weather or for thin clouds.
On the other hand, existing cloud evolution trend prediction methods focus mainly on changes in meteorological factors at or before the current moment and ignore the influence of cloud type and motion trend on solar radiation, which limits the accuracy of photovoltaic power prediction. When the sky is cloudy, the short-term influence of cloud motion, occlusion and dissipation on solar irradiance is high compared with other meteorological factors; under clear sky, other meteorological factors also affect solar irradiance, but it changes relatively little over short periods.
In summary, accurately identifying cloud clusters and their position changes from ground-based cloud images, and thereby predicting future cloud cluster positions, is of great significance for photovoltaic power generation prediction.
Disclosure of Invention
Purpose of the application: the first object of the application is to provide a cloud cluster classification method capable of accurately identifying and classifying cloud clusters; the second object is to provide a cloud evolution trend prediction method with high prediction accuracy.
The technical scheme is as follows: the cloud cluster classification method provided by the application comprises the following steps:
(1) Introducing the motion history image (MHI) theory, acquiring and updating the motion history image, and identifying cloud layers in the cloud-image video;
(2) Realizing the classification of different cloud images using the MHI improved with cloud-layer feature points, the classification results comprising clear sky, partly cloudy and fully cloudy;
(3) Graying the cloud image, then introducing the gray-level co-occurrence matrix and the inverse difference moment to evaluate whether the partly cloudy and fully cloudy classification results of step (2) are accurate; if so, classifying the cloud clusters by the value computed from the inverse difference moment; if not, executing step (4);
(4) Updating the locally enhanced motion history image and repeating steps (2) to (3).
Further, step (1) includes:
the MHI obtains a binary difference map at position (x, y) between the image frames at times t and t−1 by the inter-frame difference method, i.e. the motion history image D(x, y, t):
D(x, y, t) = 1, if A(x, y, t) ≥ ξ; otherwise D(x, y, t) = 0 (1)
where A(x, y, t) is the binary frame difference, A(x, y, t) = |B(x, y, t) − B(x, y, t ± Δ)|; B(x, y, t) is the intensity value of the pixel at coordinates (x, y) at time t of the image sequence, and Δ represents the time variation; ξ is a threshold representing the sensitivity of the generated binary difference map to scene changes;
the MHI then updates the motion history image from D(x, y, t):
H_τ(x, y, t) = τ, if D(x, y, t) = 1; otherwise H_τ(x, y, t) = max(0, H_τ(x, y, t−1) − δ) (2)
where (x, y) are pixel coordinates and t is the time (in the ground-based cloud-image video data, the frame index of the video); the duration τ represents the time range of the motion; H_τ(x, y, t−1) represents the time range of the motion at time t−1; δ is the attenuation parameter;
cloud layers in the cloud-image video are identified through the pixel coordinates (x, y) at different times t.
Further, the attenuation parameter δ takes a value of 1.
Further, step (2) includes:
an updated motion history image D(x, y, t) is obtained by frame differencing, and a perception mask M(x, y, t) is generated when cloud-layer features are detected (equation (3)), where L is the set of cloud-layer coordinates, W denotes the pixel coordinates around the cloud-layer coordinates, α is a weight, and d_M(L, P) is the Manhattan distance;
the cloud-layer coordinates carry the highest weight, while the surrounding pixels carry lower weights related to their Manhattan distance from the corresponding cloud-layer coordinates; different cloud images are classified by computing the weight values of the cloud-layer coordinates and of the pixels around the cloud layer.
Further, step (3) includes:
the gray-level co-occurrence matrix is:
g(i, j) = #{ f(x₁, z₁) = i, f(x₂, z₂) = j | (x₁, z₁), (x₂, z₂) ∈ M × N } (4)
where g is the gray-level co-occurrence matrix; i and j are gray-level indices; f is the image; (x₁, z₁) and (x₂, z₂) are image coordinate points; M × N is the image range; # denotes the number of pixel pairs satisfying the condition;
the inverse difference moment is:
IDM = Σ_i Σ_j g(i, j) / (1 + (i − j)²) (5)
where IDM is the inverse difference moment.
Further, in step (3), the cloud cluster classification results include blocky cloud, thin cloud, thick cloud, cirrus cloud and stratus cloud.
Further, step (4) includes:
an enhanced difference image E(x, y, t) is obtained as:
E(x, y, t) = M(x, y, t) · D(x, y, t) (6)
the enhanced difference image is binarized to update the motion history image D(x, y, t):
D(x, y, t) = 1, if E(x, y, t) ≥ ξ; otherwise D(x, y, t) = 0 (7)
the motion history image MHI(x, y, t) is then updated to obtain the new motion history image:
MHI(x, y, t) = τ, if D(x, y, t) = 1; otherwise MHI(x, y, t) = max(0, MHI(x, y, t−1) − δ) (8)
the cloud evolution trend prediction method provided by the application comprises the following steps:
(1) According to the cloud cluster classification method, a cloud cluster classification result is obtained;
(2) Constructing a MotionGRU unit capable of capturing transient changes and realizing motion trend accumulation;
(3) Embedding the MotionGRU units between layers of the existing GRU prediction model, and simultaneously adding MotionHighway between the layers for connection to obtain an enhanced GRU model;
(4) Predicting the cloud evolution trend using the enhanced GRU model.
Further, in step (2), the design process of the MotionGRU unit is as follows:
F′_t represents the current transient variable, and Transient(·) is the transient-change learner; F^l_{t−1} represents the transient variable captured by the motion filter at the previous moment, and H^{l−1}_t represents the input of the previous prediction unit; t represents the time step, l ∈ {1, …, L} represents the current layer, and L represents the upper layer limit;
D^l_t represents the current trend momentum, Trend(·) is the trend-momentum updater, and D^l_{t−1} represents the trend momentum at the previous moment;
F^l_t, the motion filter, represents the transition of each pixel location between adjacent states;
m_t represents the position filter; Broadcast(·) is a point-multiplication (broadcasting) operation, σ is the sigmoid function, W_hm represents the weight of the prediction unit, and * represents the convolution operator;
H′_t represents the position information after image mapping, where ⊙ represents the Hadamard product and Warp(·) represents the image mapping;
g_t represents the update gate; W_{1×1} represents the input weights, concat(·) represents the concatenation operation, Dec(·) represents the decoder, and X_t represents the coordinate position information at time t;
X^l_t represents the output cloud-layer prediction information, and X_{t−1} represents the coordinate position information at time t−1.
Further, in step (3), the MotionGRU unit is embedded between the layers of the existing GRU prediction model, and MotionHighway connections are added between the layers, obtaining the enhanced GRU model by the following formulas:
X^l_t is the input of layer l at time t; H^l_t represents the hidden state of the prediction unit; D^l_t represents the trend accumulation at time t;
equation (18) represents MotionHighway, and out_t is the output gate of the prediction unit.
Beneficial effects: compared with the prior art, the application has the following remarkable advantages:
(1) By introducing the motion history image (MHI) theory to analyze and identify cloud layers in cloud-image video, accurate identification and classification of cloud clusters can be realized;
(2) The temporal correlation of instantaneous cloud-layer motion is considered: a MotionGRU unit is constructed to jointly model the transient changes and motion trends in cloud cluster motion, accurately reflecting the complex spatio-temporal motion laws of cloud clusters; meanwhile, MotionHighway is introduced to balance the moving and non-moving parts of the cloud clusters, and an enhanced GRU model is constructed with the ability to capture transient changes and motion trends during cloud motion, so that the evolution trends of different types of cloud clusters can be better analyzed and the prediction accuracy of cloud evolution trends is improved.
Drawings
FIG. 1 is a flow chart of a cloud cluster classification and cloud evolution trend prediction method provided by an embodiment of the application;
FIG. 2 is a block diagram of a MotionGRU unit in an embodiment of the application;
FIG. 3 is a block diagram of an enhanced MotionGRU model in an embodiment of the application.
Detailed Description
The application is further described below with reference to the accompanying drawings.
As shown in FIG. 1, an embodiment of the application provides a cloud cluster classification method for realizing cloud-layer identification and cloud cluster classification of ground-based cloud images, comprising the following steps:
(1) Introducing a motion history image theory MHI, acquiring and updating a motion history image, and identifying cloud layers in a cloud picture video;
The MHI is generated on the basis of the binary motion energy image (MEI), which represents the locations where motion occurs in the image sequence and describes the shape and spatial distribution of the motion.
The MHI obtains a binary difference map at the position (x, y) of an image frame at the time t and the time t-1 through an inter-frame difference method, namely a motion history image D (x, y, t):
wherein A (x, y, t) is a binary differential distance, the expression formula is A (x, y, t) = |B (x, y, t) -B (x, y, t+/-delta) |, B (x, y, t) is the intensity value of the pixel position with the coordinates of (x, y) at the moment of the image sequence t, delta represents the time variation, namely the moment t is changed to the moment t+/-delta; xi is a threshold value and represents the sensitivity degree of the binary differential graph to scene change during generation;
MHI updates the motion history image D (x, y, t):
wherein, (x, y) is pixel point coordinates, t is time, and the frame number of the video is represented in the video data of the foundation cloud picture; duration τ represents the time range of motion; h τ (x, y, t-1) represents the time range of motion at time t-1; delta is the attenuation parameter;
If the duration τ is too small, H_τ decays to 0 quickly as the attenuation parameter δ is repeatedly subtracted, so the motion becomes submerged in the background and part of the motion information is lost; if τ is too large, the intensity change of the pixel values is not obvious and it is difficult to determine the exact direction of motion.
The larger the value of the attenuation parameter δ, the faster the decay: if the appearance of a pixel in the video image sequence does not change, or changes from moving to static, δ is subtracted from that pixel's value. In this embodiment, δ takes the value 1 during the MHI calculation.
Cloud layers in the cloud-image video are identified through the pixel coordinates (x, y) at different times t.
(2) The classification of different cloud images is realized using the MHI improved with cloud-layer feature points; the classification results comprise clear sky, partly cloudy and fully cloudy;
an updated motion history image D(x, y, t) is obtained by frame differencing, and a perception mask M(x, y, t) is generated when cloud-layer features are detected (equation (3)), where L is the set of cloud-layer coordinates, W denotes the pixel coordinates around the cloud-layer coordinates, α is a weight, and d_M(L, P) is the Manhattan distance;
the cloud-layer coordinates carry the highest weight, while the surrounding pixels carry lower weights related to their Manhattan distance from the corresponding cloud-layer coordinates. The mask of equation (3) is generated whenever a cloud layer is detected, and different cloud images are classified by computing the weight values of the cloud-layer coordinates and of the pixels around the cloud layer.
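One possible concrete reading of the perception mask is sketched below: maximum weight at detected cloud-layer coordinates and a weight that falls off with Manhattan distance within a small neighborhood. The weight α, the neighborhood radius and the fall-off law α/d are illustrative assumptions, not the application's exact formula:

```python
import numpy as np

def perception_mask(shape, cloud_coords, alpha=0.5, radius=2):
    """Weight 1.0 at cloud-layer pixels; alpha / d_M for pixels within
    Manhattan distance `radius` of a cloud pixel; 0 elsewhere."""
    M = np.zeros(shape)
    for (cx, cy) in cloud_coords:
        for x in range(max(0, cx - radius), min(shape[0], cx + radius + 1)):
            for y in range(max(0, cy - radius), min(shape[1], cy + radius + 1)):
                d = abs(x - cx) + abs(y - cy)  # Manhattan distance d_M
                if d > radius:
                    continue
                w = 1.0 if d == 0 else alpha / d
                M[x, y] = max(M[x, y], w)  # keep the strongest contribution
    return M

M = perception_mask((5, 5), [(2, 2)])
```

The mask concentrates attention on the detected cloud layer while still weighting its immediate surroundings, which is the behaviour the text describes.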
(3) Graying the cloud image, then introducing the gray-level co-occurrence matrix and the inverse difference moment (IDM) to evaluate whether the partly cloudy and fully cloudy classification results of step (2) are accurate; if so, classifying the cloud clusters by the value computed from the inverse difference moment; if not, executing step (4);
the purpose of image graying is to increase the operation speed.
The gray scale coexistence matrix and the inverse difference moment are used as evaluation indexes when the cloud image is a local cloud and all the cloud, and the gray scale coexistence matrix formula is as follows:
g(i,j)=#{f(x 1 ,z 1 )=i,f(x 2 ,z 2 )=j|(x 1 ,x 2 ),(x 2 ,z 2 )∈M×N} (4)
wherein g is a gray scale coexistence matrix, i, j is a coordinate index; f is an image; (x) 1 ,z 1 )、(x 2 ,z 2 )、(x 1 ,x 2 )、(x 2 ,z 2 ) Representing image coordinate points; m×n represents an image range;
the inverse difference moment formula is:
wherein IDM is the inverse difference moment.
The inverse difference moment IDM is introduced to reflect the homogeneity of the image texture and to measure local variation of the image texture. Because clouds generally have a certain extended structure, the variation of cloud texture features is relatively large; the cloud clusters are therefore classified by the value computed from the inverse difference moment, mainly into five types: blocky cloud, thin cloud, thick cloud, cirrus cloud and stratus cloud.
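The co-occurrence matrix and inverse difference moment can be illustrated on small synthetic images. The horizontal pixel-pair offset (0, 1) used here is an assumption for the example; a homogeneous image yields the maximum IDM, while an alternating texture yields a lower value:

```python
import numpy as np

def glcm(img, levels, dx=0, dz=1):
    """Gray-level co-occurrence matrix g(i, j): counts of pixel pairs
    (f(x1, z1) = i, f(x2, z2) = j) at the given offset, as in equation (4)."""
    g = np.zeros((levels, levels), dtype=int)
    rows, cols = img.shape
    for x in range(rows - dx):
        for z in range(cols - dz):
            g[img[x, z], img[x + dx, z + dz]] += 1
    return g

def inverse_difference_moment(g):
    """IDM = sum_{i,j} g(i, j) / (1 + (i - j)^2), on the normalized matrix;
    high for homogeneous texture, lower for strongly varying texture."""
    n = g.sum()
    i, j = np.indices(g.shape)
    return float((g / n / (1.0 + (i - j) ** 2)).sum())

uniform = np.zeros((4, 4), dtype=int)           # perfectly homogeneous image
checker = np.indices((4, 4)).sum(axis=0) % 2    # alternating 0/1 texture
idm_u = inverse_difference_moment(glcm(uniform, 2))
idm_c = inverse_difference_moment(glcm(checker, 2))
```

The gap between the two IDM values is what the method exploits to separate smooth stratus-like textures from strongly varying ones.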
(4) Updating the locally enhanced motion history image, and repeating the steps (2) to (3).
Obtaining an enhanced differential image E (x, y, t), wherein the expression formula is as follows:
E(x,y,t)=M(x,y,t)·D(x,y,t) (6)
binarizing the enhanced differential image, and updating a motion history image D (x, y, t):
as with the conventional MHI, the IDM formula updates the motion-history image theory MHI (x, y, t) to obtain a new motion-history image calculation formula:
The locally enhanced MHI with cloud-layer features helps capture fine motion while the cloud is moving and effectively suppresses background noise.
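A sketch of the local enhancement step: the perception mask multiplies the difference image (equation (6)), the result is re-binarized, and the history is updated. The mask values, threshold and decay below are illustrative assumptions:

```python
import numpy as np

def enhanced_mhi_step(M, D, MHI_prev, xi=0.5, tau=255, delta=1):
    """E = M * D (equation (6)); binarize E, then apply the MHI-style update."""
    E = M * D                               # enhanced difference image
    D_new = (E >= xi).astype(np.uint8)      # binarization with threshold xi
    MHI = np.where(D_new == 1, tau, np.maximum(MHI_prev - delta, 0))
    return E, D_new, MHI

M = np.array([[1.0, 0.25], [0.0, 1.0]])    # perception-mask weights
D = np.array([[1, 1], [1, 0]])             # binary difference map
MHI_prev = np.full((2, 2), 10)
E, D_new, MHI = enhanced_mhi_step(M, D, MHI_prev)
```

Note how the low-weight pixels (mask 0.25 and 0.0) fall below the threshold after masking: this is the mechanism by which background noise far from the cloud layer is suppressed.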
As shown in fig. 1, the embodiment of the application further provides a cloud evolution trend prediction method, which includes the following steps:
(1) According to the cloud cluster classification method provided by the embodiment of the application, a cloud cluster classification result is obtained;
(2) Constructing a MotionGRU unit capable of capturing transient changes and realizing motion trend accumulation;
the constructed MotionGRU unit can uniformly simulate transient changes and motion trends. In prediction, motion may be represented as a displacement of pixels corresponding to a hidden state transition in a GRU network. By using MotionGRU to learn the pixel offset between adjacent states, the learned pixel level offset is used with a motion filter F t l And (3) representing. Since the real world motion is composed of transient changes and motion trends, two modules are designed in the MotionGRU to model the two components respectively, the two modules are a module for capturing transient changes and a motion trend accumulation module respectively, the dotted line part in fig. 2 is a module for capturing transient changes, and the dotted line part is a motion trend accumulation module. In the transient change part, a convolution GRU is adopted to learn the transient change rule of the cloud cluster, and by using the cyclic convolution network, not only the transient state can be considered, but also the time-space correlation in the cloud cluster change process can be associated, and the formula is expressed as follows:
wherein ut Representing an update gate, r t Indicating reset gate, z t For the reset feature at the current time, σ is a sigmoid function, W u ,W r and Wz Indicating the convolution kernel, ++indicates the convolution operator and hadamard product.Input representing the last prediction unit,/->Representing the transient variation captured by the motion filter from the previous instant. Transient variable F 'of current frame' t Calculated from update gates, etc. F (F) t l Representing the transition of each pixel location between adjacent states, is a motion filter.
The texture of cloud clusters in ground-based cloud images is complex; in particular, the color and texture variation differs between cloud types, while the texture features of the same cloud, or of clouds of the same type, change little. Whether the cloud type changes is judged from the transient variable change computed by F′_t, enabling effective prediction of different cloud types in the prediction unit.
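The transient-change learner follows standard (Conv)GRU gating. The sketch below replaces the convolutions with scalar weights purely for illustration, so all weights and inputs are hypothetical; it shows the gated blend that produces the new transient variable:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def transient_step(H_in, F_prev, Wu=0.5, Wr=0.5, Wz=1.0):
    """GRU-style transient update: gates computed from (H_in, F_prev), then a
    convex blend of the candidate z and the previous motion filter F_prev."""
    u = sigmoid(Wu * (H_in + F_prev))        # update gate
    r = sigmoid(Wr * (H_in + F_prev))        # reset gate
    z = np.tanh(Wz * (H_in + r * F_prev))    # candidate transient feature
    F_new = u * z + (1.0 - u) * F_prev       # new transient variable F'_t
    return F_new

H_in = np.array([0.0, 1.0])                  # no change vs. strong change
F_prev = np.array([0.0, 0.0])
F_new = transient_step(H_in, F_prev)
```

Where the input shows no change, the transient variable stays at zero; where the input changes, the gate lets a nonzero transient response through.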
When predicting ground-based cloud images, the motion trend must be obtained over the whole frame sequence. Since the future is uncertain, in the motion-trend part the trend momentum is captured by accumulation over the observations, using the previous motion filter F^l_{t−1} as the estimate of the current motion trend. The resulting momentum update function is:
D^l_t = (1 − α) · D^l_{t−1} + α · F^l_{t−1}
where α is the step size of the momentum update and D^l_t is the learned trend momentum. The momentum update makes D^l_t converge toward the motion filter, so it can be regarded as the motion trend over a period of time.
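The momentum accumulation is an exponential moving average of past motion filters; repeated updates against a steady motion converge to that motion, which is the "trend over a period of time" described above. A minimal sketch (the step size α and the constant motion filter value are illustrative):

```python
def trend_update(D_prev, F_prev, alpha=0.5):
    """Momentum accumulation: blend the previous trend with the latest
    motion filter; repeated updates converge toward a constant filter."""
    return (1.0 - alpha) * D_prev + alpha * F_prev

D = 0.0
for _ in range(20):          # keep feeding a constant motion filter F = 1.0
    D = trend_update(D, 1.0)
```

A single step moves the trend only part of the way (here to 0.5), so short-lived transients barely perturb it, while a sustained motion dominates it.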
Through the above analysis, the design process of the complete MotionGRU unit is expressed by the following formulas:
F′_t represents the current transient variable, and Transient(·) is the transient-change learner; F^l_{t−1} represents the transient variable captured by the motion filter at the previous moment, and H^{l−1}_t represents the input of the previous prediction unit; t represents the time step, l ∈ {1, …, L} represents the current layer, and L represents the upper layer limit;
D^l_t represents the current trend momentum, Trend(·) is the trend-momentum updater, and D^l_{t−1} represents the trend momentum at the previous moment;
F^l_t, the motion filter, represents the transition of each pixel location between adjacent states;
m_t represents the position filter; Broadcast(·) is a point-multiplication (broadcasting) operation, σ is the sigmoid function, W_hm represents the weight of the prediction unit, and * represents the convolution operator;
H′_t represents the position information after image mapping, where ⊙ represents the Hadamard product and Warp(·) represents the image mapping;
g_t represents the update gate; W_{1×1} represents the input weights, concat(·) represents the concatenation operation, Dec(·) represents the decoder, and X_t represents the coordinate position information at time t;
X^l_t represents the output cloud-layer prediction information, and X_{t−1} represents the coordinate position information at time t−1.
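The Warp(·) image-mapping step moves hidden-state content along the learned motion filter. The toy version below uses integer offsets (practical implementations typically use bilinear sampling; the uniform flow field is a hypothetical example):

```python
import numpy as np

def warp_integer(H, flow):
    """Shift each pixel of H by the integer offset in flow ((dy, dx) per pixel);
    sources that fall outside the image are left at zero."""
    rows, cols = H.shape
    out = np.zeros_like(H)
    for y in range(rows):
        for x in range(cols):
            sy = y - int(flow[y, x, 0])      # source row for this output pixel
            sx = x - int(flow[y, x, 1])      # source column
            if 0 <= sy < rows and 0 <= sx < cols:
                out[y, x] = H[sy, sx]
    return out

H = np.zeros((3, 3))
H[0, 0] = 7.0                                # a single bright "cloud" pixel
flow = np.ones((3, 3, 2))                    # uniform motion: one step down-right
H_warped = warp_integer(H, flow)
```

Applying the motion filter this way is what lets the prediction unit transport cloud features to their expected future positions instead of merely fading them in place.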
(3) The MotionGRU unit is embedded between the layers of the existing GRU prediction model, and MotionHighway connections are added between the layers, obtaining the enhanced GRU model shown in FIG. 3;
the enhanced GRU model is constructed by the following formulas:
X^l_t is the input of layer l at time t; H^l_t represents the hidden state of the prediction unit; D^l_t represents the trend accumulation at time t;
equation (23) represents MotionHighway, and out_t is the output gate of the prediction unit.
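The MotionHighway connection gates between the transformed hidden state and the unchanged layer input; moving regions take the new state while static regions pass through. A minimal sketch of this output-gated blend (the gate values are illustrative):

```python
import numpy as np

def motion_highway(h_new, H_in, out_gate):
    """Output-gated blend: regions with gate near 1 take the new hidden state;
    regions with gate near 0 pass the layer input through unchanged."""
    return out_gate * h_new + (1.0 - out_gate) * H_in

h_new = np.array([1.0, 1.0])         # transformed hidden state
H_in = np.array([0.0, 0.0])          # unchanged input from the lower layer
gate = np.array([1.0, 0.0])          # first pixel "moving", second static
H_out = motion_highway(h_new, H_in, gate)
```

This is how the model balances the moving and non-moving parts of the cloud scene: static sky regions are carried forward untouched while moving cloud regions are updated.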
(4) The cloud evolution trend is predicted using the enhanced GRU model.
The application embeds the MotionGRU unit between the layers of the existing GRU prediction model to enhance the modeling of complex motion, and adds MotionHighway connections between the layers to balance the moving and non-moving parts of the cloud. Introducing the MotionGRU unit into cloud evolution trend prediction jointly models the transient changes and motion trends of cloud cluster motion, accurately reflecting the complex spatio-temporal motion laws of cloud clusters. With MotionHighway balancing the moving and non-moving parts of the cloud clusters, the enhanced GRU prediction model responds quickly to complex instantaneous cloud motions, analyzes the evolution trends of different types of cloud clusters, improves the accuracy of cloud evolution prediction based on ground-based cloud images, and provides a precondition for accurate prediction of the generation power of photovoltaic power stations.
Claims (10)
1. A cloud cluster classification method, comprising:
(1) Introducing the motion history image (MHI) theory, acquiring and updating the motion history image, and identifying cloud layers in the cloud-image video;
(2) Realizing the classification of different cloud images using the MHI improved with cloud-layer feature points, the classification results comprising clear sky, partly cloudy and fully cloudy;
(3) Graying the cloud image, then introducing the gray-level co-occurrence matrix and the inverse difference moment to evaluate whether the partly cloudy and fully cloudy classification results of step (2) are accurate; if so, classifying the cloud clusters by the value computed from the inverse difference moment; if not, executing step (4);
(4) Updating the locally enhanced motion history image and repeating steps (2) to (3).
2. The cloud cluster classification method according to claim 1, wherein the step (1) includes:
the MHI obtains a binary difference map at position (x, y) between the image frames at times t and t−1 by the inter-frame difference method, i.e. the motion history image D(x, y, t):
D(x, y, t) = 1, if A(x, y, t) ≥ ξ; otherwise D(x, y, t) = 0 (1)
where A(x, y, t) is the binary frame difference, A(x, y, t) = |B(x, y, t) − B(x, y, t ± Δ)|; B(x, y, t) is the intensity value of the pixel at coordinates (x, y) at time t of the image sequence, and Δ represents the time variation; ξ is a threshold representing the sensitivity of the generated binary difference map to scene changes;
the MHI then updates the motion history image from D(x, y, t):
H_τ(x, y, t) = τ, if D(x, y, t) = 1; otherwise H_τ(x, y, t) = max(0, H_τ(x, y, t−1) − δ) (2)
where (x, y) are pixel coordinates and t is the time (in the ground-based cloud-image video data, the frame index of the video); the duration τ represents the time range of the motion; H_τ(x, y, t−1) represents the time range of the motion at time t−1; δ is the attenuation parameter;
cloud layers in the cloud-image video are identified through the pixel coordinates (x, y) at different times t.
3. The cloud classification method according to claim 2, wherein the attenuation parameter δ takes a value of 1.
4. The cloud classification method according to claim 2, wherein step (2) includes:
an updated motion history image D(x, y, t) is obtained by frame differencing, and a perception mask M(x, y, t) is generated when cloud-layer features are detected (equation (3)), where L is the set of cloud-layer coordinates, W denotes the pixel coordinates around the cloud-layer coordinates, α is a weight, and d_M(L, P) is the Manhattan distance;
the cloud-layer coordinates carry the highest weight, while the surrounding pixels carry lower weights related to their Manhattan distance from the corresponding cloud-layer coordinates; different cloud images are classified by computing the weight values of the cloud-layer coordinates and of the pixels around the cloud layer.
5. The cloud classification method of claim 4, wherein step (3) includes:
the gray-level co-occurrence matrix is:
g(i, j) = #{ f(x₁, z₁) = i, f(x₂, z₂) = j | (x₁, z₁), (x₂, z₂) ∈ M × N } (4)
where g is the gray-level co-occurrence matrix; i and j are gray-level indices; f is the image; (x₁, z₁) and (x₂, z₂) are image coordinate points; M × N is the image range; # denotes the number of pixel pairs satisfying the condition;
the inverse difference moment is:
IDM = Σ_i Σ_j g(i, j) / (1 + (i − j)²) (5)
where IDM is the inverse difference moment.
6. The cloud cluster classification method according to claim 5, wherein in step (3), the cloud cluster classification results include blocky cloud, thin cloud, thick cloud, cirrus cloud and stratus cloud.
7. The cloud classification method as claimed in claim 5, wherein step (4) comprises:
an enhanced difference image E(x, y, t) is obtained as:
E(x, y, t) = M(x, y, t) · D(x, y, t) (6)
the enhanced difference image is binarized to update the motion history image D(x, y, t):
D(x, y, t) = 1, if E(x, y, t) ≥ ξ; otherwise D(x, y, t) = 0 (7)
the motion history image MHI(x, y, t) is then updated to obtain the new motion history image:
MHI(x, y, t) = τ, if D(x, y, t) = 1; otherwise MHI(x, y, t) = max(0, MHI(x, y, t−1) − δ) (8)
8. A cloud evolution trend prediction method, characterized by comprising the following steps:
(1) obtaining a cloud cluster classification result by the cloud cluster classification method according to any one of claims 1 to 7;
(2) constructing a MotionGRU unit capable of capturing transient changes and accumulating motion trends;
(3) embedding the MotionGRU units between the layers of an existing GRU prediction model, and adding MotionHighway connections between the layers, to obtain an enhanced GRU model;
(4) predicting the cloud evolution trend using the enhanced GRU model.
9. The cloud evolution trend prediction method according to claim 8, wherein in the step (2), the design process of the MotionGRU unit is as follows:
wherein F′_t represents the current transient variable and Transient(·) is the transient learner, whose inputs are the transient variable captured by the motion filter at the previous moment and the input of the previous prediction unit; t represents the time step; l ∈ {1, …, L} represents the current layer; L represents the upper layer limit;
the current trend momentum is produced by Trend(·), the trend momentum updater, from the trend momentum at the previous moment;
F_t^l is the motion filter, representing the transition of each pixel location between adjacent states;
the position filter is computed with Broadcast(·), a point-multiplication operation; σ is the sigmoid function; W_hm represents the weight of the prediction unit; ∗ represents the convolution operator;
H′_t indicates the position information after image mapping; ⊙ indicates the Hadamard product; Warp(·) represents the image-mapping operation;
g_t represents the update gate; W_1×1 represents the input weight; Concat(·) represents the concatenation operation; Dec(·) represents the decoder; the coordinate position information at time t is denoted by the corresponding variable;
the output cloud-layer prediction information and the coordinate position information at time t − 1 are denoted likewise.
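The two MotionGRU ingredients of this claim can be sketched with a toy step: a transient learner that refreshes the motion filter from the previous filter and the incoming state, and a trend-momentum updater that accumulates motion trends over time. The linear forms and the value of `alpha` are illustrative assumptions, not the patent's exact Transient(·) and Trend(·) operators.

```python
import numpy as np

def motion_gru_step(f_prev, d_prev, h_in, alpha=0.5):
    """Toy sketch of one MotionGRU step. The averaging form of the
    transient learner and the exponential trend accumulation are
    assumptions standing in for the patent's learned operators."""
    # Transient(.): combine the previous motion filter with the new input
    f_curr = 0.5 * f_prev + 0.5 * h_in
    # Trend(.): accumulate trend momentum toward the previous filter
    d_curr = d_prev + alpha * (f_prev - d_prev)
    return f_curr, d_curr

f, d = np.zeros(4), np.zeros(4)
for h in [np.ones(4), 2 * np.ones(4), 3 * np.ones(4)]:
    f, d = motion_gru_step(f, d, h)
```

The transient variable tracks fast frame-to-frame changes, while the trend momentum lags behind it, accumulating the slower overall motion trend, which is the division of labour the claim describes.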
10. The cloud evolution trend prediction method according to claim 9, wherein in step (3), a MotionGRU unit is embedded between the layers of an existing GRU prediction model and MotionHighway connections are added between the layers, obtaining the enhanced GRU model, with the following formulas:
wherein the input of layer l at time t, the hidden state of the prediction unit, and the trend accumulation at time t are denoted by the corresponding variables;
equation (18) represents MotionHighway; out_t is the output gate of the prediction unit.
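The MotionHighway connection of eq. (18) can be sketched as a gated skip connection: an output gate blends the prediction unit's hidden state with the layer input so that static context can bypass the motion pathway. The sigmoid-gated blending form is an assumption modelled on highway-style connections, not the patent's exact formula.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def motion_highway(h_pred, x_in, gate_logits):
    """Sketch of a MotionHighway-style connection: out_t gates how much
    of the prediction unit's hidden state passes through versus how
    much of the layer input is carried forward unchanged. The exact
    gating form is an assumption, not taken from the patent."""
    out_t = sigmoid(gate_logits)                   # output gate in (0, 1)
    return out_t * h_pred + (1.0 - out_t) * x_in   # gated skip connection

h = np.array([1.0, 1.0])   # hidden state of the prediction unit
x = np.array([0.0, 2.0])   # layer input carried by the highway
y = motion_highway(h, x, gate_logits=np.array([0.0, 0.0]))
```

With zero gate logits the gate is 0.5, so the output is the midpoint of the hidden state and the carried input; training would push the gate toward whichever path is more informative.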
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311120826.7A CN116994065B (en) | 2023-08-31 | 2023-08-31 | Cloud cluster classification and cloud evolution trend prediction method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116994065A true CN116994065A (en) | 2023-11-03 |
CN116994065B CN116994065B (en) | 2024-06-14 |
Family
ID=88528473
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311120826.7A Active CN116994065B (en) | 2023-08-31 | 2023-08-31 | Cloud cluster classification and cloud evolution trend prediction method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116994065B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000207569A (en) * | 1999-01-08 | 2000-07-28 | Fuji Photo Film Co Ltd | Image data converting method, image parts conversion method, recording medium recording image data converting program and recording medium recording image parts conversion program |
CN104766347A (en) * | 2015-04-29 | 2015-07-08 | 上海电气集团股份有限公司 | Cloud cluster movement prediction method based on foundation cloud chart |
US20190057588A1 (en) * | 2017-08-17 | 2019-02-21 | Bossa Nova Robotics Ip, Inc. | Robust Motion Filtering for Real-time Video Surveillance |
CN110705412A (en) * | 2019-09-24 | 2020-01-17 | 北京工商大学 | Video target detection method based on motion history image |
CN113628252A (en) * | 2021-08-13 | 2021-11-09 | 北京理工大学 | Method for detecting gas cloud cluster leakage based on thermal imaging video |
CN114565854A (en) * | 2022-04-29 | 2022-05-31 | 河北冀云气象技术服务有限责任公司 | Intelligent image cloud identification system and method |
CN114692720A (en) * | 2022-02-25 | 2022-07-01 | 广州文远知行科技有限公司 | Image classification method, device, equipment and storage medium based on aerial view |
CN115240149A (en) * | 2021-04-25 | 2022-10-25 | 株洲中车时代电气股份有限公司 | Three-dimensional point cloud detection and identification method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN116994065B (en) | 2024-06-14 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |