CN111738327A - Ultra-short-term irradiation prediction method based on typical cloud shielding irradiation difference - Google Patents


Info

Publication number
CN111738327A
CN111738327A (application CN202010558199.5A; granted as CN111738327B)
Authority
CN
China
Prior art keywords
cloud
irradiation
sub
image
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010558199.5A
Other languages
Chinese (zh)
Other versions
CN111738327B (en)
Inventor
邵玺
张臻
伍敏燕
徐国安
杜聚鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changzhou Campus of Hohai University
Original Assignee
Changzhou Campus of Hohai University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changzhou Campus of Hohai University filed Critical Changzhou Campus of Hohai University
Priority to CN202010558199.5A priority Critical patent/CN111738327B/en
Publication of CN111738327A publication Critical patent/CN111738327A/en
Application granted granted Critical
Publication of CN111738327B publication Critical patent/CN111738327B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G: Physics
    • G06: Computing; calculating or counting
    • G06F: Electric digital data processing
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; extraction of features in feature space; blind source separation
    • G06F18/214: Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G: Physics
    • G06: Computing; calculating or counting
    • G06F: Electric digital data processing
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/23: Clustering techniques
    • G06F18/232: Non-hierarchical techniques
    • G06F18/2321: Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213: Non-hierarchical techniques with a fixed number of clusters, e.g. K-means clustering
    • G: Physics
    • G06: Computing; calculating or counting
    • G06F: Electric digital data processing
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411: Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G: Physics
    • G06N: Computing arrangements based on specific computational models
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G06N3/084: Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an ultra-short-term irradiation prediction method based on typical cloud-layer shielding irradiation differences. The method acquires a ground-based cloud image and irradiation data from the same observation point in real time; obtains sub-images from the cloud image and classifies them with a sub-image classification model; extracts feature data from the cloud image; and selects a pre-trained prediction model according to the sub-image classification result, forms the input quantity from the irradiation data and the extracted feature data, feeds it into the prediction model, and outputs an ultra-short-term irradiation prediction value. Advantages: the method accounts for the differences in irradiation shielding among different cloud types under real sky conditions, effectively and quantitatively predicts the attenuation of solar irradiation caused by cloud cover, and offers good generality and accuracy in practical application scenarios.

Description

Ultra-short-term irradiation prediction method based on typical cloud shielding irradiation difference
Technical Field
The invention relates to an ultra-short-term irradiation prediction method based on typical cloud-shielding irradiation differences, and belongs to the technical field of solar irradiation and multi-model control of nonlinear photovoltaic power generation systems.
Background
With the rapid growth of grid-connected photovoltaic capacity, the random fluctuation of photovoltaic output poses great challenges to power-grid operation; in particular, the sharp drops in irradiation caused by cloud motion on a time scale of tens of minutes have drawn industrial attention in recent years. To address this problem, researchers predict future irradiation fluctuations by processing and analyzing feature data extracted from all-sky ground-based cloud images. Current research mainly uses a grid cloud-index method to predict the reduction coefficient that future cloud cover applies to ideal clear-sky irradiation, and thereby obtains an irradiation prediction value. This method assumes the cloud layer is a single-layer homogeneous structure, which differs greatly from real clouds, so the irradiation prediction carries large errors; and because the differences in irradiation shielding among different cloud types are not considered, the model generalizes poorly across cloud types.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing an ultra-short-term irradiation prediction method based on typical cloud-layer shielding irradiation differences, which quantitatively predicts the attenuation of solar irradiation caused by cloud cover by accounting for the differences in irradiation shielding among cloud types under real sky conditions.
To solve the above technical problems, the invention provides an ultra-short-term irradiation prediction method based on typical cloud-shielding irradiation differences, which:
acquires a real-time ground-based cloud image and irradiation data from the same observation point;
obtains sub-images from the cloud image and classifies them with a sub-image classification model;
extracts feature data from the cloud image;
and selects a pre-trained prediction model according to the sub-image classification result, forms the input quantity from the irradiation data and the extracted feature data, feeds it into the prediction model, and outputs an ultra-short-term irradiation prediction value.
Further, the sub-images are obtained by dividing the cloud image into equal-sized blocks on a square grid.
Further, the sub-image classification model is built on a support vector machine, and its construction comprises: labeling the sub-images, training the sub-image classification model, and classifying sub-images in real time;
the sub-image labeling comprises: dividing previously accumulated ground-based cloud images into blocks to obtain a number of sub-images, and assigning each sub-image a label value for the cloud type it belongs to; the cloud types comprise: no cloud, cirrus, cirrocumulus, altocumulus, cumulus and stratus, corresponding to cloud-type label values 0 to 5 respectively;
the training of the sub-image classification model comprises the following steps:
establishing a sub-image training sample set, the process of which comprises:
establishing the input quantity p = [ME_R, ME_B, SD_B, SK_B, D_ij, EN_B, CON_B, ENT_B, HOM_B, CC] for a single sub-image, and forming the training sample set T0 from the input quantities p and the sub-image type label values g as:
T0 = {(p_1, g_1), (p_2, g_2), ..., (p_n, g_n)}
where (p_1, g_1) is a single training sample composed of the input quantity of the first sub-image and its type label value, and n is the total number of sub-images participating in training;
average gray values of the R and B channels:
ME_i = (1/N)·Σ_{l=0}^{N−1} a_{li},  i ∈ {R, B}
standard deviation of the B channel, SD_B:
SD_B = [ (1/N)·Σ_{l=0}^{N−1} (a_{lB} − ME_B)² ]^{1/2}
skewness of the B channel, SK_B:
SK_B = (1/N)·Σ_{l=0}^{N−1} [ (a_{lB} − ME_B) / SD_B ]³
gray-scale deviation D_ij:
D_ij = ME_i − ME_j
where i, j ∈ {R, G, B}, a_{li} is the gray value of pixel l in channel i, l ∈ {0, …, N−1}, and N is the total number of pixels of a single sub-image;
energy of the B channel, EN_B:
EN_B = Σ_{a=0}^{M−1} Σ_{b=0}^{M−1} P_Δ(a, b)²
contrast of the B channel, CON_B:
CON_B = Σ_{a=0}^{M−1} Σ_{b=0}^{M−1} (a − b)²·P_Δ(a, b)
entropy of the B channel, ENT_B:
ENT_B = −Σ_{a=0}^{M−1} Σ_{b=0}^{M−1} P_Δ(a, b)·log P_Δ(a, b)
inverse variance (homogeneity) of the B channel, HOM_B:
HOM_B = Σ_{a=0}^{M−1} Σ_{b=0}^{M−1} P_Δ(a, b) / [1 + (a − b)²]
where M is the number of image gray levels and P_Δ(a, b) is the entry of the B-channel gray-level co-occurrence matrix for the gray-value pair (a, b) at pixel displacement Δ;
block cloud coverage: CC = N_cloud / N
where N_cloud is the number of cloud pixels in the single sub-image;
and the support vector machine partitions the sample data of the constructed sub-image input quantities and type label values in the training set T0 by decision boundaries, establishing the sub-image classification model.
Further, the process of extracting the feature data of the ground-based cloud image comprises:
calculating the total cloud-cover ratio in the cloud image to obtain a binary cloud map; and determining the direction of cloud motion, dividing grids along the direction opposite to the motion, and calculating the cloud-cover ratio inside each grid.
Further, the total cloud-cover ratio is calculated by determining the optimal threshold of the normalized red-blue ratio of the cloud image with a minimum cross-entropy method, thereby discriminating cloud from sky; the ratio of the resulting number of cloud pixels to the total number of cloud-image pixels is the total cloud-cover ratio C.
Further, the optimal threshold of the normalized red-blue ratio is TH = 0.01·t*, with
t* = argmin_t { −m(0, t−1)·log[u(0, t−1)] − m(t, L)·log[u(t, L)] }
where t is a histogram-interval sequence number and t* is the interval number at which the optimal threshold of the normalized red-blue ratio lies;
m(a, b) = Σ_{i=a}^{b} i·h(i),  u(a, b) = m(a, b) / Σ_{i=a}^{b} h(i)
where h(i) is the percentage value of histogram interval i and L is the total number of histogram intervals.
Further, cloud and sky are discriminated as follows:
the gray values of the cloud-image pixels are traversed and the normalized red-blue ratio of each pixel is computed; a pixel whose normalized red-blue ratio is smaller than the optimal threshold is judged a cloud pixel, otherwise a sky pixel.
Further, the process of determining the direction of cloud motion, dividing grids along the opposite direction, and calculating the cloud-cover ratio in each grid comprises:
tracking the motion of all pixels of the ground-based cloud image with an optical-flow method, and obtaining the representative cloud-motion velocity v_p by K-means clustering; starting from the current sun position, 5 grids are divided along the direction opposite to v_p, the side length of each grid being the pixel distance the cloud traverses in one minute, and the ratio of cloud pixels to the total pixels of each grid is calculated in turn from the binary cloud map as c_1, c_2, c_3, c_4, c_5.
Further, the determination of the input quantity comprises:
normalizing the input vector k = [I_{t−1}, I_{t−2}, I_{t−3}, I_{t−4}, I_{t−5}, C, c_1, c_2, c_3, c_4, c_5] to obtain the new input K_i = 100·(k_i − k_min)/(k_max − k_min), i = 1, 2, …, 11, where I_{t−1}, …, I_{t−5} are the measured solar irradiance values at the five instants before the current instant t, C is the total cloud-cover ratio (the ratio of cloud pixels to all cloud-image pixels), c_1, …, c_5 are the ratios of cloud pixels to total pixels in each grid, K_i is the i-th element of the new input K, k_i is the i-th element of the original input, and k_min, k_max are the smallest and largest elements of the original input.
Further, the construction of the prediction model comprises classifying the historical data and establishing training sample sets;
the historical data are classified by assigning the cloud images and irradiation data collected during irradiation-fluctuation periods to fluctuation types, where the fluctuation types are: irradiation attenuation ≥ 50% with duration longer than 5 minutes; irradiation attenuation ≥ 50% with duration shorter than 5 minutes; and irradiation attenuation ≤ 20% with duration shorter than 2 minutes;
the training sample sets are established as follows:
the first, second and third historical data sets are taken respectively, and after cloud-image feature extraction the first, second and third training sample sets T1, T2 and T3 are obtained;
neural-network training is performed on the three training sample sets respectively, yielding the prediction models M1, M2 and M3.
Further, the pre-trained prediction model is selected from the sub-image classification result as follows:
if the overall cloud-type label value is 0 or 1, model M1 is selected; if it is 2 or 3, model M2 is selected; if it is 4 or 5, model M3 is selected.
The invention achieves the following beneficial effects:
the method provided by the invention considers the difference of different types of cloud cover shielding irradiation under the real sky condition, effectively and quantitatively predicts the attenuation influence of solar irradiation caused by the cloud cover, and has good universality and accuracy in the practical application scene.
Drawings
Fig. 1 is a flow chart of the ultra-short-term irradiation prediction method of the present invention.
Detailed Description
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the embodiments described below are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in Fig. 1, the ultra-short-term irradiation prediction method based on typical cloud-shielding irradiation differences disclosed by the invention comprises the following steps:
(S1) acquiring a real-time ground-based cloud image and irradiation data from the same observation point;
(S2) obtaining sub-images from the ground-based cloud image and classifying them with the sub-image classification model;
(S3) extracting the features of the ground-based cloud image, determining the input quantity of the prediction model, and training the prediction model;
(S4) selecting a prediction model according to the sub-image classification result, feeding the feature quantity into the pre-trained prediction model, and outputting the ultra-short-term irradiation prediction value.
The method classifies and labels the ground-based cloud images and irradiation data sets, supplies input quantities to the prediction model by extracting features from the cloud-image data, and establishes the prediction model by machine learning to realize prediction of irradiance in the ultra-short term.
(I) Data classification and labeling
Ground-based cloud images and irradiation data from the same observation point are accumulated experimentally, with an acquisition step of 1 minute. The cloud images and irradiation data collected during irradiation-fluctuation periods are classified manually by fluctuation type into three classes: irradiation attenuation ≥ 50% with duration longer than 5 minutes; irradiation attenuation ≥ 50% with duration shorter than 5 minutes; and irradiation attenuation ≤ 20% with duration shorter than 2 minutes. This classification yields the cloud-image and irradiation data sets of three typical cloud-shielding irradiation differences used to build the neural networks.
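The three-way split above can be sketched as a small classifier. The thresholds come from the text; the function name and the `None` return for attenuation/duration combinations the text leaves open are our own illustrative choices:

```python
def fluctuation_class(attenuation_pct, duration_min):
    """Assign an irradiation-fluctuation episode to one of the three
    historical-data classes described in the text."""
    if attenuation_pct >= 50 and duration_min > 5:
        return 1   # deep attenuation, long duration
    if attenuation_pct >= 50 and duration_min < 5:
        return 2   # deep attenuation, short duration
    if attenuation_pct <= 20 and duration_min < 2:
        return 3   # shallow, brief attenuation
    return None    # combinations the text does not cover
```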
Then the ground-based cloud images collected during irradiation-fluctuation periods are divided into blocks to obtain sub-images of equal size, and cloud-type label values are assigned to the sub-images by manual observation. The cloud types comprise six typical classes: no cloud, cirrus, cirrocumulus, altocumulus (or scattered cumulus), cumulus (or cumulonimbus) and stratus, corresponding to cloud-type label values 0 to 5 respectively. No cloud and cirrus correspond to the fluctuation class with irradiation attenuation ≤ 20% and duration shorter than 2 minutes; cirrocumulus and altocumulus (or scattered cumulus) correspond to the class with irradiation attenuation ≥ 50% and duration shorter than 5 minutes; cumulus (or cumulonimbus) and stratus correspond to the class with irradiation attenuation ≥ 50% and duration longer than 5 minutes.
(II) Ground-based cloud-image feature extraction
1) Cloud cover characteristics
The total cloud-cover ratio in the cloud image is one of the important factors reflecting the overall degree to which clouds shield irradiation, so it is added to the prediction model as the cloud-cover feature. It is calculated by determining the optimal threshold of the normalized red-blue ratio of the cloud image with a minimum cross-entropy method, which discriminates cloud from sky.
The optimal threshold TH of the normalized red-blue ratio can be expressed as:
TH = 0.01·t*  (1)
t* = argmin_t { −m(0, t−1)·log[u(0, t−1)] − m(t, L)·log[u(t, L)] }  (2)
where t is a histogram-interval sequence number and t* is the interval number at which the optimal threshold lies;
m(a, b) = Σ_{i=a}^{b} i·h(i),  u(a, b) = m(a, b) / Σ_{i=a}^{b} h(i)  (3)
where h(i) is the percentage value of histogram interval i and L is the total number of histogram intervals;
the cloud-image pixels are traversed and the normalized red-blue ratio of each pixel is computed; a pixel whose normalized red-blue ratio is smaller than the optimal threshold is judged a cloud pixel, otherwise a sky pixel. The ratio of the number of pixels judged to be cloud to the total number of cloud-image pixels is the total cloud-cover ratio C.
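A minimal pure-Python sketch of this threshold selection and cloud-ratio computation, operating on a precomputed normalized red-blue-ratio histogram; skipping partitions with zero mass or zero first moment is our own guard, since the patent does not discuss those edge cases:

```python
import math

def min_cross_entropy_bin(hist):
    """Return the bin t* minimizing the cross-entropy criterion of
    formula (2); hist[i] is the fraction of pixels whose normalized
    red-blue ratio falls into bin i."""
    L = len(hist)
    best_t, best_val = None, float("inf")
    for t in range(1, L):
        n_lo, n_hi = sum(hist[:t]), sum(hist[t:])
        m_lo = sum(i * hist[i] for i in range(t))
        m_hi = sum(i * hist[i] for i in range(t, L))
        if min(n_lo, n_hi, m_lo, m_hi) <= 0:
            continue  # criterion undefined for empty/zero-moment sides
        val = -m_lo * math.log(m_lo / n_lo) - m_hi * math.log(m_hi / n_hi)
        if val < best_val:
            best_t, best_val = t, val
    return best_t

def total_cloud_ratio(ratios, th):
    # A pixel whose normalized red-blue ratio is below the threshold is cloud.
    return sum(1 for r in ratios if r < th) / len(ratios)
```

With a bimodal histogram, t* lands between the two clusters and TH = 0.01·t* then splits cloud from sky pixels.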
2) Motion features
Irregular cloud motion adds random uncertainty to the prediction model; adding motion features related to the future direction of cloud movement to the irradiation model improves its prediction reliability.
To judge the future direction of cloud motion, a frame-to-frame correlation analysis is performed on the previous and current ground-based cloud images. The motion of all cloud-image pixels can be tracked with a dense optical-flow method; the basic optical-flow constraint equation is given by formula (4), and by adding a smoothness constraint on the optical-flow field and minimizing the energy function E(u, v) of formula (5), the velocity vectors can be solved.
u·I_x + v·I_y + I_t = 0  (4)
E(u, v) = ∬ [ (u·I_x + v·I_y + I_t)² + α²·(|∇u|² + |∇v|²) ] dx dy  (5)
where I_x, I_y, I_t are the partial derivatives of pixel gray level along the x, y and t directions, obtainable from the image data; α is a smoothing weight coefficient, and the larger α is, the smoother the velocity field and the less prone it is to abrupt velocity changes; u and v are the pixel velocity components to be solved.
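The minimization of formula (5) is classically solved by the Horn-Schunck iteration. The sketch below (plain NumPy, simple finite-difference derivatives, wrap-around neighbor averaging) is an illustrative stand-in, not the patent's exact dense optical-flow implementation:

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=100):
    """Minimal Horn-Schunck iteration minimizing the energy of formula (5)
    between two gray-level frames I1 and I2."""
    I1 = np.asarray(I1, dtype=float)
    I2 = np.asarray(I2, dtype=float)
    Ix = np.gradient(I1, axis=1)   # dI/dx
    Iy = np.gradient(I1, axis=0)   # dI/dy
    It = I2 - I1                   # dI/dt between the two frames
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)

    def local_avg(f):
        # 4-neighbor average with wrap-around borders (a simplification)
        return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0

    for _ in range(n_iter):
        ub, vb = local_avg(u), local_avg(v)
        num = Ix * ub + Iy * vb + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = ub - Ix * num / den
        v = vb - Iy * num / den
    return u, v
```

For a linear intensity ramp translated by one pixel per frame, the iteration converges to a uniform flow field of one pixel per frame in the ramp direction.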
The dominant velocity of the global cloud-image velocity field obtained by dense optical flow is extracted with K-means clustering. Since a ground-based cloud image contains moving cloud pixels and a large number of static sky pixels, the number of cluster centers is set to 2, and the K-means steps are:
(1) randomly select the velocity data v_center1, v_center2 of 2 pixels from the velocity field as the initial cluster centers;
(2) compute the distance of each pixel to the two cluster centers: d1 = ||v − v_center1||², d2 = ||v − v_center2||²;
(3) assign each pixel to the cluster center nearest to it according to these distances;
(4) after the assignment, compute the mean velocity of the pixels in each cluster to obtain the new center data v′_1, v′_2; if (v′_1 − v_center1)² + (v′_2 − v_center2)² ≤ 1, end the iteration; otherwise assign v′_1, v′_2 to v_center1, v_center2 and return to step (2).
Through K-means clustering, the global pixels are divided into a moving-pixel class and a static-pixel class; the mean velocity of all pixels belonging to the moving-pixel class in the final clustering result is the representative cloud-motion velocity v_p.
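The clustering steps above can be sketched as follows; the initial centers are passed in explicitly so the example is deterministic (the patent selects them randomly), and the stopping rule is the squared-center-movement threshold from the text:

```python
def kmeans2(vels, c1, c2, max_iter=100):
    """Two-center K-means on (vx, vy) pixel velocities; stops when the
    combined squared center movement is at most 1."""
    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    def mean(pts, fallback):
        if not pts:
            return fallback
        return (sum(p[0] for p in pts) / len(pts),
                sum(p[1] for p in pts) / len(pts))

    for _ in range(max_iter):
        g1 = [v for v in vels if d2(v, c1) <= d2(v, c2)]
        g2 = [v for v in vels if d2(v, c1) > d2(v, c2)]
        n1, n2 = mean(g1, c1), mean(g2, c2)
        if d2(n1, c1) + d2(n2, c2) <= 1:
            return n1, n2
        c1, c2 = n1, n2
    return c1, c2

def representative_velocity(vels, c1, c2):
    # The cluster with the larger mean speed is the moving-pixel class;
    # its mean velocity is the representative cloud-motion velocity v_p.
    a, b = kmeans2(vels, c1, c2)
    return a if a[0] ** 2 + a[1] ** 2 > b[0] ** 2 + b[1] ** 2 else b
```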
After v_p is obtained, 5 grids are divided starting from the current sun position along the direction opposite to v_p, the side length of each grid being the pixel distance the cloud traverses in one minute. From the cloud/sky segmentation of 1), a grid cloud index is computed, defined as the ratio of cloud pixels in each grid to the total pixels of a single grid; it describes the degree of irradiation attenuation at future instants caused by cloud motion, and is recorded in order of distance from the sun, nearest first, as c_1, c_2, c_3, c_4, c_5.
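A sketch of the grid cloud-index computation on a binary cloud map. Axis-aligned square grids are an illustrative simplification (the patent does not specify the grid orientation), and the helper name is ours:

```python
import math

def grid_cloud_indices(cloud_mask, sun_xy, vp, n_grids=5):
    """cloud_mask: 2-D list of 0/1 (1 = cloud pixel); sun_xy: (x, y) sun
    position in pixels; vp: (vx, vy) representative velocity in
    pixels/minute. Squares of side s = |vp| are laid out from the sun
    opposite to the motion direction."""
    vx, vy = vp
    s = math.hypot(vx, vy)            # pixels traversed per minute
    ux, uy = -vx / s, -vy / s         # unit vector opposite the motion
    h, w = len(cloud_mask), len(cloud_mask[0])
    out = []
    for g in range(n_grids):
        # center of the g-th square, (g + 0.5)·s upstream of the sun
        cx = sun_xy[0] + (g + 0.5) * s * ux
        cy = sun_xy[1] + (g + 0.5) * s * uy
        x0, x1 = int(cx - s / 2), int(cx + s / 2)
        y0, y1 = int(cy - s / 2), int(cy + s / 2)
        tot = cloud = 0
        for y in range(max(0, y0), min(h, y1)):
            for x in range(max(0, x0), min(w, x1)):
                tot += 1
                cloud += cloud_mask[y][x]
        out.append(cloud / tot if tot else 0.0)
    return out
```

A cloud band one grid upstream of the sun shows up as c_2 = 1 with the other indices 0, i.e. an attenuation expected roughly two minutes ahead.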
(III) Training and selection of the prediction models
1) Normalization of input quantities
The input quantity of the prediction model is:
k = [I_{t−1}, I_{t−2}, I_{t−3}, I_{t−4}, I_{t−5}, C, c_1, c_2, c_3, c_4, c_5]  (6)
where I_{t−1}, …, I_{t−5} are the measured solar irradiance values at the five instants before the current instant t.
Because the elements of the input quantity have different dimensions and widely differing numeric ranges, normalization is needed to bring all elements into the range 0 to 100. The normalized new input is:
K_i = 100·(k_i − k_min)/(k_max − k_min),  i = 1, 2, …, 11  (7)
where K_i is the i-th element of the new input K, k_i is the i-th element of the original input, and k_min, k_max are the smallest and largest elements of the original input.
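Formula (7) can be applied directly; note that it takes k_min and k_max over the 11-element vector itself, so irradiance values and dimensionless ratios are scaled together, exactly as the formula states:

```python
def normalize_input(k):
    """Scale every element of the input vector into [0, 100] using the
    vector's own smallest and largest elements, per formula (7)."""
    kmin, kmax = min(k), max(k)
    return [100.0 * (ki - kmin) / (kmax - kmin) for ki in k]
```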
2) Predictive model training
The neural-network training sample set is established as follows: taking the first cloud-image and irradiation data set of (I), after cloud-image data processing and feature extraction the training sample set is obtained as:
T1 = {(K_{t_0}, I_{t_0+3}), (K_{t_0+1}, I_{t_0+4}), …}  (8)
where (K_{t_0}, I_{t_0+3}) is a single training sample formed by the new input quantity at instant t_0 and the irradiance value at instant t_0 + 3, and m is the total number of training samples. The training sample sets of the other two classes of cloud-image and irradiation data are established in the same way.
After training sample sets have been established for the three data classes, BP neural-network training is performed on each, yielding the prediction models M1, M2 and M3;
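The patent does not specify the BP network topology or hyper-parameters; the sketch below is a minimal one-hidden-layer backpropagation regressor in NumPy (tanh hidden units, linear output, full-batch gradient descent), standing in for whichever configuration the authors used:

```python
import numpy as np

def train_bp(X, y, hidden=8, lr=0.1, epochs=4000, seed=0):
    """Train a one-hidden-layer BP regression network on inputs X (n x d)
    and targets y (n,); returns a prediction function."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0.0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    t = y.reshape(-1, 1)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)            # hidden layer
        out = H @ W2 + b2                   # linear output layer
        err = (out - t) / n                 # gradient of 0.5 * mean sq. error
        dW2 = H.T @ err; db2 = err.sum(0)
        dH = (err @ W2.T) * (1.0 - H ** 2)  # backpropagate through tanh
        dW1 = X.T @ dH; db1 = dH.sum(0)
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1
    return lambda Xq: (np.tanh(Xq @ W1 + b1) @ W2 + b2).ravel()
```

In the patent's setting, X would hold the normalized inputs K and y the irradiance three minutes ahead, one (X, y) pair per fluctuation class.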
3) Prediction model selection
(1) Support vector machine block classification
The shapes and positions of the clouds in a cloud image are random and irregular; dividing the current cloud image globally into blocks effectively covers the cloud-type information of the current image.
A training sample set for the support vector machine is established; the input quantity corresponding to a single sub-image is:
p = [ME_R, ME_B, SD_B, SK_B, D_ij, EN_B, CON_B, ENT_B, HOM_B, CC]  (9)
The input quantities p and the sub-image type label values form the training sample set:
{(p_1, g_1), (p_2, g_2), …, (p_n, g_n)}  (10)
where (p_1, g_1) is a single training sample formed by the input quantity of the first sub-image and its type label value, and n is the total number of sub-images participating in training.
All elements of the input quantity p are computed over a single sub-image, as follows:
average gray values of the R and B channels:
ME_i = (1/N)·Σ_{l=0}^{N−1} a_{li},  i ∈ {R, B}  (11)
standard deviation of the B channel, SD_B:
SD_B = [ (1/N)·Σ_{l=0}^{N−1} (a_{lB} − ME_B)² ]^{1/2}  (12)
skewness of the B channel, SK_B:
SK_B = (1/N)·Σ_{l=0}^{N−1} [ (a_{lB} − ME_B) / SD_B ]³  (13)
gray-scale deviation D_ij:
D_ij = ME_i − ME_j  (14)
where i, j ∈ {R, G, B}, a_{li} is the gray value of pixel l in channel i, l ∈ {0, …, N−1}, and N is the total number of pixels of a single sub-image;
energy of the B channel, EN_B:
EN_B = Σ_{a=0}^{M−1} Σ_{b=0}^{M−1} P_Δ(a, b)²  (15)
contrast of the B channel, CON_B:
CON_B = Σ_{a=0}^{M−1} Σ_{b=0}^{M−1} (a − b)²·P_Δ(a, b)  (16)
entropy of the B channel, ENT_B:
ENT_B = −Σ_{a=0}^{M−1} Σ_{b=0}^{M−1} P_Δ(a, b)·log P_Δ(a, b)  (17)
inverse variance (homogeneity) of the B channel, HOM_B:
HOM_B = Σ_{a=0}^{M−1} Σ_{b=0}^{M−1} P_Δ(a, b) / [1 + (a − b)²]  (18)
where M is the number of image gray levels and P_Δ(a, b) is the entry of the B-channel gray-level co-occurrence matrix for the gray-value pair (a, b) at pixel displacement Δ;
block cloud coverage:
CC = N_cloud / N  (19)
where CC denotes the cloud coverage and N_cloud is the number of cloud pixels in the single sub-image.
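Formulas (15)-(18) can be checked with a small pure-Python co-occurrence computation; the natural logarithm and the single displacement Δ = (1, 0) are our own choices, since the patent fixes neither the log base nor Δ:

```python
import math

def glcm_features(img, levels, dx=1, dy=0):
    """Energy, contrast, entropy and homogeneity of the gray-level
    co-occurrence matrix P_Δ for displacement Δ = (dx, dy); img is a
    2-D list of gray levels in {0, ..., levels-1}."""
    h, w = len(img), len(img[0])
    P = [[0.0] * levels for _ in range(levels)]
    pairs = 0
    for y in range(h):
        for x in range(w):
            x2, y2 = x + dx, y + dy
            if 0 <= x2 < w and 0 <= y2 < h:
                P[img[y][x]][img[y2][x2]] += 1
                pairs += 1
    for a in range(levels):
        for b in range(levels):
            P[a][b] /= pairs   # normalize to co-occurrence probabilities
    rng = range(levels)
    en = sum(P[a][b] ** 2 for a in rng for b in rng)
    con = sum((a - b) ** 2 * P[a][b] for a in rng for b in rng)
    ent = -sum(P[a][b] * math.log(P[a][b]) for a in rng for b in rng
               if P[a][b] > 0)
    hom = sum(P[a][b] / (1 + (a - b) ** 2) for a in rng for b in rng)
    return en, con, ent, hom
```

A constant sub-image gives the degenerate values (energy 1, contrast 0, entropy 0, homogeneity 1), while an alternating pattern raises the contrast and lowers the homogeneity.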
After being trained on this sample set, the support vector machine classifies the blocks of the current cloud image, yielding the label values of all blocks of the current image.
(2) Prediction model selection by majority voting
The label matrix formed by the block label values obtained in (1) by support-vector-machine classification of the current cloud-image blocks is:
G = [ g_{1,1} … g_{1,W} ; … ; g_{H,1} … g_{H,W} ]  (20)
where g_{h,w} ∈ {0, 1, …, 5}, h ∈ {1, 2, …, H}, w ∈ {1, 2, …, W}, and H, W are the numbers of blocks into which the current cloud image is divided vertically and horizontally.
The label matrix is traversed and the mode of the label values is taken as the overall cloud-type label value of the current cloud image. If the overall label value is 0 or 1, model M1 is selected; if 2 or 3, model M2; if 4 or 5, model M3. After the appropriate prediction model is selected, the irradiance value three minutes ahead of the current instant can be predicted by extracting the new input K of the current cloud image and feeding it into the model.
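The mode-then-select step can be sketched directly; the dictionary mapping follows the selection rule stated in the text:

```python
from collections import Counter

def select_model(label_matrix):
    """Take the mode of all block labels as the overall cloud type and
    map it to one of the three pre-trained prediction models."""
    votes = Counter(g for row in label_matrix for g in row)
    overall = votes.most_common(1)[0][0]
    return {0: "M1", 1: "M1", 2: "M2", 3: "M2", 4: "M3", 5: "M3"}[overall]
```

Note that `Counter.most_common` breaks ties by insertion order; the patent does not say how a tie between label values should be resolved.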
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (11)

1. An ultra-short-term irradiation prediction method based on typical cloud shielding irradiation difference is characterized in that,
acquiring a real-time foundation cloud picture and irradiation data of the same observation point;
obtaining sub-images from the foundation cloud picture, and classifying the sub-images according to a sub-image classification model;
extracting feature data of the foundation cloud picture;
and selecting a pre-trained prediction model according to the classification result of the sub-images, determining the input quantity according to the irradiation data and the extracted feature data, inputting the input quantity into the prediction model, and outputting an ultra-short-term irradiation prediction value.
2. The ultra-short-term irradiation prediction method based on typical cloud shielding irradiation difference as claimed in claim 1, wherein the foundation cloud picture is divided into equal-sized sub-images by adopting a partitioning scheme based on square grids.
3. The ultra-short-term irradiation prediction method based on typical cloud shielding irradiation difference as claimed in claim 1, wherein the sub-image classification model is a model constructed based on a support vector machine, and the model construction comprises: labeling the sub-images, training the sub-image classification model, and classifying the sub-images in real time;
the sub-image labeling comprises: dividing the foundation cloud pictures accumulated in the early stage into blocks to obtain a plurality of sub-images, and assigning a label value to the cloud type to which each sub-image belongs; the cloud types include: no cloud, cirrus, altocumulus, cumulus and stratus, corresponding to cloud type label values of 0-5 respectively;
the training of the sub-image classification model comprises the following steps:
establishing a subimage training sample set, wherein the process of establishing the subimage training sample set comprises the following steps:
establishing the input quantity p = [ME_R, ME_B, SD_B, SK_B, D_ij, EN_B, CON_B, ENT_B, HOM_B, CC] corresponding to a single sub-image, and forming the training sample set T0 from the input quantity p and the sub-image type label value g as follows:
T0 = {(p_1, g_1), (p_2, g_2), ..., (p_n, g_n)}
wherein (p_1, g_1) is a single training sample consisting of the input quantity of the first sub-image and the type label value of the first sub-image, and n is the total number of sub-images participating in training;
average gray values ME_R and ME_B of the R channel and the B channel:
ME_i = (1/N) Σ_{l=0}^{N−1} a_li, i ∈ {R, B}
standard deviation SD_B of the B channel:
SD_B = [(1/N) Σ_{l=0}^{N−1} (a_lB − ME_B)²]^(1/2)
B channel skewness SK_B:
SK_B = (1/N) Σ_{l=0}^{N−1} [(a_lB − ME_B)/SD_B]³
gray-scale deviation D_ij:
D_ij = ME_i − ME_j
wherein i, j ∈ {R, G, B}, a_li represents the gray value of pixel l in channel i, l ∈ {0, ..., N−1}, and N represents the total number of pixels of a single sub-image;
B channel energy EN_B:
EN_B = Σ_{a=0}^{M−1} Σ_{b=0}^{M−1} P_Δ(a, b)²
B channel contrast CON_B:
CON_B = Σ_{a=0}^{M−1} Σ_{b=0}^{M−1} (a − b)² P_Δ(a, b)
B channel entropy ENT_B:
ENT_B = −Σ_{a=0}^{M−1} Σ_{b=0}^{M−1} P_Δ(a, b) log P_Δ(a, b)
B channel inverse variance HOM_B:
HOM_B = Σ_{a=0}^{M−1} Σ_{b=0}^{M−1} P_Δ(a, b)/[1 + (a − b)²]
wherein M represents the number of image gray levels, and P_Δ(a, b) represents the entry of the image gray-level co-occurrence matrix for the pixel pair with gray values a and b;
cloud-layer coverage of the block: CC = N_cloud/N
wherein N_cloud is the number of cloud pixel points in a single sub-image;
and performing decision-boundary partitioning of the sample-set data, namely the constructed sub-image input quantities and type label values in the sub-image training set T0, by a support vector machine, thereby establishing the sub-image classification model.
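As an illustration only (not part of the claims), the feature vector p of claim 3 can be sketched in Python. The GLCM displacement (0, 1), the 8-level gray quantization, and the availability of a precomputed binary cloud mask are assumptions not fixed by the claim:

```python
import numpy as np

def glcm_features(channel, levels=8, offset=(0, 1)):
    """Energy, contrast, entropy and inverse difference moment of one channel."""
    q = (channel.astype(np.float64) / 256 * levels).astype(int)  # quantize grays
    dy, dx = offset
    a = q[:q.shape[0] - dy, :q.shape[1] - dx].ravel()            # reference pixels
    b = q[dy:, dx:].ravel()                                      # displaced pixels
    P = np.zeros((levels, levels))
    np.add.at(P, (a, b), 1)
    P /= P.sum()                                   # normalized co-occurrence P_Δ
    i, j = np.indices(P.shape)
    EN = (P ** 2).sum()                            # energy
    CON = ((i - j) ** 2 * P).sum()                 # contrast
    ENT = -(P[P > 0] * np.log(P[P > 0])).sum()     # entropy
    HOM = (P / (1 + (i - j) ** 2)).sum()           # inverse difference moment
    return EN, CON, ENT, HOM

def subimage_features(rgb, cloud_mask):
    """Build p = [ME_R, ME_B, SD_B, SK_B, D_RB, EN_B, CON_B, ENT_B, HOM_B, CC]."""
    R, B = rgb[..., 0].astype(float), rgb[..., 2].astype(float)
    ME_R, ME_B = R.mean(), B.mean()
    SD_B = B.std()
    SK_B = ((B - ME_B) ** 3).mean() / SD_B ** 3 if SD_B > 0 else 0.0
    D_RB = ME_R - ME_B                             # gray-scale deviation D_ij, i=R, j=B
    EN_B, CON_B, ENT_B, HOM_B = glcm_features(rgb[..., 2])
    CC = cloud_mask.mean()                         # cloud coverage N_cloud / N
    return [ME_R, ME_B, SD_B, SK_B, D_RB, EN_B, CON_B, ENT_B, HOM_B, CC]
```

Feeding vectors built this way, together with their labels g, to any SVM implementation would reproduce the training set T0 of claim 3 under these assumptions.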
4. The ultra-short-term irradiation prediction method based on typical cloud shielding irradiation difference as claimed in claim 1, wherein the process of extracting feature data of the ground-based cloud image comprises:
binarizing the foundation cloud picture and calculating the total cloud amount ratio therein; and determining the cloud-layer motion direction, dividing grids along the reverse motion direction, and calculating the cloud amount ratio within each grid.
5. The ultra-short-term irradiation prediction method based on typical cloud shielding irradiation difference as claimed in claim 4, wherein the total cloud occupancy ratio is calculated by determining the optimal threshold of the normalized red-blue ratio of the cloud picture using the minimum cross-entropy method to distinguish cloud layer from sky, and taking the ratio of the total number of cloud-layer pixels to the total number of cloud-picture pixels as the total cloud occupancy ratio C.
6. The ultra-short-term irradiance prediction method based on typical cloud shielding irradiance difference as claimed in claim 5, wherein the optimal threshold of the normalized red-blue ratio is TH = 0.01·t*,
t*=arg min{-m(0,t-1)log[u(0,t-1)]-m(t,L)log[u(t,L)]}
Wherein t represents a sequence number of the histogram interval; t is t*The histogram interval sequence number of the optimal threshold value of the normalized red-blue ratio is located;
m(a, b) = Σ_{i=a}^{b} i·h(i), u(a, b) = m(a, b) / Σ_{i=a}^{b} h(i)
wherein h (i) is a percentage value of the histogram interval i, and L represents the total number of the histogram intervals.
7. The method of claim 5, wherein the method of determining the cloud and sky comprises:
traversing the gray values of the cloud-picture pixels and calculating the normalized red-blue ratio corresponding to each pixel; a pixel whose normalized red-blue ratio is smaller than the optimal threshold of the normalized red-blue ratio of the cloud picture is considered a cloud-layer pixel, and otherwise a sky pixel.
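A minimal sketch of this pixel classification follows. The normalized red-blue ratio is assumed here to be (B − R)/(B + R), a common choice for ground-based sky imagers; the patent text does not spell out the exact definition:

```python
import numpy as np

def cloud_mask(rgb, th):
    """Return a boolean mask: True where a pixel is classified as cloud."""
    R = rgb[..., 0].astype(float)
    B = rgb[..., 2].astype(float)
    nrbr = (B - R) / np.maximum(B + R, 1e-9)   # guard against division by zero
    return nrbr < th                           # below threshold -> cloud pixel
```

Gray/white cloud pixels have R ≈ B and hence a ratio near 0, while blue sky pixels have B ≫ R and a large ratio, which is why the below-threshold side is cloud.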
8. The method of claim 5, wherein the determining of the cloud-layer motion direction, the dividing of grids along the reverse motion direction, and the calculating of the cloud amount ratio in each grid comprise:
tracking and calculating the motion of all pixel points of the foundation cloud picture by an optical flow method, and obtaining a representative cloud-layer motion velocity v_p by K-means clustering; starting from the current sun position, dividing 5 grids along the direction opposite to v_p, wherein the grid side length is the pixel distance traversed by the cloud layer within one minute, and the ratios of the number of cloud-layer pixel points in each grid to the total pixel number of that grid are, in order, c_1, c_2, c_3, c_4, c_5.
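The per-grid cloud ratios c_1–c_5 of claim 8 can be sketched as below, assuming the optical-flow and K-means steps have already produced a representative velocity v_p in pixels per minute, and that the grids are axis-aligned squares centered on the reverse-motion line (the claim does not fix the grid orientation):

```python
import numpy as np

def grid_cloud_ratios(mask, sun_xy, v_p, n_grids=5):
    """Cloud-pixel ratio in n_grids square cells opposite to the cloud motion.

    mask: H x W boolean cloud mask; sun_xy: (x, y) sun position in pixels;
    v_p: (vx, vy) representative cloud velocity in pixels per minute.
    """
    speed = float(np.hypot(*v_p))
    if speed == 0:
        return [0.0] * n_grids                     # stationary clouds: undefined direction
    side = max(int(round(speed)), 1)               # grid side = distance per minute
    ux, uy = -v_p[0] / speed, -v_p[1] / speed      # unit vector opposite to motion
    ratios = []
    for k in range(n_grids):
        cx = sun_xy[0] + ux * side * (k + 0.5)     # center of the k-th grid
        cy = sun_xy[1] + uy * side * (k + 0.5)
        x0 = max(int(cx - side / 2), 0)            # clip to the image
        y0 = max(int(cy - side / 2), 0)
        cell = mask[y0:y0 + side, x0:x0 + side]
        ratios.append(float(cell.mean()) if cell.size else 0.0)
    return ratios
```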
9. The method of claim 1, wherein the input quantity determination process comprises:
normalizing the input vector k = [I_{t−1}, I_{t−2}, I_{t−3}, I_{t−4}, I_{t−5}, C, c_1, c_2, c_3, c_4, c_5] to obtain the new input K_i = 100(k_i − k_min)/(k_max − k_min), i = 1, 2, ..., 11, wherein I_{t−1}, ..., I_{t−5} are the measured solar irradiance values at the five moments before the current moment t, C is the total cloud amount ratio, namely the ratio of the total number of cloud-layer pixels to the total number of cloud-picture pixels, c_1, ..., c_5 are the ratios of the number of cloud-layer pixels in each grid to the total number of pixels in that grid, K_i represents the i-th element of the new input K, k_i represents the i-th element of the original input quantity, and k_min, k_max respectively represent the minimum-valued and maximum-valued elements of the original input quantity.
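The min-max normalization of claim 9 is a one-liner; this sketch adds only a guard for the degenerate constant-vector case, which the claim does not address:

```python
import numpy as np

def normalize_input(k):
    """K_i = 100 (k_i - k_min) / (k_max - k_min) over the 11-element input."""
    k = np.asarray(k, dtype=float)
    k_min, k_max = k.min(), k.max()
    if k_max == k_min:                 # constant input: avoid division by zero
        return np.zeros_like(k)
    return 100.0 * (k - k_min) / (k_max - k_min)
```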
10. The ultra-short-term irradiation prediction method based on typical cloud shielding irradiation difference as claimed in claim 1, wherein the construction of the prediction model comprises classification processing of historical data and establishment of a training sample set;
the classification processing of the historical data is to classify the cloud pictures and the irradiation data collected in the irradiation fluctuation time period according to the fluctuation types of the historical data; wherein the fluctuation types include: the irradiation attenuation degree is more than or equal to 50 percent, and the duration is more than 5 minutes; the irradiation attenuation degree is more than or equal to 50 percent and the duration is less than 5 minutes; the irradiation attenuation degree is less than or equal to 20 percent, and the duration is less than 2 minutes;
the establishing process of the training sample set comprises the following steps:
taking the first-type, second-type and third-type historical data sets respectively, and obtaining, after cloud picture feature extraction, a first-type training sample set T1, a second-type training sample set T2 and a third-type training sample set T3;
and respectively carrying out neural network training on the three types of training sample sets to obtain prediction models of M1, M2 and M3.
11. The method for ultra-short-term irradiation prediction based on typical cloud shielding irradiation difference as claimed in claim 10, wherein the selection criteria for selecting the pre-trained prediction model according to the classification result of the sub-images is as follows:
if the total type label value is 0 or 1, the M1 model is selected; if the total type label value is 2 or 3, the M2 model is selected; if the total type label value is 4 or 5, the M3 model is selected.
CN202010558199.5A 2020-06-18 2020-06-18 Ultrashort-term irradiation prediction method based on typical cloud shielding irradiation difference Active CN111738327B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010558199.5A CN111738327B (en) 2020-06-18 2020-06-18 Ultrashort-term irradiation prediction method based on typical cloud shielding irradiation difference

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010558199.5A CN111738327B (en) 2020-06-18 2020-06-18 Ultrashort-term irradiation prediction method based on typical cloud shielding irradiation difference

Publications (2)

Publication Number Publication Date
CN111738327A true CN111738327A (en) 2020-10-02
CN111738327B CN111738327B (en) 2022-09-13

Family

ID=72649766

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010558199.5A Active CN111738327B (en) 2020-06-18 2020-06-18 Ultrashort-term irradiation prediction method based on typical cloud shielding irradiation difference

Country Status (1)

Country Link
CN (1) CN111738327B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113886961A (en) * 2021-09-30 2022-01-04 中国科学院国家空间科学中心 Radiation effect calculation method, device and equipment based on spacecraft three-dimensional shielding

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107766990A (en) * 2017-11-10 2018-03-06 河海大学 A kind of Forecasting Methodology of photovoltaic power station power generation power
CN110516629A (en) * 2019-08-30 2019-11-29 河海大学常州校区 A kind of nutritious obesity and classification method based on more Cloud Layer Characters
CN110514298A (en) * 2019-08-30 2019-11-29 河海大学常州校区 A kind of solar irradiation strength calculation method based on ground cloud atlas


Also Published As

Publication number Publication date
CN111738327B (en) 2022-09-13

Similar Documents

Publication Publication Date Title
Si et al. Hybrid solar forecasting method using satellite visible images and modified convolutional neural networks
CN101334366B (en) Flotation recovery rate prediction method based on image characteristic analysis
CN111444939B (en) Small-scale equipment component detection method based on weak supervision cooperative learning in open scene of power field
Yang et al. ImgSensingNet: UAV vision guided aerial-ground air quality sensing system
CN112507793A (en) Ultra-short-term photovoltaic power prediction method
CN106651036A (en) Air quality forecasting system
CN112884742B (en) Multi-target real-time detection, identification and tracking method based on multi-algorithm fusion
CN111626128A (en) Improved YOLOv 3-based pedestrian detection method in orchard environment
CN112818969B (en) Knowledge distillation-based face pose estimation method and system
CN114266977B (en) Multi-AUV underwater target identification method based on super-resolution selectable network
CN110717408B (en) People flow counting method based on TOF camera
CN104320617A (en) All-weather video monitoring method based on deep learning
CN111368660A (en) Single-stage semi-supervised image human body target detection method
CN110427815B (en) Video processing method and device for realizing interception of effective contents of entrance guard
CN110751209A (en) Intelligent typhoon intensity determination method integrating depth image classification and retrieval
CN113064450B (en) Quantum particle swarm unmanned aerial vehicle path planning method based on annealing algorithm
CN115629160A (en) Air pollutant concentration prediction method and system based on space-time diagram
Ajith et al. Deep learning algorithms for very short term solar irradiance forecasting: A survey
CN111738327B (en) Ultrashort-term irradiation prediction method based on typical cloud shielding irradiation difference
CN113408550B (en) Intelligent weighing management system based on image processing
CN114882373A (en) Multi-feature fusion sandstorm prediction method based on deep neural network
CN110070023A (en) A kind of self-supervisory learning method and device based on sequence of motion recurrence
Pang et al. Federated Learning for Crowd Counting in Smart Surveillance Systems
CN113537137A (en) Escalator-oriented human body motion intrinsic feature extraction method and system
CN115470418B (en) Queuing point recommendation method and system based on unmanned aerial vehicle aerial photography

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant