CN111428676A - Short-term rainfall prediction method based on sparse correspondence and deep neural network - Google Patents

Short-term rainfall prediction method based on sparse correspondence and deep neural network Download PDF

Info

Publication number
CN111428676A
CN111428676A
Authority
CN
China
Prior art keywords
point
time
points
image
short
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010253414.0A
Other languages
Chinese (zh)
Other versions
CN111428676B (en)
Inventor
方巍
张飞鸿
易伟楠
庞林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology filed Critical Nanjing University of Information Science and Technology
Priority to CN202010253414.0A priority Critical patent/CN111428676B/en
Publication of CN111428676A publication Critical patent/CN111428676A/en
Application granted granted Critical
Publication of CN111428676B publication Critical patent/CN111428676B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/13Satellite images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/26Government or public services
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757Matching configurations of points or features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10044Radar image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30181Earth observation
    • G06T2207/30192Weather; Meteorology
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Tourism & Hospitality (AREA)
  • Evolutionary Computation (AREA)
  • Strategic Management (AREA)
  • Computing Systems (AREA)
  • Human Resources & Organizations (AREA)
  • Economics (AREA)
  • Software Systems (AREA)
  • Marketing (AREA)
  • General Engineering & Computer Science (AREA)
  • General Business, Economics & Management (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Development Economics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Astronomy & Astrophysics (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Databases & Information Systems (AREA)
  • Remote Sensing (AREA)
  • Educational Administration (AREA)
  • Primary Health Care (AREA)
  • Image Analysis (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention relates to a short-term rainfall prediction method based on sparse correspondence and a deep neural network, which specifically comprises the following steps: (1) image visualization; (2) Fast feature detection; (3) SIFT matching; (4) calculating a global vector; (5) predicting radar echo images and counting cloud cluster features and spatio-temporal direction features; (6) Inception v3 regression. The MAE and RMSE obtained by the method are lower than those of other deep-learning-based methods and the degree of fit is the highest; the problem caused by an overly small data set is overcome, the constructed Inception v3 model gives more accurate results than a CNN built in the conventional stacking manner, and the precipitation prediction is therefore more accurate and effective.

Description

Short-term rainfall prediction method based on sparse correspondence and deep neural network
Technical Field
The invention relates to the field of computer prediction, in particular to a short-term rainfall prediction method based on sparse correspondence and a deep neural network.
Background
At present, short-term rainfall prediction is a research hotspot in meteorology, and solving the prediction problem with radar echo images has become a mainstream approach. In radar meteorology, the strength of a weather target generally refers to its ability to backscatter radar waves; the common measures are the reflectivity and the reflectivity factor. The sum of the backscattering cross sections of the cloud and rain particles in a unit volume is called the reflectivity, and the sum of the 6th powers of the diameters of the precipitation particles in a unit volume of the precipitation target is called the radar reflectivity factor, with units of mm⁶/m³. For convenience, the reflectivity factor is usually log-transformed and described in dBZ: dBZ = 10·lg(Z/Z₀), where Z₀ = 1 mm⁶/m³.
Because the radar echo map is a CAPPI image at constant altitude, the detected reflectivity factor can be converted into a map of rainfall intensity by using the Marshall-Palmer relation, i.e. the radar echo-rainfall (Z-R) relation. Generally, the greater the reflectivity, the greater the amount of water in the cloud and the more likely precipitation is.
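As an illustration only, a minimal Python sketch of this reflectivity-to-rain-rate conversion is given below; the Marshall-Palmer coefficients a = 200 and b = 1.6, the function name and the example value are assumptions for demonstration and are not fixed by the patent.

    import numpy as np

    def dbz_to_rain_rate(dbz, a=200.0, b=1.6):
        # Invert dBZ = 10*lg(Z / Z0) with Z0 = 1 mm^6/m^3, then apply the
        # Marshall-Palmer style Z-R relation Z = a * R**b to get R in mm/h.
        z = 10.0 ** (np.asarray(dbz, dtype=float) / 10.0)
        return (z / a) ** (1.0 / b)

    # A 35 dBZ echo corresponds to roughly 5.6 mm/h under these coefficients.
    print(dbz_to_rain_rate(35.0))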
Existing systems predict precipitation by combining the radial velocity of the precipitation target along the radar beam with the echo intensity, and match the reflectivity factor against a threshold to judge the likelihood of rainfall, thereby estimating future precipitation. Conventional nowcasting relies on the optical flow method: the motion vectors of the cloud clusters are calculated by optical flow, and the echo image is extrapolated with a semi-Lagrangian scheme to obtain the forecast distribution. Deep learning, in contrast, improves from experience and data; as a specific type of machine learning it has strong capability and flexibility and can represent a complex concept as a nested hierarchy of concepts, and its advantages in handling meteorological problems have gradually been revealed. For example, Shi et al. trained ConvLSTM on a large number of samples and realized the distribution prediction of short-term radar echo images, with results more accurate than the real-time optical flow method ROVER; improvements on the conventional CNN, such as the dynamic CNN, have realized the distribution prediction of radar images and were compared with COC and DITREC. More recent work treats short-term precipitation prediction as a supervised learning problem that combines deep networks with conventional radar echo extrapolation.
The invention is inspired by the optical flow method and estimates the motion trajectory of the cloud cluster by combining Fast feature detection with SIFT matching, thereby overcoming the problem caused by an overly small data set. A regression operation is then carried out on these factors with a deep-learning Inception v3 model, fitting the multivariate relation between rainfall and the image data. The invention performs the description of the global motion vector and the regression operation separately, avoiding the under-fitting that often occurs in end-to-end deep learning on small data sets.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a short-term rainfall prediction method based on sparse correspondence and a deep neural network to solve the problem caused by too few data sets in the prior art and avoid the under-fitting condition frequently occurring on a small data set in end-to-end deep learning.
In order to solve the above technical problems, the technical scheme of the invention is as follows: a short-term rainfall prediction method based on sparse correspondence and a deep neural network is provided, whose innovation lies in that the method specifically comprises the following steps:
(1) image visualization: converting the reflectivity factor in the radar echo map into a gray image so as to visualize the meteorological target;
(2) Fast feature detection: performing feature detection on the visualized meteorological target image by using Fast;
(3) SIFT matching: after Fast feature point detection, describing and matching the feature points with SIFT and assigning each feature point a reference direction;
(4) calculating a global vector: calculating the moving speed and direction of each feature point in unit time based on the Taylor freezing hypothesis, and then drawing a global motion vector;
(5) predicting radar echo images, and counting cloud cluster characteristics and space-time direction characteristics: after the global vector is calculated, calculating a motion trajectory according to a Taylor freezing hypothesis and by utilizing an interpolation method to predict a radar echo image, and counting cloud cluster characteristics and space-time direction characteristics;
(6) Inception v3 regression: constructing an Inception v3 model, inputting the local radar image into the Inception v3 model to extract features, and finally obtaining the output, namely the predicted precipitation.
Further, the specific method for Fast feature detection in the step (2) is as follows:
A. performing a segmentation test on the pixels on a fixed radius of the visualized image obtained in step (1): the point to be measured is taken as the central pixel and the 16 surrounding pixels as the neighborhood gray values; using the spatial continuity of the image, the central pixel is compared with the four neighborhood points in the horizontal and vertical directions, and a point that meets the threshold judgment condition is a feature point, otherwise it is a non-feature point and is removed;
B. after a large number of non-feature points have been removed, the illumination degree of the remaining feature points is divided into three classes in turn and the total uncertainty is quantized with an entropy value; a Boolean variable is set for each feature point to indicate whether it is a corner point, and the uncertainty of the classification of the pixel set I is represented by the empirical entropy H(I):
H(I) = (p + p̄)·log₂(p + p̄) − p·log₂p − p̄·log₂p̄
where p denotes the number of corner points and p̄ the number of non-corner points, from which the information gain G = H(I) − H(w₁) − H(w₂) − H(w₃) is obtained; the gray pixel containing the maximum information gain is then selected for segmentation;
C. the corner response function is defined as
V = Σₗ |Iₗ − I₀|,
i.e. the sum of the absolute differences between a feature point and its neighborhood pixels is taken as the quantization value and compared with adjacent corner points, and corner points with smaller response values are removed.
Further, the condition formula for the threshold judgment in step A is as follows:
wₛ = 1 if Iₗ ≤ I₀ − t;  wₛ = 2 if I₀ − t < Iₗ < I₀ + t;  wₛ = 3 if Iₗ ≥ I₀ + t
where the parameter t is the threshold, Iₗ is the neighborhood gray value, I₀ is the central pixel value, and the class parameter wₛ, s ∈ {1, 2, 3}, denotes the luminance class obtained by comparing the neighborhood gray value with the center; when the number of neighborhood points falling into classes w₁ and w₃ totals 3 or more, the point is considered a feature point.
Further, the specific method for SIFT matching in step (3) is as follows: the gradient magnitude and direction of each feature point are calculated using the difference quotient in place of the derivative. The gradient magnitude is
m(x, y) = √[(L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²]
and the direction of the feature point is
θ(x, y) = arctan[(L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y))].
A descriptor is established for each feature point for the matching operation: the gradient information of the 4×4 neighborhood windows in the feature point scale space is used to construct a 128-dimensional vector feature, forming the SIFT feature description vector, which contains the hidden structural features of the local cloud; feature points of two adjacent radar echo frames are then matched according to the similarity of these description vectors.
Further, the method for calculating the global vector in step (4) is as follows: assuming that, within the same period, the observation of turbulence at a fixed point equals the observation at other points along the mean wind direction, the cloud cluster turbulence satisfies the Taylor frozen hypothesis over a short time, which closely resembles the spatial-consistency assumption of the optical flow method. The relative displacement of each feature point between the radar echo images at time t and time t+1 is calculated from the feature point matching, and since the motion speed of the cloud cluster remains unchanged within this period under the Taylor frozen hypothesis, the motion speed and direction of each feature point per unit time are calculated, and the global motion vector is then described.
Further, the specific method for predicting the radar echo image in step (5) is as follows: since the center of the target station is in most cases not a feature point, an interpolation method is needed to calculate the motion trajectory. Let F(x₀, t) be the echo intensity at point x₀ at time t; by the invariance of the mean convection velocity U (the Taylor frozen hypothesis),
∂F/∂t + U·∇F = 0,
so that
F(x₀, t + Δt) = F(x₀ − U·Δt, t),
where the velocity U is calculated by dividing the feature point offset by the time, i.e.
U = (xₜ₊₁ − xₜ)/Δt.
Nearest-neighbor extrapolation is then carried out as follows: the nearest feature point around the observation center is found by Euclidean distance and used as the velocity matching point; the point is then moved against the direction of the velocity vector at that moment, and after moving for a time Δt the velocity point is re-matched. This nearest-neighbor interpolation is repeated until the accumulated time reaches the lead time to be predicted; after the interpolation is finished, an image centered on the end of the trajectory and covering the observation range is cropped as the predicted radar echo image.
Further, the specific method for counting the cloud cluster characteristics and the space-time direction characteristics in the step (5) is as follows:
The cloud cluster features include the shape features and the wind speed direction: the shape features of the cloud cluster are obtained by quantizing the histogram of the SIFT descriptors, and the wind speed direction of the cloud cluster is obtained by dividing the relative displacement of the feature points by the time; the spatio-temporal direction features mainly include the cloud cluster coverage ratio and the maximum, mean and variance of the reflectivity, which are counted from the pixel gray values.
Further, the Inception v3 regression in step (6) specifically comprises the following steps:
A. constructing an Inception v3 model, wherein the Inception v3 model comprises three parallel modules and two full-connection layers, namely an FCN1 full-connection layer and an FCN2 full-connection layer; the first module comprises convolution layers of sizes 1×1 and 3×3 and a pooling layer, the second module comprises only a 1×1 convolution layer and a pooling layer, the third module comprises convolution layers of sizes 3×1 and 1×3 and a pooling layer, and all pooling layers adopt max pooling with a step size of 2;
B. inputting a local radar image into three parallel modules to extract features, and transmitting the features to an FCN1 full-connection layer after dimension splicing;
C. because regression differs from classification (classification is discrete while regression is continuous), the output of FCN1 is processed non-linearly with a ReLU function and then input into the FCN2 full-connection layer for fusion to obtain the output.
Further, in the Inception v3 regression step, a loss function is defined on the error between the network output and the sample label for training the model.
the cross validation is used as a training and validation strategy, and according to a measurement standard in linear regression, a mean square error RMSE, an average absolute error MAE and a fitting degree R-Squared are selected as standards for detecting the model score, wherein formulas of the square error RMSE, the average absolute error MAE and the fitting degree R-Squared are respectively as follows:
Figure BDA0002435292590000081
Figure BDA0002435292590000082
Figure BDA0002435292590000083
wherein the subscript of the output y has three expressions: train, label, test, respectively representing the output of the training process, sample label, and test output.
Compared with the prior art, the invention has the following beneficial effects:
the MAE and RMSE obtained by the method of combining FAST, SIFT and inclusion v3 based on the sparse correspondence and the deep neural network are lower than those obtained by other methods based on deep learning, the fitting degree is highest, the problem caused by the small number of data sets is solved, the result of constructing the inclusion v3 model is more accurate compared with the CNN in the traditional stacking mode, and the precipitation prediction result can be more accurate and effective.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments are briefly described below, and it is obvious that the drawings in the following description are only some embodiments described in the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a flow chart of a short-term rainfall prediction method based on sparse correspondences and a deep neural network.
FIG. 2 is a diagram illustrating neighborhood pixel selection.
FIG. 3 is a graph of the result of SIFT matching in Fast detection.
FIG. 4 is a schematic diagram of trajectory extrapolation.
FIG. 5 is a diagram of a regression model.
FIG. 6 is a table of vector operation time comparisons.
FIG. 7 is a comparative table of the experiments for predicting precipitation.
FIG. 8 shows the MAE, RMSE and R-Squared curves during training for the method combining FAST & SIFT and Inception v3.
FIG. 9 shows the MAE, RMSE and R-Squared curves during training for the method combining FAST & SIFT and CNN.
FIG. 10 shows the MAE, RMSE and R-Squared curves during training for ConvLSTM.
FIG. 11 shows the MAE, RMSE and R-Squared curves during training for the combination of LSTM and 3DCNN.
Detailed Description
The technical solution of the present invention will be clearly and completely described by the following detailed description.
The invention provides a short-term rainfall prediction method based on sparse correspondence and a deep neural network, a flow diagram of which is shown in figure 1, and the method specifically comprises the following steps:
(1) image visualization: converting the reflectivity factor in the radar echo map into a gray image so as to visualize the meteorological target;
(2) Fast feature detection: performing feature detection on the visualized meteorological target image by using Fast, wherein the specific method for Fast feature detection comprises the following steps:
A. a segmentation test is performed on the pixels on a fixed radius of the visualized image obtained in step (1). As shown in fig. 2, the point to be measured is taken as the central pixel and the 16 surrounding pixels as the neighborhood gray values; using the spatial continuity of the image, the central pixel is compared with the four neighborhood points in the horizontal and vertical directions, and a point that meets the threshold judgment condition is a feature point, otherwise it is a non-feature point and is removed. The threshold judgment condition is:
wₛ = 1 if Iₗ ≤ I₀ − t;  wₛ = 2 if I₀ − t < Iₗ < I₀ + t;  wₛ = 3 if Iₗ ≥ I₀ + t
where the parameter t is the threshold, Iₗ is the neighborhood gray value, I₀ is the central pixel value, and the class parameter wₛ, s ∈ {1, 2, 3}, denotes the luminance class obtained by comparing the neighborhood gray value with the center; when the number of neighborhood points falling into classes w₁ and w₃ totals 3 or more, the point is considered a feature point.
B. After a large number of non-feature points have been removed, the illumination degree of the remaining feature points is divided into three classes in turn and the total uncertainty is quantized with an entropy value; a Boolean variable is set for each feature point to indicate whether it is a corner point, and the uncertainty of the classification of the pixel set I is represented by the empirical entropy H(I):
H(I) = (p + p̄)·log₂(p + p̄) − p·log₂p − p̄·log₂p̄
where p denotes the number of corner points and p̄ the number of non-corner points, from which the information gain G = H(I) − H(w₁) − H(w₂) − H(w₃) is obtained; the gray pixel containing the maximum information gain is then selected for segmentation;
C. the corner response function is defined as
V = Σₗ |Iₗ − I₀|,
i.e. the sum of the absolute differences between a feature point and its neighborhood pixels is taken as the quantization value and compared with adjacent corner points, and corner points with smaller response values are removed.
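For illustration, the segment test of step (2) can be approximated with OpenCV's built-in FAST detector as in the following sketch; the threshold value of 20 is a placeholder, and the entropy-based splitting and custom corner response described above are specific to the patent and are not reproduced here.

    import cv2

    def detect_fast_keypoints(grey_echo_image, threshold=20):
        # Standard FAST segment test on the 16-pixel circle, with
        # non-maximum suppression over the corner response.
        fast = cv2.FastFeatureDetector_create(threshold=threshold,
                                              nonmaxSuppression=True)
        return fast.detect(grey_echo_image, None)

    # Usage (hypothetical file name):
    # kp = detect_fast_keypoints(cv2.imread("echo_t0.png", cv2.IMREAD_GRAYSCALE))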
(3) SIFT matching: after Fast feature point detection, the feature points are described and matched with SIFT, and each feature point is assigned a reference direction. The specific method is as follows: the gradient magnitude and direction of each feature point are calculated using the difference quotient in place of the derivative. The gradient magnitude is
m(x, y) = √[(L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²]
and the direction of the feature point is
θ(x, y) = arctan[(L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y))].
A histogram is used to count the gradients and directions of the pixels in the neighborhood, and a descriptor is established for each feature point for the matching operation: the gradient information of the 4×4 neighborhood windows in the feature point scale space is used to construct a 128-dimensional vector feature, forming the SIFT feature description vector, which contains the hidden structural features of the local cloud. Feature points of two adjacent radar echo frames are then matched according to the similarity of these description vectors, as shown in FIG. 3.
(4) Calculating a global vector: the motion speed and direction of each feature point per unit time are calculated based on the Taylor frozen hypothesis, and the global motion vector is then described. Assuming that, within the same period, the observation of turbulence at a fixed point equals the observation at other points along the mean wind direction, the cloud cluster turbulence satisfies the Taylor frozen hypothesis over a short time, which closely resembles the spatial-consistency assumption of the optical flow method. The relative displacement of each feature point between the radar echo images at time t and time t+1 is calculated from the feature point matching, and since the motion speed of the cloud cluster remains unchanged within this period under the Taylor frozen hypothesis, the speed and direction of each feature point per unit time follow directly, from which the global motion vector is drawn.
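A sketch of this step is shown below, reusing the keypoints and matches from the matching sketch above; under the frozen-flow assumption each matched displacement divided by the frame interval dt gives a constant velocity, and the collection of (position, velocity) pairs forms the sparse global motion field.

    import numpy as np

    def global_motion_vectors(kp0, kp1, matches, dt):
        vectors = []
        for m in matches:
            p0 = np.array(kp0[m.queryIdx].pt)
            p1 = np.array(kp1[m.trainIdx].pt)
            # velocity in pixels per unit time, assumed constant over dt
            vectors.append((p0, (p1 - p0) / dt))
        return vectors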
(5) Predicting radar echo images, and counting cloud cluster characteristics and space-time direction characteristics: and after the global vector is calculated, calculating a motion trajectory according to the Taylor freezing hypothesis and by using an interpolation method to predict the radar echo image, and counting the cloud cluster characteristics and the space-time direction characteristics.
The specific method for predicting the radar echo image is as follows: since the center of the target station is in most cases not a feature point, an interpolation method is needed to calculate the motion trajectory. As shown in fig. 4, the goal is to predict the precipitation at the center coordinate of the area covered by the circular frame 1-2 hours into the future. Let F(x₀, t) be the echo intensity at point x₀ at time t; by the invariance of the mean convection velocity U (the Taylor frozen hypothesis),
∂F/∂t + U·∇F = 0,
so that
F(x₀, t + Δt) = F(x₀ − U·Δt, t),
where the velocity U is calculated by dividing the feature point offset by the time, i.e.
U = (xₜ₊₁ − xₜ)/Δt.
Nearest-neighbor extrapolation is then carried out as follows: the nearest feature point around the observation center is found by Euclidean distance and used as the velocity matching point; the point is then moved against the direction of the velocity vector at that moment, and after moving for a time Δt the velocity point is re-matched. This nearest-neighbor interpolation is repeated until the accumulated time reaches the lead time to be predicted, which in this method is 1.5 hours. After the interpolation is finished, an image centered on the end of the trajectory and covering the observation range is cropped as the predicted radar echo image.
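The backward trajectory can be sketched as follows; vectors is the output of the global-motion sketch above, and the crop radius standing in for the observation range is an assumed value.

    import numpy as np

    def extrapolate_echo(echo_t, center, vectors, dt, lead_time, crop_radius=50):
        pos = np.array(center, dtype=float)
        elapsed = 0.0
        while elapsed < lead_time:
            # nearest feature point (Euclidean distance) supplies the local velocity
            dists = [np.linalg.norm(pos - p) for p, _ in vectors]
            _, vel = vectors[int(np.argmin(dists))]
            pos -= vel * dt          # step against the motion vector
            elapsed += dt
        x, y = int(round(pos[0])), int(round(pos[1]))
        # crop the patch around the trajectory end point as the predicted echo
        return echo_t[max(0, y - crop_radius): y + crop_radius,
                      max(0, x - crop_radius): x + crop_radius]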
The specific method for counting the cloud cluster features and the spatio-temporal direction features is as follows: the cloud cluster features include the shape features and the wind speed direction; the shape features of the cloud cluster are obtained by quantizing the histogram of the SIFT descriptors, and the wind speed direction of the cloud cluster is obtained by dividing the relative displacement of the feature points by the time. The spatio-temporal direction features mainly include the cloud cluster coverage ratio and the maximum, mean and variance of the reflectivity, which are counted from the pixel gray values.
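A sketch of these statistics is given below; descriptors are the SIFT descriptors and velocities the per-point velocity vectors from the earlier sketches, and the cloud threshold echo_floor is an assumption, since the patent only states that the statistics are taken from the gray values.

    import numpy as np

    def cloud_statistics(grey_patch, descriptors, velocities, echo_floor=10):
        cloud_mask = grey_patch > echo_floor
        mean_v = np.mean(np.asarray(velocities), axis=0)
        return {
            "coverage":   float(cloud_mask.mean()),
            "refl_max":   float(grey_patch.max()),
            "refl_mean":  float(grey_patch.mean()),
            "refl_var":   float(grey_patch.var()),
            # shape feature: aggregated histogram of the SIFT descriptors
            "shape_hist": np.asarray(descriptors).mean(axis=0),
            # wind direction: angle of the mean velocity vector
            "wind_dir":   float(np.arctan2(mean_v[1], mean_v[0])),
        }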
(6) Inception v3 regression: an Inception v3 model is constructed, the local radar image is input into the Inception v3 model to extract features, and the output, namely the predicted precipitation, is finally obtained. The Inception v3 regression specifically comprises the following steps:
A. constructing an Inception v3 model, as shown in fig. 5; the Inception v3 model comprises three parallel modules and two full-connection layers, namely an FCN1 full-connection layer and an FCN2 full-connection layer. The first module comprises convolution layers of sizes 1×1 and 3×3 and a pooling layer, the second module comprises only a 1×1 convolution layer and a pooling layer, the third module comprises convolution layers of sizes 3×1 and 1×3 and a pooling layer, and all pooling layers adopt max pooling with a step size of 2;
B. inputting a local radar image into three parallel modules to extract features, and transmitting the features to an FCN1 full-connection layer after dimension splicing;
C. because regression differs from classification (classification is discrete while regression is continuous), the output of FCN1 is processed non-linearly with a ReLU function and then input into the FCN2 full-connection layer for fusion, yielding the output, namely the predicted precipitation.
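A minimal Keras sketch of this three-branch regression network is given below; the filter counts, input size and FCN widths are not specified in the patent and are illustrative assumptions.

    from tensorflow.keras import layers, models

    def build_regression_model(input_shape=(64, 64, 1), filters=32, fcn1_units=128):
        inp = layers.Input(shape=input_shape)
        # branch 1: 1x1 and 3x3 convolutions followed by stride-2 max pooling
        b1 = layers.Conv2D(filters, 1, padding="same", activation="relu")(inp)
        b1 = layers.Conv2D(filters, 3, padding="same", activation="relu")(b1)
        b1 = layers.MaxPooling2D(pool_size=2, strides=2)(b1)
        # branch 2: 1x1 convolution and pooling only
        b2 = layers.Conv2D(filters, 1, padding="same", activation="relu")(inp)
        b2 = layers.MaxPooling2D(pool_size=2, strides=2)(b2)
        # branch 3: 3x1 and 1x3 convolutions and pooling
        b3 = layers.Conv2D(filters, (3, 1), padding="same", activation="relu")(inp)
        b3 = layers.Conv2D(filters, (1, 3), padding="same", activation="relu")(b3)
        b3 = layers.MaxPooling2D(pool_size=2, strides=2)(b3)
        x = layers.Concatenate()([b1, b2, b3])   # dimension splicing
        x = layers.Flatten()(x)
        x = layers.Dense(fcn1_units)(x)          # FCN1
        x = layers.ReLU()(x)                     # non-linear processing of the FCN1 output
        out = layers.Dense(1)(x)                 # FCN2: predicted precipitation
        return models.Model(inp, out)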
In the Inception v3 regression step, a loss function is defined on the error between the network output and the sample label for training the model.
the cross validation is used as a training and validation strategy, and according to a measurement standard in linear regression, a mean square error RMSE, an average absolute error MAE and a fitting degree R-Squared are selected as standards for detecting the model score, wherein formulas of the square error RMSE, the average absolute error MAE and the fitting degree R-Squared are respectively as follows:
Figure BDA0002435292590000142
Figure BDA0002435292590000143
Figure BDA0002435292590000144
wherein the subscript of the output y takes three forms, train, label and test, representing respectively the output of the training process, the sample label and the test output; in the training strategy of the invention, the Adam algorithm is used to optimize the gradient.
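For clarity, the three evaluation metrics can be written out as in the following sketch.

    import numpy as np

    def rmse(y_test, y_label):
        y_test, y_label = np.asarray(y_test, float), np.asarray(y_label, float)
        return float(np.sqrt(np.mean((y_test - y_label) ** 2)))

    def mae(y_test, y_label):
        y_test, y_label = np.asarray(y_test, float), np.asarray(y_label, float)
        return float(np.mean(np.abs(y_test - y_label)))

    def r_squared(y_test, y_label):
        y_test, y_label = np.asarray(y_test, float), np.asarray(y_label, float)
        ss_res = np.sum((y_test - y_label) ** 2)
        ss_tot = np.sum((y_label - y_label.mean()) ** 2)
        return float(1.0 - ss_res / ss_tot)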
The performance of the short-term rainfall prediction method based on sparse correspondence and a deep neural network was further compared as follows: the local images in a sample are stitched into a complete cloud cluster, the motion state is described with the Fast & SIFT method, and the cloud cluster structural features and spatio-temporal direction features are computed and stored.
According to the Taylor frozen hypothesis, the invention derives the motion trajectory with the nearest-neighbor interpolation method and crops the local radar image above each target site as the data set for training the neural network. The data samples are divided into 10 parts according to the cross-validation rule, and training and validation are carried out with a 9:1 ratio; after 2000 training iterations the final regression result is obtained. The comparison methods include ConvLSTM, 3DCNN + LSTM, and a CNN built in the conventional stacking manner. The cross-validation results for MAE, R-Squared and RMSE are shown in FIG. 7. The MAE reflects the true error of the model, and the closer it is to 0 the higher the accuracy; the RMSE measures the same quantity but is more sensitive to large deviations, so for the same data the RMSE is larger than the MAE, which in effect amplifies the error and makes it easier to observe, and a smaller RMSE is likewise better. R-Squared measures the degree of fit between the model output and the sample labels: the closer it is to 1, the better the fit.
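The 10-fold cross-validation loop described above can be sketched as follows; it assumes the build_regression_model sketch given earlier and scikit-learn's KFold, and mean squared error is used here only as a typical regression loss, since the patent's exact loss formula is not reproduced.

    import numpy as np
    from sklearn.model_selection import KFold

    def cross_validate(images, labels, epochs=2000):
        scores = []
        for train_idx, val_idx in KFold(n_splits=10, shuffle=True).split(images):
            model = build_regression_model(input_shape=images.shape[1:])
            model.compile(optimizer="adam", loss="mse", metrics=["mae"])
            model.fit(images[train_idx], labels[train_idx],
                      validation_data=(images[val_idx], labels[val_idx]),
                      epochs=epochs, verbose=0)
            scores.append(model.evaluate(images[val_idx], labels[val_idx], verbose=0))
        return scores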
The comparison results show that the MAE and RMSE obtained by the method combining FAST & SIFT with Inception v3 are lower than those of the other deep-learning-based methods, and the degree of fit is the highest.
The above embodiments merely describe preferred implementations of the invention and do not limit its concept and scope. Various modifications and improvements made to the technical solution of the invention by those skilled in the art without departing from its design concept shall fall within the protection scope of the invention; the claimed technical content of the invention is fully set forth in the claims.

Claims (9)

1. A short-term rainfall prediction method based on sparse correspondences and a deep neural network is characterized by comprising the following steps: the method specifically comprises the following steps:
(1) image visualization: converting the reflectivity factor in the radar echo map into a gray image so as to visualize the meteorological target;
(2) Fast feature detection: performing feature detection on the visualized meteorological target image by using Fast;
(3) SIFT matching, namely describing and matching by using SIFT after Fast feature point detection is carried out, and endowing each feature point with a reference direction;
(4) calculating a global vector: calculating the moving speed and direction of each feature point in unit time based on the Taylor freezing hypothesis, and then drawing a global motion vector;
(5) predicting radar echo images, and counting cloud cluster characteristics and space-time direction characteristics: after the global vector is calculated, calculating a motion trajectory according to a Taylor freezing hypothesis and by utilizing an interpolation method to predict a radar echo image, and counting cloud cluster characteristics and space-time direction characteristics;
(6) Inception v3 regression: constructing an Inception v3 model, inputting the local radar image into the Inception v3 model to extract features, and finally obtaining the output, namely the predicted precipitation.
2. The method for predicting the short-term rainfall based on the sparse correspondence and the deep neural network according to claim 1, wherein the method comprises the following steps: the specific method for detecting Fast characteristics in the step (2) is as follows:
A. performing a segmentation test on the pixels on a fixed radius of the visualized image obtained in step (1): the point to be measured is taken as the central pixel and the 16 surrounding pixels as the neighborhood gray values; using the spatial continuity of the image, the central pixel is compared with the four neighborhood points in the horizontal and vertical directions, and a point that meets the threshold judgment condition is a feature point, otherwise it is a non-feature point and is removed;
B. after a large number of non-feature points have been removed, the illumination degree of the remaining feature points is divided into three classes in turn and the total uncertainty is quantized with an entropy value; a Boolean variable is set for each feature point to indicate whether it is a corner point, and the uncertainty of the classification of the pixel set I is represented by the empirical entropy H(I):
H(I) = (p + p̄)·log₂(p + p̄) − p·log₂p − p̄·log₂p̄
where p denotes the number of corner points and p̄ the number of non-corner points, from which the information gain G = H(I) − H(w₁) − H(w₂) − H(w₃) is obtained; the gray pixel containing the maximum information gain is then selected for segmentation;
C. the corner response function is defined as
V = Σₗ |Iₗ − I₀|,
i.e. the sum of the absolute differences between a feature point and its neighborhood pixels is taken as the quantization value and compared with adjacent corner points, and corner points with smaller response values are removed.
3. The method for predicting the short-term rainfall based on the sparse correspondence and the deep neural network according to claim 2, wherein the method comprises the following steps: the condition formula for the threshold judgment in step A is as follows:
wₛ = 1 if Iₗ ≤ I₀ − t;  wₛ = 2 if I₀ − t < Iₗ < I₀ + t;  wₛ = 3 if Iₗ ≥ I₀ + t
where the parameter t is the threshold, Iₗ is the neighborhood gray value, I₀ is the central pixel value, and the class parameter wₛ, s ∈ {1, 2, 3}, denotes the luminance class obtained by comparing the neighborhood gray value with the center; when the number of neighborhood points falling into classes w₁ and w₃ totals 3 or more, the point is considered a feature point.
4. The method for predicting the short-term rainfall based on the sparse correspondence and the deep neural network according to claim 1, wherein the method comprises the following steps: the specific method for SIFT matching in step (3) is as follows: the gradient magnitude and direction of each feature point are calculated using the difference quotient in place of the derivative. The gradient magnitude is
m(x, y) = √[(L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²]
and the direction of the feature point is
θ(x, y) = arctan[(L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y))].
A descriptor is established for each feature point for the matching operation: the gradient information of the 4×4 neighborhood windows in the feature point scale space is used to construct a 128-dimensional vector feature, forming the SIFT feature description vector, which contains the hidden structural features of the local cloud; feature points of two adjacent radar echo frames are then matched according to the similarity of these description vectors.
5. The method for predicting the short-term rainfall based on the sparse correspondence and the deep neural network according to claim 1, wherein the method comprises the following steps: the method for calculating the global vector in step (4) is as follows: assuming that, within the same period, the observation of turbulence at a fixed point equals the observation at other points along the mean wind direction, the cloud cluster turbulence satisfies the Taylor frozen hypothesis over a short time, which closely resembles the spatial-consistency assumption of the optical flow method; the relative displacement of each feature point between the radar echo images at time t and time t+1 is calculated from the feature point matching, and since the motion speed of the cloud cluster remains unchanged within this period under the Taylor frozen hypothesis, the motion speed and direction of each feature point per unit time are calculated, and the global motion vector is then described.
6. The method for predicting the short-term rainfall based on the sparse correspondence and the deep neural network according to claim 1, wherein the method comprises the following steps: the specific method for predicting the radar echo image in step (5) is as follows: since the center of the target station is in most cases not a feature point, an interpolation method is needed to calculate the motion trajectory. Let F(x₀, t) be the echo intensity at point x₀ at time t; by the invariance of the mean convection velocity U,
F(x₀, t + Δt) = F(x₀ − U·Δt, t),
where the velocity U is calculated by dividing the feature point offset by the time, i.e.
U = (xₜ₊₁ − xₜ)/Δt.
Nearest-neighbor extrapolation is then carried out as follows: the nearest feature point around the observation center is found by Euclidean distance and used as the velocity matching point; the point is then moved against the direction of the velocity vector at that moment, and after moving for a time Δt the velocity point is re-matched. This nearest-neighbor interpolation is repeated until the accumulated time reaches the lead time to be predicted; after the interpolation is finished, an image centered on the end of the trajectory and covering the observation range is cropped as the predicted radar echo image.
7. The method for predicting the short-term rainfall based on the sparse correspondence and the deep neural network according to claim 1, wherein the method comprises the following steps: the specific method for counting the cloud cluster characteristics and the space-time direction characteristics in the step (5) comprises the following steps:
The cloud cluster features include the shape features and the wind speed direction: the shape features of the cloud cluster are obtained by quantizing the histogram of the SIFT descriptors, and the wind speed direction of the cloud cluster is obtained by dividing the relative displacement of the feature points by the time; the spatio-temporal direction features mainly include the cloud cluster coverage ratio and the maximum, mean and variance of the reflectivity, which are counted from the pixel gray values.
8. The method for predicting the short-term rainfall based on the sparse correspondence and the deep neural network according to claim 1, wherein the method comprises the following steps: the Inception v3 regression in step (6) specifically comprises the following steps:
A. constructing an Inception v3 model, wherein the Inception v3 model comprises three parallel modules and two full-connection layers, namely an FCN1 full-connection layer and an FCN2 full-connection layer; the first module comprises convolution layers of sizes 1×1 and 3×3 and a pooling layer, the second module comprises only a 1×1 convolution layer and a pooling layer, the third module comprises convolution layers of sizes 3×1 and 1×3 and a pooling layer, and all pooling layers adopt max pooling with a step size of 2;
B. inputting a local radar image into three parallel modules to extract features, and transmitting the features to an FCN1 full-connection layer after dimension splicing;
C. because regression differs from classification (classification is discrete while regression is continuous), the output of FCN1 is processed non-linearly with a ReLU function and then input into the FCN2 full-connection layer for fusion to obtain the output.
9. The method for predicting the short-term rainfall based on the sparse correspondence and the deep neural network according to claim 8, wherein: in the Inception v3 regression step, a loss function is defined on the error between the network output and the sample label for training the model.
the cross validation is used as a training and validation strategy, and according to a measurement standard in linear regression, a mean square error RMSE, an average absolute error MAE and a fitting degree R-Squared are selected as standards for detecting the model score, wherein formulas of the square error RMSE, the average absolute error MAE and the fitting degree R-Squared are respectively as follows:
Figure FDA0002435292580000062
Figure FDA0002435292580000063
Figure FDA0002435292580000064
wherein the subscript of the output y has three expressions: train, label, test, respectively representing the output of the training process, sample label, and test output.
CN202010253414.0A 2020-04-01 2020-04-01 Short-term rainfall prediction method based on sparse correspondence and deep neural network Active CN111428676B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010253414.0A CN111428676B (en) 2020-04-01 2020-04-01 Short-term rainfall prediction method based on sparse correspondence and deep neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010253414.0A CN111428676B (en) 2020-04-01 2020-04-01 Short-term rainfall prediction method based on sparse correspondence and deep neural network

Publications (2)

Publication Number Publication Date
CN111428676A true CN111428676A (en) 2020-07-17
CN111428676B CN111428676B (en) 2023-04-07

Family

ID=71550905

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010253414.0A Active CN111428676B (en) 2020-04-01 2020-04-01 Short-term rainfall prediction method based on sparse correspondence and deep neural network

Country Status (1)

Country Link
CN (1) CN111428676B (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110210562A (en) * 2019-06-02 2019-09-06 西安电子科技大学 Image classification method based on depth network and sparse Fisher vector
CN110363327A (en) * 2019-06-04 2019-10-22 东南大学 Short based on ConvLSTM and 3D-CNN faces Prediction of Precipitation method
CN110579823A (en) * 2019-09-02 2019-12-17 中国电力科学研究院有限公司 method and system for forecasting short-term rainfall

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
徐梓豪: "Review of short-term precipitation forecasting methods and their applications", 《科技经济导刊》 (Technology and Economic Guide) *
王婷 et al.: "Review of short-term precipitation forecasting methods and their applications", 《电子世界》 (Electronics World) *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112070286B (en) * 2020-08-25 2023-11-24 贵州黔源电力股份有限公司 Precipitation forecast and early warning system for complex terrain river basin
CN112070286A (en) * 2020-08-25 2020-12-11 贵州黔源电力股份有限公司 Rainfall forecast early warning system for complex terrain watershed
CN112363140A (en) * 2020-11-05 2021-02-12 南京叁云科技有限公司 Thermodynamic constraint extrapolation objective correction method based on cyclic neural network
CN112363140B (en) * 2020-11-05 2024-04-05 南京叁云科技有限公司 Thermodynamic constraint extrapolation objective correction method based on cyclic neural network
CN112782700A (en) * 2020-12-30 2021-05-11 北京墨迹风云科技股份有限公司 Rainfall prediction method and device based on radar map
CN112782700B (en) * 2020-12-30 2024-04-16 北京墨迹风云科技股份有限公司 Precipitation prediction method and device based on radar map
CN112363168A (en) * 2021-01-13 2021-02-12 南京满星数据科技有限公司 Assimilation fusion method based on radar extrapolation and mode prediction
CN113255972B (en) * 2021-05-10 2022-11-01 东南大学 Short-term rainfall prediction method based on Attention mechanism
CN113255972A (en) * 2021-05-10 2021-08-13 东南大学 Short-term rainfall prediction method based on Attention mechanism
CN114492952A (en) * 2022-01-06 2022-05-13 清华大学 Short-term rainfall forecasting method and device based on deep learning, electronic equipment and storage medium
CN116451881B (en) * 2023-06-16 2023-08-22 南京信息工程大学 Short-time precipitation prediction method based on MSF-Net network model
CN116824372A (en) * 2023-06-21 2023-09-29 中国水利水电科学研究院 Urban rainfall prediction method based on Transformer
CN116824372B (en) * 2023-06-21 2023-12-08 中国水利水电科学研究院 Urban rainfall prediction method based on Transformer
CN117973636A (en) * 2024-03-28 2024-05-03 南京信息工程大学 General traffic weather short-term forecasting method, device and storage medium
CN117973636B (en) * 2024-03-28 2024-05-31 南京信息工程大学 General traffic weather short-term forecasting method, device and storage medium

Also Published As

Publication number Publication date
CN111428676B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
CN111428676B (en) Short-term rainfall prediction method based on sparse correspondence and deep neural network
WO2021218424A1 (en) Rbf neural network-based method for sea surface wind speed inversion from marine radar image
CN108596055B (en) Airport target detection method of high-resolution remote sensing image under complex background
Zhang et al. Weather radar echo prediction method based on convolution neural network and long short-term memory networks for sustainable e-agriculture
CN110378308B (en) Improved port SAR image near-shore ship detection method based on fast R-CNN
CN109871902B (en) SAR small sample identification method based on super-resolution countermeasure generation cascade network
CN106875395B (en) Super-pixel-level SAR image change detection method based on deep neural network
KR20200007084A (en) Ship detection method and system based on multi-dimensional features of scene
CN111160120A (en) Fast R-CNN article detection method based on transfer learning
WO2018168165A1 (en) Weather forecasting device, weather forecasting method, and program
CN110619328A (en) Intelligent ship water gauge reading identification method based on image processing and deep learning
CN114724120A (en) Vehicle target detection method and system based on radar vision semantic segmentation adaptive fusion
CN113033315A (en) Rare earth mining high-resolution image identification and positioning method
Li et al. Pixel-level detection and measurement of concrete crack using faster region-based convolutional neural network and morphological feature extraction
Laupheimer et al. The importance of radiometric feature quality for semantic mesh segmentation
CN114565824A (en) Single-stage rotating ship detection method based on full convolution network
Dong et al. Pixel-level intelligent segmentation and measurement method for pavement multiple damages based on mobile deep learning
CN113344148A (en) Marine ship target identification method based on deep learning
CN116703895A (en) Small sample 3D visual detection method and system based on generation countermeasure network
CN112785548A (en) Pavement crack detection method based on vehicle-mounted laser point cloud
CN116188943A (en) Solar radio spectrum burst information detection method and device
CN116012618A (en) Weather identification method, system, equipment and medium based on radar echo diagram
CN114170196A (en) SAR image small target identification method based on CenterNet2
CN116052097A (en) Map element detection method and device, electronic equipment and storage medium
CN115496998A (en) Remote sensing image wharf target detection method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant