CN116307267A - Rainfall prediction method based on convolution - Google Patents

Rainfall prediction method based on convolution

Info

Publication number
CN116307267A
CN116307267A (application CN202310541048.2A; granted as CN116307267B)
Authority
CN
China
Prior art keywords
convolution
rainfall
feature map
attention
channel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310541048.2A
Other languages
Chinese (zh)
Other versions
CN116307267B (en)
Inventor
邹茂扬
范鈡月
杨昊
陈敏
Current Assignee
Chengdu University of Information Technology
Original Assignee
Chengdu University of Information Technology
Application filed by Chengdu University of Information Technology
Priority to CN202310541048.2A
Publication of CN116307267A
Application granted
Publication of CN116307267B
Status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00: Administration; Management
    • G06Q10/04: Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The invention discloses a convolution-based rainfall prediction method comprising the following steps: S1, acquiring rainfall and wind data and processing them to obtain two-dimensional information; S2, inputting the two-dimensional information into a multi-modal fusion structure to obtain an information-supplemented rainfall feature map; S3, inputting the rainfall feature map into a UNet model to obtain a rainfall intensity classification result, completing the rainfall prediction. The method obtains higher-precision rainfall prediction by exploiting the complementarity among multiple modalities; on the basis of the multi-modal fusion technique, the proposed UNet model effectively constructs long-range feature dependencies to strengthen the extraction of edge information and temporal information, and a lightweight convolution structure is constructed by changing the feature-extraction process, achieving the best rainfall prediction accuracy. The invention performs excellently both in improving rainfall prediction accuracy and in reducing model complexity.

Description

Rainfall prediction method based on convolution
Technical Field
The invention belongs to the field of rainfall prediction, and particularly relates to a rainfall prediction method based on convolution.
Background
Accurate rainfall forecasts not only bring convenience to people's daily lives but also aid disaster prevention and mitigation. The operational system currently in common use for precipitation prediction is the numerical weather prediction (Numerical Weather Prediction, NWP) model. NWP forecasts future weather by solving equations that account for various complicating physical processes, such as terrain and non-adiabatic heating. This makes NWP methods computationally complex and heavily constrained by their formulations and regularizations, which in turn limits the accuracy of precipitation prediction. As precipitation research has deepened, deep neural network techniques, with their excellent feature-extraction capability and ability to learn modal features automatically, have gradually been introduced into the precipitation field.
Deep-learning-based rainfall forecasting has broad application prospects; at present it is mostly realized by learning a nonlinear function from a single observed radar or satellite image. However, rainfall results from interactions across the entire weather system and is affected by other environmental factors (e.g., wind, terrain, temperature, and air pressure). To improve prediction accuracy, multi-modal fusion techniques have been proposed for the field of precipitation prediction. By comparison, relatively little work in this research field has targeted multi-modal rainfall prediction; moreover, earlier studies applying deep-learning-based multi-modal fusion mostly used late fusion strategies. Late fusion extracts the features of each modality with a separate model, so each modality captures information independently, ignoring the correlation and integrity between modalities. Convolutional neural networks, which have achieved remarkable results in the image field, usually adopt an early fusion strategy for multi-modal features: the modalities are combined from the original space, so the complementarity of multi-modal information can be better exploited to improve model performance. In addition, as rainfall forecasting theory continues to mature, its research results are being applied in various related industries; in practical applications, computational constraints impose strict requirements on the complexity and parameter count of a forecasting model.
When extracting global features, existing methods easily overlook much edge information and cannot effectively construct long-range feature dependencies over the structure; the feature expression of each kind of semantic information is insufficient, which weakens model accuracy and its potential for improvement to a certain extent.
Existing methods also suffer from large parameter counts and high computational complexity. In practical applications, computational constraints impose strict requirements on the complexity and parameter count of a forecasting model, so reducing model size and parameter count is critical for deploying the method in the rainfall forecasting industry. Convolutional neural networks model features in the spatio-temporal dimensions with standard convolutional layers; they extract spatial structure information excellently, but their handling of the temporal information of consecutive frames in the time domain is unsatisfactory, which greatly affects model performance.
Disclosure of Invention
Aiming at the above defects in the prior art, the convolution-based rainfall prediction method provided by the invention solves the following problems:
(1) Existing methods adopt late fusion strategies that ignore the correlation and integrity between modalities and lead to high model complexity.
(2) Existing methods cannot effectively construct long-range feature dependencies and suffer from information redundancy and easily overlooked edge information.
(3) Precipitation areas are mis-segmented when features are similar or targets are discontinuous.
(4) Convolutional neural networks capture insufficient temporal information from consecutive frames in the time domain.
(5) Existing methods have large parameter counts and high computational complexity.
In order to achieve the aim of the invention, the invention adopts the following technical scheme: a convolution-based rainfall prediction method, comprising the steps of:
s1, acquiring rainfall and wind data, and processing the rainfall and wind data to obtain two-dimensional information;
s2, inputting the two-dimensional information into a multi-mode fusion structure to obtain a rainfall characteristic diagram after information supplementation;
s3, inputting the rainfall characteristic diagram into a UNet model to obtain a rainfall intensity classification result, and completing rainfall prediction;
the invention can predict the precipitation intensity classification result of the future half hour.
In the step S3, the method for obtaining the precipitation intensity classification result according to the UNet model specifically comprises the following steps:
S31, performing four successive rounds of max-pooling downsampling, depth-separable convolution, batch normalization and ReLU processing on the rainfall feature map to obtain a first feature map, and performing convolution attention processing before each max-pooling downsampling to generate an intermediate feature map;
S32, performing convolution attention processing on the first feature map to obtain a second feature map;
S33, performing four successive rounds of bilinear upsampling and depth-separable convolution processing on the second feature map to obtain a third feature map, and splicing the intermediate feature map onto the generated feature map through a skip-connected long short-term memory network module in each bilinear upsampling step;
S34, performing a 1×1 convolution operation on the third feature map to obtain the precipitation intensity classification result.
Further: the two-dimensional information comprises an accumulated rainfall map, a west-to-east wind speed map and a south-to-north wind speed map;
the step S1 comprises the following sub-steps:
s11, converting rainfall data into an accumulated rainfall pattern through a Z-R relation algorithm;
wherein the rainfall data is radar reflectivity of 5-minute time steps, and the spatial resolution of the accumulated rainfall pattern is 0.01 degree;
s12, converting wind data into a west-to-east wind speed graph and a south-to-north wind speed graph through a trigonometric function method;
wherein the wind data are wind speed and wind direction at intervals of 5 minutes, and the spatial resolution of the west-to-east wind speed graph and the south-to-north wind speed graph is 0.01 degrees.
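The patent does not state the constants of the Z-R relation used in S11, so the sketch below assumes the common Marshall-Palmer form Z = 200·R^1.6; the function names and the accumulation helper are illustrative, not the patent's implementation.

```python
import math

def reflectivity_to_rain_rate(dbz, a=200.0, b=1.6):
    """Convert radar reflectivity (dBZ) to rain rate R in mm/h via Z = a * R**b.
    a and b default to the Marshall-Palmer values; the patent does not state them."""
    z_linear = 10.0 ** (dbz / 10.0)          # dBZ -> linear reflectivity Z
    return (z_linear / a) ** (1.0 / b)

def accumulate_rainfall(dbz_frames, step_minutes=5):
    """Accumulate rainfall (mm) over radar frames taken at 5-minute time steps (S11)."""
    hours = step_minutes / 60.0
    return sum(reflectivity_to_rain_rate(dbz) * hours for dbz in dbz_frames)
```

For example, a 40 dBZ echo maps to roughly 11.5 mm/h under these constants, and six consecutive 5-minute frames at that level accumulate to about half that value in millimetres.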
Further: in the S2, the multi-mode fusion structure comprises a convolution channel and a Concat layer which are sequentially connected, wherein the convolution channel comprises a first channel, a second channel and a third channel, and the first channel to the third channel are connected with the Concat layer;
the first channel, the second channel and the third channel are respectively provided with a convolution kernel of 3*3.
The beneficial effects of the above further scheme are: the multi-modal fusion structure performs feature extraction and fusion on the multi-modal data, and features are combined after the initial convolutions extract them, so the information complementarity of the multi-modal data is fully utilized; the UNet model can thus extract richer structural information to improve rainfall prediction accuracy, while greatly reducing model complexity compared with late fusion methods.
Further: the step S2 comprises the following sub-steps:
s21, respectively inputting the accumulated rainfall map, the west-to-east wind speed map and the north-to-south wind speed map into the first channel to the third channel for convolution operation to obtain fourth to sixth characteristic maps;
s22, inputting the fourth to sixth feature maps into a Concat layer for accumulation and stacking operation, and obtaining a rainfall feature map.
Further: in the step S3, the method for depth separable convolution processing specifically includes:
and sequentially inputting the feature map to be processed into two depth-separable convolution modules to obtain the depth-separable convolution result, wherein each depth-separable convolution module comprises a channel-wise convolution and a point-wise convolution connected with each other.
The beneficial effects of the above further scheme are: the designed depth-separable convolution module replaces ordinary convolution, and a lightweight convolution structure can be constructed by changing the image-feature extraction process, so as to compress the parameter count and lighten the network.
Further: in the step S3, the method of convolution attention processing specifically includes:
the feature map to be processed by convolution attention is taken as a first input feature map and input into a convolution attention module to obtain the convolution attention processing result, wherein the convolution attention module comprises a channel attention submodule and a spatial attention submodule connected with each other;
the method for obtaining the convolution attention processing result comprises the following steps:
SA1, inputting a first input feature map to a channel attention sub-module to obtain a channel attention feature map;
SA2, inputting the channel attention feature map to a space attention sub-module to obtain a two-dimensional space attention map;
and SA3, element-wise multiplying the two-dimensional spatial attention map with the feature map obtained as the product of the channel attention feature map and the first input feature map, to obtain the convolution attention processing result.
The beneficial effects of the above further scheme are: aiming at the problems that existing methods cannot effectively construct long-range feature dependencies and suffer from information redundancy and easily overlooked edge information, the invention designs a convolution attention module in each layer of the encoding structure. Attention features are first extracted from the first input feature map, and a structural long-range dependency is then established with the features of the decoding layer at the same level. This increases the rainfall-related features in both channel and space, effectively helps information flow through the network, resolves the mis-segmentation of precipitation areas when features are similar or targets are discontinuous, and improves the robustness of the network.
Further: in SA1, the channel attention submodule comprises a first average pooling layer, a first max pooling layer and a multi-layer perceptron, wherein the first average pooling layer and the first max pooling layer are both connected with the multi-layer perceptron;
the SA1 specifically comprises:
inputting the first input feature map into the first average pooling layer and the first max pooling layer to obtain seventh and eighth feature maps, inputting the seventh and eighth feature maps into the multi-layer perceptron to obtain ninth and tenth feature maps, and summing the ninth and tenth feature maps and applying a sigmoid activation function to obtain the channel attention feature map;
in SA2, the spatial attention submodule comprises a second average pooling layer, a second max pooling layer and a first convolution, wherein the first convolution is connected with the second average pooling layer and the second max pooling layer, and the convolution kernel of the first convolution is 7×7;
the SA2 specifically comprises:
element-wise multiplying the channel attention feature map with the first input feature map to obtain an eleventh feature map, inputting the eleventh feature map into the second average pooling layer and the second max pooling layer respectively to obtain twelfth and thirteenth feature maps, passing the twelfth and thirteenth feature maps through the first convolution to obtain a fourteenth feature map, and applying a sigmoid activation function to the fourteenth feature map to obtain the two-dimensional spatial attention map;
the SA3 specifically comprises:
element-wise multiplying the two-dimensional spatial attention map with the eleventh feature map to obtain the convolution attention processing result.
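The SA1 to SA3 steps above can be sketched in NumPy as follows. This is an illustrative version, not the patent's implementation: the weight shapes, the two-layer shape of the shared perceptron, and all function names are assumptions, and the loop-based 7×7 convolution is written for clarity rather than speed.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """SA1: spatial average/max pooling, a shared two-layer perceptron, sum, sigmoid.
    x: (C, H, W); w1: (C_hidden, C); w2: (C, C_hidden)."""
    avg = x.mean(axis=(1, 2))                   # seventh feature map, shape (C,)
    mx = x.max(axis=(1, 2))                     # eighth feature map, shape (C,)
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0)  # shared perceptron (ninth/tenth maps)
    return sigmoid(mlp(avg) + mlp(mx))          # channel attention weights in (0, 1)

def spatial_attention(x, kernel):
    """SA2: channel-wise avg/max maps, 7x7 convolution, sigmoid.
    x: (C, H, W); kernel: (2, 7, 7)."""
    maps = np.stack([x.mean(axis=0), x.max(axis=0)])   # twelfth/thirteenth maps
    pad = np.pad(maps, ((0, 0), (3, 3), (3, 3)))       # 'same' padding for 7x7
    H, W = x.shape[1:]
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(pad[:, i:i + 7, j:j + 7] * kernel)
    return sigmoid(out)                         # two-dimensional spatial attention map

def cbam(x, w1, w2, kernel):
    """SA1-SA3: refine x by channel then spatial attention (element-wise products)."""
    refined = x * channel_attention(x, w1, w2)[:, None, None]   # eleventh feature map
    return refined * spatial_attention(refined, kernel)[None, :, :]
```

Because both attention maps pass through a sigmoid, every output value is the input scaled by factors in (0, 1), which is what lets the module suppress redundant positions while keeping the feature map's shape unchanged.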
Further: in the step S3, the long short-term memory network module comprises two interconnected long short-term memory networks, each comprising parallel input, forget and output gates;
the method for processing the feature map by each long short-term memory network specifically comprises:
SB1, taking the feature map to be processed by the long short-term memory network as the input feature;
SB2, inputting the input feature and the output feature of the previous time step into the forget gate to obtain a first value;
SB3, inputting the input feature and the output feature of the previous time step into the input gate to obtain a second value and the current new-information state, and obtaining the currently retained new information from the second value and the current new-information state;
SB4, obtaining the stored information from the retained new information, the first value and the retained information state of the previous time step;
SB5, inputting the input feature and the output feature of the previous time step into the output gate to obtain the feature information to be output;
SB6, obtaining the output feature from the feature information to be output and the stored information, and taking the output feature as the result of the long short-term memory network's processing of the feature map.
The beneficial effects of the above further scheme are: aiming at the problem that convolutional neural networks capture insufficient temporal information from consecutive frames in the time domain, the invention further provides a long short-term memory network (LSTM). Information flow is controlled through three gating structures, the input gate, the forget gate and the output gate, to screen, supplement and retain temporal features over long periods, which increases the flexibility of the network, addresses the temporal association between features, and provides technical support for further improving rainfall prediction accuracy.
Further: in SB2, the first value $f_t$ is obtained by the expression:

$f_t = \sigma\left(W_f \cdot [H_{t-1}, X_t] + b_f\right)$

where $\sigma$ is the sigmoid activation function, $W_f$ is the first weight, $H_{t-1}$ is the output feature of the previous time step, $X_t$ is the input feature, $t$ is the current time step, and $b_f$ is the first bias; the first value ranges from 0 to 1;

in SB3, the second value $i_t$ is obtained by the expression:

$i_t = \sigma\left(W_i \cdot [H_{t-1}, X_t] + b_i\right)$

where $W_i$ is the second weight and $b_i$ is the second bias;

the current new-information state $\tilde{C}_t$ is obtained by the expression:

$\tilde{C}_t = \tanh\left(W_c \cdot [H_{t-1}, X_t] + b_c\right)$

where $W_c$ is the third weight and $b_c$ is the third bias;

the currently retained new information is obtained by the expression $i_t \odot \tilde{C}_t$;

in SB4, the stored information $C_t$ is obtained by the expression:

$C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t$

where $C_{t-1}$ is the retained information state of the previous time step;

in SB5, the feature information to be output $O_t$ is obtained by the expression:

$O_t = \sigma\left(W_o \cdot [H_{t-1}, X_t] + b_o\right)$

where $W_o$ is the fourth weight and $b_o$ is the fourth bias;

in SB6, the output feature is obtained by the expression:

$H_t = O_t \odot \tanh(C_t)$
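The gate computations of SB2 to SB6 can be transcribed directly into one NumPy time step. The function name, vector shapes, and the layout of each weight as acting on the concatenation of $H_{t-1}$ and $X_t$ are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W_f, W_i, W_c, W_o, b_f, b_i, b_c, b_o):
    """One LSTM time step following SB2-SB6.
    Each weight matrix multiplies the concatenation [h_prev, x_t]."""
    z = np.concatenate([h_prev, x_t])
    f_t = sigmoid(W_f @ z + b_f)          # forget gate (SB2), values in (0, 1)
    i_t = sigmoid(W_i @ z + b_i)          # input gate (SB3)
    c_tilde = np.tanh(W_c @ z + b_c)      # current new-information state
    c_t = f_t * c_prev + i_t * c_tilde    # stored information (SB4)
    o_t = sigmoid(W_o @ z + b_o)          # output gate (SB5)
    h_t = o_t * np.tanh(c_t)              # output feature (SB6)
    return h_t, c_t
```

Running the step over consecutive frames and carrying `h_t` and `c_t` forward is what lets the skip-connection module retain temporal features across the sequence.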
the beneficial effects of the invention are as follows:
(1) The convolution-based rainfall prediction method obtains higher-precision rainfall prediction by exploiting the complementarity among multiple modalities. On the basis of the multi-modal fusion technique, a UNet model using only the convolution attention module improves rainfall prediction accuracy by at least 2.7% over the baseline model; a UNet model using only depth-separable convolution improves it by at least 1.2%; and a UNet model using only the long short-term memory module improves it by at least 1.9%. The optimized UNet model provided by the invention, which incorporates all three modules, further strengthens the extraction of rainfall features and achieves the best rainfall prediction accuracy.
(2) The method not only improves rainfall prediction accuracy by at least 4.6% over the baseline model, but also reduces the parameter count by three quarters compared with the traditional UNet model; the scheme of the invention performs excellently both in improving rainfall prediction accuracy and in reducing model complexity.
(3) The proposed UNet model effectively constructs long-range feature dependencies to strengthen the extraction of edge information and temporal information, and constructs a lightweight convolution structure by changing the feature-extraction process, with a marked effect on rainfall prediction accuracy.
Drawings
FIG. 1 is a flow chart of a rainfall prediction method based on convolution.
Fig. 2 is a schematic diagram of the UNet model structure according to the present invention.
FIG. 3 is a schematic diagram of a data processing trigonometric function according to the invention.
FIG. 4 is a schematic diagram of a depth separable convolution structure of the present invention.
Detailed Description
The following description of the embodiments of the present invention is provided to help those skilled in the art understand the invention, but it should be understood that the invention is not limited to the scope of these embodiments; for those skilled in the art, all inventions that make use of the inventive concept fall within the protection of the invention as defined by the appended claims.
As shown in fig. 1, in one embodiment of the present invention, a convolution-based rainfall prediction method includes the steps of:
s1, acquiring rainfall and wind data, and processing the rainfall and wind data to obtain two-dimensional information;
s2, inputting the two-dimensional information into a multi-mode fusion structure to obtain a rainfall characteristic diagram after information supplementation;
s3, inputting the rainfall characteristic diagram into a UNet model to obtain a rainfall intensity classification result, and completing rainfall prediction;
as shown in fig. 2, in the step S3, the method for obtaining the precipitation intensity classification result according to the UNet model specifically includes:
s31, performing continuous four times of maximum pooling downsampling, depth separable convolution, batch normalization and Relu processing on the rainfall characteristic map to obtain a first characteristic map, and performing convolution attention processing before each time of maximum pooling downsampling to generate an intermediate characteristic map;
s32, performing convolution attention processing on the first feature map to obtain a second feature map;
s33, performing continuous four-time bilinear upsampling and depth separable convolution processing on the second feature map to obtain a third feature map, and splicing the intermediate feature map to the generated feature map through a jump-linked long-short-period memory network module in each bilinear upsampling processing;
and S34, carrying out convolution operation on the third characteristic diagram by 1*1 to obtain a precipitation intensity classification result.
In this embodiment, the UNet model comprises an encoding part and a decoding part. The encoding part uses depth-separable convolutions, and convolution attention processing follows each convolution stage to construct structural long-range dependencies and strengthen the extraction of edge information. The decoding part adopts bilinear upsampling to raise the resolution of the output feature maps. To preserve more semantic and spatial information, the UNet model receives semantic information from the bottom of the network and recombines it with high-resolution features through skip connections, so the structure can better perform the rainfall intensity classification task. The skip-connection part between the encoder and decoder inserts two long short-term memory networks (LSTM), which acquire high-dimensional feature information along the time dimension and capture the temporal dynamics of the spatial feature representations at different resolution levels. Finally, the output features of the backbone network are passed through a 1×1 convolution to classify rainfall intensity.
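As a rough sketch of how the spatial resolution evolves through this encoder-decoder, the helper below traces the four 2× max-pool stages and the four 2× upsampling stages, assuming the 128×128 input used later in this embodiment; channel widths and the attention/LSTM modules are omitted, and the function name is illustrative.

```python
def unet_shape_trace(size=128, depth=4):
    """Trace the spatial resolution through the UNet of FIG. 2: four 2x max-pool
    downsamples in the encoder, then four 2x bilinear upsamples in the decoder.
    Returns (encoder_sizes, decoder_sizes)."""
    encoder = [size // (2 ** d) for d in range(depth + 1)]            # CBAM + pool per level
    decoder = [encoder[-1] * (2 ** d) for d in range(1, depth + 1)]   # LSTM skip at each level
    return encoder, decoder
```

For a 128×128 rainfall feature map this gives encoder resolutions 128, 64, 32, 16, 8 and decoder resolutions 16, 32, 64, 128, so the final 1×1 convolution classifies at the original resolution.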
The two-dimensional information comprises an accumulated rainfall map, a west-to-east wind speed map and a south-to-north wind speed map;
the step S1 comprises the following sub-steps:
s11, converting rainfall data into an accumulated rainfall pattern through a Z-R relation algorithm;
wherein the rainfall data is radar reflectivity of 5-minute time steps, and the spatial resolution of the accumulated rainfall pattern is 0.01 degree;
s12, converting wind data into a west-to-east wind speed graph and a south-to-north wind speed graph through a trigonometric function method;
wherein the wind data are wind speed and wind direction at intervals of 5 minutes, and the spatial resolution of the west-to-east wind speed graph and the south-to-north wind speed graph is 0.01 degrees.
As shown in FIG. 3, in this embodiment the wind data, with a spatial resolution of 0.01 degrees (about 1 km × 1 km) and measured 10 m above the ground, are decomposed in a polar coordinate system into a west-to-east wind speed map (U component) and a south-to-north wind speed map (V component) by means of trigonometric functions; the angle $\varphi$ is measured as positive clockwise from the positive Y axis. Knowing the wind speed WS and the wind direction WD, the components are obtained from $U = WS \cdot \sin\varphi$ and $V = WS \cdot \cos\varphi$.

As shown in fig. 3, in the present embodiment, N, S, W and E represent north, south, west and east respectively, WS represents the wind speed, WD represents the wind direction, $\varphi$ represents the angle between the wind direction and the polar axis, $V$ represents the wind component from south to north, and $U$ represents the wind component from west to east.
In the S2, the multi-mode fusion structure comprises a convolution channel and a Concat layer which are sequentially connected, wherein the convolution channel comprises a first channel, a second channel and a third channel, and the first channel to the third channel are connected with the Concat layer;
the first channel, the second channel and the third channel are respectively provided with a convolution kernel of 3*3.
In this embodiment, the multi-modal fusion structure is used to perform feature extraction and fusion on multi-modal data, and feature combination is performed after feature extraction is performed by initial convolution, so that the information complementarity of the multi-modal data is fully utilized, and the UNet model not only can extract more abundant structural information to improve rainfall prediction accuracy, but also greatly reduces the complexity of the model compared with the late fusion method.
The step S2 comprises the following sub-steps:
s21, respectively inputting the accumulated rainfall map, the west-to-east wind speed map and the north-to-south wind speed map into the first channel to the third channel for convolution operation to obtain fourth to sixth characteristic maps;
s22, inputting the fourth to sixth feature maps into a Concat layer for accumulation and stacking operation, and obtaining a rainfall feature map.
In this embodiment, the 128×128 pixel map of each of the first to third channels is convolved with its corresponding 3×3 convolution kernel to obtain the corresponding feature map, and the features at corresponding positions of the feature maps are accumulated and stacked in the Concat layer to obtain the information-supplemented rainfall feature map.
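Steps S21 and S22 can be sketched in NumPy as three single-channel "same" convolutions followed by channel stacking for the Concat layer; the kernel values, function names, and the loop-based convolution are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def conv3x3(img, kernel):
    """'Same' 3x3 convolution of one single-channel map (initial channel conv)."""
    pad = np.pad(img, 1)
    H, W = img.shape
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(pad[i:i + 3, j:j + 3] * kernel)
    return out

def multimodal_fuse(rain, u_wind, v_wind, kernels):
    """S21-S22: convolve each modality in its own channel, then stack the
    results (Concat) into one information-supplemented rainfall feature map
    of shape (3, H, W)."""
    maps = [conv3x3(m, k) for m, k in zip((rain, u_wind, v_wind), kernels)]
    return np.stack(maps)                     # channel-wise concatenation
```

Because each modality keeps its own kernel before the stack, the downstream UNet sees the wind channels as spatially aligned supplements to the rainfall channel rather than a single averaged input, which is the early-fusion idea described above.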
In the step S3, the method for depth separable convolution processing specifically includes:
and sequentially inputting the feature images of the depth separable convolution processing into two depth separable convolution modules to obtain a depth separable convolution processing result, wherein each depth separable convolution module comprises a channel-by-channel convolution and a point-by-point convolution which are connected with each other.
In this embodiment, as shown in fig. 4, a depth-separable convolution module (DSC) is designed to replace ordinary convolution, and a lightweight convolution structure can be constructed by changing the image-feature extraction process so as to compress the parameter count and lighten the network; the depth-separable convolution module consists of a channel-wise convolution (Depthwise Convolution) and a point-wise convolution (Pointwise Convolution).
Let the input feature map have size $D_F \times D_F \times M$ and the convolution kernels size $D_K \times D_K \times M$, with $N$ kernels in total; a convolution is computed at each spatial position of the feature map. The computational cost of a single standard convolution is then

$$D_K \cdot D_K \cdot M \cdot D_F \cdot D_F,$$

and of $N$ standard convolutions

$$D_K \cdot D_K \cdot M \cdot N \cdot D_F \cdot D_F.$$

Unlike standard convolution, the channel-by-channel convolution applies one convolution kernel to each channel of the input feature map, at cost

$$D_K \cdot D_K \cdot M \cdot D_F \cdot D_F,$$

and the point-by-point convolution then combines the channel features at cost

$$M \cdot N \cdot D_F \cdot D_F.$$

The total cost of the depth separable convolution module is therefore

$$D_K \cdot D_K \cdot M \cdot D_F \cdot D_F + M \cdot N \cdot D_F \cdot D_F,$$

and the ratio of the cost of the depth separable convolution to that of the standard convolution is

$$\frac{D_K \cdot D_K \cdot M \cdot D_F \cdot D_F + M \cdot N \cdot D_F \cdot D_F}{D_K \cdot D_K \cdot M \cdot N \cdot D_F \cdot D_F} = \frac{1}{N} + \frac{1}{D_K^2}.$$
It is apparent that the computational efficiency of the depth separable convolution module is far better than standard convolution.
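The cost ratio is easy to verify numerically. The sizes below ($D_F = 128$, $D_K = 3$, $M = 32$, $N = 64$) are illustrative choices, not values taken from the patent:

```python
def conv_costs(d_f, d_k, m, n):
    """Multiply-accumulate counts: N standard convolutions vs. a depthwise
    separable module (channel-by-channel plus point-by-point convolution)."""
    standard = d_k * d_k * m * n * d_f * d_f        # N standard convolutions
    depthwise = d_k * d_k * m * d_f * d_f           # channel-by-channel part
    pointwise = m * n * d_f * d_f                   # 1x1 point-by-point part
    return standard, depthwise + pointwise

std_cost, dsc_cost = conv_costs(d_f=128, d_k=3, m=32, n=64)
ratio = dsc_cost / std_cost
# Matches the closed form 1/N + 1/D_K^2.
assert abs(ratio - (1 / 64 + 1 / 3 ** 2)) < 1e-12
```

With a 3×3 kernel and 64 output channels, the module needs only about 12.7% of the standard convolution's computation, which is the source of the parameter compression claimed for the DSC module.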
In the step S3, the method of convolution attention processing specifically includes:
the method comprises the steps of taking a feature diagram of convolution attention processing as a first input feature diagram, inputting the first input feature diagram into a convolution attention module to obtain a convolution attention processing result, wherein the convolution attention module comprises a channel attention sub-module and a space attention sub-module which are connected with each other;
the method for obtaining the convolution attention processing result comprises the following steps:
SA1, inputting a first input feature map to a channel attention sub-module to obtain a channel attention feature map;
SA2, inputting the channel attention feature map to a space attention sub-module to obtain a two-dimensional space attention map;
and SA3, performing cross multiplication calculation of the two-dimensional spatial attention map with the product of the channel attention feature map and the first input feature map to obtain a convolution attention processing result with rich spatial information.
To address the problems that existing methods cannot effectively construct long-range feature dependencies, suffer from information redundancy, and easily neglect edge information, the invention designs a convolution attention module in each layer of the encoding structure. It first performs attention feature extraction on the first input feature map and then establishes a structural long-distance dependency with the features of the decoding layer at the same level. This increases the rainfall-related features in both channel and space, effectively helps information flow through the network, effectively resolves the mis-segmentation of rainfall areas when features are similar or targets are discontinuous, and improves the robustness of the network.
In SA1, the channel attention submodule includes a first average pooling layer, a first maximum pooling layer and a multi-layer perception network, where the first average pooling layer and the first maximum pooling layer are both connected with the multi-layer perception network;
the SA1 specifically comprises the following components:
inputting a first input feature map to a first average pooling layer and a first maximum pooling layer to obtain a seventh feature map and an eighth feature map, inputting the seventh feature map and the eighth feature map to a multi-layer sensing network to obtain a ninth feature map and a tenth feature map, and performing accumulation and sigmoid activation function processing on the ninth feature map and the tenth feature map to obtain a channel attention feature map;
in this embodiment, the first input feature map is input to both a first average pooling layer and a first maximum pooling layer to reduce the information loss caused by pooling. The resulting seventh and eighth feature maps then undergo dimension-reduction and dimension-raising operations in a Multi-Layer Perceptron (MLP) whose weights are shared between the two paths. Finally, the two feature maps output by the multi-layer perception network are summed and passed through a sigmoid activation function to obtain the channel attention feature map.
The channel attention feature map $M_C(F)$ is obtained by the expression:

$$M_C(F) = \sigma\left(MLP(AvgPool(F)) + MLP(MaxPool(F))\right)$$

where $F$ is the first input feature map, $\sigma$ is the sigmoid activation function, $AvgPool(F)$ is the seventh feature map, $MaxPool(F)$ is the eighth feature map, and $MLP(AvgPool(F))$ and $MLP(MaxPool(F))$ are the ninth and tenth feature maps, respectively.
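The channel attention expression can be sketched in NumPy as follows; the reduction ratio r = 4 and the random MLP weights are illustrative assumptions, not the patent's trained parameters:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w0, w1):
    """M_C(F) = sigmoid(MLP(AvgPool(F)) + MLP(MaxPool(F))) with a shared MLP.

    feat: (C, H, W) first input feature map; w0: (C, C//r) dimension-reduction
    weights; w1: (C//r, C) dimension-raising weights. Returns a (C,) map.
    """
    avg = feat.mean(axis=(1, 2))                  # seventh feature map
    mx = feat.max(axis=(1, 2))                    # eighth feature map
    mlp = lambda v: np.maximum(v @ w0, 0.0) @ w1  # reduce, ReLU, raise (shared)
    return sigmoid(mlp(avg) + mlp(mx))            # ninth + tenth, then sigmoid

rng = np.random.default_rng(1)
c, r = 8, 4
f = rng.standard_normal((c, 32, 32))
w0 = rng.standard_normal((c, c // r))
w1 = rng.standard_normal((c // r, c))
m_c = channel_attention(f, w0, w1)  # one attention weight per channel
```

Broadcasting `m_c` back over the spatial dimensions of `f` gives the eleventh feature map used by the spatial attention submodule.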
In SA2, the spatial attention submodule includes a second average pooling layer, a second maximum pooling layer and a first convolution which are connected with each other, wherein the first convolution is connected with the second average pooling layer and the second maximum pooling layer, and the convolution kernel of the first convolution is 7×7;
the spatial attention submodule is used for capturing important spatial features of the features.
The SA2 specifically comprises:
performing cross multiplication calculation on the channel attention feature map and the first input feature map to obtain an eleventh feature map, respectively inputting the eleventh feature map into a second average pooling layer and a second maximum pooling layer to obtain a twelfth feature map and a thirteenth feature map, performing calculation on the twelfth feature map and the thirteenth feature map through first convolution to obtain a fourteenth feature map, and performing sigmoid activation function processing on the fourteenth feature map to obtain a two-dimensional space attention map;
in this embodiment, the channel attention feature map (multiplied elementwise with the first input feature map to give the eleventh feature map $F'$) passes through the second average pooling layer and the second maximum pooling layer respectively, and the two results are spliced into a three-dimensional feature; the first convolution with a 7×7 kernel then reduces the dimension to obtain a two-dimensional feature, and a sigmoid activation function generates the two-dimensional spatial attention map. The two-dimensional spatial attention map $M_S(F')$ is obtained by the expression:

$$M_S(F') = \sigma\left(f^{7\times 7}\left(\left[AvgPool(F');\, MaxPool(F')\right]\right)\right)$$

where $F'$ is the eleventh feature map, with expression $F' = M_C(F) \otimes F$; $AvgPool(F')$ is the twelfth feature map; $MaxPool(F')$ is the thirteenth feature map; and $f^{7\times 7}([AvgPool(F');\, MaxPool(F')])$ is the fourteenth feature map.
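A matching NumPy sketch of the spatial attention expression; the 7×7 kernel weights are again random placeholders and the helper is illustrative, not the patented implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(f_prime, kernel):
    """M_S(F') = sigmoid(f_7x7([AvgPool(F'); MaxPool(F')])).

    f_prime: (C, H, W) eleventh feature map; kernel: (2, 7, 7) weights of the
    first convolution. Pooling runs across the channel axis.
    """
    avg = f_prime.mean(axis=0)            # twelfth feature map (H, W)
    mx = f_prime.max(axis=0)              # thirteenth feature map (H, W)
    stacked = np.stack([avg, mx])         # spliced into a 2-channel feature
    pad = kernel.shape[-1] // 2
    padded = np.pad(stacked, ((0, 0), (pad, pad), (pad, pad)))
    h, w = avg.shape
    out = np.empty((h, w))
    for i in range(h):                    # 7x7 conv reduces 2 channels to 1
        for j in range(w):
            out[i, j] = np.sum(padded[:, i:i + 7, j:j + 7] * kernel)
    return sigmoid(out)                   # fourteenth map -> 2-D attention map

rng = np.random.default_rng(2)
f_prime = rng.standard_normal((8, 16, 16))
m_s = spatial_attention(f_prime, rng.standard_normal((2, 7, 7)))
```

Multiplying `m_s` elementwise back onto `f_prime` corresponds to step SA3 and yields the final convolution attention result.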
The SA3 specifically comprises:
and performing cross multiplication calculation on the two-dimensional space attention map and the eleventh characteristic map to obtain a convolution attention processing result with rich space information.
In the step S3, the long short-term memory network module comprises two interconnected long short-term memory networks, each comprising an input gate, a forget gate and an output gate in parallel;
the method for processing the feature map by each long-term and short-term memory network specifically comprises the following steps:
SB1, taking a characteristic diagram processed by a long-term and short-term memory network as an input characteristic;
SB2, inputting the input characteristic and the output characteristic at the last moment into a forgetting gate to obtain a first numerical value;
SB3, inputting the input characteristic and the output characteristic at the last moment into an input gate to obtain a second numerical value and a current new information state, and obtaining the current reserved new information according to the second numerical value and the current new information state;
SB4, obtaining the stored information according to the currently retained new information, the first value and the retained information state at the previous moment;
SB5, inputting the input characteristics and the output characteristics of the last moment into an output gate to obtain the characteristic information to be output;
SB6, obtaining output characteristics according to the characteristic information and the storage information to be output, and taking the output characteristics as the result of processing the characteristic diagram by the long-term and short-term memory network.
In this embodiment, to address the difficulty a convolutional neural network has in capturing the temporal information of consecutive frames, the invention further provides a long short-term memory network (LSTM). The information flow is controlled by three gating structures, the input gate, the forget gate and the output gate, which screen, supplement and retain time-series features over long horizons. This increases the flexibility of the network, resolves the temporal association between features, and provides technical support for further improving rainfall prediction accuracy.
In SB2, the first value $f_t$ is obtained by the expression:

$$f_t = \sigma\left(W_f \cdot [H_{t-1}, X_t] + b_f\right)$$

where $\sigma$ is the sigmoid activation function, $W_f$ is the first weight, $H_{t-1}$ is the output feature at the previous moment, $X_t$ is the input feature, $t$ is the current moment, and $b_f$ is the first bias; the first value ranges from 0 to 1.
In SB3, the second value $i_t$ is obtained by the expression:

$$i_t = \sigma\left(W_i \cdot [H_{t-1}, X_t] + b_i\right)$$

where $W_i$ is the second weight and $b_i$ is the second bias.

The current new information state $\tilde{C}_t$ is obtained by the expression:

$$\tilde{C}_t = \tanh\left(W_c \cdot [H_{t-1}, X_t] + b_c\right)$$

where $W_c$ is the third weight and $b_c$ is the third bias.

The currently retained new information is then obtained as:

$$i_t * \tilde{C}_t$$
In SB4, the stored information $C_t$ is obtained by the expression:

$$C_t = f_t * C_{t-1} + i_t * \tilde{C}_t$$

where $C_{t-1}$ is the retained information state at the previous moment.
In SB5, the feature information to be output $O_t$ is obtained by the expression:

$$O_t = \sigma\left(W_o \cdot [H_{t-1}, X_t] + b_o\right)$$

where $W_o$ is the fourth weight and $b_o$ is the fourth bias.

In SB6, the output feature $H_t$ is obtained by the expression:

$$H_t = O_t * \tanh(C_t)$$
in this embodiment, the forget gate determines how much information from the previous moment needs to be retained at the current moment. The input feature and the previous output feature are weighted by the first weight, and a first bias is added to increase the flexibility of the function and improve the fitting capability of the algorithm. The sigmoid function introduces a nonlinear factor into the retained information to improve the expressive capability of the feature. Finally, the forget gate returns a first value between 0 and 1, which determines how much information the information storage structure retains for subsequent transmission.
The input gate determines how much information is updated, with the sigmoid layer and the tanh layer acting together. The input feature and the previous output feature are weighted by the second weight and a second bias is added; the sigmoid function activates the update and returns a second value between 0 and 1 that determines how much new information to retain. For the current new information, the input feature and the previous output feature are weighted by the third weight, a third bias is added, and a tanh function yields the current new information state; multiplying it by the second value finally gives the currently retained new information.
The stored information is determined by the new information currently reserved, the first value and the state of the reserved information at the previous moment.
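The gating equations SB2 to SB6 can be collected into a single update step. The NumPy sketch below uses random weights and a 4-dimensional state purely for illustration; it is a generic LSTM cell in the form the description gives, not the patented module:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, w, b):
    """One LSTM update on the concatenated [H_{t-1}, X_t] vector.

    w maps gate name -> (d, 2d) weight matrix; b maps gate name -> (d,) bias.
    """
    z = np.concatenate([h_prev, x_t])
    f_t = sigmoid(w['f'] @ z + b['f'])      # SB2: first value, forget gate
    i_t = sigmoid(w['i'] @ z + b['i'])      # SB3: second value, input gate
    c_tilde = np.tanh(w['c'] @ z + b['c'])  # SB3: current new information state
    c_t = f_t * c_prev + i_t * c_tilde      # SB4: stored information
    o_t = sigmoid(w['o'] @ z + b['o'])      # SB5: feature information to output
    h_t = o_t * np.tanh(c_t)                # SB6: output feature
    return h_t, c_t

rng = np.random.default_rng(3)
d = 4
w = {k: rng.standard_normal((d, 2 * d)) for k in 'fico'}
b = {k: np.zeros(d) for k in 'fico'}
h_t, c_t = lstm_step(rng.standard_normal(d), np.zeros(d), np.zeros(d), w, b)
```

In the decoder, such a cell sits on each skip connection, so the spliced intermediate features carry the screened temporal context forward.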
The beneficial effects of the invention are as follows: the convolution-based rainfall prediction method exploits the complementarity among multiple modalities to obtain higher-precision rainfall prediction. On the basis of the multi-modal fusion technique, a UNet model using only the convolution attention module improves rainfall prediction accuracy by at least 2.7% over the reference model, a UNet model using only the depth separable convolution improves it by at least 1.2%, and a UNet model using only the long short-term memory network module improves it by at least 1.9%. The optimized UNet model provided by the invention, which combines all three modules, further strengthens the extraction of rainfall features and achieves the best rainfall prediction accuracy.
The method not only improves rainfall prediction accuracy by at least 4.6% over the reference model, but also reduces the parameter count by three quarters compared with the conventional UNet model; the scheme of the invention therefore performs excellently in both improving rainfall prediction accuracy and reducing model complexity.
The UNet model provided by the invention can effectively construct long-range feature dependencies to strengthen the extraction of edge information and temporal information, and builds a lightweight convolution structure by changing the feature extraction process, with a marked effect on rainfall prediction accuracy.
In the description of the present invention, it should be understood that the terms "center," "thickness," "upper," "lower," "horizontal," "top," "bottom," "inner," "outer," "radial," and the like indicate or are based on the orientation or positional relationship shown in the drawings, merely to facilitate description of the present invention and to simplify the description, and do not indicate or imply that the devices or elements referred to must have a particular orientation, be configured and operated in a particular orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be interpreted as indicating or implying a relative importance or number of technical features indicated. Thus, a feature defined as "first," "second," "third," or the like, may explicitly or implicitly include one or more such feature.

Claims (9)

1. A convolution-based rainfall prediction method, comprising the steps of:
s1, acquiring rainfall and wind data, and processing the rainfall and wind data to obtain two-dimensional information;
s2, inputting the two-dimensional information into a multi-mode fusion structure to obtain a rainfall characteristic diagram after information supplementation;
s3, inputting the rainfall characteristic diagram into a UNet model to obtain a rainfall intensity classification result, and completing rainfall prediction;
in the step S3, the method for obtaining the rainfall intensity classification result according to the UNet model specifically comprises the following steps:
s31, performing four consecutive rounds of maximum pooling downsampling, depth separable convolution, batch normalization and ReLU processing on the rainfall feature map to obtain a first feature map, and performing convolution attention processing before each maximum pooling downsampling to generate an intermediate feature map;
s32, performing convolution attention processing on the first feature map to obtain a second feature map;
s33, performing four consecutive bilinear upsampling and depth separable convolution processes on the second feature map to obtain a third feature map, wherein in each bilinear upsampling process the intermediate feature map is spliced to the generated feature map through a skip-connected long short-term memory network module;
and S34, performing a 1×1 convolution operation on the third feature map to obtain the rainfall intensity classification result.
2. The convolution-based rainfall prediction method according to claim 1, wherein the two-dimensional information includes a cumulative rainfall map, a west-to-east wind speed map and a north-to-south wind speed map;
the step S1 comprises the following sub-steps:
s11, converting the rainfall data into an accumulated rainfall map through a Z-R relation algorithm;
wherein the rainfall data are radar reflectivities at 5-minute time steps, and the spatial resolution of the accumulated rainfall map is 0.01 degrees;
s12, converting the wind data into a west-to-east wind speed map and a north-to-south wind speed map through a trigonometric function method;
wherein the wind data are wind speed and wind direction at 5-minute intervals, and the spatial resolution of the west-to-east and north-to-south wind speed maps is 0.01 degrees.
3. The rainfall prediction method based on convolution according to claim 2, wherein in the S2, the multi-mode fusion structure comprises a convolution channel and a Concat layer which are sequentially connected, the convolution channel comprises a first channel, a second channel and a third channel, and the first channel to the third channel are connected with the Concat layer;
the first channel, the second channel and the third channel are each provided with a 3×3 convolution kernel.
4. A method of convolution-based rainfall prediction according to claim 3 characterised in that S2 comprises the sub-steps of:
s21, respectively inputting the accumulated rainfall map, the west-to-east wind speed map and the north-to-south wind speed map into the first channel to the third channel for convolution operation to obtain fourth to sixth characteristic maps;
s22, inputting the fourth to sixth feature maps into a Concat layer for accumulation and stacking operation, and obtaining a rainfall feature map.
5. The rainfall prediction method based on convolution according to claim 1, wherein in S3, the method of depth separable convolution processing specifically comprises:
and sequentially inputting the feature images of the depth separable convolution processing into two depth separable convolution modules to obtain a depth separable convolution processing result, wherein each depth separable convolution module comprises a channel-by-channel convolution and a point-by-point convolution which are connected with each other.
6. The method for predicting rainfall based on convolution according to claim 1, wherein in S3, the method for convolution attention processing specifically comprises:
the method comprises the steps of taking a feature diagram of convolution attention processing as a first input feature diagram, inputting the first input feature diagram into a convolution attention module to obtain a convolution attention processing result, wherein the convolution attention module comprises a channel attention sub-module and a space attention sub-module which are connected with each other;
the method for obtaining the convolution attention processing result comprises the following steps:
SA1, inputting a first input feature map to a channel attention sub-module to obtain a channel attention feature map;
SA2, inputting the channel attention feature map to a space attention sub-module to obtain a two-dimensional space attention map;
and SA3, performing cross multiplication calculation of the two-dimensional spatial attention map with the product of the channel attention feature map and the first input feature map to obtain a convolution attention processing result.
7. The convolution-based rainfall prediction method of claim 6, wherein in SA1, the channel attention submodule includes a first average pooling layer, a first maximum pooling layer and a multi-layer perception network, and the first average pooling layer and the first maximum pooling layer are connected with the multi-layer perception network;
the SA1 specifically comprises the following components:
inputting a first input feature map to a first average pooling layer and a first maximum pooling layer to obtain a seventh feature map and an eighth feature map, inputting the seventh feature map and the eighth feature map to a multi-layer sensing network to obtain a ninth feature map and a tenth feature map, and performing accumulation and sigmoid activation function processing on the ninth feature map and the tenth feature map to obtain a channel attention feature map;
in SA2, the spatial attention submodule includes a second average pooling layer, a second maximum pooling layer and a first convolution which are connected with each other, wherein the first convolution is connected with the second average pooling layer and the second maximum pooling layer, and the convolution kernel of the first convolution is 7×7;
the SA2 specifically comprises:
performing cross multiplication calculation on the channel attention feature map and the first input feature map to obtain an eleventh feature map, respectively inputting the eleventh feature map into a second average pooling layer and a second maximum pooling layer to obtain a twelfth feature map and a thirteenth feature map, performing calculation on the twelfth feature map and the thirteenth feature map through first convolution to obtain a fourteenth feature map, and performing sigmoid activation function processing on the fourteenth feature map to obtain a two-dimensional space attention map;
the SA3 specifically comprises:
and performing cross multiplication calculation on the two-dimensional space attention map and the eleventh characteristic map to obtain a convolution attention processing result.
8. The method of claim 1, wherein in S3, the long-short-term memory network module includes two interconnected long-short-term memory networks, each including an input gate, a forget gate, and an output gate in parallel;
the method for processing the feature map by each long-term and short-term memory network specifically comprises the following steps:
SB1, taking a characteristic diagram processed by a long-term and short-term memory network as an input characteristic;
SB2, inputting the input characteristic and the output characteristic at the last moment into a forgetting gate to obtain a first numerical value;
SB3, inputting the input characteristic and the output characteristic at the last moment into an input gate to obtain a second numerical value and a current new information state, and obtaining the current reserved new information according to the second numerical value and the current new information state;
SB4, obtaining the stored information according to the currently retained new information, the first value and the retained information state at the previous moment;
SB5, inputting the input characteristics and the output characteristics of the last moment into an output gate to obtain the characteristic information to be output;
SB6, obtaining output characteristics according to the characteristic information and the storage information to be output, and taking the output characteristics as the result of processing the characteristic diagram by the long-term and short-term memory network.
9. The method of claim 8, wherein in SB2, the first value $f_t$ is obtained by the expression:

$$f_t = \sigma\left(W_f \cdot [H_{t-1}, X_t] + b_f\right)$$

where $\sigma$ is the sigmoid activation function, $W_f$ is the first weight, $H_{t-1}$ is the output feature at the previous moment, $X_t$ is the input feature, $t$ is the current moment, and $b_f$ is the first bias; the first value ranges from 0 to 1;

in SB3, the second value $i_t$ is obtained by the expression:

$$i_t = \sigma\left(W_i \cdot [H_{t-1}, X_t] + b_i\right)$$

where $W_i$ is the second weight and $b_i$ is the second bias;

the current new information state $\tilde{C}_t$ is obtained by the expression:

$$\tilde{C}_t = \tanh\left(W_c \cdot [H_{t-1}, X_t] + b_c\right)$$

where $W_c$ is the third weight and $b_c$ is the third bias;

the currently retained new information is obtained as:

$$i_t * \tilde{C}_t$$

in SB4, the stored information $C_t$ is obtained by the expression:

$$C_t = f_t * C_{t-1} + i_t * \tilde{C}_t$$

where $C_{t-1}$ is the retained information state at the previous moment;

in SB5, the feature information to be output $O_t$ is obtained by the expression:

$$O_t = \sigma\left(W_o \cdot [H_{t-1}, X_t] + b_o\right)$$

where $W_o$ is the fourth weight and $b_o$ is the fourth bias;

in SB6, the output feature $H_t$ is obtained by the expression:

$$H_t = O_t * \tanh(C_t)$$
CN202310541048.2A 2023-05-15 2023-05-15 Rainfall prediction method based on convolution Active CN116307267B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310541048.2A CN116307267B (en) 2023-05-15 2023-05-15 Rainfall prediction method based on convolution

Publications (2)

Publication Number Publication Date
CN116307267A true CN116307267A (en) 2023-06-23
CN116307267B CN116307267B (en) 2023-07-25

Family

ID=86790885

Country Status (1)

Country Link
CN (1) CN116307267B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111563507A (en) * 2020-04-14 2020-08-21 浙江科技学院 Indoor scene semantic segmentation method based on convolutional neural network
CN111915619A (en) * 2020-06-05 2020-11-10 华南理工大学 Full convolution network semantic segmentation method for dual-feature extraction and fusion
CN112861727A (en) * 2021-02-09 2021-05-28 北京工业大学 Real-time semantic segmentation method based on mixed depth separable convolution
CN113095586A (en) * 2021-04-23 2021-07-09 华风气象传媒集团有限责任公司 Short-term multi-meteorological-element forecasting method based on deep neural network
CN113283435A (en) * 2021-05-14 2021-08-20 陕西科技大学 Remote sensing image semantic segmentation method based on multi-scale attention fusion
CN114187275A (en) * 2021-12-13 2022-03-15 贵州大学 Multi-stage and multi-scale attention fusion network and image rain removing method
CN114462578A (en) * 2022-02-10 2022-05-10 南京信息工程大学 Method for improving forecast precision of short rainfall
CN115310724A (en) * 2022-10-10 2022-11-08 南京信息工程大学 Precipitation prediction method based on Unet and DCN _ LSTM
CN115578384A (en) * 2022-11-30 2023-01-06 长春工业大学 UNet brain tumor image segmentation algorithm based on global and local feature fusion
CN115641498A (en) * 2022-09-01 2023-01-24 福建师范大学 Medium-term rainfall forecast post-processing correction method based on space multi-scale convolutional neural network
CN115761261A (en) * 2022-11-27 2023-03-07 东南大学 Short-term rainfall prediction method based on radar echo diagram extrapolation
CN115907189A (en) * 2022-12-06 2023-04-04 成都信息工程大学 Strong wind forecast correction method based on hybrid model

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
DO NGOC TUYEN et al.: "RainPredRNN: A new approach for precipitation nowcasting with weather radar echo images based on deep learning", 《》, vol. 11, no. 3, pages 1-14 *
HONGSHU CHE et al.: "ED-DRAP: Encoder-Decoder deep residual attention prediction network for radar echoes", 《IEEE GEOSCIENCE AND REMOTE SENSING LETTERS》, vol. 19, pages 1-5, XP011898791, DOI: 10.1109/LGRS.2022.3141498 *
JIANHAO MA et al.: "A Precipitation Prediction Method Based on UNet and Attention Mechanism", 《2022 IEEE INTL CONF ON DEPENDABLE》, pages 1-4 *
ZHIYUN YANG et al.: "A self-attention integrated spatiotemporal LSTM approach to edge-radar echo extrapolation in the Internet of Radars", 《ISA TRANSACTIONS》, pages 155-166 *
ZHOU YIFENG: "Research on monocular depth estimation algorithms based on deep learning" (in Chinese), 《China Master's Theses Full-text Database, Information Science and Technology》, no. 1, pages 138-2800 *

Also Published As

Publication number Publication date
CN116307267B (en) 2023-07-25

Similar Documents

Publication Publication Date Title
CN109377530B (en) Binocular depth estimation method based on depth neural network
CN110084210B (en) SAR image multi-scale ship detection method based on attention pyramid network
CN109271933B (en) Method for estimating three-dimensional human body posture based on video stream
CN112435282B (en) Real-time binocular stereo matching method based on self-adaptive candidate parallax prediction network
CN113033570A (en) Image semantic segmentation method for improving fusion of void volume and multilevel characteristic information
CN111462324B (en) Online spatiotemporal semantic fusion method and system
CN110889198B (en) Space-time probability distribution prediction method and system based on multi-factor joint learning
CN112288776B (en) Target tracking method based on multi-time step pyramid codec
CN112560865B (en) Semantic segmentation method for point cloud under outdoor large scene
CN112991350A (en) RGB-T image semantic segmentation method based on modal difference reduction
CN107341513B (en) Multi-source ocean surface temperature remote sensing product fusion method based on stable fixed order filtering model
CN113283525A (en) Image matching method based on deep learning
CN114445634A (en) Sea wave height prediction method and system based on deep learning model
CN111797841A (en) Visual saliency detection method based on depth residual error network
CN115792913B (en) Radar echo extrapolation method and system based on space-time network
CN115223017B (en) Multi-scale feature fusion bridge detection method based on depth separable convolution
CN116486102A (en) Infrared dim target detection method based on mixed spatial modulation characteristic convolutional neural network
CN114842284A (en) Attention mechanism and DCGAN-based steel rail surface defect image expansion method
CN111627055A (en) Scene depth completion method based on semantic segmentation
CN116307267B (en) Rainfall prediction method based on convolution
CN116699731B (en) Tropical cyclone path short-term forecasting method, system and storage medium
CN114742206B (en) Rainfall intensity estimation method for comprehensive multi-time space-scale Doppler radar data
CN116863241A (en) End-to-end semantic aerial view generation method, model and equipment based on computer vision under road scene
CN116933621A (en) Urban waterlogging simulation method based on terrain feature deep learning
CN115631412A (en) Remote sensing image building extraction method based on coordinate attention and data correlation upsampling

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant