CN112053375A - Method and equipment for nowcasting prediction based on an improved network convolution model - Google Patents

Method and equipment for nowcasting prediction based on an improved network convolution model

Info

Publication number
CN112053375A
CN112053375A
Authority
CN
China
Prior art keywords
data
convolution
layer
ppi
input image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010871067.8A
Other languages
Chinese (zh)
Inventor
何娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Eye Control Technology Co Ltd
Original Assignee
Shanghai Eye Control Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Eye Control Technology Co Ltd filed Critical Shanghai Eye Control Technology Co Ltd
Priority to CN202010871067.8A priority Critical patent/CN112053375A/en
Publication of CN112053375A publication Critical patent/CN112053375A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/89Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/89Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • G01S13/90Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
    • G01S13/9094Theoretical aspects
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01WMETEOROLOGY
    • G01W1/00Meteorology
    • G01W1/10Devices for predicting weather conditions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10044Radar image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30181Earth observation
    • G06T2207/30192Weather; Meteorology

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Electromagnetism (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Environmental & Geological Engineering (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Atmospheric Sciences (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Ecology (AREA)
  • Environmental Sciences (AREA)
  • Image Analysis (AREA)

Abstract

A method and a device for nowcasting prediction based on an improved network convolution model are provided. The method determines the convolution kernel size and the input image size of the original network convolution model; determines an edge region in a specified azimuth of the input image according to the convolution kernel size and the input image size; establishes an improved network convolution model, wherein the improved network convolution model performs a corresponding convolution operation on the edge region in the specified azimuth; and inputs the determined network training data into the improved network convolution model for sequence learning to obtain forecast images of several future frames. The problem of abrupt data change at the edge corresponding to the radar scanning start position is thus avoided, and the prediction accuracy of the nowcast is improved.

Description

Method and equipment for nowcasting prediction based on an improved network convolution model
Technical Field
The present application relates to the field of computers, and in particular to a method and equipment for nowcasting prediction based on an improved network convolution model.
Background
With the development of remote-sensing technology, radar has become an important means of ground-based weather detection. The Doppler weather radar uses the Doppler shift between the echo frequency and the transmitted frequency of precipitation to measure the radial velocity of precipitation particles, and uses the resulting velocity maps to determine the wind speed distribution, vertical airflow speed, atmospheric turbulence, precipitation particle spectrum, and wind-field structure within precipitation, especially strong convective precipitation. The signal reflected back to the radar is called the radar echo; its strength and structure reflect the structure of the weather and precipitation. By extrapolating the evolution of the radar echo map over time, a forecast of the near-future weather evolution can readily be obtained; such a process is generally referred to as nowcasting.
The display modes of typical Doppler radar products include PPI (Plan Position Indicator), CAPPI (Constant Altitude PPI), CR (Combined Reflectivity), and the like. With the development of machine learning, many nowcasts are now produced through deep learning with neural networks, and the data used for prediction are usually fixed-height CAPPI or CR. Because the volume scan of a Doppler weather radar is conical, CAPPI, VCS, or CR data are generated from PPI data by interpolation, and part of the original data is lost, so these products suffer from high uncertainty and low representativeness; in particular, the combined reflectivity factor only includes the basic reflectivity factors of one or two elevation angles. When PPI data are used directly, the problems of multiple elevation angles and data sparseness arise, which causes edge mutation in the images generated during forecasting.
Disclosure of Invention
An object of the present application is to provide a method and a device for nowcasting prediction based on an improved network convolution model, solving the problems in the prior art that the original convolution structure causes abrupt data changes at the edge corresponding to the radar scanning start position when the network learns PPI data, that the data used by existing nowcasting methods carry large uncertainty, and that the combined reflectivity factor, which only includes the basic reflectivity factors of one or two elevation angles, is not strongly representative.
According to one aspect of the present application, there is provided a method for predicting a nowcasting based on an improved network convolution model, the method comprising:
determining the convolution kernel size and the input image size of the original network convolution model used;
determining an edge area on a specified azimuth of the input image according to the convolution kernel size and the input image size;
establishing an improved network convolution model, wherein the improved network convolution model carries out corresponding convolution operation on the edge area on the specified azimuth;
and inputting the determined network training data into the improved network convolution model for sequence learning to obtain forecast images of a plurality of frames in the future.
Further, the edge region in the specified azimuth includes an upper edge region, a lower edge region, a left edge region, and a right edge region, and performing the corresponding convolution operation on the edge region in the specified azimuth includes:
performing a convolution operation on the left edge region and the right edge region with an edge padding value of zero;
and padding the edge padding value of the lower edge region into the upper edge region to generate a reel image, and performing a convolution operation on the reel image.
Further, before inputting the determined network training data into the improved network convolution model for sequence learning, the method includes:
determining multilayer PPI original data at each moment in the target quantity moments;
and determining network training data according to the multiple layers of PPI original data at each moment.
Further, determining network training data according to the multiple layers of PPI raw data at each time, including:
carrying out mapping processing of a target layer elevation angle on the multi-layer PPI original data at each moment to obtain mapping data corresponding to each moment;
and splicing the channels of the mapping data from the first layer to the last layer at all times to serve as network training data.
Further, determining a plurality of layers of PPI raw data at each of the target number of time instants includes:
reading radar scanning data of N elevation angles at each of a target number of times, wherein the radar scanning data comprise images whose size is determined by the radar azimuth scanning frequency and the equidistant sampling points on rays, and N is a positive integer;
and determining the multiple layers of PPI raw data at each of the target number of times according to the radar scanning data of the N elevation angles at each of those times.
Further, the input image size is determined by the radar azimuth scanning frequency and equidistant sampling points on the ray.
Further, determining a non-edge region of the input image and an edge region in a specified azimuth according to the convolution kernel size and the input image size includes:
when the convolution kernel size is m × k and the input image size is L × R, the region whose abscissa runs from ⌊m/2⌋ + 1 to L − ⌊m/2⌋ and whose ordinate runs from ⌊k/2⌋ + 1 to R − ⌊k/2⌋ is the non-edge region of the input image, where L is the number of equidistant sampling points on a ray and R is the radar azimuth scanning frequency;
and determining the edge region in the specified azimuth according to the input image size and the determined non-edge region of the input image.
Further, performing the mapping to a target-layer elevation angle on the multi-layer PPI raw data at each time to obtain the mapping data corresponding to each time includes:
performing noise preprocessing on the N layers of PPI raw data to obtain N layers of PPI de-noised data;
mapping the elevation angle of any layer in the N layers of PPI de-noised data to the layer-i elevation angle to obtain mapping data whose first layer is layer i and whose last layer is layer M, wherein 1 ≤ i ≤ N and M = N − i;
and splicing the channels of the mapping data from the first layer to the last layer at all times includes:
splicing the channels of the mapping data from layer i to layer M at all times.
According to another aspect of the present application, there is also provided an apparatus for predicting a nowcasting based on an improved network convolution model, the apparatus including:
the acquisition device is used for determining the convolution kernel size and the input image size of the used original network convolution model;
determining means for determining a non-edge region of the input image and an edge region in a specified azimuth according to the convolution kernel size and the input image size;
the establishing device is used for establishing an improved network convolution model, wherein the improved network convolution model carries out corresponding convolution operation on the edge area on the specified azimuth;
and the prediction device is used for inputting the determined network training data into the improved network convolution model for sequence learning to obtain forecast images of a plurality of frames in the future.
According to yet another aspect of the present application, there is also provided a computer readable medium having computer readable instructions stored thereon, the computer readable instructions being executable by a processor to implement the method as described above.
Compared with the prior art, the present application determines the convolution kernel size and the input image size of the original network convolution model; determines an edge region in a specified azimuth of the input image according to the convolution kernel size and the input image size; establishes an improved network convolution model, wherein the improved network convolution model performs the corresponding convolution operation on the non-edge region and on the edge region in the specified azimuth; and inputs the determined network training data into the improved network convolution model for sequence learning to obtain forecast images of several future frames. The problem of abrupt data change at the edge corresponding to the radar scanning start position is thus avoided, and the prediction accuracy of the nowcast is improved.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 illustrates a flow diagram of a method for predicting nowcasting based on an improved network convolution model provided in accordance with an aspect of the present application;
FIG. 2 is a diagram illustrating the convolution effect of a reel convolution in one embodiment of the present application;
fig. 3 is a schematic diagram illustrating a process of performing noise preprocessing on the N layers of PPI raw data according to an embodiment of the present application;
FIG. 4 is a flow chart illustrating the prediction of PPI data using an improved network convolution model according to an embodiment of the present application;
fig. 5 is a schematic structural diagram illustrating an apparatus for predicting a nowcast based on an improved network convolution model according to another aspect of the present application.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present application is described in further detail below with reference to the attached figures.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (e.g., Central Processing Units (CPUs)), input/output interfaces, network interfaces, and memory.
The Memory may include volatile Memory in a computer readable medium, Random Access Memory (RAM), and/or nonvolatile Memory such as Read Only Memory (ROM) or flash Memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, Phase-Change RAM (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technology, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
Fig. 1 is a flow chart illustrating a method for predicting a nowcast based on an improved network convolution model according to an aspect of the present application, the method including: step S11 to step S14,
in step S11, the convolution kernel size and the input image size of the original network convolution model used are determined; here, the original network convolution model cannot capture information at the other end of the image at the edge portion during the convolution process, when the original network convolution model is improved in the present application, the convolution in the network convolution model is replaced by a reel convolution, and when the original network convolution model is improved to be the reel convolution, it is first necessary to determine the convolution kernel size of the model and the input image size input to the model, that is, determine the convolution kernel size of the convolution kernel and the input image size of the convolution, so as to perform a targeted convolution processing on the input image, where the input image is a scanned image obtained by radar scanning or an optimized image obtained by processing the scanned image, and more preferably, is a PPI image, and PPI (plan indicator) is a plane Position display, and PPI data is data obtained by scanning the radar for one turn at a certain elevation angle.
In step S12, an edge region in a specified azimuth of the input image is determined from the convolution kernel size and the input image size. Here, the non-edge region of the input image and the edge region in the specified azimuth are calculated from the convolution kernel size and the input image size, and the original convolution is modified into a reel convolution based on the determined regions.
Next, in step S13, an improved network convolution model is established, wherein the improved network convolution model performs the corresponding convolution operation on the edge region in the specified azimuth. The improved network convolution model may perform the corresponding convolution operation on the determined edge region, thereby avoiding the problem of edge mutation, while the original network convolution model, or another approach, may be used for the non-edge region. Through this improvement of the convolution in the model, the established improved network convolution model trains better on PPI data.
Subsequently, in step S14, the determined network training data are input into the improved network convolution model for sequence learning to obtain forecast images of several future frames. The improved network convolution model is used to train on input data, which are the determined network training data processed from the PPI data read for weather prediction, so that forecast images of several future frames are obtained through sequence learning.
In an embodiment of the present application, the edge regions in the specified azimuth include an upper edge region, a lower edge region, a left edge region, and a right edge region. In step S13, a convolution operation is performed on the left edge region and the right edge region with an edge padding value of zero, while the edge padding value of the lower edge region is filled into the upper edge region to generate a reel image, on which a convolution operation is performed. The edge regions in the specified azimuth comprise the upper and lower edge regions and the left and right edge regions of the input image, and the edge region in each azimuth is processed in a targeted way, so that the original convolution structure is optimized into a reel convolution structure. The original convolution in the network is replaced by a reel convolution because the original convolution cannot capture information from the opposite end of the image at the edge portion during convolution; in the reel convolution, the two-dimensional image padding is rolled into a reel shape during edge-padding learning, so that information from the actual neighborhood can be captured when learning the edge information. The method further comprises: performing the convolution operation on the non-edge region using the original network convolution model.
Specifically, in the non-edge region the convolution window does not overflow the image, and the convolution operation on this region uses the original network convolution structure. For the left and right edge regions, because the abscissa is the spatial ray distance and no correlation exists across the border, the left and right edge portions are convolved with zero edge padding (padding = 0). For the edge portions on the upper and lower sides, when processing the scanning azimuth r = 0 degrees, for example, the nearest azimuths are 1 degree and 359 degrees, so the lower edge padding of the PPI image is filled into the upper edge region of the image to generate a reel image, on which the convolution operation is performed.
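The padding scheme described here (zero padding at the left and right range edges, wrap padding across the azimuth start) can be sketched as a small helper. The function name and the row/column orientation (rows as azimuth, columns as range samples) are illustrative assumptions, not the patent's implementation:

```python
def reel_pad(img, m, k):
    """Pad a 2-D PPI matrix (nested lists) for a reel convolution
    with an m x k kernel.

    Assumed layout (hypothetical): rows index the azimuth angle, which
    wraps around (the 0-degree ray adjoins the 359-degree ray), so rows
    are padded by wrapping; columns index range samples, which have no
    correlation across the image border, so columns get zero padding.
    """
    pad_r, pad_c = m // 2, k // 2
    top = img[-pad_r:] if pad_r else []     # bottom rows wrapped to the top
    bottom = img[:pad_r]                    # top rows wrapped to the bottom
    rows = top + list(img) + bottom
    return [[0.0] * pad_c + list(r) + [0.0] * pad_c for r in rows]
```

Applying a standard m × k convolution to the padded matrix then reproduces the reel-convolution behaviour on the original L × R image.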
In an embodiment of the present application, before the determined network training data are input into the improved network convolution model for sequence learning, multiple layers of PPI raw data at each of a target number of times may be determined, and the network training data are then determined from the multiple layers of PPI raw data at each time. The improved network convolution model is preferably used to process PPI data images, which can be determined by reading multiple layers of PPI raw data. Specifically, the target number is the number of selected times; if 5 future frames are to be forecast, the target-number times are selected as the five historical times T, T + 1, T + 2, T + 3, and T + 4, from which the 5 future frames of weather forecast images are predicted. In the embodiment of the present application, the prediction data are acquired as PPI data at several elevation angles at each time, generating multiple layers of PPI raw data as the prediction data source for nowcasting. Because PPI raw data carry no uncertainty caused by data loss or frame interpolation, the amount of information available for subsequent network learning, together with converting the generated PPI data into CAPPI, gives a better effect than training directly with CAPPI. The obtained multi-layer PPI raw data are de-noised, the de-noised data are mapped, and the network training data are then obtained from the mapped data.
Continuing with the above embodiment, mapping to a target-layer elevation angle may be performed on the multiple layers of PPI raw data at each time to obtain the mapping data corresponding to each time, and the channels of the mapping data from the first layer to the last layer at all times are spliced as the network training data. Taking the multi-layer PPI raw data at time T as an example, the multi-layer PPI raw data at that time are mapped; the mapping may be performed directly, or the multi-layer PPI raw data may first be de-noised and the de-noised data then mapped to the target-layer elevation angle. The target-layer elevation angle is one of the obtained PPI layers, and the de-noised data at the other elevation angles are mapped to that elevation angle, yielding the mapping data at time T. The same mapping is applied to the multi-layer PPI raw data at the other times to obtain the mapping data corresponding to each time. The first layer at each time refers to the target layer selected for the multi-layer elevation mapping at that time, for example layer i; the mapping data of the first layer are the de-noised PPI data of layer i, the second layer is layer i + 1, whose mapping data are the data mapped from layer i + 1 to layer i, and so on for the subsequent layers and their corresponding mapping data. The multi-layer mapping data are spliced, and the spliced data serve as the network training data.
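One way to realize the elevation-to-elevation mapping is geometric re-binning by horizontal distance. The formula below is purely an illustrative assumption, since the patent only states that other elevations are "mapped to the layer-i elevation" without specifying how:

```python
import math

def map_ray_to_layer(ray, elev_j_deg, elev_i_deg):
    """Map one PPI ray sampled at elevation j onto the elevation-i layer.

    Hypothetical geometry: a sample at slant index n on layer j covers a
    horizontal distance proportional to n * cos(e_j), so it is re-binned
    to the layer-i index n' = round(n * cos(e_j) / cos(e_i)).
    """
    out = [0.0] * len(ray)
    cj = math.cos(math.radians(elev_j_deg))
    ci = math.cos(math.radians(elev_i_deg))
    for n, value in enumerate(ray):
        n2 = round(n * cj / ci)
        if 0 <= n2 < len(out):
            out[n2] = max(out[n2], value)  # keep the strongest echo on collisions
    return out
```

Mapping a layer onto itself (equal elevations) leaves the ray unchanged, which matches the patent's convention that the layer-i data serve directly as the first spliced layer.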
In an embodiment of the present application, when determining the multiple layers of PPI raw data at each of the target number of times, radar scan data at N elevation angles may be read at each time, where the radar scan data comprise an image whose size is determined by the radar azimuth scanning frequency and the equidistant sampling points on a ray, and N is a positive integer; the multiple layers of PPI raw data at each time are then determined from the radar scan data at the N elevation angles at each time. Taking time T as an example, the radar scan data at N elevation angles at time T are read. The Doppler radar performs a conical scan at each elevation angle, and the number of elevation angles can be set freely; suppose N elevation angles are set at time T, with the bottommost elevation angle as the first-layer elevation angle. Each frame of data represents the echo signal received after the radar scans one full turn at a certain time. The signal read at each elevation angle is a matrix: the abscissa is the equidistant sampling points on a ray, denoted L, with the sampling interval denoted l0 (if sampling is performed every 300 m along the ray, l0 = 300 m); the ordinate is the scanning azimuth angle, denoted R, ranging over 0 to 360 degrees for one full turn, with azimuth scanning frequency R; and the vertical height corresponding to each sampling point is denoted h. An image of size L × R is thus obtained at each elevation angle.
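The way the L × R frame size follows from the scan parameters can be sketched as below. All concrete numbers here (150 km maximum range, 1-degree azimuth steps) are illustrative assumptions; only the 300 m sampling interval appears in the text above:

```python
def ppi_image_size(max_range_m=150_000, l0=300.0, azimuth_step_deg=1.0):
    """Size of one PPI frame, L x R. The default maximum range and
    azimuth step are hypothetical; l0 = 300 m matches the example in
    the description.
    """
    L = int(max_range_m // l0)       # equidistant sampling points on a ray
    R = int(360 / azimuth_step_deg)  # azimuth rows for one full scan
    return L, R
```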
It should be noted that the acquired N elevation angles include bottom, middle, and top elevation angles, where each of these may comprise one elevation angle or several; for example, if radar scanning in a certain area uses 15 elevation angles, the first 3 may be taken as the bottom elevation angles.
With the optimized PPI image obtained through the above embodiment, the input image size used by the improved network convolution model is determined by the radar azimuth scanning frequency and the equidistant sampling points on a ray, ensuring that the input image size is consistent with the original PPI image and that the image size is fixed.
In an embodiment of the present application, assuming that the convolution kernel size is m × k and the input image size is L × R, the region whose abscissa runs from ⌊m/2⌋ + 1 to L − ⌊m/2⌋ and whose ordinate runs from ⌊k/2⌋ + 1 to R − ⌊k/2⌋ is the non-edge region of the input image, where L is the number of equidistant sampling points on a ray and R is the radar azimuth scanning frequency; the edge region in the specified azimuth is then determined from the input image size and the determined non-edge region. Within the non-edge region the convolution window does not overflow the image, so the original convolution operation is used for this part of the area. For the left and right edge portions, convolution with padding = 0 is adopted. For the upper and lower edge portions, the lower edge of the PPI image is padded to the upper side of the image, i.e., the image changes from L × R to a size of (L + m − 1) × R, and the convolution operation is then performed, so that the upper and lower edge portions can acquire the information at the bottom of the image, as shown in Fig. 2.
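The non-edge region described here, the set of centre positions where the convolution window stays entirely inside the image, can be computed directly from the kernel and image sizes. The 1-based, inclusive indexing is an assumption chosen to match the "from ... to ..." phrasing of the description:

```python
def non_edge_region(m, k, L, R):
    """Bounds (1-based, inclusive) of the non-edge region: the centre
    positions at which an m x k convolution window stays entirely inside
    an L x R image. Odd kernel sizes are assumed.
    """
    return (m // 2 + 1, L - m // 2), (k // 2 + 1, R - k // 2)
```

Everything outside these bounds belongs to the edge regions, which receive the zero padding or reel padding described above.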
In an embodiment of the present application, noise preprocessing may be performed on the N layers of PPI raw data to obtain N layers of PPI de-noising data; and mapping any layer elevation angle in the N layers of PPI de-noising data to an i layer elevation angle to obtain mapping data which takes the i layer as a first layer and takes M layers as a last layer, wherein i is more than or equal to 1 and less than or equal to N, and M = N-i. In the method, noise preprocessing is performed on N layers of PPI raw data at the time T, so that noise caused by equipment or interference of ground object noise, solar rays and the like in the radar scanning process is removed. Fixed position noise can be removed by adopting a horse-type distance, a value smaller than 10dBz is removed to remove non-fixed position noise, and a remained denoised radar echo diagram is cleaner and less interfered data, so that the follow-up network learning is facilitated, and the accuracy of prediction is improved. Mapping the denoised PPI data, namely mapping the elevation angle of any other layer (marked as a j layer) to the elevation angle of an i layer, using the denoised PPI data of the i layer as spliced first layer data, using the mapping data of the denoised PPI data of the (i + 1) th layer as second layer data, and sequentially using the mapping data of the denoised PPI data of the N layer as M layer data, wherein M = N-i; for example, if N is 10 and i is 5, the spliced mapping data of the first layer is the original layer 5 data, and the spliced mapping data of the layer 5 is the mapping data obtained by mapping the original layer 10 data to the original layer 5. Fig. 
3 is a schematic diagram illustrating the process of performing noise preprocessing on the N layers of PPI raw data in an embodiment of the present application: image de-noising is performed on the PPI data at the layer-1 elevation angle, the layer-2 elevation angle, … …, up to the layer-N elevation angle, and each de-noised image is then mapped to the influence range of the layer-i elevation angle.
In an embodiment of the present application, splicing the channels of the mapping data from the first layer to the last layer at each time amounts to splicing the channels of the mapping data from the ith layer to the Mth layer. The spliced data is used as the input of the prediction generation network for model training. Because the nowcasting task learns to predict several future frames from several frames at preceding times, the network input in this embodiment covers a plurality of times: after the multi-layer PPI data corresponding to each of the target number of times is fused, the fusion results at all times are input into the improved network convolution model for training. This alleviates the problem of insufficient data information in PPI at a single elevation angle, improves the quality of network learning, and improves the accuracy of nowcasting.
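The channel splicing can be sketched as below. The tensor layout (times, layers, L, R) and all names are illustrative; the patent does not specify a concrete data layout.

```python
import numpy as np

def splice_channels(mapped_by_time):
    """Stack the M mapped PPI layers of each time step along a channel axis,
    then stack the time steps, giving a (T, M, L, R) tensor suitable as
    network input. `mapped_by_time` is a list (one entry per time step) of
    lists of M arrays, each of shape (L, R)."""
    return np.stack([np.stack(layers, axis=0) for layers in mapped_by_time])
```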
In an embodiment of the present application, as shown in fig. 4, the original PPI data at T = 0, 1, 2, 3 and 4 is read, the original PPI data at each time is preprocessed to obtain the corresponding layer-i PPI data, the layer-i PPI data at all times is concatenated and then input into the above improved network convolution model for sequence learning, and the predicted layer-i PPI data at T = 5, 6, 7, 8 and 9 is output. In the present application, the data form in the improved network convolution model is changed from CAPPI to mapped PPI data. Because PPI data is free of the uncertainty that data loss and frame interpolation introduce into the raw data, training the network on PPI and converting the generated PPI data into CAPPI yields more learning information and a better result than training directly on CAPPI. Meanwhile, the common convolution structure is improved into a drum-shaped convolution structure, which avoids the problem of edge mutation. Because PPI is used as training data, the prediction of the nowcasting product model is not limited to the learning data: the predicted PPI can be further processed into the data formats of various display modes such as CAPPI, CR, RHI and VCS, so that the training data format is not restricted to a single one and the results can be displayed in multiple ways, solving the analysis limitation caused by a single training data format. In addition, fusing the multi-layer PPI data into the training of single-layer PPI data lets the model learn multi-dimensional spatio-temporal data, making the images generated by the network more stable.
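The fig. 4 workflow can be sketched end to end as follows. The model interface and the placeholder preprocessing are assumptions; the patent's actual preprocessing is the de-noising and elevation mapping described earlier.

```python
import numpy as np

def nowcast(raw_ppi_frames, model):
    """Sketch of fig. 4: preprocess the raw PPI frames at T = 0..4,
    stack them into an input sequence, and let a trained model emit the
    layer-i PPI frames for T = 5..9. `model` stands in for the improved
    network convolution model."""
    # Stand-in preprocessing (the real pipeline de-noises and maps layers).
    processed = [np.clip(frame, 0.0, None) for frame in raw_ppi_frames]
    x = np.stack(processed)   # (T_in, L, R) input sequence
    return model(x)           # expected shape: (T_out, L, R)
```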
In addition, an embodiment of the present application further provides a computer readable medium on which computer readable instructions are stored, the computer readable instructions being executable by a processor to implement the foregoing method for nowcasting prediction based on an improved network convolution model.
Corresponding to the method described above, the present application also provides a terminal, which includes modules or units capable of executing the steps of the method described in fig. 1 or in each embodiment; these modules or units can be implemented by hardware, software or a combination of hardware and software, and the present application is not limited in this respect. For example, in an embodiment of the present application, there is also provided an apparatus for nowcasting prediction based on an improved network convolution model, the apparatus including:
one or more processors; and
a memory storing computer readable instructions that, when executed, cause the processor to perform the operations of the method as previously described.
For example, the computer readable instructions, when executed, cause the one or more processors to:
determining the convolution kernel size and the input image size of the original network convolution model used;
determining an edge area on a specified azimuth of the input image according to the convolution kernel size and the input image size;
establishing an improved network convolution model, wherein the improved network convolution model carries out corresponding convolution operation on the edge area on the specified azimuth;
and inputting the determined network training data into the improved network convolution model for sequence learning to obtain forecast images of a plurality of frames in the future.
Fig. 5 is a schematic structural diagram of an apparatus for nowcasting prediction based on an improved network convolution model according to another aspect of the present application, where the apparatus includes: an acquisition device 11, a determination device 12, an establishing device 13 and a prediction device 14. The acquisition device 11 is used for determining the convolution kernel size and the input image size of the original network convolution model used; the determination device 12 is used for determining the non-edge area of the input image and the edge area in the specified azimuth according to the convolution kernel size and the input image size; the establishing device 13 is configured to establish an improved network convolution model, where a corresponding convolution operation is performed on the edge area in the specified azimuth; and the prediction device 14 is configured to input the determined network training data into the improved network convolution model for sequence learning, so as to obtain forecast images of several frames in the future.
It should be noted that the content executed by the acquisition device 11, the determination device 12, the establishing device 13 and the prediction device 14 is the same as or corresponds to the content of steps S11, S12, S13 and S14 above, respectively; for brevity, it is not repeated here.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, implemented using Application Specific Integrated Circuits (ASICs), general purpose computers or any other similar hardware devices. In one embodiment, the software programs of the present application may be executed by a processor to implement the steps or functions described above. Likewise, the software programs (including associated data structures) of the present application may be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Additionally, some of the steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
In addition, some of the present application may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or techniques in accordance with the present application through the operation of the computer. Program instructions which invoke the methods of the present application may be stored on a fixed or removable recording medium and/or transmitted via a data stream on a broadcast or other signal-bearing medium and/or stored within a working memory of a computer device operating in accordance with the program instructions. An embodiment according to the present application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform a method and/or a solution according to the aforementioned embodiments of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (10)

1. A method for nowcasting prediction based on an improved network convolution model, the method comprising:
determining the convolution kernel size and the input image size of the original network convolution model used;
determining an edge area on a specified azimuth of the input image according to the convolution kernel size and the input image size;
establishing an improved network convolution model, wherein the improved network convolution model carries out corresponding convolution operation on the edge area on the specified azimuth;
and inputting the determined network training data into the improved network convolution model for sequence learning to obtain forecast images of a plurality of frames in the future.
2. The method of claim 1, wherein the edge regions in the designated orientation comprise an upper edge region, a lower edge region, a left edge region, and a right edge region, and performing the corresponding convolution operation on the edge regions in the designated orientation comprises:
performing convolution operation on the left edge area and the right edge area in a convolution mode with zero edge filling values;
and filling the edge filling value of the lower edge area into the upper edge area to generate a scroll image, and performing convolution operation on the scroll image.
3. The method of claim 1, wherein prior to inputting the determined network training data into the improved network convolution model for sequence learning, comprising:
determining multilayer PPI original data at each moment in the target quantity moments;
and determining network training data according to the multiple layers of PPI original data at each moment.
4. The method of claim 3, wherein determining network training data from the plurality of layers of PPI raw data at each time instant comprises:
carrying out mapping processing of a target layer elevation angle on the multi-layer PPI original data at each moment to obtain mapping data corresponding to each moment;
and splicing the channels of the mapping data from the first layer to the last layer at all times to serve as network training data.
5. The method of claim 3, wherein determining a plurality of layers of PPI raw data for each of a target number of time instants comprises:
reading radar scanning data of N elevation angles at each moment in target quantity moments, wherein the radar scanning data comprise images with sizes determined by radar azimuth scanning frequency and equidistant sampling points on rays, and N is a positive integer;
and determining the multilayer PPI original data at each moment in the target quantity moments according to the radar scanning data of the N elevation angles at each moment in the target moments.
6. The method of claim 4, wherein the input image size is determined by the radar azimuth scanning frequency and equidistant sampling points on the ray.
7. The method of claim 6, wherein determining non-edge regions and edge regions at specified orientations of the input image based on the convolution kernel size and the input image size comprises:
when the convolution kernel size is m × k and the input image size is L × R, selecting, as the non-edge area of the input image, the area whose abscissa runs from [formula, rendered as an image in the original publication] to [formula] and whose ordinate runs from [formula] to [formula], wherein L is determined by the equidistant sampling points on a ray and R is the radar azimuth scanning frequency;
and determining an edge area in a designated direction according to the size of the input image and the determined non-edge area of the input image.
8. The method of claim 4, wherein performing a mapping process of a target layer elevation angle on the multi-layer PPI raw data at each time point to obtain mapping data corresponding to each time point comprises:
carrying out noise pretreatment on the N layers of PPI original data to obtain N layers of PPI de-noising data;
mapping any layer elevation angle in the N layers of PPI de-noising data to an i layer elevation angle to obtain mapping data which takes the i layer as a first layer and takes M layers as a last layer, wherein i is more than or equal to 1 and less than or equal to N, and M = N-i;
splicing the channels of the mapping data from the first layer to the last layer at all times, wherein the splicing comprises the following steps:
and splicing the channels of the mapping data from the ith layer to the Mth layer at all the moments.
9. An apparatus for nowcasting prediction based on an improved network convolution model, the apparatus comprising:
the acquisition device is used for determining the convolution kernel size and the input image size of the used original network convolution model;
determining means for determining an edge region in a specified orientation of the input image based on the convolution kernel size and the input image size;
the establishing device is used for establishing an improved network convolution model, wherein the improved network convolution model carries out corresponding convolution operation on the edge area on the specified azimuth;
and the prediction device is used for inputting the determined network training data into the improved network convolution model for sequence learning to obtain forecast images of a plurality of frames in the future.
10. A computer readable medium having computer readable instructions stored thereon which are executable by a processor to implement the method of any one of claims 1 to 8.
CN202010871067.8A 2020-08-26 2020-08-26 Method and equipment for nowcasting prediction based on improved network convolution model Pending CN112053375A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010871067.8A CN112053375A (en) Method and equipment for nowcasting prediction based on improved network convolution model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010871067.8A CN112053375A (en) Method and equipment for nowcasting prediction based on improved network convolution model

Publications (1)

Publication Number Publication Date
CN112053375A true CN112053375A (en) 2020-12-08

Family

ID=73599291

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010871067.8A Pending CN112053375A (en) Method and equipment for nowcasting prediction based on improved network convolution model

Country Status (1)

Country Link
CN (1) CN112053375A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114636981A (en) * 2022-02-28 2022-06-17 广东省气象台(南海海洋气象预报中心) Online deep learning typhoon center positioning system based on radar echo

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109064507A (en) * 2018-08-21 2018-12-21 北京大学深圳研究生院 A kind of flow depth degree convolutional network model method of doing more physical exercises for video estimation
US20190114818A1 (en) * 2017-10-16 2019-04-18 Adobe Systems Incorporated Predicting Patch Displacement Maps Using A Neural Network
CN110839156A (en) * 2019-11-08 2020-02-25 北京邮电大学 Future frame prediction method and model based on video image
CN111062410A (en) * 2019-11-05 2020-04-24 复旦大学 Star information bridge weather prediction method based on deep learning
CN111083479A (en) * 2019-12-31 2020-04-28 合肥图鸭信息科技有限公司 Video frame prediction method and device and terminal equipment
CN111127510A (en) * 2018-11-01 2020-05-08 杭州海康威视数字技术股份有限公司 Target object position prediction method and device
KR102093577B1 (en) * 2018-12-03 2020-05-15 이화여자대학교 산학협력단 Future video generating method based on neural network and future video producing appratus
CN111242038A (en) * 2020-01-15 2020-06-05 北京工业大学 Dynamic tongue tremor detection method based on frame prediction network
US20200242744A1 (en) * 2019-01-30 2020-07-30 Siemens Healthcare Gmbh Forecasting Images for Image Processing


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JINZHUO WANG et al.: "Predicting Diverse Future Frames With Local Transformation-Guided Masking", IEEE Transactions on Circuits and Systems for Video Technology, vol. 29, no. 12, 18 November 2018 (2018-11-18) *
Zhang Dezheng; Weng Liguo; Xia Min; Cao Hui: "Video frame prediction based on deep convolutional long short-term memory neural network", Journal of Computer Applications, no. 06, 10 April 2019 (2019-04-10) *
Hu Qi et al.: "Wind field prediction based on Doppler lidar", Laser & Infrared, vol. 42, no. 3, 31 March 2012 (2012-03-31), pages 268-273 *
Miao Kaichao; Han Tingting; Wang Chuanhui; Zhang Jun; Yao Yeqing; Zhou Jianping: "Fog nowcasting model based on LSTM network and its application", Computer Systems & Applications, no. 05, 15 May 2019 (2019-05-15) *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination