CN112487718B - Satellite image inversion radar combined reflectivity method based on deep learning - Google Patents
- Publication number
- CN112487718B CN112487718B CN202011357348.8A CN202011357348A CN112487718B CN 112487718 B CN112487718 B CN 112487718B CN 202011357348 A CN202011357348 A CN 202011357348A CN 112487718 B CN112487718 B CN 112487718B
- Authority
- CN
- China
- Prior art keywords
- data
- training
- model
- neural network
- satellite image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
- G06F30/27—Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/14—Fourier, Walsh or analogous domain transformations, e.g. Laplace, Hilbert, Karhunen-Loeve, transforms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/2155—Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the incorporation of unlabelled data, e.g. multiple instance learning [MIL], semi-supervised techniques using expectation-maximisation [EM] or naïve labelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The invention discloses a deep-learning-based method for inverting radar combined reflectivity from satellite images, comprising the steps of constructing a database, preprocessing the data in the database, constructing a data set for model training, training and optimizing on the data set, and outputting the result.
Description
Technical Field
The invention relates to the technical field of radar echo inversion, and in particular to a deep-learning-based method for inverting radar combined reflectivity from satellite images.
Background
A meteorological satellite can achieve global, large-scale observation. For example, a geostationary meteorological satellite at an altitude of about 36,000 km observes roughly 1.7×10⁸ km² of a fixed region of the Earth, about one third of the Earth's surface area. This large-scale observation allows regions such as oceans, deserts and high plateaus, which together account for four fifths of the Earth, to obtain meteorological data from satellite detection, giving deeper insight into global atmospheric activity.
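The coverage figure quoted above can be sanity-checked against the Earth's total surface area. The back-of-the-envelope computation below uses the standard 6371 km mean radius, which is an assumed value, not taken from the patent:

```python
import math

# Assumed standard value for the Earth's mean radius (not from the patent).
EARTH_RADIUS_KM = 6371.0
earth_surface_km2 = 4.0 * math.pi * EARTH_RADIUS_KM ** 2  # ≈ 5.1e8 km^2

# Geostationary-satellite coverage figure quoted in the text.
coverage_km2 = 1.7e8
fraction = coverage_km2 / earth_surface_km2

print(f"Earth surface ≈ {earth_surface_km2:.3e} km^2")
print(f"Coverage fraction ≈ {fraction:.3f}")  # ≈ 0.333, i.e. about one third
```

The ratio comes out at almost exactly one third, consistent with the statement in the text.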
Satellite images feature large data volume, rich content and diverse characteristics, but also considerable noise. Meanwhile, current radars have a limited observation range, and large gaps exist in radar network coverage in areas with sparse radar deployment, such as western regions.
Existing radar echo inversion techniques establish a relationship between some physical meteorological element and the radar echo. The principle is simple, but the result error is large, the accuracy is low and uncertainty increases. A deep-learning-based method for inverting radar combined reflectivity from satellite images is therefore urgently needed to solve these problems.
Disclosure of Invention
The invention provides a deep-learning-based method for inverting radar combined reflectivity from satellite images, which can effectively solve the problems identified in the background above: large result error, low accuracy and increased uncertainty in conventional radar echo inversion techniques.
In order to achieve the above purpose, the present invention provides the following technical solution: a deep-learning-based method for inverting radar combined reflectivity from satellite images, comprising the following steps:
S1, constructing a database: the database includes meteorological satellite image data and at least one year of real-time satellite images;
S2, preprocessing: performing smoothing image correction on the satellite images in the database;
S3, constructing a data set for model training: extracting multiple channel pixel points from the real-time satellite image data using ENVI software and storing the values of the extracted pixel points;
S4, data training: constructing an RNN-CNN model to train the data in the data set, specifically:
a. constructing a deep recurrent neural network model and a deep convolutional neural network model, wherein the deep recurrent neural network model outputs directly through a Dense layer, and this output is used directly as the input of the deep convolutional neural network;
b. constructing an integrated model framework with the Stacking algorithm as its main body;
c. connecting the deep recurrent neural network model and the deep convolutional neural network model in series under the framework of step b to obtain an integrated model, namely the RNN-CNN model;
d. iterating over the data set in the RNN-CNN model using the n-fold cross-validation method of the Stacking algorithm, and outputting the average of the iteration results;
S5, optimizing the training data of the RNN-CNN model with the XGBoost integration algorithm and outputting the optimized result.
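As a rough illustration of the serial RNN-to-CNN composition in steps a–c, the following NumPy sketch passes a toy channel-value sequence through a minimal recurrent layer, a Dense layer, and a single convolution. All dimensions, weights and layer sizes here are hypothetical placeholders, not the patent's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_forward(x_seq, W_x, W_h, b):
    """Minimal recurrent pass: x_seq has shape (T, D); returns the final hidden state."""
    h = np.zeros(W_h.shape[0])
    for x_t in x_seq:
        h = np.tanh(W_x @ x_t + W_h @ h + b)
    return h

def dense(h, W, b):
    """Dense (fully connected) layer."""
    return W @ h + b

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D cross-correlation standing in for the CNN stage."""
    H, W_ = img.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W_ - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Toy dimensions (hypothetical): T time steps of D-channel pixel vectors.
T, D, HID = 5, 4, 16
x_seq = rng.standard_normal((T, D))

h = rnn_forward(x_seq, rng.standard_normal((HID, D)),
                rng.standard_normal((HID, HID)), np.zeros(HID))
# The Dense-layer output is reshaped into an 8x8 "image" fed to the CNN stage,
# mirroring step a: RNN -> Dense -> input of the convolutional network.
img = dense(h, rng.standard_normal((64, HID)), np.zeros(64)).reshape(8, 8)
feat = conv2d_valid(img, rng.standard_normal((3, 3)))
print(feat.shape)  # (6, 6)
```

In a practical implementation the two stages would be built with a deep-learning framework; this sketch only demonstrates the data flow of the serial connection.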
Preferably, in step S1, the one year of data is divided into four season samples: a spring training sample, a summer training sample, an autumn training sample and a winter training sample.
Preferably, in step S2, the preprocessing comprises convolving the original satellite image with a positive spike function having negative side lobes as the filter function h(x, y). The spatial-domain expression is f̂(x, y) = g(x, y) * h(x, y), and the Fourier-domain expression of the processed image is F̂(u, v) = G(u, v)·H(u, v), where u and v are frequency-domain variables, G(u, v) is the Fourier transform of the satellite image g(x, y) to be processed, and F̂(u, v) approximates the Fourier transform of the original image f(x, y).
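The convolution theorem underlying the two expressions above can be sketched as follows. The 3×3 kernel is a hypothetical stand-in for the "positive spike with negative side lobes" filter, whose exact form the text does not specify:

```python
import numpy as np

rng = np.random.default_rng(42)
g = rng.random((64, 64))  # stand-in for one satellite image band g(x, y)

# Hypothetical "positive spike with negative side lobes" kernel (assumed, not
# from the patent): a central positive peak surrounded by small negative values.
# Its coefficients sum to 1.0, so the image mean is preserved.
h = np.full((3, 3), -0.05)
h[1, 1] = 1.4

# Convolution theorem: circular convolution in the spatial domain equals
# pointwise multiplication of the 2-D Fourier transforms, F(u,v) = G(u,v)·H(u,v).
H_pad = np.zeros_like(g)
H_pad[:3, :3] = h
H_pad = np.roll(H_pad, (-1, -1), axis=(0, 1))  # place the kernel center at the origin

F = np.fft.fft2(g) * np.fft.fft2(H_pad)
f_smooth = np.real(np.fft.ifft2(F))
print(f_smooth.shape)  # (64, 64)
```

Because the kernel coefficients sum to one, the zero-frequency component G(0, 0)·H(0, 0) equals G(0, 0), so the overall image mean is unchanged by the filtering.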
Preferably, in step S3, Q channel pixel points are extracted from the real-time satellite image data using ENVI software, combined with the length M and width N of the corresponding satellite picture, and stored as an M×N×Q .npy file.
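The storage format of step S3 can be sketched with NumPy. The dimensions below are arbitrary examples, and the ENVI extraction step is replaced by a random array:

```python
import os
import tempfile

import numpy as np

# Hypothetical dimensions: an M x N satellite picture with Q channels.
M, N, Q = 128, 128, 14
pixels = np.random.default_rng(1).random((M, N, Q)).astype(np.float32)

# Save the extracted channel values as an M x N x Q .npy file, as in step S3.
path = os.path.join(tempfile.mkdtemp(), "sample.npy")
np.save(path, pixels)

loaded = np.load(path)
print(loaded.shape)  # (128, 128, 14)
```

The .npy format preserves the array's dtype and shape exactly, so the saved file can serve directly as the training data set.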
Preferably, in step S4, the iterative processing of the data set with the RNN-CNN model specifically comprises: dividing the data set into n parts using the n-fold cross-validation method of the Stacking algorithm and performing n iterations; in each iteration, one part is taken as the test set and the remaining n-1 parts are input as the training set to train the integrated model; the n results are then averaged to obtain the output of the integrated RNN-CNN model.
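A minimal sketch of the n-fold scheme described above, with a trivial stand-in model in place of the RNN-CNN; the function and variable names are illustrative, not from the patent:

```python
import numpy as np

def n_fold_average(X, y, n, fit_predict):
    """Split the data into n folds; in each iteration hold one fold out as the
    test set, train on the remaining n-1 folds, and average the n results,
    as described in step S4-d. fit_predict is a placeholder for the RNN-CNN."""
    folds = np.array_split(np.arange(len(X)), n)
    results = []
    for k in range(n):
        test_idx = folds[k]
        train_idx = np.concatenate([folds[j] for j in range(n) if j != k])
        results.append(fit_predict(X[train_idx], y[train_idx], X[test_idx]))
    return np.mean([r.mean() for r in results])  # average of the n fold results

def dummy(X_train, y_train, X_test):
    """Stand-in model: predict the training-set mean for every test sample."""
    return np.full(len(X_test), y_train.mean())

X = np.arange(100, dtype=float).reshape(-1, 1)
y = np.arange(100, dtype=float)
print(n_fold_average(X, y, 5, dummy))  # 49.5
```

With this symmetric toy data the per-fold training means (59.5, 54.5, 49.5, 44.5, 39.5) average back to the overall mean of 49.5, illustrating how the n fold results are combined into a single output.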
Preferably, in step S5, when optimizing with the XGBoost integration algorithm, the .npy array value file output by the XGBoost integration algorithm is saved in a picture format.
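Converting the optimized .npy array into a picture format might look like the following 8-bit grayscale scaling. The 0–70 dBZ range is an assumed convention for combined reflectivity, not stated in the patent, and actually writing the image file could then use e.g. Pillow's `Image.fromarray`:

```python
import numpy as np

# Stand-in for the .npy array output in step S5: combined reflectivity values,
# here assumed to lie in a hypothetical 0-70 dBZ range.
refl = np.random.default_rng(7).uniform(0.0, 70.0, size=(256, 256))

# Linearly scale the assumed dBZ range to 8-bit grayscale for picture output.
lo, hi = 0.0, 70.0
img8 = np.clip((refl - lo) / (hi - lo) * 255.0, 0, 255).astype(np.uint8)
print(img8.dtype, img8.shape)  # uint8 (256, 256)
```

An operational system would more likely map reflectivity to a standard radar color scale, but the grayscale scaling shows the array-to-picture conversion in its simplest form.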
Compared with the prior art, the invention has the following beneficial effects: the method is scientifically and reasonably structured and safe and convenient to use. It modifies the deep recurrent neural network model and connects it in series with the deep convolutional neural network model under an integrated model framework based on the Stacking algorithm, constructing an RNN-CNN model for satellite images and using those images to invert the radar combined reflectivity. Data from all channels are processed, reducing result error and improving accuracy, while the XGBoost integration algorithm improves model performance and speeds up data processing.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention.
In the drawings:
FIG. 1 is a schematic flow chart of the method of the present invention;
FIG. 2 is a plot of the combined reflectivity of the radar based on satellite image inversion in accordance with the present invention.
Detailed Description
The preferred embodiments of the present invention will be described below with reference to the accompanying drawings, it being understood that the preferred embodiments described herein are for illustration and explanation of the present invention only, and are not intended to limit the present invention.
Example: as shown in FIGS. 1-2, a deep-learning-based method for inverting radar combined reflectivity from satellite images comprises the following steps:
S1, constructing a database: the database comprises meteorological satellite image data and at least one year of real-time satellite images, the one year of data being divided into four season samples: a spring training sample, a summer training sample, an autumn training sample and a winter training sample;
S2, preprocessing: performing smoothing image correction on the satellite images in the database;
The original satellite image is convolved with a positive spike function having negative side lobes as the filter function h(x, y); the spatial-domain expression is f̂(x, y) = g(x, y) * h(x, y), and the Fourier-domain expression of the processed image is F̂(u, v) = G(u, v)·H(u, v), where u and v are frequency-domain variables, G(u, v) is the Fourier transform of the satellite image g(x, y) to be processed, and F̂(u, v) approximates the Fourier transform of the original image f(x, y);
S3, constructing a data set for model training: extracting multiple channel pixel points from the real-time satellite image data using ENVI (The Environment for Visualizing Images) software and storing the values of the extracted pixel points;
In this embodiment, ENVI software is used to extract Q channel pixel points from the real-time satellite image data; the pixel points are combined with the length M and width N of the corresponding satellite picture and stored as an M×N×Q .npy file, which constitutes the data set for model training;
S4, data training: constructing an RNN-CNN model to train the data in the data set, specifically:
a. constructing a deep recurrent neural network model and a deep convolutional neural network model;
A deep recurrent neural network (RNN) is an artificial neural network whose nodes are connected directionally in a ring; its internal state can exhibit dynamic temporal behavior, and its internal memory can process input sequences of arbitrary timing, making it suitable for tasks such as unsegmented handwriting recognition and speech recognition;
A convolutional neural network (CNN) is a feedforward neural network with a deep structure that includes convolution operations; it is one of the representative algorithms of deep learning, has representation-learning capability, and can perform shift-invariant classification of input information according to its hierarchical structure;
When the deep recurrent neural network model is constructed, it outputs directly through a Dense layer, and this output is used directly as the input of the deep convolutional neural network;
b. constructing an integrated model framework with the Stacking algorithm as its main body;
c. connecting the deep recurrent neural network model and the deep convolutional neural network model in series under the framework of step b to obtain an integrated model, namely the RNN-CNN model;
d. iterating over the data set in the RNN-CNN model using the n-fold cross-validation method of the Stacking algorithm, and outputting the average of the iteration results;
Specifically: the data set is divided into n parts using the n-fold cross-validation method of the Stacking algorithm, and n iterations are performed; in each iteration, one part is taken as the test set and the remaining n-1 parts are input as the training set to train the integrated model; the n results are then averaged to obtain the output of the integrated RNN-CNN model;
S5, optimizing the training data of the RNN-CNN model with the XGBoost integration algorithm to improve the accuracy of the inverted radar combined reflectivity output by the integrated model; after optimization, the .npy array value file is output, the final radar combined reflectivity sample map is obtained, and the result is saved in a picture format.
Finally, it should be noted that the foregoing is merely a preferred embodiment of the present invention and the invention is not limited thereto; although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may modify the technical solutions described therein or substitute equivalents for some of the technical features. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall fall within its protection scope.
Claims (3)
1. A deep-learning-based method for inverting radar combined reflectivity from satellite images, characterized by comprising the following steps:
S1, constructing a database: the database includes meteorological satellite image data and at least one year of real-time satellite images;
S2, preprocessing: performing smoothing image correction on the satellite images in the database;
The preprocessing comprises convolving the original satellite image with a positive spike function having negative side lobes as the filter function h(x, y); the spatial-domain expression is f̂(x, y) = g(x, y) * h(x, y), and the Fourier-domain expression of the processed image is F̂(u, v) = G(u, v)·H(u, v), where u and v are frequency-domain variables, G(u, v) is the Fourier transform of the satellite image g(x, y) to be processed, and F̂(u, v) approximates the Fourier transform of the original image f(x, y);
S3, constructing a data set for model training: extracting a plurality of channel pixel points from real-time data of a satellite image by using ENVI software, and storing the values of the extracted pixel points;
Q channel pixel points are extracted from the real-time satellite image data using ENVI software, combined with the length M and width N of the corresponding satellite picture, and stored as an M×N×Q .npy file;
S4, data training: constructing an RNN-CNN model to train the data in the data set, specifically:
a. constructing a deep recurrent neural network model and a deep convolutional neural network model, wherein the deep recurrent neural network model outputs directly through a Dense layer, and this output is used directly as the input of the deep convolutional neural network;
b. constructing an integrated model framework with the Stacking algorithm as its main body;
c. connecting the deep recurrent neural network model and the deep convolutional neural network model in series under the framework of step b to obtain an integrated model, namely the RNN-CNN model;
d. iterating over the data set in the RNN-CNN model using the n-fold cross-validation method of the Stacking algorithm, and outputting the average of the iteration results;
The iterative processing of the data set with the RNN-CNN model comprises: dividing the data set into n parts using the n-fold cross-validation method of the Stacking algorithm and performing n iterations; in each iteration, one part is taken as the test set and the remaining n-1 parts are input as the training set to train the integrated model; the n results are then averaged to obtain the output of the integrated RNN-CNN model;
S5, optimizing the training data of the RNN-CNN model with the XGBoost integration algorithm and outputting the optimized result.
2. The deep-learning-based satellite image inversion radar combined reflectivity method according to claim 1, wherein in step S1 the one year of data is divided into four season samples: a spring training sample, a summer training sample, an autumn training sample and a winter training sample.
3. The deep-learning-based satellite image inversion radar combined reflectivity method according to claim 1, wherein in step S5, when optimizing with the XGBoost integration algorithm, the .npy array value file output by the algorithm is saved in a picture format.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011357348.8A CN112487718B (en) | 2020-11-27 | 2020-11-27 | Satellite image inversion radar combined reflectivity method based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011357348.8A CN112487718B (en) | 2020-11-27 | 2020-11-27 | Satellite image inversion radar combined reflectivity method based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112487718A CN112487718A (en) | 2021-03-12 |
CN112487718B true CN112487718B (en) | 2024-04-16 |
Family
ID=74935932
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011357348.8A Active CN112487718B (en) | 2020-11-27 | 2020-11-27 | Satellite image inversion radar combined reflectivity method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112487718B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115144835B (en) * | 2022-09-02 | 2023-01-03 | 南京信大气象科学技术研究院有限公司 | Method for inverting weather radar reflectivity by satellite based on neural network |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108445464A (en) * | 2018-03-12 | 2018-08-24 | 南京恩瑞特实业有限公司 | Satellite radar inverting fusion methods of the NRIET based on machine learning |
CN110208880A (en) * | 2019-06-05 | 2019-09-06 | 北京邮电大学 | A kind of sea fog detection method based on deep learning and satellite remote sensing technology |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016165082A1 (en) * | 2015-04-15 | 2016-10-20 | 中国科学院自动化研究所 | Image stego-detection method based on deep learning |
-
2020
- 2020-11-27 CN CN202011357348.8A patent/CN112487718B/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108445464A (en) * | 2018-03-12 | 2018-08-24 | 南京恩瑞特实业有限公司 | Satellite radar inverting fusion methods of the NRIET based on machine learning |
CN110208880A (en) * | 2019-06-05 | 2019-09-06 | 北京邮电大学 | A kind of sea fog detection method based on deep learning and satellite remote sensing technology |
Non-Patent Citations (2)
Title |
---|
Research on an SAR on-satellite target recognition system based on deep learning neural networks; Yuan Qiuzhuang; Wei Songjie; Luo Na; Aerospace Shanghai; 2017-10-25 (05); full text *
Also Published As
Publication number | Publication date |
---|---|
CN112487718A (en) | 2021-03-12 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||