CN113052201A - Satellite cloud picture cloud classification method based on deep learning - Google Patents

Satellite cloud picture cloud classification method based on deep learning Download PDF

Info

Publication number: CN113052201A (application CN202011536412.9A)
Authority: CN (China)
Prior art keywords: satellite, data, observation, training, deep learning
Legal status: Granted; Active (the legal status is an assumption and is not a legal conclusion)
Other languages: Chinese (zh)
Other versions: CN113052201B
Inventors: 成巍, 姜宇航, 邓志武, 石文静, 朱孟斌, 刘厂, 顾春利, 高峰, 王德龙
Original and current assignee: 61540 Troops of PLA
Application filed by 61540 Troops of PLA; priority to CN202011536412.9A; application granted as CN113052201B

Classifications

    • G06F18/241 — Pattern recognition; classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N3/045 — Neural networks; combinations of networks
    • G06N3/08 — Neural networks; learning methods
    • G06T3/4007 — Scaling of whole images or parts thereof based on interpolation, e.g. bilinear interpolation
    • G06V10/40 — Extraction of image or video features
    • Y02A90/10 — ICT supporting adaptation to climate change, e.g. for weather forecasting or climate simulation


Abstract

The invention discloses a satellite cloud image cloud classification method based on deep learning, which can classify and label real-time satellite observation data without manual interpretation. The method exploits the nonlinear mapping capability of a deep learning network: a model is built from historical satellite observation data and used to classify and label clouds in real-time satellite observations. The results are therefore closer to the real situation, labor consumption is reduced, cloud classification over a larger observation area becomes feasible, and the shortcomings of the prior art are overcome.

Description

Satellite cloud picture cloud classification method based on deep learning
Technical Field
The invention belongs to the technical field of meteorological satellites, and particularly relates to a satellite cloud image cloud classification method based on deep learning.
Background
With the continuous progress of technology and the continuous improvement of quality of life, weather forecasting services play an increasingly important role in production and daily life. Disastrous weather in particular has a great impact, so meteorological departments keep strengthening their investment in disaster prevention and mitigation. Because cloud properties play a vital role in forecasting disastrous weather, meteorological departments pay close attention to the observation and identification of clouds.
With the rapid development of satellite technology, satellites have become an indispensable data source for weather forecasting. Satellite observation has developed greatly because it is little affected by external conditions and provides continuous observation capability; these advantages make satellites particularly valuable in disaster prevention and mitigation. Meteorological departments of many countries therefore attach importance to the application of satellite data, yet current use of such data is still far from sufficient, even as national satellite observation capabilities keep improving.
Cloud classification of satellite observation cloud images has developed rapidly in weather agencies worldwide. The most representative and widely used satellite cloud classification products today come from agencies including the U.S. National Centers for Environmental Prediction, the Japan Meteorological Agency, and the China Meteorological Administration, which release their own satellite cloud classification products to users after compressing them in various ways. A decoded satellite cloud classification product is generally encoded HDF or netCDF data in which element categories, longitudes, latitudes, and element values are superimposed in sequence.
Because national satellite technology levels differ, different satellite cloud classification products use different classification standards. In practice, different classification schemes correspond to different application scenarios, and some scenarios place special requirements on the satellite cloud classification method. In those cases cloud types must be judged manually, which consumes a large amount of manpower and material resources in consulting the accumulated historical data; moreover, manual judgment is highly subjective and cannot provide accurate classification results.
Disclosure of Invention
In view of the above, the invention provides a satellite cloud image cloud classification method based on deep learning, which can classify and label real-time satellite observation data without manual interpretation.
In order to achieve the purpose, the technical scheme of the invention is as follows:
the invention discloses a satellite cloud picture cloud classification method based on deep learning, which comprises the following steps of:
step 1, constructing a training data set according to historical satellite observation data and similar meteorological satellite classification data;
step 2, constructing a deep learning network model;
the deep learning network model comprises an encoding-decoding network structure built from residual units, convolutional layers, up-sampling operations and down-sampling operations;
each convolutional layer comprises a convolution operation, a batch normalization operation and an activation function; the up-sampling operation has two variants: one comprises a deconvolution operation, batch normalization and an activation function, and the other comprises a bilinear interpolation operation, batch normalization and an activation function; the down-sampling operation comprises a convolution with stride 2, batch normalization and an activation function;
step 3, training a deep learning network model according to the training data set to obtain a satellite cloud classification model;
step 4, from the real-time satellite observation data, extracting data segments of the same channels and the same area as the historical satellite observation data set in the training data set; after calibration and projection processing, a data segment F1 is obtained;
the data segment F1 is used as the input data of the satellite cloud classification model, and the classified data segment output by the model is taken as the satellite cloud classification result.
In step 1, the specific steps of constructing the training data set are as follows:
step 11, the satellite adopts the geostationary orbit nominal projection defined by the CGMS LRIT/HRIT global specification, in which geographic coordinates based on the WGS84 reference ellipsoid are converted into row and column numbers; the longitudes and latitudes to be classified are converted into row and column numbers through this conversion formula;
step 12, in any observation slice of the historical satellite observation data, according to the converted row and column numbers, selecting all observation channels of the L1-level products of the corresponding regional satellite data to obtain a channel set C = {C1, C2, ..., Cn}, where n is the total number of selected observation channels; the channel set C = {C1, C2, ..., Cn} comprises visible light channels and infrared channels;
step 13, calibrating the L1-level data in the channel set to obtain a calibrated observation channel data set Q = {Q1, Q2, ..., Qn}; the visible light channels are restored to reflectivity by looking up a calibration table, and the reflectivity is corrected using solar altitude angle data; the infrared channels yield brightness temperature data by looking up a calibration table; the brightness temperature data of the infrared channels and the corrected reflectivity of the visible light channels form the calibrated observation channel data set Q = {Q1, Q2, ..., Qn};
step 14, performing projection conversion on the data in the calibrated observation channel data set to obtain an observation data set T = {T1, T2, ..., Tn} in equidistant cylindrical projection, and saving this observation data set T = {T1, T2, ..., Tn} into the training file directory;
step 15, extracting the equal-longitude-latitude classification data of the L2-level similar products of the same type or similar meteorological satellites, matching the observation data with the L2 products of the same-type meteorological satellite by matching times, constructing the corresponding label data set L = {L1, L2, ..., Ln} for the training process, and saving this label data set L = {L1, L2, ..., Ln} into the training file directory;
step 16, in an observation slice of the historical satellite observation data at a new time, selecting all observation channels of the L1-level satellite data products in the specific area according to the converted row and column numbers to obtain a new channel set, and re-executing steps 13-16 with the new channel set, until all historical satellite observation data required for training and the label data of the historical similar meteorological satellites have been saved into the training file directory, yielding the training data set; here a new time means a time different from all previous times.
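The time matching of step 15 can be sketched as follows. The timestamps, the 30-minute tolerance and the nearest-neighbour matching rule are illustrative assumptions, not details taken from the patent:

```python
from datetime import datetime, timedelta

# Hypothetical observation times of the L1 slices and product times of the
# L2 cloud-type labels; in practice these come from the file metadata.
obs_times = [datetime(2020, 7, 1, 0, 0), datetime(2020, 7, 1, 1, 0)]
label_times = [datetime(2020, 7, 1, 0, 10), datetime(2020, 7, 1, 1, 5),
               datetime(2020, 7, 1, 2, 10)]

def match_labels(obs, labels, tol=timedelta(minutes=30)):
    """Pair each observation with the nearest label product within tol."""
    pairs = []
    for t in obs:
        best = min(labels, key=lambda s: abs(s - t))
        if abs(best - t) <= tol:      # discard observations with no close label
            pairs.append((t, best))
    return pairs

pairs = match_labels(obs_times, label_times)
```

Each matched pair then contributes one (Ti, Li) sample to the training file directory.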
In step 3, the specific steps of training the deep learning network model are as follows:
according to the difference between the historical satellite observation data set T = {T1, T2, ..., Tn} and the label data set L = {L1, L2, ..., Ln}, determining the loss function; the loss function is the cross entropy loss function;
taking the data in the training data set as the input of the deep learning network model, adopting the Adam gradient descent method as the optimizer of the training model and the cross entropy loss function as the cost function of the model, training the parameters of the deep learning network model step by step with a variable-step-size learning rate, obtaining the optimal model parameters after multiple iterative cycles, and taking the deep learning network model under the optimal model parameters as the satellite cloud classification model;
in step 4, the cloud classification result is overlaid pixel by pixel on the data segment F1, and the result is labeled in a visualized form.
In step 2, the encoding-decoding network structure realizes parameter sharing through long skip connections.
In step 11, the conversion from longitude and latitude to row and column numbers is calculated as follows:
substep 1.1, convert the geographic longitude and latitude from degrees to radians, and convert the geographic latitude into the geocentric latitude according to the following formulas:
λe = lon
φe = arctan((eb²/ea²) × tan(lat))
wherein lat and lon are the geographic latitude and longitude, ea is the semi-major axis of the earth, and eb is the semi-minor axis of the earth.
substep 1.2, solve for Re by using the geocentric latitude, with the formula:
Re = eb / √(1 − ((ea² − eb²)/ea²) × cos²(φe))
substep 1.3, use Re to calculate r1, r2, r3, where λD is the longitude of the satellite sub-satellite point and h is the distance from the center of the earth to the satellite, with the formulas:
r1 = h − Re × cos(φe) × cos(λe − λD)
r2 = −Re × cos(φe) × sin(λe − λD)
r3 = Re × sin(φe)
substep 1.4, use r1, r2, r3 to calculate rn, x, y (x and y in degrees), with the formulas:
rn = √(r1² + r2² + r3²)
x = arctan(−r2/r1)
y = arcsin(−r3/rn)
substep 1.5, solve for the row and column numbers corresponding to the longitude and latitude of the designated area, with the formulas:
c = COFF + x × 2⁻¹⁶ × CFAC
l = LOFF + y × 2⁻¹⁶ × LFAC
wherein COFF is the column offset, CFAC is the column scale factor, LOFF is the row offset, LFAC is the row scale factor, and c and l are the column and row numbers corresponding to the longitude and latitude.
Beneficial effects:
The satellite cloud image cloud classification method based on deep learning exploits the nonlinear mapping capability of a deep learning network and builds a model from historical satellite observation data to classify and label real-time satellite observations. The results are closer to the real situation, no manual interpretation is needed, the shortcomings of the prior art are overcome, labor consumption is reduced, and satellite cloud classification over a larger observation area becomes feasible.
Drawings
FIG. 1 is a flow chart of the steps of a method of deep learning based satellite cloud classification of clouds of the present invention;
FIG. 2 is a schematic diagram of an encoding-decoding structure according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a residual error unit according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a downsampling unit according to an embodiment of the present invention.
Detailed Description
The invention is described in detail below by way of example with reference to the accompanying drawings.
The invention relates to a method for classifying satellite cloud images based on deep learning, which comprises the following steps:
step 1, constructing a training data set according to historical satellite observation data and similar meteorological satellite classification data, wherein the specific process is as follows:
step 11, the satellite adopts the geostationary orbit nominal projection defined by the CGMS LRIT/HRIT global specification, in which geographic coordinates based on the WGS84 reference ellipsoid are converted into row and column numbers; the longitudes and latitudes to be classified are converted into row and column numbers through this conversion formula;
step 12, in any observation slice of the historical satellite observation data, according to the converted row and column numbers, selecting all observation channels of the L1-level products of the corresponding regional satellite data to obtain a channel set C = {C1, C2, ..., Cn}, where n is the total number of selected observation channels; the channel set C = {C1, C2, ..., Cn} comprises visible light channels and infrared channels;
step 13, calibrating the L1-level data in the channel set to obtain a calibrated observation channel data set Q = {Q1, Q2, ..., Qn}; the visible light channels are restored to reflectivity by looking up a calibration table, and the reflectivity is corrected using solar altitude angle data; the infrared channels yield brightness temperature data by looking up a calibration table; the brightness temperature data of the infrared channels and the corrected reflectivity of the visible light channels form the calibrated observation channel data set Q = {Q1, Q2, ..., Qn};
step 14, performing projection conversion on the data in the calibrated observation channel data set to obtain an observation data set T = {T1, T2, ..., Tn} in equidistant cylindrical projection, and saving this observation data set T = {T1, T2, ..., Tn} into the training file directory;
step 15, extracting the equal-longitude-latitude classification data of the L2-level similar products of the same type or similar meteorological satellites, matching the observation data with the L2 products of the same-type meteorological satellite by matching times, constructing the corresponding label data set L = {L1, L2, ..., Ln} for the training process, and saving this label data set L = {L1, L2, ..., Ln} into the training file directory;
step 16, in an observation slice of the historical satellite observation data at a new time, selecting all observation channels of the L1-level satellite data products in the specific area according to the converted row and column numbers to obtain a new channel set, and re-executing steps 13-16 with the new channel set, until all historical satellite observation data required for training and the label data of the historical similar meteorological satellites have been saved into the training file directory, yielding the training data set; here a new time means a time different from all previous times.
Step 2, constructing the deep learning network model. The model comprises an encoding-decoding network structure built from residual units, convolutional layers, up-sampling operations and down-sampling operations. Each convolutional layer comprises a convolution operation, a batch normalization operation and an activation function. The up-sampling operation has two variants: one comprises a deconvolution operation, batch normalization and an activation function; the other comprises a bilinear interpolation operation, batch normalization and an activation function. The down-sampling operation comprises a convolution with stride 2, batch normalization and an activation function. The encoding-decoding network structure shares parameters through long skip connections.
Step 3, training the deep learning network model according to the training data set to obtain a satellite cloud classification model, which specifically comprises the following steps:
according to the difference between the historical satellite observation data set T = {T1, T2, ..., Tn} and the label data set L = {L1, L2, ..., Ln}, determining the loss function; the loss function is the cross entropy loss function;
taking the data in the training data set as the input of the deep learning network model, adopting the Adam gradient descent method as the optimizer of the training model and the cross entropy loss function as the cost function of the model, training the parameters of the deep learning network model step by step with a variable-step-size learning rate, obtaining the optimal model parameters after multiple iterative cycles, and taking the deep learning network model under the optimal model parameters as the satellite cloud classification model;
Step 4, from the real-time satellite observation data, extracting the data segments of the same channels and the same area as the historical satellite observation data set T = {T1, T2, ..., Tn} in the training data set; after calibration and projection, the data segment F1 is obtained.
The data segment F1 is used as the input data of the satellite cloud classification model, and the classified data segment output by the model is taken as the satellite cloud classification result.
Further, the satellite cloud classification result is overlaid pixel by pixel on the data segment F1, and the result is labeled in a visualized form.
The invention is described in detail below with reference to a practical example:
The historical meteorological satellite observation data are L1-level full-disc data over the China area routinely observed by the geostationary radiation imager of the Fengyun-4A satellite; the sub-satellite point longitude is 104.7°E, the data use the nominal projection, and the resolution is 4000 meters. The longitude-latitude range to be classified is longitude 80°E-140°E, latitude 5°N-55°N. The cloud types to be classified are: cirrus, deep convection, altostratus, nimbostratus, cumulus, stratus, and clear sky. The cloud classification data of the same-type meteorological satellite is the L2-level cloud type (CTYPE) product of the Japanese Himawari-8 satellite, from which a data segment consistent with the region to be classified is cut out.
The method for classifying the satellite cloud images based on the deep learning comprises the following specific implementation steps:
example step 1, building a training data set
Step 11, the satellite adopts a stationary orbit nominal projection defined by the CGMSRIT/HRIT global specification, and the longitude and latitude are calculated and converted into a row and column number formula based on a WGS84 reference ellipsoid according to the geographic coordinates, wherein the calculation is as follows:
substep 1.1, convert the geographic longitude and latitude from degrees to radians, and convert the geographic latitude into the geocentric latitude according to the following formulas:
λe = lon
φe = arctan((eb²/ea²) × tan(lat))
wherein lat and lon are the geographic latitude and longitude, ea is the semi-major axis of the earth, and eb is the semi-minor axis of the earth.
substep 1.2, solve for Re by using the geocentric latitude, with the formula:
Re = eb / √(1 − ((ea² − eb²)/ea²) × cos²(φe))
substep 1.3, use Re to calculate r1, r2, r3, where λD is the longitude of the satellite sub-satellite point and h is the distance from the center of the earth to the satellite, with the formulas:
r1 = h − Re × cos(φe) × cos(λe − λD)
r2 = −Re × cos(φe) × sin(λe − λD)
r3 = Re × sin(φe)
substep 1.4, use r1, r2, r3 to calculate rn, x, y (x and y in degrees), with the formulas:
rn = √(r1² + r2² + r3²)
x = arctan(−r2/r1)
y = arcsin(−r3/rn)
substep 1.5, solve for the row and column numbers corresponding to the longitude and latitude of the designated area, with the formulas:
c = COFF + x × 2⁻¹⁶ × CFAC
l = LOFF + y × 2⁻¹⁶ × LFAC
wherein COFF is the column offset, CFAC is the column scale factor, LOFF is the row offset, LFAC is the row scale factor, and c and l are the column and row numbers corresponding to the longitude and latitude.
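The five sub-steps can be sketched as a single Python routine. The WGS84 semi-axes and the 42164 km geostationary orbit radius are standard values; the COFF/CFAC/LOFF/LFAC numbers below are placeholders standing in for the values carried in the real product header:

```python
import math

EA, EB = 6378.137, 6356.7523      # WGS84 semi-major / semi-minor axes (km)
H = 42164.0                       # distance from earth's center to satellite (km)
SUB_LON = math.radians(104.7)     # FY-4A sub-satellite longitude
COFF = LOFF = 1373.5              # hypothetical offsets (taken from product header)
CFAC = LFAC = 10233137            # hypothetical scale factors (taken from product header)

def latlon_to_rowcol(lat_deg, lon_deg):
    """Sub-steps 1.1-1.5: geographic lat/lon -> (column c, line l)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    # 1.1 geodetic -> geocentric latitude
    lam_e = lon
    phi_e = math.atan((EB**2 / EA**2) * math.tan(lat))
    # 1.2 local earth radius at the geocentric latitude
    r_e = EB / math.sqrt(1 - ((EA**2 - EB**2) / EA**2) * math.cos(phi_e)**2)
    # 1.3 satellite-centred Cartesian components
    r1 = H - r_e * math.cos(phi_e) * math.cos(lam_e - SUB_LON)
    r2 = -r_e * math.cos(phi_e) * math.sin(lam_e - SUB_LON)
    r3 = r_e * math.sin(phi_e)
    # 1.4 intermediate viewing angles, in degrees
    r_n = math.sqrt(r1**2 + r2**2 + r3**2)
    x = math.degrees(math.atan(-r2 / r1))
    y = math.degrees(math.asin(-r3 / r_n))
    # 1.5 scale the angles to column / line numbers
    c = COFF + x * 2**-16 * CFAC
    l = LOFF + y * 2**-16 * LFAC
    return c, l

c, l = latlon_to_rowcol(0.0, 104.7)   # sub-satellite point -> image centre
```

A useful sanity check of the formulas: the sub-satellite point (lat 0°, lon 104.7°E) must map exactly to (COFF, LOFF), and longitudes east of the sub-satellite point must yield larger column numbers.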
Step 12, according to a row-column number conversion longitude and latitude formula, selecting 14 observation channels in observation fragments of satellite data L1 level products in an area with the longitude of 80-140-E and the latitude of 5-55-N at a certain moment, and obtaining a channel set C ═ C { (C {)1,C2,...,C14The channels comprise 6 visible light channels and 8 infrared channels;
Step 13, calibrate the L1-level data to obtain a calibrated observation channel data set Q = {Q1, Q2, ..., Q14}.
The visible light channels are restored to reflectivity by looking up the calibration table, and the reflectivity is corrected using solar altitude angle data, with the specific formula:
D = Ref / cos(Zen)
wherein Ref is the reflectivity of the visible light channel restored by looking up the calibration table, Zen is the solar zenith angle, and D is the corrected reflectivity data.
The infrared channels are calibrated by table lookup directly to brightness temperature data.
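A minimal sketch of this calibration step, under the assumption (hypothetical, for illustration only) that the lookup tables are 1-D arrays indexed by the raw digital number (DN):

```python
import numpy as np

# Hypothetical lookup tables: DN -> reflectance, and DN -> brightness temperature (K)
vis_table = np.linspace(0.0, 1.0, 4096)
ir_table = np.linspace(150.0, 350.0, 4096)

def calibrate_visible(dn, sun_zenith_deg):
    """Look up reflectivity Ref, then apply the correction D = Ref / cos(Zen)."""
    ref = vis_table[dn]
    return ref / np.cos(np.radians(sun_zenith_deg))

def calibrate_infrared(dn):
    """Infrared channels: the table lookup result is the brightness temperature."""
    return ir_table[dn]

d = calibrate_visible(np.array([2047]), np.array([60.0]))  # cos(60°) = 0.5 doubles Ref
bt = calibrate_infrared(np.array([0]))
```

The zenith-angle division brightens pixels observed under oblique sunlight, so reflectivities become comparable across the scene.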
The grid width of the Fengyun-4A satellite product is 0.05° × 0.05°, so the input data are 1000 × 1200 pixel values. For the convenience of the convolution operations, this embodiment uses zero padding to pad the images to 1200 × 1200 pixels.
Step 14, perform projection conversion on all 14 processed observation channels, converting them into equidistant cylindrical projection, and save the result into the observation data file directory, obtaining the observation data set T = {T1, T2, ..., T14} for one time.
Step 15, select the classification data of the L2-level cloud type (CTYPE) product of the Japan Meteorological Agency's Himawari-8 satellite, match the Fengyun-4A observation time with the time of the Himawari-8 cloud classification product, and construct the label data set L = {L1} for the training process.
Step 16, repeat the steps of extracting the observation-area time slices and observation channel data from the Fengyun-4A satellite observation data and extracting the Himawari-8 L2-level satellite cloud classification data, until all historical satellite observation data required for training and the label data of the historical similar satellites have been saved into the training file directory, obtaining the training data set.
Step 2, constructing a deep learning network model;
The structure of the encoding-decoding network is shown in fig. 2: the left half is called the encoding network and the right half the decoding network. The network contains 4 down-sampling and 4 up-sampling operations across 5 layers; fig. 2 also annotates the specific changes of data size in the embodiment below. Each operation in the figure is described in detail below:
After the 14-channel Fengyun-4A observation data are input into the network, a convolutional layer first changes the number of channels of the image so as to better extract image features. The data then pass through residual units for further feature extraction; as shown in fig. 3, each residual unit contains two convolutional layers and a short skip connection. A convolutional layer consists of a convolution operation, a batch normalization operation and an activation function; the short skip connection adds the data input to the residual unit to the data after the two convolutional layers. After the first convolutional layer, the mapping is:
H(x) = x
After the second convolutional layer and the short skip connection, the function becomes:
H(x) = F(x) + x
Generally speaking, a single residual unit does not bring a good effect; multiple residual units are often needed to extract features continuously. The specific number of residual units depends on the data.
The down-sampling operation in fig. 2 is a variant of the residual unit shown in fig. 4. To further extract higher-order semantic features, the image is encoded: the network uses a convolution with stride 2 both in the dashed part and in the second layer of the residual unit, so that F(x) has the same data dimensions as x.
The long skip connections in fig. 2 transfer data of the same layer from the encoding network to the decoding network and merge the decoded data with the same-layer encoding-network data along the channel dimension, thereby realizing parameter sharing. The merged data are then fed into the up-sampling layer.
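A shape-level sketch of these building blocks follows. It is not the patented network: toy 1×1 "convolutions", stride-2 sampling and nearest-neighbour upsampling stand in for the real convolution, stride-2 convolution and deconvolution/bilinear operations, but the short skip (H(x) = F(x) + x) and the channel-dimension merge of the long skip behave exactly as described:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w):
    """Toy 1x1 convolution: x is (C_in, H, W), w is (C_out, C_in)."""
    return np.einsum('oc,chw->ohw', w, x)

def residual_unit(x, w1, w2):
    """Two conv layers plus the short skip connection: H(x) = F(x) + x."""
    f = np.maximum(conv1x1(x, w1), 0)   # first conv + ReLU activation
    f = conv1x1(f, w2)                  # second conv
    return f + x                        # short skip adds the unit's input

def downsample(x):
    """Stride-2 'convolution' sketched as stride-2 spatial sampling."""
    return x[:, ::2, ::2]

x = rng.normal(size=(16, 8, 8))
w1, w2 = rng.normal(size=(16, 16)), rng.normal(size=(16, 16))
enc = residual_unit(x, w1, w2)                  # encoder feature, same shape as x
down = downsample(enc)                          # halved spatial size: (16, 4, 4)
up = down.repeat(2, axis=1).repeat(2, axis=2)   # decoder-side upsample back to (16, 8, 8)
merged = np.concatenate([up, enc], axis=0)      # long skip: channel-dimension merge
```

With zero weights, F(x) vanishes and the residual unit reduces to the identity, which is exactly the property the short skip is meant to guarantee.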
Example step 3: training a deep learning network model according to the training data set to obtain a satellite cloud classification model, which comprises the following specific steps:
Input the training data set formed by the Fengyun-4A satellite observation data set T = {T1, T2, ..., Tn} and the Himawari-8 label data set L = {L1, L2, ..., Ln} into the training network;
according to the difference between the historical satellite observation data set T = {T1, T2, ..., Tn} and the label data set L = {L1, L2, ..., Ln}, determine the loss function as the cross entropy loss function:
Loss = −(1/N) × Σi Σc y(i,c) × log(p(i,c))
wherein N is the number of pixels, y(i,c) is 1 if pixel i belongs to class c and 0 otherwise, and p(i,c) is the predicted probability that pixel i belongs to class c;
taking the data in the training data set as the input of the deep learning network model, adopting the Adam gradient descent method as the optimizer of the training model and the cross entropy loss function as the cost function of the model, training the parameters of the deep learning network model step by step with a variable-step-size learning rate, obtaining the optimal model parameters after multiple iterative cycles, and taking the deep learning network model under the optimal model parameters as the satellite cloud classification model;
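The pixel-wise cross entropy used as the cost function can be sketched in a few lines; the 7 classes match the cloud types of the example, while the toy 4 × 4 spatial size is arbitrary:

```python
import numpy as np

def softmax(logits, axis=0):
    """Numerically stable softmax over the class axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def pixel_cross_entropy(logits, labels):
    """Mean per-pixel cross entropy; logits (K, H, W), labels (H, W) int classes."""
    p = softmax(logits, axis=0)
    h, w = labels.shape
    # pick each pixel's predicted probability for its true class
    picked = p[labels, np.arange(h)[:, None], np.arange(w)[None, :]]
    return -np.log(picked).mean()

K, H, W = 7, 4, 4
loss = pixel_cross_entropy(np.zeros((K, H, W)), np.zeros((H, W), dtype=int))
# uniform logits give probability 1/7 everywhere, so loss = ln(7)
```

During training the optimizer (Adam in the embodiment) would minimize this quantity averaged over the training slices.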
example step 4: extracting, from the FY-4A satellite real-time observation data, the original observation data covering the same region as the FY-4A satellite observation data in the training data set; performing the corresponding calibration and projection on the original data by referring to the calibration and projection steps applied to the FY-4A satellite data in step 1 and step 2; inputting the calibrated and projected FY-4A real-time observation data into the satellite cloud classification model with the optimal model parameters, whereupon the model automatically outputs the satellite cloud classification result data segment for the observation data;
overlaying the satellite cloud classification result pixel by pixel on the FY-4A real-time observation data segment, with different colors distinguishing the cloud types, as a visual marking of the result.
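The pixel-by-pixel visual marking can be sketched in NumPy by mapping each cloud-class index to an RGB color (the palette and class names are illustrative; the patent only requires a distinct color per cloud type):

```python
import numpy as np

# Hypothetical palette: class index -> RGB color.
PALETTE = np.array([
    [0, 0, 0],        # 0: clear sky (example)
    [255, 255, 255],  # 1: cirrus (example)
    [255, 0, 0],      # 2: cumulonimbus (example)
], dtype=np.uint8)

def colorize(class_map):
    """Turn an (H, W) array of class indices into an (H, W, 3) RGB image."""
    return PALETTE[class_map]

classes = np.array([[0, 1],
                    [2, 1]])
rgb = colorize(classes)
```

The colored image can then be alpha-blended over the grayscale observation segment to produce the final visual marking.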
In summary, the above description is only a preferred embodiment of the present invention, and is not intended to limit the protection scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (6)

1. A satellite cloud picture cloud classification method based on deep learning is characterized by comprising the following steps:
step 1, constructing a training data set according to historical satellite observation data and similar meteorological satellite classification data;
step 2, constructing a deep learning network model;
the deep learning network model comprises an encoding-decoding network structure, and a residual error unit, a convolution layer, an up-sampling operation and a down-sampling operation which form the network structure;
the convolutional layer specifically comprises a convolution operation, a batch standardization operation and an activation function; the up-sampling operation comprises two different structures, wherein one structure comprises a deconvolution operation, a batch standardization operation and an activation function, and the other structure comprises a bilinear interpolation operation, a batch standardization operation and an activation function; the down-sampling operation comprises a convolution operation with the step size of 2, a batch standardization operation and an activation function;
step 3, training a deep learning network model according to the training data set to obtain a satellite cloud classification model;
step 4, extracting, from the real-time satellite observation data, data segments of the same channel and the same area as the historical satellite observation data set in the training data set, and carrying out calibration and projection processing to obtain a data segment F1;
taking the data segment F1 as input data of the satellite cloud classification model, outputting the classified data segment through the satellite cloud classification model, and taking the output classified data segment as the satellite cloud classification result.
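The bilinear-interpolation variant of the up-sampling operation described in step 2 above can be sketched in NumPy (single-channel, align-corners-style sampling; the patent does not fix these details):

```python
import numpy as np

def bilinear_upsample(img, out_h, out_w):
    """Bilinearly resample a 2-D array to (out_h, out_w)."""
    in_h, in_w = img.shape
    # Output pixel centers mapped back onto the input grid.
    ys = np.linspace(0.0, in_h - 1.0, out_h)
    xs = np.linspace(0.0, in_w - 1.0, out_w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, in_h - 1)
    x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]   # vertical interpolation weights
    wx = (xs - x0)[None, :]   # horizontal interpolation weights
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

small = np.array([[0.0, 1.0],
                  [2.0, 3.0]])
big = bilinear_upsample(small, 3, 3)
```

Unlike deconvolution, this operation has no learnable parameters, which is why the network follows it with a batch normalization and activation rather than relying on it to transform features.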
2. The deep learning-based satellite cloud picture cloud classification method according to claim 1, wherein in the step 1, the specific steps of constructing the training data set are as follows:
step 11, the satellite adopts the geostationary orbit nominal projection defined by the CGMS LRIT/HRIT global specification, in which geographic longitude and latitude based on the WGS84 reference ellipsoid are converted into row and column numbers; the longitude and latitude to be classified are converted into row and column numbers through the longitude/latitude-to-row/column-number conversion formula;
step 12, in any observation segment of the historical satellite observation data, according to the row and column numbers obtained by the conversion, selecting all observation channels of the L1-level products corresponding to the regional satellite data to obtain a channel set C = {C1, C2, ..., Cn}, where n is the total number of selected observation channels; the channel set C = {C1, C2, ..., Cn} comprises visible light channels and infrared channels;
step 13, calibrating the L1-level data in the channel set to obtain a calibrated observation channel data set Q = {Q1, Q2, ..., Qn}; the visible light channels are restored to reflectivity by looking up a calibration table, and the reflectivity of the visible light channels is corrected using the solar altitude angle data; the infrared channels obtain brightness temperature data by looking up a calibration table; the brightness temperature data of the infrared channels and the corrected reflectivity of the visible light channels form the calibrated observation channel data set Q = {Q1, Q2, ..., Qn};
step 14, performing projection conversion on the data in the calibrated observation channel data set to obtain an observation data set T = {T1, T2, ..., Tn} in equidistant cylindrical projection; the observation data set T = {T1, T2, ..., Tn} is saved into the training file directory;
step 15, extracting the equal-longitude-latitude classification data of the L2-level similar products of the same-type or similar meteorological satellites, matching the observation data with the L2 products of the same-type meteorological satellite by time, and constructing the corresponding label data set L = {L1, L2, ..., Ln} for the training process; the label data set L = {L1, L2, ..., Ln} is saved into the training file directory;
step 16, in a new observation segment of the historical satellite observation data, selecting all observation channels of the L1-level satellite data products in the specific area according to the converted row and column numbers to obtain a new channel set, and performing steps 13-16 again with the new channel set until all the historical satellite observation data required for the training process and the label data of the historical same-type meteorological satellite are saved in the training file directory, thereby obtaining the training data set; wherein a new observation segment refers to a time different from all previous times.
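The table-lookup calibration of step 13 can be sketched as follows (the lookup tables and the exact form of the solar-elevation correction are illustrative assumptions; real FY-4A calibration tables ship with the L1 product):

```python
import numpy as np

# Hypothetical calibration lookup tables indexed by digital number (DN).
VIS_LUT = np.linspace(0.0, 1.0, 4096)     # DN -> reflectivity
IR_LUT = np.linspace(150.0, 350.0, 4096)  # DN -> brightness temperature (K)

def calibrate_visible(dn, sun_elevation_deg):
    """Reflectivity via LUT, corrected by the solar elevation angle."""
    refl = VIS_LUT[dn]
    # Dividing by sin(elevation) normalizes away the illumination geometry.
    return refl / np.sin(np.radians(sun_elevation_deg))

def calibrate_infrared(dn):
    """Brightness temperature (K) via LUT."""
    return IR_LUT[dn]

tbb = calibrate_infrared(np.array([0, 4095]))
refl = calibrate_visible(np.array([2048]), sun_elevation_deg=30.0)
```

The corrected reflectivities and brightness temperatures are then stacked per channel to form the set Q described in step 13.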
3. The deep learning-based satellite cloud classification method according to claim 1, wherein in the step 3, the specific step of training the deep learning network model is as follows:
according to the historical satellite observation data set T = {T1, T2, ..., Tn} and the label data set L = {L1, L2, ..., Ln}, determining a loss function from the difference between the observation data set and the label data set, the loss function being determined as the cross-entropy loss function;
and taking the data in the training data set as the input of a deep learning network model, taking an Adam gradient descent method as an optimizer of the training model, taking a cross entropy loss function as a cost function of the model, gradually training parameters in the deep learning network model by adopting a method of variable step length learning rate, obtaining optimal model parameters after multiple iterative cycles, and taking the deep learning network model under the optimal model parameters as a satellite cloud classification model.
4. The deep learning-based satellite cloud image cloud classification method according to claim 1, wherein in the step 4, the cloud classification result is overlaid pixel by pixel on the data segment F1, and the result is labeled in a visualized form.
5. The deep learning based satellite cloud classification method according to claim 1, wherein in the step 2, the coding-decoding network structure realizes sharing of parameters through long-hop connection.
6. The satellite cloud classification method based on deep learning of claim 2, wherein in the step 11, the specific calculation of the longitude/latitude-to-row/column-number conversion formula is as follows:
substep 1.1, converting the angle representation of the geographic longitude and latitude into a radian representation, and converting the geographic latitude and longitude into the geocentric latitude and longitude according to the following formulas:
λe=lon
φe = arctan((eb^2/ea^2) × tan(lat))
wherein lat and lon are the latitude and longitude, ea is the semi-major axis of the earth, and eb is the semi-minor axis of the earth;
substep 1.2, solving for Re using the geocentric latitude and longitude, the formula being as follows:
Re = eb / sqrt(1 - ((ea^2 - eb^2)/ea^2) × cos^2(φe))
substep 1.3, using Re to calculate r1, r2, r3, where λD is the satellite sub-satellite point longitude and h is the distance from the satellite to the center of the earth; the formulas are as follows:
r1=h-Re×cos(φe)×cos(λeD)
r2=-Re×cos(φe)×sin(λeD)
r3=Re×sin(φe)
substep 1.4, using r1, r2, r3 to calculate rn, x, y; the formulas are as follows:
rn = sqrt(r1^2 + r2^2 + r3^2)
x = arctan(-r2/r1)
y = arcsin(-r3/rn)
and a substep 1.5 of solving a row number and a column number corresponding to the longitude and latitude of the designated area, wherein the formula is as follows:
c = COFF + x × 2^(-16) × CFAC
l = LOFF + y × 2^(-16) × LFAC
wherein COFF is the column offset, CFAC is the column scale factor, LOFF is the row offset, LFAC is the row scale factor, and c and l are the column and row numbers corresponding to the longitude and latitude.
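Substeps 1.1-1.5 can be assembled into one routine (a sketch; the WGS84 axes, the geostationary distance h, and the COFF/CFAC/LOFF/LFAC defaults are assumptions for illustration, since the real values come from the satellite product header):

```python
import math

EA = 6378.137    # WGS84 semi-major axis (km)
EB = 6356.7523   # WGS84 semi-minor axis (km)
H = 42164.0      # assumed satellite-to-earth-center distance (km)

def latlon_to_rowcol(lat, lon, lon_d,
                     coff=1375.0, cfac=10233128.0,
                     loff=1375.0, lfac=10233128.0):
    """Geographic lat/lon (degrees) -> (column, line) per substeps 1.1-1.5."""
    # Substep 1.1: degrees -> radians, geographic -> geocentric latitude.
    lam_e = math.radians(lon)
    phi_e = math.atan((EB**2 / EA**2) * math.tan(math.radians(lat)))
    lam_d = math.radians(lon_d)
    # Substep 1.2: local earth radius at the geocentric latitude.
    r_e = EB / math.sqrt(1 - ((EA**2 - EB**2) / EA**2) * math.cos(phi_e)**2)
    # Substep 1.3: satellite-centered coordinates.
    r1 = H - r_e * math.cos(phi_e) * math.cos(lam_e - lam_d)
    r2 = -r_e * math.cos(phi_e) * math.sin(lam_e - lam_d)
    r3 = r_e * math.sin(phi_e)
    # Substep 1.4: viewing angles, converted to degrees for the scaling step.
    rn = math.sqrt(r1**2 + r2**2 + r3**2)
    x = math.degrees(math.atan(-r2 / r1))
    y = math.degrees(math.asin(-r3 / rn))
    # Substep 1.5: scale into column and line numbers.
    c = coff + x * 2.0**-16 * cfac
    l = loff + y * 2.0**-16 * lfac
    return c, l

# At the sub-satellite point both viewing angles vanish, so the
# result is exactly (COFF, LOFF).
c0, l0 = latlon_to_rowcol(0.0, 104.7, 104.7)
```

Points north of the equator give a negative y and therefore a line number smaller than LOFF, matching the top-down line ordering of the nominal projection.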
CN202011536412.9A 2020-12-22 2020-12-22 Satellite cloud picture cloud classification method based on deep learning Active CN113052201B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011536412.9A CN113052201B (en) 2020-12-22 2020-12-22 Satellite cloud picture cloud classification method based on deep learning


Publications (2)

Publication Number Publication Date
CN113052201A true CN113052201A (en) 2021-06-29
CN113052201B CN113052201B (en) 2022-10-11

Family

ID=76508046

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011536412.9A Active CN113052201B (en) 2020-12-22 2020-12-22 Satellite cloud picture cloud classification method based on deep learning

Country Status (1)

Country Link
CN (1) CN113052201B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111210483A (en) * 2019-12-23 2020-05-29 中国人民解放军空军研究院战场环境研究所 Simulated satellite cloud picture generation method based on generation of countermeasure network and numerical mode product
CN111274878A (en) * 2020-01-10 2020-06-12 中国科学院自动化研究所 Satellite cloud picture classification method and system
CN111861884A (en) * 2020-07-15 2020-10-30 南京信息工程大学 Satellite cloud image super-resolution reconstruction method based on deep learning


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JIA Jie et al., "Research on automatic classification of meteorological satellite cloud images in the remote sensing field", Wireless Internet Technology (《无线互联科技》) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113780439A (en) * 2021-09-15 2021-12-10 国家气象中心 Multi-cloud identification system of different types of meteorological satellites based on unsupervised domain adaptation
CN113780439B (en) * 2021-09-15 2023-09-22 国家气象中心 Multi-cloud identification system of different types of meteorological satellites based on unsupervised domain adaptation
WO2024021225A1 (en) * 2022-07-29 2024-02-01 知天(珠海横琴)气象科技有限公司 High-resolution true-color visible light model generation method, high-resolution true-color visible light model inversion method, and system
CN117371316A (en) * 2023-10-09 2024-01-09 北京大学重庆大数据研究院 Deep learning-based stationary satellite solar short wave radiation inversion method and readable storage medium

Also Published As

Publication number Publication date
CN113052201B (en) 2022-10-11

Similar Documents

Publication Publication Date Title
CN113052201B (en) Satellite cloud picture cloud classification method based on deep learning
Schaaf et al. First operational BRDF, albedo nadir reflectance products from MODIS
CN102693538B (en) Generate the global alignment method and apparatus of high dynamic range images
CN109523510B (en) Method for detecting abnormal region of river channel water quality space based on multispectral remote sensing image
CN109635249B (en) Water body turbidity inversion model establishing method, water body turbidity inversion model detecting method and water body turbidity inversion model detecting device
CN108090872B (en) Single-frame multispectral image super-resolution reconstruction method and system based on gradient extraction
Congalton Remote sensing: an overview
Singh et al. Earth observation data sets in monitoring of urbanization and urban heat island of Delhi, India
Sofieva et al. A novel tropopause-related climatology of ozone profiles
CN107576399B (en) MODIS forest fire detection-oriented brightness and temperature prediction method and system
CN116519557A (en) Aerosol optical thickness inversion method
CN109359264B (en) Chlorophyll product downscaling method and device based on MODIS
Mercier et al. Solar irradiance anticipative transformer
CN113705340B (en) Deep learning change detection method based on radar remote sensing data
CN111273376B (en) Downscaling sea surface net radiation determination method, system, equipment and storage medium
CN111177652B (en) Spatial downscaling method and system for remote sensing precipitation data
CN115859797A (en) Satellite quantitative precipitation estimation method based on deep learning
Jing et al. Two improvement schemes of PAN modulation fusion methods for spectral distortion minimization
Iannone et al. Proba-V cloud detection Round Robin: Validation results and recommendations
Chadwick et al. An artificial neural network approach to multispectral rainfall estimation over Africa
CN111060991A (en) Method for generating clear sky radiation product of wind and cloud geostationary satellite
CN116385894A (en) Coastline identification method, device and equipment based on remote sensing image
CN115222837A (en) True color cloud picture generation method and device, electronic equipment and storage medium
Kaplan et al. MTF driven adaptive multiscale bilateral filtering for pansharpening
Talbi et al. Vector-Quantized Variational AutoEncoder for pansharpening

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant