CN113222015A - Cloud identification method and device, computing equipment and storage medium - Google Patents

Cloud identification method and device, computing equipment and storage medium Download PDF

Info

Publication number
CN113222015A
CN113222015A CN202110519077.XA
Authority
CN
China
Prior art keywords
data
cloud
satellite
angle
sun
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110519077.XA
Other languages
Chinese (zh)
Inventor
吴昊
何娜
闫正
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Eye Control Technology Co Ltd
Original Assignee
Shanghai Eye Control Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Eye Control Technology Co Ltd filed Critical Shanghai Eye Control Technology Co Ltd
Priority to CN202110519077.XA priority Critical patent/CN113222015A/en
Publication of CN113222015A publication Critical patent/CN113222015A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • G06F18/24137Distances to cluster centroïds
    • G06F18/2414Smoothing the distance, e.g. radial basis function networks [RBFN]
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01DMEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
    • G01D21/00Measuring or testing not otherwise provided for
    • G01D21/02Measuring two or more variables by means not covered by a single other subclass
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a cloud identification method and device, computing equipment and a storage medium. The method comprises the following steps: acquiring observation data, wherein the observation data comprises satellite image data, sun-related angle data, satellite-related angle data and terrain data; the sun-related angle data comprises a sun zenith angle and a sun azimuth angle, and the satellite-related angle data comprises a satellite zenith angle and a satellite azimuth angle; and processing the observation data through a pre-trained cloud recognition model to obtain a cloud recognition result. By jointly considering satellite image data, sun-related angle data, satellite-related angle data and terrain data, the technical scheme provides rich and comprehensive feature details and performs automatic identification with a pre-trained cloud identification model, improving the accuracy of cloud identification.

Description

Cloud identification method and device, computing equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of satellite observation, in particular to a cloud identification method and device, computing equipment and a storage medium.
Background
In meteorological observation, the observation of clouds is of great significance to aircraft navigation, weather modification and weather forecasting. The observation of clouds includes the observation of cloud type, cloud cover and cloud height, of which cloud type is the most difficult to observe. Clouds evolve rapidly and are easily influenced by many factors, such as wind speed, convection, turbulence, water vapor content, changes in local circulation and large-scale weather changes, so cloud types are difficult to capture and identify. Accurately identifying every cloud type in the sky requires an observer with rich experience and strong comprehensive judgment; however, human judgment is easily affected by subjectivity, and neither manpower nor equipment can meet the actual demand. For example, it is difficult to observe clouds over the ocean and in unpopulated areas.
At present, some methods retrieve cloud products from meteorological satellite cloud images, determining clouds on the remote sensing image by a thresholding method. However, because the selected threshold is fixed, the influences of different terrains, satellite positions and sun positions cannot be taken into account comprehensively; cloud identification deviates markedly over regions such as plateaus or basins, and the accuracy of cloud identification is low.
Disclosure of Invention
The invention provides a cloud identification method, a cloud identification device, computing equipment and a storage medium, which are used for improving the accuracy of cloud identification.
In a first aspect, an embodiment of the present invention provides a cloud identification method, including:
acquiring observation data, wherein the observation data comprises satellite image data, sun-related angle data, satellite-related angle data and terrain data, and the sun-related angle data comprises a sun zenith angle and a sun azimuth angle; the satellite related angle data comprises a satellite zenith angle and a satellite azimuth angle;
and processing the observation data through a pre-trained cloud recognition model to obtain a cloud recognition result.
Optionally, before acquiring the observation data, the method further includes:
acquiring sample data, wherein the sample data comprises satellite image sample data, sun-related angle sample data, satellite-related angle sample data, terrain sample data and cloud type labels;
training the cloud recognition model based on the sample data.
Optionally, the sun-related angle data includes a sun zenith angle and a sun azimuth angle;
the satellite related angle data includes a satellite zenith angle and a satellite azimuth angle.
Optionally, before processing the observation data through the pre-trained cloud recognition model, the method further includes preprocessing the observation data by the following steps:
cropping out observation data within a target observation area;
interpolating the images of some channels in the satellite image data to unify the resolution of each channel image;
and generating an image mask corresponding to the terrain data, wherein the image mask is used for representing position information of land, position information of water areas and altitude information in the terrain data.
Optionally, the cloud identification model is constructed based on a deep residual network;
the training of the cloud recognition model based on the sample data comprises:
processing the sample data through a convolution block and a pooling layer to down-sample a feature map of the sample data to 1/2 of its original size;
processing the down-sampled feature map through the deep residual network to obtain multi-scale features;
merging the multi-scale features by means of an atrous spatial pyramid pooling module and/or a convolution up-sampling module;
outputting the identification result of the sample data according to the merged features based on a logistic regression model;
and adjusting network parameters of the cloud identification model, and repeatedly performing the down-sampling, feature-map processing, feature merging and outputting operations until the loss function between the output identification result and the corresponding cloud category label is within a preset range, to obtain the trained cloud identification model.
Optionally, the training the cloud recognition model based on the sample data includes:
randomly cutting the sample data into one or more parts, or cutting the sample data into a plurality of parts according to a set sliding window;
processing each part of sample data through the cloud identification model;
integrating the output result corresponding to each part of sample data to obtain the identification result of the sample data;
and adjusting network parameters of the cloud identification model, and repeatedly performing the processing and integrating operations on each part of the sample data until the loss function between the output identification result and the corresponding cloud class label is within a preset range, to obtain the trained cloud identification model.
Optionally, the processing the observation data through a pre-trained cloud recognition model to obtain a cloud recognition result includes:
cutting the observation data into a plurality of parts according to a set sliding window;
processing each part of observation data through the cloud identification model;
and integrating the output results corresponding to each part of the observation data to obtain the cloud identification result, wherein, for the portion where crops of the set sliding window overlap, the output result of one crop is selected as the cloud identification result by voting or according to the confidence of each crop.
Optionally, the loss function for training the cloud identification model is a weighted cross entropy loss function.
In a second aspect, an embodiment of the present invention provides a cloud identification apparatus, including:
the data acquisition module is used for acquiring observation data, wherein the observation data comprises satellite image data, sun-related angle data, satellite-related angle data and terrain data, and the sun-related angle data comprises a sun zenith angle and a sun azimuth angle; the satellite related angle data comprises a satellite zenith angle and a satellite azimuth angle;
and the recognition module is used for processing the observation data through a pre-trained cloud recognition model to obtain a cloud recognition result.
In a third aspect, an embodiment of the present invention provides a computing device, including:
one or more processors;
a memory for storing one or more programs;
when executed by the one or more processors, cause the one or more processors to implement the cloud identification method of the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the cloud identification method according to the first aspect.
The embodiment of the invention provides a cloud identification method and device, computing equipment and a storage medium. The method comprises the following steps: acquiring observation data, wherein the observation data comprises satellite image data, sun-related angle data, satellite-related angle data and terrain data; and processing the observation data through a pre-trained cloud recognition model to obtain a cloud recognition result. By jointly considering satellite image data, sun-related angle data, satellite-related angle data and terrain data, the technical scheme provides rich and comprehensive feature details and performs automatic identification with a pre-trained cloud identification model, improving the accuracy of cloud identification.
Drawings
Fig. 1 is a flowchart of a cloud identification method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a related angle according to an embodiment of the present invention;
fig. 3 is a flowchart of a cloud identification method according to a second embodiment of the present invention;
fig. 4 is a schematic diagram of a visualization of a cloud class label according to a second embodiment of the present invention;
fig. 5 is a schematic diagram of training a cloud recognition model according to a second embodiment of the present invention;
fig. 6 is a schematic structural diagram of a cloud identification device according to a third embodiment of the present invention;
fig. 7 is a schematic diagram of a hardware structure of a computing device according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. In addition, the embodiments and features of the embodiments in the present invention may be combined with each other without conflict. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the steps as a sequential process, many of the steps can be performed in parallel, concurrently or simultaneously. In addition, the order of the steps may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Example one
Fig. 1 is a flowchart of a cloud identification method according to an embodiment of the present invention, which is applicable to identifying cloud types in the sky. In particular, the cloud identification method may be performed by a cloud identification apparatus, which may be implemented in software and/or hardware and integrated in a computing device. Computing devices include, but are not limited to: desktop computers, notebook computers, servers, electronic devices in ground control centers, and the like.
As shown in fig. 1, the method specifically includes the following steps:
s110, acquiring observation data, wherein the observation data comprises satellite image data, sun-related angle data, satellite-related angle data and terrain data, and the sun-related angle data comprises a sun zenith angle and a sun azimuth angle; the satellite related angle data includes a satellite zenith angle and a satellite azimuth angle.
Specifically, the satellite image data refers to image data acquired in space by a satellite. For example, the Himawari-8 satellite acquires data on 16 channels; by analyzing the correlation between these 16 channels and clouds, the seven channels most correlated with clouds (VIS1, VIS2, VIS3, B07, B10, IR1 and IR2) are selected as the satellite image data. The seven channels comprise three visible-light channels, a cloud/fog channel (generally the lower troposphere), a water-vapor channel (generally the middle troposphere), a surface-temperature channel and a sea-surface-temperature channel.
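The channel screening described above can be sketched as a correlation ranking. The patent does not specify the correlation measure, so the Pearson correlation and the `top_channels` helper below are illustrative assumptions, not the patented procedure:

```python
import math

def pearson(a, b):
    # Plain Pearson correlation coefficient between two equal-length series.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def top_channels(channels, cloud_signal, k=7):
    # Rank channels by |correlation| with a cloud indicator series and keep
    # the k strongest; `channels` maps channel name -> sample series.
    scored = sorted(channels.items(),
                    key=lambda kv: abs(pearson(kv[1], cloud_signal)),
                    reverse=True)
    return [name for name, _ in scored[:k]]
```

With k=7 this would reproduce the seven-channel selection, given per-channel radiance series and a matching cloud indicator series.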
The terrain data comprises lake and ocean distribution data and topography data of the target observation area, used to distinguish whether an observed position is land or water and to give its altitude, so that the cloud type can be estimated more accurately.
In addition, the visible and near-infrared channels of the satellite are affected by the zenith and azimuth angles of the sun and the satellite. In this embodiment, the observation data therefore further includes sun-related angle data and satellite-related angle data. Although Himawari-8 is a geostationary satellite whose position relative to the earth is substantially unchanged, the satellite trajectory is in practice not an ideal circle and still drifts to a certain degree. Therefore, this embodiment also takes the satellite-related angle data into account, so that clouds can be identified more accurately.
In this embodiment, the sun zenith angle (denoted SOZ), sun azimuth angle (SOA), satellite zenith angle (SAZ) and satellite azimuth angle (SAA) are obtained from the data observed by the satellite. For an observation-point pixel, the sun zenith angle is the angle between the solar ray incident at the pixel position at the current moment and the normal of the pixel's horizontal plane; the sun azimuth angle is the angle swept clockwise, with the surface pixel as vertex, from true north to the projection onto the earth's surface of the sunlight incident at the pixel position at the current moment; the satellite zenith angle is the angle between the line connecting the satellite sensor and the pixel and the normal of the pixel's ground plane; and the satellite azimuth angle is the angle swept clockwise, with the surface pixel as vertex, from true north to the projection onto the earth's surface of the line connecting the current satellite position and the pixel.
Fig. 2 is a schematic diagram of a related angle according to an embodiment of the present invention. As shown in fig. 2, a three-axis coordinate system is established in the observation point pixels, a plane formed by the X axis and the Y axis is equivalent to the earth surface, the Z axis is equivalent to the normal of the ground plane, the N point can be regarded as the satellite position or the sun position, for the N point, the zenith angle is θ, and the azimuth angle is φ.
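The zenith/azimuth geometry above can be sketched numerically. The sketch assumes the direction vector from the pixel toward the sun or satellite is given in a local East-North-Up frame; this coordinate convention is an assumption for illustration, not fixed by the patent:

```python
import math

def zenith_azimuth(east, north, up):
    """Zenith and azimuth angles (degrees) of a direction vector from a
    surface pixel toward the sun or satellite, in a local East-North-Up
    frame (hypothetical convention)."""
    r = math.sqrt(east ** 2 + north ** 2 + up ** 2)
    # Zenith: angle between the vector and the local vertical (Up axis, i.e. theta in Fig. 2).
    zenith = math.degrees(math.acos(up / r))
    # Azimuth: clockwise from true north to the vector's horizontal projection (phi in Fig. 2).
    azimuth = math.degrees(math.atan2(east, north)) % 360.0
    return zenith, azimuth
```

For example, a point directly overhead gives a zenith angle of 0°, and a point due east on the horizon gives zenith 90°, azimuth 90°.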
And S120, processing the observation data through a pre-trained cloud recognition model to obtain a cloud recognition result.
Specifically, the cloud recognition model is a machine learning model trained in advance, such as a neural network model. The cloud recognition model learns, from a large amount of sample data, the rule mapping input (known observation data) to output (the correct recognition result), so that after observation data acquired in real time are input into the cloud recognition model, the cloud recognition result can be output automatically. The cloud recognition result refers to the cloud type identified from the input observation data. Following the International Satellite Cloud Climatology Project (ISCCP), this embodiment divides clouds into 11 classes: cirrus, cirrostratus, deep convection, altocumulus, altostratus, nimbostratus, cumulus, stratocumulus, stratus, cloud-free and unknown.
The cloud identification method provided by the embodiment of the invention comprehensively considers satellite image data, sun-related angle data, satellite-related angle data and terrain data, providing rich and comprehensive feature details, and performs automatic identification with a pre-trained cloud identification model; the influences of different terrains and of the sun and satellite positions at different moments are thus taken into account, improving the accuracy of cloud identification.
Example two
Fig. 3 is a flowchart of a cloud identification method according to a second embodiment of the present invention, optimized on the basis of the above embodiment; this embodiment specifically describes model training, observation data preprocessing, and the recognition process of the cloud identification model. Technical details not described in this embodiment may be found in any of the above embodiments.
Specifically, as shown in fig. 3, the method specifically includes the following steps:
and S210, acquiring sample data, wherein the sample data comprises satellite image sample data, sun-related angle sample data, satellite-related angle sample data, terrain sample data and cloud category labels.
Specifically, the sample data may be data for which the exact cloud type has been determined in historical observations, or data downloaded from a standard database for training the cloud recognition model. The cloud class label is added to the sample data so that the cloud identification model can learn the rule from input (satellite image sample data, sun-related angle sample data, satellite-related angle sample data and terrain sample data) to output (cloud class label). For example, the existing cloud classification of the Himawari-8 satellite is used as the label: clouds are divided into nine types according to the ISCCP classification standard (cirrus, cirrostratus, deep convection, altocumulus, altostratus, nimbostratus, cumulus, stratocumulus and stratus), and an unknown class and a cloud-free class are added, giving 11 cloud class labels in total.
Fig. 4 is a schematic diagram of a visualization of cloud class labels according to a second embodiment of the present invention. As shown in fig. 4, the white part of the upper-left area is in darkness, and the method of this embodiment does not support cloud output in the dark state; in the lower-right area, different cloud types may be represented by different colors or shades.
And S220, training a cloud identification model based on the sample data.
Specifically, through continuous adjustment and updating of the network parameters in the cloud recognition model, training is completed when the loss function reaches a predetermined range. It should be noted that the existing cloud classification in the Himawari-8 standard database covers daytime only and has no cloud class labels for the night state; the trained cloud identification model therefore only identifies daytime clouds, and the night-time parts without cloud class labels are not involved in training or in computing the loss function.
Optionally, training the cloud recognition model based on the sample data specifically includes S2210-S2250:
S2210, processing the sample data through a convolution block and a pooling layer to down-sample the feature map of the sample data to 1/2 of its original size;
S2220, processing the down-sampled feature map through the deep residual network to obtain multi-scale features;
S2230, merging the multi-scale features by means of an atrous spatial pyramid pooling module and/or a convolution up-sampling module;
S2240, outputting the identification result of the sample data according to the merged features based on a logistic regression model;
S2250, adjusting network parameters of the cloud identification model, and repeatedly performing the down-sampling, feature-map processing, feature merging and outputting operations until the loss function between the output identification result and the corresponding cloud class label is within a predetermined range, thereby obtaining the trained cloud identification model.
Fig. 5 is a schematic diagram of training a cloud recognition model according to a second embodiment of the present invention. The cloud identification model in this embodiment is a semantic segmentation model (DeepLabV3) using a 101-layer deep residual network (ResNet-101) as the backbone, with a multi-scale cascade structure added. As shown in fig. 5, the input sample data first undergoes an initial convolution and max pooling operation, down-sampling the feature map to 1/2 size. Four levels of feature maps are generated in the deep residual network (the encoding part): X at 1/8 of the original size, feat1 at 1/4, feat2 at 1/4, and feat3 at 1/2. X is then merged with feat1 through an atrous spatial pyramid pooling (ASPP) module; the merged features are combined with feat2 after convolution and up-sampling, and then with feat3 after further convolution and up-sampling. Finally, the merged features pass through a 1×1 convolutional layer and a Softmax logistic regression model to output the identification result of the sample data. On this basis, the network parameters of the cloud identification model are continuously adjusted, and the down-sampling, multi-scale feature generation, feature merging and Softmax logistic regression processes are repeated until the value of the loss function converges within a preset range; the trained cloud identification model then has high identification accuracy and can be applied in practice to cloud identification of observation data.
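The size bookkeeping in the multi-scale pipeline above can be checked with a small sketch. This tracks feature-map shapes only (no actual convolutions); the stage fractions are the ones stated in the text:

```python
def deeplab_feature_sizes(h, w):
    """Feature-map sizes at each stage of the multi-scale encoder:
    the stem (conv + max pooling) down-samples to 1/2, and the residual
    encoder emits X at 1/8 and feat1..feat3 at 1/4, 1/4 and 1/2 of the
    original input size."""
    frac = {"stem": 2, "X": 8, "feat1": 4, "feat2": 4, "feat3": 2}
    return {name: (h // d, w // d) for name, d in frac.items()}
```

For a 480 × 480 training crop this gives a 240 × 240 stem output, a 60 × 60 X, and 120 × 120 / 120 × 120 / 240 × 240 for feat1 to feat3, which is consistent with merging X with feat1 directly and reaching feat2 and feat3 only after up-sampling.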
Optionally, the loss function for training the cloud identification model is a weighted cross entropy loss function. Specifically, if the proportion of one class in the cloud class label is p, the weight corresponding to the cloud class is 1/p. The loss function can be expressed as:
loss(x, class) = weight[class] × (−x[class] + log(Σ_j exp(x[j])))
where x denotes the recognition result output by the last layer of the cloud recognition model, class denotes a cloud class, weight[class] = 1/p[class] is the weight of that cloud class when computing the loss function, p[class] is the proportion of that cloud class among the cloud class labels, and the sum over j runs over all cloud classes in the cloud class label.
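The weighted cross-entropy above can be sketched for a single pixel. The formula and the 1/p weighting are from the text; the function name and argument layout are illustrative:

```python
import math

def weighted_ce_loss(x, cls, p):
    """Weighted cross-entropy for one pixel:
    loss(x, class) = weight[class] * (-x[class] + log(sum_j exp(x[j]))),
    with weight[class] = 1 / p[class], where p[class] is the share of that
    cloud class among the labels.
    x: raw logits from the final layer; cls: true class index;
    p: per-class label proportions."""
    log_sum_exp = math.log(sum(math.exp(v) for v in x))
    return (1.0 / p[cls]) * (-x[cls] + log_sum_exp)
```

Rarer classes get proportionally larger weights, so errors on minority cloud types contribute more to the loss.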
And S230, acquiring observation data including satellite image data, sun-related angle data, satellite-related angle data and terrain data.
And S240, preprocessing the observation data.
In this embodiment, the following preprocessing is performed on the observation data:
1) intercepting observation data in a target observation area;
2) interpolating images of partial channels in the satellite image data to unify the resolution of each channel image;
3) generating an image mask corresponding to the terrain data, wherein the image mask is used for representing position information of land, position information of water areas and altitude information in the terrain data.
For 1), the satellite image data output spans 80°E to 200°E in longitude and 60°N to 60°S in latitude, with a spatial resolution of 1 km to 2 km per pixel. In this embodiment, cloud identification may be performed on a target observation area, i.e., the satellite image data covering that area is cropped out. The terrain data, sun zenith angle, sun azimuth angle, satellite zenith angle, satellite azimuth angle and so on are likewise cropped to the same region.
For 2), the resolutions of the seven channels of satellite image data are not consistent, and they can be unified by interpolation. For example, the observation data finally input to the cloud identification model covers the range 80°E to 140°E and 55°N to 5°S; each channel is 1751 pixels wide and 1001 pixels high.
For 3), two image masks, mask1 and mask2, may be generated from the cropped terrain data of the target observation area, where 0 in mask1 represents land and 1 represents water surface. mask2 holds the altitude data normalized to between 0 and 1, the normalization being obtained by directly dividing the altitude by 10000.
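The mask construction above can be sketched directly. The 0/1 land-water coding and the divide-by-10000 normalization are from the text; the "land"/"water" string labels of the input grid are a hypothetical representation:

```python
def terrain_masks(surface_type, altitude_m):
    """Build the two terrain masks: mask1 marks land (0) vs. water (1),
    mask2 normalizes altitude in metres to [0, 1] by dividing by 10000.
    `surface_type` is a 2D grid of hypothetical "land"/"water" labels;
    `altitude_m` is the matching 2D altitude grid."""
    mask1 = [[0 if cell == "land" else 1 for cell in row] for row in surface_type]
    mask2 = [[h / 10000.0 for h in row] for row in altitude_m]
    return mask1, mask2
```

Stacked with the seven image channels and the four angle channels, these two masks complete the 13-channel input described below.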
On this basis, the input sample data is divided into 13 channels: [ VIS1, VIS2, VIS3, B07, B10, IR1, IR2, SOZ, SOA, SAZ, SAA, mask1, mask2 ]. The output cloud class label is marked as Mask _ label. The cloud type label corresponds to the longitude and latitude where the observation point pixel is located.
And S250, processing the observation data through a pre-trained cloud recognition model to obtain a cloud recognition result.
In this embodiment, the cloud identification model is trained in advance, a rule that the cloud identification result is obtained from the input observation data is learned from a large amount of sample data, and the cloud identification model has high identification accuracy, and on the basis, the observation data is input to the cloud identification model, so that the cloud identification result of the observation data can be obtained.
Optionally, training the cloud recognition model based on the sample data includes: randomly cropping the sample data into one or more parts, or cropping it into a plurality of parts according to a set sliding window; processing each part of the sample data through the cloud identification model; integrating the output results corresponding to each part of the sample data to obtain the identification result of the sample data; and adjusting the network parameters of the cloud identification model, and repeatedly performing the processing and integrating operations on each part of the sample data until the loss function between the output identification result and the corresponding cloud class label is within a preset range, to obtain the trained cloud identification model.
Specifically, owing to hardware and computing-power limitations, it is difficult to train the network model directly on large 1751 × 1001-pixel inputs, so this embodiment crops the input data to simplify training and recognition. For example, with 480 × 480-pixel inputs, the cropping mode can be chosen according to the amount of data to be processed during training: if the data volume is large, a single random crop can be taken; if it is small, overlapping sliding-window cropping can be used, dividing the whole large image into several small images that are processed separately, with the segmentation results of all small images integrated at the end.
Illustratively, 2500 images are used: 2000 as training data (of which 10% are randomly selected as validation data during training) and the remaining 500 as test data. During training, each original image is randomly cropped three times to generate the required 480 × 480 training data.
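The random-cropping step described above can be sketched as follows; the `(C, H, W)` array layout and the sampling routine are illustrative assumptions:

```python
import numpy as np

def random_crops(image, size=480, n_crops=3, rng=None):
    """Randomly crop `n_crops` patches of size x size from a (C, H, W) image.

    Mirrors the embodiment's strategy of cropping each original image
    three times to 480 x 480; the exact sampling distribution is an
    assumption (uniform over all valid top-left corners here).
    """
    rng = np.random.default_rng() if rng is None else rng
    _, h, w = image.shape
    crops = []
    for _ in range(n_crops):
        top = rng.integers(0, h - size + 1)    # valid vertical offsets
        left = rng.integers(0, w - size + 1)   # valid horizontal offsets
        crops.append(image[:, top:top + size, left:left + size])
    return crops
```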
Optionally, the processing of the observation data by the pre-trained cloud recognition model includes:
cutting observation data into a plurality of parts according to a set sliding window;
processing each part of observation data through a cloud identification model;
and integrating the output results corresponding to the parts of observation data to obtain the cloud identification result, wherein, for the overlapping portion of the crops of the set sliding window, the output result of one crop is selected as the cloud identification result by voting or according to the confidence of each crop.
For example, for the overlapping portion of two crops, if the confidence of the first crop is D1, the confidence of the second crop is D2, and D1 > D2, the output result of the first crop is used for the overlapping portion during cloud identification; similarly, voting can determine which crop's output result is used.
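The confidence-based merging of overlapping sliding-window outputs might look like the following sketch; the per-crop tuple layout is an assumption, since the embodiment only states that the higher-confidence crop prevails:

```python
import numpy as np

def merge_by_confidence(pred_shape, crops):
    """Merge per-crop class predictions into one full-size label map.

    `crops` is a list of (top, left, labels, confidences) tuples; where
    sliding-window crops overlap, the crop with the higher per-pixel
    confidence wins. The tuple layout is an illustrative assumption.
    """
    best_conf = np.full(pred_shape, -np.inf)
    merged = np.zeros(pred_shape, dtype=int)
    for top, left, labels, conf in crops:
        h, w = labels.shape
        region = (slice(top, top + h), slice(left, left + w))
        take = conf > best_conf[region]          # pixels this crop wins
        merged[region][take] = labels[take]      # higher-confidence crop prevails
        best_conf[region][take] = conf[take]
    return merged
```

The embodiment also allows majority voting over overlapping crops as an alternative selection rule.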
It should be noted that, during training, the sample data can be cropped either randomly or with a sliding window so as to increase the diversity of the sample data, which helps the cloud identification model learn richer sample characteristics and improves its identification capability. When the trained cloud identification model is applied to cloud identification of observation data, the observation data are cropped with an overlapping sliding window, which simplifies the identification process and improves its stability and accuracy.
The cloud identification method provided by the second embodiment of the invention is optimized on the basis of the first embodiment. The cloud identification model is trained on sample data so that it fully learns the cloud characteristics of data from different channels, and the influence of terrain and of the sun and satellite positions at different moments on cloud retrieval is taken into account, improving the accuracy of cloud identification. Preprocessing the observation data improves data quality, satisfies the observation requirement on the target area, avoids unnecessary processing, and reduces the amount of computation. The cloud identification model is built on a deep residual network with an added cascade structure, so that multi-scale features are fully extracted and reused, further improving the accuracy of cloud identification.
EXAMPLE III
Fig. 6 is a schematic structural diagram of a cloud identification device according to a third embodiment of the present invention. As shown in fig. 6, the cloud identification apparatus provided in this embodiment includes:
a data collection module 310, configured to collect observation data, where the observation data includes satellite image data, sun-related angle data, satellite-related angle data, and terrain data, and the sun-related angle data includes a sun zenith angle and a sun azimuth angle; the satellite related angle data comprises a satellite zenith angle and a satellite azimuth angle;
and the recognition module 320 is configured to process the observation data through a pre-trained cloud recognition model to obtain a cloud recognition result.
According to the cloud identification device provided by the third embodiment of the invention, rich and comprehensive feature details are provided by comprehensively considering the satellite image data, the sun-related angle data, the satellite-related angle data and the terrain data, and cloud types are identified automatically by the pre-trained cloud identification model, so that the accuracy of cloud identification is improved.
On the basis of the above embodiment, the method further includes:
the data acquisition module is used for acquiring sample data before the observation data is acquired, wherein the sample data comprises satellite image sample data, sun-related angle sample data, satellite-related angle sample data, terrain sample data and cloud type labels;
and the training module is used for training the cloud-shaped recognition model based on the sample data.
On the basis of the above embodiment, the sun-related angle data includes a sun zenith angle and a sun azimuth angle;
the satellite related angle data includes a satellite zenith angle and a satellite azimuth angle.
On the basis of the above embodiment, the method further includes:
the preprocessing module is used for preprocessing the observation data before the observation data are processed through a pre-trained cloud recognition model:
intercepting observation data in a target observation area;
interpolating images of partial channels in the satellite image data to unify the resolution of each channel image;
and generating an image mask corresponding to the terrain data, wherein the image mask is used for representing position information of land, position information of a water area and altitude information in the terrain data.
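The interpolation and mask-generation steps above can be sketched as follows, assuming nearest-neighbour upsampling by an integer factor and a three-channel land/water/altitude mask; both choices are illustrative, as the embodiment does not fix the interpolation method or mask encoding:

```python
import numpy as np

def unify_resolution(channel, factor):
    """Nearest-neighbour upsample a (H, W) channel by an integer factor.

    Stands in for the interpolation that brings every satellite channel
    to one common resolution; bilinear interpolation or a non-integer
    factor would work equally well.
    """
    return np.repeat(np.repeat(channel, factor, axis=0), factor, axis=1)

def terrain_mask(altitude, is_land):
    """Encode land position, water position and altitude as a 3-channel mask."""
    land = is_land.astype(float)
    return np.stack([land,
                     1.0 - land,               # water = not land
                     altitude.astype(float)])  # altitude information
```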
On the basis of the embodiment, the cloud identification model is constructed based on a depth residual error network;
a training module comprising:
a downsampling unit for processing the sample data through a convolution block and a pooling layer to downsample the feature map of the sample data to 1/2 of its original size;
the multi-scale feature extraction unit is used for processing the down-sampled feature map through the depth residual error network to obtain multi-scale features;
the feature merging unit is used for merging the multi-scale features by adopting an atrous spatial pyramid pooling module and/or a convolution upsampling module;
the classification unit is used for outputting the identification result of the sample data according to the combined features based on a logistic regression model;
and the first execution unit is used for adjusting network parameters of the cloud identification model and repeating the down-sampling, feature map processing, feature merging and output operations until the loss function between the output identification result and the cloud category label is within a preset range, to obtain the trained cloud identification model.
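The downsampling stage above can be illustrated with a plain 2 × 2 average pooling that halves the feature map; the learned convolution block that precedes pooling in the embodiment is omitted from this sketch:

```python
import numpy as np

def avg_pool_2x(feature_map):
    """2x2 average pooling: downsample a (C, H, W) feature map to 1/2 size.

    Stands in for the embodiment's convolution-block + pooling stage
    that halves the feature map before the residual backbone; H and W
    are assumed even for simplicity.
    """
    c, h, w = feature_map.shape
    # Group pixels into 2x2 blocks and average each block.
    return feature_map.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))
```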
On the basis of the above embodiment, the training module includes:
the first clipping unit is used for clipping the sample data into one or more parts at random or clipping the sample data into a plurality of parts according to a set sliding window;
the second processing unit is used for processing each part of sample data through the cloud-shaped identification model;
the third integration unit is used for integrating the output result corresponding to each part of sample data to obtain the identification result of the sample data;
and the second execution unit is used for adjusting the network parameters of the cloud identification model and repeating the processing and integration operations on each part of sample data until the loss function between the output identification result and the cloud category label is within a preset range, to obtain the trained cloud identification model.
On the basis of the above embodiment, the identification module 320 includes:
the second cutting unit is used for cutting the observation data into a plurality of parts according to the set sliding window;
the second processing unit is used for processing each part of observation data through the cloud identification model;
and the second integration unit is used for integrating the output results corresponding to the parts of observation data to obtain the cloud identification result, wherein, for the overlapping portion of the crops of the set sliding window, the output result of one crop is selected as the cloud identification result by voting or according to the confidence of each crop.
On the basis of the above embodiment, the loss function used to train the cloud identification model is a weighted cross-entropy loss function.
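A weighted cross-entropy of the kind referred to above can be sketched as follows; the per-class weighting and the normalization by the weight sum follow the common convention, since the embodiment does not specify its exact weights:

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights):
    """Weighted cross-entropy over a batch of class probabilities.

    probs: (N, K) predicted class probabilities; labels: (N,) integer
    classes; class_weights: (K,) per-class weights that can up-weight
    rare cloud types. Normalizing by the summed weights of the selected
    classes is the standard convention, assumed here.
    """
    picked = probs[np.arange(len(labels)), labels]  # probability of true class
    w = class_weights[labels]                       # weight of each sample's class
    return float(-(w * np.log(picked)).sum() / w.sum())
```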
The cloud identification device provided by the third embodiment of the invention can be used for executing the cloud identification method provided by any embodiment, and has corresponding functions and beneficial effects.
EXAMPLE IV
Fig. 7 is a schematic diagram of a hardware structure of a computing device according to a fourth embodiment of the present invention. Computing devices include, but are not limited to: desktop computers, notebook computers, servers, and electronic devices in ground control centers, among others.
As shown in fig. 7, the computing device provided in the present application includes a memory 42, a processor 41, and a computer program stored on the memory and executable on the processor, and when the processor 41 executes the program, the cloud identification method described above is implemented.
The processor 41 in the computing device may be one or more; fig. 7 takes one processor 41 as an example. The memory 42 is used to store one or more programs; the one or more programs are executed by the one or more processors 41, so that the one or more processors 41 implement the cloud identification method described in the embodiments of the present application.
The computing device further includes: a communication device 43, an input device 44 and an output device 45.
The processor 41, the memory 42, the communication means 43, the input means 44 and the output means 45 in the computing device may be connected by a bus or other means, as exemplified by the bus connection in fig. 7.
The input device 44 may be used to receive entered numeric or character information and to generate key signal inputs relating to user settings and function controls of the computing device. The output device 45 may include a display device such as a display screen.
The communication means 43 may comprise a receiver and a transmitter. The communication device 43 is configured to perform information transmission and reception communication in accordance with control of the processor 41.
The memory 42, as a computer-readable storage medium, may be configured to store software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the cloud identification method according to the embodiment of the present application (for example, the data acquisition module 310 and the identification module 320 in the cloud identification device). The memory 42 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the computing device, and the like. Further, the memory 42 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, memory 42 may further include memory located remotely from processor 41, which may be connected to a computing device over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
On the basis of the foregoing embodiments, the present embodiment also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the cloud identification method in any of the foregoing embodiments of the present invention, the method including: acquiring observation data, wherein the observation data comprises satellite image data, sun-related angle data, satellite-related angle data and terrain data, and the sun-related angle data comprises a sun zenith angle and a sun azimuth angle; the satellite-related angle data comprises a satellite zenith angle and a satellite azimuth angle; and processing the observation data through a pre-trained cloud recognition model to obtain a cloud recognition result.
Embodiments of the present invention provide a storage medium including computer-executable instructions, which may take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer-readable storage medium may be, for example, but is not limited to: an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a flash Memory, an optical fiber, a portable CD-ROM, an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. A computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take a variety of forms, including, but not limited to: an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, Radio Frequency (RF), etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
From the above description of the embodiments, it is obvious for those skilled in the art that the present invention can be implemented by software and necessary general hardware, and certainly, can also be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the cloud identification method according to the embodiments of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A cloud identification method, comprising:
acquiring observation data, wherein the observation data comprises satellite image data, sun-related angle data, satellite-related angle data and terrain data, and the sun-related angle data comprises a sun zenith angle and a sun azimuth angle; the satellite related angle data comprises a satellite zenith angle and a satellite azimuth angle;
and processing the observation data through a pre-trained cloud recognition model to obtain a cloud recognition result.
2. The method of claim 1, further comprising, prior to acquiring observation data:
acquiring sample data, wherein the sample data comprises satellite image sample data, sun-related angle sample data, satellite-related angle sample data, terrain sample data and cloud type labels;
training the cloud recognition model based on the sample data.
3. The method of claim 1, further comprising, prior to processing the observation data through a pre-trained cloud recognition model: the observation data is preprocessed by the following steps:
intercepting observation data in a target observation area;
interpolating images of partial channels in the satellite image data to unify the resolution of each channel image;
and generating an image mask corresponding to the terrain data, wherein the image mask is used for representing position information of land, position information of a water area and altitude information in the terrain data.
4. The method of claim 2, wherein the cloud identification model is constructed based on a depth residual error network;
the training of the cloud recognition model based on the sample data comprises:
processing the sample data through a convolution block and a pooling layer to down-sample a feature map of the sample data to 1/2 of its original size;
processing the down-sampled feature map through the depth residual error network to obtain multi-scale features;
combining the multi-scale features by adopting an atrous spatial pyramid pooling module and/or a convolution up-sampling module;
outputting the identification result of the sample data according to the combined features based on a logistic regression model;
and adjusting network parameters of the cloud identification model, and repeating the down-sampling, feature map processing, feature merging and outputting operations until the loss function between the output identification result and the cloud category label is within a preset range, to obtain the trained cloud identification model.
5. The method of claim 2, wherein said training said cloud recognition model based on said sample data comprises:
randomly cutting the sample data into one or more parts, or cutting the sample data into a plurality of parts according to a set sliding window;
processing each part of sample data through the cloud identification model;
integrating the output result corresponding to each part of sample data to obtain the identification result of the sample data;
and adjusting network parameters of the cloud identification model, and repeating the processing and integrating operations on each part of sample data until the loss function between the output identification result and the cloud category label is within a preset range, to obtain the trained cloud identification model.
6. The method of claim 1, wherein the processing the observation data through a pre-trained cloud recognition model to obtain a cloud recognition result comprises:
cutting the observation data into a plurality of parts according to a set sliding window;
processing each part of observation data through the cloud identification model;
and integrating the output results corresponding to the parts of observation data to obtain the cloud identification result, wherein, for the overlapping portion of the crops of the set sliding window, the output result of one crop is selected as the cloud identification result by voting or according to the confidence of each crop.
7. The method of claim 2, wherein the loss function for training the cloud recognition model is a weighted cross-entropy loss function.
8. A cloud recognition device, comprising:
the data acquisition module is used for acquiring observation data, wherein the observation data comprises satellite image data, sun-related angle data, satellite-related angle data and terrain data, and the sun-related angle data comprises a sun zenith angle and a sun azimuth angle; the satellite related angle data comprises a satellite zenith angle and a satellite azimuth angle;
and the recognition module is used for processing the observation data through a pre-trained cloud recognition model to obtain a cloud recognition result.
9. A computing device, comprising:
one or more processors;
a memory for storing one or more programs;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the cloud identification method of any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the cloud identification method according to any one of claims 1 to 7.
CN202110519077.XA 2021-05-12 2021-05-12 Cloud identification method and device, computing equipment and storage medium Pending CN113222015A (en)

Publications (1)

Publication Number Publication Date
CN113222015A true CN113222015A (en) 2021-08-06



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination