CN116188968B - Neural network-based detection method for thick cloud area of remote sensing image - Google Patents

Neural network-based detection method for thick cloud area of remote sensing image

Info

Publication number
CN116188968B
CN116188968B CN202211545313.6A CN202211545313A
Authority
CN
China
Prior art keywords
representing
module
feature
remote sensing
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211545313.6A
Other languages
Chinese (zh)
Other versions
CN116188968A (en)
Inventor
李冠群
俞伟学
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Genyu Muxing Beijing Space Technology Co ltd
Original Assignee
Genyu Muxing Beijing Space Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Genyu Muxing Beijing Space Technology Co ltd filed Critical Genyu Muxing Beijing Space Technology Co ltd
Priority to CN202211545313.6A priority Critical patent/CN116188968B/en
Publication of CN116188968A publication Critical patent/CN116188968A/en
Application granted granted Critical
Publication of CN116188968B publication Critical patent/CN116188968B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of image data processing, and discloses a remote sensing image thick cloud area detection method based on a neural network, which comprises the following steps: acquiring a cloud remote sensing image, obtaining a position feature through a position determining branch, obtaining an edge feature through an edge perfecting branch, fusing the position feature and the edge feature to obtain a fused feature, and inputting the fused feature into a U-shaped convolutional neural network to obtain a thick cloud region detection result. The invention uses a parallel U-shaped convolutional neural network in which a position determining branch and an edge perfecting branch perform overall thick cloud area detection and cloud area edge refinement respectively, and the features extracted by the two branches are combined and fused, thereby obtaining a remote sensing image thick cloud area detection result that is accurately located and has well-refined edges.

Description

Neural network-based detection method for thick cloud area of remote sensing image
Technical Field
The invention relates to the field of image data processing, in particular to a neural network-based method for detecting a thick cloud area of a remote sensing image.
Background
Optical remote sensing images can provide rich observations of ground object information; however, cloud cover can obscure ground information over large areas, which affects the estimation and observation of that information. Detecting the coverage of thick cloud areas in an optical remote sensing image is therefore the basis and key for further analysis and utilization of the information in the image.
Traditional methods for detecting thick cloud areas in remote sensing images are mainly based on a spectral threshold strategy, which realizes automatic detection of thick clouds by setting thresholds on different spectral bands of the remote sensing image. This strategy requires neither pixel-level label annotation of the remote sensing image nor complex model training. However, spectral-threshold-based methods often generalize poorly: their thick cloud detection accuracy is low for complex remote sensing scenes, and their detection is not robust across remote sensing images of different scene types. In recent years, convolutional neural networks have shown outstanding performance in various computer vision and remote sensing image processing tasks, which has inspired remote sensing image thick cloud area detection methods based on convolutional neural networks. For example, the Chinese invention patent "Remote sensing satellite cloud detection method based on DeepLabV3+" (CN202010241130.X) inputs a cloud-covered remote sensing image into the semantic segmentation network DeepLabV3+ to obtain a corresponding cloud area detection result map; that method introduces dilated (atrous) convolution and a pyramid structure, which enlarges the receptive field of the convolution and improves the accuracy of cloud area detection. Further, the Chinese patent "Remote sensing image cloud detection method based on multi-scale convolutional neural network" (CN202111108889.1) discloses a cloud detection method that uses multi-scale convolution and pooling operations to improve the detection versatility for clouds of different sizes. However, in order to improve the detection accuracy of thick cloud areas in remote sensing images, accurately determining the general position of a thick cloud area while simultaneously refining the edge of the cloud area remains a very difficult problem.
At present, a large amount of research has been carried out on thick cloud area detection in remote sensing images, but existing studies generally use a single-branch convolutional neural network to detect the thick cloud area, and no method has been reported that uses a multi-branch network model to perform overall thick cloud detection and cloud area edge refinement separately.
Disclosure of Invention
The invention aims to overcome one or more of the problems in the prior art and provides a neural network-based method for detecting a thick cloud area of a remote sensing image.
In order to achieve the above object, the present invention provides a method for detecting a thick cloud area of a remote sensing image based on a neural network, comprising:
acquiring a cloud remote sensing image;
obtaining a position feature through a position determining branch;
obtaining an edge feature through an edge perfecting branch;
fusing the position features and the edge features to obtain fusion features;
and inputting the fusion characteristics into a U-shaped convolutional neural network to obtain a thick cloud region detection result.
According to one aspect of the invention, the method for obtaining the position feature according to the position determining branch is as follows:
inputting the cloud remote sensing image into the position determining branch, and denoting the cloud remote sensing image input into the position determining branch as X_p; compressing X_p successively through the compression modules so as to reduce the size of X_p and obtain F_4; the formulas of the compression modules are,
F_1 = C_1(X_p)
F_2 = C_2(F_1)
F_3 = C_3(F_2)
F_4 = C_4(F_3)
wherein,
F_1 represents the output after passing through the first compression module;
F_2 represents the output after passing through the second compression module;
F_3 represents the output after passing through the third compression module;
F_4 represents the output after passing through the fourth compression module;
C_1, C_2, C_3 and C_4 represent the first to fourth compression modules, respectively;
refining F_4 through the feature refinement module to obtain F_r, where F_r and F_4 have the same size; the formula of the feature refinement module is,
F_r = LReLU(Conv_3×3(LReLU(Conv_3×3(F_4))))
wherein,
F_r represents the output of the feature refinement module;
LReLU represents the leaky rectified linear activation unit;
Conv_3×3 represents a 3×3 convolution;
stacking F_4 and F_r at the feature channel level so that the number of feature channels becomes twice the original number, and inputting the stacked F_4 and F_r into the reconstruction modules to obtain the position feature; the formulas for obtaining the position feature are,
R_1 = U_1(Cat(F_4, F_r))
R_2 = U_2(R_1)
R_3 = U_3(R_2)
F_pos = U_4(R_3)
wherein,
R_1 represents the output of the first reconstruction module;
R_2 represents the output of the second reconstruction module;
R_3 represents the output of the third reconstruction module;
F_pos represents the position feature;
U_1, U_2, U_3 and U_4 represent the first to fourth reconstruction modules, respectively;
Cat(·) represents the stacking operation at the feature channel level.
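For illustration only, a minimal PyTorch-style sketch of the position determining branch described above follows; the class names, channel widths and activation slope are assumptions introduced here and are not specified by the invention, while the module layouts follow the composition described for fig. 2 later in the description.

import torch
import torch.nn as nn

class CompressModule(nn.Module):
    """Two 3x3 convolutions with leaky ReLUs, then 2x max pooling (halves the spatial size)."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.LeakyReLU(0.2),
            nn.MaxPool2d(2),
        )
    def forward(self, x):
        return self.body(x)

class RefineModule(nn.Module):
    """Two 3x3 convolutions with leaky ReLUs; the output F_r keeps the size of the input F_4."""
    def __init__(self, c):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c, c, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(c, c, 3, padding=1), nn.LeakyReLU(0.2),
        )
    def forward(self, x):
        return self.body(x)

class ReconstructModule(nn.Module):
    """2x upsampling followed by two 3x3 convolutions with leaky ReLUs."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.body = nn.Sequential(
            nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.LeakyReLU(0.2),
        )
    def forward(self, x):
        return self.body(x)

class PositionBranch(nn.Module):
    """Position determining branch: four compression, one refinement, four reconstruction modules."""
    def __init__(self, c_img=3, c=32):
        super().__init__()
        self.c1, self.c2 = CompressModule(c_img, c), CompressModule(c, 2 * c)
        self.c3, self.c4 = CompressModule(2 * c, 4 * c), CompressModule(4 * c, 8 * c)
        self.refine = RefineModule(8 * c)
        self.u1 = ReconstructModule(16 * c, 8 * c)  # input is Cat(F_4, F_r), channels doubled
        self.u2 = ReconstructModule(8 * c, 4 * c)
        self.u3 = ReconstructModule(4 * c, 2 * c)
        self.u4 = ReconstructModule(2 * c, c)
    def forward(self, x_p):
        f1 = self.c1(x_p)                         # F_1 = C_1(X_p)
        f2 = self.c2(f1)                          # F_2 = C_2(F_1)
        f3 = self.c3(f2)                          # F_3 = C_3(F_2)
        f4 = self.c4(f3)                          # F_4 = C_4(F_3)
        fr = self.refine(f4)                      # F_r, same size as F_4
        r1 = self.u1(torch.cat([f4, fr], dim=1))  # R_1 = U_1(Cat(F_4, F_r))
        r2 = self.u2(r1)                          # R_2 = U_2(R_1)
        r3 = self.u3(r2)                          # R_3 = U_3(R_2)
        return self.u4(r3)                        # position feature F_pos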
According to one aspect of the invention, the method for obtaining the edge feature according to the edge perfecting branch is as follows:
inputting the cloud remote sensing image into the edge perfecting branch, and denoting the cloud remote sensing image input into the edge perfecting branch as X_e; the edge perfecting branch comprises nine feature extraction modules with the same structure, and the dilation rates of the dilated convolution units of the nine feature extraction modules are different from one another; passing X_e sequentially through the first five feature extraction modules to obtain E_4 and E_5; the formulas of the first five feature extraction modules are,
E_1 = D_1(X_e)
E_2 = D_2(E_1)
E_3 = D_3(E_2)
E_4 = D_4(E_3)
E_5 = D_5(E_4)
wherein,
E_1 represents the output after passing through the first feature extraction module;
E_2 represents the output after passing through the second feature extraction module;
E_3 represents the output after passing through the third feature extraction module;
E_4 represents the output after passing through the fourth feature extraction module;
E_5 represents the output after passing through the fifth feature extraction module;
D_1, D_2, D_3, D_4 and D_5 represent the first to fifth feature extraction modules, respectively;
stacking E_4 and E_5 at the feature channel level so that the number of feature channels becomes twice the original number, and inputting the stacked E_4 and E_5 into the remaining feature extraction modules to obtain the edge feature; the formulas for obtaining the edge feature are,
E_6 = D_6(Cat(E_4, E_5))
E_7 = D_7(E_6)
E_8 = D_8(E_7)
F_edge = D_9(E_8)
wherein,
E_6 represents the output after passing through the sixth feature extraction module;
E_7 represents the output after passing through the seventh feature extraction module;
E_8 represents the output after passing through the eighth feature extraction module;
F_edge represents the edge feature;
D_6, D_7, D_8 and D_9 represent the sixth to ninth feature extraction modules, respectively;
Cat(·) represents the stacking operation at the feature channel level.
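As a companion to the above, a minimal PyTorch-style sketch of the edge perfecting branch is given below; the specific dilation rates (here simply 1 through 9) and channel widths are assumptions, since the invention only requires that the nine feature extraction modules share the same structure while using different dilation rates.

import torch
import torch.nn as nn

class FeatureExtractModule(nn.Module):
    """One dilated 3x3 convolution followed by a leaky ReLU; padding keeps the spatial size."""
    def __init__(self, c_in, c_out, dilation):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=dilation, dilation=dilation),
            nn.LeakyReLU(0.2),
        )
    def forward(self, x):
        return self.body(x)

class EdgeBranch(nn.Module):
    """Edge perfecting branch: nine feature extraction modules with different dilation rates."""
    def __init__(self, c_img=3, c=32, dilations=(1, 2, 3, 4, 5, 6, 7, 8, 9)):
        super().__init__()
        d = dilations
        self.d1 = FeatureExtractModule(c_img, c, d[0])
        self.d2 = FeatureExtractModule(c, c, d[1])
        self.d3 = FeatureExtractModule(c, c, d[2])
        self.d4 = FeatureExtractModule(c, c, d[3])
        self.d5 = FeatureExtractModule(c, c, d[4])
        self.d6 = FeatureExtractModule(2 * c, c, d[5])  # input is Cat(E_4, E_5), channels doubled
        self.d7 = FeatureExtractModule(c, c, d[6])
        self.d8 = FeatureExtractModule(c, c, d[7])
        self.d9 = FeatureExtractModule(c, c, d[8])
    def forward(self, x_e):
        e1 = self.d1(x_e)                         # E_1 = D_1(X_e)
        e2 = self.d2(e1)
        e3 = self.d3(e2)
        e4 = self.d4(e3)
        e5 = self.d5(e4)                          # E_5 = D_5(E_4)
        e6 = self.d6(torch.cat([e4, e5], dim=1))  # E_6 = D_6(Cat(E_4, E_5))
        e7 = self.d7(e6)
        e8 = self.d8(e7)
        return self.d9(e8)                        # edge feature F_edge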
According to one aspect of the invention, the position feature and the edge feature are fused to obtain the fusion feature, and the fusion feature is input into the U-shaped convolutional neural network; the calculation formula for obtaining the fusion feature is,
F_fus = Conv_3×3(Cat(F_pos, X, F_edge))
wherein,
F_fus represents the fusion feature;
Cat(·) represents the stacking operation at the feature channel level;
Conv_3×3 represents a 3×3 convolution;
F_pos represents the position feature;
X represents the cloud remote sensing image;
F_edge represents the edge feature.
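A short sketch of the fusion step under the same assumptions (the channel counts are illustrative only):

import torch
import torch.nn as nn

# Hypothetical channel counts: 32 for each branch feature, 3 for the input image.
fuse_conv = nn.Conv2d(32 + 3 + 32, 32, kernel_size=3, padding=1)  # the 3x3 convolution Conv_3x3

def fuse(f_pos, x, f_edge):
    # F_fus = Conv_3x3(Cat(F_pos, X, F_edge)): stack along the channel dimension, then convolve
    return fuse_conv(torch.cat([f_pos, x, f_edge], dim=1))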
According to one aspect of the invention, the U-shaped convolutional neural network is trained with a binary cross entropy loss function; the calculation formula of the binary cross entropy loss function is:
L_BCE = BCE(P(X), Y)
wherein,
L_BCE represents the binary cross entropy loss function;
X represents the cloud remote sensing image;
Y represents the thick cloud mask label corresponding to the cloud remote sensing image;
P(X) represents the detection result output by the U-shaped convolutional neural network for the cloud remote sensing image X;
BCE(·,·) represents the binary cross entropy calculation.
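A hedged sketch of this training objective, assuming the U-shaped network outputs a per-pixel logit map and that Y is a binary thick cloud mask of the same size:

import torch
import torch.nn.functional as F

def bce_loss(pred_logits, cloud_mask):
    # L_BCE = BCE(P(X), Y): pred_logits is the per-pixel output P(X) of the U-shaped network,
    # cloud_mask is the thick cloud mask label Y, given as a float tensor of 0s and 1s.
    return F.binary_cross_entropy_with_logits(pred_logits, cloud_mask)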
In order to achieve the above object, the present invention provides a remote sensing image thick cloud area detection system based on a neural network, including:
an image acquisition module: acquiring a cloud remote sensing image;
a position feature acquisition module: obtaining a position feature through a position determining branch;
an edge feature acquisition module: obtaining an edge feature through an edge perfecting branch;
fusion characteristic acquisition module: fusing the position features and the edge features to obtain fusion features;
thick cloud area detection module: and inputting the fusion characteristics into a U-shaped convolutional neural network to obtain a thick cloud region detection result.
In order to achieve the above object, the present invention provides an electronic device, including a processor, a memory, and a computer program stored in the memory and capable of running on the processor, where the computer program when executed by the processor implements the above method for detecting a thick cloud area of a remote sensing image based on a neural network.
In order to achieve the above object, the present invention provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the above method for detecting a thick cloud area of a remote sensing image based on a neural network.
Based on the above, the invention has the beneficial effects that:
according to the invention, a parallel U-shaped convolutional neural network is utilized, two branches are adopted to respectively carry out integral detection of a thick cloud area and edge refinement of the cloud area, and features extracted by the two branches are combined and fused, so that a thick cloud area detection result of a remote sensing image with accurate detection and complete edge refinement is obtained.
Drawings
FIG. 1 schematically illustrates a flow chart of a method for detecting a thick cloud region of a remote sensing image based on a neural network according to the present invention;
FIG. 2 schematically illustrates a U-shaped convolutional neural network diagram of a neural network-based remote sensing image thick cloud region detection method according to the present invention;
fig. 3 schematically shows a flowchart of a remote sensing image thick cloud area detection system based on a neural network according to the present invention.
Detailed Description
The present disclosure will now be discussed with reference to exemplary embodiments, it being understood that the embodiments discussed are merely for the purpose of enabling those of ordinary skill in the art to better understand and thus practice the present disclosure and do not imply any limitation to the scope of the present disclosure.
As used herein, the term "comprising" and its variants are to be read as open-ended terms meaning "including, but not limited to". The term "based on" is to be read as "based at least in part on". The terms "one embodiment" and "an embodiment" are to be read as "at least one embodiment".
Fig. 1 schematically illustrates a flowchart of a method for detecting a thick cloud area of a remote sensing image based on a neural network according to the present invention, as shown in fig. 1, the method for detecting a thick cloud area of a remote sensing image based on a neural network according to the present invention includes:
acquiring a cloud remote sensing image;
obtaining a position feature through a position determining branch;
obtaining an edge feature through an edge perfecting branch;
fusing the position features and the edge features to obtain fused features;
and inputting the fusion characteristics into a U-shaped convolutional neural network to obtain a thick cloud region detection result.
According to one embodiment of the invention, the method for obtaining the position feature according to the position determining branch is as follows:
inputting the cloud remote sensing image into the position determining branch, and denoting the cloud remote sensing image input into the position determining branch as X_p; compressing X_p successively through the compression modules so as to reduce the size of X_p and obtain F_4; the formulas of the compression modules are,
F_1 = C_1(X_p)
F_2 = C_2(F_1)
F_3 = C_3(F_2)
F_4 = C_4(F_3)
wherein,
F_1 represents the output after passing through the first compression module;
F_2 represents the output after passing through the second compression module;
F_3 represents the output after passing through the third compression module;
F_4 represents the output after passing through the fourth compression module;
C_1, C_2, C_3 and C_4 represent the first to fourth compression modules, respectively;
refining F_4 through the feature refinement module to obtain F_r, where F_r and F_4 have the same size; the formula of the feature refinement module is,
F_r = LReLU(Conv_3×3(LReLU(Conv_3×3(F_4))))
wherein,
F_r represents the output of the feature refinement module;
LReLU represents the leaky rectified linear activation unit;
Conv_3×3 represents a 3×3 convolution;
stacking F_4 and F_r at the feature channel level so that the number of feature channels becomes twice the original number, and inputting the stacked F_4 and F_r into the reconstruction modules to obtain the position feature; the formulas for obtaining the position feature are,
R_1 = U_1(Cat(F_4, F_r))
R_2 = U_2(R_1)
R_3 = U_3(R_2)
F_pos = U_4(R_3)
wherein,
R_1 represents the output of the first reconstruction module;
R_2 represents the output of the second reconstruction module;
R_3 represents the output of the third reconstruction module;
F_pos represents the position feature;
U_1, U_2, U_3 and U_4 represent the first to fourth reconstruction modules, respectively;
Cat(·) represents the stacking operation at the feature channel level.
According to one embodiment of the invention, the method for obtaining the edge feature according to the edge perfecting branch is as follows:
inputting the cloud remote sensing image into the edge perfecting branch, and denoting the cloud remote sensing image input into the edge perfecting branch as X_e; the edge perfecting branch comprises nine feature extraction modules with the same structure, and the dilation rates of the dilated convolution units of the nine feature extraction modules are different from one another; passing X_e sequentially through the first five feature extraction modules to obtain E_4 and E_5; the formulas of the first five feature extraction modules are,
E_1 = D_1(X_e)
E_2 = D_2(E_1)
E_3 = D_3(E_2)
E_4 = D_4(E_3)
E_5 = D_5(E_4)
wherein,
E_1 represents the output after passing through the first feature extraction module;
E_2 represents the output after passing through the second feature extraction module;
E_3 represents the output after passing through the third feature extraction module;
E_4 represents the output after passing through the fourth feature extraction module;
E_5 represents the output after passing through the fifth feature extraction module;
D_1, D_2, D_3, D_4 and D_5 represent the first to fifth feature extraction modules, respectively;
stacking E_4 and E_5 at the feature channel level so that the number of feature channels becomes twice the original number, and inputting the stacked E_4 and E_5 into the remaining feature extraction modules to obtain the edge feature; the formulas for obtaining the edge feature are,
E_6 = D_6(Cat(E_4, E_5))
E_7 = D_7(E_6)
E_8 = D_8(E_7)
F_edge = D_9(E_8)
wherein,
E_6 represents the output after passing through the sixth feature extraction module;
E_7 represents the output after passing through the seventh feature extraction module;
E_8 represents the output after passing through the eighth feature extraction module;
F_edge represents the edge feature;
D_6, D_7, D_8 and D_9 represent the sixth to ninth feature extraction modules, respectively;
Cat(·) represents the stacking operation at the feature channel level.
According to one embodiment of the invention, the position feature and the edge feature are fused to obtain the fusion feature, and the fusion feature is input into the U-shaped convolutional neural network; the calculation formula for obtaining the fusion feature is,
F_fus = Conv_3×3(Cat(F_pos, X, F_edge))
wherein,
F_fus represents the fusion feature;
Cat(·) represents the stacking operation at the feature channel level;
Conv_3×3 represents a 3×3 convolution;
F_pos represents the position feature;
X represents the cloud remote sensing image;
F_edge represents the edge feature.
According to one embodiment of the invention, the U-shaped convolutional neural network is trained with a binary cross entropy loss function; the calculation formula of the binary cross entropy loss function is:
L_BCE = BCE(P(X), Y)
wherein,
L_BCE represents the binary cross entropy loss function;
X represents the cloud remote sensing image;
Y represents the thick cloud mask label corresponding to the cloud remote sensing image;
P(X) represents the detection result output by the U-shaped convolutional neural network for the cloud remote sensing image X;
BCE(·,·) represents the binary cross entropy calculation.
According to one embodiment of the present invention, fig. 2 schematically shows a U-shaped convolutional neural network diagram of the neural network-based remote sensing image thick cloud area detection method according to the present invention. As shown in fig. 2, the position determining branch includes four compression modules, one feature refinement module and four reconstruction modules; each compression module includes two 3×3 convolutions, two leaky rectified linear activation units and one maximum pooling layer; the feature refinement module includes two 3×3 convolutions and two leaky rectified linear activation units; each reconstruction module includes one upsampling operation unit, two 3×3 convolutions and two leaky rectified linear activation units; and each feature extraction module includes one dilated convolution unit and one leaky rectified linear activation unit.
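Putting the pieces together, a hedged end-to-end sketch of the parallel network of fig. 2 might look as follows; it reuses the illustrative PositionBranch and EdgeBranch classes from the earlier sketches, and TinyUNet is only a minimal stand-in for the U-shaped convolutional neural network, not the network actually claimed. With these assumptions, model = ThickCloudDetector() and logits = model(image) produce a per-pixel thick cloud map for an image whose height and width are divisible by 16, since the position branch downsamples four times.

import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Minimal stand-in for the U-shaped convolutional neural network head (illustrative only)."""
    def __init__(self, c_in, c=32):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(c_in, c, 3, padding=1), nn.LeakyReLU(0.2), nn.MaxPool2d(2))
        self.up = nn.Sequential(
            nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
            nn.Conv2d(c, c, 3, padding=1), nn.LeakyReLU(0.2))
        self.head = nn.Conv2d(c + c_in, 1, kernel_size=1)
    def forward(self, x):
        u = self.up(self.down(x))
        return self.head(torch.cat([u, x], dim=1))  # skip connection, then 1x1 output convolution

class ThickCloudDetector(nn.Module):
    """Parallel network: position branch + edge branch, fusion, then a U-shaped detection head."""
    def __init__(self, c_img=3, c=32):
        super().__init__()
        self.pos_branch = PositionBranch(c_img, c)   # overall thick cloud localization (sketch above)
        self.edge_branch = EdgeBranch(c_img, c)      # cloud area edge refinement (sketch above)
        self.fuse = nn.Conv2d(2 * c + c_img, c, 3, padding=1)
        self.unet = TinyUNet(c)
    def forward(self, x):
        f_pos = self.pos_branch(x)                   # position feature F_pos
        f_edge = self.edge_branch(x)                 # edge feature F_edge
        f_fus = self.fuse(torch.cat([f_pos, x, f_edge], dim=1))  # fusion feature F_fus
        return self.unet(f_fus)                      # per-pixel thick cloud detection map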
Furthermore, to achieve the above object, the present invention provides a system for detecting a thick cloud area of a remote sensing image based on a neural network, and fig. 3 schematically shows a flowchart of a system for detecting a thick cloud area of a remote sensing image based on a neural network according to the present invention, as shown in fig. 3, and the system for detecting a thick cloud area of a remote sensing image based on a neural network according to the present invention includes:
an image acquisition module: acquiring a cloud remote sensing image;
a position feature acquisition module: obtaining a position feature through a position determining branch;
an edge feature acquisition module: obtaining an edge feature through an edge perfecting branch;
fusion characteristic acquisition module: fusing the position features and the edge features to obtain fused features;
thick cloud area detection module: and inputting the fusion characteristics into a U-shaped convolutional neural network to obtain a thick cloud region detection result.
According to one embodiment of the invention, the method for obtaining the position feature according to the position determining branch is as follows:
inputting the cloud remote sensing image into the position determining branch, and denoting the cloud remote sensing image input into the position determining branch as X_p; compressing X_p successively through the compression modules so as to reduce the size of X_p and obtain F_4; the formulas of the compression modules are,
F_1 = C_1(X_p)
F_2 = C_2(F_1)
F_3 = C_3(F_2)
F_4 = C_4(F_3)
wherein,
F_1 represents the output after passing through the first compression module;
F_2 represents the output after passing through the second compression module;
F_3 represents the output after passing through the third compression module;
F_4 represents the output after passing through the fourth compression module;
C_1, C_2, C_3 and C_4 represent the first to fourth compression modules, respectively;
refining F_4 through the feature refinement module to obtain F_r, where F_r and F_4 have the same size; the formula of the feature refinement module is,
F_r = LReLU(Conv_3×3(LReLU(Conv_3×3(F_4))))
wherein,
F_r represents the output of the feature refinement module;
LReLU represents the leaky rectified linear activation unit;
Conv_3×3 represents a 3×3 convolution;
stacking F_4 and F_r at the feature channel level so that the number of feature channels becomes twice the original number, and inputting the stacked F_4 and F_r into the reconstruction modules to obtain the position feature; the formulas for obtaining the position feature are,
R_1 = U_1(Cat(F_4, F_r))
R_2 = U_2(R_1)
R_3 = U_3(R_2)
F_pos = U_4(R_3)
wherein,
R_1 represents the output of the first reconstruction module;
R_2 represents the output of the second reconstruction module;
R_3 represents the output of the third reconstruction module;
F_pos represents the position feature;
U_1, U_2, U_3 and U_4 represent the first to fourth reconstruction modules, respectively;
Cat(·) represents the stacking operation at the feature channel level.
According to one embodiment of the invention, the method for obtaining the edge feature according to the edge perfecting branch is as follows:
inputting the cloud remote sensing image into the edge perfecting branch, and denoting the cloud remote sensing image input into the edge perfecting branch as X_e; the edge perfecting branch comprises nine feature extraction modules with the same structure, and the dilation rates of the dilated convolution units of the nine feature extraction modules are different from one another; passing X_e sequentially through the first five feature extraction modules to obtain E_4 and E_5; the formulas of the first five feature extraction modules are,
E_1 = D_1(X_e)
E_2 = D_2(E_1)
E_3 = D_3(E_2)
E_4 = D_4(E_3)
E_5 = D_5(E_4)
wherein,
E_1 represents the output after passing through the first feature extraction module;
E_2 represents the output after passing through the second feature extraction module;
E_3 represents the output after passing through the third feature extraction module;
E_4 represents the output after passing through the fourth feature extraction module;
E_5 represents the output after passing through the fifth feature extraction module;
D_1, D_2, D_3, D_4 and D_5 represent the first to fifth feature extraction modules, respectively;
stacking E_4 and E_5 at the feature channel level so that the number of feature channels becomes twice the original number, and inputting the stacked E_4 and E_5 into the remaining feature extraction modules to obtain the edge feature; the formulas for obtaining the edge feature are,
E_6 = D_6(Cat(E_4, E_5))
E_7 = D_7(E_6)
E_8 = D_8(E_7)
F_edge = D_9(E_8)
wherein,
E_6 represents the output after passing through the sixth feature extraction module;
E_7 represents the output after passing through the seventh feature extraction module;
E_8 represents the output after passing through the eighth feature extraction module;
F_edge represents the edge feature;
D_6, D_7, D_8 and D_9 represent the sixth to ninth feature extraction modules, respectively;
Cat(·) represents the stacking operation at the feature channel level.
According to one embodiment of the invention, the position feature and the edge feature are fused to obtain the fusion feature, and the fusion feature is input into the U-shaped convolutional neural network; the calculation formula for obtaining the fusion feature is,
F_fus = Conv_3×3(Cat(F_pos, X, F_edge))
wherein,
F_fus represents the fusion feature;
Cat(·) represents the stacking operation at the feature channel level;
Conv_3×3 represents a 3×3 convolution;
F_pos represents the position feature;
X represents the cloud remote sensing image;
F_edge represents the edge feature.
According to one embodiment of the invention, the U-shaped convolutional neural network is trained with a binary cross entropy loss function; the calculation formula of the binary cross entropy loss function is:
L_BCE = BCE(P(X), Y)
wherein,
L_BCE represents the binary cross entropy loss function;
X represents the cloud remote sensing image;
Y represents the thick cloud mask label corresponding to the cloud remote sensing image;
P(X) represents the detection result output by the U-shaped convolutional neural network for the cloud remote sensing image X;
BCE(·,·) represents the binary cross entropy calculation.
According to one embodiment of the present invention, fig. 2 schematically shows a U-shaped convolutional neural network diagram of the neural network-based remote sensing image thick cloud area detection method according to the present invention. As shown in fig. 2, the position determining branch includes four compression modules, one feature refinement module and four reconstruction modules; each compression module includes two 3×3 convolutions, two leaky rectified linear activation units and one maximum pooling layer; the feature refinement module includes two 3×3 convolutions and two leaky rectified linear activation units; each reconstruction module includes one upsampling operation unit, two 3×3 convolutions and two leaky rectified linear activation units; and each feature extraction module includes one dilated convolution unit and one leaky rectified linear activation unit.
In order to achieve the above object, the present invention also provides an electronic device, including a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the above neural network-based method for detecting a thick cloud area of a remote sensing image.
In order to achieve the above object, the present invention further provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the above-mentioned method for detecting a thick cloud area of a remote sensing image based on a neural network.
Based on the above, the method has the beneficial effects that the parallel U-shaped convolutional neural network is utilized, two branches are adopted to respectively carry out integral detection of the thick cloud area and edge refinement of the cloud area, and the features extracted by the two branches are combined and fused, so that the detection result of the thick cloud area of the remote sensing image, which is accurate in detection and complete in edge refinement, is obtained.
Those of ordinary skill in the art will appreciate that the modules and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working procedures of the apparatus and device described above may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or modules, which may be in electrical, mechanical, or other forms.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network modules. Some or all of the modules can be selected according to actual needs to achieve the purpose of the embodiment of the invention.
In addition, each functional module in the embodiment of the present invention may be integrated in one processing module, or each module may exist alone physically, or two or more modules may be integrated in one module.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present invention. The aforementioned storage medium includes: a USB flash disk, a removable hard disk, a ROM, a RAM, a magnetic disk, an optical disk, etc.
The foregoing description covers only the preferred embodiments of the present application and the technical principles applied. It will be appreciated by persons skilled in the art that the scope of the invention referred to in this application is not limited to technical solutions formed by the specific combinations of the features described above, but is also intended to cover other technical solutions formed by any combination of the above features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in the present application.
It should be understood that, the sequence numbers of the steps in the summary and the embodiments of the present invention do not necessarily mean the order of execution, and the execution order of the processes should be determined by the functions and the internal logic, and should not be construed as limiting the implementation process of the embodiments of the present invention.

Claims (7)

1. A neural network-based method for detecting a thick cloud area of a remote sensing image, characterized by comprising the following steps:
acquiring a cloud remote sensing image;
obtaining a position feature through a position determining branch;
the method for obtaining the position feature according to the position determining branch is as follows:
inputting the cloud remote sensing image into the position determining branch, and denoting the cloud remote sensing image input into the position determining branch as X_p; compressing X_p successively through the compression modules so as to reduce the size of X_p and obtain F_4; the formulas of the compression modules are,
F_1 = C_1(X_p)
F_2 = C_2(F_1)
F_3 = C_3(F_2)
F_4 = C_4(F_3)
wherein,
F_1 represents the output after passing through the first compression module;
F_2 represents the output after passing through the second compression module;
F_3 represents the output after passing through the third compression module;
F_4 represents the output after passing through the fourth compression module;
C_1, C_2, C_3 and C_4 represent the first to fourth compression modules, respectively;
refining F_4 through the feature refinement module to obtain F_r, where F_r and F_4 have the same size; the formula of the feature refinement module is,
F_r = LReLU(Conv_3×3(LReLU(Conv_3×3(F_4))))
wherein,
F_r represents the output of the feature refinement module;
LReLU represents the leaky rectified linear activation unit;
Conv_3×3 represents a 3×3 convolution;
stacking F_4 and F_r at the feature channel level so that the number of feature channels becomes twice the original number, and inputting the stacked F_4 and F_r into the reconstruction modules to obtain the position feature; the formulas for obtaining the position feature are,
R_1 = U_1(Cat(F_4, F_r))
R_2 = U_2(R_1)
R_3 = U_3(R_2)
F_pos = U_4(R_3)
wherein,
R_1 represents the output of the first reconstruction module;
R_2 represents the output of the second reconstruction module;
R_3 represents the output of the third reconstruction module;
F_pos represents the position feature;
U_1, U_2, U_3 and U_4 represent the first to fourth reconstruction modules, respectively;
Cat(·) represents the stacking operation at the feature channel level;
obtaining an edge feature through an edge perfecting branch;
fusing the position features and the edge features to obtain fusion features;
and inputting the fusion characteristics into a U-shaped convolutional neural network to obtain a thick cloud region detection result.
2. The method for detecting the thick cloud area of the remote sensing image based on the neural network according to claim 1, wherein the method for obtaining the edge characteristics according to the edge perfection branch is as follows:
inputting the cloud remote sensing image into the edge perfecting branch, and denoting the cloud remote sensing image input into the edge perfecting branch as X_e; the edge perfecting branch comprises nine feature extraction modules with the same structure, and the dilation rates of the dilated convolution units of the nine feature extraction modules are different from one another; passing X_e sequentially through the first five feature extraction modules to obtain E_4 and E_5; the formulas of the first five feature extraction modules are,
E_1 = D_1(X_e)
E_2 = D_2(E_1)
E_3 = D_3(E_2)
E_4 = D_4(E_3)
E_5 = D_5(E_4)
wherein,
E_1 represents the output after passing through the first feature extraction module;
E_2 represents the output after passing through the second feature extraction module;
E_3 represents the output after passing through the third feature extraction module;
E_4 represents the output after passing through the fourth feature extraction module;
E_5 represents the output after passing through the fifth feature extraction module;
D_1, D_2, D_3, D_4 and D_5 represent the first to fifth feature extraction modules, respectively;
stacking E_4 and E_5 at the feature channel level so that the number of feature channels becomes twice the original number, and inputting the stacked E_4 and E_5 into the remaining feature extraction modules to obtain the edge feature; the formulas for obtaining the edge feature are,
E_6 = D_6(Cat(E_4, E_5))
E_7 = D_7(E_6)
E_8 = D_8(E_7)
F_edge = D_9(E_8)
wherein,
E_6 represents the output after passing through the sixth feature extraction module;
E_7 represents the output after passing through the seventh feature extraction module;
E_8 represents the output after passing through the eighth feature extraction module;
F_edge represents the edge feature;
D_6, D_7, D_8 and D_9 represent the sixth to ninth feature extraction modules, respectively;
Cat(·) represents the stacking operation at the feature channel level.
3. The neural network-based remote sensing image thick cloud area detection method according to claim 2, wherein the position feature and the edge feature are fused to obtain the fusion feature, the fusion feature is input into the U-shaped convolutional neural network, and the calculation formula for obtaining the fusion feature is,
F_fus = Conv_3×3(Cat(F_pos, X, F_edge))
wherein,
F_fus represents the fusion feature;
Cat(·) represents the stacking operation at the feature channel level;
Conv_3×3 represents a 3×3 convolution;
F_pos represents the position feature;
X represents the cloud remote sensing image;
F_edge represents the edge feature.
4. The neural network-based remote sensing image thick cloud area detection method according to claim 3, wherein the U-shaped convolutional neural network is trained with a binary cross entropy loss function, and the calculation formula of the binary cross entropy loss function is:
L_BCE = BCE(P(X), Y)
wherein,
L_BCE represents the binary cross entropy loss function;
X represents the cloud remote sensing image;
Y represents the thick cloud mask label corresponding to the cloud remote sensing image;
P(X) represents the detection result output by the U-shaped convolutional neural network for the cloud remote sensing image X;
BCE(·,·) represents the binary cross entropy calculation.
5. A neural network-based remote sensing image thick cloud area detection system, characterized by comprising:
an image acquisition module: acquiring a cloud remote sensing image;
a position feature acquisition module: obtaining a position feature through a position determining branch;
the method for obtaining the position feature according to the position determining branch is as follows:
inputting the cloud remote sensing image into the position determining branch, and denoting the cloud remote sensing image input into the position determining branch as X_p; compressing X_p successively through the compression modules so as to reduce the size of X_p and obtain F_4; the formulas of the compression modules are,
F_1 = C_1(X_p)
F_2 = C_2(F_1)
F_3 = C_3(F_2)
F_4 = C_4(F_3)
wherein,
F_1 represents the output after passing through the first compression module;
F_2 represents the output after passing through the second compression module;
F_3 represents the output after passing through the third compression module;
F_4 represents the output after passing through the fourth compression module;
C_1, C_2, C_3 and C_4 represent the first to fourth compression modules, respectively;
refining F_4 through the feature refinement module to obtain F_r, where F_r and F_4 have the same size; the formula of the feature refinement module is,
F_r = LReLU(Conv_3×3(LReLU(Conv_3×3(F_4))))
wherein,
F_r represents the output of the feature refinement module;
LReLU represents the leaky rectified linear activation unit;
Conv_3×3 represents a 3×3 convolution;
stacking F_4 and F_r at the feature channel level so that the number of feature channels becomes twice the original number, and inputting the stacked F_4 and F_r into the reconstruction modules to obtain the position feature; the formulas for obtaining the position feature are,
R_1 = U_1(Cat(F_4, F_r))
R_2 = U_2(R_1)
R_3 = U_3(R_2)
F_pos = U_4(R_3)
wherein,
R_1 represents the output of the first reconstruction module;
R_2 represents the output of the second reconstruction module;
R_3 represents the output of the third reconstruction module;
F_pos represents the position feature;
U_1, U_2, U_3 and U_4 represent the first to fourth reconstruction modules, respectively;
Cat(·) represents the stacking operation at the feature channel level;
an edge feature acquisition module: obtaining an edge feature through an edge perfecting branch;
fusion characteristic acquisition module: fusing the position features and the edge features to obtain fusion features;
thick cloud area detection module: and inputting the fusion characteristics into a U-shaped convolutional neural network to obtain a thick cloud region detection result.
6. An electronic device comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, the computer program when executed by the processor implementing a neural network-based remote sensing image thick cloud region detection method as claimed in any one of claims 1 to 4.
7. A computer readable storage medium, wherein a computer program is stored on the computer readable storage medium, and when the computer program is executed by a processor, the computer program implements a neural network-based remote sensing image thick cloud area detection method as claimed in any one of claims 1 to 4.
CN202211545313.6A 2022-12-05 2022-12-05 Neural network-based detection method for thick cloud area of remote sensing image Active CN116188968B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211545313.6A CN116188968B (en) 2022-12-05 2022-12-05 Neural network-based detection method for thick cloud area of remote sensing image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211545313.6A CN116188968B (en) 2022-12-05 2022-12-05 Neural network-based detection method for thick cloud area of remote sensing image

Publications (2)

Publication Number Publication Date
CN116188968A CN116188968A (en) 2023-05-30
CN116188968B true CN116188968B (en) 2023-07-14

Family

ID=86451280

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211545313.6A Active CN116188968B (en) 2022-12-05 2022-12-05 Neural network-based detection method for thick cloud area of remote sensing image

Country Status (1)

Country Link
CN (1) CN116188968B (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110838124B (en) * 2017-09-12 2021-06-18 深圳科亚医疗科技有限公司 Method, system, and medium for segmenting images of objects having sparse distribution
CN112084872A (en) * 2020-08-10 2020-12-15 浙江工业大学 High-resolution remote sensing target accurate detection method fusing semantic segmentation and edge
CN113239830B (en) * 2021-05-20 2023-01-17 北京航空航天大学 Remote sensing image cloud detection method based on full-scale feature fusion

Also Published As

Publication number Publication date
CN116188968A (en) 2023-05-30

Similar Documents

Publication Publication Date Title
CN107862698B (en) Light field foreground segmentation method and device based on K mean cluster
CN109344701A (en) A kind of dynamic gesture identification method based on Kinect
CN109255324A (en) Gesture processing method, interaction control method and equipment
CN108319972A (en) A kind of end-to-end difference online learning methods for image, semantic segmentation
CN104200461B (en) The remote sensing image registration method of block and sift features is selected based on mutual information image
CN108960404B (en) Image-based crowd counting method and device
CN106355197A (en) Navigation image matching filtering method based on K-means clustering algorithm
Yue et al. A novel attention fully convolutional network method for synthetic aperture radar image segmentation
CN107169479A (en) Intelligent mobile equipment sensitive data means of defence based on fingerprint authentication
CN104657980A (en) Improved multi-channel image partitioning algorithm based on Meanshift
CN110245600B (en) Unmanned aerial vehicle road detection method for self-adaptive initial quick stroke width
CN110298227A (en) A kind of vehicle checking method in unmanned plane image based on deep learning
US11042986B2 (en) Method for thinning and connection in linear object extraction from an image
CN104851089A (en) Static scene foreground segmentation method and device based on three-dimensional light field
CN109063630B (en) Rapid vehicle detection method based on separable convolution technology and frame difference compensation strategy
CN111415373A (en) Target tracking and segmenting method, system and medium based on twin convolutional network
Wang et al. Enhanced spinning parallelogram operator combining color constraint and histogram integration for robust light field depth estimation
CN115457277A (en) Intelligent pavement disease identification and detection method and system
CN108876810A (en) The method that algorithm carries out moving object detection is cut using figure in video frequency abstract
CN108573238A (en) A kind of vehicle checking method based on dual network structure
CN107274361A (en) Landsat TM remote sensing image datas remove cloud method and system
CN116188968B (en) Neural network-based detection method for thick cloud area of remote sensing image
CN106023184A (en) Depth significance detection method based on anisotropy center-surround difference
CN109784145A (en) Object detection method and storage medium based on depth map
CN117541652A (en) Dynamic SLAM method based on depth LK optical flow method and D-PROSAC sampling strategy

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant