CN116188968B - Neural network-based detection method for thick cloud area of remote sensing image - Google Patents
- Publication number
- CN116188968B CN116188968B CN202211545313.6A CN202211545313A CN116188968B CN 116188968 B CN116188968 B CN 116188968B CN 202211545313 A CN202211545313 A CN 202211545313A CN 116188968 B CN116188968 B CN 116188968B
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Abstract
The invention relates to the technical field of image data processing and discloses a neural network-based method for detecting thick cloud areas in remote sensing images, comprising the following steps: acquiring a cloud-covered remote sensing image; obtaining position features via a position determining branch; obtaining edge features via an edge perfecting branch; fusing the position features and the edge features to obtain fusion features; and inputting the fusion features into a U-shaped convolutional neural network to obtain a thick cloud region detection result. The invention uses a parallel U-shaped convolutional neural network in which the position determining branch performs overall thick cloud detection and the edge perfecting branch refines cloud-area edges, and merges and fuses the features extracted by the two branches, thereby producing a thick cloud detection result that is both accurately located and edge-complete.
Description
Technical Field
The invention relates to the field of image data processing, in particular to a neural network-based method for detecting a thick cloud area of a remote sensing image.
Background
Optical remote sensing images provide rich observations of ground-object information; however, cloud cover can obscure ground information over large areas, degrading its estimation and observation. Detecting thick cloud coverage in optical remote sensing images is therefore the basis and key for further analysis and utilization of remote sensing image information.
Traditional methods for detecting thick cloud areas in remote sensing images are mainly based on spectral thresholding: thick clouds are detected automatically by setting thresholds on different spectral bands of the image. This strategy requires neither pixel-level labeling of the remote sensing image nor complex model training. However, spectral-threshold methods often generalize poorly: detection accuracy is low in complex remote sensing scenes, and robustness across different scene types is insufficient. In recent years, convolutional neural networks have shown outstanding performance in computer vision and remote sensing image processing tasks, inspiring CNN-based thick cloud detection methods. The Chinese invention patent "Remote sensing satellite cloud detection method based on DeepLabV3+" (CN202010241130.X) inputs a cloud-covered remote sensing image into the semantic segmentation network DeepLabV3+ to obtain a cloud-area detection map; it introduces dilated (atrous) convolution and a pyramid structure, enlarging the receptive field of the convolution and improving cloud-area detection accuracy. Similarly, the Chinese patent "Remote sensing image cloud detection method based on multi-scale convolutional neural network" (CN202111108889.1) uses multi-scale convolution and pooling operations to improve detection generality for clouds of different sizes.
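The spectral-threshold strategy described above can be sketched minimally as follows; the band values and the 0.35 threshold are hypothetical illustrations (real thresholds are sensor- and band-specific), not values from the patent:

```python
def threshold_cloud_mask(band, threshold=0.35):
    """Flag each pixel as cloud (1) when its reflectance in a bright band
    exceeds a fixed threshold, otherwise clear (0). This is the classic
    spectral-threshold approach: no labels and no training, but a single
    global threshold generalizes poorly across complex scenes."""
    return [[1 if px > threshold else 0 for px in row] for row in band]

# Toy 2x3 reflectance grid: bright pixels (> 0.35) are flagged as cloud.
reflectance = [[0.9, 0.2, 0.5],
               [0.1, 0.8, 0.3]]
mask = threshold_cloud_mask(reflectance)
print(mask)  # [[1, 0, 1], [0, 1, 0]]
```

The sketch makes the weakness visible: any bright non-cloud surface (snow, sand) above the threshold is also flagged, which is the generalization problem the neural approach addresses.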
However, to improve detection accuracy for thick cloud areas in remote sensing images, accurately determining the overall position of a thick cloud area while simultaneously refining its edges remains a very difficult problem.
Thick cloud detection in remote sensing images has been studied extensively, but existing research generally uses a single-branch convolutional neural network; no existing method uses a multi-branch network model to perform overall thick cloud detection and cloud-area edge refinement separately.
Disclosure of Invention
The invention aims to overcome one or more of the problems in the prior art and provides a neural network-based method for detecting thick cloud areas in remote sensing images.
In order to achieve the above object, the present invention provides a method for detecting a thick cloud area of a remote sensing image based on a neural network, comprising:
acquiring a cloud remote sensing image;
obtaining position features via a position determining branch;
obtaining edge features via an edge perfecting branch;
fusing the position features and the edge features to obtain fusion features;
and inputting the fusion characteristics into a U-shaped convolutional neural network to obtain a thick cloud region detection result.
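The five steps above can be sketched as a data-flow skeleton. All function names below are placeholders standing in for the branches, fusion, and U-shaped network described in the text, and the lambdas are toy stand-ins so the skeleton runs; this is not the patent's actual implementation:

```python
def detect_thick_cloud(image, position_branch, edge_branch, fuse, u_net):
    """Two parallel branches process the same cloudy image; their
    features are fused and passed to a U-shaped CNN that outputs
    the thick-cloud region detection result."""
    position_feature = position_branch(image)  # coarse cloud location
    edge_feature = edge_branch(image)          # refined cloud edges
    fused = fuse(position_feature, edge_feature)
    return u_net(fused)

# Toy stand-ins so the skeleton runs end to end on a 2-element "image".
result = detect_thick_cloud(
    image=[1.0, 2.0],
    position_branch=lambda x: [v * 0.5 for v in x],
    edge_branch=lambda x: [v + 1.0 for v in x],
    fuse=lambda a, b: [p + q for p, q in zip(a, b)],
    u_net=lambda f: [round(v) for v in f],
)
print(result)  # [2, 4]
```

The point of the skeleton is the topology: the two branches see the same input in parallel, and only their fused output reaches the final network.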
According to one aspect of the invention, the method for obtaining the position characteristic according to the position determination branch comprises the following steps:
inputting the cloud remote sensing image into the position determining branch and denoting the input as X_l; passing X_l sequentially through the four compression modules to obtain the compressed features F_1, F_2, F_3 and F_4 (with F_0 = X_l), each compression module halving the spatial size of its input and computing F_i = MaxPool(LeakyReLU(Conv3×3(LeakyReLU(Conv3×3(F_(i-1))))));

passing F_4 through the feature refinement module to obtain the refined feature R, which has the same size as F_4 and is computed as R = LeakyReLU(Conv3×3(LeakyReLU(Conv3×3(F_4))));

at each reconstruction stage, upsampling the current feature and stacking it with the compression feature of matching spatial size along the channel dimension (the superposition operation of the feature channel layers), so that the number of feature channels is doubled, then passing the stacked feature through a reconstruction module; the output of the fourth reconstruction module is the position feature F_loc.
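The channel-layer stacking used here, concatenating two feature maps of equal spatial size along the channel axis so the channel count doubles, can be illustrated with plain nested lists (the shapes are illustrative, not the network's actual sizes):

```python
def stack_channels(feat_a, feat_b):
    """Concatenate two feature maps, each represented as a list of
    channels (2-D grids), along the channel dimension. Spatial size is
    unchanged; the channel count is the sum of the two inputs' counts,
    i.e. doubled when both inputs have equal channel counts."""
    assert len(feat_a[0]) == len(feat_b[0])  # same spatial height
    return feat_a + feat_b

# Two single-channel 2x2 feature maps -> one 2-channel 2x2 map.
a = [[[1, 2], [3, 4]]]
b = [[[5, 6], [7, 8]]]
stacked = stack_channels(a, b)
print(len(stacked))  # 2 channels
```

This is the same operation deep-learning frameworks expose as channel-wise concatenation, the standard way U-shaped networks merge skip connections with upsampled features.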
According to one aspect of the invention, the method for obtaining edge features according to edge perfection branches comprises the following steps:
inputting the cloud remote sensing image into the edge perfecting branch and denoting the input as X_e; the edge perfecting branch comprises nine feature extraction modules of identical structure, the dilated convolution units of the nine feature extraction modules having different dilation rates; passing X_e sequentially through the first five feature extraction modules to obtain the intermediate features E_a and E_b, each feature extraction module computing E_i = LeakyReLU(DilatedConv(E_(i-1)));

stacking E_a and E_b along the channel dimension, so that the number of feature channels is doubled, and passing the stacked feature through the remaining four extraction modules to obtain the edge feature E_edge.
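The dilated (expansion) convolution units that differ only in dilation rate can be sketched in one dimension: a larger rate spaces the kernel taps further apart, widening the receptive field without adding weights. The difference kernel below is a hypothetical example, not a kernel from the patent:

```python
def dilated_conv1d(signal, kernel, rate):
    """Valid 1-D convolution with dilation: kernel taps are spaced
    `rate` samples apart, so the receptive field grows to
    (len(kernel) - 1) * rate + 1 with no extra parameters."""
    span = (len(kernel) - 1) * rate + 1
    out = []
    for i in range(len(signal) - span + 1):
        out.append(sum(kernel[k] * signal[i + k * rate]
                       for k in range(len(kernel))))
    return out

x = [1, 2, 3, 4, 5, 6]
k = [1, 0, -1]                   # simple difference kernel
print(dilated_conv1d(x, k, 1))   # [-2, -2, -2, -2]  (receptive field 3)
print(dilated_conv1d(x, k, 2))   # [-4, -4]          (receptive field 5)
```

Using different rates across the nine modules lets the branch see cloud edges at several scales with the same small kernel, which is why the modules can share one structure while differing only in dilation.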
According to one aspect of the invention, the position feature F_loc and the edge feature E_edge are fused to obtain the fusion feature F_fuse, which is input into the U-shaped convolutional neural network.
According to one aspect of the invention, the U-shaped convolutional neural network is trained with a binary cross entropy loss function, computed as

L = -(1/N) · Σ_i [ y_i·log(p_i) + (1 - y_i)·log(1 - p_i) ],

where p_i denotes the per-pixel result obtained by inputting the fusion feature into the U-shaped convolutional neural network, y_i denotes the corresponding ground-truth cloud label, and N is the number of pixels.
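The binary cross entropy loss used for training can be written out explicitly. The sketch below averages the per-pixel loss over predicted cloud probabilities p and ground-truth labels y; the epsilon clamp is a conventional numerical safeguard, not a detail from the patent:

```python
import math

def binary_cross_entropy(preds, labels, eps=1e-7):
    """Mean binary cross entropy: -[y*log(p) + (1-y)*log(1-p)],
    averaged over pixels. `eps` clamps predictions away from 0 and 1
    so the logarithms stay finite."""
    total = 0.0
    for p, y in zip(preds, labels):
        p = min(max(p, eps), 1.0 - eps)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(preds)

# Confident, correct predictions (cloud pixel at 0.9, clear pixel at 0.1)
# incur a low average loss.
print(round(binary_cross_entropy([0.9, 0.1], [1, 0]), 4))  # 0.1054
```

Because cloud masks are per-pixel binary maps, this loss treats detection as dense binary classification, penalizing each mislabeled pixel independently.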
In order to achieve the above object, the present invention provides a remote sensing image thick cloud area detection system based on a neural network, including:
an image acquisition module: acquiring a cloud remote sensing image;
a position feature acquisition module: obtaining position features via the position determining branch;
an edge feature acquisition module: obtaining edge features via the edge perfecting branch;
fusion characteristic acquisition module: fusing the position features and the edge features to obtain fusion features;
thick cloud area detection module: and inputting the fusion characteristics into a U-shaped convolutional neural network to obtain a thick cloud region detection result.
In order to achieve the above object, the present invention provides an electronic device, including a processor, a memory, and a computer program stored in the memory and capable of running on the processor, where the computer program when executed by the processor implements the above method for detecting a thick cloud area of a remote sensing image based on a neural network.
In order to achieve the above object, the present invention provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the above method for detecting a thick cloud area of a remote sensing image based on a neural network.
Based on the above, the invention has the beneficial effects that:
according to the invention, a parallel U-shaped convolutional neural network is utilized, two branches are adopted to respectively carry out integral detection of a thick cloud area and edge refinement of the cloud area, and features extracted by the two branches are combined and fused, so that a thick cloud area detection result of a remote sensing image with accurate detection and complete edge refinement is obtained.
Drawings
FIG. 1 schematically illustrates a flow chart of a method for detecting a thick cloud region of a remote sensing image based on a neural network according to the present invention;
FIG. 2 schematically illustrates a U-shaped convolutional neural network diagram of a neural network-based remote sensing image thick cloud region detection method according to the present invention;
fig. 3 schematically shows a flowchart of a remote sensing image thick cloud area detection system based on a neural network according to the present invention.
Detailed Description
The present disclosure will now be discussed with reference to exemplary embodiments, it being understood that the embodiments discussed are merely intended to enable those of ordinary skill in the art to better understand and practice the present disclosure, and do not imply any limitation on its scope.
As used herein, the term "comprising" and its variants are open-ended terms meaning "including but not limited to". The term "based on" is to be construed as "based at least in part on", and the term "one embodiment" as "at least one embodiment".
Fig. 1 schematically illustrates a flowchart of a method for detecting a thick cloud area of a remote sensing image based on a neural network according to the present invention, as shown in fig. 1, the method for detecting a thick cloud area of a remote sensing image based on a neural network according to the present invention includes:
acquiring a cloud remote sensing image;
obtaining position features via a position determining branch;
obtaining edge features via an edge perfecting branch;
fusing the position features and the edge features to obtain fused features;
and inputting the fusion characteristics into a U-shaped convolutional neural network to obtain a thick cloud region detection result.
According to one embodiment of the invention, the method for obtaining the position characteristic according to the position determining branch comprises the following steps:
inputting the cloud remote sensing image into the position determining branch and denoting the input as X_l; passing X_l sequentially through the four compression modules to obtain the compressed features F_1, F_2, F_3 and F_4 (with F_0 = X_l), each compression module halving the spatial size of its input and computing F_i = MaxPool(LeakyReLU(Conv3×3(LeakyReLU(Conv3×3(F_(i-1))))));

passing F_4 through the feature refinement module to obtain the refined feature R, which has the same size as F_4 and is computed as R = LeakyReLU(Conv3×3(LeakyReLU(Conv3×3(F_4))));

at each reconstruction stage, upsampling the current feature and stacking it with the compression feature of matching spatial size along the channel dimension, so that the number of feature channels is doubled, then passing the stacked feature through a reconstruction module; the output of the fourth reconstruction module is the position feature F_loc.
According to one embodiment of the invention, the method for obtaining the edge characteristics according to the edge perfecting branches comprises the following steps:
inputting the cloud remote sensing image into the edge perfecting branch and denoting the input as X_e; the edge perfecting branch comprises nine feature extraction modules of identical structure, the dilated convolution units of the nine feature extraction modules having different dilation rates; passing X_e sequentially through the first five feature extraction modules to obtain the intermediate features E_a and E_b, each feature extraction module computing E_i = LeakyReLU(DilatedConv(E_(i-1)));

stacking E_a and E_b along the channel dimension, so that the number of feature channels is doubled, and passing the stacked feature through the remaining four extraction modules to obtain the edge feature E_edge.
According to one embodiment of the invention, the position feature F_loc and the edge feature E_edge are fused to obtain the fusion feature F_fuse, which is input into the U-shaped convolutional neural network.
According to one embodiment of the invention, the U-shaped convolutional neural network is trained with a binary cross entropy loss function, computed as

L = -(1/N) · Σ_i [ y_i·log(p_i) + (1 - y_i)·log(1 - p_i) ],

where p_i denotes the per-pixel result obtained by inputting the fusion feature into the U-shaped convolutional neural network, y_i denotes the corresponding ground-truth cloud label, and N is the number of pixels.
According to one embodiment of the present invention, fig. 2 schematically shows the U-shaped convolutional neural network of the neural network-based remote sensing image thick cloud area detection method. As shown in fig. 2, the position determining branch comprises four compression modules, one feature refinement module and four reconstruction modules: each compression module comprises two 3×3 convolutions, two leaky rectified linear activation units and one max pooling layer; the feature refinement module comprises two 3×3 convolutions and two leaky rectified linear activation units; each reconstruction module comprises one upsampling unit, two 3×3 convolutions and two leaky rectified linear activation units; and each feature extraction module comprises one dilated convolution unit and one leaky rectified linear activation unit.
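Two of the module components named here, the leaky rectified linear activation and 2×2 max pooling, can be sketched directly. The 0.01 slope and the 2×2 pool size are conventional choices assumed for illustration, not values stated in the patent:

```python
def leaky_relu(x, slope=0.01):
    """Leaky rectified linear unit: pass positives through unchanged,
    scale negatives by a small slope instead of zeroing them."""
    return x if x >= 0 else slope * x

def max_pool_2x2(grid):
    """Non-overlapping 2x2 max pooling: halves each spatial dimension,
    which is how the compression modules shrink the feature map."""
    return [[max(grid[i][j], grid[i][j + 1],
                 grid[i + 1][j], grid[i + 1][j + 1])
             for j in range(0, len(grid[0]), 2)]
            for i in range(0, len(grid), 2)]

activated = [[leaky_relu(v) for v in row]
             for row in [[-1.0, 2.0], [3.0, -4.0]]]
print(activated)                # [[-0.01, 2.0], [3.0, -0.04]]
print(max_pool_2x2(activated))  # [[3.0]]
```

Keeping a small negative slope (rather than the hard zero of plain ReLU) preserves a gradient for inactive units, a common motivation for using the leaky variant in encoder-decoder networks.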
Furthermore, to achieve the above object, the present invention provides a system for detecting a thick cloud area of a remote sensing image based on a neural network, and fig. 3 schematically shows a flowchart of a system for detecting a thick cloud area of a remote sensing image based on a neural network according to the present invention, as shown in fig. 3, and the system for detecting a thick cloud area of a remote sensing image based on a neural network according to the present invention includes:
an image acquisition module: acquiring a cloud remote sensing image;
a position feature acquisition module: obtaining position features via the position determining branch;
an edge feature acquisition module: obtaining edge features via the edge perfecting branch;
fusion characteristic acquisition module: fusing the position features and the edge features to obtain fused features;
thick cloud area detection module: and inputting the fusion characteristics into a U-shaped convolutional neural network to obtain a thick cloud region detection result.
According to one embodiment of the invention, the method for obtaining the position characteristic according to the position determining branch comprises the following steps:
inputting the cloud remote sensing image into the position determining branch and denoting the input as X_l; passing X_l sequentially through the four compression modules to obtain the compressed features F_1, F_2, F_3 and F_4 (with F_0 = X_l), each compression module halving the spatial size of its input and computing F_i = MaxPool(LeakyReLU(Conv3×3(LeakyReLU(Conv3×3(F_(i-1))))));

passing F_4 through the feature refinement module to obtain the refined feature R, which has the same size as F_4 and is computed as R = LeakyReLU(Conv3×3(LeakyReLU(Conv3×3(F_4))));

at each reconstruction stage, upsampling the current feature and stacking it with the compression feature of matching spatial size along the channel dimension, so that the number of feature channels is doubled, then passing the stacked feature through a reconstruction module; the output of the fourth reconstruction module is the position feature F_loc.
According to one embodiment of the invention, the method for obtaining the edge characteristics according to the edge perfecting branches comprises the following steps:
inputting the cloud remote sensing image into the edge perfecting branch and denoting the input as X_e; the edge perfecting branch comprises nine feature extraction modules of identical structure, the dilated convolution units of the nine feature extraction modules having different dilation rates; passing X_e sequentially through the first five feature extraction modules to obtain the intermediate features E_a and E_b, each feature extraction module computing E_i = LeakyReLU(DilatedConv(E_(i-1)));

stacking E_a and E_b along the channel dimension, so that the number of feature channels is doubled, and passing the stacked feature through the remaining four extraction modules to obtain the edge feature E_edge.
According to one embodiment of the invention, the position feature F_loc and the edge feature E_edge are fused to obtain the fusion feature F_fuse, which is input into the U-shaped convolutional neural network.
According to one embodiment of the invention, the U-shaped convolutional neural network is trained with a binary cross entropy loss function, computed as

L = -(1/N) · Σ_i [ y_i·log(p_i) + (1 - y_i)·log(1 - p_i) ],

where p_i denotes the per-pixel result obtained by inputting the fusion feature into the U-shaped convolutional neural network, y_i denotes the corresponding ground-truth cloud label, and N is the number of pixels.
According to one embodiment of the present invention, fig. 2 schematically shows the U-shaped convolutional neural network of the neural network-based remote sensing image thick cloud area detection method. As shown in fig. 2, the position determining branch comprises four compression modules, one feature refinement module and four reconstruction modules: each compression module comprises two 3×3 convolutions, two leaky rectified linear activation units and one max pooling layer; the feature refinement module comprises two 3×3 convolutions and two leaky rectified linear activation units; each reconstruction module comprises one upsampling unit, two 3×3 convolutions and two leaky rectified linear activation units; and each feature extraction module comprises one dilated convolution unit and one leaky rectified linear activation unit.
In order to achieve the above object, the present invention further provides an electronic device comprising a processor, a memory, and a computer program stored in the memory and runnable on the processor, wherein the computer program, when executed by the processor, implements the above neural network-based method for detecting thick cloud areas in remote sensing images.
In order to achieve the above object, the present invention further provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the above-mentioned method for detecting a thick cloud area of a remote sensing image based on a neural network.
Based on the above, the method has the beneficial effects that the parallel U-shaped convolutional neural network is utilized, two branches are adopted to respectively carry out integral detection of the thick cloud area and edge refinement of the cloud area, and the features extracted by the two branches are combined and fused, so that the detection result of the thick cloud area of the remote sensing image, which is accurate in detection and complete in edge refinement, is obtained.
Those of ordinary skill in the art will appreciate that the modules and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working procedures of the apparatus and device described above may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or modules, which may be in electrical, mechanical, or other forms.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network modules. Some or all of the modules can be selected according to actual needs to achieve the purpose of the embodiment of the invention.
In addition, each functional module in the embodiment of the present invention may be integrated in one processing module, or each module may exist alone physically, or two or more modules may be integrated in one module.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the various embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, an optical disk, or the like.
The foregoing description is only of the preferred embodiments of the present application and is presented as a description of the principles of the technology being utilized. It will be appreciated by persons skilled in the art that the scope of the invention referred to in this application is not limited to the specific combinations of features described above, but it is intended to cover other embodiments in which any combination of features described above or equivalents thereof is possible without departing from the spirit of the invention. Such as the above-described features and technical features having similar functions (but not limited to) disclosed in the present application are replaced with each other.
It should be understood that, the sequence numbers of the steps in the summary and the embodiments of the present invention do not necessarily mean the order of execution, and the execution order of the processes should be determined by the functions and the internal logic, and should not be construed as limiting the implementation process of the embodiments of the present invention.
Claims (7)
1. The method for detecting the thick cloud area of the remote sensing image based on the neural network is characterized by comprising the following steps of:
acquiring a cloud remote sensing image;
obtaining position features via a position determining branch;
the method for obtaining the position features via the position determining branch comprising:
inputting the cloud remote sensing image into the position determining branch and denoting the input as X_l; passing X_l sequentially through the four compression modules to obtain the compressed features F_1, F_2, F_3 and F_4 (with F_0 = X_l), each compression module halving the spatial size of its input and computing F_i = MaxPool(LeakyReLU(Conv3×3(LeakyReLU(Conv3×3(F_(i-1))))));

passing F_4 through the feature refinement module to obtain the refined feature R, which has the same size as F_4 and is computed as R = LeakyReLU(Conv3×3(LeakyReLU(Conv3×3(F_4))));

at each reconstruction stage, upsampling the current feature and stacking it with the compression feature of matching spatial size along the channel dimension, the stacking being the superposition operation of the feature channel layers, so that the number of feature channels is doubled, then passing the stacked feature through a reconstruction module; the output of the fourth reconstruction module is the position feature F_loc;

obtaining edge features via the edge perfecting branch;
fusing the position features and the edge features to obtain fusion features;
and inputting the fusion features into a U-shaped convolutional neural network to obtain a thick cloud region detection result.
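The position-determination branch of claim 1 chains three operations: size-reducing compression, size-preserving refinement, and feature-channel stacking that doubles the channel count before reconstruction. A minimal pure-Python sketch of that data flow follows; average pooling and an identity map stand in for the learned compression and refinement modules (both stand-ins are assumptions, since the patent's formulas are given only as images):

```python
def compress(feat):
    """Stand-in compression module: 2x2 average pooling halves the
    spatial size of a [channels][height][width] nested-list feature map."""
    out = []
    for plane in feat:
        h, w = len(plane), len(plane[0])
        out.append([[(plane[2 * i][2 * j] + plane[2 * i][2 * j + 1]
                      + plane[2 * i + 1][2 * j] + plane[2 * i + 1][2 * j + 1]) / 4.0
                     for j in range(w // 2)] for i in range(h // 2)])
    return out

def refine(feat):
    """Stand-in feature-refinement module: size-preserving (identity here)."""
    return [[row[:] for row in plane] for plane in feat]

def stack_channels(a, b):
    """Superposition on the feature-channel layer: channel count doubles,
    spatial size is unchanged."""
    return a + b

x = [[[float(i + j) for j in range(4)] for i in range(4)]]  # 1 channel, 4x4
xc = compress(x)                      # 1 channel, 2x2
y = stack_channels(xc, refine(xc))    # 2 channels, 2x2
```

The invariant to notice is the one the claim states: after stacking, the channel count is exactly twice that of either input, while the spatial size is unchanged.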
2. The neural-network-based method for detecting thick cloud regions in remote sensing images according to claim 1, wherein the edge features are obtained through the edge-perfecting branch as follows:
inputting the cloud remote sensing image into the edge-perfecting branch; the edge-perfecting branch comprises nine feature extraction modules of identical structure whose dilated-convolution units use different dilation rates; passing the input image sequentially through the first five feature extraction modules; the formula of the first five extraction modules is as follows,
stacking the resulting features on the feature-channel layer so that the number of feature channels is doubled, and inputting the stacked feature into the remaining extraction modules to obtain the edge features; the formula for obtaining the edge features is as follows,
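The nine feature extraction modules of claim 2 share one structure but differ in the dilation rate of their dilated-convolution units. A small 1-D sketch illustrates the effect of the rate: with the same 3-tap kernel, a larger dilation samples neighbours farther apart, widening the receptive field without adding parameters (the 1-D setting and the kernel values are illustrative only, not from the patent):

```python
def dilated_conv1d(x, kernel, dilation):
    """'Same'-style 1-D dilated convolution with zero padding: tap t of the
    kernel reads the input at offset (t - k//2) * dilation."""
    n, k = len(x), len(kernel)
    out = []
    for i in range(n):
        s = 0.0
        for t in range(k):
            idx = i + (t - k // 2) * dilation
            if 0 <= idx < n:
                s += kernel[t] * x[idx]
        out.append(s)
    return out

signal = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0]  # unit impulse
kernel = [1.0, 1.0, 1.0]
r1 = dilated_conv1d(signal, kernel, dilation=1)  # taps at distance 1
r2 = dilated_conv1d(signal, kernel, dilation=2)  # taps at distance 2
```

With dilation 1 the impulse response spans three adjacent positions; with dilation 2 it spans five positions with gaps, which is why stacking modules with varied rates covers both fine edges and wide cloud boundaries.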
3. The neural-network-based method for detecting thick cloud regions in remote sensing images according to claim 2, wherein the fusion features are obtained by fusing the position features and the edge features and are input into the U-shaped convolutional neural network; the calculation formula for obtaining the fusion features is as follows,
4. The neural-network-based method for detecting thick cloud regions in remote sensing images according to claim 3, wherein the U-shaped convolutional neural network is trained with a binary cross-entropy loss function, whose calculation formula is:
where the predicted value denotes the output obtained from the U-shaped convolutional neural network;
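Claim 4's exact symbols survive only as formula images, but the standard per-pixel binary cross-entropy it names can be sketched as:

```python
import math

def binary_cross_entropy(pred, target, eps=1e-12):
    """Mean binary cross-entropy between predicted cloud probabilities and
    0/1 ground-truth labels, as used to train the U-shaped network."""
    assert len(pred) == len(target)
    total = 0.0
    for p, y in zip(pred, target):
        p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
        total += -(y * math.log(p) + (1.0 - y) * math.log(1.0 - p))
    return total / len(pred)
```

The loss is near zero for confident correct predictions and grows without bound as a pixel's predicted probability moves toward the wrong label, which is what drives the per-pixel cloud/non-cloud decision.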
5. A neural-network-based system for detecting thick cloud regions in remote sensing images, characterized by comprising:
an image acquisition module, configured to acquire a cloud remote sensing image;
a position feature acquisition module, configured to obtain position features through a position-determination branch;
wherein the position features are obtained through the position-determination branch as follows:
inputting the cloud remote sensing image into the position-determination branch, and compressing it step by step through the compression module so that its spatial size is reduced; the formula of the compression module is as follows,
refining the compressed feature through the feature-refinement module, the output of which has the same size as its input; the formula of the feature-refinement module is as follows,
stacking the compressed feature and the refined feature on the feature-channel layer so that the number of feature channels is doubled, and inputting the stacked feature into the reconstruction module to obtain the position features; the formula for obtaining the position features is as follows,
where the stacking symbol represents the superposition operation on the feature-channel layer;
an edge feature acquisition module, configured to obtain edge features through the edge-perfecting branch;
a fusion feature acquisition module, configured to fuse the position features and the edge features to obtain fusion features;
a thick cloud region detection module, configured to input the fusion features into a U-shaped convolutional neural network to obtain a thick cloud region detection result.
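The detection module feeds the fusion features to a U-shaped convolutional neural network, i.e. an encoder-decoder with skip connections. Its shape bookkeeping can be traced with a small sketch (the depth, channel widths, and channel-halving decoder convolution are illustrative assumptions, not details from the patent):

```python
def unet_shape_trace(c, h, w, depth):
    """Trace (channels, height, width) through a U-shaped encoder-decoder.
    Encoder levels double channels and halve the spatial size; decoder
    levels upsample, concatenate the matching skip feature, and convolve
    the channel count back down; a final 1-channel map is the cloud mask."""
    skips, trace = [], [(c, h, w)]
    for _ in range(depth):                   # encoder path
        c, h, w = c * 2, h // 2, w // 2
        skips.append((c, h, w))
        trace.append((c, h, w))
    for sc, sh, sw in reversed(skips[:-1]):  # decoder path with skips
        h, w = h * 2, w * 2
        assert (h, w) == (sh, sw)            # upsampled map aligns with skip
        c = (c + sc) // 2                    # concat, then channel-reducing conv
        trace.append((c, h, w))
    c, h, w = 1, h * 2, w * 2                # final upsample + 1-channel mask
    trace.append((c, h, w))
    return trace
```

Running the trace on an 8-channel 64x64 fusion feature with depth 3 ends at a 1-channel 64x64 map, matching the requirement that the detection result has the input's spatial resolution.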
6. An electronic device, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the neural-network-based remote sensing image thick cloud region detection method according to any one of claims 1 to 4.
7. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the neural-network-based remote sensing image thick cloud region detection method according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211545313.6A CN116188968B (en) | 2022-12-05 | 2022-12-05 | Neural network-based detection method for thick cloud area of remote sensing image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116188968A CN116188968A (en) | 2023-05-30 |
CN116188968B true CN116188968B (en) | 2023-07-14 |
Family
ID=86451280
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211545313.6A Active CN116188968B (en) | 2022-12-05 | 2022-12-05 | Neural network-based detection method for thick cloud area of remote sensing image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116188968B (en) |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110838124B (en) * | 2017-09-12 | 2021-06-18 | 深圳科亚医疗科技有限公司 | Method, system, and medium for segmenting images of objects having sparse distribution |
CN112084872A (en) * | 2020-08-10 | 2020-12-15 | 浙江工业大学 | High-resolution remote sensing target accurate detection method fusing semantic segmentation and edge |
CN113239830B (en) * | 2021-05-20 | 2023-01-17 | 北京航空航天大学 | Remote sensing image cloud detection method based on full-scale feature fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||