CN116168302A - Remote sensing image rock vein extraction method based on multi-scale residual error fusion network - Google Patents


Info

Publication number: CN116168302A (granted as CN116168302B)
Application number: CN202310449575.0A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 李冠群, 俞伟学
Assignee: Genyu Muxing Beijing Space Technology Co., Ltd. (applicant and current assignee)
Legal status: Granted, Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/70: Arrangements using pattern recognition or machine learning
    • G06V10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; blind source separation
    • G06V10/806: Fusion of extracted features
    • G06V10/82: Arrangements using neural networks
    • G06V20/00: Scenes; scene-specific elements
    • G06V20/10: Terrestrial scenes

Abstract

The invention relates to the technical field of remote sensing image processing, and discloses a remote sensing image rock vein extraction method based on a multi-scale residual fusion network, which comprises the following steps: acquiring a remote sensing image of the rock vein; obtaining a characteristic image according to a multi-scale residual fusion network; and inputting the characteristic image into a remote sensing image rock vein extraction network to obtain a rock vein extraction result. According to the invention, the trained multi-scale residual fusion network combined with wavelet features automatically extracts the rock vein region in the remote sensing image, without manual intervention. By combining wavelet features, the network fully considers remote sensing image components of different frequencies; through the multi-scale residual fusion modules, image features of different frequencies and different scales are fully extracted and deeply fused, realizing accurate extraction of rock vein regions from remote sensing images.

Description

Remote sensing image rock vein extraction method based on multi-scale residual error fusion network
Technical Field
The invention relates to the technical field of remote sensing image processing, in particular to a remote sensing image rock vein extraction method based on a multi-scale residual error fusion network.
Background
Remote sensing refers to the acquisition of information about the earth's surface and the atmosphere by sensing devices remote from the earth's surface. The remote sensing technology is widely applied to various fields such as environment monitoring, resource management, city planning and the like. In recent years, with the development of remote sensing technology and the progress of remote sensing instruments, remote sensing images have become a valuable data source for earth observation.
Geological exploration is a typical application field of remote sensing observation technology, and rock veins (dikes) are an important target in geological exploration. Among current methods for locating vein regions, remote sensing image observation is an effective one. At present, however, identifying vein regions in remote sensing images relies mainly on visual inspection, which consumes a great deal of labor and limits time efficiency.
Disclosure of Invention
The invention aims to overcome one or more of the problems in the prior art, and provides a remote sensing image rock vein extraction method based on a multi-scale residual fusion network.
In order to achieve the above object, the invention provides a remote sensing image rock vein extraction method based on a multi-scale residual error fusion network, which comprises the following steps:
acquiring a remote sensing image of the rock vein;
obtaining a characteristic image according to a multi-scale residual fusion network;
and inputting the characteristic image into a remote sensing image rock vein extraction network to obtain a rock vein extraction result.
According to one aspect of the invention, the multi-scale residual fusion network comprises a discrete wavelet transform, multi-scale residual fusion modules and an inverse discrete wavelet transform, and the discrete wavelet transform is used for decomposing the rock vein remote sensing image, wherein the formula is,

(A, D, H, V) = DWT(X)

wherein,
A represents the approximation component;
D represents the diagonal component;
H represents the horizontal component;
V represents the vertical component;
DWT(·) represents the discrete wavelet transform;
X represents the rock vein remote sensing image.
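The single-level decomposition above can be sketched in Python. A Haar wavelet is assumed here purely for illustration (the patent does not name the wavelet basis), and `haar_dwt2` is a hypothetical helper, not part of the patented network:

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar DWT: splits an image into the approximation (A),
    diagonal (D), horizontal (H) and vertical (V) subbands, each at half
    the spatial resolution (orthonormal convention)."""
    a = img[0::2, 0::2]  # even rows, even cols
    b = img[0::2, 1::2]  # even rows, odd cols
    c = img[1::2, 0::2]  # odd rows, even cols
    d = img[1::2, 1::2]  # odd rows, odd cols
    A = (a + b + c + d) / 2.0  # low-low: approximation component
    D = (a - b - c + d) / 2.0  # high-high: diagonal detail
    H = (a + b - c - d) / 2.0  # detail along rows
    V = (a - b + c - d) / 2.0  # detail along columns
    return A, D, H, V

# A toy 4x4 band standing in for a rock vein remote sensing image:
X = np.arange(16.0).reshape(4, 4)
A, D, H, V = haar_dwt2(X)
print(A.shape)  # (2, 2): each subband halves the spatial size
```

For real imagery one would typically use `pywt.dwt2(X, 'haar')` from PyWavelets, which returns the same four subbands as `cA, (cH, cV, cD)`.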
According to one aspect of the invention, the multi-scale residual fusion modules are used to fuse the approximation component with the diagonal component, and the horizontal component with the vertical component, wherein the formulae are,

(F1, F2) = M1(A, D)
(F3, F4) = M2(H, V)

wherein,
F1 represents the first output feature after passing through the first multi-scale residual fusion module;
F2 represents the second output feature after passing through the first multi-scale residual fusion module;
F3 represents the third output feature after passing through the second multi-scale residual fusion module;
F4 represents the fourth output feature after passing through the second multi-scale residual fusion module;
M1 represents the first multi-scale residual fusion module;
M2 represents the second multi-scale residual fusion module.
According to one aspect of the invention, the multi-scale residual fusion modules are used for cross fusion, wherein the formulae are,

(F5, F6) = M3(F1, F3)
(F7, F8) = M4(F2, F4)

wherein,
F5 represents the fifth output feature after passing through the third multi-scale residual fusion module;
F6 represents the sixth output feature after passing through the third multi-scale residual fusion module;
F7 represents the seventh output feature after passing through the fourth multi-scale residual fusion module;
F8 represents the eighth output feature after passing through the fourth multi-scale residual fusion module;
M3 represents the third multi-scale residual fusion module;
M4 represents the fourth multi-scale residual fusion module.
According to one aspect of the invention, the feature image is obtained by inverse transformation using the inverse discrete wavelet transform, wherein the formula is,

Y = IDWT(F5, F6, F7, F8)

wherein,
IDWT(·) represents the inverse discrete wavelet transform;
Y represents the feature image.
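Putting the pieces together, the end-to-end data flow described above (decompose, fuse, cross-fuse, inverse-transform) can be sketched as below. The `fuse` function is a deliberately trivial placeholder for a trained multi-scale residual fusion module, and the pairing of subbands follows one reading of the description; both are assumptions for illustration only:

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar DWT (orthonormal convention); returns A, D, H, V."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return ((a + b + c + d) / 2, (a - b - c + d) / 2,   # A, D
            (a + b - c - d) / 2, (a - b + c - d) / 2)   # H, V

def haar_idwt2(A, D, H, V):
    """Inverse of haar_dwt2: rebuilds the full-resolution image."""
    out = np.empty((2 * A.shape[0], 2 * A.shape[1]))
    out[0::2, 0::2] = (A + H + V + D) / 2
    out[0::2, 1::2] = (A + H - V - D) / 2
    out[1::2, 0::2] = (A - H + V - D) / 2
    out[1::2, 1::2] = (A - H - V + D) / 2
    return out

def fuse(x, y):
    """Placeholder for a trained multi-scale residual fusion module:
    returns two mixtures of its inputs so the wiring is runnable."""
    return (x + y) / 2, (x - y) / 2

X = np.arange(16.0).reshape(4, 4)   # stand-in rock vein remote sensing band
A, D, H, V = haar_dwt2(X)
F1, F2 = fuse(A, D)                 # first fusion module
F3, F4 = fuse(H, V)                 # second fusion module
F5, F6 = fuse(F1, F3)               # third module (cross fusion)
F7, F8 = fuse(F2, F4)               # fourth module (cross fusion)
Y = haar_idwt2(F5, F6, F7, F8)      # feature image, same size as X
```

Because each `fuse` here is linear, `Y` is just a linear mixture of the subbands of `X`; in the patented method the learned modules replace `fuse`, and the inverse transform still restores the original spatial resolution.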
According to one aspect of the invention, fusing the components using the multi-scale residual fusion module further comprises the following. The module has two computing branches; each computing branch comprises four computational operation layers, and each computational operation layer consists of a convolution, a switchable normalization operation and a parametric rectified linear unit. Denoting the j-th computational operation layer of the i-th computational branch as L(i,j), its output as U(i,j), and the two module inputs as P and Q, the computation using the first computational operation layer is,

U(1,1) = L(1,1)(P)
U(2,1) = L(2,1)(Q)

wherein,
L(1,1) represents the first computational operation layer of the first computational branch;
L(2,1) represents the first computational operation layer of the second computational branch;
U(1,1) represents the output of the first computational operation layer of the first computational branch;
U(2,1) represents the output of the first computational operation layer of the second computational branch.

The second computational operation layer of each of the two computational branches then processes the features further, wherein the formulae are,

U(1,2) = L(1,2)(Cat(U(1,1), U(2,1)))
U(2,2) = L(2,2)(Cat(U(2,1), U(1,1)))

wherein,
L(1,2) represents the second computational operation layer of the first computational branch;
L(2,2) represents the second computational operation layer of the second computational branch;
U(1,2) represents the output of the second computational operation layer of the first computational branch;
U(2,2) represents the output of the second computational operation layer of the second computational branch;
Cat(·) represents the superposition (concatenation) of a plurality of features along the channel dimension.

The third computational operation layer of each of the two computational branches then processes the features further, wherein the formulae are,

U(1,3) = L(1,3)(U(1,2))
U(2,3) = L(2,3)(U(2,2))

wherein,
L(1,3) represents the third computational operation layer of the first computational branch;
L(2,3) represents the third computational operation layer of the second computational branch;
U(1,3) represents the output of the third computational operation layer of the first computational branch;
U(2,3) represents the output of the third computational operation layer of the second computational branch.

The fourth computational operation layer of each of the two computational branches performs the final processing to obtain the fused result, wherein the formulae are,

U(1,4) = L(1,4)(U(1,3))
U(2,4) = L(2,4)(U(2,3))

wherein,
L(1,4) represents the fourth computational operation layer of the first computational branch;
L(2,4) represents the fourth computational operation layer of the second computational branch;
U(1,4) and U(2,4) represent the two fused output features of the module.
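A minimal numpy sketch of one module's internals follows. The toy `op_layer` stands in for the convolution + switchable normalization + parametric ReLU stack (a channel-mixing matmul with a fixed leaky slope is used instead, since the real layer parameters are learned), and the cross-branch channel concatenation at layer 2 follows the description above; all names, shapes, and the shared-weight simplification are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def op_layer(x, w):
    """Toy stand-in for one computational operation layer
    (convolution + switchable normalization + parametric ReLU):
    mixes channels with a matmul, then applies a leaky rectifier."""
    y = np.tensordot(w, x, axes=([1], [0]))   # (c_out, c_in) x (c_in, h, w) -> (c_out, h, w)
    return np.where(y > 0, y, 0.25 * y)       # PReLU-like, with a fixed slope

def msrf_module(p, q, rng):
    """Two computational branches of four operation layers each;
    layer 2 concatenates both branches' features along the channel axis."""
    c = p.shape[0]
    w1 = rng.standard_normal((c, c))          # layer-1 weights (shared across branches for brevity)
    w2 = rng.standard_normal((c, 2 * c))      # layer-2 weights (input doubled by concatenation)
    w3 = rng.standard_normal((c, c))          # layer-3 weights
    w4 = rng.standard_normal((c, c))          # layer-4 weights
    u1, u2 = op_layer(p, w1), op_layer(q, w1)            # layer 1
    v1 = op_layer(np.concatenate([u1, u2]), w2)          # layer 2, branch 1 (cross talk)
    v2 = op_layer(np.concatenate([u2, u1]), w2)          # layer 2, branch 2 (cross talk)
    v1, v2 = op_layer(v1, w3), op_layer(v2, w3)          # layer 3
    return op_layer(v1, w4), op_layer(v2, w4)            # layer 4: two fused outputs

rng = np.random.default_rng(0)
p = rng.standard_normal((4, 8, 8))   # e.g. approximation-component features
q = rng.standard_normal((4, 8, 8))   # e.g. diagonal-component features
o1, o2 = msrf_module(p, q, rng)
print(o1.shape, o2.shape)  # (4, 8, 8) (4, 8, 8)
```

The point of the sketch is the wiring: each branch keeps its own path, but the layer-2 concatenation lets the two inputs exchange information before the remaining layers refine the fused features.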
According to one aspect of the invention, the nested encoder network is trained using a loss that combines binary cross entropy with a Dice term, wherein the formula is,

Loss = BCE(R, G) + Dice(R, G)

wherein,
Loss represents the loss employed by the network training;
BCE(·,·) represents the binary cross entropy;
Dice(·,·) represents the Dice-coefficient loss term;
R represents the vein extraction result predicted by the network;
G represents the rock vein region binary label corresponding to the input rock vein remote sensing image.
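The training loss described above combines binary cross entropy with a Dice term. A hedged numpy sketch follows; the exact weighting and Dice formulation are not shown in the source, so an unweighted sum and the common `1 - Dice` form are assumed:

```python
import numpy as np

def bce_dice_loss(pred, label, eps=1e-7):
    """Binary cross entropy plus a Dice loss term, computed between a
    predicted vein probability map and the binary vein-region label."""
    p = np.clip(pred, eps, 1.0 - eps)                  # avoid log(0)
    bce = -np.mean(label * np.log(p) + (1 - label) * np.log(1 - p))
    inter = np.sum(p * label)
    dice = 1.0 - (2.0 * inter + eps) / (np.sum(p) + np.sum(label) + eps)
    return bce + dice

label = np.array([[1.0, 0.0], [1.0, 1.0]])   # toy binary vein mask
print(bce_dice_loss(label, label))            # near zero for a perfect prediction
```

The Dice term counters the class imbalance typical of thin vein regions, where plain cross entropy can be dominated by the background pixels.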
In order to achieve the above object, the present invention provides a remote sensing image rock vein extraction system based on a multi-scale residual fusion network, comprising:
the rock vein remote sensing image acquisition module, used for acquiring a remote sensing image of the rock vein;
the characteristic image acquisition module, used for obtaining a characteristic image according to the multi-scale residual fusion network;
the rock vein extraction result acquisition module, used for inputting the characteristic image into the remote sensing image rock vein extraction network to obtain a rock vein extraction result.
In order to achieve the above objective, the present invention provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and capable of running on the processor, wherein the computer program, when executed by the processor, implements the above remote sensing image rock vein extraction method based on a multi-scale residual fusion network.
In order to achieve the above objective, the present invention provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the above-mentioned remote sensing image rock vein extraction method based on a multi-scale residual error fusion network.
Based on the above, the invention has the beneficial effects that: the trained multi-scale residual fusion network combined with wavelet features can be utilized to automatically extract the rock vein region in the remote sensing image, and manual intervention is not needed; the multi-scale residual fusion network combining wavelet features can fully consider remote sensing image components with different frequencies, and through the multi-scale residual fusion module, image features with different frequencies and different scales can be fully extracted, and depth fusion is carried out, so that accurate remote sensing image rock vein region extraction is realized.
Drawings
FIG. 1 schematically shows a flowchart of a remote sensing image rock vein extraction method based on a multi-scale residual fusion network according to the invention;
FIG. 2 schematically shows a flowchart of a remote sensing image rock vein extraction system based on a multi-scale residual fusion network according to the invention.
Detailed Description
The present disclosure will now be discussed with reference to exemplary embodiments, it being understood that the embodiments discussed are merely for the purpose of enabling those of ordinary skill in the art to better understand and thus practice the present disclosure and do not imply any limitation to the scope of the present disclosure.
As used herein, the term "comprising" and variants thereof are to be read as open-ended terms meaning "including, but not limited to". The term "based on" is to be read as "based at least in part on", and the term "one embodiment" as "at least one embodiment".
Fig. 1 schematically shows a flowchart of the remote sensing image rock vein extraction method based on a multi-scale residual fusion network according to the present invention; as shown in fig. 1, the method includes:
acquiring a remote sensing image of the rock vein;
obtaining a characteristic image according to a multi-scale residual fusion network;
and inputting the characteristic image into a remote sensing image vein extraction network to obtain a vein extraction result.
According to one embodiment of the invention, the multi-scale residual fusion network comprises a discrete wavelet transform, multi-scale residual fusion modules and an inverse discrete wavelet transform, and the discrete wavelet transform is used for decomposing the rock vein remote sensing image, wherein the formula is,

(A, D, H, V) = DWT(X)

wherein,
A represents the approximation component;
D represents the diagonal component;
H represents the horizontal component;
V represents the vertical component;
DWT(·) represents the discrete wavelet transform;
X represents the rock vein remote sensing image.
According to one embodiment of the present invention, the approximation component is fused with the diagonal component, and the horizontal component with the vertical component, using the multi-scale residual fusion modules, wherein the formulae are,

(F1, F2) = M1(A, D)
(F3, F4) = M2(H, V)

wherein,
F1 represents the first output feature after passing through the first multi-scale residual fusion module;
F2 represents the second output feature after passing through the first multi-scale residual fusion module;
F3 represents the third output feature after passing through the second multi-scale residual fusion module;
F4 represents the fourth output feature after passing through the second multi-scale residual fusion module;
M1 represents the first multi-scale residual fusion module;
M2 represents the second multi-scale residual fusion module.
According to one embodiment of the present invention, cross fusion is performed using the multi-scale residual fusion modules, wherein the formulae are,

(F5, F6) = M3(F1, F3)
(F7, F8) = M4(F2, F4)

wherein,
F5 represents the fifth output feature after passing through the third multi-scale residual fusion module;
F6 represents the sixth output feature after passing through the third multi-scale residual fusion module;
F7 represents the seventh output feature after passing through the fourth multi-scale residual fusion module;
F8 represents the eighth output feature after passing through the fourth multi-scale residual fusion module;
M3 represents the third multi-scale residual fusion module;
M4 represents the fourth multi-scale residual fusion module.
According to one embodiment of the invention, the feature image is obtained by inverse transformation using the inverse discrete wavelet transform, wherein the formula is,

Y = IDWT(F5, F6, F7, F8)

wherein,
IDWT(·) represents the inverse discrete wavelet transform;
Y represents the feature image.
According to one embodiment of the present invention, fusing the components using the multi-scale residual fusion module further comprises the following. The module has two computing branches in total; each computing branch comprises four computational operation layers, and each computational operation layer consists of a convolution, a switchable normalization operation and a parametric rectified linear unit. Denoting the j-th computational operation layer of the i-th computational branch as L(i,j), its output as U(i,j), and the two module inputs as P and Q, the computation using the first computational operation layer is,

U(1,1) = L(1,1)(P)
U(2,1) = L(2,1)(Q)

wherein,
L(1,1) represents the first computational operation layer of the first computational branch;
L(2,1) represents the first computational operation layer of the second computational branch;
U(1,1) represents the output of the first computational operation layer of the first computational branch;
U(2,1) represents the output of the first computational operation layer of the second computational branch.

The second computational operation layer of each of the two computational branches then processes the features further, wherein the formulae are,

U(1,2) = L(1,2)(Cat(U(1,1), U(2,1)))
U(2,2) = L(2,2)(Cat(U(2,1), U(1,1)))

wherein,
L(1,2) represents the second computational operation layer of the first computational branch;
L(2,2) represents the second computational operation layer of the second computational branch;
U(1,2) represents the output of the second computational operation layer of the first computational branch;
U(2,2) represents the output of the second computational operation layer of the second computational branch;
Cat(·) represents the superposition (concatenation) of a plurality of features along the channel dimension.

The third computational operation layer of each of the two computational branches then processes the features further, wherein the formulae are,

U(1,3) = L(1,3)(U(1,2))
U(2,3) = L(2,3)(U(2,2))

wherein,
L(1,3) represents the third computational operation layer of the first computational branch;
L(2,3) represents the third computational operation layer of the second computational branch;
U(1,3) represents the output of the third computational operation layer of the first computational branch;
U(2,3) represents the output of the third computational operation layer of the second computational branch.

The fourth computational operation layer of each of the two computational branches performs the final processing to obtain the fused result, wherein the formulae are,

U(1,4) = L(1,4)(U(1,3))
U(2,4) = L(2,4)(U(2,3))

wherein,
L(1,4) represents the fourth computational operation layer of the first computational branch;
L(2,4) represents the fourth computational operation layer of the second computational branch;
U(1,4) and U(2,4) represent the two fused output features of the module.
According to one embodiment of the invention, the nested encoder network is trained using a loss that combines binary cross entropy with a Dice term, wherein the formula is,

Loss = BCE(R, G) + Dice(R, G)

wherein,
Loss represents the loss employed by the network training;
BCE(·,·) represents the binary cross entropy;
Dice(·,·) represents the Dice-coefficient loss term;
R represents the vein extraction result predicted by the network;
G represents the rock vein region binary label corresponding to the input rock vein remote sensing image.
Furthermore, to achieve the above object, the present invention provides a remote sensing image rock vein extraction system based on a multi-scale residual fusion network. Fig. 2 schematically shows a flowchart of this system; as shown in fig. 2, the system includes:
the rock vein remote sensing image acquisition module, used for acquiring a remote sensing image of the rock vein;
the characteristic image acquisition module, used for obtaining a characteristic image according to the multi-scale residual fusion network;
the rock vein extraction result acquisition module, used for inputting the characteristic image into the remote sensing image rock vein extraction network to obtain a rock vein extraction result.
According to one embodiment of the invention, the multi-scale residual fusion network comprises a discrete wavelet transform, multi-scale residual fusion modules and an inverse discrete wavelet transform, and the discrete wavelet transform is used for decomposing the rock vein remote sensing image, wherein the formula is,

(A, D, H, V) = DWT(X)

wherein,
A represents the approximation component;
D represents the diagonal component;
H represents the horizontal component;
V represents the vertical component;
DWT(·) represents the discrete wavelet transform;
X represents the rock vein remote sensing image.
According to one embodiment of the present invention, the approximation component is fused with the diagonal component, and the horizontal component with the vertical component, using the multi-scale residual fusion modules, wherein the formulae are,

(F1, F2) = M1(A, D)
(F3, F4) = M2(H, V)

wherein,
F1 represents the first output feature after passing through the first multi-scale residual fusion module;
F2 represents the second output feature after passing through the first multi-scale residual fusion module;
F3 represents the third output feature after passing through the second multi-scale residual fusion module;
F4 represents the fourth output feature after passing through the second multi-scale residual fusion module;
M1 represents the first multi-scale residual fusion module;
M2 represents the second multi-scale residual fusion module.
According to one embodiment of the present invention, cross fusion is performed using the multi-scale residual fusion modules, wherein the formulae are,

(F5, F6) = M3(F1, F3)
(F7, F8) = M4(F2, F4)

wherein,
F5 represents the fifth output feature after passing through the third multi-scale residual fusion module;
F6 represents the sixth output feature after passing through the third multi-scale residual fusion module;
F7 represents the seventh output feature after passing through the fourth multi-scale residual fusion module;
F8 represents the eighth output feature after passing through the fourth multi-scale residual fusion module;
M3 represents the third multi-scale residual fusion module;
M4 represents the fourth multi-scale residual fusion module.
According to one embodiment of the invention, the feature image is obtained by inverse transformation using the inverse discrete wavelet transform, wherein the formula is,

Y = IDWT(F5, F6, F7, F8)

wherein,
IDWT(·) represents the inverse discrete wavelet transform;
Y represents the feature image.
According to one embodiment of the present invention, fusing the components using the multi-scale residual fusion module further comprises the following. The module has two computing branches in total; each computing branch comprises four computational operation layers, and each computational operation layer consists of a convolution, a switchable normalization operation and a parametric rectified linear unit. Denoting the j-th computational operation layer of the i-th computational branch as L(i,j), its output as U(i,j), and the two module inputs as P and Q, the computation using the first computational operation layer is,

U(1,1) = L(1,1)(P)
U(2,1) = L(2,1)(Q)

wherein,
L(1,1) represents the first computational operation layer of the first computational branch;
L(2,1) represents the first computational operation layer of the second computational branch;
U(1,1) represents the output of the first computational operation layer of the first computational branch;
U(2,1) represents the output of the first computational operation layer of the second computational branch.

The second computational operation layer of each of the two computational branches then processes the features further, wherein the formulae are,

U(1,2) = L(1,2)(Cat(U(1,1), U(2,1)))
U(2,2) = L(2,2)(Cat(U(2,1), U(1,1)))

wherein,
L(1,2) represents the second computational operation layer of the first computational branch;
L(2,2) represents the second computational operation layer of the second computational branch;
U(1,2) represents the output of the second computational operation layer of the first computational branch;
U(2,2) represents the output of the second computational operation layer of the second computational branch;
Cat(·) represents the superposition (concatenation) of a plurality of features along the channel dimension.

The third computational operation layer of each of the two computational branches then processes the features further, wherein the formulae are,

U(1,3) = L(1,3)(U(1,2))
U(2,3) = L(2,3)(U(2,2))

wherein,
L(1,3) represents the third computational operation layer of the first computational branch;
L(2,3) represents the third computational operation layer of the second computational branch;
U(1,3) represents the output of the third computational operation layer of the first computational branch;
U(2,3) represents the output of the third computational operation layer of the second computational branch.

The fourth computational operation layer of each of the two computational branches performs the final processing to obtain the fused result, wherein the formulae are,

U(1,4) = L(1,4)(U(1,3))
U(2,4) = L(2,4)(U(2,3))

wherein,
L(1,4) represents the fourth computational operation layer of the first computational branch;
L(2,4) represents the fourth computational operation layer of the second computational branch;
U(1,4) and U(2,4) represent the two fused output features of the module.
According to one embodiment of the invention, the nested encoder network is trained using a loss that combines binary cross entropy with a Dice term, wherein the formula is,

Loss = BCE(R, G) + Dice(R, G)

wherein,
Loss represents the loss employed by the network training;
BCE(·,·) represents the binary cross entropy;
Dice(·,·) represents the Dice-coefficient loss term;
R represents the vein extraction result predicted by the network;
G represents the rock vein region binary label corresponding to the input rock vein remote sensing image.
In order to achieve the above object, the present invention also provides an electronic device, including: a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the above remote sensing image rock vein extraction method based on a multi-scale residual fusion network.
In order to achieve the above object, the present invention further provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the above-mentioned remote sensing image rock vein extraction method based on a multi-scale residual error fusion network.
Based on the above, the invention has the beneficial effects that: the trained multi-scale residual fusion network combined with wavelet features automatically extracts the rock vein region in the remote sensing image, without manual intervention; by combining wavelet features, the network fully considers remote sensing image components of different frequencies, and through the multi-scale residual fusion modules, image features of different frequencies and different scales are fully extracted and deeply fused, realizing accurate extraction of rock vein regions from remote sensing images.
Those of ordinary skill in the art will appreciate that the modules and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working procedures of the apparatus and device described above may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or modules, which may be in electrical, mechanical, or other forms.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network modules. Some or all of the modules can be selected according to actual needs to achieve the purpose of the embodiment of the invention.
In addition, each functional module in the embodiment of the present invention may be integrated in one processing module, or each module may exist alone physically, or two or more modules may be integrated in one module.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the various embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
The foregoing description covers only the preferred embodiments of the present application and the principles of the technology employed. It will be appreciated by persons skilled in the art that the scope of the invention referred to in this application is not limited to the specific combinations of features described above, but is intended to cover other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the invention, for example, embodiments in which the above features are replaced with technical features of similar function disclosed in (but not limited to) the present application.
It should be understood that the sequence numbers of the steps in the summary and the embodiments of the present invention do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.

Claims (10)

1. A remote sensing image rock vein extraction method based on a multi-scale residual fusion network is characterized by comprising the following steps:
acquiring a remote sensing image of the rock vein;
obtaining a characteristic image according to a multi-scale residual fusion network;
and inputting the characteristic image into a remote sensing image rock vein extraction network to obtain a rock vein extraction result.
2. The remote sensing image rock vein extraction method based on a multi-scale residual fusion network according to claim 1, wherein the multi-scale residual fusion network comprises a discrete wavelet transform, multi-scale residual fusion modules and an inverse discrete wavelet transform, and the rock vein remote sensing image is decomposed by the discrete wavelet transform according to the formula

$(F_A, F_D, F_H, F_V) = \mathrm{DWT}(X)$

wherein $F_A$ represents the approximation component, $F_D$ represents the diagonal component, $F_H$ represents the horizontal component, $F_V$ represents the vertical component, $\mathrm{DWT}(\cdot)$ represents the discrete wavelet transform, and $X$ represents the rock vein remote sensing image.
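For illustration, the single-level 2-D decomposition of claim 2 can be sketched in plain NumPy. The patent does not specify which wavelet is used, so the Haar basis is assumed here; the function and variable names are hypothetical, not taken from the patent.

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2-D Haar DWT (assumed basis): returns the
    approximation, horizontal, vertical, and diagonal sub-bands,
    each at half the spatial resolution of the input."""
    a = x[0::2, 0::2]; b = x[0::2, 1::2]
    c = x[1::2, 0::2]; d = x[1::2, 1::2]
    fa = (a + b + c + d) / 2.0   # approximation component
    fh = (a + b - c - d) / 2.0   # horizontal detail component
    fv = (a - b + c - d) / 2.0   # vertical detail component
    fd = (a - b - c + d) / 2.0   # diagonal detail component
    return fa, fh, fv, fd

img = np.arange(16.0).reshape(4, 4)   # toy stand-in for a remote sensing image
fa, fh, fv, fd = haar_dwt2(img)
print(fa.shape)  # each sub-band is half the spatial size: (2, 2)
```

Because this Haar transform is orthonormal, the total energy of the four sub-bands equals that of the input image, which makes it easy to sanity-check an implementation.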
3. The remote sensing image rock vein extraction method based on a multi-scale residual fusion network according to claim 2, wherein the multi-scale residual fusion modules are used for fusing the approximation component with the diagonal component, and the horizontal component with the vertical component, according to the formulas

$(F_1, F_2) = \mathcal{M}_1(F_A, F_D)$

$(F_3, F_4) = \mathcal{M}_2(F_H, F_V)$

wherein $F_1$ represents the first output feature and $F_2$ the second output feature after passing through the first multi-scale residual fusion module, $F_3$ represents the third output feature and $F_4$ the fourth output feature after passing through the second multi-scale residual fusion module, $\mathcal{M}_1$ represents the first multi-scale residual fusion module, $\mathcal{M}_2$ represents the second multi-scale residual fusion module, and $F_A$, $F_D$, $F_H$ and $F_V$ are the approximation, diagonal, horizontal and vertical components, respectively.
4. The remote sensing image rock vein extraction method based on a multi-scale residual fusion network according to claim 3, wherein the multi-scale residual fusion modules are further used for cross fusion according to the formulas

$(F_5, F_6) = \mathcal{M}_3(F_1, F_3)$

$(F_7, F_8) = \mathcal{M}_4(F_2, F_4)$

wherein $F_5$ represents the fifth output feature and $F_6$ the sixth output feature after passing through the third multi-scale residual fusion module, $F_7$ represents the seventh output feature and $F_8$ the eighth output feature after passing through the fourth multi-scale residual fusion module, $\mathcal{M}_3$ represents the third multi-scale residual fusion module, $\mathcal{M}_4$ represents the fourth multi-scale residual fusion module, and $F_1$ to $F_4$ are the output features of the first and second multi-scale residual fusion modules.
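The dataflow of claims 3 and 4 taken together can be sketched as below. Here `msrf` is a hypothetical stand-in for one multi-scale residual fusion module (the real module is the two-branch convolutional structure detailed in claim 6), and the exact input pairing of the cross-fusion stage is an assumption, since the claims only name the modules and their outputs.

```python
def msrf(x, y):
    """Hypothetical stand-in for a multi-scale residual fusion module:
    takes two inputs and returns two fused outputs. A trivial sum/difference
    pair is used so the wiring stays runnable."""
    return (x + y, x - y)

def msrf_network(f_a, f_h, f_v, f_d):
    """Wire four fusion modules as in claims 3-4 (pairing assumed)."""
    # First stage (claim 3): fuse approximation with diagonal,
    # and horizontal with vertical.
    f1, f2 = msrf(f_a, f_d)   # first module
    f3, f4 = msrf(f_h, f_v)   # second module
    # Second stage (claim 4): cross fusion across the two pairs.
    f5, f6 = msrf(f1, f3)     # third module
    f7, f8 = msrf(f2, f4)     # fourth module
    return f5, f6, f7, f8

# Toy scalar "components" in place of the four wavelet sub-bands.
f5, f6, f7, f8 = msrf_network(1.0, 2.0, 3.0, 4.0)
print(f5, f6, f7, f8)  # 10.0 0.0 -4.0 -2.0
```

The point of the sketch is only the topology: each second-stage module receives one output from each first-stage module, so low- and high-frequency information is mixed before the inverse transform.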
5. The remote sensing image rock vein extraction method based on a multi-scale residual fusion network according to claim 4, wherein the inverse discrete wavelet transform is used for inverse transformation to obtain the characteristic image according to the formula

$\hat{F} = \mathrm{IDWT}(F_5, F_6, F_7, F_8)$

wherein $\mathrm{IDWT}(\cdot)$ represents the inverse discrete wavelet transform, $\hat{F}$ represents the characteristic image, and $F_5$ to $F_8$ are the output features of the third and fourth multi-scale residual fusion modules.
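The inverse step of claim 5 can be sketched to match the forward Haar decomposition assumed earlier; again the Haar basis and all names are assumptions, since the claim only states that an inverse DWT recombines the four fused sub-bands into the characteristic image.

```python
import numpy as np

def haar_idwt2(fa, fh, fv, fd):
    """Inverse single-level 2-D Haar DWT (assumed basis): recombines the
    approximation, horizontal, vertical, and diagonal sub-bands into an
    image at twice their spatial resolution."""
    h, w = fa.shape
    x = np.empty((2 * h, 2 * w))
    x[0::2, 0::2] = (fa + fh + fv + fd) / 2.0
    x[0::2, 1::2] = (fa + fh - fv - fd) / 2.0
    x[1::2, 0::2] = (fa - fh + fv - fd) / 2.0
    x[1::2, 1::2] = (fa - fh - fv + fd) / 2.0
    return x

# A constant approximation band with zero detail bands reconstructs a
# constant image at half the coefficient amplitude.
z = np.zeros((2, 2))
img = haar_idwt2(np.ones((2, 2)), z, z, z)
print(img.shape)  # (4, 4)
```

Because the assumed Haar matrix is orthonormal and symmetric, this inverse is exactly the transpose of the forward transform, so a forward pass followed by this function recovers the input.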
6. The remote sensing image rock vein extraction method based on a multi-scale residual fusion network according to claim 5, wherein the fusing of the approximation component with the diagonal component, and of the horizontal component with the vertical component, by the multi-scale residual fusion module further comprises two computational branches in total, each computational branch comprising four computational operation layers, each computational operation layer consisting of a convolution, a switchable normalization operation and a parametric rectified linear unit; the first computational operation layers compute

$T_1^1 = \Phi_1^1(X_1)$

$T_1^2 = \Phi_1^2(X_2)$

wherein $\Phi_1^1$ represents the first computational operation layer of the first computational branch, $\Phi_1^2$ represents the first computational operation layer of the second computational branch, $T_1^1$ represents the output of the first computational operation layer of the first computational branch, $T_1^2$ represents the output of the first computational operation layer of the second computational branch, and $X_1$ and $X_2$ represent the two input components of the module;

the second computational operation layers of the two computational branches perform further processing according to the formulas

$T_2^1 = \Phi_2^1(\mathrm{Cat}(T_1^1, T_1^2))$

$T_2^2 = \Phi_2^2(\mathrm{Cat}(T_1^2, T_1^1))$

wherein $\Phi_2^1$ represents the second computational operation layer of the first computational branch, $\Phi_2^2$ represents the second computational operation layer of the second computational branch, $T_2^1$ represents the output of the second computational operation layer of the first computational branch, $T_2^2$ represents the output of the second computational operation layer of the second computational branch, and $\mathrm{Cat}(\cdot)$ represents the superposition operation of a plurality of features along the channel dimension;

the third computational operation layers of the two computational branches perform further processing according to the formulas

$T_3^1 = \Phi_3^1(\mathrm{Cat}(T_2^1, T_2^2))$

$T_3^2 = \Phi_3^2(\mathrm{Cat}(T_2^2, T_2^1))$

wherein $\Phi_3^1$ represents the third computational operation layer of the first computational branch, $\Phi_3^2$ represents the third computational operation layer of the second computational branch, $T_3^1$ represents the output of the third computational operation layer of the first computational branch, and $T_3^2$ represents the output of the third computational operation layer of the second computational branch;

the fourth computational operation layers of the two computational branches perform further processing to obtain the fused results according to the formulas

$F_{\mathrm{out}}^1 = \Phi_4^1(T_3^1)$

$F_{\mathrm{out}}^2 = \Phi_4^2(T_3^2)$

wherein $\Phi_4^1$ represents the fourth computational operation layer of the first computational branch, $\Phi_4^2$ represents the fourth computational operation layer of the second computational branch, and $F_{\mathrm{out}}^1$ and $F_{\mathrm{out}}^2$ represent the two fused output features of the module.
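The two-branch, four-layer structure of claim 6 can be sketched as below. The per-layer operator is a placeholder (a PReLU-like nonlinearity only, not the claimed convolution plus switchable normalization), and the cross-branch concatenation pairing is an assumption reconstructed from the claim's symbol definitions; all names are hypothetical.

```python
import numpy as np

def op_layer(x):
    """Placeholder for one computational operation layer (the claim's
    convolution + switchable normalization + parametric rectified linear
    unit); only a PReLU-like nonlinearity is applied here so the branch
    wiring stays runnable without a deep-learning framework."""
    return np.maximum(x, 0.1 * x)

def cat(*xs):
    """Superposition (concatenation) of features along the channel axis."""
    return np.concatenate(xs, axis=0)

def msrf_module(x1, x2):
    """Two computational branches of four layers each, exchanging features
    after every intermediate layer (assumed pairing)."""
    # Layer 1: each branch processes its own input component.
    t11, t12 = op_layer(x1), op_layer(x2)
    # Layer 2: each branch concatenates both layer-1 outputs.
    t21 = op_layer(cat(t11, t12))
    t22 = op_layer(cat(t12, t11))
    # Layer 3: exchange again across branches.
    t31 = op_layer(cat(t21, t22))
    t32 = op_layer(cat(t22, t21))
    # Layer 4: final per-branch processing yields the two fused outputs.
    return op_layer(t31), op_layer(t32)

x1 = np.ones((1, 2, 2))          # toy (channels, height, width) features
x2 = np.full((1, 2, 2), -1.0)
y1, y2 = msrf_module(x1, x2)
print(y1.shape)  # channels grow through the concatenations: (4, 2, 2)
```

In a real network each `op_layer` would project the concatenated channels back down with its convolution; the placeholder keeps the doubled channel count so the exchange pattern is visible in the output shape.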
7. The remote sensing image rock vein extraction method based on a multi-scale residual fusion network according to claim 6, wherein the nested encoder network is trained with a cross-entropy loss according to the formula

$\mathcal{L} = \mathcal{L}_{\mathrm{bce}}(R, Y) + \mathcal{L}_{\mathrm{dice}}(R, Y)$

wherein $\mathcal{L}$ represents the loss employed for network training, $\mathcal{L}_{\mathrm{bce}}(\cdot)$ represents the binary cross entropy, $\mathcal{L}_{\mathrm{dice}}(\cdot)$ represents the Dice-coefficient loss, $R$ represents the rock vein extraction result predicted by the network, and $Y$ represents the rock vein region binary label corresponding to the input rock vein remote sensing image.
8. A remote sensing image rock vein extraction system based on a multi-scale residual fusion network, characterized by comprising:
a rock vein remote sensing image acquisition module, used for: acquiring a remote sensing image of the rock vein;
a characteristic image acquisition module, used for: obtaining a characteristic image according to a multi-scale residual fusion network;
a rock vein extraction result acquisition module, used for: inputting the characteristic image into a remote sensing image rock vein extraction network to obtain a rock vein extraction result.
9. An electronic device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the remote sensing image rock vein extraction method based on a multi-scale residual fusion network according to any one of claims 1 to 7.
10. A computer readable storage medium, wherein a computer program is stored on the computer readable storage medium, and when the computer program is executed by a processor, the method for extracting the remote sensing image rock veins based on the multi-scale residual error fusion network is realized according to any one of claims 1 to 7.
CN202310449575.0A 2023-04-25 2023-04-25 Remote sensing image rock vein extraction method based on multi-scale residual error fusion network Active CN116168302B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310449575.0A CN116168302B (en) 2023-04-25 2023-04-25 Remote sensing image rock vein extraction method based on multi-scale residual error fusion network


Publications (2)

Publication Number Publication Date
CN116168302A true CN116168302A (en) 2023-05-26
CN116168302B CN116168302B (en) 2023-07-14

Family

ID=86416753


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140222403A1 (en) * 2013-02-07 2014-08-07 Schlumberger Technology Corporation Geologic model via implicit function
CN112784806A (en) * 2021-02-04 2021-05-11 中国地质科学院矿产资源研究所 Lithium-containing pegmatite vein extraction method based on full convolution neural network
CN113177456A (en) * 2021-04-23 2021-07-27 西安电子科技大学 Remote sensing target detection method based on single-stage full convolution network and multi-feature fusion
CN113379618A (en) * 2021-05-06 2021-09-10 航天东方红卫星有限公司 Optical remote sensing image cloud removing method based on residual dense connection and feature fusion
CN113625363A (en) * 2021-08-18 2021-11-09 中国地质科学院矿产资源研究所 Mineral exploration method and device for pegmatite-type lithium ore, computer equipment and medium
CN113780296A (en) * 2021-09-13 2021-12-10 山东大学 Remote sensing image semantic segmentation method and system based on multi-scale information fusion
CN113850824A (en) * 2021-09-27 2021-12-28 太原理工大学 Remote sensing image road network extraction method based on multi-scale feature fusion




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant