CN116168302A - Remote sensing image rock vein extraction method based on multi-scale residual error fusion network - Google Patents
- Publication number: CN116168302A (application CN202310449575.0A)
- Authority: CN (China)
- Prior art keywords: representing; scale residual; computational; remote sensing; sensing image
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V20/10 — Scenes; scene-specific elements; terrestrial scenes
- G06N3/02, G06N3/08 — Computing arrangements based on biological models; neural networks; learning methods
- G06V10/25 — Image preprocessing; determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/806 — Fusion, i.e. combining data from various sources, of extracted features
- G06V10/82 — Image or video recognition or understanding using neural networks
Abstract
The invention relates to the technical field of remote sensing image processing and discloses a remote sensing image rock vein extraction method based on a multi-scale residual fusion network, comprising the following steps: acquiring a rock vein remote sensing image; obtaining a characteristic image through a multi-scale residual fusion network; and inputting the characteristic image into a remote sensing image rock vein extraction network to obtain a rock vein extraction result. According to the invention, the trained multi-scale residual fusion network combined with wavelet features automatically extracts the rock vein regions in remote sensing images without manual intervention; by combining wavelet features, the network fully accounts for remote sensing image components at different frequencies, and through the multi-scale residual fusion module, image features at different frequencies and scales are fully extracted and deeply fused, achieving accurate extraction of rock vein regions from remote sensing images.
Description
Technical Field
The invention relates to the technical field of remote sensing image processing, and in particular to a remote sensing image rock vein extraction method based on a multi-scale residual fusion network.
Background
Remote sensing refers to acquiring information about the earth's surface and atmosphere through sensing devices located far from the earth's surface. Remote sensing technology is widely applied in fields such as environmental monitoring, resource management, and urban planning. In recent years, with the development of remote sensing technology and the progress of remote sensing instruments, remote sensing images have become a valuable data source for earth observation.
Geological exploration is a typical application field of remote sensing observation technology, and rock veins are an important target in geological exploration. Among the various current methods for locating rock vein regions, remote sensing image observation is an effective one. However, identifying rock vein regions in remote sensing images currently relies mainly on visual interpretation by human experts, which consumes considerable labor cost and limits time efficiency.
Disclosure of Invention
The invention aims to overcome one or more of the problems in the prior art and provides a remote sensing image rock vein extraction method based on a multi-scale residual fusion network.
To achieve the above object, the invention provides a remote sensing image rock vein extraction method based on a multi-scale residual fusion network, which comprises the following steps:
acquiring a remote sensing image of the rock vein;
obtaining a characteristic image according to a multi-scale residual fusion network;
and inputting the characteristic image into a remote sensing image rock vein extraction network to obtain a rock vein extraction result.
According to one aspect of the invention, the multi-scale residual fusion network comprises a discrete wavelet transform, a multi-scale residual fusion module and an inverse discrete wavelet transform, and the discrete wavelet transform is used to decompose the rock vein remote sensing image, where the formula is,
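As a concrete illustration of the decomposition step, a single-level 2-D Haar DWT splits the image into the four components named above. The Haar basis and the helper name `haar_dwt2` are assumptions for this sketch, since the source text does not specify its wavelet:

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar DWT (assumed basis; the patent does not name
    its wavelet). Splits an image into the approximate (LL), horizontal
    (LH), vertical (HL) and diagonal (HH) components."""
    img = img.astype(float)
    a = img[0::2, 0::2]   # top-left pixel of each 2x2 block
    b = img[0::2, 1::2]   # top-right
    c = img[1::2, 0::2]   # bottom-left
    d = img[1::2, 1::2]   # bottom-right
    ll = (a + b + c + d) / 2.0   # approximate component
    lh = (a - b + c - d) / 2.0   # horizontal detail
    hl = (a + b - c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return ll, lh, hl, hh
```

With this orthonormal scaling, each component has half the spatial resolution of the input and the total signal energy is preserved across the four components.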
According to one aspect of the invention, the multi-scale residual fusion module is used to fuse the approximate component with the diagonal, horizontal and vertical components, where the formula is,
wherein, representing the first output feature after passing through the first multi-scale residual fusion module;
representing the second output feature after passing through the first multi-scale residual fusion module;
representing the third output feature after passing through the second multi-scale residual fusion module;
representing the fourth output feature after passing through the second multi-scale residual fusion module;
According to one aspect of the invention, the multi-scale residual fusion module is used for cross fusion, where the formula is,
wherein, representing the fifth output feature after passing through the third multi-scale residual fusion module;
representing the sixth output feature after passing through the third multi-scale residual fusion module;
representing the seventh output feature after passing through the fourth multi-scale residual fusion module;
representing the eighth output feature after passing through the fourth multi-scale residual fusion module;
According to one aspect of the invention, the inverse discrete wavelet transform is used to invert the fused components and obtain the characteristic image, where the formula is,
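The inverse step can be sketched under the assumption of a single-level Haar basis (the source text does not name its wavelet); `haar_dwt2` and `haar_idwt2` are hypothetical helpers, and the inverse exactly reconstructs the image from the approximate, horizontal, vertical and diagonal components:

```python
import numpy as np

def haar_dwt2(img):
    """Assumed single-level 2-D Haar analysis (basis not given in the patent)."""
    img = img.astype(float)
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return ((a + b + c + d) / 2, (a - b + c - d) / 2,
            (a + b - c - d) / 2, (a - b - c + d) / 2)

def haar_idwt2(ll, lh, hl, hh):
    """Inverse single-level 2-D Haar transform: perfectly reconstructs the
    image the four components were computed from."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2   # top-left of each 2x2 block
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2   # top-right
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2   # bottom-left
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2   # bottom-right
    return out
```

Because the transform is orthonormal, the round trip `haar_idwt2(*haar_dwt2(img))` returns the original image up to floating-point error.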
According to one aspect of the invention, fusing the approximate component with the diagonal, horizontal and vertical components using the multi-scale residual fusion module further comprises: the multi-scale residual fusion module has two computation branches in total; each computation branch comprises four computation operation layers, and each computation operation layer consists of a convolution, a switchable normalization operation and a parametric rectified linear unit; the formula for computation using the first computation operation layer is,
representing an output of a first computational operation layer through a first computational branch;
representing an output of the first computational operation layer through the second computational branch;
further processing is performed using the second computation operation layer of the two computation branches, with the formula,
representing an output of a second computational operation layer through the first computational branch;
representing an output of a second computational operation layer through a second computational branch;
further processing is performed using the third computation operation layer of the two computation branches, with the formula,
representing an output of a third computational operation layer through the first computational branch;
representing an output of a third computational operation layer through the second computational branch;
further processing is performed using the fourth computation operation layer of the two computation branches to obtain the fused result, with the formula,
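A minimal sketch of the module structure described above, with hedged assumptions: the 3x3 kernel size, single-channel feature maps, a plain standardization in place of the unspecified switchable normalization, and summing the branch outputs as the final fusion rule are all illustrative choices not stated in the source text:

```python
import numpy as np

def conv3x3(x, w):
    """Naive 3x3 'same' convolution on one single-channel map (illustrative only)."""
    p = np.pad(x, 1)
    return np.array([[np.sum(p[i:i + 3, j:j + 3] * w)
                      for j in range(x.shape[1])]
                     for i in range(x.shape[0])])

def op_layer(x, w, alpha=0.25):
    """One computation operation layer: convolution, then a plain
    standardization (standing in for the unspecified switchable
    normalization), then a parametric ReLU with negative slope alpha."""
    y = conv3x3(x, w)
    y = (y - y.mean()) / (y.std() + 1e-6)
    return np.where(y > 0.0, y, alpha * y)

def fusion_module(x1, x2, seed=0):
    """Two computing branches of four operation layers each; summing the
    branch outputs at the end is an assumed fusion rule."""
    rng = np.random.default_rng(seed)
    b1, b2 = x1, x2
    for _ in range(4):
        b1 = op_layer(b1, 0.1 * rng.standard_normal((3, 3)))
        b2 = op_layer(b2, 0.1 * rng.standard_normal((3, 3)))
    return b1 + b2
```

The sketch keeps the input spatial size through all four layers, so the fused output has the same shape as each branch input.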
According to one aspect of the invention, the nested encoder network is trained using a cross-entropy loss, where the formula is,
representing the binary rock vein region label corresponding to the input rock vein remote sensing image.
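The training objective can be illustrated with a pixel-wise binary cross-entropy between the predicted vein probability map and the binary rock vein region label; this scalar form is a standard cross-entropy sketch, not the exact formula of the source text (which is not reproduced here):

```python
import numpy as np

def binary_cross_entropy(pred, label, eps=1e-7):
    """Pixel-wise binary cross-entropy between the predicted rock vein
    probability map and the binary rock vein region label."""
    pred = np.clip(pred, eps, 1.0 - eps)   # avoid log(0)
    return float(-np.mean(label * np.log(pred)
                          + (1.0 - label) * np.log(1.0 - pred)))
```

A perfect prediction drives the loss toward zero, while an uninformative prediction of 0.5 everywhere yields ln 2 per pixel.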
To achieve the above object, the present invention provides a remote sensing image rock vein extraction system based on a multi-scale residual fusion network, which is characterized by comprising:
the rock vein remote sensing image acquisition module, used for: acquiring a rock vein remote sensing image;
the characteristic image acquisition module, used for: obtaining a characteristic image according to the multi-scale residual fusion network;
the rock vein extraction result acquisition module, used for: inputting the characteristic image into the remote sensing image rock vein extraction network to obtain a rock vein extraction result.
To achieve the above object, the present invention provides an electronic device comprising a processor, a memory, and a computer program stored in the memory and runnable on the processor, wherein the computer program, when executed by the processor, implements the above remote sensing image rock vein extraction method based on a multi-scale residual fusion network.
To achieve the above object, the present invention provides a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the above remote sensing image rock vein extraction method based on a multi-scale residual fusion network.
Based on the above, the invention has the following beneficial effects: the trained multi-scale residual fusion network combined with wavelet features automatically extracts the rock vein regions in remote sensing images without manual intervention; by combining wavelet features, the network fully accounts for remote sensing image components at different frequencies, and through the multi-scale residual fusion module, image features at different frequencies and scales are fully extracted and deeply fused, achieving accurate extraction of rock vein regions from remote sensing images.
Drawings
FIG. 1 schematically shows a flow chart of a remote sensing image rock vein extraction method based on a multi-scale residual fusion network according to the invention;
FIG. 2 schematically shows a flow chart of a remote sensing image rock vein extraction system based on a multi-scale residual fusion network according to the invention.
Detailed Description
The present disclosure will now be discussed with reference to exemplary embodiments. It should be understood that the embodiments discussed are merely intended to enable those of ordinary skill in the art to better understand and thus practice the present disclosure, and do not imply any limitation on the scope of the present disclosure.
As used herein, the term "comprising" and its variants are open-ended terms meaning "including but not limited to". The term "based on" is to be construed as "based at least in part on", and the terms "one embodiment" and "an embodiment" mean "at least one embodiment".
Fig. 1 schematically shows a flowchart of the remote sensing image rock vein extraction method based on a multi-scale residual fusion network according to the present invention. As shown in fig. 1, the method includes:
acquiring a remote sensing image of the rock vein;
obtaining a characteristic image according to a multi-scale residual fusion network;
and inputting the characteristic image into a remote sensing image rock vein extraction network to obtain a rock vein extraction result.
According to one embodiment of the invention, the multi-scale residual fusion network comprises a discrete wavelet transform, a multi-scale residual fusion module and an inverse discrete wavelet transform, and the discrete wavelet transform is used to decompose the rock vein remote sensing image, where the formula is,
According to one embodiment of the present invention, the multi-scale residual fusion module is used to fuse the approximate component with the diagonal, horizontal and vertical components, where the formula is,
wherein, representing the first output feature after passing through the first multi-scale residual fusion module;
representing the second output feature after passing through the first multi-scale residual fusion module;
representing the third output feature after passing through the second multi-scale residual fusion module;
representing the fourth output feature after passing through the second multi-scale residual fusion module;
According to one embodiment of the present invention, the multi-scale residual fusion module is used for cross fusion, where the formula is,
wherein, representing the fifth output feature after passing through the third multi-scale residual fusion module;
representing the sixth output feature after passing through the third multi-scale residual fusion module;
representing the seventh output feature after passing through the fourth multi-scale residual fusion module;
representing the eighth output feature after passing through the fourth multi-scale residual fusion module;
According to one embodiment of the invention, the inverse discrete wavelet transform is used to invert the fused components and obtain the characteristic image, where the formula is,
According to one embodiment of the present invention, fusing the approximate component with the diagonal, horizontal and vertical components using the multi-scale residual fusion module further comprises: the multi-scale residual fusion module has two computation branches in total; each computation branch comprises four computation operation layers, and each computation operation layer consists of a convolution, a switchable normalization operation and a parametric rectified linear unit; the formula for computation using the first computation operation layer is,
representing the output of the first computation operation layer through the first computation branch;
representing an output of the first computational operation layer through the second computational branch;
further processing is performed using the second computation operation layer of the two computation branches, with the formula,
representing an output of a second computational operation layer through the first computational branch;
representing an output of a second computational operation layer through a second computational branch;
further processing is performed using the third computation operation layer of the two computation branches, with the formula,
representing an output of a third computational operation layer through the first computational branch;
representing an output of a third computational operation layer through the second computational branch;
further processing is performed using the fourth computation operation layer of the two computation branches to obtain the fused result, with the formula,
According to one embodiment of the invention, the nested encoder network is trained using a cross-entropy loss, where the formula is,
representing the binary rock vein region label corresponding to the input rock vein remote sensing image.
Furthermore, to achieve the above object, the present invention provides a remote sensing image rock vein extraction system based on a multi-scale residual fusion network. Fig. 2 schematically shows a flowchart of this system; as shown in fig. 2, the remote sensing image rock vein extraction system based on a multi-scale residual fusion network according to the present invention includes:
the rock vein remote sensing image acquisition module, used for: acquiring a rock vein remote sensing image;
the characteristic image acquisition module, used for: obtaining a characteristic image according to the multi-scale residual fusion network;
the rock vein extraction result acquisition module, used for: inputting the characteristic image into the remote sensing image rock vein extraction network to obtain a rock vein extraction result.
According to one embodiment of the invention, the multi-scale residual fusion network comprises a discrete wavelet transform, a multi-scale residual fusion module and an inverse discrete wavelet transform, and the discrete wavelet transform is used to decompose the rock vein remote sensing image, where the formula is,
According to one embodiment of the present invention, the multi-scale residual fusion module is used to fuse the approximate component with the diagonal, horizontal and vertical components, where the formula is,
wherein, representing the first output feature after passing through the first multi-scale residual fusion module;
representing the second output feature after passing through the first multi-scale residual fusion module;
representing the third output feature after passing through the second multi-scale residual fusion module;
representing the fourth output feature after passing through the second multi-scale residual fusion module;
According to one embodiment of the present invention, the multi-scale residual fusion module is used for cross fusion, where the formula is,
wherein, representing the fifth output feature after passing through the third multi-scale residual fusion module;
representing the sixth output feature after passing through the third multi-scale residual fusion module;
representing the seventh output feature after passing through the fourth multi-scale residual fusion module;
representing the eighth output feature after passing through the fourth multi-scale residual fusion module;
According to one embodiment of the invention, the inverse discrete wavelet transform is used to invert the fused components and obtain the characteristic image, where the formula is,
According to one embodiment of the present invention, fusing the approximate component with the diagonal, horizontal and vertical components using the multi-scale residual fusion module further comprises: the multi-scale residual fusion module has two computation branches in total; each computation branch comprises four computation operation layers, and each computation operation layer consists of a convolution, a switchable normalization operation and a parametric rectified linear unit; the formula for computation using the first computation operation layer is,
representing the output of the first computation operation layer through the first computation branch;
representing the output of the first computation operation layer through the second computation branch;
further processing is performed using the second computation operation layer of the two computation branches, with the formula,
representing an output of a second computational operation layer through the first computational branch;
representing an output of a second computational operation layer through a second computational branch;
further processing is performed using the third computation operation layer of the two computation branches, with the formula,
representing an output of a third computational operation layer through the first computational branch;
representing an output of a third computational operation layer through the second computational branch;
further processing is performed using the fourth computation operation layer of the two computation branches to obtain the fused result, with the formula,
According to one embodiment of the invention, the nested encoder network is trained using a cross-entropy loss, where the formula is,
representing the binary rock vein region label corresponding to the input rock vein remote sensing image.
To achieve the above object, the present invention also provides an electronic device comprising: a processor, a memory, and a computer program stored in the memory and runnable on the processor, wherein the computer program, when executed by the processor, implements the above remote sensing image rock vein extraction method based on a multi-scale residual fusion network.
To achieve the above object, the present invention further provides a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the above remote sensing image rock vein extraction method based on a multi-scale residual fusion network.
Based on the above, the invention has the advantages that the trained multi-scale residual fusion network combined with wavelet features automatically extracts the rock vein regions in remote sensing images without manual intervention; by combining wavelet features, the network fully accounts for remote sensing image components at different frequencies, and through the multi-scale residual fusion module, image features at different frequencies and scales are fully extracted and deeply fused, achieving accurate extraction of rock vein regions from remote sensing images.
Those of ordinary skill in the art will appreciate that the modules and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working procedures of the apparatus and device described above may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, and for example, the division of the modules is merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or modules, which may be in electrical, mechanical, or other forms.
The modules described as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network modules. Some or all of the modules can be selected according to actual needs to achieve the purpose of the embodiment of the invention.
In addition, each functional module in the embodiment of the present invention may be integrated in one processing module, or each module may exist alone physically, or two or more modules may be integrated in one module.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium and comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the various embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The foregoing description covers only the preferred embodiments of the present application and illustrates the principles of the technology employed. It will be appreciated by persons skilled in the art that the scope of the invention referred to in this application is not limited to the specific combinations of features described above, and is intended to cover other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the invention, for example embodiments formed by replacing the above features with technical features having similar functions disclosed in the present application.
It should be understood that, the sequence numbers of the steps in the summary and the embodiments of the present invention do not necessarily mean the order of execution, and the execution order of the processes should be determined by the functions and the internal logic, and should not be construed as limiting the implementation process of the embodiments of the present invention.
Claims (10)
1. A remote sensing image rock vein extraction method based on a multi-scale residual fusion network is characterized by comprising the following steps:
acquiring a remote sensing image of the rock vein;
obtaining a characteristic image according to a multi-scale residual fusion network;
and inputting the characteristic image into a remote sensing image rock vein extraction network to obtain a rock vein extraction result.
2. The remote sensing image rock vein extraction method based on a multi-scale residual fusion network according to claim 1, wherein the multi-scale residual fusion network comprises a discrete wavelet transform, a multi-scale residual fusion module and an inverse discrete wavelet transform, and the rock vein remote sensing image is decomposed using the discrete wavelet transform, where the formula is,
3. The remote sensing image rock vein extraction method based on a multi-scale residual fusion network according to claim 2, wherein the multi-scale residual fusion module is used to fuse the approximate component with the diagonal, horizontal and vertical components, where the formula is,
wherein, representing the first output feature after passing through the first multi-scale residual fusion module;
representing the second output feature after passing through the first multi-scale residual fusion module;
representing the third output feature after passing through the second multi-scale residual fusion module;
representing the fourth output feature after passing through the second multi-scale residual fusion module;
4. The remote sensing image rock vein extraction method based on a multi-scale residual fusion network according to claim 3, wherein the multi-scale residual fusion module is used for cross fusion, where the formula is,
wherein, representing the fifth output feature after passing through the third multi-scale residual fusion module;
representing the sixth output feature after passing through the third multi-scale residual fusion module;
representing the seventh output feature after passing through the fourth multi-scale residual fusion module;
representing the eighth output feature after passing through the fourth multi-scale residual fusion module;
5. The remote sensing image rock vein extraction method based on a multi-scale residual fusion network according to claim 4, wherein the inverse discrete wavelet transform is used to invert the fused components and obtain the characteristic image, where the formula is,
6. The remote sensing image rock vein extraction method based on a multi-scale residual fusion network according to claim 5, wherein fusing the approximate component with the diagonal, horizontal and vertical components using the multi-scale residual fusion module further comprises: the multi-scale residual fusion module has two computation branches in total; each computation branch comprises four computation operation layers, and each computation operation layer consists of a convolution, a switchable normalization operation and a parametric rectified linear unit; the formula for computation using the first computation operation layer is,
wherein the two symbols represent the outputs of the first computational operation layer of the first computational branch and of the second computational branch, respectively;
the second computational operation layer of the two computational branches then performs further processing according to the following formula,
wherein the two symbols represent the outputs of the second computational operation layer of the first computational branch and of the second computational branch, respectively;
the third computational operation layer of the two computational branches then performs further processing according to the following formula,
wherein the two symbols represent the outputs of the third computational operation layer of the first computational branch and of the second computational branch, respectively;
the fourth computational operation layer of the two computational branches performs further processing to obtain the fused result according to the following formula,
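The structure of claim 6 can be sketched concretely. The code below is a minimal illustration, with several stated assumptions: the convolution is a single-channel 3x3 "same" convolution, the switchable normalization is stood in for by plain per-map standardization (switchable normalization learns to mix batch/layer/instance statistics, which is omitted here), and the final fusion of the two branches is taken to be an element-wise sum, which the claim does not specify:

```python
import numpy as np

def conv3x3(x, k):
    # 'Same' 3x3 convolution of a single-channel map with zero padding.
    p = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(p[i:i + 3, j:j + 3] * k)
    return out

def op_layer(x, k, alpha=0.25):
    # One computational operation layer: convolution, a normalization step
    # (stand-in for switchable normalization), and a PReLU activation.
    y = conv3x3(x, k)
    y = (y - y.mean()) / (y.std() + 1e-5)
    return np.where(y > 0, y, alpha * y)  # parametric rectified linear unit

rng = np.random.default_rng(0)
kernels = [rng.standard_normal((3, 3)) for _ in range(4)]
branch_a = rng.standard_normal((8, 8))  # e.g. approximate + diagonal input
branch_b = rng.standard_normal((8, 8))  # e.g. horizontal + vertical input
for k in kernels:                       # four operation layers per branch
    branch_a, branch_b = op_layer(branch_a, k), op_layer(branch_b, k)
fused = branch_a + branch_b             # assumed fusion rule: element-wise sum
```

A practical implementation would use a deep-learning framework with learned multi-channel kernels; the sketch only shows the layer ordering the claim describes.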
7. The remote sensing image rock vein extraction method based on the multi-scale residual fusion network according to claim 6, wherein the nested encoder network is trained using a cross entropy loss according to the following formula,
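The loss formula itself is not reproduced in this text. For a binary rock-vein/background segmentation task the standard pixel-wise binary cross entropy takes the following form (a generic sketch, not necessarily the exact variant claimed):

```python
import numpy as np

def cross_entropy(pred, target, eps=1e-7):
    # Pixel-wise binary cross entropy between predicted rock-vein
    # probabilities and the ground-truth mask; eps guards log(0).
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

target = np.array([[1, 0], [0, 1]], dtype=float)  # toy ground-truth mask
good = np.array([[0.9, 0.1], [0.2, 0.8]])         # confident, mostly correct
bad = np.array([[0.3, 0.7], [0.6, 0.4]])          # mostly wrong
assert cross_entropy(good, target) < cross_entropy(bad, target)
```

Minimizing this loss drives the predicted probabilities toward the ground-truth mask, which is what training the nested encoder network amounts to.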
8. The remote sensing image rock vein extraction system based on the multi-scale residual error fusion network is characterized by comprising the following components:
the rock vein remote sensing image acquisition module, used for acquiring a remote sensing image of the rock vein;
the feature image acquisition module, used for obtaining a feature image through the multi-scale residual fusion network;
the rock vein extraction result acquisition module, used for inputting the feature image into the remote sensing image rock vein extraction network to obtain a rock vein extraction result.
9. An electronic device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the remote sensing image rock vein extraction method based on the multi-scale residual fusion network according to any one of claims 1 to 7.
10. A computer readable storage medium, wherein a computer program is stored on the computer readable storage medium, and when the computer program is executed by a processor, the remote sensing image rock vein extraction method based on the multi-scale residual fusion network according to any one of claims 1 to 7 is implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310449575.0A CN116168302B (en) | 2023-04-25 | 2023-04-25 | Remote sensing image rock vein extraction method based on multi-scale residual error fusion network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116168302A true CN116168302A (en) | 2023-05-26 |
CN116168302B CN116168302B (en) | 2023-07-14 |
Family
ID=86416753
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310449575.0A Active CN116168302B (en) | 2023-04-25 | 2023-04-25 | Remote sensing image rock vein extraction method based on multi-scale residual error fusion network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116168302B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140222403A1 (en) * | 2013-02-07 | 2014-08-07 | Schlumberger Technology Corporation | Geologic model via implicit function |
CN112784806A (en) * | 2021-02-04 | 2021-05-11 | 中国地质科学院矿产资源研究所 | Lithium-containing pegmatite vein extraction method based on full convolution neural network |
CN113177456A (en) * | 2021-04-23 | 2021-07-27 | 西安电子科技大学 | Remote sensing target detection method based on single-stage full convolution network and multi-feature fusion |
CN113379618A (en) * | 2021-05-06 | 2021-09-10 | 航天东方红卫星有限公司 | Optical remote sensing image cloud removing method based on residual dense connection and feature fusion |
CN113625363A (en) * | 2021-08-18 | 2021-11-09 | 中国地质科学院矿产资源研究所 | Mineral exploration method and device for pegmatite-type lithium ore, computer equipment and medium |
CN113780296A (en) * | 2021-09-13 | 2021-12-10 | 山东大学 | Remote sensing image semantic segmentation method and system based on multi-scale information fusion |
CN113850824A (en) * | 2021-09-27 | 2021-12-28 | 太原理工大学 | Remote sensing image road network extraction method based on multi-scale feature fusion |
Also Published As
Publication number | Publication date |
---|---|
CN116168302B (en) | 2023-07-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108509915B (en) | Method and device for generating face recognition model | |
CN109523470B (en) | Depth image super-resolution reconstruction method and system | |
CN103559697A (en) | Scrap paper lengthwise cutting splicing and recovering algorithm based on FFT | |
CN114022900A (en) | Training method, detection method, device, equipment and medium for detection model | |
CN115546076B (en) | Remote sensing image thin cloud removing method based on convolutional network | |
CN113139618B (en) | Robustness-enhanced classification method and device based on integrated defense | |
CN114049491A (en) | Fingerprint segmentation model training method, fingerprint segmentation device, fingerprint segmentation equipment and fingerprint segmentation medium | |
CN116109829B (en) | Coral reef water area image segmentation method based on fusion network | |
CN116168302B (en) | Remote sensing image rock vein extraction method based on multi-scale residual error fusion network | |
Kekre et al. | SAR image segmentation using vector quantization technique on entropy images | |
Kekre et al. | Sectorization of Full Kekre’s Wavelet Transform for Feature extraction of Color Images | |
CN113688263B (en) | Method, computing device, and storage medium for searching for image | |
CN115083006A (en) | Iris recognition model training method, iris recognition method and iris recognition device | |
CN111626373B (en) | Multi-scale widening residual error network, small target recognition and detection network and optimization method thereof | |
CN112862700B (en) | Hyperspectral remote sensing image blind denoising method based on noise attention | |
CN115861081B (en) | Image super-resolution reconstruction method based on ladder type multi-stage wavelet network | |
CN116660992B (en) | Seismic signal processing method based on multi-feature fusion | |
CN115470873B (en) | Radar radiation source identification method and system | |
CN117496162B (en) | Method, device and medium for removing thin cloud of infrared satellite remote sensing image | |
Singh et al. | Forged Image Identification with Digital Image Forensic Tools | |
Gandhi et al. | Reverse image search using discrete wavelet transform, local histogram and canny edge detector | |
Khandakar et al. | Complexity analysis and accuracy of image recovery based on signal transformation algorithms | |
Sang et al. | A new approach for texture classification in CBIR | |
CN115564650A (en) | Text image super-resolution method and system, electronic equipment and storage medium | |
CN115294578A (en) | Text information extraction method, device, equipment and medium based on artificial intelligence |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||