CN111563866B - Multisource remote sensing image fusion method


Info

Publication number
CN111563866B
Authority
CN
China
Prior art keywords
image
component
fusion
multispectral
filtering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010378705.2A
Other languages
Chinese (zh)
Other versions
CN111563866A (en)
Inventor
李晓玲 (Li Xiaoling)
聂祥飞 (Nie Xiangfei)
黄海波 (Huang Haibo)
张月 (Zhang Yue)
冯丽源 (Feng Liyuan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Three Gorges University
Original Assignee
Chongqing Three Gorges University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Three Gorges University
Priority to CN202010378705.2A
Publication of CN111563866A
Application granted
Publication of CN111563866B
Active legal status (current)
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10032 - Satellite or aerial image; Remote sensing
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of image processing, and in particular to a multi-source remote sensing image fusion method comprising the following steps: first, the IHS transform is applied to the up-sampled multispectral image to obtain its luminance I component, hue H component and saturation S component; the I component is filtered with guided filtering, and an adaptive fractional-order differential is constructed to enhance the edge detail information in the panchromatic image; wavelet transforms are then applied respectively to the filtered multispectral I component and the enhanced panchromatic image to obtain their high-frequency and low-frequency components, the high-frequency components being fused by the maximum-absolute-value rule and the low-frequency components by the weighted-average rule; the result of the inverse wavelet transform is taken as the new I component, and the fused image is finally obtained by the inverse IHS transform. By combining guided filtering with fractional-order differentiation to fuse the panchromatic and multispectral images, the method effectively suppresses spectral distortion in the fusion result and reduces the loss of spatial detail information.

Description

Multisource remote sensing image fusion method
Technical Field
The invention relates to the technical field of image processing, in particular to a multi-source remote sensing image fusion method.
Background
Multi-source remote sensing image fusion is an image processing technique that superimposes the complementary information of two or more remote sensing images of the same scene, acquired by different sensors, to obtain a composite image with more accurate and complete information. Image fusion is not only an important component of remote sensing data processing, but also has broad application in fields such as environmental monitoring, urban planning and military reconnaissance. In recent years, with the continuous development of signal processing technology, researchers have carried out extensive research on image fusion methods.
At present, multi-source remote sensing image fusion is mainly divided into three levels: pixel-level fusion, feature-level fusion and decision-level fusion. Compared with feature-level and decision-level fusion, pixel-level fusion performs better in terms of accuracy and timeliness. Pixel-level fusion methods for multi-source remote sensing images mainly fall into three categories: component-substitution methods, multi-resolution-analysis methods and model-based methods. Component-substitution methods have low complexity and are easy to implement, and they preserve the spatial detail of the fusion result well, but the spatial transformation involved in processing often causes spectral distortion. Multi-resolution-analysis methods are less prone to spectral distortion, but ringing arises during fusion, so the fusion result loses spatial features. Model-based methods are not prone to spectral distortion or to loss of spatial features; however, they involve higher complexity in the fusion process. Although image fusion methods continue to emerge, many problems and difficulties remain, mainly in the following respects: 1) the spatial information of the fused image is easily lost; 2) the spectrum of the fused image is prone to distortion.
Therefore, overcoming the spatial-information loss and spectral distortion that arise in the fusion of multi-source remote sensing images is one of the important tasks for those skilled in the art.
Disclosure of Invention
The technical problem the invention aims to solve is as follows: to provide a multi-source remote sensing image fusion method that effectively suppresses spectral distortion in the fusion result and reduces the loss of spatial detail information.
The technical scheme adopted by the invention to achieve this purpose comprises the following steps:
step one: acquiring a multispectral image and a panchromatic image of the same ground object target, and upsampling the multispectral image by using a bicubic interpolation method to ensure that the size of the multispectral image is consistent with that of the panchromatic image;
step two: performing color space transformation on the multispectral image by IHS transformation, extracting the luminance I component of the multispectral image for subsequent processing, and retaining its hue H component and saturation S component for the later inverse IHS transformation;
step three: constructing an adaptive fractional-order differential to enhance the edge details of the panchromatic image and retain feature contour information, while filtering the luminance I component of the multispectral image with guided filtering;
step four: performing wavelet transforms respectively on the fractionally differentiated panchromatic image and the guided-filtered multispectral luminance I component to obtain their high-frequency and low-frequency components, the high-frequency components being fused by the maximum-absolute-value rule and the low-frequency components by the weighted-average rule;
step five: obtaining a result image after wavelet inverse transformation through wavelet reconstruction;
step six: taking the result of the inverse wavelet transform as the new component I_new and performing the inverse IHS transform with the H component and the S component to obtain the fused image.
The invention has the following advantages and beneficial effects:
1. In the prior art, the luminance component of the up-sampled multispectral image is processed directly, which easily produces blocking artifacts. The invention introduces guided filtering, with its structure-transfer and edge-preserving smoothing properties, to suppress blocking artifacts during image fusion. At the same time, the spatial texture information and local detail information of the multispectral I component are enhanced, which helps improve the visual effect of the fused image.
2. In the prior art, histogram matching is performed directly between the luminance component and the panchromatic image, which reduces the gray levels and loses some detail. The invention introduces fractional-order differentiation into image fusion and, in particular, derives the differential order from the statistical characteristics of the image instead of a manually fixed value, realizing adaptive fractional-order differentiation. This not only effectively preserves the flat base-layer regions of the image but also enhances its edge details.
Drawings
FIG. 1 is a flow chart of image fusion in accordance with the method of the present invention.
FIG. 2 shows the test images used in the experiments and compares the fusion results of the method of the present invention with those of other methods.
Wherein, fig. 2 (a) is the panchromatic image, fig. 2 (b) is the multispectral image, fig. 2 (c) is the fused image obtained by the IHS transform method, fig. 2 (d) by the Brovey method, fig. 2 (e) by the PCA method, fig. 2 (f) by the DWT method, fig. 2 (g) by the ATWT-M3 method, fig. 2 (h) by the ATWT method, fig. 2 (i) by the AWLP method, fig. 2 (j) by the GS method, fig. 2 (k) by the HPF method, fig. 2 (l) by the MTF-GLP method, and fig. 2 (m) by the method of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples.
As shown in fig. 1, a multi-source remote sensing image fusion method includes the following steps:
Step one: a multispectral image and a panchromatic image of the same ground-object target are acquired, and the multispectral image is up-sampled by bicubic interpolation so that its size is consistent with that of the panchromatic image.
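For illustration, a minimal sketch of this step in Python, assuming OpenCV is available; the patent specifies bicubic interpolation but no particular implementation:

```python
import cv2

def upsample_to_pan(ms, pan):
    """Up-sample the multispectral image `ms` to the panchromatic image's
    size using bicubic interpolation (cv2.INTER_CUBIC)."""
    h, w = pan.shape[:2]
    # cv2.resize takes the target size as (width, height)
    return cv2.resize(ms, (w, h), interpolation=cv2.INTER_CUBIC)
```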
Step two: performing color space transformation on the multispectral image by adopting IHS transformation, extracting a brightness I component of the multispectral image for further processing, and reserving a chromaticity H component and a saturation S component of the multispectral image for subsequent IHS inverse transformation, wherein the calculation formulas of the I component, the H component and the S component are respectively as follows:
$$\begin{bmatrix} I \\ s_{1} \\ s_{2} \end{bmatrix} = \begin{bmatrix} 1/3 & 1/3 & 1/3 \\ -\sqrt{2}/6 & -\sqrt{2}/6 & 2\sqrt{2}/6 \\ 1/\sqrt{2} & -1/\sqrt{2} & 0 \end{bmatrix}\begin{bmatrix} R \\ G \\ B \end{bmatrix} \quad (1)$$

$$H = \tan^{-1}\!\left(s_{1}/s_{2}\right) \quad (2)$$

$$S = \sqrt{s_{1}^{2}+s_{2}^{2}} \quad (3)$$
where R, G and B are the red, green and blue bands of the multispectral image, respectively, and s_1 and s_2 are the intermediate variables of the transform.
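For illustration, a minimal Python/NumPy sketch of the forward transform of equations (1) to (3); np.arctan2 is used in place of tan^-1 so that pixels with s_2 = 0 do not divide by zero:

```python
import numpy as np

def ihs_forward(rgb):
    """Forward IHS transform of equations (1)-(3).
    rgb: float array of shape (H, W, 3) holding the R, G, B bands."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0                        # luminance I
    s1 = (np.sqrt(2) / 6.0) * (2.0 * b - r - g)  # intermediate variable s1
    s2 = (r - g) / np.sqrt(2)                    # intermediate variable s2
    h = np.arctan2(s1, s2)                       # hue H = tan^-1(s1 / s2)
    s = np.sqrt(s1**2 + s2**2)                   # saturation S
    return i, h, s
```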
Step three: the self-adaptive fractional differentiation is constructed for enhancing the edge details of the full-color image and retaining the feature contour information, and the specific modes are as follows:
assuming that the size of the full-color image f (i, j) is mxn, the spatial frequency calculation formula is:
Figure BDA0002481223390000041
Figure BDA0002481223390000042
Figure BDA0002481223390000043
where RF and CF denote the row frequency and column frequency of the panchromatic image f(i, j), respectively. The larger the spatial frequency, the richer the spatial information of the image and the stronger its sense of layering. Further, the average gradient of f(i, j) is calculated as:
$$AG = \frac{1}{(M-1)(N-1)}\sum_{i=1}^{M-1}\sum_{j=1}^{N-1}\sqrt{\frac{1}{2}\left[\left(\frac{\partial f}{\partial i}\right)^{2}+\left(\frac{\partial f}{\partial j}\right)^{2}\right]} \quad (7)$$

The larger the average gradient, the more prominent the edge and texture details of the image and the higher its clarity. The spatial frequency and the average gradient of the panchromatic image are then normalized with an arctangent-type nonlinear normalization function, yielding the normalized values $\overline{SF}$ and $\overline{AG}$ (equations (8) and (9), which appear only as images in the original publication).

Considering that the spatial frequency and the average gradient of the image influence the differential order equally, the two are averaged with equal weights:

$$Y = \frac{\overline{SF}+\overline{AG}}{2} \quad (10)$$

Because the Tanh function is monotonically increasing over the real numbers and exhibits a nonlinear growth trend, which matches how the differential order should vary with the image statistics, it is adopted to construct the order function:

$$f(Y) = \tanh(Y) \quad (11)$$

When the fractional differential order v ∈ [0.5, 0.7], the texture details of the image are highlighted and its contour information is fully retained. The invention therefore applies a correction to f(Y), with β = 0.5 and α = 0.7, to obtain the adaptive fractional differential order:

$$v = \beta + (\alpha-\beta)\,f(Y) \quad (12)$$
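A minimal sketch of the order computation under stated assumptions: because equations (8) and (9) are not reproduced, the normalization of SF and AG is taken here to be the bounded mapping (2/pi)·arctan(·); the rest follows equations (4) to (7) and (10) to (12):

```python
import numpy as np

def adaptive_order(pan, alpha=0.7, beta=0.5):
    """Adaptive fractional differential order v computed from the
    panchromatic image; the arctangent normalization is an assumption."""
    f = pan.astype(np.float64)
    M, N = f.shape
    rf = np.sqrt(np.sum((f[:, 1:] - f[:, :-1]) ** 2) / (M * N))  # row frequency RF
    cf = np.sqrt(np.sum((f[1:, :] - f[:-1, :]) ** 2) / (M * N))  # column frequency CF
    sf = np.hypot(rf, cf)                                        # spatial frequency SF
    di = f[1:, :-1] - f[:-1, :-1]                                # difference along i
    dj = f[:-1, 1:] - f[:-1, :-1]                                # difference along j
    ag = np.mean(np.sqrt((di**2 + dj**2) / 2.0))                 # average gradient AG
    y = ((2 / np.pi) * np.arctan(sf) + (2 / np.pi) * np.arctan(ag)) / 2.0
    return beta + (alpha - beta) * np.tanh(y)                    # v in [0.5, 0.7)
```

The patent does not reproduce the differential mask itself; a Grünwald-Letnikov approximation of order v is one common way to realize the fractional differential on the panchromatic image.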
Meanwhile, the luminance I component of the multispectral image is filtered with guided filtering, in the following manner:
first, the radius r=7 of the guided filter is set, and the regularization parameter epsilon=10 -6 . Calculating linear coefficient a of guided filtering (k,l) And b (k,l) The values of (2) are respectively:
$$a_{(k,l)} = \frac{\dfrac{1}{|\omega|}\sum\limits_{(i,j)\in\omega_{(k,l)}} G(i,j)\,I(i,j)-\mu_{(k,l)}\,\bar{I}_{(k,l)}}{\sigma_{(k,l)}^{2}+\varepsilon} \quad (13)$$

$$b_{(k,l)} = \bar{I}_{(k,l)}-a_{(k,l)}\,\mu_{(k,l)} \quad (14)$$

where |ω| denotes the number of pixels in the rectangular local window ω_(k,l) centered at pixel (k, l) with radius r; σ²_(k,l) and μ_(k,l) are the variance and mean of the guidance-image pixels G(i, j) contained in ω_(k,l) (when the I component serves as its own guidance, G = I); Ī_(k,l) is the mean of the multispectral I-component pixels contained in ω_(k,l); and ε is the regularization parameter. Since a pixel (i, j) may be covered simultaneously by multiple local windows ω_(k,l) as the filter slides over the image, the coefficients a_(k,l) and b_(k,l) are averaged:

$$\bar{a}_{(i,j)} = \frac{1}{|\omega|}\sum_{(k,l)\in\omega_{(i,j)}} a_{(k,l)} \quad (15)$$

$$\bar{b}_{(i,j)} = \frac{1}{|\omega|}\sum_{(k,l)\in\omega_{(i,j)}} b_{(k,l)} \quad (16)$$

Substituting $\bar{a}_{(i,j)}$ and $\bar{b}_{(i,j)}$ into the linear model of guided filtering gives the filtered output image:

$$q(i,j) = \bar{a}_{(i,j)}\,G(i,j)+\bar{b}_{(i,j)} \quad (17)$$
The guided-filtering result is taken as the base layer of the image. Subtracting the base layer from the multispectral I component yields the detail layer; the gray-level range of the detail layer is linearly stretched, and the detail layer is finally added back to the base layer to obtain the texture-enhanced image.
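A minimal sketch of the filtering and texture enhancement, assuming the I component serves as its own guidance image (the patent names no separate guide) and an illustrative stretch factor for the detail layer:

```python
import cv2
import numpy as np

def guided_filter_self(I, r=7, eps=1e-6):
    """Self-guided filtering of the I component; box filters realize the
    local-window means of equations (13)-(17)."""
    I = I.astype(np.float64)
    ksize = (2 * r + 1, 2 * r + 1)
    mean_I = cv2.boxFilter(I, -1, ksize)                  # mu_(k,l)
    var_I = cv2.boxFilter(I * I, -1, ksize) - mean_I**2   # sigma^2_(k,l)
    a = var_I / (var_I + eps)                             # coefficient a_(k,l)
    b = (1.0 - a) * mean_I                                # coefficient b_(k,l)
    mean_a = cv2.boxFilter(a, -1, ksize)                  # averaged a over windows
    mean_b = cv2.boxFilter(b, -1, ksize)                  # averaged b over windows
    return mean_a * I + mean_b                            # filtered output q

def enhance_texture(I, gain=2.0):
    """Base/detail decomposition; the stretch factor `gain` is a
    hypothetical value, not specified in the patent."""
    base = guided_filter_self(I)
    detail = I - base             # detail layer
    return base + gain * detail   # texture-enhanced image
```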
Step four: and respectively carrying out wavelet transformation on the full-color image subjected to fractional differentiation processing and the multi-spectrum image brightness I component subjected to guide filtering processing to obtain transformed high-frequency components and low-frequency components. Wherein, the high frequency component adopts the principle of absolute value maximization, and the low frequency component adopts the principle of weighted average.
Step five: and obtaining a result image after wavelet inverse transformation through wavelet reconstruction.
Step six: taking the result obtained by wavelet inverse transformation as I new Component and IHS with H component and S componentInverse transformation is carried out to obtain a fusion image, and the calculation formula is as follows:
$$\begin{bmatrix} R_{new} \\ G_{new} \\ B_{new} \end{bmatrix} = \begin{bmatrix} 1 & -1/\sqrt{2} & 1/\sqrt{2} \\ 1 & -1/\sqrt{2} & -1/\sqrt{2} \\ 1 & \sqrt{2} & 0 \end{bmatrix}\begin{bmatrix} I_{new} \\ s_{1} \\ s_{2} \end{bmatrix} \quad (18)$$
where R_new, G_new and B_new are the red, green and blue bands of the fused image, respectively.
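A minimal sketch of the inverse transform of equation (18); s_1 and s_2 are recovered from the retained H and S components as S·sin(H) and S·cos(H), which inverts the arctan2-based forward transform sketched earlier:

```python
import numpy as np

def ihs_inverse(i_new, h, s):
    """Inverse IHS transform of equation (18)."""
    s1, s2 = s * np.sin(h), s * np.cos(h)          # recover intermediate variables
    r = i_new - s1 / np.sqrt(2) + s2 / np.sqrt(2)  # R_new
    g = i_new - s1 / np.sqrt(2) - s2 / np.sqrt(2)  # G_new
    b = i_new + np.sqrt(2) * s1                    # B_new
    return np.stack([r, g, b], axis=-1)
```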
The method selects registered multispectral and panchromatic images as test images and is compared with the IHS, Brovey, PCA, DWT, ATWT-M3, ATWT, AWLP, GS, HPF and MTF-GLP methods.
The experimental results are as follows:
Experiment 1: the fusion results of the different methods on the test image are shown in fig. 2. Comparative analysis shows that the fusion result of the proposed method enhances the detail texture of ground objects while retaining the spectral characteristics of the image, giving higher clarity and a better visual effect.
Experiment 2: to improve the accuracy of the quality evaluation of the fusion results, several common evaluation indices are adopted, including average gradient (AG), mean value (ME), standard deviation (SD), information entropy (IE), mutual information (MI) and spatial frequency (SF). The larger the value of an index, the richer the spatial information of the image and the stronger its gradation, as shown in Table 1. According to Table 1, the fusion result obtained by the proposed method improves on every quality evaluation index to varying degrees compared with the other methods and has a clear overall advantage. The results show that the fused image of the proposed method has richer detail and a better visual effect.
TABLE 1 statistical table of image fusion result quality evaluation index
(The index values of Table 1 are reproduced as an image in the original publication.)
In summary, the multi-source remote sensing image fusion method disclosed by the invention effectively reduces spectral distortion and the loss of spatial detail information in the fusion result, and has high effectiveness and feasibility.
While the preferred embodiments of the present invention have been described in detail, the scope of the present invention is not limited to the above embodiments, and other variations can be made within the knowledge of those skilled in the art without departing from the spirit of the present invention, and thus should be included in the scope of the present invention.

Claims (2)

1. A multi-source remote sensing image fusion method, characterized by comprising the following steps:
step one: acquiring a multispectral image and a panchromatic image of the same ground object target, and upsampling the multispectral image by using a bicubic interpolation method to ensure that the size of the multispectral image is consistent with that of the panchromatic image;
step two: performing color space transformation on the multispectral image by IHS transformation, extracting the luminance I component of the multispectral image for subsequent processing, and retaining its hue H component and saturation S component for the later inverse IHS transformation;
step three: constructing an adaptive fractional-order differential to enhance the edge details of the panchromatic image and retain feature contour information, while filtering the luminance I component of the multispectral image with guided filtering;
step four: performing wavelet transforms respectively on the fractionally differentiated panchromatic image and the guided-filtered multispectral luminance I component to obtain their high-frequency and low-frequency components, the high-frequency components being fused by the maximum-absolute-value rule and the low-frequency components by the weighted-average rule;
step five: obtaining a result image after wavelet inverse transformation through wavelet reconstruction;
step six: taking the result of the inverse wavelet transform as the new component I_new and performing the inverse IHS transform with the H component and the S component to obtain the fused image;
the luminance component I, the hue component H and the saturation component S obtained in step two are respectively:
$$\begin{bmatrix} I \\ s_{1} \\ s_{2} \end{bmatrix} = \begin{bmatrix} 1/3 & 1/3 & 1/3 \\ -\sqrt{2}/6 & -\sqrt{2}/6 & 2\sqrt{2}/6 \\ 1/\sqrt{2} & -1/\sqrt{2} & 0 \end{bmatrix}\begin{bmatrix} R \\ G \\ B \end{bmatrix}$$

$$H = \tan^{-1}\!\left(s_{1}/s_{2}\right)$$

$$S = \sqrt{s_{1}^{2}+s_{2}^{2}}$$
wherein R, G and B are respectively the red, green and blue bands of the multispectral image;
the adaptive fractional-order differentiation adopted in step three is constructed in the following manner:
first, the spatial frequency and the average gradient of the panchromatic image f(i, j) are calculated, the spatial frequency being obtained with the following formulas:

$$RF = \sqrt{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=2}^{N}\left[f(i,j)-f(i,j-1)\right]^{2}}$$

$$CF = \sqrt{\frac{1}{MN}\sum_{i=2}^{M}\sum_{j=1}^{N}\left[f(i,j)-f(i-1,j)\right]^{2}}$$

$$SF = \sqrt{RF^{2}+CF^{2}}$$
wherein the panchromatic image f(i, j) has size M×N, and RF and CF represent the row frequency and the column frequency of f(i, j), respectively; in addition, the average gradient of f(i, j) is calculated as:

$$AG = \frac{1}{(M-1)(N-1)}\sum_{i=1}^{M-1}\sum_{j=1}^{N-1}\sqrt{\frac{1}{2}\left[\left(\frac{\partial f}{\partial i}\right)^{2}+\left(\frac{\partial f}{\partial j}\right)^{2}\right]}$$
then, the spatial frequency and the average gradient of the panchromatic image are normalized with an arctangent-type nonlinear normalization function, yielding the normalized values $\overline{SF}$ and $\overline{AG}$ (the normalization formulas appear only as images in the original publication);
the two normalized values are then averaged with equal weights:

$$Y = \frac{\overline{SF}+\overline{AG}}{2}$$
finally, the differential order v is constructed using the Tanh function:

$$f(Y) = \tanh(Y)$$

$$v = \beta + (\alpha-\beta)\,f(Y)$$

wherein β and α take the values 0.5 and 0.7, respectively;
the guided filtering of the multispectral image I component in step three is performed in the following manner:
first, the radius of the guided filter is set to r = 7 and the regularization parameter to ε = 10⁻⁶; the linear coefficients a_(k,l) and b_(k,l) of the guided filter are calculated as:

$$a_{(k,l)} = \frac{\dfrac{1}{|\omega|}\sum\limits_{(i,j)\in\omega_{(k,l)}} G(i,j)\,I(i,j)-\mu_{(k,l)}\,\bar{I}_{(k,l)}}{\sigma_{(k,l)}^{2}+\varepsilon}$$

$$b_{(k,l)} = \bar{I}_{(k,l)}-a_{(k,l)}\,\mu_{(k,l)}$$

wherein |ω| denotes the number of pixels in the rectangular local window ω_(k,l) centered at pixel (k, l) with radius r; σ²_(k,l) and μ_(k,l) are the variance and mean of the guidance-image pixels G(i, j) contained in ω_(k,l); Ī_(k,l) is the mean of the multispectral I-component pixels contained in ω_(k,l); and ε is the regularization parameter;

since a pixel (i, j) may be covered simultaneously by multiple local windows ω_(k,l), the coefficients a_(k,l) and b_(k,l) are averaged:

$$\bar{a}_{(i,j)} = \frac{1}{|\omega|}\sum_{(k,l)\in\omega_{(i,j)}} a_{(k,l)}$$

$$\bar{b}_{(i,j)} = \frac{1}{|\omega|}\sum_{(k,l)\in\omega_{(i,j)}} b_{(k,l)}$$

substituting $\bar{a}_{(i,j)}$ and $\bar{b}_{(i,j)}$ into the linear model of guided filtering gives the filtered output image:

$$q(i,j) = \bar{a}_{(i,j)}\,G(i,j)+\bar{b}_{(i,j)}$$
and taking the result of the guided filtering as the base layer of the image, subtracting the base layer from the multispectral I component to obtain the detail layer, linearly stretching the gray-level range of the detail layer, and finally adding the detail layer back to the base layer to obtain the texture-enhanced image.
2. The method of claim 1, wherein in step six the inverse IHS transform is performed on the obtained I_new component together with the H component and the S component, the fused image finally being obtained by the following formula:
$$\begin{bmatrix} R_{new} \\ G_{new} \\ B_{new} \end{bmatrix} = \begin{bmatrix} 1 & -1/\sqrt{2} & 1/\sqrt{2} \\ 1 & -1/\sqrt{2} & -1/\sqrt{2} \\ 1 & \sqrt{2} & 0 \end{bmatrix}\begin{bmatrix} I_{new} \\ s_{1} \\ s_{2} \end{bmatrix}$$
wherein R_new, G_new and B_new are respectively the red, green and blue bands of the fused image.
CN202010378705.2A 2020-05-07 2020-05-07 Multisource remote sensing image fusion method Active CN111563866B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010378705.2A CN111563866B (en) 2020-05-07 2020-05-07 Multisource remote sensing image fusion method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010378705.2A CN111563866B (en) 2020-05-07 2020-05-07 Multisource remote sensing image fusion method

Publications (2)

Publication Number Publication Date
CN111563866A CN111563866A (en) 2020-08-21
CN111563866B (en) 2023-05-12

Family

ID=72070788

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010378705.2A Active CN111563866B (en) 2020-05-07 2020-05-07 Multisource remote sensing image fusion method

Country Status (1)

Country Link
CN (1) CN111563866B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112330581B * 2020-11-02 2022-07-12 Yanshan University Fusion method and system of SAR and multispectral image
CN113992838A * 2021-08-09 2022-01-28 Zhongke Lianxin (Guangzhou) Technology Co., Ltd. Imaging focusing method and control method of silicon-based multispectral signal
CN114897757B * 2022-06-10 2024-06-25 Dalian Minzu University NSST and parameter self-adaptive PCNN-based remote sensing image fusion method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104851091A * 2015-04-28 2015-08-19 Sun Yat-sen University Remote sensing image fusion method based on convolution enhancement and HCS transform
CN108921809A * 2018-06-11 2018-11-30 Shanghai Ocean University Multispectral and panchromatic image fusion method under integral principle based on spatial frequency
CN109993717A * 2018-11-14 2019-07-09 Chongqing University of Posts and Telecommunications Remote sensing image fusion method combining guided filtering and IHS transformation

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101216557B * 2007-12-27 2011-07-20 Fudan University Residual hypercomplex number dual decomposition multispectral and panchromatic image fusion method
KR100944462B1 * 2008-03-07 2010-03-03 Korea Aerospace Research Institute Satellite image fusion method and system
CN101930604B * 2010-09-08 2012-03-28 Institute of Automation, Chinese Academy of Sciences Fusion method of panchromatic image and multispectral image based on low-frequency correlation analysis
CN103679661B * 2013-12-25 2016-09-28 Beijing Normal University Adaptive remote sensing image fusion method based on significance analysis
CN104346790B * 2014-10-30 2017-06-20 Sun Yat-sen University Remote sensing image fusion method of HCS combined with wavelet transform
CN104851077B * 2015-06-03 2017-10-13 Sichuan University Adaptive panchromatic sharpening method for remote sensing images
CN105741252B * 2015-11-17 2018-11-16 Xidian University Video image grade reconstruction method based on sparse representation and dictionary learning
CN106023129A * 2016-05-26 2016-10-12 Xi'an Technological University Infrared and visible light image fused automobile anti-blooming video image processing method
US10176966B1 * 2017-04-13 2019-01-08 Fractilia, Llc Edge detection system
CN108874857A * 2018-04-13 2018-11-23 Chongqing Three Gorges University Local chronicle document compilation and digital experience system
CN109166089A * 2018-07-24 2019-01-08 Chongqing Three Gorges University Method for fusing a multispectral image and a panchromatic image


Also Published As

Publication number Publication date
CN111563866A (en) 2020-08-21

Similar Documents

Publication Publication Date Title
CN111563866B (en) Multisource remote sensing image fusion method
CN107123089B (en) Remote sensing image super-resolution reconstruction method and system based on depth convolution network
CN111080567B (en) Remote sensing image fusion method and system based on multi-scale dynamic convolutional neural network
CN109102469B (en) Remote sensing image panchromatic sharpening method based on convolutional neural network
CN109447922B (en) Improved IHS (induction heating system) transformation remote sensing image fusion method and system
CN108921809B (en) Multispectral and panchromatic image fusion method based on spatial frequency under integral principle
CN108830796A (en) Based on the empty high spectrum image super-resolution reconstructing method combined and gradient field is lost of spectrum
CN103942769B (en) A kind of satellite remote-sensing image fusion method
CN111260580B (en) Image denoising method, computer device and computer readable storage medium
CN110544212B (en) Convolutional neural network hyperspectral image sharpening method based on hierarchical feature fusion
Wen et al. An effective network integrating residual learning and channel attention mechanism for thin cloud removal
CN111882485B (en) Hierarchical feature feedback fusion depth image super-resolution reconstruction method
CN107958450B (en) Panchromatic multispectral image fusion method and system based on self-adaptive Gaussian filtering
Bi et al. Haze removal for a single remote sensing image using low-rank and sparse prior
CN103268596A (en) Method for reducing image noise and enabling colors to be close to standard
CN117575953B (en) Detail enhancement method for high-resolution forestry remote sensing image
CN111311503A (en) Night low-brightness image enhancement system
CN116109535A (en) Image fusion method, device and computer readable storage medium
CN110084774B (en) Method for minimizing fusion image by enhanced gradient transfer and total variation
CN114897757B (en) NSST and parameter self-adaptive PCNN-based remote sensing image fusion method
CN113284067A (en) Hyperspectral panchromatic sharpening method based on depth detail injection network
CN116630198A (en) Multi-scale fusion underwater image enhancement method combining self-adaptive gamma correction
Yamaguchi et al. Image demosaicking via chrominance images with parallel convolutional neural networks
CN115294001A (en) Night light remote sensing image fusion method for improving IHS and wavelet transformation
CN111080560B (en) Image processing and identifying method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant