CN109636768A - Remote sensing image fusing method, device and electronic equipment - Google Patents
- Publication number
- CN109636768A (application number CN201811516585.7A)
- Authority
- CN
- China
- Prior art keywords
- image
- original
- multispectral image
- multispectral
- panchromatic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
- G06T2207/10036—Multispectral image; Hyperspectral image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
The present invention belongs to the field of remote sensing image processing and provides a remote sensing image fusion method, device, and electronic equipment. The method comprises: obtaining an original panchromatic image and a corresponding original multispectral image; down-sampling the original panchromatic image and the original multispectral image and fusing them to obtain a first multispectral image, and fusing the original panchromatic image with the original multispectral image to obtain a second multispectral image; based on a generative adversarial network, fitting the original multispectral image with the first multispectral image to obtain a generative adversarial network model between the first multispectral image and the original multispectral image; and inputting the second multispectral image into the generative adversarial network model to obtain the final fused image. The final fused image retains more spectral information while maintaining a higher spatial resolution.
Description
Technical Field
The invention belongs to the technical field of remote sensing image processing, and particularly relates to a remote sensing image fusion method and device and electronic equipment.
Background
In remote sensing image acquisition, sensor limitations make it difficult to obtain data with both high spatial resolution and high spectral resolution for the same scene; image fusion technology can effectively address this problem. With conventional image fusion methods, the fused image has a high spatial resolution but generally suffers from considerable spectral distortion, and reducing this spectral distortion is one of the key problems to be solved in image fusion.
Currently, remote sensing technology is developing toward high spectral, high spatial, and high temporal resolution. However, for the same scene, it is difficult to obtain a remote sensing image that has both high spatial resolution and high spectral resolution. Image fusion can combine the multispectral information of a single sensor or the information provided by different sensors, improving the timeliness and reliability of remote sensing information extraction as well as the efficiency of data use. Many image fusion methods have been studied, including methods based on component substitution (such as PCA fusion, Brovey transform fusion, and Gram-Schmidt fusion), methods based on multi-scale transforms (such as wavelet transform fusion and Contourlet transform fusion), and methods combining component substitution with multi-scale transforms.
The Brovey transform is one of the simpler methods for fusing remote sensing images. It enhances the image by multiplying each normalized multispectral band by the high-resolution panchromatic image. The fused red (R'), green (G'), and blue (B') band values are (standard Brovey form; the patent text elides the formula):
R' = 3R·PAN / (R + G + B), G' = 3G·PAN / (R + G + B), B' = 3B·PAN / (R + G + B),
where PAN is the high-resolution panchromatic value.
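The Brovey computation is small enough to sketch directly. The NumPy snippet below assumes float inputs with the multispectral bands already resampled to the panchromatic grid; normalising by the per-pixel band mean is one common convention and is not taken from the patent:

```python
import numpy as np

def brovey_fusion(ms, pan):
    """Brovey transform fusion sketch.
    ms:  (H, W, 3) R/G/B bands, resampled to the panchromatic grid.
    pan: (H, W) high-resolution panchromatic image.
    Each band is scaled by pan divided by the per-pixel band mean,
    i.e. R' = 3 * R * PAN / (R + G + B)."""
    band_mean = ms.mean(axis=2) + 1e-12   # (R + G + B) / 3, guarded against zero
    ratio = pan / band_mean
    return ms * ratio[..., None]
```

With equal bands and pan equal to the band mean the ratio is 1 and the image is unchanged, which is a quick sanity check on any implementation.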
The PCA transform is a multidimensional linear transform built on the statistical characteristics of the image; it concentrates variance and compresses data volume, and is also known mathematically as the K-L (Karhunen-Loeve) transform. PCA fusion proceeds as follows: after the multispectral bands are transformed by PCA, the panchromatic high-resolution image is gray-stretched so that its mean and variance match those of the first principal component; the first component is then replaced with the stretched high-resolution panchromatic image, and finally the fused image is obtained by the inverse PCA transform.
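As a rough illustration of these steps (a sketch, not the patent's implementation), the PCA substitution can be written with NumPy's eigendecomposition; the gray-stretch is the mean/variance match described above:

```python
import numpy as np

def pca_fusion(ms, pan):
    """PCA (K-L) pansharpening sketch.
    ms: (H, W, B) multispectral image resampled to pan size; pan: (H, W)."""
    h, w, b = ms.shape
    X = ms.reshape(-1, b).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    _, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    vecs = vecs[:, ::-1]                  # eigenvectors, largest variance first
    pcs = Xc @ vecs                       # forward PCA transform
    p = pan.reshape(-1).astype(float)
    # gray-stretch pan to the mean and variance of the first component
    p = (p - p.mean()) / (p.std() + 1e-12) * pcs[:, 0].std() + pcs[:, 0].mean()
    pcs[:, 0] = p                         # substitute the first component
    return (pcs @ vecs.T + mean).reshape(h, w, b)  # inverse PCA transform
```

Because the substituted component keeps the (zero) mean of the original first component, the per-band means of the fused image match those of the input multispectral image.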
The Gram-Schmidt transform is a common method in linear algebra and multivariate statistics. Like the PCA transform, it applies an orthogonal transform to a matrix or multi-band image to remove the correlation between multispectral bands. The difference is that after the PCA transform the information is redistributed among the principal components, with the first component carrying the most information and the rest carrying successively less, whereas the components after the Gram-Schmidt transform are merely orthogonal and carry roughly similar amounts of information, which avoids the over-concentration of information seen in PCA.
Fusion methods based on multi-scale transforms decompose the original panchromatic and multispectral images into components in different frequency bands, fuse the corresponding frequency components of the two images according to a fusion rule, and finally apply the inverse transform to the fused components to obtain the fused image. This approach typically keeps spectral distortion low, but loses some spatial detail information.
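A minimal two-scale version of this idea can be sketched as follows; the box filter stands in for the wavelet or Contourlet analysis and is an assumption for illustration only:

```python
import numpy as np

def box_blur(img, k=5):
    """Box filter as a crude low-pass (a stand-in for the multi-scale
    analysis step; the actual filter choice is an assumption)."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def two_scale_fusion(pan, ms_band):
    """Keep the low frequencies of the multispectral band (spectral fidelity)
    and inject the high frequencies of the panchromatic image (spatial detail)."""
    return box_blur(ms_band) + (pan - box_blur(pan))
```

The fusion rule here (low frequencies from the multispectral band, high frequencies from the panchromatic image) reflects the trade-off described above: spectral distortion stays low, but spatial detail depends entirely on the high-pass residual.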
Methods combining component substitution with multi-scale transforms draw on the advantages of both, achieving a certain compromise between spectral distortion and spatial information preservation, but some spectral distortion still remains.
Therefore, keeping spectral distortion low in practical applications is a key problem to be solved urgently in the field of image fusion.
Disclosure of Invention
The invention aims to provide a remote sensing image fusion method, device, and electronic equipment, to solve the technical problem in the prior art that low spectral distortion cannot be maintained during remote sensing image fusion.
In a first aspect, the present invention provides a remote sensing image fusion method, including the following steps:
acquiring an original panchromatic image and a corresponding original multispectral image;
down-sampling and fusing the original panchromatic image and the original multispectral image to obtain a first multispectral image, and fusing the original panchromatic image and the original multispectral image to obtain a second multispectral image;
fitting the original multispectral image with the first multispectral image based on a generative adversarial network, to obtain a generative adversarial network model between the first multispectral image and the original multispectral image;
and inputting the second multispectral image into the generative adversarial network model to obtain a final fused image.
Optionally, before the step of inputting the second multispectral image into the generative adversarial network model to obtain a final fused image, the method further includes:
fitting the original multispectral image with the second multispectral image based on a generative adversarial network, to obtain a generative adversarial network model between the second multispectral image and the original multispectral image.
Optionally, the step of down-sampling and fusing the original panchromatic image and the original multispectral image to obtain a first multispectral image includes:
respectively performing down-sampling processing on the original panchromatic image and the original multispectral image according to a preset down-sampling level to obtain a down-sampling panchromatic image and a down-sampling multispectral image;
and fusing the down-sampling panchromatic image and the down-sampling multispectral image to obtain the first multispectral image.
Optionally, the step of fusing the downsampled panchromatic image and the downsampled multispectral image to obtain the first multispectral image includes:
fusing the down-sampled panchromatic image and the down-sampled multispectral image using an IHS-transform-based image fusion algorithm.
Optionally, the step of fitting the original multispectral image with the first multispectral image based on a generative adversarial network to obtain a generative adversarial network model between the first multispectral image and the original multispectral image includes:
cutting the first multispectral image and the original multispectral image into image blocks of the same size to form a training data set;
constructing a generator and a discriminator based on the generative adversarial network;
inputting the first multispectral image and the original multispectral image in the training data set into the generator for training, to obtain a fitted image;
and inputting the fitted image and the original multispectral image into the discriminator for adversarial training, finally obtaining a generative adversarial network model between the first multispectral image and the original multispectral image.
In a second aspect, a remote sensing image fusion device is provided, which includes:
the original image acquisition module is used for acquiring an original panchromatic image and a corresponding original multispectral image;
the image fusion module is used for down-sampling the original panchromatic image and the original multispectral image and fusing them to obtain a first multispectral image, and for fusing the original panchromatic image with the original multispectral image to obtain a second multispectral image;
the training module is used for fitting the original multispectral image with the first multispectral image based on a generative adversarial network, to obtain a generative adversarial network model between the first multispectral image and the original multispectral image;
and the final fusion module is used for inputting the second multispectral image into the generative adversarial network model to obtain a final fused image.
Optionally, the image fusion module includes:
the down-sampling unit is used for respectively carrying out down-sampling processing on the original panchromatic image and the original multispectral image according to a preset down-sampling level to obtain a down-sampled panchromatic image and a down-sampled multispectral image;
and the image fusion unit is used for fusing the downsampling panchromatic image and the downsampling multispectral image to obtain the first multispectral image.
In a third aspect, an electronic device is provided, including:
a processor; and
a memory communicatively coupled to the processor; wherein,
the memory stores readable instructions which, when executed by the processor, implement the method of the first aspect.
In a fourth aspect, a computer readable storage medium is provided, having stored thereon a computer program which, when executed, implements the method of the first aspect.
When remote sensing image fusion is carried out, after the original panchromatic image and the corresponding original multispectral image are obtained, they are down-sampled and fused to obtain a first multispectral image, and fused directly to obtain a second multispectral image. The original multispectral image is then fitted with the first multispectral image based on a generative adversarial network to obtain a generative adversarial network model between the first multispectral image and the original multispectral image, and the second multispectral image is input into this model, so that the final fused image retains good spatial resolution as well as good spectral information.
Drawings
Fig. 1 is a flowchart illustrating an implementation of a remote sensing image fusion method according to an embodiment of the present invention;
fig. 2 is a schematic flow chart of a remote sensing image fusion method according to an embodiment;
fig. 3 is a block diagram of a remote sensing image fusion device according to a third embodiment of the present invention;
fig. 4 is a block diagram of an electronic device 100 according to a fourth embodiment of the present invention;
fig. 5 is a schematic diagram of a fused image obtained by using the methods according to the fifth embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The following detailed description of specific implementations of the present invention is provided in conjunction with specific embodiments:
the first embodiment is as follows:
Fig. 1 shows the implementation flow of a remote sensing image fusion method provided by the first embodiment of the present invention. The embodiment is applicable to electronic devices such as smartphones and computers, which are provided with a processor to fuse the input remote sensing images. For convenience of explanation, only the parts related to the embodiment of the present invention are shown, detailed as follows:
in step S110, an original panchromatic image and a corresponding original multispectral image are obtained.
In step S120, the original panchromatic image and the original multispectral image are down-sampled and fused to obtain a first multispectral image, and the original panchromatic image and the original multispectral image are fused to obtain a second multispectral image.
Because image fusion lacks a true reference image, the original panchromatic image and the corresponding original multispectral image are each down-sampled according to the Wald protocol and then fused, and the resulting first multispectral image is compared with the original multispectral image; that is, the original multispectral image serves as the reference image for evaluating the quality of the fusion method. The evaluated method is then applied directly to the second multispectral image to obtain the final fused image.
Because the fusion performs well on the down-sampled images, the fused image can still retain more spectral information while keeping a higher spatial resolution.
Optionally, when the original panchromatic image and the original multispectral image are down-sampled and fused to obtain the first multispectral image, the two images are first down-sampled according to a preset down-sampling level to obtain a down-sampled panchromatic image and a down-sampled multispectral image, which are then fused to obtain the first multispectral image.
The preset down-sampling level may be 4×: for example, when the resolution of the original panchromatic image is 1024 × 1024 and that of the corresponding original multispectral image is 256 × 256, down-sampling by a factor of 4 yields a 256 × 256 down-sampled panchromatic image and a 64 × 64 down-sampled multispectral image. Other preset down-sampling levels are of course possible.
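A simple block-average downsampler reproduces the resolutions in this example; the averaging filter itself is an assumption, since the patent does not fix a particular degradation filter:

```python
import numpy as np

def block_downsample(img, factor=4):
    """Block-average downsampling by `factor` (one simple realisation of the
    Wald-protocol degradation; the exact filter is an assumption)."""
    h, w = img.shape[0] // factor, img.shape[1] // factor
    img = img[:h * factor, :w * factor]
    if img.ndim == 2:
        return img.reshape(h, factor, w, factor).mean(axis=(1, 3))
    return img.reshape(h, factor, w, factor, -1).mean(axis=(1, 3))

pan_small = block_downsample(np.ones((1024, 1024)))    # 1024x1024 -> 256x256
ms_small = block_downsample(np.ones((256, 256, 4)))    # 256x256  -> 64x64, 4 bands
```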
When fusing the down-sampled panchromatic image and the down-sampled multispectral image, an IHS-transform-based image fusion algorithm may be used, though other image fusion algorithms are also possible.
When the IHS-transform-based image fusion algorithm is used to fuse the down-sampled panchromatic and multispectral images, the down-sampled multispectral bands are first combined into an RGB color image, which is transformed by the forward IHS transform to separate the intensity (I), hue (H), and saturation (S) components. The down-sampled panchromatic image is then histogram-matched to the intensity component so that the matched image and the I component have the same mean and variance. The matched image replaces the I component, and the inverse IHS transform yields the first multispectral image, whose spatial resolution is consistent with the panchromatic image while its spectral information changes little.
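The fast additive form of IHS fusion sketched below captures the replace-the-intensity step; taking the intensity as the plain band mean is a simplifying assumption (real IHS implementations use a specific forward/inverse color transform):

```python
import numpy as np

def ihs_fusion(ms_rgb, pan):
    """Fast additive IHS fusion sketch.
    Replacing the intensity component with the histogram-matched pan is
    equivalent to adding the same detail offset to every band."""
    ms = ms_rgb.astype(float)
    intensity = ms.mean(axis=2)                        # I component (assumed: band mean)
    p = (pan - pan.mean()) / (pan.std() + 1e-12)       # match pan to the I component's
    p = p * intensity.std() + intensity.mean()         # mean and variance
    return ms + (p - intensity)[..., None]             # inverse transform
```

If the panchromatic image already equals the intensity component, the offset is zero and the multispectral image passes through unchanged, which verifies that the substitution only injects spatial detail.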
In step S130, based on a generative adversarial network, the original multispectral image is fitted with the first multispectral image to obtain a generative adversarial network model between the first multispectral image and the original multispectral image.
First, the first multispectral image and the original multispectral image are cut into image blocks of the same size to form a training data set. A generator and a discriminator are then constructed based on the generative adversarial network; the first multispectral image is input into the generator, which is trained to fit the brightness values of each band of the original multispectral image, producing a fitted image. The fitted image and the original multispectral image are then input into the discriminator for adversarial training, yielding a generative adversarial network model between the first multispectral image and the original multispectral image.
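The patch-cutting step might look like the following; the 64-pixel patch size is an assumption, since the patent only requires the blocks to be of equal size:

```python
import numpy as np

def make_patches(img, size=64):
    """Cut an image into non-overlapping size x size blocks for the training
    data set (patch size is a hypothetical choice, not fixed by the patent)."""
    rows, cols = img.shape[0] // size, img.shape[1] // size
    patches = []
    for i in range(rows):
        for j in range(cols):
            patches.append(img[i * size:(i + 1) * size,
                               j * size:(j + 1) * size])
    return patches
```

The same function would be applied to the first multispectral image and the original multispectral image so that corresponding blocks form input/target pairs for the generator.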
The loss functions of the generator and the discriminator are, respectively:
G_loss = -E_{x~P_g}[f_w(x)]
D_loss = E_{x~P_g}[f_w(x)] - E_{x~P_r}[f_w(x)]
where f_w is the discriminator (critic) function, P_g is the distribution of generated images, and P_r is the distribution of real images.
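These are WGAN-style losses; in code, with the expectations replaced by batch means, they reduce to the following sketch, where `critic_fake` and `critic_real` are hypothetical arrays of discriminator outputs on generated and real batches:

```python
import numpy as np

def g_loss(critic_fake):
    """Generator loss: -E_{x~P_g}[f_w(x)], estimated by the batch mean."""
    return -np.mean(critic_fake)

def d_loss(critic_fake, critic_real):
    """Discriminator loss: E_{x~P_g}[f_w(x)] - E_{x~P_r}[f_w(x)]."""
    return np.mean(critic_fake) - np.mean(critic_real)
```

Minimising `d_loss` pushes the critic's scores on real images above those on generated ones, while minimising `g_loss` pushes the generated images' scores up.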
When evaluating the difference between the generated fitted image and the original multispectral image, the difference is computed on one hand with an MSE (mean square error) based method on the result images themselves, and on the other hand by inputting both images into a pre-trained VGG-19 model and averaging the differences between corresponding feature layers. The MSE loss is the mean squared pixel difference between the fitted image and the original multispectral image; the VGG-19 loss is the mean squared difference between their feature maps, averaged over the selected layers.
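The two difference measures can be sketched as follows; the VGG-19 feature extraction itself is not reproduced here, so the `feats_*` arguments are hypothetical lists standing in for its layer activations:

```python
import numpy as np

def mse_loss(fitted, original):
    """Pixel-wise mean square error between fitted and original images."""
    return np.mean((np.asarray(fitted, dtype=float)
                    - np.asarray(original, dtype=float)) ** 2)

def feature_loss(feats_fitted, feats_original):
    """Content-loss sketch: MSE per feature layer, averaged over layers.
    In the described method the features would come from a trained VGG-19."""
    return np.mean([mse_loss(a, b)
                    for a, b in zip(feats_fitted, feats_original)])
```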
In step S140, the second multispectral image is input into the generative adversarial network model to obtain the final fused image.
Fig. 2 is a schematic flow chart of a remote sensing image fusion method according to this embodiment.
By using this method, when remote sensing image fusion is carried out, after the original panchromatic image and the corresponding original multispectral image are obtained, they are down-sampled and fused to obtain a first multispectral image, and fused directly to obtain a second multispectral image. The original multispectral image is then fitted with the first multispectral image based on a generative adversarial network to obtain a generative adversarial network model between the first multispectral image and the original multispectral image, and the second multispectral image is input into this model, so that the final fused image retains good spatial resolution as well as good spectral information.
Example two:
The second embodiment of the invention shows the implementation of another remote sensing image fusion method. It differs from the first embodiment in that step S130 is replaced by step S210; that is, the remote sensing image fusion method of the second embodiment includes steps S110, S120, S210, and S140. For convenience of explanation, only the parts related to the second embodiment are shown, detailed as follows:
In step S210, based on a generative adversarial network, the original multispectral image is fitted with the second multispectral image to obtain a generative adversarial network model between the second multispectral image and the original multispectral image.
In the second embodiment, after the original panchromatic image and the original multispectral image are fused to obtain the second multispectral image, the second multispectral image is used directly, based on the generative adversarial network, to fit the original multispectral image, yielding the generative adversarial network model.
Because the original panchromatic image and the corresponding original multispectral image also fuse well, after the generative adversarial network model is obtained by fitting the original multispectral image with the second multispectral image, inputting the second multispectral image into the model yields a final fused image that still retains more spectral information while keeping a higher spatial resolution.
Example three:
fig. 3 shows a block diagram of a remote sensing image fusion device according to a third embodiment of the present invention, and for convenience of description, only the parts related to the third embodiment of the present invention are shown, which include:
an original image obtaining module 110, configured to obtain an original panchromatic image and a corresponding original multispectral image;
the image fusion module 120, configured to down-sample the original panchromatic image and the original multispectral image and fuse them to obtain a first multispectral image, and to fuse the original panchromatic image with the original multispectral image to obtain a second multispectral image;
the training module 130, configured to fit the original multispectral image with the first multispectral image based on a generative adversarial network, to obtain a generative adversarial network model between the first multispectral image and the original multispectral image;
and the final fusion module 140, configured to input the second multispectral image into the generative adversarial network model to obtain a final fused image.
Preferably, the image fusion module 120 includes:
the down-sampling unit 121 is configured to perform down-sampling processing on the original panchromatic image and the original multispectral image according to a preset down-sampling level, so as to obtain a down-sampled panchromatic image and a down-sampled multispectral image;
the image fusion unit 122 is configured to fuse the downsampled panchromatic image and the downsampled multispectral image to obtain a first multispectral image.
In the embodiment of the present invention, each module of the remote sensing image fusion device may be implemented by corresponding hardware or software unit, and each module may be an independent software or hardware module, or may be integrated into a software or hardware unit, which is not limited herein. The detailed implementation of each module can refer to the description of the first embodiment, and is not described herein again.
Example four:
fig. 4 shows a block diagram of an electronic device 100 according to a fourth embodiment of the present invention, and for convenience of description, only the parts related to the embodiment of the present invention are shown.
Referring to fig. 4, electronic device 100 may include one or more of the following components: a processing component 101, a memory 102, a power component 103, a multimedia component 104, an audio component 105, a sensor component 107 and a communication component 108. The above components are not all necessary, and the electronic device 100 may add other components or reduce some components according to its own functional requirements, which is not limited in this embodiment.
The processing component 101 generally controls overall operations of the electronic device 100, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 101 may include one or more processors 109 to execute instructions to perform all or a portion of the above-described operations. Further, the processing component 101 may include one or more modules that facilitate interaction between the processing component 101 and other components. For example, the processing component 101 may include a multimedia module to facilitate interaction between the multimedia component 104 and the processing component 101.
The memory 102 is configured to store various types of data to support operations at the electronic device 100. Examples of such data include instructions for any application or method operating on the electronic device 100. The Memory 102 may be implemented by any type of volatile or non-volatile Memory device or combination thereof, such as an SRAM (Static random access Memory), an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM (Erasable Programmable Read-Only Memory), a PROM (Programmable Read-Only Memory), a ROM (Read-Only Memory), a magnetic Memory, a flash Memory, a magnetic disk, or an optical disk. Also stored in memory 102 are one or more modules configured to be executed by the one or more processors 109 to perform all or a portion of the steps of any of the methods described below.
The power supply component 103 provides power to the various components of the electronic device 100. Power components 103 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for electronic device 100.
The multimedia component 104 includes a screen that provides an output interface between the electronic device 100 and a user. In some embodiments, the screen may include an LCD (Liquid Crystal Display) and a TP (touch panel). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation.
The audio component 105 is configured to output and/or input audio signals. For example, the audio component 105 includes a microphone configured to receive external audio signals when the electronic device 100 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 102 or transmitted via the communication component 108. In some embodiments, audio component 105 also includes a speaker for outputting audio signals.
The sensor assembly 107 includes one or more sensors that provide various aspects of status assessment for the electronic device 100. For example, the sensor assembly 107 may detect the open/closed state of the electronic device 100 and the relative positioning of its components; it may also detect a change in the coordinates of the electronic device 100 or of one of its components, as well as a change in the temperature of the electronic device 100. In some embodiments, the sensor assembly 107 may also include a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 108 is configured to facilitate wired or wireless communication between the electronic device 100 and other devices. The electronic device 100 may access a wireless network based on a communication standard, such as WiFi (Wireless Fidelity), 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 108 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 108 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on RFID (Radio Frequency Identification), IrDA (Infrared Data Association), UWB (Ultra-Wideband), BT (Bluetooth), or other technologies.
In an exemplary embodiment, the electronic Device 100 may be implemented by one or more ASICs (Application specific integrated circuits), DSPs (Digital Signal processors), PLDs (Programmable Logic devices), FPGAs (Field-Programmable gate arrays), controllers, microcontrollers, microprocessors or other electronic components for performing the above-described methods.
The specific manner in which the processor of the server performs these operations has been described in detail in the embodiments of the remote sensing image fusion method above and is not repeated here.
Example four:
In this example, the effectiveness of the present invention is analyzed using actual experimental data.
To compare the fusion performance of different image fusion methods, the experiments also include the following comparison methods: the Brovey transform fusion method, the standard IHS fusion method, the PCA fusion method, Choi's method, the wavelet fusion method, and the HCS fusion method.
To verify the validity and correctness of the algorithm of the present invention, experiments were performed with an IKONOS panchromatic image (1-meter resolution) and the corresponding multispectral image (4-meter resolution) of the Beijing area; each experimental image is 500 × 500 pixels, as shown in Fig. 5. Fig. 5(a) is the original panchromatic image, Fig. 5(b) is the original multispectral image corresponding to Fig. 5(a), and Figs. 5(c)-(i) are the fused images obtained with the Brovey transform, the standard IHS transform, the PCA transform, Choi's IHS (t = 4), the wavelet fusion algorithm, the HCS algorithm, and the method of the present invention, respectively. Note that the original panchromatic image and the original multispectral image were registered before the experiments.
First, visually, the fused image obtained by the present method is very close to the original multispectral image in brightness: it has moderate brightness, vivid color, clear edges, good contrast, and strong layering, giving a good visual impression. In terms of image detail, the fused image is very similar to the original panchromatic image; fine structures such as house edges, shrubs, vehicles on roads, and small parts of buildings are clearly visible. Although the other methods also perform well in detail presentation, they show varying degrees of color deviation and color cast, which is less agreeable to the human eye.
In addition, considering that a fused image should reduce spectral distortion as much as possible while improving spatial resolution, the present invention adopts the following objective quality indices to compare the fusion methods comprehensively:
(1) Entropy: reflects the richness of the information content of an image. The larger the entropy, the more information the fused image contains, and the more spatial detail of the panchromatic image is retained.
(2) ERGAS: an index that evaluates spectral retention in terms of global fusion error, calculated as:
ERGAS = 100 · (h/l) · √[(1/N) · Σᵢ (RMSE(Bᵢ)/Mean(Bᵢ))²]
where h and l denote the spatial resolutions of the panchromatic and multispectral images respectively, N is the number of bands of the original multispectral image, and RMSE(Bᵢ) and Mean(Bᵢ) denote the root mean square error and the mean of the i-th band.
(3) Universal image quality index UIQI: measures the similarity of the images before and after fusion in three respects: loss of correlated information, radiometric distortion, and contrast distortion. The higher the UIQI value, the higher the fusion quality; the value is 1 if the two images are identical. It is calculated as:
UIQI = 4·σ_xy·μ_x·μ_y / [(σ_x² + σ_y²) · (μ_x² + μ_y²)]
where μ_x and μ_y denote the means of the original multispectral image and the fusion result image respectively, σ_x² and σ_y² their variances, and σ_xy their covariance.
(4) Spectral deviation index: reflects the degree of spectral distortion between the fused image and the original multispectral image, calculated as:
D = (1/(M·N)) · Σᵢ,ⱼ |I′ᵢ,ⱼ − Iᵢ,ⱼ| / Iᵢ,ⱼ
where Iᵢ,ⱼ and I′ᵢ,ⱼ denote the gray values of the multispectral image before and after fusion respectively, and M × N is the image size.
(5) Correlation coefficient: divided into a spatial correlation coefficient and a spectral correlation coefficient. The spatial correlation coefficient reflects the similarity of the fusion result image to the original panchromatic image in spatial detail; the spectral correlation coefficient reflects its similarity to the original multispectral image in spectral characteristics. The larger the correlation coefficient, the higher the similarity, i.e., the better the fused image retains the detail features of the original panchromatic image Pan and the spectral features of the original multispectral image. The fused image contains both the spatial detail of the panchromatic image and the spectral information of the multispectral image, but the spatial detail interferes with the computation of the spectral correlation coefficient, and the spectral information interferes with the computation of the spatial correlation coefficient. To reduce or even eliminate this interference, the present invention applies a preprocessing step when computing the two correlation coefficients. For the spatial correlation coefficient, the brightness image Ave of the fusion result multispectral image is first computed, the detail feature images are computed according to the following formula, and the correlation coefficient of C_P and C_A is then taken as the spatial correlation coefficient:
when calculating the spectral correlation coefficient, calculating the spectral characteristics of each wave band of the fused image and the original multispectral image according to the following formula, and then calculating Cfusion_iAnd Cmulti_iAs a spectral correlation coefficient, wherein FusioniAnd MultiiRespectively representing the ith wave band of the fusion result image and the original multispectral image.
The values of the above indices for each fusion method are listed in Table 1 below:
Table 1. Performance of the different fusion methods on the IKONOS panchromatic and multispectral images
(The entropy of the original panchromatic image is 7.376017; the entropy of the original multispectral image is 5.128325 for the R band, 4.753197 for the G band, and 5.154605 for the B band.)
Comparing the index values in Table 1 leads to the following conclusions:
(1) The three spatial projection methods provided by the present invention differ only slightly in all index values, which demonstrates the robustness of the method and its ability to meet the needs of image fusion.
(2) Entropy describes how well the fusion result retains spatial detail features. As the table shows, the entropy of every fusion method is higher than that of the original image, indicating that every method increases the information content of the image. The PCA transform has the lowest entropy, followed by the standard IHS transform, while the three methods of the present invention have the highest entropy.
(3) For the two indices reflecting spectral distortion, ERGAS and the deviation index, the fusion methods behave consistently: the higher the two values, the more severe the spectral distortion. The standard IHS transform and the PCA transform have the largest values, i.e., the most severe spectral distortion, which agrees with the established consensus in the image fusion field. The three spatial projection methods of the present invention have the lowest values on both indices, i.e., they best preserve the spectral characteristics of the image.
(4) UIQI is a comprehensive index reflecting image quality characteristics such as information loss, spectral distortion, and contrast distortion. The index values in the table show that the three methods of the present invention obtain the largest UIQI, indicating that they have a clear advantage over the other methods in terms of spectral distortion, contrast distortion, and related qualities.
(5) Among the correlation coefficient indices, PAN denotes the spatial correlation coefficient and R, G, B denote the spectral correlation coefficients of the three bands with the original multispectral image. The index values in the table show that the spatial projection methods of the present invention differ only slightly from the traditional methods in spatial correlation (less than 0.02 below the highest value), while their spectral correlation is far higher than that of the other methods. Overall, the method achieves both high spatial correlation and high spectral correlation, giving it a clear advantage over comparable methods in retaining both the spatial detail and the spectral features of the original images.
The above five indices evaluate the quality of the fused image comprehensively and objectively, considering that the fused image should retain as much as possible both the spatial detail of the original panchromatic image and the spectral information of the original multispectral image. Overall, the fusion method of the present invention achieves a high entropy, a high UIQI, high spatial and spectral correlation coefficients, and a low global error, showing that it is superior to the other methods in preserving spectral information, reducing radiometric and contrast distortion, and retaining image detail. The fused image obtained by the proposed method therefore retains more useful information from the original images while reducing spectral distortion.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.
Claims (9)
1. A remote sensing image fusion method is characterized by comprising the following steps:
acquiring an original panchromatic image and a corresponding original multispectral image;
down-sampling and fusing the original panchromatic image and the original multispectral image to obtain a first multispectral image, and fusing the original panchromatic image and the original multispectral image to obtain a second multispectral image;
based on a generative adversarial network, fitting the original multispectral image with the first multispectral image to obtain a generative adversarial network model between the first multispectral image and the original multispectral image;
and inputting the second multispectral image into the generative adversarial network model to obtain a final fused image.
2. The method of claim 1, wherein before inputting the second multispectral image into the generative adversarial network model to obtain the final fused image, the method further comprises:
fitting the original multispectral image with the second multispectral image based on a generative adversarial network to obtain a generative adversarial network model of the original multispectral image.
3. The method of claim 1, wherein the step of downsampling and fusing the original panchromatic image and the original multispectral image to obtain a first multispectral image comprises:
respectively performing down-sampling processing on the original panchromatic image and the original multispectral image according to a preset down-sampling level to obtain a down-sampling panchromatic image and a down-sampling multispectral image;
and fusing the down-sampling panchromatic image and the down-sampling multispectral image to obtain the first multispectral image.
4. The method of claim 3, wherein fusing the downsampled panchromatic image and the downsampled multispectral image to obtain the first multispectral image comprises:
and fusing the down-sampled panchromatic image and the down-sampled multispectral image with an image fusion algorithm based on IHS transformation.
5. The method of claim 1, wherein fitting the original multispectral image with the first multispectral image based on a generative adversarial network to obtain the generative adversarial network model between the first multispectral image and the original multispectral image comprises:
cutting the first multispectral image and the original multispectral image into image blocks of the same size to form a training data set;
constructing a generator and a discriminator based on the generative adversarial network;
inputting a first multispectral image and an original multispectral image from the training data set into the generator for training to obtain a fitted image;
and inputting the fitted image and the original multispectral image into the discriminator for adversarial training, finally obtaining the generative adversarial network model between the first multispectral image and the original multispectral image.
6. A remote sensing image fusion device, the device comprising:
the original image acquisition module is used for acquiring an original panchromatic image and a corresponding original multispectral image;
the image fusion module is used for performing down-sampling on the original panchromatic image and the original multispectral image and fusing the original panchromatic image and the original multispectral image to obtain a first multispectral image, and fusing the original panchromatic image and the original multispectral image to obtain a second multispectral image;
the training module is used for fitting the original multispectral image with the first multispectral image based on a generative adversarial network to obtain a generative adversarial network model between the first multispectral image and the original multispectral image;
and the final fusion module is used for inputting the second multispectral image into the generative adversarial network model to obtain a final fused image.
7. The apparatus of claim 6, wherein the image fusion module comprises:
the down-sampling unit is used for respectively carrying out down-sampling processing on the original panchromatic image and the original multispectral image according to a preset down-sampling level to obtain a down-sampled panchromatic image and a down-sampled multispectral image;
and the image fusion unit is used for fusing the downsampling panchromatic image and the downsampling multispectral image to obtain the first multispectral image.
8. An electronic device, characterized in that the electronic device comprises:
a processor; and
a memory communicatively coupled to the processor; wherein,
the memory stores readable instructions which, when executed by the processor, implement the method of any of claims 1-5.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed, carries out the method according to any one of claims 1-5.
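As a rough illustration of the pipeline in claims 1, 3 and 4, the sketch below pairs block-average down-sampling with the additive "fast IHS" injection as a stand-in for the patent's IHS-transform-based fusion; the image shapes, the 3-band assumption, and the down-sampling level of 4 are ours, not the patent's.

```python
import numpy as np

def downsample(img, factor):
    """Block-average down-sampling by an integer factor (a simple stand-in
    for the patent's 'preset down-sampling level')."""
    h = img.shape[-2] // factor * factor
    w = img.shape[-1] // factor * factor
    img = img[..., :h, :w]
    shape = img.shape[:-2] + (h // factor, factor, w // factor, factor)
    return img.reshape(shape).mean(axis=(-3, -1))

def upsample(img, factor):
    """Nearest-neighbour up-sampling so the MS bands match the Pan grid."""
    return img.repeat(factor, axis=-2).repeat(factor, axis=-1)

def ihs_fuse(pan, ms):
    """Additive fast-IHS pansharpening: inject the detail (Pan - I) into each
    band, where I is the mean of the multispectral bands.
    pan is (H, W); ms is (bands, H, W) on the same grid."""
    intensity = ms.mean(axis=0)
    return ms + (pan - intensity)   # detail term broadcasts over the bands

if __name__ == "__main__":
    pan = np.random.rand(256, 256)   # 1 m panchromatic image, illustrative
    ms = np.random.rand(3, 64, 64)   # 4 m multispectral image, 3 bands
    # second multispectral image (claim 1): fuse at full resolution
    second = ihs_fuse(pan, upsample(ms, 4))
    # first multispectral image (claim 3): down-sample both inputs first
    first = ihs_fuse(downsample(pan, 4), upsample(downsample(ms, 4), 4))
```

In the patent's scheme, the `first` image and the original multispectral image would then form the training pairs for the generative adversarial network, and `second` would be passed through the trained generator to produce the final fused image.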
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811516585.7A CN109636768B (en) | 2018-12-12 | 2018-12-12 | Remote sensing image fusion method and device and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811516585.7A CN109636768B (en) | 2018-12-12 | 2018-12-12 | Remote sensing image fusion method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109636768A true CN109636768A (en) | 2019-04-16 |
CN109636768B CN109636768B (en) | 2021-05-25 |
Family
ID=66072869
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811516585.7A Active CN109636768B (en) | 2018-12-12 | 2018-12-12 | Remote sensing image fusion method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109636768B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110211046A (en) * | 2019-06-03 | 2019-09-06 | 重庆邮电大学 | A kind of remote sensing image fusion method, system and terminal based on generation confrontation network |
CN111340743A (en) * | 2020-02-18 | 2020-06-26 | 云南大学 | Semi-supervised multispectral and panchromatic remote sensing image fusion method and system |
CN111951199A (en) * | 2019-05-16 | 2020-11-17 | 武汉Tcl集团工业研究院有限公司 | Image fusion method and device |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108399422A (en) * | 2018-02-01 | 2018-08-14 | 华南理工大学 | A kind of image channel fusion method based on WGAN models |
CN108960345A (en) * | 2018-08-08 | 2018-12-07 | 广东工业大学 | A kind of fusion method of remote sensing images, system and associated component |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108399422A (en) * | 2018-02-01 | 2018-08-14 | 华南理工大学 | A kind of image channel fusion method based on WGAN models |
CN108960345A (en) * | 2018-08-08 | 2018-12-07 | 广东工业大学 | A kind of fusion method of remote sensing images, system and associated component |
Non-Patent Citations (1)
Title |
---|
XIANGYU LIU等: "Psgan: A Generative Adversarial Network for Remote Sensing Image Pan-Sharpening", 《2018 25TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP)》 * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111951199A (en) * | 2019-05-16 | 2020-11-17 | 武汉Tcl集团工业研究院有限公司 | Image fusion method and device |
CN110211046A (en) * | 2019-06-03 | 2019-09-06 | 重庆邮电大学 | A kind of remote sensing image fusion method, system and terminal based on generation confrontation network |
CN110211046B (en) * | 2019-06-03 | 2023-07-14 | 重庆邮电大学 | Remote sensing image fusion method, system and terminal based on generation countermeasure network |
CN111340743A (en) * | 2020-02-18 | 2020-06-26 | 云南大学 | Semi-supervised multispectral and panchromatic remote sensing image fusion method and system |
CN111340743B (en) * | 2020-02-18 | 2023-06-06 | 云南大学 | Semi-supervised multispectral and panchromatic remote sensing image fusion method and system |
Also Published As
Publication number | Publication date |
---|---|
CN109636768B (en) | 2021-05-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9706111B2 (en) | No-reference image and video quality evaluation | |
US20180367774A1 (en) | Convolutional Color Correction in Digital Images | |
CN109636768B (en) | Remote sensing image fusion method and device and electronic equipment | |
Ma et al. | Objective quality assessment for color-to-gray image conversion | |
US10325346B2 (en) | Image processing system for downscaling images using perceptual downscaling method | |
US10949958B2 (en) | Fast fourier color constancy | |
Wang et al. | Image quality assessment: from error visibility to structural similarity | |
US20160350900A1 (en) | Convolutional Color Correction | |
US20170193672A1 (en) | Estimating depth from a single image | |
KR101725884B1 (en) | Automatic processing of images | |
US20050190990A1 (en) | Method and apparatus for combining a plurality of images | |
CN106603941A (en) | Computational complexity adaptive HDR image conversion method and its system | |
CN110335330A (en) | Image simulation generation method and its system, deep learning algorithm training method and electronic equipment | |
US11727321B2 (en) | Method for rendering of augmented reality content in combination with external display | |
CN113240760B (en) | Image processing method, device, computer equipment and storage medium | |
CN110298829A (en) | A kind of lingual diagnosis method, apparatus, system, computer equipment and storage medium | |
CN110689546A (en) | Method, device and equipment for generating personalized head portrait and storage medium | |
WO2020259123A1 (en) | Method and device for adjusting image quality, and readable storage medium | |
Wu et al. | Underwater No‐Reference Image Quality Assessment for Display Module of ROV | |
Brémond et al. | Vision models for image quality assessment: one is not enough | |
CN113706400A (en) | Image correction method, image correction device, microscope image correction method, and electronic apparatus | |
CN114219725A (en) | Image processing method, terminal equipment and computer readable storage medium | |
CN113706438A (en) | Image processing method, related device, equipment, system and storage medium | |
Jiao | [Retracted] Optimization of Color Enhancement Processing for Plane Images Based on Computer Vision | |
CN117975484B (en) | Training method of change detection model, change detection method, device and equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||