CN110930315B - Multispectral image panchromatic sharpening method based on dual-channel convolution network and hierarchical CLSTM - Google Patents
- Publication number: CN110930315B (application CN201911009926.6A)
- Authority
- CN
- China
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
All codes fall under G — Physics; G06 — Computing; Calculating or Counting; G06T — Image Data Processing or Generation, in General; G06T2207/00 — Indexing scheme for image analysis or image enhancement.
- G06T5/73
- G06T2207/10032 — Satellite or aerial image; Remote sensing (under G06T2207/10 — Image acquisition modality)
- G06T2207/20081 — Training; Learning (under G06T2207/20 — Special algorithmic details)
- G06T2207/20084 — Artificial neural networks [ANN] (under G06T2207/20 — Special algorithmic details)
- G06T2207/20221 — Image fusion; Image merging (under G06T2207/20212 — Image combination)
Abstract
The method comprises two parts: model training and multispectral image panchromatic sharpening. In the model training stage, the original sharp multispectral and panchromatic images are first down-sampled to obtain simulated training image pairs; secondly, a dual-channel convolutional network extracts and fuses the features of the panchromatic and multispectral images, and a hierarchical CLSTM realizes fusion between convolutional features of multiple levels and different depths; then a deconvolution network reconstructs a multispectral image with high spatial resolution from the fused features; finally, the model parameters are tuned with the Adam algorithm. In the panchromatic sharpening stage, the trained dual-channel convolutional network and hierarchical CLSTM extract and fuse the features of the panchromatic and multispectral images. The convolutional network is responsible for extracting the features of the multispectral and panchromatic images and fusing its own features with those selected by the CLSTM, while the CLSTM selects and memorizes features of different depths at multiple levels, so that fusion between features of multiple levels and different depths is realized.
Description
Technical Field
The invention belongs to the field of remote sensing image processing, and particularly relates to a multispectral image panchromatic sharpening method based on a dual-channel convolution network and a multi-level fusion strategy.
Background
Remote sensing images have two important properties: spectral resolution and spatial resolution. Spectral resolution is the minimum wavelength range the sensor can distinguish when receiving the spectrum radiated by a target; the narrower this range, the higher the spectral resolution, the stronger the sensor's ability to separate and identify light in each band of the spectrum, the greater the number of bands produced, and the richer the spectral information of the resulting remote sensing image. Spatial resolution is the minimum distance between two adjacent ground objects that can be distinguished in the remote sensing image; the smaller this distance, the higher the spatial resolution, the richer the visible detail of ground objects in the image, and the stronger the image's ability to identify objects.
Most remote sensing applications require images with both high spatial and high spectral resolution. However, given limits on data storage and sensor signal-to-noise ratio, it is difficult to acquire such images directly with a single sensor, so remote sensing images obtained by current sensors have either high spatial resolution or high spectral resolution, but not both. To alleviate this problem, many optical Earth-observation satellites carry two optical sensors that simultaneously acquire two images of the same geographic area with different but complementary characteristics. For example, IKONOS, Gaofen-2 and WorldView-2 all carry a panchromatic sensor and a multispectral sensor: the panchromatic sensor acquires a single-band image of high spatial resolution, while the multispectral sensor acquires a multi-band image of low spatial resolution. These two types of images are referred to as panchromatic images and multispectral images, respectively.
In practical applications, the color information in an image and the sharpness of targets are crucial to image interpretation and analysis, so multispectral images with high spatial resolution are required on many occasions. The original multispectral or panchromatic image alone is often insufficient for users' needs. Researchers therefore attempt to combine the complementary information of the two images organically through image fusion, using the spatial detail of the panchromatic image to improve the spatial resolution of the multispectral image, and thereby obtain a multispectral image that has the spatial resolution of the panchromatic image while retaining the rich spectral information of the original multispectral image. This is multispectral image fusion, also called multispectral image panchromatic sharpening; at present, fusing the multispectral and panchromatic images is the only practical way to obtain high-spatial-resolution multispectral images. In recent years, commercial products built on high-resolution remote sensing imagery (e.g., Google Earth and Bing Maps) have multiplied, and the demand for fused multispectral image data keeps growing. Furthermore, panchromatic sharpening is an important image-enhancement pre-processing step for many remote sensing tasks such as change detection, target recognition and image classification. The method has therefore attracted wide attention in the remote sensing and image processing communities and has been intensively studied.
In recent years, with the development of artificial intelligence and machine learning, many scholars have applied these techniques to the key problems of multispectral image panchromatic sharpening. In 2015, Wei Huang et al. first applied deep neural networks to the field. They assumed that the relationship between high-resolution and low-resolution images is the same for panchromatic and multispectral images, so that studying the mapping between high- and low-resolution panchromatic images yields the mapping between high- and low-resolution multispectral images; however, their model did not outperform traditional methods. On the same principle, Azarang and Ghassemian proposed a stacked auto-encoder structure to generate high-resolution multispectral images and achieved better performance than conventional methods, but their method keeps the framework of traditional approaches and uses a convolutional neural network only in part. Influenced by the super-resolution field, Masi Giuseppe et al. proposed the three-layer convolutional network PNN by improving SRCNN. PNN achieves excellent performance, but being built from only three convolutional layers it is too simple and needs further improvement. Yuan Qiangqiang et al. proposed MSDCNN, a multi-scale, multi-depth, dual-branch convolutional neural network. Thanks to its multi-scale, multi-depth structure, MSDCNN has complex nonlinear mapping capability and can handle objects of different scales acquired by various sensors.
However, like PNN, MSDCNN directly concatenates the multispectral and panchromatic images and feeds them into the network, so inaccurate registration between the two images greatly degrades the result. Recently, Liu Xiangyu et al. proposed a network architecture named TFNet. Rather than concatenating the panchromatic and multispectral images directly, TFNet first extracts image features and fuses the images indirectly by fusing those features, so the whole network consists of three modules: feature extraction, feature fusion and image reconstruction. First, the feature extraction network extracts the panchromatic and multispectral features with two sets of three convolutional layers, one per image. The panchromatic and multispectral features are concatenated and fed into the feature fusion network, which fuses them through three convolutional layers into a more compact representation. Finally, the image reconstruction sub-network reconstructs the high-spatial-resolution multispectral image through 11 convolutional layers. Features of different depths carry different meanings, but these networks fuse features at only one depth and do not fully exploit fusion between features of multiple levels and different depths. In 2019, Zhang Yongjun et al. proposed a new end-to-end bidirectional pyramid network (BDPN) for sharpening. BDPN can be described as a bidirectional pyramid that processes multispectral and panchromatic images in two separate branches; it was the first use of a bidirectional network and has a certain novelty.
However, the two paths of the network are unbalanced: the branch processing the multispectral image contains only two convolutional layers, in sharp contrast to the branch processing the panchromatic image, which contains 20.
Disclosure of Invention
Technical problem to be solved
In order to overcome the spatial and spectral distortion caused by existing multispectral-panchromatic fusion methods being one-sided in aspects such as the type of convolutional network, the hierarchy at which features are fused, and the depth of the fused features, the invention provides a multispectral image panchromatic sharpening method based on a dual-channel convolutional network and a multi-level fusion strategy. The proposed method processes the two different kinds of data (2D panchromatic features and 3D multispectral features) with two different convolutional neural networks (2D and 3D), and then realizes a multi-level fusion strategy with a hierarchical CLSTM. Because the single-band panchromatic image is 2D data, a 2D convolutional neural network is used to process its spatial information; the 3D multispectral data is likewise processed with a 3D network. The hierarchical CLSTM selectively memorizes features of different depths, and the features automatically selected by the CLSTM are fused into multiple levels of the dual-channel network, so that fusion between features of multiple levels and different depths is realized. Finally, a deconvolution network reconstructs the multispectral image with high spatial resolution.
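The contrast between the 2D path and the 3D path can be made concrete with a small shape sketch (array sizes and kernel sizes below are illustrative assumptions, not the patent's actual layer configuration): a 2D kernel slides only over the single-band panchromatic plane, while a 3D kernel also slides along the spectral axis of the multispectral cube, so neighbouring bands enter each 3D feature.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

pan = np.random.rand(64, 64)     # single-band panchromatic plane (4H x 4W)
ms = np.random.rand(8, 16, 16)   # multispectral cube (S x H x W)

w2d = np.random.rand(3, 3)       # 2D kernel: spatial support only
w3d = np.random.rand(3, 3, 3)    # 3D kernel: also spans neighbouring bands

# 'valid' correlations via sliding windows (no padding, for shape clarity)
f2d = np.einsum('hwij,ij->hw', sliding_window_view(pan, (3, 3)), w2d)
f3d = np.einsum('shwijk,ijk->shw', sliding_window_view(ms, (3, 3, 3)), w3d)

print(f2d.shape)  # (62, 62): spatial-only support
print(f3d.shape)  # (6, 14, 14): the kernel also mixes neighbouring bands
```

The 3D path therefore produces features indexed by band as well as position, which is why the hierarchical CLSTM later needs a `view()` step to fold the band axis away before passing features to the 2D path.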
Technical scheme
A multispectral image panchromatic sharpening method based on a dual-channel convolution network and a hierarchical CLSTM is characterized by comprising the following steps:
1. Training of fusion models
Inputting: image block set F_0(MS, PAN), where the original multispectral image MS has size H × W × S and the panchromatic image PAN has size 4H × 4W × 1; H, W and S denote the height, width and number of spectral bands of the original multispectral image, respectively;
(1) constructing a simulated training dataset
Step 11: down-sample the original multispectral image MS to obtain a simulated multispectral image block MS↓ of size H/4 × W/4 × S;
Step 12: up-sample MS↓ by bilinear interpolation to obtain a multispectral image M̃S with the same height and width as MS;
Step 13: down-sample the original panchromatic image PAN to obtain a panchromatic image PAN↓ of size H × W × 1, whose height and width are the same as those of the simulated up-sampled multispectral image M̃S;
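Steps 11 to 13 can be sketched in NumPy as follows. The block-average down-sampling filter and the half-pixel bilinear grid are assumptions; the patent specifies only the 4× resolution ratio and bilinear interpolation for the up-sampling step.

```python
import numpy as np

def downsample4(img):
    """4x spatial down-sampling by block averaging (a stand-in for the
    patent's unspecified down-sampling filter). img: (H, W, C)."""
    H, W = img.shape[0] // 4 * 4, img.shape[1] // 4 * 4
    img = img[:H, :W]
    return img.reshape(H // 4, 4, W // 4, 4, -1).mean(axis=(1, 3))

def bilinear_upsample4(img):
    """4x bilinear up-sampling on a half-pixel-aligned grid. img: (H, W, C)."""
    H, W, _ = img.shape
    ys = (np.arange(4 * H) + 0.5) / 4 - 0.5
    xs = (np.arange(4 * W) + 0.5) / 4 - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, H - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, W - 1)
    y1 = np.clip(y0 + 1, 0, H - 1)
    x1 = np.clip(x0 + 1, 0, W - 1)
    wy = np.clip(ys - y0, 0, 1)[:, None, None]
    wx = np.clip(xs - x0, 0, 1)[None, :, None]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

# Simulated (reduced-resolution) training pair from MS (H x W x S), PAN (4H x 4W x 1)
H, W, S = 16, 16, 4
MS = np.random.rand(H, W, S)
PAN = np.random.rand(4 * H, 4 * W, 1)

MS_lo = downsample4(MS)               # step 11: H/4 x W/4 x S
MS_lo_up = bilinear_upsample4(MS_lo)  # step 12: H x W x S, aligned with target MS
PAN_lo = downsample4(PAN)             # step 13: H x W x 1, aligned with MS_lo_up
```

The original MS then serves as the training target for the network fed with (MS_lo_up, PAN_lo).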
(2) constructing a dual-path network
Step 21: construct the spatial feature path. First build a stem layer: the features of the panchromatic image are extracted in the stem layer using one 2D convolutional layer and one PReLU activation layer, as shown in equation (1):

F_a^0 = PReLU(W_a^0 * PAN↓ + b_a^0)   (1)

where F_a^0 denotes the layer-0 feature extracted by the spatial feature path; F is an abbreviation of feature; the superscript 0 denotes the 0th layer; the subscript a denotes the spatial feature path; W is an abbreviation of weight, the parameter of the convolutional layer; b is a bias term;
Then construct T residual blocks, each containing two 2D convolutional layers and two PReLU activation functions. The input of the 1st residual block is F_a^0; the input of residual block t (t = 2, …, T) is F_a^{t-1} together with F_{CLSTM,a}^{t-1}, as shown in equations (2) and (3):

F_a^1 = F_a^0 + PReLU(W_a^{1,2} * PReLU(W_a^{1,1} * F_a^0 + b_a^{1,1}) + b_a^{1,2})   (2)
F_a^t = F_a^{t-1} + PReLU(W_a^{t,2} * PReLU(W_a^{t,1} * (F_a^{t-1} + F_{CLSTM,a}^{t-1}) + b_a^{t,1}) + b_a^{t,2})   (3)

In equation (2), the subscript of W_a^{1,2} has the same meaning as above; the superscript "1,2" gives the layer indices, where 1 denotes the 1st layer of the spatial feature path (i.e., the 1st residual block) and 2 denotes the 2nd convolutional layer of that block; the indices of b_a^{1,2} follow those of W_a^{1,2}. In equation (3), the superscript t denotes the layer index along the path, with T the maximum number of layers; W_a^{t,1} and b_a^{t,1} are indexed in the same way as W_a^{1,2}; the subscript CLSTM of F_{CLSTM,a}^{t-1} indicates that the feature is the output of the (t−1)-th layer CLSTM, and a indicates that it is the feature output to the spatial feature path;
step 22: construct the spectral feature path. The spectral feature path is similar to the spatial feature path and consists of one 3D stem layer and T 3D residual blocks. In the 3D stem layer, one 3D convolutional layer and one PReLU activation layer first extract the features of the multispectral image, as shown in equation (4):

F_e^0 = PReLU(W_e^0 * M̃S + b_e^0)   (4)

where F_e^0 denotes the layer-0 feature extracted by the spectral feature path; 0 denotes the 0th layer; e denotes the spectral feature path;
Then construct the 3D residual blocks: each 3D residual block contains two 3D convolutional layers and two PReLU activation functions. The input of the 1st 3D residual block is F_e^0; the input of block t (t = 2, …, T) is F_e^{t-1} together with F_{CLSTM,e}^{t-1}, as shown in equations (5) and (6):

F_e^1 = F_e^0 + PReLU(W_e^{1,2} * PReLU(W_e^{1,1} * F_e^0 + b_e^{1,1}) + b_e^{1,2})   (5)
F_e^t = F_e^{t-1} + PReLU(W_e^{t,2} * PReLU(W_e^{t,1} * (F_e^{t-1} + F_{CLSTM,e}^{t-1}) + b_e^{t,1}) + b_e^{t,2})   (6)

In equation (5), the indices of W_e^{1,2} and b_e^{1,2} have the same meanings as for W_a^{1,2}, with 1 denoting the 1st layer of the spectral feature path and 2 the 2nd convolutional layer of that layer. In equation (6), the superscript t denotes the layer index along the path; the subscript CLSTM of F_{CLSTM,e}^{t-1} indicates that the feature is the output of the (t−1)-th layer CLSTM, and e that it is the feature output to the spectral feature path;
(3) building hierarchical CLSTM networks
Step 31: construct the forget gate f_t, which forgets part of the state information. The CLSTM network has T layers, all of which share parameters, and the layers are numbered from 1. The three gates in the network are the forget gate, the input gate and the output gate. The forget gate is constructed as shown in equation (7):

f_t = σ(W_f * [H_{t-1}, F_a^t, F_e^t] + b_f)   (7)

where t denotes the layer index; C_{t-1} denotes the state information of the previous layer, initialized to 0; H_{t-1} denotes the history information of the previous layer, also initialized to 0; F_a^t and F_e^t are the t-th layer features of the spatial feature path and the spectral feature path, respectively; σ is the sigmoid function; W denotes a weight, shared across the T layers; the subscript f indicates that the parameter belongs to the forget gate; b is a bias term, shared in the same way;
step 32: construct the input gate i_t, which selects among the input features, as shown in equation (8):

i_t = σ(W_i * [H_{t-1}, F_a^t, F_e^t] + b_i)   (8)

where the subscript i indicates that the parameter belongs to the input gate;
step 33: construct the output gate o_t, as shown in equation (9):

o_t = σ(W_o * [H_{t-1}, F_a^t, F_e^t] + b_o)   (9)

where the subscript o indicates that the parameter belongs to the output gate;
step 34: update the state information, as shown in equation (10):

C_t = f_t ⊙ C_{t-1} + i_t ⊙ tanh(W_c * [H_{t-1}, F_a^t, F_e^t] + b_c)   (10)

where the subscript c indicates that the parameter belongs to the state-update procedure; C_t is the updated state feature; tanh is an activation function; ⊙ denotes element-wise multiplication;
step 35: extract the output feature H_t from the state information in combination with the output gate, as shown in equation (11):

H_t = o_t ⊙ tanh(C_t)   (11)

step 36: pass the output information to the spectral feature path, as shown in equation (12):

F_{CLSTM,e}^t = H_t   (12)

step 37: convert the 3D output information into the 2D form used by the spatial feature path and pass it to the spatial feature path, as shown in equation (13):

F_{CLSTM,a}^t = view(H_t)   (13)

where the view() function splices the features at the same spatial location (folding the spectral dimension into the channel dimension);
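One layer of the hierarchical CLSTM, following equations (7) through (13) as reconstructed above, can be sketched in NumPy. Channels-last 2D tensors stand in for the patent's (unspecified) 3D feature layout, and the kernel size and channel counts are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv2d_same(x, w, b):
    """'Same' 2D convolution; x: (H, W, Cin), w: (k, k, Cin, Cout)."""
    k, p = w.shape[0], w.shape[0] // 2
    xp = np.pad(x, ((p, p), (p, p), (0, 0)))
    H, W = x.shape[:2]
    out = np.zeros((H, W, w.shape[3]))
    for i in range(k):
        for j in range(k):
            out += np.einsum('hwc,co->hwo', xp[i:i + H, j:j + W], w[i, j])
    return out + b

def clstm_layer(H_prev, C_prev, F_a, F_e, p):
    """One layer of the hierarchical CLSTM: every gate sees the
    concatenation [H_{t-1}, F_a^t, F_e^t]; the parameter dict p is
    shared by all T layers, as the patent states."""
    x = np.concatenate([H_prev, F_a, F_e], axis=-1)
    f = sigmoid(conv2d_same(x, p['Wf'], p['bf']))                    # eq. (7)  forget gate
    i = sigmoid(conv2d_same(x, p['Wi'], p['bi']))                    # eq. (8)  input gate
    o = sigmoid(conv2d_same(x, p['Wo'], p['bo']))                    # eq. (9)  output gate
    C = f * C_prev + i * np.tanh(conv2d_same(x, p['Wc'], p['bc']))   # eq. (10) state update
    H = o * np.tanh(C)                                               # eq. (11) output feature
    return H, C

rng = np.random.default_rng(0)
ch, k = 4, 3
params = {g: 0.1 * rng.standard_normal((k, k, 3 * ch, ch)) for g in ('Wf', 'Wi', 'Wo', 'Wc')}
params.update({b: np.zeros(ch) for b in ('bf', 'bi', 'bo', 'bc')})

H0 = C0 = np.zeros((8, 8, ch))           # H and C initialized to 0, per the patent
F_a = rng.standard_normal((8, 8, ch))    # t-th layer spatial-path feature
F_e = rng.standard_normal((8, 8, ch))    # t-th layer spectral-path feature
H1, C1 = clstm_layer(H0, C0, F_a, F_e, params)

F_clstm_e = H1                           # eq. (12): passed to the spectral path
F_clstm_a = H1.reshape(8, 8, -1)         # eq. (13): view() folds bands into channels
```

Because o is a sigmoid and tanh is bounded, every entry of the output feature H lies strictly inside (−1, 1), which keeps the CLSTM's contribution to the residual blocks well scaled.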
(4) Building reconstruction modules
Step 41: the high-spatial-resolution multispectral image is finally generated by a reconstruction module, which consists of one deconvolution layer, as shown in equation (14):

MS_R = W_r ⊛^T F^T + b_r   (14)

where ⊛^T denotes deconvolution (transposed convolution) and F^T denotes the fused feature of the final layer T;
Outputting: high-spatial-resolution multispectral image MS_R;
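A deconvolution (transposed convolution) layer can be expressed as zero-insertion followed by a fully padded convolution with the flipped kernel. The sketch below is generic: the patent does not give the reconstruction layer's kernel size or stride, so the values here are assumptions.

```python
import numpy as np

def conv2d_valid(x, w):
    """Single-channel 'valid' cross-correlation."""
    k = w.shape[0]
    H, W = x.shape
    out = np.zeros((H - k + 1, W - k + 1))
    for i in range(k):
        for j in range(k):
            out += w[i, j] * x[i:i + H - k + 1, j:j + W - k + 1]
    return out

def deconv2d(x, w, stride=2):
    """Transposed convolution: insert zeros between samples, pad fully,
    then convolve with the flipped kernel."""
    k = w.shape[0]
    H, W = x.shape
    up = np.zeros((H * stride, W * stride))
    up[::stride, ::stride] = x                # zero-insertion up-sampling
    up = np.pad(up, k - 1)                    # 'full' padding
    return conv2d_valid(up, w[::-1, ::-1])    # flipped kernel -> transposed conv

out = deconv2d(np.ones((4, 4)), np.ones((3, 3)), stride=2)  # -> shape (10, 10)
```

With full padding every input sample meets every kernel tap exactly once, so the output sum equals the input sum times the kernel sum, a quick sanity check on the implementation.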
(5) Back propagation tuning parameters
Step 51: construct the loss function Loss, as shown in equation (15):

Loss = (1/S) Σ_{s=1}^{S} ||MS_{R,s} − MS_s||_1 + λ ||W||_2   (15)

where S denotes the number of simulated training image pairs; ||·||_1 denotes the L1 norm; ||·||_2 denotes the L2 norm; λ is the parameter balancing the error term ||MS_{R,s} − MS_s||_1 and the regularization term ||W||_2; s indexes the image pairs;
step 52: compute the optimal panchromatic sharpening network parameters {W, b} with the Adam optimization algorithm;
Outputting: the trained panchromatic sharpening network;
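Equation (15) can be written down directly; only the L1-error-plus-L2-weight-regularization form is fixed by the patent, so the λ value and the per-pair reduction below are illustrative assumptions.

```python
import numpy as np

def pansharpen_loss(MS_R, MS, weights, lam=1e-4):
    """Eq. (15): mean L1 reconstruction error over the S simulated pairs
    plus lambda times the L2 norm of the network weights.
    MS_R, MS: lists of S arrays; weights: list of weight arrays."""
    S = len(MS)
    l1 = sum(np.abs(MS_R[s] - MS[s]).sum() for s in range(S)) / S
    l2 = np.sqrt(sum((w ** 2).sum() for w in weights))
    return l1 + lam * l2

# Known values: L1 error = 4, ||W||_2 = sqrt(3^2 + 4^2) = 5, lam = 1 -> loss = 9
demo = pansharpen_loss([np.ones((2, 2))], [np.zeros((2, 2))],
                       [np.array([3.0, 4.0])], lam=1.0)
```

Step 52 then minimizes this loss over {W, b} with the Adam optimizer.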
2. fusion of full-color and multi-spectral images
Inputting: image block set F_0(MS, PAN), where the multispectral image MS has size H × W × S and the panchromatic image PAN has size 4H × 4W × 1; H, W and S denote the height, width and number of channels of the multispectral image, respectively;
(1) building a data set
Up-sample the multispectral image MS by bilinear interpolation to obtain a multispectral image M̃S with the same height and width as PAN;
(2) Constructing a dual-path network
Step 61: construct the spatial feature path. First build the stem layer: the features of the panchromatic image are extracted in the stem layer using one 2D convolutional layer and one PReLU activation layer, as shown in equation (16):

F_a^0 = PReLU(W_a^0 * PAN + b_a^0)   (16)

Then construct the T residual blocks: the input of the 1st residual block is F_a^0, and the input of residual block t is F_a^{t-1} together with F_{CLSTM,a}^{t-1}, as shown in equations (17) and (18):

F_a^1 = F_a^0 + PReLU(W_a^{1,2} * PReLU(W_a^{1,1} * F_a^0 + b_a^{1,1}) + b_a^{1,2})   (17)
F_a^t = F_a^{t-1} + PReLU(W_a^{t,2} * PReLU(W_a^{t,1} * (F_a^{t-1} + F_{CLSTM,a}^{t-1}) + b_a^{t,1}) + b_a^{t,2})   (18)

where F_{CLSTM,a}^{t-1} indicates the feature output by the (t−1)-th layer CLSTM to the spatial feature path a;
step 62: construct the spectral feature path. The spectral feature path is similar to the spatial feature path and consists of one 3D stem layer and T 3D residual blocks, as shown in equation (19):

F_e^0 = PReLU(W_e^0 * M̃S + b_e^0)   (19)

where F_e^0 denotes the layer-0 feature extracted by the spectral feature path and e denotes the spectral feature path. The input of the 1st 3D residual block is F_e^0, and the input of block t is F_e^{t-1} together with F_{CLSTM,e}^{t-1}, as shown in equations (20) and (21):

F_e^1 = F_e^0 + PReLU(W_e^{1,2} * PReLU(W_e^{1,1} * F_e^0 + b_e^{1,1}) + b_e^{1,2})   (20)
F_e^t = F_e^{t-1} + PReLU(W_e^{t,2} * PReLU(W_e^{t,1} * (F_e^{t-1} + F_{CLSTM,e}^{t-1}) + b_e^{t,1}) + b_e^{t,2})   (21)
(3) building hierarchical CLSTM networks
Step 71: construct the forget gate f_t, which forgets part of the state information. The CLSTM network has T layers, all of which share parameters, and the layers are numbered from 1. The three gates in the network are the forget gate, the input gate and the output gate. The forget gate is constructed as shown in equation (22):

f_t = σ(W_f * [H_{t-1}, F_a^t, F_e^t] + b_f)   (22)

where t denotes the layer index; C_{t-1} denotes the state information of the previous layer, initialized to 0; H_{t-1} denotes the history information of the previous layer, also initialized to 0; F_a^t and F_e^t are the t-th layer features of the spatial feature path and the spectral feature path, respectively; W denotes a weight, shared across the T layers; the subscript f indicates that the parameter belongs to the forget gate; b is a bias term, shared in the same way;
step 72: construct the input gate i_t, which selects among the input features, as shown in equation (23):

i_t = σ(W_i * [H_{t-1}, F_a^t, F_e^t] + b_i)   (23)

where the subscript i indicates that the parameter belongs to the input gate;
step 73: construct the output gate o_t, as shown in equation (24):

o_t = σ(W_o * [H_{t-1}, F_a^t, F_e^t] + b_o)   (24)

where the subscript o indicates that the parameter belongs to the output gate;
step 74: update the state information, as shown in equation (25):

C_t = f_t ⊙ C_{t-1} + i_t ⊙ tanh(W_c * [H_{t-1}, F_a^t, F_e^t] + b_c)   (25)

where the subscript c indicates that the parameter belongs to the state-update procedure; C_t is the updated state feature; tanh is an activation function;
step 75: extract the output feature H_t from the state information in combination with the output gate, as shown in equation (26):

H_t = o_t ⊙ tanh(C_t)   (26)

step 76: pass the output information to the spectral feature path, as shown in equation (27):

F_{CLSTM,e}^t = H_t   (27)

step 77: convert the 3D output information into the 2D form used by the spatial feature path and pass it to the spatial feature path, as shown in equation (28):

F_{CLSTM,a}^t = view(H_t)   (28)

where the view() function splices the features at the same spatial location;
(4) Building reconstruction modules
Step 81: the high-spatial-resolution multispectral image is finally generated by a reconstruction module, which consists of one deconvolution layer, as shown in equation (29):

MS_R = W_r ⊛^T F^T + b_r   (29)
(5) forward propagation to obtain multispectral images
Step 91: input the panchromatic image PAN and the up-sampled multispectral image M̃S into the panchromatic sharpening network, and obtain the forward-propagation result MS_R;
Outputting: high-spatial-resolution multispectral image MS_R.
Advantageous effects
The multispectral image panchromatic sharpening method based on a dual-channel convolutional network and hierarchical CLSTM provided by the invention makes full use of the abundant spectral information in the multispectral image and the spatial detail information in the panchromatic image. The algorithm extracts and fuses the multispectral and panchromatic features with 3D and 2D networks, respectively, and selects features of different depths in combination with the hierarchical CLSTM, so that the visual quality of the reconstructed image is effectively improved and structural features of the image such as edges and textures are reconstructed more faithfully in the spatial domain. By processing 2D spatial information and 3D multispectral information with 2D and 3D networks and fusing features of different depths at multiple levels, the method reconstructs a multispectral image with high spatial resolution well.
Drawings
FIG. 1 is a flow chart of the present invention.
Fig. 2 shows a specific network structure of the present invention.
Fig. 3 shows a specific structure of CLSTM.
Detailed Description
The invention will now be further described with reference to the following examples and drawings:
The invention provides a multispectral image panchromatic sharpening method based on a dual-channel convolutional network and a hierarchical CLSTM. The method comprises two parts: model training and multispectral image panchromatic sharpening. In the model training stage, the original sharp multispectral and panchromatic images are first down-sampled to obtain simulated training image pairs; secondly, a dual-channel convolutional network extracts and fuses the features of the panchromatic and multispectral images, and a hierarchical CLSTM realizes fusion between convolutional features of multiple levels and different depths; then a deconvolution network reconstructs a multispectral image with high spatial resolution from the fused features; finally, the model parameters are tuned with the Adam algorithm. In the panchromatic sharpening stage, the trained dual-channel convolutional network and hierarchical CLSTM extract and fuse the features of the panchromatic and multispectral images. The specific network structure is shown in fig. 2: the convolutional network is responsible for extracting the features of the multispectral and panchromatic images and fusing its own features with those selected by the CLSTM, while the CLSTM selects and memorizes features at different depths at multiple levels, so that fusion between features of multiple levels and different depths is realized.
The specific implementation flow is as follows:
1. training of fusion models
Inputting: image block set F_0(MS, PAN), where the original multispectral image MS has size H × W × S and the panchromatic image PAN has size 4H × 4W × 1; H, W and S denote the height, width and number of spectral bands of the original multispectral image, respectively.
(1) Constructing a simulated training dataset
Step 1: down-sample the original multispectral image MS to obtain a simulated multispectral image block MS↓ of size H/4 × W/4 × S.
Step 2: up-sample MS↓ by bilinear interpolation to obtain a multispectral image M̃S with the same height and width as MS.
Step 3: down-sample the original panchromatic image PAN to obtain a panchromatic image PAN↓ whose height and width are the same as those of the simulated up-sampled multispectral image M̃S.
(2) Constructing a dual-path network
Step 1: construct the spatial feature path. The spatial feature path processes the panchromatic image information and is shown in the upper box in the middle of fig. 2. It comprises one stem layer and T 2D residual blocks. In the stem layer, one 2D convolutional layer and one PReLU activation layer first extract the features of the panchromatic image, as shown in equation (1); in the equation, F_a^0 denotes the layer-0 feature extracted by the spatial feature path, F is an abbreviation of feature, 0 denotes the 0th layer, and a denotes the spatial feature path. Then T 2D residual blocks are constructed. The input of the 1st 2D residual block is F_a^0; the input of 2D residual block t is F_a^{t-1} together with F_{CLSTM,a}^{t-1}, as shown in equations (2) and (3). In equation (3), the superscript t denotes the layer index along the path, with T the maximum number of layers; the subscript CLSTM of F_{CLSTM,a}^{t-1} indicates that the feature is the output of the (t−1)-th layer CLSTM, and a indicates that it is the feature output to the spatial feature path.
Step 2: construct the spectral feature path. The spectral feature path processes the multispectral information and is shown in the lower box in the middle of fig. 2. It is similar to the spatial feature path and consists of one 3D stem layer and T 3D residual blocks. First, one 3D convolutional layer and one PReLU activation layer in the 3D stem layer extract the features of the multispectral image, as shown in equation (4); in the equation, F_e^0 denotes the layer-0 feature extracted by the spectral feature path, 0 denotes the 0th layer, and e denotes the spectral feature path. Then the 3D residual blocks are constructed: the input of the 1st 3D residual block is F_e^0, and the inputs of the remaining blocks are F_e^{t-1} and F_{CLSTM,e}^{t-1}, as shown in equations (5) and (6).
(3) building hierarchical CLSTM networks
Step 1: construction of forgetting Gate ftThe door forgets the state information; the hierarchical CLSTM network has a T layer, all CLSTMs share parameters, and the structure of the hierarchical CLSTM network is shown in FIG. 3; the number of layers of the CLSTM network is calculated from 1; three gates in the network are a forgetting gate, an input gate and an output gate respectively; the construction of the forgetting gate is shown in equation (7); wherein t represents the number of layers; c t-1State information representing the previous layer, the initialization of the feature being 0; ht-1Representing the history information of the previous layer, and the initialization of the characteristic is also 0;andfeatures of the t-th layer in the spatial feature path and the spectral feature path, respectively; w represents a weight, and this parameter is shared in the T layer; subscript f indicates that the parameter is a forgetting gate parameter; b is an offset term, the parameters are shared as well;
Step 2: build input Gate itThe gate selects the input feature. As shown in equation (8), where the subscript i indicates that the parameter is a parameter of the input gate;
Step 3: constructing the output gate o_t, as shown in equation (9), where the subscript o indicates an output-gate parameter.
Step 4: updating the state information, as shown in equation (10), where the subscript c indicates a parameter of the state-update procedure; C_t is the updated state feature; tanh is an activation function.
Step 5: extracting the output feature H_t from the state information in combination with the output gate, as shown in equation (11).
Step 6: passing the output information to the spectral feature path, as shown in equation (12).
Step 7: converting the 3D output information into the 2D data used by the spatial feature path and passing it to the spatial feature path, as shown in equation (13); the view() function concatenates the features at the same spatial position.
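Steps 1 through 5 describe a standard convolutional LSTM cell. The sketch below is hedged: the patent does not fully specify here how C_{t-1}, H_{t-1}, F_a^t, and F_e^t enter each gate, so this version concatenates the path features with H_{t-1} along the channel axis and computes all four gate pre-activations with a single convolution; channel counts and kernel size are assumptions.

```python
import torch
import torch.nn as nn

class CLSTMCell(nn.Module):
    """Convolutional LSTM cell shared across the T layers (cf. eqs. (7)-(11))."""
    def __init__(self, in_ch, hid_ch):
        super().__init__()
        # One convolution produces all four gate pre-activations at once.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, 3, padding=1)

    def forward(self, x, h_prev, c_prev):
        z = self.gates(torch.cat([x, h_prev], dim=1))
        f, i, o, g = torch.chunk(z, 4, dim=1)
        f = torch.sigmoid(f)                 # forget gate f_t (eq. (7))
        i = torch.sigmoid(i)                 # input gate i_t (eq. (8))
        o = torch.sigmoid(o)                 # output gate o_t (eq. (9))
        c = f * c_prev + i * torch.tanh(g)   # state update C_t (eq. (10))
        h = o * torch.tanh(c)                # output feature H_t (eq. (11))
        return h, c

cell = CLSTMCell(in_ch=64, hid_ch=32)   # 64 = assumed concatenated F_a^t and F_e^t channels
x = torch.randn(1, 64, 64, 64)
h = torch.zeros(1, 32, 64, 64)          # H_0 initialized to 0
c = torch.zeros(1, 32, 64, 64)          # C_0 initialized to 0
for t in range(3):                      # the same cell (shared parameters) is applied per layer
    h, c = cell(x, h, c)
print(h.shape)                          # torch.Size([1, 32, 64, 64])
```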
(4) Building reconstruction modules
Step 1: as shown in FIG. 2, the high-spatial-resolution multispectral image is finally generated by the reconstruction module; this module consists of one deconvolution layer, as shown in equation (14).
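In PyTorch a single deconvolution layer corresponds to `nn.ConvTranspose2d`. A sketch of the reconstruction step follows; the channel counts, kernel size, stride, and the assumption that the fused feature already has PAN resolution are all illustrative, not taken from the patent.

```python
import torch
import torch.nn as nn

# Reconstruction module (cf. eq. (14)): one transposed convolution mapping the
# fused feature maps back to the S spectral bands (here S = 4 is assumed).
recon = nn.ConvTranspose2d(in_channels=32, out_channels=4, kernel_size=3, padding=1)
fused = torch.randn(1, 32, 64, 64)   # assumed fused feature at PAN resolution
ms_r = recon(fused)                  # high-resolution multispectral output MS_R
print(ms_r.shape)                    # torch.Size([1, 4, 64, 64])
```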
(5) Back propagation tuning parameters
Step 1: constructing the loss function Loss, as shown in equation (15); S denotes the number of simulated training image pairs; ||·||_1 denotes the L1 norm; ||·||_2 denotes the L2 norm; λ is the parameter balancing the error term ||MS_{R,s} - MS_s||_1 and the regularization term ||W||_2; s denotes the index of an image pair.
Step 2: computing the optimal network parameters {W, b} using the Adam optimization algorithm.
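A sketch of the loss in equation (15), an L1 error term over the S simulated pairs plus a λ-weighted L2 penalty on the weights W, to be minimized with Adam as in step 2. The value of λ and the per-pixel averaging are assumptions about details the text does not state.

```python
import torch

def pansharpen_loss(ms_pred, ms_ref, weights, lam=1e-4):
    """Cf. eq. (15): sum of L1 errors over image pairs plus lam * ||W||_2.
    lam and the mean reduction over pixels are illustrative assumptions."""
    error = sum(torch.abs(p - r).mean() for p, r in zip(ms_pred, ms_ref))
    reg = sum(w.norm(2) for w in weights)
    return error + lam * reg

pred = [torch.rand(4, 64, 64) for _ in range(2)]   # S = 2 simulated pairs
ref  = [torch.rand(4, 64, 64) for _ in range(2)]
w    = [torch.randn(32, 1, 3, 3)]                  # stand-in for network weights
loss = pansharpen_loss(pred, ref, w)
# torch.optim.Adam(parameters, lr=...) would then be used to minimize this loss.
print(loss.dim() == 0)                             # True: a scalar loss
```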
Output: the trained network.
2. Fusion of panchromatic and multispectral images
Input: image block set F_0 = (MS, PAN), where the size of MS is H × W × S and the size of PAN is 4H × 4W × 1; H, W, and S denote the height, width, and number of channels of the multispectral image, respectively.
(1) Building a data set
Bilinear interpolation up-sampling is applied to the multispectral image MS to obtain an up-sampled multispectral image with the same height and width as PAN.
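The bilinear up-sampling by the PAN/MS resolution ratio (a factor of 4, from H × W to 4H × 4W) can be done with `torch.nn.functional.interpolate`; a sketch, with the band count S = 4 assumed for illustration:

```python
import torch
import torch.nn.functional as F

ms = torch.rand(1, 4, 16, 16)   # (batch, bands S=4, H, W)
# Bilinear up-sampling to PAN resolution (4H x 4W).
ms_up = F.interpolate(ms, scale_factor=4, mode="bilinear", align_corners=False)
print(ms_up.shape)              # torch.Size([1, 4, 64, 64])
```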
(2) Constructing a dual-path network
Step 1: constructing a spatial feature path. First, the stem layer is constructed: the features of the panchromatic image are extracted in the stem layer using one 2D convolutional layer and a PReLU activation layer, as shown in equation (16). Then T residual blocks are constructed: the input of the 1st residual block is F_a^0, and the inputs of the remaining residual blocks are F_a^{t-1} and H_{CLSTM,a}^{t-1}, as shown in equations (17) and (18); H_{CLSTM,a}^t indicates that the feature is output by the t-th layer CLSTM to the spatial feature path a.
Step 2: constructing a spectral feature path. The spectral feature path processes the multispectral information and is located in the light-cyan shaded part of FIG. 2. It is similar to the spatial feature path and consists of one 3D stem layer and T 3D residual blocks, as shown in equation (19). F_e^0 denotes the layer-0 feature extracted by the spectral feature path; e denotes the spectral feature path. The input of the 1st 3D residual block is F_e^0, and the inputs of the remaining residual blocks are F_e^{t-1} and H_{CLSTM,e}^{t-1}, as shown in equations (20) and (21).
(3) Building hierarchical CLSTM networks
Step 1: constructing the forget gate f_t, the gate that forgets state information. The hierarchical CLSTM network has T layers, all CLSTMs share parameters, and its structure is shown in FIG. 3; the layer count of the CLSTM network starts from 1. The three gates in the network are the forget gate, the input gate, and the output gate. The construction of the forget gate is shown in equation (22), where t denotes the layer number; C_{t-1} denotes the state information of the previous layer, initialized to 0; H_{t-1} denotes the history information of the previous layer, likewise initialized to 0; F_a^t and F_e^t are the features of the t-th layer in the spatial feature path and the spectral feature path, respectively; W denotes a weight, shared across the T layers; the subscript f indicates a forget-gate parameter; b is a bias term, likewise shared.
Step 2: constructing the input gate i_t, the gate that selects the input features, as shown in equation (23), where the subscript i indicates an input-gate parameter.
Step 3: constructing the output gate o_t, as shown in equation (24), where the subscript o indicates an output-gate parameter.
Step 4: updating the state information, as shown in equation (25), where the subscript c indicates a parameter of the state-update procedure; C_t is the updated state feature; tanh is an activation function.
Step 5: extracting the output feature H_t from the state information in combination with the output gate, as shown in equation (26).
Step 6: passing the output information to the spectral feature path, as shown in equation (27).
Step 7: converting the 3D output information into the 2D data used by the spatial feature path and passing it to the spatial feature path, as shown in equation (28); the view() function concatenates the features at the same spatial position.
(4) building reconstruction modules
Step 1: as shown in FIG. 2, the high-spatial-resolution multispectral image is finally generated by the reconstruction module; this module consists of one deconvolution layer, as shown in equation (29).
(5) Forward propagation to obtain multispectral images
Step 1: inputting the panchromatic image PAN and the up-sampled multispectral image into the network, and obtaining the result MS_R of the network's forward propagation;
Output: high-spatial-resolution multispectral image MS_R.
Claims (1)
1. A multispectral image panchromatic sharpening method based on a dual-channel convolution network and a hierarchical CLSTM is characterized by comprising the following steps:
1. training of fusion models
Input: image block set F_0 = (MS, PAN), where the original multispectral image MS is of size H × W × S and PAN is of size 4H × 4W × 1; H, W, and S denote the height, width, and number of spectral bands of the original multispectral image, respectively;
(1) constructing a simulated training dataset
Step 11: down-sampling the original multispectral image MS to obtain a simulated multispectral image block, the size of the image block being H/4 × W/4 × S;
Step 12: performing bilinear interpolation up-sampling on the down-sampled multispectral image to obtain a simulated up-sampled multispectral image with the same height and width as MS;
Step 13: down-sampling the original panchromatic image PAN to obtain a simulated panchromatic image with the same height and width as the simulated up-sampled multispectral image;
(2) constructing a dual-path network
Step 21: constructing a spatial feature path; first, the stem layer is constructed; the features of the panchromatic image are extracted in the stem layer using one 2D convolutional layer and a PReLU activation layer, as shown in equation (1):
wherein F_a^0 denotes the layer-0 feature extracted by the spatial feature path, F being an abbreviation of feature; 0 denotes the 0th layer; a denotes the spatial feature path; W is an abbreviation of weight and is a parameter of the convolutional layer; b is a bias term;
then T residual blocks are constructed, wherein each residual block comprises two 2D convolutional layers and two PReLU activation functions; the input of the 1st residual block is F_a^0, and the inputs of the remaining residual blocks are F_a^{t-1} and H_{CLSTM,a}^{t-1}, as shown in equations (2) and (3):
in equation (2), the subscripts have the same meanings as those of F_a^0; the superscripts 1 and 2 denote the layer numbers, where 1 denotes the 1st layer of the spatial feature path, i.e., the 1st residual block, and 2 denotes the 2nd convolutional layer of the residual block; in equation (3), the superscript t denotes the number of layers in the path, with the maximum number of layers denoted by T; the subscript CLSTM of H_{CLSTM,a}^t indicates that the feature is the output of the t-th layer CLSTM, and a indicates that the feature is output to the spatial feature path;
Step 22: constructing a spectral feature path: the spectral feature path is similar to the spatial feature path and consists of one 3D stem layer and T 3D residual blocks; in the 3D stem layer, a 3D convolutional layer and a PReLU activation layer first extract the features of the multispectral image, as shown in equation (4):
in the equation, F_e^0 denotes the layer-0 feature extracted by the spectral feature path; 0 denotes the 0th layer; e denotes the spectral feature path;
then the 3D residual blocks are constructed: each 3D residual block comprises two 3D convolutional layers and two PReLU activation functions; the input of the 1st 3D residual block is F_e^0, and the inputs of the remaining layers are F_e^{t-1} and H_{CLSTM,e}^{t-1}, as shown in equations (5) and (6):
in equation (5), the subscripts have the same meanings as those of F_e^0; the superscripts 1 and 2 denote the layer numbers, where 1 denotes the 1st layer of the spectral feature path and 2 denotes the 2nd convolutional layer of that layer; in equation (6), the superscript t denotes the number of layers in the path; the subscript CLSTM of H_{CLSTM,e}^t indicates that the feature is the output of the t-th layer CLSTM;
(3) building hierarchical CLSTM networks
Step 31: constructing the forget gate f_t, the gate that forgets state information; the CLSTM network has T layers, all CLSTMs share parameters, and the layer count of the CLSTM network starts from 1; the three gates in the network are the forget gate, the input gate, and the output gate; the construction of the forget gate is shown in equation (7):
wherein t denotes the layer number; C_{t-1} denotes the state information of the previous layer, initialized to 0; H_{t-1} denotes the history information of the previous layer, likewise initialized to 0; F_a^t and F_e^t are the features of the t-th layer in the spatial feature path and the spectral feature path, respectively; W denotes a weight, shared across the T layers; the subscript f indicates a forget-gate parameter; b is a bias term, likewise shared;
Step 32: constructing the input gate i_t, the gate that selects the input features, as shown in equation (8):
wherein the subscript i indicates that the parameter is a parameter of the input gate;
Step 33: constructing the output gate o_t, as shown in equation (9):
wherein the subscript o indicates that the parameter is a parameter of the output gate;
step 34: the state information is updated as shown in equation (10):
wherein the subscript c indicates a parameter of the state-update procedure; C_t is the updated state feature; tanh is an activation function;
Step 35: extracting the output feature H_t from the state information in combination with the output gate, as shown in equation (11):
Step 36: the output information is passed to the spectral feature path, as shown in equation (12):
Step 37: converting the 3D output information into the 2D data used by the spatial feature path and passing it to the spatial feature path, as shown in equation (13):
wherein the view() function concatenates the features at the same spatial position;
(4) building reconstruction modules
Step 41: the high spatial resolution multispectral image is finally generated by a reconstruction module; this module consists of a deconvolution layer, as shown in equation (14):
wherein the superscript T denotes a deconvolution;
Output: high-spatial-resolution multispectral image MS_R;
(5) Back propagation tuning parameters
Step 51: the Loss function Loss is constructed as shown in equation (15):
wherein S denotes the number of simulated training image pairs; ||·||_1 denotes the L1 norm; ||·||_2 denotes the L2 norm; λ is the parameter balancing the error term ||MS_{R,s} - MS_s||_1 and the regularization term ||W||_2; s denotes the index of an image pair;
Step 52: computing the optimal panchromatic-sharpening network parameters {W, b} using the Adam optimization algorithm;
Output: the trained panchromatic sharpening network;
2. Fusion of panchromatic and multispectral images
Input: image block set F_0 = (MS, PAN), where the size of MS is H × W × S and the size of PAN is 4H × 4W × 1; H, W, and S denote the height, width, and number of channels of the multispectral image, respectively;
(1) building a data set
Bilinear interpolation up-sampling is performed on the multispectral image MS to obtain an up-sampled multispectral image with the same height and width as PAN;
(2) Constructing a dual-path network
Step 61: constructing a spatial feature path; first, the stem layer is constructed; the features of the panchromatic image are extracted in the stem layer using one 2D convolutional layer and a PReLU activation layer, as shown in equation (16):
then T residual blocks are constructed: the input of the 1st residual block is F_a^0, and the inputs of the remaining residual blocks are F_a^{t-1} and H_{CLSTM,a}^{t-1}, as shown in equations (17) and (18):
wherein H_{CLSTM,a}^t indicates that the feature is output by the t-th layer CLSTM to the spatial feature path a;
Step 62: constructing a spectral feature path; the spectral feature path is similar to the spatial feature path and consists of one 3D stem layer and T 3D residual blocks, as shown in equation (19):
wherein F_e^0 denotes the layer-0 feature extracted by the spectral feature path; e denotes the spectral feature path; the input of the 1st 3D residual block is F_e^0, and the inputs of the remaining residual blocks are F_e^{t-1} and H_{CLSTM,e}^{t-1}, as shown in equations (20) and (21):
(3) building hierarchical CLSTM networks
Step 71: constructing the forget gate f_t, the gate that forgets state information; the CLSTM network has T layers, all CLSTMs share parameters, and the layer count of the CLSTM network starts from 1; the three gates in the network are the forget gate, the input gate, and the output gate; the construction of the forget gate is shown in equation (22):
wherein t denotes the layer number; C_{t-1} denotes the state information of the previous layer, initialized to 0; H_{t-1} denotes the history information of the previous layer, likewise initialized to 0; F_a^t and F_e^t are the features of the t-th layer in the spatial feature path and the spectral feature path, respectively; W denotes a weight, shared across the T layers; the subscript f indicates a forget-gate parameter; b is a bias term, likewise shared;
Step 72: constructing the input gate i_t, the gate that selects the input features, as shown in equation (23):
wherein the subscript i indicates that the parameter is a parameter of the input gate;
Step 73: constructing the output gate o_t, as shown in equation (24):
wherein the subscript o indicates that the parameter is a parameter of the output gate;
step 74: the state information is updated as shown in equation (25):
wherein the subscript c indicates a parameter of the state-update procedure; C_t is the updated state feature; tanh is an activation function;
Step 75: extracting the output feature H_t from the state information in combination with the output gate, as shown in equation (26):
Step 76: the output information is passed to the spectral feature path, as shown in equation (27):
Step 77: converting the 3D output information into the 2D data used by the spatial feature path and passing it to the spatial feature path, as shown in equation (28):
wherein the view() function concatenates the features at the same spatial position;
(4) building reconstruction modules
Step 81: the high spatial resolution multispectral image is finally generated by a reconstruction module; this module consists of a deconvolution layer, as shown in equation (29):
(5) forward propagation to obtain multispectral images
Step 91: inputting the panchromatic image PAN and the up-sampled multispectral image into the panchromatic sharpening network, and obtaining the result MS_R of the network's forward propagation;
Output: high-spatial-resolution multispectral image MS_R.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911009926.6A CN110930315B (en) | 2019-10-23 | 2019-10-23 | Multispectral image panchromatic sharpening method based on dual-channel convolution network and hierarchical CLSTM |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110930315A CN110930315A (en) | 2020-03-27 |
CN110930315B true CN110930315B (en) | 2022-02-11 |
Family
ID=69849322
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911009926.6A Active CN110930315B (en) | 2019-10-23 | 2019-10-23 | Multispectral image panchromatic sharpening method based on dual-channel convolution network and hierarchical CLSTM |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110930315B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112861774A (en) * | 2021-03-04 | 2021-05-28 | 山东产研卫星信息技术产业研究院有限公司 | Method and system for identifying ship target by using remote sensing image |
CN113139902A (en) * | 2021-04-23 | 2021-07-20 | 深圳大学 | Hyperspectral image super-resolution reconstruction method and device and electronic equipment |
CN113902650B (en) * | 2021-12-07 | 2022-04-12 | 南湖实验室 | Remote sensing image sharpening method based on parallel deep learning network architecture |
CN114581347B (en) * | 2022-01-24 | 2024-03-12 | 中国科学院空天信息创新研究院 | Optical remote sensing spatial spectrum fusion method, device, equipment and medium without reference image |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106611410A (en) * | 2016-11-29 | 2017-05-03 | 北京空间机电研究所 | Pansharpen fusion optimization method based on pyramid model |
CN107808131A (en) * | 2017-10-23 | 2018-03-16 | 华南理工大学 | Dynamic gesture identification method based on binary channel depth convolutional neural networks |
CN108038501A (en) * | 2017-12-08 | 2018-05-15 | 桂林电子科技大学 | Hyperspectral image classification method based on multi-modal compression bilinearity pond |
CN109102469A (en) * | 2018-07-04 | 2018-12-28 | 华南理工大学 | A kind of panchromatic sharpening method of remote sensing images based on convolutional neural networks |
CN109146831A (en) * | 2018-08-01 | 2019-01-04 | 武汉大学 | Remote sensing image fusion method and system based on double branch deep learning networks |
CN109272010A (en) * | 2018-07-27 | 2019-01-25 | 吉林大学 | Multi-scale Remote Sensing Image fusion method based on convolutional neural networks |
CN109767412A (en) * | 2018-12-28 | 2019-05-17 | 珠海大横琴科技发展有限公司 | A kind of remote sensing image fusing method and system based on depth residual error neural network |
CN109858540A (en) * | 2019-01-24 | 2019-06-07 | 青岛中科智康医疗科技有限公司 | A kind of medical image recognition system and method based on multi-modal fusion |
CN109886870A (en) * | 2018-12-29 | 2019-06-14 | 西北大学 | Remote sensing image fusion method based on binary channels neural network |
CN109886929A (en) * | 2019-01-24 | 2019-06-14 | 江苏大学 | A kind of MRI tumour voxel detection method based on convolutional neural networks |
CN110189282A (en) * | 2019-05-09 | 2019-08-30 | 西北工业大学 | Based on intensive and jump connection depth convolutional network multispectral and panchromatic image fusion method |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9949714B2 (en) * | 2015-07-29 | 2018-04-24 | Htc Corporation | Method, electronic apparatus, and computer readable medium of constructing classifier for disease detection |
Non-Patent Citations (7)
Title |
---|
A Multiscale and Multidepth Convolutional Neural Network for Remote Sensing Imagery Pan-Sharpening; Qiangqiang Yuan et al; IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing; Feb. 5, 2018; pp. 978-989 *
A new pansharpening method using multi resolution analysis framework and deep neural networks; Arian Azarang et al; 2017 3rd International Conference on Pattern Recognition and Image Analysis (IPRIA); Jul. 20, 2017; pp. 1-6 *
FORECAST-CLSTM: A New Convolutional LSTM Network for Cloudage Nowcasting; Chao Tan et al; 2018 IEEE Visual Communications and Image Processing (VCIP); Apr. 25, 2019; pp. 1-4 *
Remote Sensing Image Fusion Based on Two-stream Fusion Network; Xiangyu Liu et al; Preprint submitted to Information Fusion; Jan. 29, 2018; pp. 1-14 *
Semantic annotation for complex video street views based on 2D-3D multi-feature fusion and aggregated boosting decision forests; Xun Wang et al; Preprint submitted to Pattern Recognition; Jul. 7, 2016; pp. 1-33 *
Action recognition method using a two-stream model with multi-scale-input 3D convolution fusion; Song Lifei et al; Journal of Computer-Aided Design & Computer Graphics; Nov. 2018; vol. 30, no. 11; pp. 2074-2083 *
CLSTM short-text sentiment classification incorporating topics; Qin Feng et al; Journal of Anhui University of Technology (Natural Science); Jul. 2017; vol. 34, no. 3; pp. 289-295 *
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |