CN107103331B - Image fusion method based on deep learning - Google Patents

Image fusion method based on deep learning

Info

Publication number
CN107103331B
CN107103331B
Authority
CN
China
Prior art keywords: frequency, layer, convolution, image, low
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710211621.8A
Other languages
Chinese (zh)
Other versions
CN107103331A (en)
Inventor
蔺素珍
韩泽
郑瑶
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
North University of China
Original Assignee
North University of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by North University of China
Priority to CN201710211621.8A
Publication of CN107103331A
Application granted
Publication of CN107103331B
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/25 Fusion techniques
    • G06F18/251 Fusion techniques of input or preprocessed data
    • G06F18/253 Fusion techniques of extracted features

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an image fusion method, in particular to an image fusion method based on deep learning, which comprises the following steps: constructing a basic unit from convolutional layers on the model of an autoencoder; stacking a plurality of basic units and training them to obtain a deep stacked neural network, which is then adjusted end to end; decomposing each input image with the stacked network to obtain its high-frequency and low-frequency feature maps, then combining the high-frequency feature maps by selecting the larger local variance and the low-frequency feature maps by region matching degree; and putting the fused high-frequency and low-frequency feature maps back into the last layer of the network to obtain the final fused image. The method can decompose and reconstruct images adaptively; only one high-frequency feature map and one low-frequency feature map are needed during fusion; the number and type of filters, the number of decomposition levels and the number of filtering directions need not be defined manually; and the dependency of the fusion algorithm on prior knowledge is greatly reduced.

Description

Image fusion method based on deep learning
Technical Field
The invention relates to an image fusion method, in particular to an image fusion method based on deep learning.
Background
Image fusion is one of the key technologies of complex detection systems. Its aim is to synthesize several images, or a sequence of detection images, of the same scene into a single image whose information is more complete and comprehensive, to facilitate subsequent image analysis, target recognition and tracking. Multi-scale transform-domain fusion is the main method currently adopted, but a suitable multi-scale transform and fusion rule must usually be selected according to prior knowledge, which hinders engineering application.
Recently, deep learning has broken through the constraints of fixed models in many fields such as image classification and target tracking, and there has been some exploratory research in image fusion. For example, one study decomposed remote sensing images into high- and low-frequency images with a deep support-value learning network and fused them separately; another combined multi-scale transforms with low-frequency auto-encoding to fuse multi-focus images and obtained good fusion results. The former achieves adaptivity only for support-value filters and cannot adapt the filters for fusion tasks unsuited to support-value transforms; the latter makes the fusion rule adaptive, but the filters must still be defined manually. Overall, these studies show that the parameters learned by deep artificial neural networks are richer and more comprehensive, can adapt flexibly, and reduce the dependency of the method on prior knowledge; however, they do not solve the problem that the various multi-scale transforms used in image fusion still require the filter type, the number of decomposition levels, the number of directions and so on to be selected according to prior knowledge. In actual detection, such prior knowledge is very difficult to acquire.
Therefore, a new method is needed to address the difficulty of engineering image fusion methods that depend on prior knowledge when images are fused by multi-scale transforms and similar approaches.
Disclosure of Invention
The invention provides an image fusion method based on deep learning, aiming at solving the problem that the filter type, the number of decomposition levels, the number of directions and so on must be selected according to prior knowledge when images are fused by multi-scale transforms.
The invention is realized by adopting the following technical scheme: an image fusion method based on deep learning comprises the following steps:
constructing a basic unit of the deep stacked convolutional neural network, wherein the basic unit consists of a high-frequency subnet, a low-frequency subnet and a fusion convolutional layer, and the high-frequency and low-frequency subnets each consist of three convolutional layers: the first convolutional layer constrains the input information, the second combines the information, and the third merges the information into a feature map;
training the basic unit, stacking several trained basic units, and training the stack end to end to obtain the deep stacked neural network;
decomposing each input image with the stacked neural network, obtaining its high-frequency and low-frequency feature maps at the third convolutional layer of the last basic unit, fusing the high-frequency feature maps by local variance and the low-frequency feature maps by region matching degree;
and putting the fused high-frequency and low-frequency feature maps back into the fusion convolutional layer of the last basic unit to obtain the final fused image.
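To make the architecture concrete, the following is a minimal sketch of one basic unit in PyTorch. The layer widths (8 feature maps) and the 5 x 5 kernel size are illustrative assumptions; the patent fixes only the three-convolution structure of each subnet and the single fusion convolutional layer.

```python
import torch
import torch.nn as nn

class BasicUnit(nn.Module):
    """One basic unit: high-frequency subnet, low-frequency subnet,
    and a fusion convolutional layer that reconstructs the input."""
    def __init__(self, n_maps=8, ksize=5):
        super().__init__()
        pad = ksize // 2
        def subnet():
            return nn.ModuleList([
                nn.Conv2d(1, n_maps, ksize, padding=pad),       # layer 1: constrains the input
                nn.Conv2d(n_maps, n_maps, ksize, padding=pad),  # layer 2: combines the information
                nn.Conv2d(n_maps, 1, ksize, padding=pad),       # layer 3: merges into one feature map
            ])
        self.high, self.low = subnet(), subnet()
        self.fuse = nn.Conv2d(2, 1, ksize, padding=pad)  # fusion convolutional layer
        self.act = nn.Sigmoid()

    def decompose(self, x):
        h = l = x
        for conv_h, conv_l in zip(self.high, self.low):
            h, l = self.act(conv_h(h)), self.act(conv_l(l))
        return h, l  # high- and low-frequency feature maps

    def forward(self, x):
        h, l = self.decompose(x)
        return self.act(self.fuse(torch.cat([h, l], dim=1)))  # reconstruction Z
```

A single-channel image tensor of shape (1, 1, H, W) passed through `decompose` yields exactly one high-frequency map and one low-frequency map, matching the statement that only one feature map of each kind is needed at fusion time.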
In the image fusion method based on deep learning, the convolution kernels of the first convolutional layers of the high-frequency and low-frequency subnets are initialized as a Laplacian-of-Gaussian filter and a Gaussian filter respectively, and the convolution kernels of the remaining layers are initialized as $\omega \sim G(0, \sqrt{2/n})$, where G is a Gaussian distribution and n is the number of input neurons.
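A sketch of this initialization under stated assumptions: the Laplacian-of-Gaussian and Gaussian kernels are built from their standard formulas with an illustrative sigma = 1, the remaining layers use zero-mean Gaussian weights of standard deviation sqrt(2/n) matching the rule reconstructed above, and `BasicUnit` is the hypothetical class from the previous sketch.

```python
import numpy as np
import torch

def gaussian_kernel(size=5, sigma=1.0):
    """2-D Gaussian low-pass kernel, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def log_kernel(size=5, sigma=1.0):
    """2-D Laplacian-of-Gaussian high-pass kernel, adjusted to zero mean."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    k = (r2 - 2 * sigma ** 2) / sigma ** 4 * np.exp(-r2 / (2 * sigma ** 2))
    return k - k.mean()

def init_unit(unit):
    """LoG / Gaussian kernels for the first layers; G(0, sqrt(2/n)) elsewhere."""
    with torch.no_grad():
        k = unit.high[0].kernel_size[0]
        unit.high[0].weight.copy_(torch.as_tensor(log_kernel(k), dtype=torch.float32)
                                  .expand_as(unit.high[0].weight))
        unit.low[0].weight.copy_(torch.as_tensor(gaussian_kernel(k), dtype=torch.float32)
                                 .expand_as(unit.low[0].weight))
        for conv in (*unit.high[1:], *unit.low[1:], unit.fuse):
            n = conv.in_channels * conv.kernel_size[0] * conv.kernel_size[1]
            conv.weight.normal_(0.0, (2.0 / n) ** 0.5)
        for conv in (*unit.high, *unit.low, unit.fuse):
            conv.bias.zero_()  # all biases start at 0, as in the patent
```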
According to the image fusion method based on deep learning, the training set and test set of the deep stacked neural network are composed of several types of images mixed in proportion, the images including visible-light, infrared and medical images, and the mini-batch value is between 22 and 32. The mini-batch size determines the stability of error convergence: the larger the value, the more stable the convergence, but the more memory is occupied, so a value between 22 and 32 is used. The learning rate determines the speed of error convergence: the larger the value, the faster the convergence, but the less stable it is; a value between 0.001 and 0.01 is usually used.
According to the image fusion method based on deep learning, the first and second convolutional layers of the basic unit have equal numbers of convolution kernels, in the range 4 to 16, and the number of stacked basic units is in the range 3 to 6. The more convolution kernels, the more features are learned, but the more memory is occupied, so a value between 4 and 16 is usual. The more basic units are stacked, the finer the network's decomposition of the image, but too many units weaken the reconstruction of details such as edges, so a value between 3 and 6 is usual.
In the image fusion method based on deep learning, the activation function of the convolutional layers is chosen as the sigmoid function $y = 1/(1+e^{-x})$ or $y = \max(0, x)$.
One advantage of multi-scale-transform fusion is that the same synthesis rule can be applied to the same kind of information (high-frequency or low-frequency), which improves information utilization and makes the fusion result more accurate; its disadvantage is that suitable parameters must be selected using prior knowledge. The convolutional neural network is a common deep learning model that extracts and abstracts image features through a series of convolutions and downsamplings, which is consistent with the idea of multi-scale decomposition of an image by convolution and downsampling; the difference is that a neural network learns a larger and richer set of filters, so that suitable filters can be selected adaptively when fusing different objects. The autoencoder is also a deep learning model; it can encode and decode signals within a certain error, which is consistent with the idea of decomposing and reconstructing images by multi-scale transform, but the model is not well suited to two-dimensional signals such as images. The invention therefore replaces the fully connected layers of the autoencoder with the convolutional layers of a convolutional neural network, building a network that simulates multi-scale decomposition and reconstruction of images and using its adaptivity to overcome the shortcoming of multi-scale transforms. However, if the network is unconstrained, the features it extracts are not the high- and low-frequency components of the image, which makes it difficult to formulate targeted fusion rules; the invention therefore uses a Laplacian-of-Gaussian filter and a Gaussian filter as the initial convolution kernels of the first layers of the high-frequency and low-frequency subnets respectively, ensuring that an input image yields a high-frequency feature map and a low-frequency feature map after passing through the two subnets. Because a shallow network has weak learning ability, this network is used as a basic unit and stacked to form the deep stacked convolutional neural network, improving learning ability. Training in an end-to-end manner improves the accuracy, stability and convergence of the network, and training the stacked convolutional network with different types of images ensures its generalization ability and facilitates obtaining the final fusion result. Suitable fusion rules are then formulated separately for the high-frequency and low-frequency features obtained by decomposing the images. The invention thus achieves adaptive decomposition and fusion of the input images; compared with multi-scale-transform methods, the algorithm is simple, the quality of the fusion result is good, and the operation speed is high.
Fig. 4-6 are examples of infrared/visible images, where fig. 4 is an infrared image, fig. 5 is a visible image, and fig. 6 is a fusion result of the present invention.
Drawings
FIG. 1 is a flow chart of the present invention.
Fig. 2 is a diagram showing a basic unit structure.
Fig. 3 is a network structure diagram in the case of fusing images according to the present invention.
Fig. 4 is an infrared image.
Fig. 5 is a visible light image.
FIG. 6 is a fused image of the present invention.
Detailed Description
An image fusion method based on deep learning comprises the following steps:
1. Constructing the basic unit of the deep stacked convolutional neural network
The deep stacked convolutional neural network is formed by stacking several basic units. Each basic unit consists of a high-frequency subnet, a low-frequency subnet and a fusion convolutional layer; the high-frequency and low-frequency subnets each consist of three convolutional layers, of which the first constrains the input information, the second combines the information, and the third merges the information into the high-frequency and low-frequency feature maps, specifically as follows:
(1) A source image x is input, and the feature map of the first convolutional layer H1 of the high-frequency subnet is obtained by the convolution operation

$I_{H1}^j = f(\omega^j \otimes x + \theta^j)$

where $\otimes$ denotes the convolution operation, f is the activation function, ω the convolution kernel, θ the bias, and j the j-th output, $j \in \{1, 2, \dots, n_1\}$, with $n_1$ the number of H1 feature maps. The high-frequency and low-frequency subnets have the same network structure; the difference is that the H1-layer convolution kernel of the high-frequency subnet is initialized as a Laplacian-of-Gaussian filter, while the L1-layer convolution kernel of the low-frequency subnet is initialized as a Gaussian filter;
(2) $I_{H1}$ is convolved to obtain the feature map of the second convolutional layer H2 of the high-frequency subnet

$I_{H2}^j = f\Big(\sum_{i=1}^{n_1} \omega_j^i \otimes I_{H1}^i + \theta_j\Big)$

where the convolution kernel is initialized as $\omega_j^i \sim G(0, \sqrt{2/n})$, G being a Gaussian distribution, n the number of input neurons, i the i-th input, j the j-th output ($j \in \{1, 2, \dots, n_2\}$), and $n_2$ the number of H2 feature maps. Changing the input $I_{H1}$ to $I_{L1}$ gives the feature map $I_{L2}$ of the second convolutional layer L2 of the low-frequency subnet;
(3) $I_{H2}$ is then convolved to obtain the feature map of the third convolutional layer H3 of the high-frequency subnet

$I_{H3} = f\Big(\sum_{i=1}^{n_2} \omega^i \otimes I_{H2}^i + \theta\Big)$

with convolution kernel $\omega^i \sim G(0, \sqrt{2/n})$. Replacing $I_{H2}$ with $I_{L2}$ gives the feature map $I_{L3}$ of the third convolutional layer L3 of the low-frequency subnet;
(4) The high-frequency feature map $I_{H3}$ and the low-frequency feature map $I_{L3}$ are then convolved by the fusion convolutional layer to obtain the reconstructed image

$Z = f(\omega_H \otimes I_{H3} + \omega_L \otimes I_{L3} + \theta)$

with convolution kernels initialized as $\omega \sim G(0, \sqrt{2/n})$.
2. Training the basic unit
(1) After the basic unit is constructed, initializing all the biases of the basic unit to 0;
(2) The basic unit is trained in an unsupervised manner. The training data are pairs $(x_s, t_s)$, where x denotes the input image and t the target image; for this unsupervised training the target equals the input, $t_s = x_s$. With s the current training sample, N the size of the training set, and $Z_s$ the network output, the objective function is

$E = \frac{1}{N}\sum_{s=1}^{N}\frac{1}{mn}\sum_{u=1}^{m}\sum_{v=1}^{n}\big(Z_{s,uv} - t_{s,uv}\big)^2$

where m and n are the dimensions of a single image and $Z_{s,uv}$ is the value of the current sample's output at point (u, v). The network is then trained by the back-propagation algorithm;
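A minimal sketch of this unsupervised training step, assuming the mean-squared objective reconstructed above; plain SGD and the `loader` of image batches are illustrative stand-ins.

```python
import torch

def train_unit(model, loader, epochs=10, lr=0.005):
    """Unsupervised training: the target image equals the input image."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)  # lr in the suggested 0.001-0.01 range
    mse = torch.nn.MSELoss()  # mean of (Z - t)^2 over samples and pixels, as in E above
    for _ in range(epochs):
        for x in loader:          # x: mini-batch of images, target t = x
            loss = mse(model(x), x)
            opt.zero_grad()
            loss.backward()       # back-propagation
            opt.step()
    return model
```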
3. Building and training the stacked network
Several trained basic units are connected head to tail, the output of one basic unit serving as the input of the next, to form the stacked network; the whole network is then adjusted end to end with the same data set and objective function as the basic units, finally giving the deep stacked convolutional neural network;
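A sketch of this stacking step under the same assumptions; `nn.Sequential` chains the trained units so that each unit's reconstruction feeds the next, and the stack is then fine-tuned end to end with the same data and objective (`BasicUnit`, `init_unit`, `train_unit` and `loader` are from the earlier sketches).

```python
import torch.nn as nn

def make_unit():
    u = BasicUnit()
    init_unit(u)                  # LoG / Gaussian / G(0, sqrt(2/n)) initialization
    return train_unit(u, loader)  # pre-train each unit separately

units = [make_unit() for _ in range(4)]  # 3 to 6 units suggested by the patent
stack = nn.Sequential(*units)            # output of each unit is the input of the next
stack = train_unit(stack, loader)        # end-to-end adjustment, same data and loss
```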
4. Fusing images with the deep stacked convolutional neural network
(1) Source images A and B are input, and the trained deep stacked convolutional neural network yields, at the H3 and L3 layers of the last basic unit, the high-frequency feature maps $A_{H3}$, $B_{H3}$ and the low-frequency feature maps $A_{L3}$, $B_{L3}$ respectively;
(2) The high-frequency feature maps are fused by

$F_{H3}(x,y) = \begin{cases} A_{H3}(x,y), & \sigma_A(x,y) \ge \sigma_B(x,y) \\ B_{H3}(x,y), & \text{otherwise} \end{cases}$

where $F_{H3}(x,y)$, $A_{H3}(x,y)$ and $B_{H3}(x,y)$ denote the values of the fusion result and of the high-frequency feature maps of source images A and B at point (x, y), and σ(x, y) denotes the local variance at (x, y), computed over a local window of size 5 x 5 or 7 x 7;
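A sketch of this choose-max rule, assuming the local variance is computed over a 5 x 5 sliding window by box filtering; `AH3` and `BH3` stand for the H3-layer feature maps as 2-D NumPy arrays.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(img, win=5):
    """Variance of each pixel's win x win neighbourhood (box filtering)."""
    mean = uniform_filter(img, size=win)
    mean_sq = uniform_filter(img * img, size=win)
    return mean_sq - mean * mean

def fuse_high(AH3, BH3, win=5):
    """Keep, pixel by pixel, the map whose local variance is larger."""
    return np.where(local_variance(AH3, win) >= local_variance(BH3, win), AH3, BH3)
```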
(3) Fusing the low-frequency feature maps

The local region energy of the low-frequency feature map $A_{L3}$ at (x, y) is defined as

$E_A(x,y) = \sum_{p=-P}^{P}\sum_{q=-Q}^{Q}\big[A_{L3}(x+p, y+q)\big]^2$

where P and Q control the size of the local window centred at (x, y), and $A_{L3}(x+p, y+q)$ denotes the value of $A_{L3}$ at point (x+p, y+q). Changing $A_{L3}$ to $B_{L3}$ gives the local region energy $E_B(x,y)$ of $B_{L3}$.
The matching degree of the corresponding regions of the low-frequency feature maps $A_{L3}(x,y)$ and $B_{L3}(x,y)$ is

$M_{AB}(x,y) = \dfrac{2\sum_{p=-P}^{P}\sum_{q=-Q}^{Q} A_{L3}(x+p,y+q)\,B_{L3}(x+p,y+q)}{E_A(x,y) + E_B(x,y)}$

The matching degree reflects the similarity of $A_{L3}$ and $B_{L3}$ over the region: if $A_{L3} = B_{L3}$ on the corresponding region, then $M_{AB}(x,y) = 1$ and the information in the region matches most closely;
Let α be the matching-degree threshold, usually between 0.7 and 0.9. If $M_{AB}(x,y) < \alpha$, the corresponding regions differ considerably, and the choose-max rule is adopted:

$F_{L3}(x,y) = \begin{cases} A_{L3}(x,y), & E_A(x,y) \ge E_B(x,y) \\ B_{L3}(x,y), & E_A(x,y) < E_B(x,y) \end{cases}$

Otherwise, if $M_{AB}(x,y) \ge \alpha$, the difference between the two is small, and an adaptive weighted average is used:

$F_{L3}(x,y) = \begin{cases} \lambda_L A_{L3}(x,y) + \lambda_S B_{L3}(x,y), & E_A(x,y) \ge E_B(x,y) \\ \lambda_S A_{L3}(x,y) + \lambda_L B_{L3}(x,y), & E_A(x,y) < E_B(x,y) \end{cases}$

where $\lambda_L$ and $\lambda_S$ denote the larger and smaller weights respectively, with

$\lambda_L = \frac{1}{2} + \frac{1}{2}\Big(\frac{1 - M_{AB}(x,y)}{1 - \alpha}\Big)$, $\lambda_S = 1 - \lambda_L$.
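A sketch of this low-frequency rule under the reconstructions above. Note that `uniform_filter` gives window means rather than sums, which only rescales $E_A$ and $E_B$ by a constant that cancels in the matching degree and in the $E_A \ge E_B$ comparison.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_low(AL3, BL3, win=5, alpha=0.8):
    """Region-energy / matching-degree fusion of two low-frequency maps."""
    EA = uniform_filter(AL3 * AL3, size=win)  # local region energy of A (up to a constant)
    EB = uniform_filter(BL3 * BL3, size=win)  # local region energy of B
    M = 2 * uniform_filter(AL3 * BL3, size=win) / (EA + EB + 1e-12)  # matching degree
    lam_L = 0.5 + 0.5 * (1 - M) / (1 - alpha)  # larger weight, used where M >= alpha
    lam_S = 1 - lam_L                          # smaller weight
    a_big = EA >= EB
    choose_max = np.where(a_big, AL3, BL3)     # M < alpha: choose-max rule
    weighted = np.where(a_big, lam_L * AL3 + lam_S * BL3,   # M >= alpha: adaptive
                               lam_S * AL3 + lam_L * BL3)   # weighted average
    return np.where(M < alpha, choose_max, weighted)
```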
(4) $F_{H3}$ and $F_{L3}$ are put back into the fusion convolutional layer of the last basic unit to obtain the fusion result F.
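Tying the pieces together, a hedged end-to-end sketch that decomposes two source images with the last basic unit of the trained stack, fuses the feature maps with the rules above, and reconstructs through the fusion convolutional layer; all names come from the earlier sketches.

```python
import numpy as np
import torch

def fuse_images(stack, A, B, alpha=0.8):
    """Decompose A and B, fuse the H3/L3 maps, reconstruct the fused image.
    A, B: 1 x 1 x H x W tensors; stack: trained nn.Sequential of BasicUnits."""
    last = stack[-1]
    with torch.no_grad():
        a, b = stack[:-1](A), stack[:-1](B)  # pass through the earlier units
        AH3, AL3 = last.decompose(a)
        BH3, BL3 = last.decompose(b)
        FH3 = fuse_high(AH3.squeeze().numpy(), BH3.squeeze().numpy())
        FL3 = fuse_low(AL3.squeeze().numpy(), BL3.squeeze().numpy(), alpha=alpha)
        maps = torch.from_numpy(np.stack([FH3, FL3])).unsqueeze(0).float()
        return last.act(last.fuse(maps))     # fusion result F
```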
In the image fusion method based on deep learning, the convolution kernels of the first convolutional layers of the high-frequency and low-frequency subnets are initialized as a Laplacian-of-Gaussian filter and a Gaussian filter respectively, and the convolution kernels of the remaining layers are initialized as $\omega \sim G(0, \sqrt{2/n})$.
According to the image fusion method based on deep learning, the training set and test set of the deep stacked neural network are composed of several types of images mixed in proportion, the images including visible-light, infrared and medical images, and the mini-batch value is between 22 and 32. The mini-batch size determines the stability of error convergence: the larger the value, the more stable the convergence, but the more memory is occupied, so a value between 22 and 32 is used. The learning rate determines the speed of error convergence: the larger the value, the faster the convergence, but the less stable it is; a value between 0.001 and 0.01 is usually used.
According to the image fusion method based on deep learning, the first and second convolutional layers of the basic unit have equal numbers of convolution kernels, in the range 4 to 16, and the number of stacked basic units is in the range 3 to 6. The more convolution kernels, the more features are learned, but the more memory is occupied, so a value between 4 and 16 is usual. The more basic units are stacked, the finer the network's decomposition of the image, but too many units weaken the reconstruction of details such as edges, so a value between 3 and 6 is usual.
In the image fusion method based on deep learning, the activation function of the convolutional layers is chosen as the sigmoid function $y = 1/(1+e^{-x})$ or $y = \max(0, x)$.
The back-propagation algorithm and the Gaussian and Laplacian-of-Gaussian filters are well known to those skilled in the art; for specific procedures, refer to the corresponding textbooks or technical literature.

Claims (4)

1. An image fusion method based on deep learning is characterized by comprising the following steps:
the method comprises the following steps of constructing a basic unit of the deep-stacking convolutional neural network, wherein the basic unit is composed of a high-frequency subnet, a low-frequency subnet and a fusion convolutional layer, the high-frequency subnet and the low-frequency subnet are respectively composed of three convolutional layers, the first convolutional layer limits input information, the second convolutional layer combines the information, and the third convolutional layer merges the information into a mapping map, specifically as follows:
(1) a source image x is input, and the feature map $I_{H1}$ of the first convolutional layer H1 of the high-frequency subnet is obtained by the convolution operation

$I_{H1}^j = f(\omega^j \otimes x + \theta^j)$

where $\otimes$ denotes the convolution operation, f is the activation function, ω the convolution kernel, θ the bias, and j the j-th output, $j \in \{1, 2, \dots, n_1\}$, with $n_1$ the number of H1 feature maps; the high-frequency and low-frequency subnets have the same network structure, the difference being that the H1-layer convolution kernel of the high-frequency subnet is initialized as a Laplacian-of-Gaussian filter while the L1-layer convolution kernel of the low-frequency subnet is initialized as a Gaussian filter;
(2) $I_{H1}$ is convolved to obtain the feature map $I_{H2}$ of the second convolutional layer H2 of the high-frequency subnet

$I_{H2}^j = f\Big(\sum_{i=1}^{n_1} \omega_j^i \otimes I_{H1}^i + \theta_j\Big)$

where the convolution kernel is initialized as $\omega_j^i \sim G(0, \sqrt{2/n})$, G being a Gaussian distribution, n the number of input neurons, i the i-th input, j the j-th output ($j \in \{1, 2, \dots, n_2\}$), and $n_2$ the number of H2 feature maps; changing the input $I_{H1}$ to $I_{L1}$ gives the feature map $I_{L2}$ of the second convolutional layer L2 of the low-frequency subnet;
(3) $I_{H2}$ is convolved to obtain the feature map $I_{H3}$ of the third convolutional layer H3 of the high-frequency subnet

$I_{H3} = f\Big(\sum_{i=1}^{n_2} \omega^i \otimes I_{H2}^i + \theta\Big)$

with convolution kernel $\omega^i \sim G(0, \sqrt{2/n})$; replacing $I_{H2}$ with $I_{L2}$ gives the feature map $I_{L3}$ of the third convolutional layer L3 of the low-frequency subnet;
(4) the high-frequency feature map $I_{H3}$ and the low-frequency feature map $I_{L3}$ are then convolved by the fusion convolutional layer to obtain the reconstructed image

$Z = f(\omega_H \otimes I_{H3} + \omega_L \otimes I_{L3} + \theta)$

with convolution kernels initialized as $\omega \sim G(0, \sqrt{2/n})$;
training the basic unit, stacking several trained basic units, and training the stack end to end to obtain the deep stacked neural network;
decomposing each input image with the stacked neural network, obtaining its high-frequency and low-frequency feature maps at the third convolutional layer of the last basic unit, fusing the high-frequency feature maps by local variance and the low-frequency feature maps by region matching degree;
and putting the fused high-frequency and low-frequency feature maps back into the fusion convolutional layer of the last basic unit to obtain the final fused image.
2. The image fusion method based on deep learning of claim 1, wherein the training set and the testing set of the deep stacked neural network are composed of an equal proportion mixture of various types of images, and the images comprise visible light images, infrared images and medical images.
3. The image fusion method based on deep learning of claim 1 or 2, characterized in that the first and second convolutional layers of the basic unit have equal numbers of convolution kernels, in the range 4 to 16, and the number of stacked basic units is in the range 3 to 6.
4. The image fusion method based on deep learning of claim 1 or 2, characterized in that the activation function of the convolutional layers is chosen as the sigmoid function $y = 1/(1+e^{-x})$ or $y = \max(0, x)$.
CN201710211621.8A 2017-04-01 2017-04-01 Image fusion method based on deep learning Active CN107103331B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710211621.8A CN107103331B (en) 2017-04-01 2017-04-01 Image fusion method based on deep learning


Publications (2)

Publication Number Publication Date
CN107103331A CN107103331A (en) 2017-08-29
CN107103331B (en) 2017-04-01 2020-06-16

Family

ID=59676045

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710211621.8A Active CN107103331B (en) 2017-04-01 2017-04-01 Image fusion method based on deep learning

Country Status (1)

Country Link
CN (1) CN107103331B (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109685058B (en) 2017-10-18 2021-07-09 杭州海康威视数字技术股份有限公司 Image target identification method and device and computer equipment
CN108447041B (en) * 2018-01-30 2020-12-15 中国航天电子技术研究院 Multi-source image fusion method based on reinforcement learning
CN108564546B (en) * 2018-04-18 2020-08-04 厦门美图之家科技有限公司 Model training method and device and photographing terminal
CN108764316B (en) * 2018-05-18 2022-08-26 河海大学 Remote sensing image scene classification method based on deep convolutional neural network and multi-core learning
CN110557572B (en) * 2018-05-31 2021-04-27 杭州海康威视数字技术股份有限公司 Image processing method and device and convolutional neural network system
CN108846822B (en) * 2018-06-01 2021-08-24 桂林电子科技大学 Fusion method of visible light image and infrared light image based on hybrid neural network
CN109410114B (en) * 2018-09-19 2023-08-25 湖北工业大学 Compressed Sensing Image Reconstruction Algorithm Based on Deep Learning
CN109166087A (en) * 2018-09-29 2019-01-08 上海联影医疗科技有限公司 Style conversion method, device, medical supply, image system and the storage medium of medical image
CN109544492A (en) * 2018-10-25 2019-03-29 东南大学 A kind of multi-focus image fusion data set production method based on convolutional neural networks
CN109447936A (en) * 2018-12-21 2019-03-08 江苏师范大学 A kind of infrared and visible light image fusion method
CN109886305B (en) * 2019-01-23 2021-05-04 浙江大学 Multi-sensor non-sequential measurement asynchronous fusion method based on GM-PHD filtering
CN110084773A (en) * 2019-03-25 2019-08-02 西北工业大学 A kind of image interfusion method based on depth convolution autoencoder network
CN110084288B (en) * 2019-04-11 2023-04-18 江南大学 Image fusion method based on self-learning neural unit
CN110097528B (en) * 2019-04-11 2023-04-18 江南大学 Image fusion method based on joint convolution self-coding network
CN110132554B (en) * 2019-04-17 2020-10-09 东南大学 Rotary machine fault diagnosis method based on deep Laplace self-coding
CN110210541B (en) * 2019-05-23 2021-09-03 浙江大华技术股份有限公司 Image fusion method and device, and storage device
CN110232670B (en) * 2019-06-19 2023-05-12 重庆大学 Method for enhancing visual effect of image based on high-low frequency separation
CN110930375B (en) * 2019-11-13 2021-02-09 广东国地规划科技股份有限公司 Method, system and device for monitoring land coverage change and storage medium
CN111091138B (en) * 2019-11-14 2024-07-16 远景智能国际私人投资有限公司 Irradiation prediction processing method, stacking generalization model training method and device
CN111414842B (en) * 2020-03-17 2021-04-13 腾讯科技(深圳)有限公司 Video comparison method and device, computer equipment and storage medium
CN111681195B (en) * 2020-06-09 2023-06-30 中国人民解放军63811部队 Fusion method and device of infrared image and visible light image and readable storage medium
US11710227B2 (en) * 2020-06-19 2023-07-25 Kla Corporation Design-to-wafer image correlation by combining information from multiple collection channels
CN112232430A (en) * 2020-10-23 2021-01-15 浙江大华技术股份有限公司 Neural network model testing method and device, storage medium and electronic device
CN113012087B (en) * 2021-03-31 2022-11-04 中南大学 Image fusion method based on convolutional neural network
CN113822833B (en) * 2021-09-26 2024-01-16 沈阳航空航天大学 Infrared and visible light image frequency domain fusion method based on convolutional neural network and regional energy

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103177433A (en) * 2013-04-09 2013-06-26 南京理工大学 Infrared and low light image fusion method
CN104835130A (en) * 2015-04-17 2015-08-12 北京联合大学 Multi-exposure image fusion method
US9741107B2 (en) * 2015-06-05 2017-08-22 Sony Corporation Full reference image quality assessment based on convolutional neural network
US9852492B2 (en) * 2015-09-18 2017-12-26 Yahoo Holdings, Inc. Face detection
CN105631466B (en) * 2015-12-21 2019-05-07 中国科学院深圳先进技术研究院 The method and device of image classification
CN106295714B (en) * 2016-08-22 2020-01-21 中国科学院电子学研究所 Multi-source remote sensing image fusion method based on deep learning

Also Published As

Publication number Publication date
CN107103331A (en) 2017-08-29

Similar Documents

Publication Publication Date Title
CN107103331B (en) Image fusion method based on deep learning
Batson et al. Noise2self: Blind denoising by self-supervision
Krull et al. Noise2void-learning denoising from single noisy images
CN110532859B (en) Remote sensing image target detection method based on deep evolution pruning convolution net
Hao et al. A deep network architecture for super-resolution-aided hyperspectral image classification with classwise loss
CN109711426B (en) Pathological image classification device and method based on GAN and transfer learning
CN109272010B (en) Multi-scale remote sensing image fusion method based on convolutional neural network
Pacifici et al. Urban mapping using coarse SAR and optical data: Outcome of the 2007 GRSS data fusion contest
CN111445476B (en) Monocular depth estimation method based on multi-mode unsupervised image content decoupling
Zhang et al. A survey on computational spectral reconstruction methods from RGB to hyperspectral imaging
CN105981050B (en) For extracting the method and system of face characteristic from the data of facial image
CN108596222B (en) Image fusion method based on deconvolution neural network
Schmelzle et al. Cosmological model discrimination with Deep Learning
CN112560624B (en) High-resolution remote sensing image semantic segmentation method based on model depth integration
CN114897728A (en) Image enhancement method and device, terminal equipment and storage medium
Hughes et al. A semi-supervised approach to SAR-optical image matching
CN106845343A (en) A kind of remote sensing image offshore platform automatic testing method
Lai et al. Mixed attention network for hyperspectral image denoising
Dumka et al. Advanced digital image processing and its applications in big data
Wang et al. Single neuron segmentation using graph-based global reasoning with auxiliary skeleton loss from 3D optical microscope images
Hepburn et al. Enforcing perceptual consistency on generative adversarial networks by using the normalised laplacian pyramid distance
CN116563683A (en) Remote sensing image scene classification method based on convolutional neural network and multi-layer perceptron
CN115565079A (en) Remote sensing image semantic segmentation method based on bilateral fusion
CN102436642A (en) Multi-scale color texture image segmentation method combined with MRF (Markov Random Field) and neural network
Riese Development and Applications of Machine Learning Methods for Hyperspectral Data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant