CN110555820A - Image fusion method based on convolutional neural network and dynamic guide filtering - Google Patents

Image fusion method based on convolutional neural network and dynamic guide filtering Download PDF

Info

Publication number
CN110555820A
CN110555820A CN201910803493.5A
Authority
CN
China
Prior art keywords
image
focus
map
neural network
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910803493.5A
Other languages
Chinese (zh)
Inventor
王健
杨珂
秦春霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Northwest University of Technology
Xian Aisheng Technology Group Co Ltd
Original Assignee
Northwest University of Technology
Xian Aisheng Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northwest University of Technology and Xian Aisheng Technology Group Co Ltd
Priority to CN201910803493.5A
Publication of CN110555820A
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10141 Special mode during image acquisition
    • G06T 2207/10148 Varying focus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a novel multi-focus image fusion method based on a convolutional neural network and dynamic guided filtering. A convolutional neural network for focus detection learns a direct mapping from the source images to a focus map, which avoids manual design and yields a pixel-level map of focused and defocused regions; a high-quality fused image is then obtained through a dynamic guided filtering operation that removes small misclassified regions while preserving edges.

Description

Image fusion method based on convolutional neural network and dynamic guide filtering
Technical Field
The invention relates to an image fusion algorithm for multi-focus images, which can be applied to various military or civil image processing systems.
Background
An unmanned aerial vehicle, serving as an aerial reconnaissance and weapon platform, generally carries multiple imaging sensors, such as infrared, visible-light, laser and SAR sensors, the purpose being to comprehensively exploit the image information of these sensors so that the vehicle can better perform tasks such as aerial reconnaissance and battlefield monitoring. Image fusion integrates the image information obtained by the various imaging sensors and uses the complementary information among them to obtain an image with clear targets and scenes, which greatly benefits accurate reconnaissance, identification and localization of ground targets. The fused image has higher definition and a larger amount of information, and the acquired target and scene information is more comprehensive and better suited to human visual perception; fused images therefore play an increasingly important role in military and national-defense applications (autonomous UAV navigation, remote-sensing target detection) as well as civil applications (such as medical diagnosis and photography).
In recent years, deep learning has been widely applied in computer vision and image processing and has achieved notable success in many vision tasks; its advantage is that large amounts of data can be used to train networks and learn target features, which avoids complex manual design and yields better results. The document "Ma J, Zhou Z, Wang B, et al. Infrared and visible image fusion based on visual saliency map and weighted least square optimization. Infrared Physics & Technology" proposes a multi-scale fusion method based on weighted least squares optimization: the input images are first decomposed into base and detail layers using a rolling guidance filter and a Gaussian filter; the base layers are then fused with a visual saliency map to remove residual low-frequency information; finally, a weighted least squares method is used to optimize the detail-layer information. The fused image details of that paper appear more natural and fit human visual perception. The document "Liu Y, Chen X, Ward R K, et al. Image Fusion With Convolutional Sparse Representation. IEEE Signal Processing Letters" proposes a fusion algorithm that first decomposes an image into a base layer and a detail layer by Gaussian filtering, then fuses the two scales using a method based on convolutional sparse representation, and finally obtains the fused image by weighted averaging; this alleviates the weak detail-preservation ability of traditional sparse-representation-based methods. The methods proposed in the above documents achieve certain effects when used for image fusion, but they suffer from complex algorithm implementation, low efficiency and unsatisfactory fusion results. Therefore, the invention provides an image fusion algorithm based on convolutional neural networks (CNN) and dynamic guided filtering.
Disclosure of Invention
Technical problem to be solved
Aiming at the problems of high algorithm complexity, low efficiency and excessive dependence on manual design in traditional multi-focus image fusion, a novel multi-focus image fusion method based on a convolutional neural network and dynamic guided filtering is provided. A convolutional neural network for focus detection learns a direct mapping from the source images to a focus map, which avoids manual design and yields a pixel-level map of focused and defocused regions; a high-quality fused image is then obtained through a dynamic guided filtering operation that removes small misclassified regions while preserving edges.
Technical scheme
An image fusion method based on a convolutional neural network and dynamic guided filtering is characterized by comprising the following steps:
Step 1: input a source image A and a source image B into a CNN model for training to obtain a score map S. The CNN model contains three convolutional layers and one max pooling layer; the kernel size and stride of each convolutional layer are set to 3 × 3 and 1 respectively, and the kernel size and stride of the max pooling layer are set to 2 × 2 and 2 respectively. The output of the network, i.e. the score map S, is a two-dimensional vector fully connected to a 256-dimensional vector, giving a probability distribution over the two classes. The score map S is then expanded pixel-by-pixel in 2 × 2 blocks, and pixels covered by overlapping blocks are averaged, to obtain a focus map M with the same size as the source images;
Step 2: segment the focus map M into a binary map T(x, y) using the "select-max" strategy:
Step 3: remove small regions of misclassified pixels in the binary map T(x, y) and smooth the edges using dynamic guided filtering to obtain a decision map D(x, y):
D(x, y) = RGF(T, σs, σr, t)
where RGF denotes the dynamic guided filtering operation, σs denotes the standard deviation, σr denotes the range weight, and t denotes the number of iterations;
Step 4: process the decision map D(x, y) obtained in step 3 according to the pixel weighted-average rule to obtain the final fused image F:
F(x, y) = D(x, y) A(x, y) + (1 - D(x, y)) B(x, y)
where A(x, y) and B(x, y) respectively denote the two input images.
Advantageous effects
The invention provides an image fusion method based on a convolutional neural network and dynamic guided filtering. Aiming at the problems of high algorithm complexity, low efficiency and excessive dependence on manual design in traditional multi-focus image fusion, a novel method for multi-focus image fusion using a convolutional neural network and dynamic guided filtering is provided. A convolutional neural network for focus detection learns a direct mapping from the source images to a focus map, which avoids manual design and yields a pixel-level map of focused and defocused regions; a high-quality fused image is then obtained through a dynamic guided filtering operation that removes small misclassified regions while preserving edges. Experimental results show that the method overcomes the algorithmic complexity and the unsatisfactory fusion results of traditional image fusion methods and further improves the fusion quality of the images.
Drawings
FIG. 1: image fusion algorithm flow based on convolutional neural network and dynamic guided filtering
FIG. 2: convolutional neural network model used in the present invention
FIG. 3: dynamic guided filtering model used in the invention
FIG. 4: multi-focus original image and process image generated by adopting method
FIG. 5: comparison of results of different algorithms
Detailed Description
The invention will now be further described with reference to the following examples and drawings:
The specific implementation process of the image fusion method based on a convolutional neural network and dynamic guided filtering is as follows:
Focus detection based on a convolutional neural network
CNN is a typical deep learning model that learns hierarchical feature representations of signal or image data at different levels of abstraction. It is a trainable multi-stage feedforward artificial neural network in which each stage contains a number of feature maps, each corresponding to one level of feature abstraction.
Local receptive fields, shared weights and subsampling are the three basic architectural ideas of CNN. A local receptive field means that a neuron in a given stage is connected only to several spatially adjacent neurons in the previous stage, and different connections correspond to different parameters; adopting local receptive fields therefore reduces the number of parameters to be trained. Weight sharing means that the weights of a convolution kernel are spatially invariant within the feature map of a particular stage, i.e. the same convolution kernel is used to extract the same feature everywhere, which further reduces the parameters to be trained. Combining these two ideas greatly reduces the number of parameters to be trained. Let xi and yj denote the i-th input feature map and the j-th output feature map of a convolutional layer, respectively. Convolution followed by the nonlinear ReLU activation in CNN is expressed as:
yj = max(0, bj + Σi kij * xi)
where kij is the convolution kernel between xi and yj, bj is the bias, and * denotes the convolution operation. Subsampling, also known as pooling, is used to reduce the data dimensionality; max pooling and average pooling are the common pooling operations in CNN. Max pooling is expressed as
yi(r, c) = max{ xi(r·m + u, c·n + v) : 0 ≤ u, v < s }
where yi(r, c) is the neuron at position (r, c) in the i-th output map of the max pooling layer, taking the maximum value over a local region of size s × s in the i-th input map xi, and m, n are the strides. By combining the above three ideas, CNN obtains, to some extent, important invariance to translation and scaling.
In the present invention, multi-focus image fusion is treated as a two-class classification problem. For a pair of image patches pA, pB of the same scene, the goal is to learn a convolutional neural network model whose output is a scalar between 0 and 1. Specifically, when pA is in focus and pB is defocused, the output should be close to 1; when pB is in focus and pA is defocused, the output should be close to 0. The output value thus represents the focus attribute of the patch pair.
FIG. 2 shows the convolutional neural network model used in the fusion algorithm proposed by the present invention. Each branch of the network has three convolutional layers and one max pooling layer. The kernel size and stride of each convolutional layer are set to 3 × 3 and 1, respectively; the kernel size and stride of the max pooling layer are set to 2 × 2 and 2, respectively. The output of the network is a two-dimensional vector fully connected to a 256-dimensional vector, giving a probability distribution over the two classes.
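A minimal sketch of this two-branch network, written in PyTorch, is given below for illustration. The layer types and kernel sizes follow the description above (three 3 × 3, stride-1 convolutional layers and one 2 × 2, stride-2 max pooling layer per branch, a 256-dimensional fully connected layer and a 2-class output); the channel width (64), the padding that keeps the 16 × 16 patch size through the convolutions, and the concatenation of the two branch features are assumptions made for this sketch, since they are not specified in the text.

import torch
import torch.nn as nn

class FocusBranch(nn.Module):
    # One branch: three 3x3/stride-1 conv layers with ReLU, then 2x2/stride-2 max pooling.
    def __init__(self, channels=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2),   # a 16x16 patch becomes 8x8
        )

    def forward(self, x):
        return self.features(x)

class FocusNet(nn.Module):
    # Two weight-sharing branches, a 256-unit fully connected layer and a 2-class output.
    def __init__(self, channels=64, patch=16):
        super().__init__()
        self.branch = FocusBranch(channels)          # shared weights for p_A and p_B
        feat = channels * (patch // 2) ** 2          # flattened feature size per branch
        self.fc = nn.Sequential(
            nn.Linear(2 * feat, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, 2),                       # two logits for the two classes
        )

    def forward(self, pa, pb):
        fa = self.branch(pa).flatten(1)
        fb = self.branch(pb).flatten(1)
        return self.fc(torch.cat([fa, fb], dim=1))   # softmax of these gives the class probabilities

# Example: a batch of four 16x16 grayscale patch pairs.
net = FocusNet()
probs = torch.softmax(net(torch.rand(4, 1, 16, 16), torch.rand(4, 1, 16, 16)), dim=1)  # shape (4, 2)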
Image fusion based on dynamic guided filtering
After the focus map of the image is obtained by the convolutional neural network, post-processing is needed to obtain the fused image. The post-processing adopted by the invention must both smooth edges and correct misclassified points. When smoothing an image, an edge-preserving filter can preserve the boundaries of image structures, reducing halo artifacts and maintaining spatial consistency of structures; fusion methods based on edge-preserving filters therefore typically achieve better performance. However, most edge-preserving filters preserve edges only according to the contrast of the image content, regardless of scale, so these filters are not suitable for the image fusion algorithm of the present invention. The dynamic guided filtering adopted by the invention is both scale-aware and edge-preserving. FIG. 3 shows the dynamic guided filtering model, which mainly comprises two steps: small-structure removal and edge restoration.
First, the small-structure removal step mainly uses a Gaussian filter: the input image I is processed by the Gaussian filter to obtain an output image G, and the process is expressed as:
Gp = (1 / Kp) Σq∈N(p) exp(-||p - q||² / (2σs²)) Iq (5)
where p and q denote pixel coordinates, σs denotes the standard deviation, Kp is a normalization factor, and N(p) denotes the set of window pixels centered on pixel p.
Second, the edge restoration step iterates with a joint filter. The guided filter has high computational efficiency and good edge-preserving properties, so the invention selects the guided filter as the joint filter. This step is an iterative process in which the Gaussian-smoothed image G is taken as the initial image J1 and is restored to the image Jt by iterative updating, where the t-th iteration can be represented as:
Jp(t+1) = (1 / Kp) Σq∈N(p) exp(-||p - q||² / (2σs²) - (Jp(t) - Jq(t))² / (2σr²)) Iq (6)
where σr denotes the range weight, Kp is a normalization factor, and t denotes the number of iterations.
Combining equation (5) with equation (6), the dynamic guided filtering can be expressed as:
J = RGF(I, σs, σr, t) (7)
where I denotes the input image, σs denotes the standard deviation, σr denotes the range weight, t denotes the number of iterations, J denotes the output image, and RGF denotes the dynamic guided filtering operation.
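The following Python sketch illustrates the rolling (dynamic) guided filtering J = RGF(I, σs, σr, t) described by equations (5) to (7): a Gaussian blur removes small structures, and the result is then refined by iterating a guided filter used as the joint filter. It assumes opencv-contrib-python is installed (for cv2.ximgproc.guidedFilter), and the mapping of σs to the filter radius and of σr to the guided-filter regularisation eps is an assumption made for this sketch.

import cv2
import numpy as np

def rolling_guided_filter(I, sigma_s=3.0, sigma_r=0.05, t=4):
    I = I.astype(np.float32)
    # Step 1: small-structure removal (Eq. 5), a plain Gaussian smoothing of the input.
    J = cv2.GaussianBlur(I, (0, 0), sigma_s)
    # Step 2: edge restoration (Eq. 6), iterating a joint filter; here a guided filter is
    # used, with the current estimate J as guidance and the original input I as source.
    radius = int(round(3 * sigma_s))     # assumed relation between sigma_s and the radius
    eps = sigma_r ** 2                   # assumed relation between sigma_r and eps
    for _ in range(t):
        J = cv2.ximgproc.guidedFilter(J, I, radius, eps)
    return J

# Example: smooth a binary focus map T while keeping large-scale edges.
# D = rolling_guided_filter(T, sigma_s=3.0, sigma_r=0.05, t=4)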
The method comprises the following specific steps:
Step one: Focus detection
Let A and B denote the two source images. A score map S is obtained by feeding A and B into the CNN model. The value of each coefficient in S lies in the range 0 to 1 and indicates the focus property of a pair of patches of size 16 × 16 in the source images (see FIG. 4(c)): the closer the value is to 1, the more focused the patch from source image A; the closer the value is to 0, the more focused the patch from source image B. Because the convolution is followed by pooling, the score map is reduced to half the original size. To make the final fused image the same size as the source images, the score map S is expanded so that each score is assigned to its corresponding patch position in the focus map M, overlapping parts are averaged, and the resulting focus map M has the same size as the source images.
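A small sketch of this reconstruction step is given below: each patch score in S is written back to the image region it describes and overlapping contributions are averaged, yielding a focus map M of the same size as the source images. The patch size of 16 and the stride of 2 source pixels per score are taken from the description above; the exact accumulation scheme is an assumption.

import numpy as np

def score_map_to_focus_map(S, src_shape, patch=16, stride=2):
    H, W = src_shape
    acc = np.zeros((H, W), dtype=np.float64)     # sum of the scores covering each pixel
    cnt = np.zeros((H, W), dtype=np.float64)     # number of patches covering each pixel
    for i in range(S.shape[0]):
        for j in range(S.shape[1]):
            r, c = i * stride, j * stride
            acc[r:r + patch, c:c + patch] += S[i, j]
            cnt[r:r + patch, c:c + patch] += 1
    return acc / np.maximum(cnt, 1)              # focus map M, same size as the source images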
Step two: initial segmentation
The image resulting from focus detection needs further processing in order to preserve as much useful information as possible. As in most spatial-domain multi-focus image fusion methods, the "select-max" strategy is employed to process the focus map M, which is segmented into the binary map:
T(x, y) = 1, if M(x, y) > 0.5; T(x, y) = 0, otherwise (8)
The resulting binary map T is shown in FIG. 4(d). It can be seen that almost all gray pixels in the focus map are correctly classified, which indicates that the learned CNN model achieves accurate performance even for flat regions in the source images.
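The "select-max" segmentation amounts to a simple threshold on the focus map, as in the sketch below; the threshold of 0.5 is the natural midpoint of the score range [0, 1] assumed here.

import numpy as np

def select_max(M, threshold=0.5):
    # 1 means "take this pixel from source A", 0 means "take it from source B".
    return (M > threshold).astype(np.float32)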
Step three: consistency check
As can be seen from FIG. 4(d), the binary map may contain some small regions of misclassified pixels, and the transition regions between focused and defocused areas in the binary map are unnatural and appear as burrs. If the binary map were used directly for the subsequent operations, the resulting fused image would contain artifacts in the edge regions and its quality would be low, so the binary map must be processed. The invention adopts dynamic guided filtering to remove the small regions of misclassified pixels in the binary map and to smooth the edges, obtaining the decision map D shown in FIG. 4(e).
D(x, y) = RGF(T, σs, σr, t) (9)
where RGF denotes the dynamic guided filtering operation, T denotes the binary map obtained in the previous step, σs denotes the standard deviation, σr denotes the range weight, and t denotes the number of iterations. After this consistency-check operation, all pixels in the decision map are correctly classified and the edge transitions are natural.
Step four: image fusion
Finally, the decision map D obtained in the previous step is processed according to the pixel weighted-average rule to obtain the final fused image F.
F(x,y)=D(x,y)A(x,y)+(1-D(x,y))B(x,y) (10)
where D(x, y) denotes the decision map, A(x, y) and B(x, y) denote the two input images, respectively, and the fused image F is shown in FIG. 4(f).
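Putting steps one to four together, a hedged end-to-end sketch of the fusion stage is shown below; it reuses the select_max and rolling_guided_filter helpers sketched above, and the parameter values follow the experimental settings quoted below (standard deviation 3, range weight 0.05, 4 iterations).

import numpy as np

def fuse(A, B, M, sigma_s=3.0, sigma_r=0.05, t=4):
    T = select_max(M)                                    # step two: initial segmentation
    D = rolling_guided_filter(T, sigma_s, sigma_r, t)    # step three: consistency check
    D = np.clip(D, 0.0, 1.0)
    return D * A + (1.0 - D) * B                         # step four: F = D*A + (1 - D)*B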
1. Experimental parameter settings
The experimental environment is an Intel(R) Core(TM) i3-8350 CPU @ 3.4 GHz with 16 GB of memory and an NVIDIA GeForce GTX 1080 Ti GPU. The training images were generated from the ILSVRC 2012 validation set, which contains 50000 high-quality natural images from the ImageNet dataset. For each image, five blurred versions with different blur levels were obtained using a Gaussian filter with a standard deviation of 2 and a kernel size of 7 × 7. Training used stochastic gradient descent (SGD) to minimize the loss function. The batch size was set to 128, the momentum and weight decay were set to 0.9 and 0.0005 respectively, all weights were initialized with the Xavier algorithm, and the learning rate was 0.0001. In the consistency-check step, the standard deviation was set to 3, the range weight to 0.05, and the number of iterations to 4.
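The training configuration quoted above can be expressed as in the following PyTorch sketch, written against the FocusNet sketch given after FIG. 2; the cross-entropy loss is an assumption, since the text only states that a loss function is minimised for the two-class focus labels.

import torch
import torch.nn as nn

net = FocusNet()
for m in net.modules():
    if isinstance(m, (nn.Conv2d, nn.Linear)):
        nn.init.xavier_uniform_(m.weight)        # Xavier initialisation of all weights
        nn.init.zeros_(m.bias)

criterion = nn.CrossEntropyLoss()                # assumed two-class loss
optimizer = torch.optim.SGD(net.parameters(), lr=1e-4,
                            momentum=0.9, weight_decay=5e-4)

# Training loop over batches of 128 patch pairs (pa, pb) with 0/1 focus labels:
# for pa, pb, labels in loader:
#     optimizer.zero_grad()
#     loss = criterion(net(pa, pb), labels)
#     loss.backward()
#     optimizer.step()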
The experimental data come from a standard image fusion image library. The proposed fusion method is compared with four commonly used representative multi-focus image fusion methods: cross bilateral filtering (CBF), weighted least squares (WLS), convolutional sparse representation (CSR) and the convolutional neural network (CNN) method; the related parameters of the comparison methods take the optimal values given in the corresponding papers. Objective evaluation plays an important role in image fusion, because the performance of a fusion method is mainly evaluated quantitatively through several objective evaluation indexes. In the present invention, the four objective metrics chosen are: 1) normalized mutual information (QNMI), which measures the amount of mutual information between the fused image and the source images; 2) a gradient-based metric (QG), which evaluates the degree of spatial detail injected into the fused image from the source images; 3) a structural-similarity-based metric (QSS), which measures the amount of structural information retained in the fused image; and 4) a human-perception-based metric (QHP), which addresses the main features of the human visual system. For each of these four metrics, a larger value indicates better fusion performance.
2. Content of the experiment
In the method, a convolutional neural network for focus detection is constructed to generate a direct mapping from the source images to the focus map, avoiding manual design and yielding a pixel-level map of focused and defocused regions; a high-quality fused image is then obtained through dynamic guided filtering with small-region removal and edge preservation. The feasibility of the method is further verified by evaluating the quality of the generated fused images with the normalized mutual information QNMI, the gradient-based metric QG, the structural-similarity-based metric QSS and the human-perception-based metric QHP, and by comparison with different image fusion algorithms. The principle of the image fusion algorithm based on the convolutional neural network and dynamic guided filtering is shown in FIG. 1. FIG. 2 and FIG. 3 show the structure of the convolutional neural network designed by the invention and the structure of the dynamic guided filtering, respectively. The results obtained at each of the four steps of the method are shown in FIG. 4. A comparison of the different algorithms is shown in FIG. 5.
3. Evaluation index
The performance of image fusion is generally evaluated with objective indexes, but there is no unified criterion for selecting them, so four criteria generally accepted in image fusion are used for evaluation.
(1) Normalized mutual information QNMI
Hossny proposed a normalized mutual information index to effectively evaluate the performance of image fusion algorithms. The index is calculated as:
QNMI = 2 [ MI(A, F) / (H(A) + H(F)) + MI(B, F) / (H(B) + H(F)) ] (11)
where H(A), H(B) and H(F) denote the entropies of the source image A, the source image B and the fused image F, respectively; MI(A, F) denotes the mutual information between A and F, and MI(B, F) denotes the mutual information between B and F.
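A short sketch of computing this index with 256-bin histograms is given below; it assumes 8-bit grayscale inputs, and the histogram-based entropy and mutual information estimates are illustrative rather than the exact implementation used in the experiments.

import numpy as np

def entropy(img, bins=256):
    p, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = p / p.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mutual_information(x, y, bins=256):
    pxy, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins, range=[[0, 255], [0, 255]])
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)          # marginal distribution of x
    py = pxy.sum(axis=0, keepdims=True)          # marginal distribution of y
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def q_nmi(A, B, F):
    return 2.0 * (mutual_information(A, F) / (entropy(A) + entropy(F)) +
                  mutual_information(B, F) / (entropy(B) + entropy(F)))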
(2) Gradient-based metric QG
QG is a commonly used fusion metric that evaluates how much gradient information is injected into the fused image from the source images; a higher value indicates a sharper fused image. QG is computed as:
QG = [ Σx Σy ( QAF(x, y) wA(x, y) + QBF(x, y) wB(x, y) ) ] / [ Σx Σy ( wA(x, y) + wB(x, y) ) ] (12)
where QAF(x, y) = QgAF(x, y) QαAF(x, y), in which QgAF(x, y) and QαAF(x, y) denote the edge strength and orientation preservation values at pixel (x, y); QBF(x, y) is defined similarly. The weighting factors wA(x, y) and wB(x, y) represent the significance of QAF(x, y) and QBF(x, y), respectively.
(3) structure similarity based metric QSS
The QSS metric measures the amount of structural information retained in the fused image; a higher value indicates more complete retention of information in the fused image. QSS combines the local structural similarities over sliding windows w, where:
SSIM(A, F | w) is the structural similarity between A and F over window w, defined as SSIM(A, F | w) = l(m, n) · c(m, n) · s(m, n), in which l(m, n) denotes the luminance similarity of the two images, c(m, n) the contrast similarity and s(m, n) the structural similarity; SSIM(B, F | w) and SSIM(A, B | w) are defined similarly. w is a window of size 7 × 7, and the weight τ(w) is defined as:
τ(w) = s(A | w) / (s(A | w) + s(B | w)) (14)
where s(A | w) and s(B | w) are the variances of the source images A and B over the window w, respectively.
(4) Human perception based metric QHP
QHP is a fusion metric based on human perception that utilizes the principal features of a human visual system model to compare the contrast features of the source images with those of the fused image. Let QAF and QBF denote the contrast information of the fused image F retained from source image A and source image B, respectively. A Global Quality Map (GQM) is first calculated:
QGQM(i, j) = λA(i, j) QAF(i, j) + λB(i, j) QBF(i, j) (15)
where λA and λB represent the significance of QAF and QBF, respectively. QGQM is then averaged to obtain QHP.
4. Simulation test
As can be seen from FIG. 5, the fused image obtained by the image fusion algorithm based on cross bilateral filtering (CBF) has reduced contrast, which fails to meet requirements in applications demanding high contrast; the fused image obtained by the weighted-least-squares (WLS) based algorithm contains a tiny structure (the flower in the middle of the left cup) that does not exist in the original images, i.e. a structural distortion; the fused image obtained by the convolutional-sparse-representation (CSR) based algorithm shows blur distortion at the edges of the focused/defocused transition, so its fusion result is not ideal; the fused images obtained by the convolutional-neural-network (CNN) based algorithm and by the proposed method differ little in subjective appearance and are hard to distinguish by eye, so a specific objective evaluation is carried out according to the objective evaluation criteria in Table 1.
As can be seen from Table 1, the proposed image fusion algorithm based on the convolutional neural network and dynamic guided filtering achieves the best results on all four evaluation indexes: the normalized mutual information QNMI, the gradient-based metric QG, the structural-similarity-based metric QSS and the human-perception-based metric QHP. Therefore, the algorithm proposed by the present invention outperforms the image fusion methods based on cross bilateral filtering (CBF), weighted least squares (WLS), convolutional sparse representation (CSR) and the convolutional neural network (CNN).
TABLE 1 image fusion Effect of different algorithms under different objective evaluation criteria

Claims (1)

1. An image fusion method based on a convolutional neural network and dynamic guided filtering is characterized by comprising the following steps:
Step 1: input a source image A and a source image B into a CNN model for training to obtain a score map S, wherein the CNN model contains three convolutional layers and one max pooling layer, the kernel size and stride of each convolutional layer being set to 3 × 3 and 1 respectively, and the kernel size and stride of the max pooling layer being set to 2 × 2 and 2 respectively; the output of the network, i.e. the score map S, is a two-dimensional vector fully connected to a 256-dimensional vector, giving a probability distribution over the two classes; the score map S is expanded pixel-by-pixel in 2 × 2 blocks, and pixels covered by overlapping blocks are averaged, to obtain a focus map M with the same size as the source images;
Step 2: segment the focus map M into a binary map T(x, y) using the "select-max" strategy:
Step 3: remove small regions of misclassified pixels in the binary map T(x, y) and smooth the edges using dynamic guided filtering to obtain a decision map D(x, y):
D(x, y) = RGF(T, σs, σr, t)
where RGF denotes the dynamic guided filtering operation, σs denotes the standard deviation, σr denotes the range weight, and t denotes the number of iterations;
Step 4: process the decision map D(x, y) obtained in step 3 according to the pixel weighted-average rule to obtain the final fused image F:
F(x, y) = D(x, y) A(x, y) + (1 - D(x, y)) B(x, y)
where A(x, y) and B(x, y) respectively denote the two input images.
CN201910803493.5A 2019-08-28 2019-08-28 Image fusion method based on convolutional neural network and dynamic guide filtering Pending CN110555820A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910803493.5A CN110555820A (en) 2019-08-28 2019-08-28 Image fusion method based on convolutional neural network and dynamic guide filtering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910803493.5A CN110555820A (en) 2019-08-28 2019-08-28 Image fusion method based on convolutional neural network and dynamic guide filtering

Publications (1)

Publication Number Publication Date
CN110555820A true CN110555820A (en) 2019-12-10

Family

ID=68736762

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910803493.5A Pending CN110555820A (en) 2019-08-28 2019-08-28 Image fusion method based on convolutional neural network and dynamic guide filtering

Country Status (1)

Country Link
CN (1) CN110555820A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111080567A (en) * 2019-12-12 2020-04-28 长沙理工大学 Remote sensing image fusion method and system based on multi-scale dynamic convolution neural network
CN111666807A (en) * 2020-04-20 2020-09-15 浙江工业大学 Multi-source fingerprint image fusion method based on convolution sparse representation
CN112184646A (en) * 2020-09-22 2021-01-05 西北工业大学 Image fusion method based on gradient domain oriented filtering and improved PCNN
CN113822828A (en) * 2021-08-18 2021-12-21 吉林大学 Multi-focus image fusion method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108596872A (en) * 2018-03-08 2018-09-28 北京交通大学 The detection method of rail disease based on Gabor wavelet and SVM
CN108629757A (en) * 2018-05-08 2018-10-09 山东理工大学 Image interfusion method based on complex shear wave conversion Yu depth convolutional neural networks
CN108830818A (en) * 2018-05-07 2018-11-16 西北工业大学 A kind of quick multi-focus image fusing method
CN109191413A (en) * 2018-08-21 2019-01-11 西京学院 A kind of multi-focus image fusing method based on modified convolutional neural networks
CN109360179A (en) * 2018-10-18 2019-02-19 上海海事大学 A kind of image interfusion method, device and readable storage medium storing program for executing
CN109410158A (en) * 2018-08-21 2019-03-01 西安电子科技大学 A kind of Multi-focal-point image fusion method based on convolutional neural networks

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108596872A (en) * 2018-03-08 2018-09-28 北京交通大学 The detection method of rail disease based on Gabor wavelet and SVM
CN108830818A (en) * 2018-05-07 2018-11-16 西北工业大学 A kind of quick multi-focus image fusing method
CN108629757A (en) * 2018-05-08 2018-10-09 山东理工大学 Image interfusion method based on complex shear wave conversion Yu depth convolutional neural networks
CN109191413A (en) * 2018-08-21 2019-01-11 西京学院 A kind of multi-focus image fusing method based on modified convolutional neural networks
CN109410158A (en) * 2018-08-21 2019-03-01 西安电子科技大学 A kind of Multi-focal-point image fusion method based on convolutional neural networks
CN109360179A (en) * 2018-10-18 2019-02-19 上海海事大学 A kind of image interfusion method, device and readable storage medium storing program for executing

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JINLEI MA et al.: "Infrared and visible image fusion based on visual saliency map and weighted least square optimization", Infrared Physics & Technology *
YU LIU et al.: "Image Fusion With Convolutional Sparse Representation", IEEE Signal Processing Letters *
HONG Ming et al.: "Multi-scale image fusion algorithm based on dynamic guided filtering", Straits Science *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111080567A (en) * 2019-12-12 2020-04-28 长沙理工大学 Remote sensing image fusion method and system based on multi-scale dynamic convolution neural network
CN111080567B (en) * 2019-12-12 2023-04-21 长沙理工大学 Remote sensing image fusion method and system based on multi-scale dynamic convolutional neural network
CN111666807A (en) * 2020-04-20 2020-09-15 浙江工业大学 Multi-source fingerprint image fusion method based on convolution sparse representation
CN111666807B (en) * 2020-04-20 2023-06-30 浙江工业大学 Multi-source fingerprint image fusion method based on convolution sparse representation
CN112184646A (en) * 2020-09-22 2021-01-05 西北工业大学 Image fusion method based on gradient domain oriented filtering and improved PCNN
CN113822828A (en) * 2021-08-18 2021-12-21 吉林大学 Multi-focus image fusion method

Similar Documents

Publication Publication Date Title
Lu et al. Multi-scale adversarial network for underwater image restoration
CN112446270B (en) Training method of pedestrian re-recognition network, pedestrian re-recognition method and device
Zhou et al. Unsupervised learning of stereo matching
CN107680054B (en) Multi-source image fusion method in haze environment
CN110555820A (en) Image fusion method based on convolutional neural network and dynamic guide filtering
CN112446380A (en) Image processing method and device
CN104217404A (en) Video image sharpness processing method in fog and haze day and device thereof
CN111476806B (en) Image processing method, image processing device, computer equipment and storage medium
CN110136075B (en) Remote sensing image defogging method for generating countermeasure network based on edge sharpening cycle
CN113129236B (en) Single low-light image enhancement method and system based on Retinex and convolutional neural network
CN111652817B (en) Underwater image sharpening method based on human eye visual perception mechanism
CN114125216B (en) Imaging system and imaging method for software defined satellite
Fan et al. Multi-scale depth information fusion network for image dehazing
CN110222718A (en) The method and device of image procossing
Swami et al. Candy: Conditional adversarial networks based fully end-to-end system for single image haze removal
Jia et al. Effective meta-attention dehazing networks for vision-based outdoor industrial systems
CN114332166A (en) Visible light infrared target tracking method and device based on modal competition cooperative network
CN113971644A (en) Image identification method and device based on data enhancement strategy selection
CN112200887A (en) Multi-focus image fusion method based on gradient perception
CN110135508B (en) Model training method and device, electronic equipment and computer readable storage medium
Saleem et al. A non-reference evaluation of underwater image enhancement methods using a new underwater image dataset
Singh et al. Construction of fused image with improved depth-of-field based on guided co-occurrence filtering
CN116129417A (en) Digital instrument reading detection method based on low-quality image
Galetto et al. Single image defocus map estimation through patch blurriness classification and its applications
CN114841887A (en) Image restoration quality evaluation method based on multi-level difference learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20191210)