CN110555820A — Image fusion method based on convolutional neural network and dynamic guided filtering
Publication number: CN110555820A (application CN201910803493.5A)
Authority: CN (China)
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Classifications
G06T5/50—Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
G06T2207/10148—Varying focus (special mode during image acquisition)
G06T2207/20081—Training; Learning
G06T2207/20084—Artificial neural networks [ANN]
Abstract
The invention relates to a novel multi-focus image fusion method based on a convolutional neural network and dynamic guided filtering. A direct mapping from the source images to a focus map is learned by constructing a convolutional neural network for focus detection, which avoids hand-crafted design and yields pixel-wise maps of the focused and defocused regions; a high-quality fused image is then obtained through a dynamic guided filtering operation that removes small misclassified regions while preserving edges.
Description
Technical Field
The invention relates to an image fusion algorithm for multi-focus images, which can be applied to various military or civilian image processing systems.
Background
Unmanned aerial vehicles (UAVs), serving as aerial reconnaissance and weapon platforms, generally carry multiple imaging sensors such as infrared, visible-light, laser and SAR sensors, the aim being to exploit the image information of all sensors jointly so that the UAV can better perform tasks such as aerial reconnaissance and battlefield monitoring. Image fusion integrates the images obtained by these sensors and uses their complementary information to produce an image with clear targets and scenes, which greatly helps the UAV to accurately reconnoitre, identify and locate ground targets. A fused image has higher definition and carries more information, and the target and scene information it conveys is more comprehensive and better suited to human visual perception; image fusion therefore plays an increasingly important role both in military and national defence applications (UAV autonomous navigation, remote-sensing target detection) and in civilian applications (such as medical diagnosis and photography).
In recent years, deep learning has been widely used in computer vision and image processing and has achieved significant success across different vision tasks. Its advantage is that large amounts of data can be used to train networks that learn target features, avoiding complex manual design while achieving better results. The document "Ma J, Zhou Z, Wang B, et al. Infrared and visible image fusion based on visual saliency map and weighted least square optimization. Infrared Physics & Technology" proposes a novel multi-scale fusion method based on weighted least-squares optimization: the input images are first decomposed into base and detail layers using a rolling guidance filter and a Gaussian filter; the base layers are then fused with a visual saliency map to remove residual low-frequency information; finally, a weighted least-squares method optimizes the detail-layer information. The fused images of that paper look more natural and fit human visual perception. The document "Liu Y, Chen X, Ward R J, et al. Image Fusion With Convolutional Sparse Representation. IEEE Signal Processing Letters" proposes a new fusion algorithm that first decomposes an image into a base layer and details by Gaussian filtering, then fuses the two-scale images with a method based on convolutional sparse representation, and finally obtains the fused image by weighted averaging; this overcomes the low detail-retention capability of traditional sparse-representation methods. The methods in the above documents achieve certain effects when used for image fusion, but they suffer from complex implementation, low efficiency and unsatisfactory fusion results. The invention therefore proposes an image fusion algorithm based on a convolutional neural network (CNN) and dynamic guided filtering.
Disclosure of Invention
Technical problem to be solved
Aiming at the problems of high algorithmic complexity, low efficiency and excessive dependence on manual design in traditional multi-focus image fusion, a novel multi-focus image fusion method based on a convolutional neural network and dynamic guided filtering is provided. A direct mapping from the source images to a focus map is learned by constructing a convolutional neural network for focus detection, which avoids hand-crafted design and yields pixel-wise maps of the focused and defocused regions; a high-quality fused image is then obtained through a dynamic guided filtering operation that removes small misclassified regions while preserving edges.
Technical scheme
An image fusion method based on a convolutional neural network and dynamic guided filtering is characterized by comprising the following steps:
Step 1: input source images A and B into a CNN model for training to obtain a score map S. The CNN model comprises three convolutional layers and one max-pooling layer; the kernel size and stride of each convolutional layer are set to 3 × 3 and 1 respectively, and the kernel size and stride of the max-pooling layer are set to 2 × 2 and 2 respectively. The output of the network is a two-dimensional vector fully connected to a 256-dimensional vector, giving a probability distribution over the two classes. The score map S is expanded block-wise by 2 × 2 per coefficient, pixels covered by overlapping blocks are averaged, and a focus map M with the same size as the source images is obtained;
Step 2: binarize the focus map M into a binary map T(x, y) by adopting the "select-max" strategy: T(x, y) = 1 if M(x, y) > 0.5, and T(x, y) = 0 otherwise;
Step 3: apply dynamic guided filtering to the binary map T(x, y) to remove small regions of misclassified pixels and smooth the edges, obtaining a decision map D(x, y):
D(x, y) = RGF(T, σ_s, σ_r, t)
where RGF denotes the dynamic guided filtering operation, σ_s denotes the standard deviation, σ_r denotes the range weight, and t denotes the number of iterations;
Step 4: process the decision map D(x, y) obtained in step 3 according to the pixel weighted-average rule to obtain the final fused image F:
F(x, y) = D(x, y)A(x, y) + (1 − D(x, y))B(x, y)
where A(x, y) and B(x, y) denote the two input images respectively.
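As a rough illustration of steps 2 to 4, the sketch below binarizes a given focus map, smooths it into a decision map, and blends the two sources. The CNN of step 1 is assumed to have already produced the focus map M, and a plain box filter stands in for the dynamic guided filter RGF; the box filter and the 0.5 threshold are illustrative stand-ins, not the patented implementation:

```python
import numpy as np

def box_filter(img, radius=2):
    # Crude, edge-unaware smoother standing in for the dynamic guided
    # filtering operation RGF(T, sigma_s, sigma_r, t) of step 3.
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    k = 2 * radius + 1
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse(A, B, M):
    # Step 2: "select-max" binarization of the focus map M.
    T = (M > 0.5).astype(float)
    # Step 3: smooth T into the decision map D (stand-in for RGF).
    D = box_filter(T)
    # Step 4: pixel-weighted average, F = D*A + (1 - D)*B.
    return D * A + (1.0 - D) * B

A = np.full((8, 8), 1.0)   # toy "source image A"
B = np.zeros((8, 8))       # toy "source image B"
M = np.ones((8, 8))        # A is everywhere in focus
F = fuse(A, B, M)
```

With M indicating that A is fully in focus, the fused result simply reproduces A, as expected from the weighted-average rule.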
Advantageous effects
The invention provides an image fusion method based on a convolutional neural network and dynamic guided filtering. Aiming at the problems of high algorithmic complexity, low efficiency and excessive dependence on manual design in traditional multi-focus image fusion, a novel method for multi-focus image fusion using a convolutional neural network and dynamic guided filtering is provided. A direct mapping from the source images to a focus map is learned by constructing a convolutional neural network for focus detection, which avoids hand-crafted design and yields pixel-wise maps of the focused and defocused regions; a high-quality fused image is then obtained through a dynamic guided filtering operation that removes small misclassified regions while preserving edges. Experimental results show that the method overcomes the algorithmic complexity and unsatisfactory fusion results of traditional image fusion methods and further improves the fusion quality of the images.
Drawings
FIG. 1: image fusion algorithm flow based on convolutional neural network and dynamic guided filtering
FIG. 2: convolutional neural network model used in the present invention
FIG. 3: dynamic guided filtering model used in the invention
FIG. 4: Multi-focus source images and the intermediate images produced by the proposed method
FIG. 5: comparison of results of different algorithms
Detailed Description
The invention will now be further described with reference to the following examples and drawings:
The invention relates to an image fusion method based on a convolutional neural network and dynamic guided filtering, whose specific implementation is as follows:
Focus detection based on convolutional neural network
CNN is a typical deep learning model that learns hierarchical feature representations of signal or image data at different levels of abstraction. CNN is a trainable multi-stage feed-forward artificial neural network containing a certain number of feature maps per stage, each corresponding to one level of feature abstraction.
Local receptive fields, shared weights and sub-sampling are the three basic architectural ideas of CNN. The local receptive field means that a neuron in a given stage is connected only to several spatially adjacent neurons in the previous stage, with different connections corresponding to different parameters; adopting local receptive fields therefore reduces the number of parameters to be trained. Weight sharing means that the weights of a convolution kernel are spatially invariant within the feature map of a particular stage, i.e. the same kernel is used to extract the same feature, which further reduces the parameters to be trained. Combining the two ideas greatly reduces the number of trainable parameters. Let x^i and y^j denote the i-th input feature map and the j-th output feature map of a convolutional layer. Convolution and the nonlinear ReLU activation in CNN are jointly expressed as:
y^j = max(0, Σ_i k^{ij} * x^i + b^j)
where k^{ij} is the convolution kernel between x^i and y^j, b^j is the bias, and * denotes the convolution operation. Sub-sampling, also known as pooling, is used to reduce data dimensionality; max pooling and average pooling are common operations in CNNs. Max pooling is expressed as
y^i_{r,c} = max_{0 ≤ u,v < s} x^i_{r·m+u, c·n+v}
where y^i_{r,c} is the neuron at (r, c) in the i-th output map of the max-pooling layer, the maximum is taken over a local region of size s × s in the i-th input map x^i, and m, n are the step sizes. By combining the above three ideas, CNN obtains, to some extent, important invariance to translation and scaling.
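As a concrete illustration of the max-pooling expression above, a minimal numpy sketch; the window size s and the strides m = n = stride are its only parameters:

```python
import numpy as np

def max_pool(x, s=2, stride=2):
    # Each output neuron y[r, c] takes the maximum over an s-by-s
    # window of the input map, with windows advancing by `stride`
    # (i.e. m = n = stride in the pooling expression above).
    H, W = x.shape
    out_h = (H - s) // stride + 1
    out_w = (W - s) // stride + 1
    y = np.empty((out_h, out_w), dtype=x.dtype)
    for r in range(out_h):
        for c in range(out_w):
            y[r, c] = x[r*stride:r*stride + s, c*stride:c*stride + s].max()
    return y

x = np.arange(16.0).reshape(4, 4)
y = max_pool(x)   # 2x2 output holding the max of each 2x2 window
```

With s = 2 and stride 2 (the values used by the network in this patent), a 4 × 4 map reduces to 2 × 2, halving each spatial dimension.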
In the present invention, multi-focus image fusion is treated as a two-class classification problem. For a pair of image patches p_A, p_B of the same scene, the goal is to learn a convolutional neural network model whose output is a scalar from 0 to 1. Specifically, when p_A is focused and p_B is defocused, the output should be close to 1; when p_B is focused and p_A is defocused, the output should be close to 0. The output value thus represents the focus property of the patch pair.
Fig. 2 shows the convolutional neural network model used in the fusion algorithm proposed by the invention. Each branch of the network has three convolutional layers and one max-pooling layer. The kernel size and stride of each convolutional layer are set to 3 × 3 and 1 respectively; the kernel size and stride of the max-pooling layer are set to 2 × 2 and 2 respectively. The output of the network is a two-dimensional vector fully connected to a 256-dimensional vector, giving a probability distribution over the two classes.
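A quick sanity check of the spatial dimensions implied by this architecture, assuming the 3 × 3 convolutions use "same" padding (padding 1), which the patent does not state explicitly:

```python
def conv_out(n, k=3, stride=1, pad=1):
    # Output spatial size of a convolution:
    # floor((n + 2*pad - k) / stride) + 1.
    return (n + 2 * pad - k) // stride + 1

def pool_out(n, k=2, stride=2):
    # Output spatial size of a pooling layer (no padding).
    return (n - k) // stride + 1

n = 16                 # one 16x16 input patch
for _ in range(3):     # three 3x3 stride-1 convolutions keep the size
    n = conv_out(n)
n = pool_out(n)        # the single 2x2 stride-2 max pool halves it
# n == 8: consistent with the description that the score map is
# half the resolution of the source image.
```

Under this padding assumption the only spatial reduction comes from the pooling layer, matching the later statement that "the image is reduced to half of the original size".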
Image fusion based on dynamic guided filtering
After the focus map of the image is obtained through the convolutional neural network, a post-processing operation is needed to obtain the fused image. The post-processing adopted by the invention must both smooth edges and correct misclassified pixels. When smoothing an image, an edge-preserving filter can preserve the boundaries of the image structure, reducing halo artifacts and maintaining the spatial consistency of the structure; fusion methods based on edge-preserving filters therefore typically achieve better performance. However, most edge-preserving filters preserve edges only according to the contrast of the image content, regardless of scale, so those filters are not suitable for the image fusion algorithm of the invention. The dynamic guided filtering adopted by the invention is both scale-aware and edge-preserving. Fig. 3 shows the dynamic guided filtering model, which mainly comprises two steps: small-structure removal and edge restoration.
First, the small-structure removal step mainly uses a Gaussian filter: the input image I is processed by the Gaussian filter to obtain an output image G, expressed as:
G_p = (1/K_p) Σ_{q∈N(p)} exp(−‖p − q‖² / 2σ_s²) I_q    (5)
where p and q denote pixel coordinates, σ_s denotes the standard deviation, K_p = Σ_{q∈N(p)} exp(−‖p − q‖² / 2σ_s²) serves for normalization, and N(p) denotes the set of window pixels centred on pixel p.
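The small-structure removal of Eq. (5) can be sketched in numpy as follows; this is a brute-force, non-separable implementation with edge padding, and the border normalization is approximated by the constant interior K_p, an implementation shortcut rather than part of the patent:

```python
import numpy as np

def gaussian_filter(I, sigma_s=3.0, radius=None):
    # Eq. (5): each output pixel is a spatially Gaussian-weighted
    # average over its neighbourhood N(p), normalised by K_p.
    if radius is None:
        radius = int(3 * sigma_s)
    ax = np.arange(-radius, radius + 1)
    yy, xx = np.meshgrid(ax, ax, indexing="ij")
    w = np.exp(-(xx**2 + yy**2) / (2.0 * sigma_s**2))
    P = np.pad(I, radius, mode="edge")
    G = np.zeros_like(I, dtype=float)
    K = w.sum()   # interior K_p; constant because the weights are fixed
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            G += w[dy, dx] * P[dy:dy + I.shape[0], dx:dx + I.shape[1]]
    return G / K
```

A constant image passes through unchanged, since the normalised weights sum to one; structures smaller than roughly sigma_s are averaged away, which is exactly the role of this step.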
Second, the edge-restoration step iterates with a joint filter. The guided filter has high computational efficiency and good edge-preserving behaviour, so the invention selects it as the joint filter. This step is an iterative process in which the Gaussian-smoothed image G is taken as the initial image J_1 and restored to the image J_t by iterative updating, where the t-th iteration can be expressed as:
J^{t+1}_p = (1/K_p) Σ_{q∈N(p)} exp(−‖p − q‖² / 2σ_s² − (J^t_p − J^t_q)² / 2σ_r²) I_q    (6)
where σ_r denotes the range weight, K_p again serves for normalization, and t denotes the number of iterations.
Combining equation (5) with equation (6), the dynamic guided filtering can be expressed as:
J = RGF(I, σ_s, σ_r, t)    (7)
where I denotes the input image, σ_s the standard deviation, σ_r the range weight, t the number of iterations, J the output image, and RGF the dynamic guided filtering operation.
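Eq. (7) can be sketched end-to-end in numpy. This sketch uses the joint-bilateral form of the iteration in Eq. (6); the patent prefers a guided filter as the joint filter for efficiency, so this is a functionally similar stand-in rather than the exact implementation, and the constant initial image here plays the role of the heavily smoothed start J_1:

```python
import numpy as np

def rgf(I, sigma_s=3.0, sigma_r=0.05, t=4, radius=2):
    # Dynamic (rolling) guided filtering, Eq. (7): t joint-filtering
    # iterations (Eq. (6)) that restore large-scale edges of I while
    # small structures remain removed. Brute-force O(radius^2) loop.
    H, W = I.shape
    ax = np.arange(-radius, radius + 1)
    yy, xx = np.meshgrid(ax, ax, indexing="ij")
    spatial = np.exp(-(xx**2 + yy**2) / (2.0 * sigma_s**2))
    J = np.full_like(I, I.mean(), dtype=float)  # flat start: J_1
    Ip = np.pad(I, radius, mode="edge")
    for _ in range(t):
        Jp = np.pad(J, radius, mode="edge")
        num = np.zeros((H, W))
        den = np.zeros((H, W))
        for dy in range(2 * radius + 1):
            for dx in range(2 * radius + 1):
                Iq = Ip[dy:dy + H, dx:dx + W]
                Jq = Jp[dy:dy + H, dx:dx + W]
                # spatial weight times range weight on the guide J
                w = spatial[dy, dx] * np.exp(-((J - Jq) ** 2)
                                             / (2.0 * sigma_r ** 2))
                num += w * Iq
                den += w
        J = num / den   # K_p normalisation
    return J
```

On a binary step image the filter converges back towards the sharp step, illustrating the edge-restoration behaviour that makes RGF suitable for cleaning the binary decision map.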
The method comprises the following specific steps:
Step one: focus detection
Let A and B denote the two source images. A score map S is obtained by feeding A and B to the trained CNN model. The value of each coefficient in S lies in the range 0 to 1 and indicates the focus property of a pair of patches of size 16 × 16 in the source images (see fig. 4(c)). The closer a value is to 1, the more focused the patch from source image A; the closer it is to 0, the more focused the patch from source image B. Because the convolution operations are followed by pooling, the score map is half the size of the source images; to make the final fused image the same size as the sources, each coefficient of S is mapped back to the corresponding position in the focus map M, overlapping parts are averaged, and the resulting focus map M has the same size as the source images.
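The overlap-and-average reconstruction of the focus map can be sketched as follows; the patch size of 16 and the stride of 2 are assumptions consistent with the 16 × 16 patches and the half-resolution score map described above:

```python
import numpy as np

def scores_to_focus_map(S, patch=16, stride=2, out_shape=None):
    # Each score S[i, j] rates one patch of the source image; every
    # pixel that patch covers receives the score, and pixels covered
    # by several overlapping patches take the average of their scores.
    if out_shape is None:
        out_shape = ((S.shape[0] - 1) * stride + patch,
                     (S.shape[1] - 1) * stride + patch)
    acc = np.zeros(out_shape)
    cnt = np.zeros(out_shape)
    for i in range(S.shape[0]):
        for j in range(S.shape[1]):
            r, c = i * stride, j * stride
            acc[r:r + patch, c:c + patch] += S[i, j]
            cnt[r:r + patch, c:c + patch] += 1.0
    return acc / cnt
```

A uniform score map reconstructs to a uniform focus map of the expected full size, and disagreeing neighbouring scores blend smoothly in their overlap region.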
Step two: initial segmentation
The image resulting from focus detection requires further processing in order to preserve as much useful information as possible. As with most spatial-domain multi-focus image fusion methods, the "select-max" strategy is employed to process the focus map M, which is binarized as:
T(x, y) = 1 if M(x, y) > 0.5, and T(x, y) = 0 otherwise    (8)
The resulting binary map T is shown in fig. 4(d). Almost all gray pixels in the focus map are correctly classified, which indicates that the learned CNN model achieves accurate performance even for flat regions of the source images.
Step three: consistency check
As can be seen from fig. 4(d), the binary map may contain some small regions of misclassified pixels, and the transition regions between the focused and defocused areas are unnatural and appear jagged. If the binary map were used directly in the subsequent operations, the fused image would contain artifacts in the edge regions and its quality would be low, so the binary map must be processed. The invention adopts dynamic guided filtering to remove small regions of misclassified pixels in the binary map and to smooth the edges, obtaining a decision map D, as shown in fig. 4(e).
D(x, y) = RGF(T, σ_s, σ_r, t)    (9)
where RGF denotes the dynamic guided filtering operation, T denotes the binary map obtained in the previous step, σ_s the standard deviation, σ_r the range weight, and t the number of iterations. After this consistency-check operation, all pixels in the decision map are correctly classified and the edge transitions are natural.
Step four: image fusion
Finally, the decision map D obtained in the previous step is processed according to the pixel weighted-average rule to obtain the final fused image F.
F(x, y) = D(x, y)A(x, y) + (1 − D(x, y))B(x, y)    (10)
where D(x, y) denotes the decision map, A(x, y) and B(x, y) denote the two input images respectively, and the fused image F is shown in fig. 4(f).
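Eq. (10) is a one-line pixel-wise blend; a minimal numpy sketch:

```python
import numpy as np

def weighted_fusion(A, B, D):
    # Eq. (10): F(x, y) = D(x, y)*A(x, y) + (1 - D(x, y))*B(x, y).
    # D close to 1 keeps the pixel from A, D close to 0 keeps it
    # from B; intermediate values blend across region boundaries.
    return D * A + (1.0 - D) * B

A = np.full((4, 4), 2.0)
B = np.zeros((4, 4))
F = weighted_fusion(A, B, np.full((4, 4), 0.25))
```

Because the decision map produced by the filtering is continuous rather than binary, the transitions between focused and defocused regions in F are gradual, avoiding hard seams.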
1. Experimental parameter settings
The experimental environment is an Intel(R) Core(TM) i3-8350 CPU @ 3.4 GHz with 16 GB of memory and an NVIDIA GeForce GTX 1080 Ti GPU. The training pictures were generated from images in the ILSVRC 2012 validation set, which contains 50,000 high-quality natural images from the ImageNet dataset. For each image, five blurred versions with different blur levels were obtained using a Gaussian filter with a standard deviation of 2 and a window size of 7 × 7. Training used stochastic gradient descent (SGD) to minimize the loss function. The batch size was set to 128, the momentum and weight decay to 0.9 and 0.0005 respectively, all weights were initialized with the Xavier algorithm, and the learning rate was 0.0001. In the consistency-check process, the standard deviation is set to 3, the range weight to 0.05, and the number of iterations to 4.
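Generation of a (focused, defocused) training pair can be sketched as follows, using the stated Gaussian blur with standard deviation 2 and a 7 × 7 window; the labelling convention and the single blur level shown are illustrative, whereas the patent uses five blur levels per image:

```python
import numpy as np

def gaussian_kernel(size=7, sigma=2.0):
    # 7x7 Gaussian with standard deviation 2, normalised to sum to 1,
    # as used to synthesise defocused training patches.
    ax = np.arange(size) - size // 2
    yy, xx = np.meshgrid(ax, ax, indexing="ij")
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def blur(img, k):
    # Direct 2-D convolution with edge padding.
    r = k.shape[0] // 2
    P = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k.shape[0]):
        for dx in range(k.shape[1]):
            out += k[dy, dx] * P[dy:dy + img.shape[0],
                                 dx:dx + img.shape[1]]
    return out

# One training pair: the sharp patch is labelled "focused" (1),
# the blurred patch "defocused" (0).
sharp = np.random.default_rng(0).random((16, 16))
defocused = blur(sharp, gaussian_kernel())
```

Feeding many such pairs to the two-branch network teaches it to score which of two co-located patches is in focus, without any hand-designed focus measure.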
The experimental data come from a standard image fusion image library. The proposed fusion method is compared with four commonly used, representative multi-focus image fusion methods, based respectively on cross bilateral filtering (CBF), weighted least squares (WLS), convolutional sparse representation (CSR) and a convolutional neural network (CNN); the parameters of the comparison methods take the optimal values given in the corresponding papers. Objective evaluation plays an important role in image fusion, since the performance of a fusion method is mainly assessed quantitatively through several objective indexes. In the invention, the four objective metrics chosen are: 1) normalized mutual information (QNMI), which measures the amount of mutual information between the fused image and the source images; 2) a gradient-based metric (QG), which evaluates the degree of spatial detail injected into the fused image from the source images; 3) a structural-similarity-based metric (QSS), which measures the amount of structural information retained in the fused image; 4) a human-perception-based metric (QHP), which addresses the main features of the human visual system. For each of these four metrics, a larger value indicates better fusion performance.
2. Content of the experiment
In the method, a direct mapping from the source images to the focus map is learned by constructing a convolutional neural network for focus detection, avoiding hand-crafted design and yielding pixel-wise maps of the focused and defocused regions; a high-quality fused image is then obtained through the dynamic guided filtering operation of small-region removal and edge preservation. The feasibility of the method is further verified by evaluating the quality of the generated fused images with the normalized mutual information QNMI, the gradient-based metric QG, the structural-similarity-based metric QSS and the human-perception-based metric QHP, and by comparison with different image fusion algorithms. The principle of the image fusion algorithm based on the convolutional neural network and dynamic guided filtering is shown in fig. 1. Figs. 2 and 3 show the structure of the designed convolutional neural network and of the dynamic guided filtering respectively. The results of each of the four steps of the invention are shown in fig. 4, and a comparison of the different algorithms in fig. 5.
3. Evaluation index
Image fusion performance is generally assessed with objective evaluation indexes, but there is no single agreed criterion for choosing them, so four criteria generally accepted in image fusion are used here.
(1) Normalized mutual information QNMI
Hossny et al. proposed a normalized mutual information index to evaluate the performance of image fusion algorithms effectively. The index is calculated as:
QNMI = 2 [ MI(A, F) / (H(A) + H(F)) + MI(B, F) / (H(B) + H(F)) ]    (11)
where H(A), H(B) and H(F) denote the entropies of source image A, source image B and the fused image F respectively, MI(A, F) denotes the mutual information between source image A and the fused image F, and MI(B, F) the mutual information between source image B and the fused image F.
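Eq. (11) can be implemented directly from histograms; a minimal numpy sketch, with 256 gray-level bins assumed:

```python
import numpy as np

def entropy(img, bins=256):
    # Shannon entropy from the gray-level histogram.
    p = np.histogram(img, bins=bins, range=(0, 256))[0].astype(float)
    p = p[p > 0] / p.sum()
    return -(p * np.log2(p)).sum()

def mutual_info(x, y, bins=256):
    # MI(X, Y) = H(X) + H(Y) - H(X, Y), via the joint histogram.
    j = np.histogram2d(x.ravel(), y.ravel(), bins=bins,
                       range=[[0, 256], [0, 256]])[0].astype(float)
    pj = j[j > 0] / j.sum()
    h_joint = -(pj * np.log2(pj)).sum()
    return entropy(x, bins) + entropy(y, bins) - h_joint

def qnmi(A, B, F):
    # Eq. (11): 2*[ MI(A,F)/(H(A)+H(F)) + MI(B,F)/(H(B)+H(F)) ].
    return 2.0 * (mutual_info(A, F) / (entropy(A) + entropy(F)) +
                  mutual_info(B, F) / (entropy(B) + entropy(F)))
```

In the degenerate case A = B = F, each normalised term equals 1/2 and QNMI reaches its maximum of 2, so higher values indicate that the fused image shares more information with both sources.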
(2) Gradientbased metric QG
QG is a commonly used fusion metric that evaluates the amount of gradient information injected into the fused image from the source images; a higher value indicates a sharper fused image. QG is computed as:
QG = Σ_{x,y} [ Q^{AF}(x, y) w^A(x, y) + Q^{BF}(x, y) w^B(x, y) ] / Σ_{x,y} [ w^A(x, y) + w^B(x, y) ]    (12)
where Q^{AF}(x, y) = Q_g^{AF}(x, y) Q_α^{AF}(x, y), with Q_g^{AF}(x, y) and Q_α^{AF}(x, y) the edge strength and orientation preservation values at pixel (x, y); Q^{BF}(x, y) is defined similarly. The weighting factors w^A(x, y) and w^B(x, y) represent the significance of Q^{AF}(x, y) and Q^{BF}(x, y) respectively.
(3) Structural-similarity-based metric QSS
QSS measures the amount of structural information retained in the fused image; a higher value indicates more complete retention. QSS is computed per window as:
QSS = τ(w) SSIM(A, F | w) + (1 − τ(w)) SSIM(B, F | w)    (13)
where SSIM(A, F | w) is the structural similarity between A and F within window w, defined as SSIM(A, F | w) = l(m, n) · c(m, n) · s(m, n); l(m, n) denotes the luminance similarity of the two images, c(m, n) their contrast similarity, and s(m, n) their structural similarity; SSIM(B, F | w) and SSIM(A, B | w) are defined similarly. w is a window of size 7 × 7, and the weight τ(w) is defined as:
τ(w) = s(A | w) / ( s(A | w) + s(B | w) )    (14)
where s(A | w) and s(B | w) are the variances of source images A and B within window w.
(4) Human perception based metric QHP
QHP is a fusion metric based on human perception that uses the main features of a human visual system model to compare the contrast features of the source images with those of the fused image. Let Q_AF and Q_BF denote the contrast information that the fused image F retains from source images A and B respectively. First a global quality map (QGQM) is computed:
QGQM(i, j) = λ_A(i, j) Q_AF(i, j) + λ_B(i, j) Q_BF(i, j)    (15)
where λ_A and λ_B denote the significance of Q_AF and Q_BF respectively. QGQM is then averaged to obtain QHP.
4. Simulation test
As can be seen from fig. 5, the fused image obtained by the cross-bilateral-filtering (CBF) algorithm has reduced contrast, which fails the requirements of high-contrast applications; the fused image obtained by the weighted-least-squares (WLS) algorithm contains a tiny structure (the flower in the middle of the left cup) that does not exist in the original image, a structural distortion; the fused image obtained by the convolutional-sparse-representation (CSR) algorithm shows blur distortion at the edges of the focused-to-defocused transition, so its result is not ideal; the fused images obtained by the convolutional-neural-network (CNN) algorithm differ little in subjective appearance and are difficult to judge by eye, so a specific objective evaluation is carried out according to the objective criteria in table 1.
As can be seen from table 1, the proposed image fusion algorithm based on the convolutional neural network and dynamic guided filtering achieves the best results on all four evaluation indexes: the normalized mutual information QNMI, the gradient-based metric QG, the structural-similarity-based metric QSS and the human-perception-based metric QHP. The proposed algorithm is therefore superior to the image fusion methods based on cross bilateral filtering (CBF), weighted least squares (WLS), convolutional sparse representation (CSR) and the convolutional neural network (CNN).
TABLE 1 image fusion Effect of different algorithms under different objective evaluation criteria
Claims (1)
1. An image fusion method based on a convolutional neural network and dynamic guided filtering is characterized by comprising the following steps:
Step 1: input source images A and B into a CNN model for training to obtain a score map S. The CNN model comprises three convolutional layers and one max-pooling layer; the kernel size and stride of each convolutional layer are set to 3 × 3 and 1 respectively, and the kernel size and stride of the max-pooling layer are set to 2 × 2 and 2 respectively. The output of the network is a two-dimensional vector fully connected to a 256-dimensional vector, giving a probability distribution over the two classes. The score map S is expanded block-wise by 2 × 2 per coefficient, pixels covered by overlapping blocks are averaged, and a focus map M with the same size as the source images is obtained;
Step 2: binarize the focus map M into a binary map T(x, y) by adopting the "select-max" strategy: T(x, y) = 1 if M(x, y) > 0.5, and T(x, y) = 0 otherwise;
Step 3: apply dynamic guided filtering to the binary map T(x, y) to remove small regions of misclassified pixels and smooth the edges, obtaining a decision map D(x, y):
D(x, y) = RGF(T, σ_s, σ_r, t)
where RGF denotes the dynamic guided filtering operation, σ_s denotes the standard deviation, σ_r denotes the range weight, and t denotes the number of iterations;
Step 4: process the decision map D(x, y) obtained in step 3 according to the pixel weighted-average rule to obtain the final fused image F:
F(x, y) = D(x, y)A(x, y) + (1 − D(x, y))B(x, y)
where A(x, y) and B(x, y) denote the two input images respectively.
Priority Application (1)
CN201910803493.5A, filed 2019-08-28: Image fusion method based on convolutional neural network and dynamic guide filtering
Publication (1)
CN110555820A, published 2019-12-10 (pending)
Family ID: 68736762
Cited By (4)
CN111080567A (2020-04-28, 长沙理工大学): Remote sensing image fusion method and system based on multi-scale dynamic convolutional neural network
CN111666807A (2020-09-15, 浙江工业大学): Multi-source fingerprint image fusion method based on convolutional sparse representation
CN112184646A (2021-01-05, 西北工业大学): Image fusion method based on gradient-domain guided filtering and improved PCNN
CN113822828A (2021-12-21, 吉林大学): Multi-focus image fusion method
Patent Citations (6)
CN108596872A (2018-09-28, 北京交通大学): Rail defect detection method based on Gabor wavelets and SVM
CN108629757A (2018-10-09, 山东理工大学): Image fusion method based on complex shearlet transform and deep convolutional neural networks
CN108830818A (2018-11-16, 西北工业大学): A fast multi-focus image fusion method
CN109191413A (2019-01-11, 西京学院): A multi-focus image fusion method based on a modified convolutional neural network
CN109360179A (2019-02-19, 上海海事大学): An image fusion method, device and readable storage medium
CN109410158A (2019-03-01, 西安电子科技大学): A multi-focus image fusion method based on convolutional neural networks
Non-Patent Citations (3)
JINLEI MA et al.: "Infrared and visible image fusion based on visual saliency map and weighted least square optimization", Infrared Physics & Technology
YU LIU et al.: "Image Fusion With Convolutional Sparse Representation", IEEE Signal Processing Letters
洪铭 等: "基于动态引导滤波的多尺度图像融合算法" (Multi-scale image fusion algorithm based on dynamic guided filtering), 《海峡科学》 (Straits Science)
Legal Events
PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 2019-12-10)