CN112508828A - Multi-focus image fusion method based on sparse representation and guided filtering

Multi-focus image fusion method based on sparse representation and guided filtering

Info

Publication number
CN112508828A
Authority
CN
China
Prior art keywords
image
focus
filter
decision diagram
map
Prior art date
Legal status
Pending
Application number
CN201910869494.XA
Other languages
Chinese (zh)
Inventor
李启磊 (Li Qilei)
童嘉蕙 (Tong Jiahui)
杨晓敏 (Yang Xiaomin)
Current Assignee
Sichuan University
Original Assignee
Sichuan University
Priority date
Filing date
Publication date
Application filed by Sichuan University filed Critical Sichuan University
Priority to CN201910869494.XA priority Critical patent/CN112508828A/en
Publication of CN112508828A publication Critical patent/CN112508828A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging

Abstract

The invention discloses a multi-focus image fusion method comprising the following steps: (1) blur the natural training-sample images several times; (2) randomly extract image blocks of a fixed size from the training images to form a sample set, and train on this set to obtain a learned dictionary; (3) input a test image, sample it pixel by pixel to obtain the input image set, and compute and store the sparse coefficients; (4) combine the sparse coefficients with the learned dictionary to obtain the focus feature maps; (5) smooth the focus feature maps to obtain score maps, and classify each pixel according to the score maps to obtain an initial decision map; (6) remove small black dots and small holes, refining the result into the final decision map; (7) restore the fused image according to the final decision map. The method fuses two differently focused pictures to excellent effect, and presents more information while preserving the sharpness and resolution of the original images.

Description

Multi-focus image fusion method based on sparse representation and guided filtering
Technical Field
The invention belongs to the technical field of computer vision and relates to a multi-focus image fusion method, in particular to the fusion of images of the same scene whose focus lies in two different regions.
Background
Due to the limited depth of field of vision sensors, it is difficult to obtain a fully focused image, which complicates the analysis and understanding of the image. Objects within the focal range of the sensor are imaged true and sharp, while objects beyond the focal limit appear blurred. To obtain sharp images of a target scene, multiple pictures must therefore be taken, since there is no way to image all entities in the scene sharply at once. Because the focus of each picture lies in a different region, the problem can be solved by fusing differently focused pictures of the same scene.

Currently popular multi-focus image fusion methods fall into two main branches: spatial domain methods and transform domain methods. Spatial domain methods fuse the source images directly through a specific fusion rule; the basic principle is to compute the pixel-wise mean of the source images. Pixel-based fusion methods, however, tend to suffer from noise and registration errors. Early image fusion operated mainly at the pixel level, processing the raw information contained in the source images directly. Much research has focused on pixel-level fusion, but the amount of information processed this way is excessive, leading to storage redundancy and higher demands on processing equipment. To further improve fusion performance, block-based and region-based methods were proposed, for example selecting image blocks according to spatial frequency, or decomposing the image into multiple gradients and fusing with a rolling guidance filter. Besides spatial frequency and image gradient, the Laplacian-energy method is another important sharpness measure. Although the effects of noise and registration errors have become smaller, these methods still tend to suffer from blocking artifacts and contrast degradation.

Unlike spatial domain methods, the main idea of transform domain methods is to fuse the multi-focus images in a transform domain, such as the Laplacian pyramid, the discrete wavelet transform, or the dual-tree complex wavelet transform. Beyond these, newer transform domain methods such as independent component analysis and sparse representation have also been used to fuse multi-focus images. Sparse-representation-based image fusion divides the source image into small blocks with a fixed-size sliding window, converts the image blocks into sparse coefficients, and then measures the activity level by applying the L1 norm to the sparse coefficients.

Although some multi-focus fusion methods achieve good results, deficiencies remain. Some spatial domain methods are prone to noise and registration errors and may generate blocking artifacts in the fused image; others introduce artifacts near boundaries, reducing contrast and sharpness. For transform domain methods, the fusion rule is based on the coefficients, so small changes in the coefficients produce large changes in pixel values, which leads to unwanted artifacts.
To address these problems, a multi-focus image fusion method based on sparse coding and a guided filter is provided.
Disclosure of Invention
The present invention is directed to solving the above problems and providing a multi-focus image fusion method with high accuracy.
The invention realizes the purpose through the following technical scheme:
The multi-focus image fusion method comprises the following steps:
(1) blurring the natural images of training sample set M multiple times with a Gaussian filter, controlling the degree of blur (standard deviation, kernel size, number of blur iterations, etc.) according to actual needs;
(2) randomly extracting image blocks of a fixed size from the training images to obtain a sample set, and training on this set with the K-SVD method to obtain a dictionary D;
(3) inputting a test image, sampling it pixel by pixel to obtain the input image set, computing the sparse coefficients X with the orthogonal matching pursuit (OMP) method, and storing them;
(4) obtaining the activity-level measurement vectors f_i, i ∈ {1,2}, from the sparse coefficients and computing the focus feature maps E_i, i ∈ {1,2};
(5) smoothing with a guided filter to distinguish the contours of the focused and defocused areas, obtaining score maps S_i, i ∈ {1,2}, and then obtaining an initial decision map Q;
(6) removing small black dots and small holes without affecting the accuracy of large-area object boundaries;
(7) optimizing the boundary of the decision map Q with a guided filter to obtain the final decision map;
(8) obtaining the fused image according to the final decision map.
The basic principle of the method is as follows:
The method trains an overcomplete dictionary and computes the corresponding sparse coefficients. These coefficients are used to measure the activity level, and a focus feature map is then derived from the activity level. A guided filter is applied to the focus feature map to generate score maps. By comparing the score maps, an initial decision map is obtained. Guided filtering is then used to refine the initial decision map into the final decision map.
Specifically, in step (1), a Gaussian filter is used to blur the natural images of training sample set M multiple times in order to improve sample diversity, so that the sampled image blocks can be trained into a sparse dictionary that outperforms the traditional sparse representation approach.
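As an illustration only, a minimal sketch of this blurring step, assuming OpenCV, grayscale training images, and a hypothetical sample directory (the parameter values follow the embodiment below):

```python
import glob

import cv2

# Illustrative parameters; the embodiment below uses sigma = 3,
# a 5x5 kernel, and 5 blur iterations.
SIGMA, KSIZE, N_ITER = 3, (5, 5), 5

training_set = []
for path in glob.glob("training_samples/*.png"):  # hypothetical directory
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    training_set.append(img)                      # keep the sharp original
    blurred = img
    for _ in range(N_ITER):                       # repeated Gaussian blurring
        blurred = cv2.GaussianBlur(blurred, KSIZE, SIGMA)
        training_set.append(blurred)              # keep every blur level
```

Keeping every blur level alongside the sharp original gives the dictionary examples of both focused and defocused patches.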
In step (2), the dictionary is trained with the K-SVD method. First, the dictionary D is initialized randomly: n samples are randomly selected from the training sample set Y as the atoms of D, and the coding matrix X is initialized to the zero matrix. Second, the dictionary is fixed and the sparse code of each sample is solved with the orthogonal matching pursuit (OMP) method. The calculation is as follows:
min_{x_i} ||y_i − D x_i||_2^2   s.t.  ||x_i||_0 ≤ k_0,  1 ≤ i ≤ N
where y_i is the input training sample signal, D is the overcomplete dictionary, and x_i is the sparse coefficient of the input sample signal y_i. At this point the sparse coding of the sample set is obtained.
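For illustration, a compact K-SVD sketch following the initialization described above (NumPy plus scikit-learn's OMP solver; the atom count, sparsity k_0, and iteration count are assumptions rather than values fixed by the invention):

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def ksvd(Y, n_atoms=512, k0=8, n_iter=10):
    """Y: (patch_dim, n_samples) matrix of vectorized training patches."""
    rng = np.random.default_rng(0)
    idx = rng.choice(Y.shape[1], n_atoms, replace=False)
    D = Y[:, idx].astype(float)
    D /= np.linalg.norm(D, axis=0) + 1e-12        # normalize the initial atoms
    for _ in range(n_iter):
        # Sparse coding step: fix D, solve every column of X with OMP
        X = orthogonal_mp(D, Y, n_nonzero_coefs=k0)
        # Dictionary update step: refresh each atom by a rank-1 SVD
        for j in range(n_atoms):
            users = np.nonzero(X[j, :])[0]        # samples that use atom j
            if users.size == 0:
                continue
            X[j, users] = 0
            E = Y[:, users] - D @ X[:, users]     # residual without atom j
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, j] = U[:, 0]                     # updated atom
            X[j, users] = s[0] * Vt[0, :]         # updated coefficients
    return D
```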
In step (3), after the dictionary D is learned, it is used to compute the sparse coefficients of the N input multi-focus images. In the test-sample sparse coding stage, the method uses a sliding window with the same block size as in the training stage, sampling pixel by pixel from the test image into the input image set. As the image blocks are sampled, each is expanded into a column vector, giving X_i = (x_i1, x_i2, …, x_i(n−1), x_in), where n denotes the number of image blocks. The sparse coefficients of the test sample are obtained with the OMP method, defined as:
min_{X_i} ||X_i||_0   s.t.  ||Y_i − D X_i||_2 ≤ δ
where Y_i is the input test sample signal, D is the overcomplete dictionary, and X is the sparse coefficient of the input test sample signal. The sparse coefficient X of the test sample is computed and stored.
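A sketch of this pixel-by-pixel test-sample coding (scikit-learn; the 8 × 8 patch size matches the embodiment below, while the sparsity bound is an assumption, since the patent itself codes to a residual tolerance δ):

```python
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.linear_model import orthogonal_mp

def sparse_code_image(img, D, patch=8, k0=8):
    """Slide a patch x patch window pixel by pixel and OMP-code each block."""
    patches = extract_patches_2d(img.astype(float), (patch, patch))
    Y = patches.reshape(patches.shape[0], -1).T   # one column per image block
    X = orthogonal_mp(D, Y, n_nonzero_coefs=k0)   # (n_atoms, n_blocks)
    return X
```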
In step (4), the activity-level measurement vectors f_i, i ∈ {1,2}, are obtained from the sparse coefficients, i.e. f_1 = (f_11, f_12, …, f_1(n−1), f_1n) and f_2 = (f_21, f_22, …, f_2(n−1), f_2n), where n represents the number of image blocks, namely:
f_ij = ||x_ij||_1,   0 < j ≤ n
that is, the activity level of each image block is the L1 norm of its sparse coefficient vector. The focus feature map is then obtained by reshaping:
E_i = reshape(f_i),  i ∈ {1,2}
where reshape(·) is the operator that reshapes the sparse-coefficient activity vector back into image layout, and E_i, i ∈ {1,2}, is the focus feature map.
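Continuing the sketch above, the activity level is the L1 norm of each block's coefficients, reshaped back to image layout (the output size assumes stride-1, fully overlapping sampling):

```python
import numpy as np

def focus_feature_map(X, img_shape, patch=8):
    """Activity level f_ij = ||x_ij||_1 per block, reshaped to the map E_i."""
    f = np.abs(X).sum(axis=0)            # L1 norm of each block's sparse code
    h = img_shape[0] - patch + 1         # map size under stride-1 sampling
    w = img_shape[1] - patch + 1
    return f.reshape(h, w)               # E_i = reshape(f_i)
```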
In step (5), smoothing is performed with a guided filter. Writing the guided filtering operation as the operator GF(·), the score map after guided filtering can be expressed as follows:
S_i = GF(E_i, E_i, r_1, ε_1),  i ∈ {1,2}
where GF(·) is the guided filter operator; note that the guidance image of the guided filter is the focus feature map itself. The initial decision map Q is then obtained with the following formula:
Q(x,y) = 1 if S_1(x,y) ≥ S_2(x,y), and Q(x,y) = 0 otherwise
where S_1 and S_2 are the score maps of the two source test images, respectively.
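A sketch of the score maps and the initial decision map, assuming the guidedFilter implementation from the opencv-contrib-python package (the radius and ε values are illustrative, not prescribed):

```python
import cv2
import numpy as np

def initial_decision_map(E1, E2, r1=8, eps1=0.01):
    """S_i = GF(E_i, E_i, r_1, eps_1); Q = 1 where image 1 scores higher."""
    E1f = E1.astype(np.float32)
    E2f = E2.astype(np.float32)
    S1 = cv2.ximgproc.guidedFilter(E1f, E1f, r1, eps1)  # self-guided smoothing
    S2 = cv2.ximgproc.guidedFilter(E2f, E2f, r1, eps1)
    return (S1 >= S2).astype(np.float32)                # initial decision map Q
```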
In the step (6), the partial graphs S are comparediThe resulting initial decision graph Q may result in some non-smooth edges and some pinholes. This is because some regions have similar visual effects on both input images, and the sparse coefficients cannot determine whether they are in focus. To remove these small holes, we use a small region removal strategy. This process can be represented by the following equation:
Q_r(x,y) = Q_i(x,y) − smallholes
Q(x,y) = upsample(Q_r(x,y))
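One way to realize this small-region removal, using scikit-image morphology as a stand-in (the minimum region size is an assumption to be tuned per image):

```python
import numpy as np
from skimage.morphology import remove_small_holes, remove_small_objects

def clean_decision_map(Q, min_size=1000):
    """Remove small black dots and pinholes, leaving large boundaries intact."""
    mask = Q > 0.5                                            # binarize Q
    mask = remove_small_objects(mask, min_size=min_size)      # isolated dots
    mask = remove_small_holes(mask, area_threshold=min_size)  # small holes
    return mask.astype(np.float32)
```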
In step (7), the transition between the focused and unfocused regions of the source images is smooth, whereas the decision map Q is sharp at the boundary. To solve this problem, a guided filter is used to optimize the decision map Q, and the intermediate fused image W serves as the guidance image of the guided filter, as follows:
W = Q(x,y)·I_1(x,y) + (1 − Q(x,y))·I_2(x,y)
Q̂ = GF(W, Q, r_2, ε_2)
where GF(·) represents the guided filter operator, Q̂ represents the final decision map after guided filtering, and I_1, I_2 represent the source input images.
In the step (8), the fused image is obtained by using the following formula:
F(x,y) = Q̂(x,y)·I_1(x,y) + (1 − Q̂(x,y))·I_2(x,y)
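A sketch of steps (7) and (8) together, again assuming the opencv-contrib guided filter (r_2 and ε_2 are illustrative; Q is assumed to have been upsampled to the full image size in step (6)):

```python
import cv2
import numpy as np

def fuse(I1, I2, Q, r2=4, eps2=0.1):
    """Refine Q with the intermediate fusion W as guidance, then blend."""
    I1f = I1.astype(np.float32)
    I2f = I2.astype(np.float32)
    W = Q * I1f + (1.0 - Q) * I2f                 # intermediate fused image W
    Q_hat = cv2.ximgproc.guidedFilter(W, Q.astype(np.float32), r2, eps2)
    F = Q_hat * I1f + (1.0 - Q_hat) * I2f         # final fused image
    return np.clip(F, 0, 255).astype(np.uint8)
```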
the invention has the beneficial effects that:
in the invention, sparse coefficients are used for classifying a focusing region and a non-focusing region to construct an initial decision diagram instead of directly fusing the sparse coefficients. The initial decision map will be optimized in the later steps. Thus, artifacts caused by improper selection of sparse coefficients are avoided. Secondly, in order to solve the problem of spatial inconsistency, the guided filtering is used for smoothing the focusing feature map, and connection with adjacent pixels is fully considered. Therefore, the structure of the image is effectively reserved, and the problem of inconsistent space is avoided. In addition, in order to generate a decision graph related to boundary information, a guide filter is used for refining the initial decision graph. So that it has a slow transition and eventually the decision graph boundary is smoothed. Thereby effectively reducing the vignetting artifact of the fused image.
Drawings
FIG. 1-1 is an overall framework diagram in an embodiment of the invention;
FIGS. 1-2 are one of the training set sample images in an embodiment of the present invention;
FIGS. 1-3 illustrate a second example of a training set sample image in accordance with an embodiment of the present invention;
FIGS. 1-4 are one of the images of a training set sample after blurring in an embodiment of the present invention;
FIGS. 1-5 illustrate a second example of a blurred image of a training set sample;
FIG. 2-1 is one of the image blocks extracted from the training set samples in an embodiment of the present invention;
FIG. 2-2 is a learning dictionary D obtained by training image sample sets by the K-svd method according to an embodiment of the present invention;
FIG. 3-1 is one of the images of the test specimen in an embodiment of the present invention;
FIG. 3-2 is a second image of a test specimen in an embodiment of the present invention;
FIG. 4-1 is the focus feature image recorded after sparse processing of the test sample image of FIG. 3-1;
FIG. 4-2 is the focus feature image recorded after sparse processing of the test sample image of FIG. 3-2;
FIG. 5-1 is the image of the focus feature image of FIG. 4-1 after guided filtering;
FIG. 5-2 is the image of the focus feature image of FIG. 4-2 after guided filtering;
FIG. 5-3 is the initial decision map obtained by comparing FIGS. 5-1 and 5-2;
FIG. 6-1 is the image of FIG. 5-3 after refinement and hole removal;
FIG. 7-1 is the final decision map, i.e. the image of FIG. 6-1 after guided filtering;
FIG. 8-1 is the final restored fused image according to the fusion principle.
Detailed Description
The invention will be further illustrated with reference to the following specific examples and the accompanying drawings:
Example:
To make the image fusion method of the invention easier to understand and closer to real application, the whole process is described from dictionary learning on the original training samples through to the completion of image fusion. The process includes the core fusion method of the invention, and the overall framework is shown in FIG. 1-1:
(1) The natural images of training sample set M are blurred multiple times with a Gaussian filter; FIGS. 1-2 and 1-3 are representative original training sample images. The standard deviation, kernel size, and number of blur iterations of the Gaussian filter are set to 3, 5 × 5, and 5, respectively. FIGS. 1-4 and 1-5 are training sample images after five blurring passes.
(2) Image blocks of a fixed size are randomly extracted from the training images to form the sample set; in this example the block size is set to 8 × 8, as shown in FIG. 2-1. The sample set is then trained with the K-SVD method to obtain the learned dictionary D, whose size is set to 64 × 512, as shown in FIG. 2-2;
(3) The dictionary trained in step (2) is used to compute the sparse coefficients of the N input multi-focus images. In the test-sample sparse coding stage, a sliding window with the same block size as in the training stage is used; two test sample images are selected, as in FIGS. 3-1 and 3-2. Sampling proceeds pixel by pixel from the test images into the input image set, and as the image blocks are sampled they are expanded into a column vector X_i = (x_i1, x_i2, …, x_i(n−1), x_in), where n denotes the number of image blocks. The sparse coefficients of the test sample are obtained with the OMP method, defined as:
min_{X_i} ||X_i||_0   s.t.  ||Y_i − D X_i||_2 ≤ δ
where Y_i is the input test sample signal, D is the overcomplete dictionary, and X is the sparse coefficient of the input test sample signal. The sparse coefficient X of the test sample is computed and stored;
(4) The activity-level measurement vectors f_i, i ∈ {1,2}, are obtained from the sparse coefficients, i.e. f_1 = (f_11, f_12, …, f_1(n−1), f_1n) and f_2 = (f_21, f_22, …, f_2(n−1), f_2n), where n represents the number of image blocks, namely:
f_ij = ||x_ij||_1,   0 < j ≤ n
that is, the activity level of each image block is the L1 norm of its sparse coefficient vector. The focus feature map is then obtained by reshaping:
E_i = reshape(f_i),  i ∈ {1,2}
where reshape(·) is the reshaping operator and E_i is the focus feature map. The effect is shown in FIG. 4-1 and FIG. 4-2.
(5) Smoothing is performed with a guided filter. Writing the guided filtering operation as the operator GF(·), the score map after guided filtering can be expressed as follows:
S_i = GF(E_i, E_i, r_1, ε_1),  i ∈ {1,2}
where GF(·) is the guided filter operator; note that the guidance image of the guided filter is the focus feature map itself. The initial decision map Q, shown in FIG. 5-3, is then obtained with the following formula:
Q(x,y) = 1 if S_1(x,y) ≥ S_2(x,y), and Q(x,y) = 0 otherwise
where S_1 and S_2 are the score maps of the two source test images, respectively.
(6) The initial decision map Q obtained by comparing the score maps S_i may contain some non-smooth edges and small holes, because some regions have a similar visual appearance in both input images and the sparse coefficients cannot determine whether they are in focus. To remove these small holes, a small-region removal strategy is used, defined as:
Q_r(x,y) = Q_i(x,y) − smallholes
Q(x,y) = upsample(Q_r(x,y))
the image after removal of the small hole is shown in FIG. 6-1.
(7) The transition between the focused and unfocused regions of the source images is smooth, while the decision map Q is sharp at the boundary. To solve this problem, a guided filter is used to optimize the decision map Q. The multi-focus images are fused with the decision map Q, and the fused image W serves as the guidance image of the guided filter, as follows:
W = Q(x,y)·I_1(x,y) + (1 − Q(x,y))·I_2(x,y)
Q̂ = GF(W, Q, r_2, ε_2)
where GF(·) represents the guided filter operator and I_1, I_2 represent the source input images. The optimized decision map is shown in FIG. 7-1.
(8) Define the fusion formula as:
F(x,y) = Q̂(x,y)·I_1(x,y) + (1 − Q̂(x,y))·I_2(x,y)
The fused image obtained with this formula is shown in FIG. 8-1. Comparing FIG. 3-1, FIG. 3-2, and FIG. 8-1 shows that the fused image retains the sharp content of both source test images, i.e., the multi-focus image fusion technique of the invention performs well.
Among the above steps, steps (5) to (7) are the main steps of the image fusion method of the present invention.
In this example, we will use a set of parameters (bi-directional information MI, edge preservation Q) that are common in the field of subjective vision and objective image processing as a measure of the amount of information involvedAB/FFMI, standard deviation SD, etc.), the source image and the fused image are compared, thereby verifying the reliability of the present invention.
The above embodiments are only preferred embodiments of the present invention and do not limit its technical solutions; any solution that can be realized on the basis of the above embodiments without creative effort should be considered to fall within the protection scope of this patent.

Claims (2)

1. A multi-focus image fusion method, characterized in that the method comprises the following steps:
(1) smoothing with a guided filter to distinguish the contours of the in-focus and out-of-focus areas, obtaining score maps S_i, i ∈ {1,2}, and then obtaining an initial decision map Q;
(2) removing small black dots and small holes without affecting the accuracy of large-area object boundaries;
(3) optimizing the boundary of the decision map Q with a guided filter to obtain the final decision map.
2. The multi-focus image fusion method according to claim 1, wherein:
in step (1), a guided filter is used for smoothing; defining the guided filtering operation as the operator GF(·), the score map after guided filtering can be expressed as follows:
S_i = GF(E_i, E_i, r_1, ε_1),  i ∈ {1,2}
where GF(·) is the guided filter operator, and the guidance image of the guided filter is the focus feature map; the initial decision map Q is obtained with the following formula:
Q(x,y) = 1 if S_1(x,y) ≥ S_2(x,y), and Q(x,y) = 0 otherwise
where S_1 and S_2 represent the score maps of the two source test images, respectively;
in step (2), the initial decision map Q obtained by comparing the score maps S_i may contain some non-smooth edges and small holes, because some regions have a similar visual appearance in the two input images and the sparse coefficients cannot determine whether they are in focus; to remove the small holes, a small-region removal strategy is adopted; this process can be expressed by the following equations:
Q_r(x,y) = Q_i(x,y) − smallholes
Q(x,y) = upsample(Q_r(x,y))
in step (3), the transition between the focused and unfocused regions of the source images is smooth while the decision map Q is sharp at the boundary; to solve this problem, a guided filter is adopted to optimize the decision map Q, with the fused image W as the guidance image of the guided filter, as follows:
W = Q(x,y)·I_1(x,y) + (1 − Q(x,y))·I_2(x,y)
Q̂ = GF(W, Q, r_2, ε_2)
where GF(·) represents the guided filter operator, Q̂ represents the final decision map after guided filtering, and I_1, I_2 represent the source input images.
CN201910869494.XA 2019-09-16 2019-09-16 Multi-focus image fusion method based on sparse representation and guided filtering Pending CN112508828A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910869494.XA CN112508828A (en) 2019-09-16 2019-09-16 Multi-focus image fusion method based on sparse representation and guided filtering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910869494.XA CN112508828A (en) 2019-09-16 2019-09-16 Multi-focus image fusion method based on sparse representation and guided filtering

Publications (1)

Publication Number Publication Date
CN112508828A true CN112508828A (en) 2021-03-16

Family

ID=74923853

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910869494.XA Pending CN112508828A (en) 2019-09-16 2019-09-16 Multi-focus image fusion method based on sparse representation and guided filtering

Country Status (1)

Country Link
CN (1) CN112508828A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113822828A * 2021-08-18 2021-12-21 Jilin University Multi-focus image fusion method
CN114549384A * 2022-02-24 2022-05-27 Jilin University Image fusion method based on multi-scale dictionary and recursive filter

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680502A * 2015-03-19 2015-06-03 Sichuan University Infrared image super-resolution reconstruction method based on sparse dictionary and non-subsampled Contourlet transform
CN106228528A * 2016-07-29 2016-12-14 North China Electric Power University Multi-focus image fusion method based on decision map and sparse representation
CN106447640A * 2016-08-26 2017-02-22 Xidian University Multi-focus image fusion method based on dictionary learning and rotating guided filtering and multi-focus image fusion device thereof
US20170293825A1 * 2016-04-08 2017-10-12 Wuhan University Method and system for reconstructing super-resolution image

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680502A * 2015-03-19 2015-06-03 Sichuan University Infrared image super-resolution reconstruction method based on sparse dictionary and non-subsampled Contourlet transform
US20170293825A1 * 2016-04-08 2017-10-12 Wuhan University Method and system for reconstructing super-resolution image
CN106228528A * 2016-07-29 2016-12-14 North China Electric Power University Multi-focus image fusion method based on decision map and sparse representation
CN106447640A * 2016-08-26 2017-02-22 Xidian University Multi-focus image fusion method based on dictionary learning and rotating guided filtering and multi-focus image fusion device thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
QILEI LI et al.: "Multi-Focus Image Fusion Method for Vision Sensor Systems via Dictionary Learning with Guided Filter", Sensors *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113822828A * 2021-08-18 2021-12-21 Jilin University Multi-focus image fusion method
CN114549384A * 2022-02-24 2022-05-27 Jilin University Image fusion method based on multi-scale dictionary and recursive filter

Similar Documents

Publication Publication Date Title
CN108986050B (en) Image and video enhancement method based on multi-branch convolutional neural network
Du et al. Image segmentation-based multi-focus image fusion through multi-scale convolutional neural network
Pertuz et al. Generation of all-in-focus images by noise-robust selective fusion of limited depth-of-field images
CN111709895A (en) Image blind deblurring method and system based on attention mechanism
Aslantas et al. A pixel based multi-focus image fusion method
CN109509163B (en) FGF-based multi-focus image fusion method and system
CN111091503A (en) Image out-of-focus blur removing method based on deep learning
CN106447640B (en) Multi-focus image fusing method and device based on dictionary learning, rotation guiding filtering
CN111462027B (en) Multi-focus image fusion method based on multi-scale gradient and matting
Ding et al. U 2 D 2 Net: Unsupervised unified image dehazing and denoising network for single hazy image enhancement
CN105894483A (en) Multi-focusing image fusion method based on multi-dimensional image analysis and block consistency verification
CN111784620A (en) Light field camera full-focus image fusion algorithm for guiding angle information by spatial information
CN112508828A (en) Multi-focus image fusion method based on sparse representation and guided filtering
Wang et al. An efficient method for image dehazing
Das et al. A comparative study of single image fog removal methods
CN113763300B (en) Multi-focusing image fusion method combining depth context and convolution conditional random field
CN116128768B (en) Unsupervised image low-illumination enhancement method with denoising module
CN115965844B (en) Multi-focus image fusion method based on visual saliency priori knowledge
CN110555414B (en) Target detection method, device, equipment and storage medium
Li et al. Deep image quality assessment driven single image deblurring
CN116579958A (en) Multi-focus image fusion method of depth neural network guided by regional difference priori
CN110717873A (en) Traffic sign deblurring detection recognition algorithm based on multi-scale residual error
Wang et al. New insights into multi-focus image fusion: A fusion method based on multi-dictionary linear sparse representation and region fusion model
CN113379660B (en) Multi-dimensional rule multi-focus image fusion method and system
CN116912649B (en) Infrared and visible light image fusion method and system based on relevant attention guidance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210316