CN113947554B - A multi-focus image fusion method based on NSST and salient information extraction - Google Patents

A multi-focus image fusion method based on NSST and salient information extraction

Info

Publication number
CN113947554B
CN113947554B (application CN202010693743.7A)
Authority
CN
China
Prior art keywords
fusion
frequency
low
image
correction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010693743.7A
Other languages
Chinese (zh)
Other versions
CN113947554A (en)
Inventor
何小海
吴剑
吴晓红
王正勇
卿粼波
吴小强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University
Original Assignee
Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University
Priority to CN202010693743.7A
Publication of CN113947554A
Application granted
Publication of CN113947554B
Active legal status
Anticipated expiration legal status

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention proposes a multi-focus image fusion method based on NSST and salient information extraction, addressing the multi-focus image fusion problem in the field of image fusion. First, the source images are decomposed into high- and low-frequency sub-bands by the multi-scale, multi-directional NSST transform. Second, for the low-frequency sub-band coefficients, the sum of modified Laplacian over a local region is used to construct the initial low-frequency fusion weights, and a non-local means filtering correction fusion rule is added to correct these initial weights; for the high-frequency sub-band coefficients, a fusion rule combining correlation-coefficient-based spatial frequency and energy is adopted and then corrected with a phase consistency fusion rule to construct the high-frequency sub-band fusion weights. Finally, the fused image is obtained by the inverse NSST transform. Because dedicated weight correction strategies are applied to the low- and high-frequency sub-bands respectively, the misjudgment rate of the focused regions is reduced. The effectiveness of the method is verified in experiments on several sets of differently focused images.

Description

A multi-focus image fusion method based on NSST and salient information extraction

Technical Field

The invention relates to the multi-focus image fusion problem in the field of image fusion, and in particular to a multi-focus image fusion method based on NSST and salient information extraction.

Background Art

Image fusion is a major research topic in image processing. The information contained in a single image (brightness, color, spatial detail, etc.) is limited, so a single image with limited information can rarely satisfy the requirements of a specific application. By merging multiple images that emphasize different information according to certain rules, an image that contains more information and is easier to interpret is obtained. The goal of image fusion is therefore to retain as much useful information as possible while removing redundant information. Multi-focus image fusion integrates differently focused images of the same scene through a fusion method, so that the fused image is sharper and richer in information.

Among the widely used pixel-level fusion methods are spatial-domain methods, transform-domain methods, and deep-learning-based methods, of which the transform-domain methods are the most widely applied. In a transform-domain method, the original image data are converted by an invertible mathematical transform into intermediate data that carry different characteristic information; fusion rules are applied to these intermediate data, and the fused image is then obtained by the inverse transform. The invertible transform and the fusion rules are therefore the key components. Invertible transforms that have appeared in succession include the pyramid transform, the wavelet transform, the non-subsampled contourlet transform (NSCT), and the non-subsampled shearlet transform (NSST). Many fusion rules have also been adopted, such as rules based on spatial frequency, on energy information, and on guided filtering. Researchers have proposed many fusion algorithms from different perspectives, but the fused images produced by most of them suffer from low sharpness, loss of focus information, and blurred focus boundaries.

Summary of the Invention

The invention proposes a multi-focus image fusion method based on NSST and salient information extraction. The high- and low-frequency sub-band coefficients obtained by applying the NSST transform to differently focused images are processed with different fusion rules together with corresponding correction rules, and the fused image is finally obtained. The invention mainly achieves the above object through the following process steps (an illustrative code sketch of the whole pipeline is given after the step list):

(1) Process the differently focused images with the NSST transform to obtain the high- and low-frequency sub-band coefficients;

(2) Apply the initial low-frequency fusion rule based on the sum of modified Laplacian (SML) to the low-frequency sub-band coefficients obtained in (1) to obtain the initial low-frequency fusion weights;

(3) Apply the low-frequency correction fusion rule based on salient information extraction to correct errors in the result of (2);

(4) Apply the initial high-frequency fusion rule based on the correlation coefficient to the high-frequency sub-band coefficients obtained in (1) to obtain the initial high-frequency fusion weights;

(5) Apply phase consistency (PC) correction rules of different degrees to the high-frequency fusion weights to perform discrimination correction;

(6) Apply the inverse NSST transform to the results of (3) and (5) to obtain the fusion result.
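To make the flow of steps (1) to (6) concrete, the following Python sketch outlines the pipeline. It is only an illustrative skeleton, not the patented implementation: nsst_decompose, nsst_reconstruct and the four rule callbacks are hypothetical placeholders (no standard Python NSST package is assumed) standing for the NSST transform and the fusion/correction rules detailed in the embodiments below.

```python
def fuse_multifocus(sources, nsst_decompose, nsst_reconstruct,
                    low_rule, low_correction, high_rule, high_correction):
    """Skeleton of the NSST-based fusion pipeline, steps (1)-(6).

    sources                   : list of registered source images (numpy arrays)
    nsst_decompose            : callable(img) -> (low_subband, [high_subband_1, ...])
    nsst_reconstruct          : inverse transform, callable(low, highs) -> image
    low_rule / low_correction : initial SML rule and its salient-information correction
    high_rule / high_correction : correlation-based rule and its PC correction
    """
    # (1) multi-scale, multi-directional NSST decomposition of every source image
    decomps = [nsst_decompose(img) for img in sources]
    lows = [d[0] for d in decomps]
    highs = [d[1] for d in decomps]          # highs[l][k]: k-th high-freq sub-band of image l

    # (2)-(3) initial low-frequency weights (SML), corrected by salient information
    w_low = low_correction(sources, low_rule(lows))
    fused_low = sum(w * lo for w, lo in zip(w_low, lows))

    # (4)-(5) initial high-frequency weights (correlation-based), corrected by PC
    fused_highs = []
    for k in range(len(highs[0])):
        bands = [h[k] for h in highs]
        w_high = high_correction(bands, high_rule(bands))
        fused_highs.append(sum(w * b for w, b in zip(w_high, bands)))

    # (6) inverse NSST transform yields the fused image
    return nsst_reconstruct(fused_low, fused_highs)
```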

Brief Description of the Drawings

Figure 1. Framework diagram of multi-focus image fusion based on NSST and salient information extraction.

Detailed Description of the Embodiments

The invention introduces non-local means filtering (NLMF) combined with guided filtering (GF) to apply a weighted correction to the low-frequency sub-band coefficients, combines correlation-coefficient-based spatial frequency and energy to form the initial high-frequency weighted fusion rule, and at the same time applies a phase consistency correction strategy to correct and judge the initial high-frequency fusion weights.

The non-local means filtering correction fusion rule is as follows:

With reference to Figure 2, the block diagram of the low-frequency sub-band correction rule:

After the differently focused images are transformed by NSST, the resulting low-frequency sub-band images lose detail information, and the initial fusion weights obtained with the initial fusion rule based on the sum of modified Laplacian (SML) contain discrimination errors. The invention therefore takes the detail information of the source images into account: each source image is filtered with a non-local means filter, and the saliency information D_l is obtained as the difference between the source image and the filtered image,

D_l = |I_l - I_l × NLMF|   (1)

where I_l (0 < l < L) is a source image, L is the number of source images, and × denotes the filtering operation.

The detail information of the focused region is then obtained with guided filtering,

G_l = guidedfilter(I_l, D_l, r, eps)   (2)

where r and eps are the radius and the regularization parameter of the guided filter.

Finally, a take-max strategy is applied to obtain the low-frequency sub-band correction fusion weights and to perform error correction on the initial fusion weights: at each pixel, the source image whose detail value G_l is the largest receives weight 1 and the others receive weight 0. (The exact correction formulas are given as equation images in the original publication.)
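A minimal sketch of this low-frequency correction rule, under the assumption that OpenCV is available: cv2.fastNlMeansDenoising (core OpenCV) stands in for the non-local means filter of Eq. (1), and cv2.ximgproc.guidedFilter (from opencv-contrib) for the guided filter of Eq. (2). The parameters h, r and eps are illustrative choices, not values prescribed by the patent.

```python
import cv2
import numpy as np

def low_freq_correction_weights(sources, h=10, r=8, eps=0.04):
    """Salient-information correction weights for the low-frequency sub-bands.

    sources : list of registered grayscale source images (uint8 arrays).
    Returns one binary weight map per source (take-max decision maps).
    """
    detail_maps = []
    for img in sources:
        # Eq. (1): saliency D_l = |I_l - NLMF(I_l)|
        filtered = cv2.fastNlMeansDenoising(img, None, h, 7, 21)
        d = cv2.absdiff(img, filtered).astype(np.float32) / 255.0

        # Eq. (2): G_l = guidedfilter(I_l, D_l, r, eps) -- detail of the focused region
        guide = img.astype(np.float32) / 255.0
        g = cv2.ximgproc.guidedFilter(guide, d, r, eps)
        detail_maps.append(g)

    # Take-max strategy: weight 1 where G_l is the largest among all sources, else 0
    stack = np.stack(detail_maps, axis=0)
    winner = np.argmax(stack, axis=0)
    return [(winner == l).astype(np.float32) for l in range(len(sources))]
```

These correction weight maps are then used to correct the initial SML-based weights before the low-frequency sub-bands are combined.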

The initial high-frequency weighted fusion rule is as follows:

The high-frequency sub-band images contain the detail information of the source images. The invention combines spatial frequency and energy by means of the correlation coefficient to form the initial high-frequency fusion rule.

First, the correlation coefficient (Corr) is defined as

Corr = Σ_{m=1..M} Σ_{n=1..N} (H(m,n) − μ_H)(H_f(m,n) − μ_Hf) / sqrt( Σ_{m=1..M} Σ_{n=1..N} (H(m,n) − μ_H)² · Σ_{m=1..M} Σ_{n=1..N} (H_f(m,n) − μ_Hf)² )   (3)

where H and H_f denote a high-frequency sub-band image and its mean-filtered version, μ_H and μ_Hf are their respective mean values, and M*N is the size of the image block.

Next, the spatial frequency and energy measure based on the correlation coefficient (SF_Eng_Corr) is defined, where SF_Corr and Eng_Corr denote the spatial frequency correlation coefficient and the energy correlation coefficient, respectively (the exact formula for SF_Eng_Corr is given as an equation image in the original publication). For the focused region and the defocused region of an image, the SF_Corr and Eng_Corr values of the focused region are usually larger than those of the defocused region. Exploiting this, the two measures are weighted and combined to form SF_Eng_Corr, and a take-max strategy is then used to obtain the initial high-frequency fusion weights.
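The sketch below shows one plausible reading of this rule, building on block_corr from the previous snippet: block-wise spatial frequency and energy are each modulated by the block's correlation coefficient, combined with weights w1 and w2, and the source with the larger combined score wins the block. The block size, the weights, and the exact way Corr enters SF_Corr and Eng_Corr are assumptions made for illustration; the patent gives the precise formula only as an image.

```python
import numpy as np

def spatial_frequency(block):
    # Classical spatial frequency: sqrt(row_frequency^2 + column_frequency^2)
    rf = np.sqrt(np.mean(np.diff(block, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(block, axis=0) ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)

def high_freq_initial_weights(bands, block=8, w1=0.5, w2=0.5):
    """Block-wise take-max decision on an SF_Eng_Corr-style activity (illustrative).

    bands : list of high-frequency sub-bands (float arrays), one per source image;
            dimensions are assumed divisible by the block size for simplicity.
    """
    rows, cols = bands[0].shape
    decision = np.zeros((rows, cols), dtype=np.int32)
    for i in range(0, rows, block):
        for j in range(0, cols, block):
            scores = []
            for b in bands:
                patch = b[i:i + block, j:j + block]
                c = block_corr(patch)                  # Corr of this patch (previous snippet)
                energy = np.sum(patch ** 2)            # local energy of the patch
                scores.append(w1 * c * spatial_frequency(patch) + w2 * c * energy)
            decision[i:i + block, j:j + block] = int(np.argmax(scores))
    return [(decision == l).astype(np.float32) for l in range(len(bands))]
```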

The phase consistency correction strategy is as follows:

The initial high-frequency fusion weights obtained from the correlation coefficient ignore the correlation carried by the high-frequency sub-band coefficients themselves. The invention therefore applies a phase consistency (PC) correction, as follows:

The invention adopts a new activity measure, NAM, to obtain the corrected high-frequency fusion weights. NAM combines the phase consistency PC, the local sharpness change measure LSCM, and the local energy LE, weighted by the scale factors α, β, and γ, respectively (the exact combination formula is given as an equation image in the original publication).

For the high-frequency sub-band coefficients, NAM integrates the local energy information, the detail edge information, and the gradient information carried by the coefficients themselves in a certain proportion, which facilitates the discrimination of the focused region.
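A hedged sketch of such an activity measure follows. The product-of-powers combination is an assumption borrowed from NAM definitions common in the NSST fusion literature, since the patent gives its exact formula only as an image; the PC, LSCM and LE maps are taken here as precomputed inputs.

```python
import numpy as np

def nam(pc, lscm, le, alpha=1.0, beta=1.0, gamma=1.0):
    """New activity measure from phase consistency (PC), local sharpness change (LSCM)
    and local energy (LE) maps; alpha, beta, gamma play the role of the scale factors.
    The multiplicative form is an assumption, not the patent's exact formula."""
    return (pc ** alpha) * (lscm ** beta) * (le ** gamma)

def high_freq_corrected_weights(pc_maps, lscm_maps, le_maps):
    """Corrected high-frequency weights: per pixel, select the source with the largest NAM."""
    activity = np.stack([nam(p, s, e) for p, s, e in zip(pc_maps, lscm_maps, le_maps)], axis=0)
    winner = np.argmax(activity, axis=0)
    return [(winner == l).astype(np.float32) for l in range(activity.shape[0])]
```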

To verify the effectiveness of the proposed multi-focus image fusion method based on NSST and salient information extraction, a series of comparative experiments was carried out. Three groups of differently focused images, with sizes of 512 pixel × 512 pixel, 640 pixel × 480 pixel, and 512 pixel × 512 pixel, were selected for the fusion experiments and compared with five existing common algorithms; six evaluation metrics were used for quantitative evaluation. All the images used for comparison had been registered. The experimental results are shown in Table 1:

Table 1. Average index results

(The table of quantitative results is reproduced only as an image in the original publication.)

As can be seen from the table, on the premise of retaining sufficient fusion information, the proposed method raises the visual clarity of the fused image to 0.9009 and the structural similarity to 0.9945. Among the remaining metrics, all show a certain improvement except the standard deviation (STD), which decreases slightly. The proposed algorithm not only effectively preserves detail information such as contours and textures of the source images, but also produces a good visual effect in the focused edge regions of the image. The contrast and clarity of the fused image are improved to a certain extent, and the fusion effect is satisfactory. The proposed algorithm is therefore a feasible multi-focus image fusion method.

Claims (5)

1. A multi-focus image fusion method based on NSST and salient information extraction, characterized by comprising the following steps:
(1) processing differently focused images using the NSST transform to obtain high- and low-frequency sub-band coefficients;
(2) performing preliminary processing on the low-frequency sub-band coefficients obtained in step (1) using an initial low-frequency fusion rule based on the sum of modified Laplacian (SML) to obtain initial low-frequency fusion weights;
(3) performing error correction on the result of step (2) using a low-frequency correction fusion rule based on salient information extraction;
(4) performing preliminary processing on the high-frequency sub-band coefficients obtained in step (1) using an initial high-frequency fusion rule based on the correlation coefficient to obtain initial high-frequency fusion weights;
(5) applying phase consistency (PC) correction rules of different degrees to the high-frequency fusion weights to perform discrimination correction;
(6) performing the inverse NSST transform on the processing results obtained in steps (3) and (5) to obtain the fusion result.
2. The method of claim 1, wherein in step (3) a non-local means filtering correction fusion rule is added to make the low-frequency fusion weights more accurate, the correction fusion rule being as follows:
the source image is filtered using non-local means filtering, and the difference between the source image and the filtered image is computed to obtain the saliency information D_l:
D_l = |I_l - I_l × NLMF|   (1)
where I_l (0 < l < L) is a source image, L represents the number of source images, and × denotes the filtering operation;
guided filtering is then used to obtain the detail information of the focused region:
G_l = guidedfilter(I_l, D_l, r, eps)   (2)
finally, a take-max strategy is used to obtain the low-frequency sub-band correction fusion weights, and error correction is performed on the initial fusion weights.
3. The method of claim 1, wherein the initial high-frequency fusion weights in step (4) are extracted using the spatial frequency and energy based on the correlation coefficient, the extraction process being as follows:
for the focused region and the defocused region of an image, the spatial frequency correlation coefficient value SF_Corr and the energy correlation coefficient value Eng_Corr of the focused region tend to be larger than those of the defocused region; exploiting this, the two are weighted and combined to form the correlation-coefficient-based spatial frequency and energy measure (SF_Eng_Corr), and a take-max strategy is then used to obtain the initial high-frequency fusion weights;
(the combination formula is given as an equation image in the original publication)
where SF_Corr and Eng_Corr denote the spatial frequency correlation coefficient and the energy correlation coefficient, respectively.
4. The method of claim 1, wherein in step (5) phase consistency fusion correction is performed on the initial high-frequency fusion weights, and the high-frequency detail information, the local energy information, and the gradient edge information are integrated in a certain proportion, thereby facilitating the discrimination of the focused region.
5. The method of claim 1, wherein initial fusion weights are obtained according to different fusion rules for the high- and low-frequency sub-band coefficients obtained through the NSST transform, and the initial weights are respectively corrected using the correction fusion rules, so that the amount of information and the clarity of the fused image are improved to a certain extent.
CN202010693743.7A 2020-07-17 2020-07-17 A multi-focus image fusion method based on NSST and salient information extraction Active CN113947554B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010693743.7A CN113947554B (en) 2020-07-17 2020-07-17 A multi-focus image fusion method based on NSST and salient information extraction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010693743.7A CN113947554B (en) 2020-07-17 2020-07-17 A multi-focus image fusion method based on NSST and salient information extraction

Publications (2)

Publication Number Publication Date
CN113947554A CN113947554A (en) 2022-01-18
CN113947554B (en) 2023-07-14

Family

ID=79327234

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010693743.7A Active CN113947554B (en) 2020-07-17 2020-07-17 A multi-focus image fusion method based on NSST and salient information extraction

Country Status (1)

Country Link
CN (1) CN113947554B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8411938B2 (en) * 2007-11-29 2013-04-02 Sri International Multi-scale multi-camera adaptive fusion with contrast normalization

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102129676A (en) * 2010-01-19 2011-07-20 中国科学院空间科学与应用研究中心 Microscopic image fusing method based on two-dimensional empirical mode decomposition
CN101968883A (en) * 2010-10-28 2011-02-09 西北工业大学 Method for fusing multi-focus images based on wavelet transform and neighborhood characteristics
WO2015061128A1 (en) * 2013-10-21 2015-04-30 Bae Systems Information And Electronic Systems Integration Inc. Medical thermal image processing for subcutaneous detection of veins, bones and the like
CN104077762A (en) * 2014-06-26 2014-10-01 桂林电子科技大学 Multi-focusing-image fusion method based on NSST and focusing area detecting
CN104156917A (en) * 2014-07-30 2014-11-19 天津大学 X-ray CT image enhancement method based on double energy spectrums
CN104504673A (en) * 2014-12-30 2015-04-08 武汉大学 Visible light and infrared images fusion method based on NSST and system thereof
CN105551010A (en) * 2016-01-20 2016-05-04 中国矿业大学 Multi-focus image fusion method based on NSCT (Non-Subsampled Contourlet Transform) and depth information incentive PCNN (Pulse Coupled Neural Network)
CN105719263A (en) * 2016-01-22 2016-06-29 昆明理工大学 Visible light and infrared image fusion algorithm based on NSCT domain bottom layer visual features
CN105894483A (en) * 2016-03-30 2016-08-24 昆明理工大学 Multi-focusing image fusion method based on multi-dimensional image analysis and block consistency verification
CN105913407A (en) * 2016-04-06 2016-08-31 昆明理工大学 Method for performing fusion optimization on multi-focusing-degree image base on difference image
CN106204510A (en) * 2016-07-08 2016-12-07 中北大学 A kind of infrared polarization based on structural similarity constraint and intensity image fusion method
CN106236117A (en) * 2016-09-22 2016-12-21 天津大学 Emotion detection method based on electrocardio and breath signal synchronism characteristics
CN106803242A (en) * 2016-12-26 2017-06-06 江南大学 Multi-focus image fusing method based on quaternion wavelet conversion
CN106803301A (en) * 2017-03-28 2017-06-06 广东工业大学 A kind of recognition of face guard method and system based on deep learning
CN107203696A (en) * 2017-06-19 2017-09-26 深圳源广安智能科技有限公司 A kind of intelligent medical system based on image co-registration
CN107633495A (en) * 2017-08-02 2018-01-26 中北大学 A kind of infrared polarization based on complementary relationship and the more embedded fusion methods of algorithm 2D VMD of intensity image
CN110097530A (en) * 2019-04-19 2019-08-06 西安电子科技大学 Based on multi-focus image fusing method super-pixel cluster and combine low-rank representation
CN110648342A (en) * 2019-09-30 2020-01-03 福州大学 Foam infrared image segmentation method based on NSST significance detection and image segmentation

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
LIU Y. Simultaneous image fusion and denoising with adaptive sparse representation. IET Image Processing, 2014: 347-357. *
冯鑫, 张建华, 胡开群, 翟志芬. 一种改进含噪多聚焦图像融合方法. 光电子·激光, 2017(11): 96-102. *
刘栋, 聂仁灿, 周冬明, 侯瑞超, 熊磊. 结合NSST与GA参数优化PCNN图像融合. 计算机工程与应用, 2018(19): 164-169+177. *
张晓琪, 侯世英. 基于导向滤波与分形维度的图像加权融合算法. 包装工程, 2018(09): 230-237. *
曹义亲, 曹婷, 黄晓生. 基于NSST的CS与区域特性相结合的图像融合方法. 计算机工程与应用, 2017(20): 195-201. *
朱平哲. 基于NSST与DBM的可见光与红外图像融合方法. 吉林化工学院学报, 2019(03): 65-71. *
李娇, 杨艳春, 党建武, 王阳萍. NSST与引导滤波相结合的多聚焦图像融合算法. 哈尔滨工业大学学报, 2018(11): 151-158. *

Also Published As

Publication number Publication date
CN113947554A (en) 2022-01-18

Similar Documents

Publication Publication Date Title
CN110738605B (en) Image denoising method, system, equipment and medium based on transfer learning
CN110782399B (en) An image deblurring method based on multi-task CNN
CN111047541B (en) Image restoration method based on wavelet transformation attention model
CN104573731B (en) Fast target detection method based on convolutional neural networks
CN107392852B (en) Super-resolution reconstruction method, device, device and storage medium of depth image
CN101872472B (en) A face image super-resolution reconstruction method based on sample learning
CN101630405B (en) Multi-focusing image fusion method utilizing core Fisher classification and redundant wavelet transformation
CN115409733A (en) Low-dose CT image noise reduction method based on image enhancement and diffusion model
CN105488776B (en) Super-resolution image reconstruction method and device
CN106886977A (en) A kind of many figure autoregistrations and anastomosing and splicing method
CN104036479B (en) Multi-focus image fusion method based on non-negative matrix factorization
CN106952229A (en) Image super-resolution reconstruction method based on improved convolutional network with data augmentation
CN109410247A (en) A kind of video tracking algorithm of multi-template and adaptive features select
CN103839223A (en) Image processing method and image processing device
CN112163994B (en) Multi-scale medical image fusion method based on convolutional neural network
CN112288668A (en) Infrared and visible light image fusion method based on depth unsupervised dense convolution network
CN111626927A (en) Binocular image super-resolution method, system and device adopting parallax constraint
CN115601282A (en) Infrared and visible light image fusion method based on multi-discriminator generation countermeasure network
Jam et al. Symmetric skip connection Wasserstein GAN for high-resolution facial image inpainting
Li et al. Multiple degradation and reconstruction network for single image denoising via knowledge distillation
CN114596592B (en) A pedestrian re-identification method, system, device, and computer-readable storage medium
Peng et al. Lightweight adaptive feature de-drifting for compressed image classification
CN113947554B (en) A multi-focus image fusion method based on NSST and salient information extraction
CN109165551B (en) An expression recognition method based on adaptive weighted fusion of saliency structure tensor and LBP features
CN117274628A (en) Image processing method combining contour wave changes and Vision Transformer

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant