CN101968882A - Multi-source image fusion method - Google Patents
Multi-source image fusion method
- Publication number: CN101968882A
- Authority: CN (China)
- Prior art keywords: information, aliasing, low frequency, fused images, filter
- Legal status: Granted
Abstract
The invention discloses a multi-source image fusion method. In the early feature extraction stage, the method combines the wavelet-kernel-based support value transform with the anti-aliasing contourlet transform, which on the one hand adds the directional information that the support value transform cannot extract, and on the other hand eliminates the aliasing produced by the contourlet transform. In the image fusion stage, a pulse coupled neural network is used as the judgment device for fusing the low-frequency signals of the images to be fused, while the maximum-absolute-value selection rule is used to fuse their high-frequency signals. Effective fusion of the high- and low-frequency signals is thereby achieved.
Description
Technical field
The present invention relates to the field of image fusion, and in particular to a multi-source image fusion method.
Background art
Image fusion in the prior art is generally performed at the pixel level, the feature level, or the decision level. Pixel-level fusion operates mainly on the raw data; its purpose is chiefly image enhancement, segmentation, and classification, providing better input for manual image interpretation or for further feature-level fusion. Pixel-level fusion depends on the sensitivity of the sensing devices and requires high-resolution sensors for remote-sensing images. Feature-level fusion extracts characteristic information from each sensor image and analyzes and processes it comprehensively; the extracted features are compact representations or sufficient statistics of the pixel information, typically edges, shapes, contours, corners, textures, and regions of similar brightness. Feature-level fusion does not use all of the image information, and often neglects the image as a whole while expressing features. Decision-level fusion performs logical or statistical reasoning on information from multiple images; establishing a decision function usually requires lengthy validation computations over a large number of samples, so decision-level fusion of images consumes considerable time and memory.
Summary of the invention
In view of this, to address the above problems, the invention discloses a multi-source image fusion method. It combines the wavelet-kernel-based support value transform with the anti-aliasing contourlet transform, achieving multi-scale, directional feature extraction in the feature extraction stage while reducing "scratch" artifacts in image reconstruction; and it combines the maximum-absolute-value rule with a pulse coupled neural network to fuse the high- and low-frequency information with high precision, effectively extracting the salient features of the images while improving the overall visual quality of the fused image.
The object of the present invention is achieved as follows. A multi-source image fusion method comprises the steps of:
1) using a wavelet kernel function to build multi-scale support value filters for the images to be fused;
2) applying the multi-scale support value filters to each image to be fused via the support value transform to produce its high- and low-frequency information;
3) processing the high- and low-frequency information with the anti-aliasing contourlet transform to obtain anti-aliased high- and low-frequency information;
4) for the anti-aliased low-frequency information of each image to be fused, applying a pulse coupled neural network fusion rule and selecting the anti-aliased low-frequency information that triggers neuron firing as the fused low-frequency information;
5) applying the maximum-absolute-value selection rule to the anti-aliased high-frequency information of each image to be fused, choosing at each pixel the coefficient with the largest absolute value as the fused high-frequency information;
6) applying the inverse anti-aliasing contourlet transform and the inverse support value transform to the selected fused high- and low-frequency information to generate the fused image.
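The six steps above can be sketched end to end with a small numerical example. This is only an illustration, not the patented implementation: the helper `conv_same` merely stands in for the support value filter convolution, the anti-aliasing contourlet stage is omitted, and the trained PCNN low-frequency rule is replaced by a simple absolute-value comparison; all function names here are illustrative.

```python
import numpy as np

def conv_same(img, k):
    """'Same'-size 2-D correlation with edge padding (a stand-in for the
    support value filter convolution Q_h * M_h)."""
    pr, pc = k.shape[0] // 2, k.shape[1] // 2
    p = np.pad(img, ((pr, pr), (pc, pc)), mode="edge")
    out = np.zeros(img.shape)
    for i in range(k.shape[0]):
        for j in range(k.shape[1]):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def fuse(a, b, filt, levels=2):
    """Steps 1-6 in miniature: decompose both images (DM_h = Q_h * M_h,
    M_{h+1} = M_h - DM_h), fuse high frequencies by the max-absolute-value
    rule, fuse low frequencies by a simple stand-in for the PCNN rule,
    then reconstruct by summation (the exact inverse of the decomposition)."""
    def decompose(m):
        highs, low = [], m.astype(float)
        for _ in range(levels):
            dm = conv_same(low, filt)   # high-frequency layer DM_h
            highs.append(dm)
            low = low - dm              # low-frequency layer M_{h+1}
        return highs, low
    ha, la = decompose(a)
    hb, lb = decompose(b)
    highs = [np.where(np.abs(x) >= np.abs(y), x, y) for x, y in zip(ha, hb)]
    low = np.where(np.abs(la) >= np.abs(lb), la, lb)   # PCNN rule stand-in
    return low + sum(highs)
```

Fusing an image with itself returns the image exactly, because summation is the exact inverse of the recurrence M_{h+1} = M_h − DM_h.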
Further, in step 1), the wavelet kernel function is used to construct the multi-scale support value filter prototype Q1, and Q1 is zero-padded horizontally and vertically according to the à trous wavelet filter principle to build the multi-scale transform filters, yielding the multi-scale support value filters;
wherein the multi-scale support value filter prototype satisfies:
Ω = K + I/γ, where γ is the normalization factor and I is the identity matrix;
the elements of K satisfy k_{i,j} = K(x_i, x_j) = φ(x_i)^T φ(x_j), i, j ∈ [1, 2n+1], where φ(x) is the wavelet kernel function;
the elements of Y satisfy y_{i,j} = i × j, i ∈ [1, 2n+1], j = 1;
n is a natural number that determines the scale size of the prototype Q1.
Further, step 2) specifically comprises the steps of:
21) applying the multi-scale support value filter of each scale to the image to be fused via the support value transform to obtain the decomposed high-frequency information:
DM_h = Q_h * M_h,
where M_1 is the original image to be fused and, for h ≥ 2, M_h is the h-th level low-frequency information; DM_h is the h-th level high-frequency information, the convolution of Q_h with M_h; Q_h is the multi-scale support value filter at scale h; h ∈ [1, n] is the decomposition level, n ≥ 1;
22) using M_{h+1} = M_h − DM_h to obtain the (h+1)-th level low-frequency information M_{h+1}.
Further, step 4) comprises the steps of:
41) using the anti-aliased low-frequency information of a set of sample images as input to build a pulse coupled neural network that selects anti-aliased low-frequency information, the network comprising a receiving unit, a modulation unit, and a pulse generation unit, the pulse generation unit generating the dynamic threshold according to the decay-and-fire principle;
42) the receiving unit receiving, as the feeding input, the anti-aliased low-frequency information of each image to be fused obtained in step 3), and the modulation unit modulating the feeding input with the neighborhood linking input of the anti-aliased low-frequency information to obtain the neuron output; when the neuron output exceeds the dynamic threshold, the anti-aliased low-frequency information of the image to be fused corresponding to that neuron output is selected as the fused low-frequency information.
Further, in step 6), the fused high-frequency information from step 5) and the fused low-frequency information from step 4) undergo coefficient adjustment and then, in turn, anti-aliasing contourlet reconstruction and support value reconstruction to generate the fused image.
The beneficial effects of the invention are as follows. Combining the wavelet-kernel-based support value transform with the anti-aliasing contourlet achieves multi-scale, directional feature extraction in the feature extraction stage while reducing "scratch" artifacts in image reconstruction. Since the high-frequency information of a decomposed image characterizes its detail, and the larger the absolute value of a high-frequency coefficient, the more intense the brightness change at that location and the more important the image information it carries, applying the maximum-absolute-value selection to the high-frequency information fully preserves image detail. In keeping with human visual physiology, applying the pulse coupled neural network to the low-frequency information effectively extracts the salient features of the images while improving the overall visual quality of the fused image.
Description of drawings
To make the objects, technical solutions and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings:
Fig. 1 shows the overall flow of the image fusion;
Fig. 2 shows the scale-variation process of the multi-scale support value filter;
Fig. 3 shows the structure of the pulse coupled neural network.
Embodiment
The preferred embodiments of the present invention are described in detail below.
Overall steps:
As shown in Fig. 1, a multi-source image fusion method comprises the following steps.
In the feature extraction stage:
1) a wavelet kernel function is used to build multi-scale support value filters for the images A and B to be fused;
2) the multi-scale support value filters are applied to the images to be fused via the support value transform (i.e., the SVT), forming high- and low-frequency information;
3) the high- and low-frequency information is processed with the anti-aliasing contourlet transform (i.e., the NACT) to obtain anti-aliased high- and low-frequency information.
In the image fusion stage:
4) for the anti-aliased low-frequency information of each image to be fused, a pulse coupled neural network (PCNN) fusion rule is applied, and the anti-aliased low-frequency information that triggers neuron firing is selected as the fused low-frequency information;
5) from the anti-aliased high-frequency information of each image to be fused, the coefficient with the maximum absolute value is chosen as the fused high-frequency information;
6) the selected fused high- and low-frequency information undergoes the inverse anti-aliasing contourlet transform and the inverse support value transform to generate the fused image.
One, image feature extraction
1. Wavelet kernel function
The support vector machine (SVM) is a general-purpose machine learning method established by Vapnik in 1995 on the basis of statistical learning theory. Zhang et al. proposed the wavelet support vector machine in 2004, replacing the common Gaussian and polynomial kernels with a wavelet kernel; based on the Morlet mother wavelet, the wavelet kernel function is defined as:
K(x, x′) = ∏_{i=1}^{L} cos(1.75 (x_i − x′_i)/λ) · exp(−(x_i − x′_i)² / (2λ²))
where λ is the dilation factor, set according to circumstances in practical applications; i ∈ [1, L], where L is the dimensionality of the data space.
This wavelet kernel is nearly orthogonal and well suited to local signal analysis, which improves the classification performance of the support vector machine. In 1999 Suykens et al. proposed a new support vector machine, the least squares SVM (LS-SVM), on Vapnik's statistical learning foundation; LS-SVM learns faster than the standard SVM and is easier to compute. Using the wavelet kernel function as the basis function in LS-SVM makes it possible to approximate any nonlinear function with high precision. Compared with the Gaussian kernel, the wavelet kernel has excellent properties for non-stationary signals, progressively refined approximation, and feature extraction; the multi-scale filters it generates can extract the salient features of an image well, which benefits image fusion.
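For reference, the Morlet-based wavelet kernel can be written out directly; the constant 1.75 is the usual Morlet modulation frequency, and `lam` plays the role of the dilation factor λ above.

```python
import numpy as np

def wavelet_kernel(x, z, lam=1.0):
    """Morlet-based wavelet kernel:
    K(x, z) = prod_i cos(1.75*(x_i - z_i)/lam) * exp(-(x_i - z_i)**2 / (2*lam**2))."""
    d = (np.asarray(x, dtype=float) - np.asarray(z, dtype=float)) / lam
    return float(np.prod(np.cos(1.75 * d) * np.exp(-0.5 * d * d)))
```

Each factor equals 1 when x = z, so K(x, x) = 1, and the kernel is symmetric in its arguments.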
2. Multi-scale support value filter
In the standard support vector machine, the basic estimation problem is solved by seeking a set of weight coefficients through quadratic programming (QP), and can be expressed as:
y = f(x, w) = w^T φ(x) + b    (2)
where x ∈ R^d, y ∈ R, R^d denotes the input space, d its dimension, φ(x) is the nonlinear mapping R^d → R^q, q is the dimension of the feature space, and w is an element of the feature space R^q. The risk functional R(w) of f(x, w) can then be written as:
R(w) = (1/2)‖w‖² + C Σ_{i=1}^{N} L[y_i, f(x_i, w)]    (3)
where C > 0 is a constant and L[y, f(x, w)] is the ε-insensitive loss function
L[y, f(x, w)] = max(0, |y − f(x, w)| − ε)    (4)
The parameter ε bounds the deviation of the optimized function f(x, w) from the actual target y.
In LS-SVM, the risk functional R(w) simplifies to the minimization:
min R(w) = (1/2) w^T w + (γ/2) Σ_{i=1}^{N} e_i²    (5)
where γ is a constant, the normalization factor. The equivalent equality constraints are then defined as:
y_i = w^T φ(x_i) + b + e_i,  i = 1, ..., N    (6)
where x_i are the samples, b ∈ R is the bias, and N is the number of samples x.
The Lagrangian function can be defined as:
L(w, b, e; α) = R(w) − Σ_{i=1}^{N} α_i [w^T φ(x_i) + b + e_i − y_i]    (7)
Eliminating w and e, the optimality conditions can be written as the equality-constrained linear system:
[[0, 1^T], [1, Ω]] [b; α] = [0; Y]    (8)
where Ω = K + I/γ, γ is the normalization factor, and I is the identity matrix; the elements of K satisfy k_{i,j} = K(x_i, x_j) = ⟨φ(x_i), φ(x_j)⟩ = φ(x_i)^T φ(x_j), i, j ∈ [1, 2n+1], with φ(x) the wavelet kernel function and ⟨·,·⟩ the inner product.
In the construction of the multi-scale filter, the elements of Y satisfy y_{i,j} = i × j, i ∈ [1, 2n+1], j = 1; Y is therefore a fixed column vector.
Under the LS-SVM framework, when the multi-scale filter is constructed, the estimation function f can be expressed as a linear combination of the support vectors:
f(x) = Σ_{i=1}^{N} α_i K(x, x_i) + b    (9)
where K(x, x_i) = φ(x)^T φ(x_i), i = 1, ..., N; K is the kernel function and α_i is the support value of the support vector x_i.
From equation (8), b and α can be written as equations (10) and (11):
b = (1^T Ω^{-1} Y) / (1^T Ω^{-1} 1)    (10)
α = Ω^{-1} (Y − b·1)    (11)
In LS-SVM, the kernel K, the normalization factor γ, and the vector Y = [y_1, ..., y_N]^T can be chosen in advance; by the same token, the constant matrices A and B can be precomputed:
B = Ω^{-1} 1 / (1^T Ω^{-1} 1),  A = Ω^{-1} (I − 1 B^T)
It follows that (10) and (11) can be rewritten as equations (12) and (13), respectively:
b = B^T Y    (12)
α = A Y    (13)
Through equation (14), Q is defined as a matrix of size M × N, where M = 2m+1 and N = 2n+1, and generally m = n is set; rewritten as the matrix Q, the corresponding weight-coefficient kernel becomes the new support value filter. The matrix Q depends on the input vectors, the kernel function K, and the parameter γ.
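A sketch of how the prototype Q could be obtained numerically from the linear system (8): solve for the bias b and the support values α, then reshape α into a (2n+1)×(2n+1) mask. The sampling grid, the kernel choice, and the helper name here are illustrative assumptions, not the patent's exact construction.

```python
import numpy as np

def support_filter(X, Y, kernel, gamma=1.0, n=1):
    """Solve the LS-SVM system [[0, 1^T], [1, Omega]] [b; alpha] = [0; Y]
    with Omega = K + I/gamma, then reshape the support values alpha into a
    (2n+1) x (2n+1) filter mask Q (assumes len(X) == (2n+1)**2)."""
    N = len(X)
    K = np.array([[kernel(X[i], X[j]) for j in range(N)] for i in range(N)])
    omega = K + np.eye(N) / gamma
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = 1.0        # first row: the support values alpha sum to zero
    A[1:, 0] = 1.0        # bias column
    A[1:, 1:] = omega
    rhs = np.concatenate(([0.0], np.asarray(Y, dtype=float)))
    sol = np.linalg.solve(A, rhs)
    b, alpha = sol[0], sol[1:]
    side = 2 * n + 1
    return alpha.reshape(side, side), b
```

For n = 1, a 3×3 grid of sample positions gives nine support values, which form the 3×3 prototype mask.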
Q is zero-padded horizontally and vertically according to the à trous wavelet filter principle to construct the multi-scale transform filters, yielding the multi-scale support value filter of each scale. Fig. 2 shows the scale-variation form of Q, where Q1 is the coefficient matrix of the multi-scale wavelet-kernel support vector filter for n = 1 and γ = 1.
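The "porous" (à trous, "with holes") zero-filling described above amounts to inserting zeros between neighboring coefficients of the prototype; a minimal sketch, with the 2^(scale−1) spacing as an assumption:

```python
import numpy as np

def atrous_expand(q, scale):
    """Expand a filter prototype to a coarser scale by inserting
    2**(scale-1) - 1 zeros between neighboring coefficients, as in the
    a trous wavelet filter construction (scale = 1 returns q unchanged)."""
    step = 2 ** (scale - 1)
    rows, cols = q.shape
    out = np.zeros(((rows - 1) * step + 1, (cols - 1) * step + 1))
    out[::step, ::step] = q
    return out
```

A 3×3 prototype becomes a 5×5 filter at scale 2 and a 9×9 filter at scale 3, keeping the same coefficients and coefficient sum.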
3. Support value transform of the images to be fused with the multi-scale support value filters
The multi-scale support value filter of each scale is applied to the image to be fused via the support value transform; the decomposed high-frequency information is obtained as follows:
DM_h = Q_h * M_h,
where M_1 is the original image to be fused and, for h ≥ 2, M_h is the h-th level low-frequency information; DM_h is the h-th level high-frequency information, the convolution of Q_h with M_h; Q_h is the multi-scale support value filter at scale h; h ∈ [1, n] is the decomposition level, n ≥ 1.
M_{h+1} = M_h − DM_h then yields the (h+1)-th level low-frequency information M_{h+1}.
4. Anti-aliasing contourlet (NACT) processing of the high- and low-frequency information
The anti-aliasing filter method is used to further decompose the high- and low-frequency information of the images to be fused obtained from the support value decomposition, giving the anti-aliased high- and low-frequency information.
Two, image fusion
1. Generating the fused low-frequency information
In keeping with human visual physiology, the pulse-coupled neural network (PCNN) is applied to image fusion to improve its performance. The PCNN is a neural network model built on the visual cortex theory of the cat. It consists of three functional units: a receiving unit, a modulation unit, and a pulse generation unit. After a neuron fires, it can induce adjacent neurons to fire; this property of the PCNN is the foundation of its use in image applications. Each neuron's activity can be described by the standard discrete model:
F_{i,j}[n] = S_{i,j}
L_{i,j}[n] = e^{−α_L} L_{i,j}[n−1] + V_L Σ_{p,q} W_{p,q} Y_{i+p,j+q}[n−1]
U_{i,j}[n] = F_{i,j}[n] (1 + β L_{i,j}[n])
Y_{i,j}[n] = 1 if U_{i,j}[n] > θ_{i,j}[n], otherwise 0
where F_{i,j}, U_{i,j} and Y_{i,j} are the neuron's feeding input, internal activity term and output, respectively; n is the iteration count; S_{i,j} is the anti-aliased low-frequency information of a set of sample images, used as the PCNN input; L_{i,j} is the neuron's linking field and V_L its intrinsic potential; W is the linking weight matrix between neurons, with p and q the subscripts of the linking field; α_L is the time constant of the linking field; β is the linking strength between synapses; θ_{i,j} is the dynamic threshold. If U_{i,j} exceeds the threshold θ_{i,j}, the neuron emits a pulse, which is also called one firing. In training the PCNN applied to the anti-aliased low-frequency information, the number of firings is determined by repeated triggering; after repeated triggering, the trained PCNN selects the anti-aliased low-frequency information with good fusion performance.
During PCNN training, the pulse generation unit generates the dynamic threshold θ_{i,j}[n] iteratively according to the decay-and-fire principle:
θ_{i,j}[n] = e^{−α_θ} θ_{i,j}[n−1] + V_θ Y_{i,j}[n−1]
In the first cycle, the initial value of θ_{i,j} is set to 0. Whenever a neuron fires during the iteration, its dynamic threshold increases at once and then decays gradually and exponentially until the neuron fires again; the final dynamic threshold θ_{i,j} is determined after repeated firings during training.
The trained PCNN is applied to the selection of the anti-aliased low-frequency information, as shown in Fig. 3. The receiving unit receives the anti-aliased low-frequency information F_{i,j} at frequency-domain coordinate (i, j) as the feeding input, and the modulation unit modulates the feeding input with the neighborhood linking input of the anti-aliased low-frequency information to obtain the neuron output U_{i,j}. When U_{i,j} exceeds the dynamic threshold θ_{i,j} determined by the trained PCNN, the anti-aliased low-frequency information of the image to be fused corresponding to that neuron output is selected as the fused low-frequency information.
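A minimal PCNN of this shape can be written down directly. The parameter values, the 8-neighborhood linking weights, and the firing-count fusion rule below are illustrative assumptions rather than the trained network of the patent.

```python
import numpy as np

def pcnn_fire_counts(s, iters=10, beta=0.2, alpha_l=0.5, alpha_t=0.3,
                     v_l=1.0, v_t=2.0):
    """Iterate a simple PCNN over a coefficient map s (values in [0, 1]):
    F = s;  L = exp(-alpha_l)*L + v_l * (weighted neighborhood firing);
    U = F*(1 + beta*L);  Y = (U > theta);
    theta = exp(-alpha_t)*theta + v_t*Y  (decay-and-fire threshold).
    Returns how many times each neuron fired."""
    s = np.asarray(s, dtype=float)
    L = np.zeros_like(s)
    Y = np.zeros_like(s)
    theta = np.zeros_like(s)             # initial threshold 0, as in the first cycle
    fires = np.zeros_like(s)
    w = np.ones((3, 3)); w[1, 1] = 0.0   # linking weight matrix W (8-neighborhood)
    for _ in range(iters):
        pad = np.pad(Y, 1)
        link = sum(w[i, j] * pad[i:i + s.shape[0], j:j + s.shape[1]]
                   for i in range(3) for j in range(3))
        L = np.exp(-alpha_l) * L + v_l * link
        U = s * (1.0 + beta * L)
        Y = (U > theta).astype(float)
        theta = np.exp(-alpha_t) * theta + v_t * Y
        fires += Y
    return fires

def fuse_low(la, lb, **kw):
    """Step-4-style rule: keep the coefficient whose neuron fired more often."""
    return np.where(pcnn_fire_counts(la, **kw) >= pcnn_fire_counts(lb, **kw), la, lb)
```

Stronger coefficients fire earlier and more often, so the rule favors the source image with the more salient low-frequency content at each position.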
2. Generating the fused high-frequency information
Among the anti-aliased high-frequency information of each image to be fused obtained from the anti-aliasing contourlet decomposition, the coefficient with the maximum absolute value at each pixel is chosen as the fused high-frequency information.
3. Reconstructing the fused image
The selected fused high- and low-frequency information undergoes the inverse anti-aliasing contourlet transform and the inverse support value transform to generate the fused image.
The above are merely preferred embodiments of the present invention and do not limit it. Obviously, those skilled in the art can make various changes and modifications to the invention without departing from its spirit and scope. If such modifications and variations fall within the scope of the claims of the invention and their technical equivalents, the invention is intended to encompass them as well.
Claims (5)
1. A multi-source image fusion method, characterized by comprising the steps of:
1) using a wavelet kernel function to build multi-scale support value filters for the images to be fused;
2) applying the multi-scale support value filters to each image to be fused via the support value transform to produce its high- and low-frequency information;
3) processing the high- and low-frequency information with the anti-aliasing contourlet transform to obtain anti-aliased high- and low-frequency information;
4) for the anti-aliased low-frequency information of each image to be fused, applying a pulse coupled neural network fusion rule and selecting the anti-aliased low-frequency information that triggers neuron firing as the fused low-frequency information;
5) applying the maximum-absolute-value selection rule to the anti-aliased high-frequency information of each image to be fused, choosing at each pixel the coefficient with the largest absolute value as the fused high-frequency information;
6) applying the inverse anti-aliasing contourlet transform and the inverse support value transform to the selected fused high- and low-frequency information to generate the fused image.
2. The multi-source image fusion method as claimed in claim 1, characterized in that:
in step 1), the wavelet kernel function is used to construct the multi-scale support value filter prototype Q1, and Q1 is zero-padded horizontally and vertically according to the à trous wavelet filter principle to build the multi-scale transform filters, yielding the multi-scale support value filters;
wherein the multi-scale support value filter prototype satisfies:
Ω = K + I/γ, where γ is the normalization factor and I is the identity matrix;
the elements of K satisfy k_{i,j} = K(x_i, x_j) = φ(x_i)^T φ(x_j), i, j ∈ [1, 2n+1], where φ(x) is the wavelet kernel function;
the elements of Y satisfy y_{i,j} = i × j, i ∈ [1, 2n+1], j = 1;
n is a natural number that determines the scale size of the prototype Q1.
3. The multi-source image fusion method as claimed in claim 2, characterized in that step 2) specifically comprises the steps of:
21) applying the multi-scale support value filter of each scale to the image to be fused via the support value transform to obtain the decomposed high-frequency information:
DM_h = Q_h * M_h,
where M_1 is the original image to be fused and, for h ≥ 2, M_h is the h-th level low-frequency information; DM_h is the h-th level high-frequency information, the convolution of Q_h with M_h; Q_h is the multi-scale support value filter at scale h; h ∈ [1, n] is the decomposition level, n ≥ 1;
22) using M_{h+1} = M_h − DM_h to obtain the (h+1)-th level low-frequency information M_{h+1}.
4. The multi-source image fusion method as claimed in claim 3, characterized in that step 4) comprises the steps of:
41) using the anti-aliased low-frequency information of a set of sample images as input to a pulse coupled neural network that selects anti-aliased low-frequency information, the network comprising a receiving unit, a modulation unit, and a pulse generation unit, the pulse generation unit generating the dynamic threshold according to the decay-and-fire principle;
42) the receiving unit receiving, as the feeding input, the anti-aliased low-frequency information of each image to be fused obtained in step 3), and the modulation unit modulating the feeding input with the neighborhood linking input of the anti-aliased low-frequency information to obtain the neuron output; when the neuron output exceeds the dynamic threshold, the anti-aliased low-frequency information of the image to be fused corresponding to that neuron output is selected as the fused low-frequency information.
5. The multi-source image fusion method as claimed in any one of claims 1 to 4, characterized in that, in step 6), the fused high-frequency information from step 5) and the fused low-frequency information from step 4) undergo coefficient adjustment and then, in turn, anti-aliasing contourlet reconstruction and support value reconstruction to generate the fused image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2010102885529A CN101968882B (en) | 2010-09-21 | 2010-09-21 | Multi-source image fusion method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101968882A true CN101968882A (en) | 2011-02-09 |
CN101968882B CN101968882B (en) | 2012-08-15 |
Family
ID=43548032
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2010102885529A Expired - Fee Related CN101968882B (en) | 2010-09-21 | 2010-09-21 | Multi-source image fusion method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN101968882B (en) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102567977A (en) * | 2011-12-31 | 2012-07-11 | 南京理工大学 | Self-adaptive fusing method of infrared polarization image based on wavelets |
CN103679670A (en) * | 2012-09-25 | 2014-03-26 | 中国航天科工集团第二研究院二〇七所 | A PCNN multisource image fusion method based on an improved model |
CN104992224A (en) * | 2015-06-09 | 2015-10-21 | 浪潮(北京)电子信息产业有限公司 | Pulse coupled neural network extending system and pulse coupled neural network extending method |
CN103985105B (en) * | 2014-02-20 | 2016-11-23 | 江南大学 | Contourlet territory based on statistical modeling multimode medical image fusion method |
CN106971385A (en) * | 2017-03-30 | 2017-07-21 | 西安微电子技术研究所 | A kind of aircraft Situation Awareness multi-source image real time integrating method and its device |
CN108270968A (en) * | 2017-12-30 | 2018-07-10 | 广东金泽润技术有限公司 | A kind of infrared and visual image fusion detection system and method |
CN108537790A (en) * | 2018-04-13 | 2018-09-14 | 西安电子科技大学 | Heterologous image change detection method based on coupling translation network |
CN108830819A (en) * | 2018-05-23 | 2018-11-16 | 青柠优视科技(北京)有限公司 | A kind of image interfusion method and device of depth image and infrared image |
CN110348459A (en) * | 2019-06-28 | 2019-10-18 | 西安理工大学 | Based on multiple dimensioned quick covering blanket method sonar image fractal characteristic extracting method |
CN111340111A (en) * | 2020-02-26 | 2020-06-26 | 上海海事大学 | Method for recognizing face image set based on wavelet kernel extreme learning machine |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101216936A (en) * | 2008-01-18 | 2008-07-09 | 西安电子科技大学 | A multi-focus image amalgamation method based on imaging mechanism and nonsampled Contourlet transformation |
CN101226635A (en) * | 2007-12-18 | 2008-07-23 | 西安电子科技大学 | Multisource image anastomosing method based on comb wave and Laplace tower-shaped decomposition |
US20080292166A1 (en) * | 2006-12-12 | 2008-11-27 | Masaya Hirano | Method and apparatus for displaying phase change fused image |
CN101504766A (en) * | 2009-03-25 | 2009-08-12 | 湖南大学 | Image amalgamation method based on mixed multi-resolution decomposition |
Non-Patent Citations (1)
Title |
---|
《光学技术》 (Optical Technique), 2006-08-31, Zhao Peng et al., "A new image fusion method based on multi-scale morphological filters" * |
Also Published As
Publication number | Publication date |
---|---|
CN101968882B (en) | 2012-08-15 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20120815; Termination date: 20160921 |
CF01 | Termination of patent right due to non-payment of annual fee |