CN101630405A - Multi-focus image fusion method using kernel Fisher classification and redundant wavelet transform - Google Patents

Multi-focus image fusion method using kernel Fisher classification and redundant wavelet transform

Info

Publication number: CN101630405A (application CN200910104632A; granted as CN101630405B)
Authority: CN (China)
Inventor: 楚恒
Applicant and assignee: Chongqing Survey Institute
Priority application: CN2009101046321A
Legal status: Granted; Expired - Fee Related

Abstract

The invention discloses a multi-focus image fusion method using kernel Fisher classification and the redundant wavelet transform. The method comprises the following steps: first, the source images are partitioned into image blocks and the sharpness features of each block are computed; second, part of the source images is taken as a training set, from which the parameters of a kernel Fisher classifier are obtained after training; third, the trained kernel Fisher classifier is used to produce a preliminary fused image; finally, the redundant wavelet transform and the spatial correlation coefficient are used to fuse the image blocks lying at the junction of the clear and blurred regions of the source images, yielding the final fused image. The invention achieves better image fusion performance, its results are free of obvious blocking effects and artifacts, and it strikes a good compromise between improving fusion quality and reducing computation, so the results can be used in subsequent image processing and display. With a small number of wavelet decomposition levels, the invention is well suited to applications with strict real-time requirements.

Description

A multi-focus image fusion method using kernel Fisher classification and the redundant wavelet transform
Technical field
The invention belongs to the field of image fusion and specifically relates to a multi-focus image fusion method. The method uses kernel Fisher classification and the redundant wavelet transform to fuse images of the same scene focused on different objects, producing a single image that is in focus everywhere.
Background technology
Image fusion is currently one of the research focuses of the image processing community and has been widely applied in remote sensing, machine vision, medicine, military, forensic and manufacturing fields. When an image is acquired with a sensor such as a CCD or CMOS camera, the limited depth of field of the lens means that only the scene objects on the focal plane project sharply onto the image, while objects at other depths appear blurred to varying degrees. An image that is in focus everywhere is a precondition for many subsequent processing tasks. The main way to obtain one is multi-focus image fusion: a series of images is captured with different focus settings and then fused into a single image that is sharp everywhere. A multi-focus source image can usually be divided into three parts: a clear region, a blurred region, and the junction zone between the two. The purpose of multi-focus image fusion is to find the clear regions of the source images and combine them into a composite image in which all scene objects are sharp.
At present, common multi-focus image fusion methods fall into two broad classes: transform-domain methods and spatial-domain methods. Transform-domain methods mainly use the Laplacian pyramid, the wavelet transform, the Curvelet transform, the Contourlet transform and so on. Their overall fusion quality is good and free of obvious blocking effects, but artifacts and other distortions readily appear in the fused image, and the computational and memory costs are usually high, especially when an undecimated multiresolution analysis is used. In recent years some researchers have proposed fusion methods based on novel multiresolution analyses such as the Contourlet and Curvelet transforms; these mostly transplant wavelet-based fusion rules to the high- and low-frequency coefficients of the new analysis, often require more computation, and improve the fusion quality only marginally. Spatial-domain methods can take the pixel, the image block, or the region as the fusion unit. Pixel-based multi-focus fusion usually has to decide whether each individual pixel is in focus, which incurs high computational cost and error. Block-based multi-focus fusion is computationally efficient, but how to choose a suitable block size remains an open question. Region-based multi-focus fusion first performs image segmentation, which increases the computation, and its quality depends heavily on the segmentation. Of the three, the block-based approach has the best computational efficiency; if the problem of blocks straddling the boundary between clear and blurred regions in the source images can be solved, its fusion quality can be further improved.
In recent years pattern classification methods have been widely introduced into the image fusion field; fusion strategies based on neural networks, support vector machines and support vector clustering have been proposed. However, the existing literature does not adequately address the special case of blocks lying at the boundary between the clear and blurred regions of the source images. One published method fuses such boundary blocks using the discrete cosine transform, but it requires two rounds of support vector machine classification, and the fusion quality of the discrete cosine transform still lags behind multiresolution-based methods. Other published methods take the multiresolution coefficients as the object of study and are computationally expensive.
Summary of the invention
In view of the deficiencies of existing multi-focus image fusion methods, the purpose of the invention is to provide a multi-focus image fusion method using kernel Fisher classification and the redundant wavelet transform that has low computational cost and high fusion quality.
The object of the invention is achieved as follows: first, each source image is partitioned into image blocks and the sharpness features of each block are computed; next, part of the source images is taken as a training set, from which the parameters of a kernel Fisher classifier are obtained after training; the trained kernel Fisher classifier is then used to produce a preliminary fused image; finally, the redundant wavelet transform and the spatial correlation coefficient are used to fuse the image blocks lying at the junction of the clear and blurred regions of the source images, yielding the final fused image.
The concrete fusion method is:
(1) Divide the source images A and B, each of size M × N, into image blocks of size d × d. Define Sign(m, n) as the sign matrix of the image blocks of the final fused image F, where 0 ≤ m ≤ (M/d − 1) and 0 ≤ n ≤ (N/d − 1).
(2) Compute three sharpness features of each image block: the sum of modified Laplacian SML, the spatial frequency SF, and the energy of image gradient EOG. The feature vectors of corresponding source image blocks A_h and B_h are

V_{A_h} = (SML_{A_h}, SF_{A_h}, EOG_{A_h}) and V_{B_h} = (SML_{B_h}, SF_{B_h}, EOG_{B_h})

(3) Choose a suitable region of the source images as the training set and train a kernel Fisher classifier to judge which of the source image blocks A_h and B_h is clearer. The normalized difference vector V_{A_h} − V_{B_h} is taken as the input; the output is 1 when A_h is clearer than B_h and 0 otherwise.
(4) Use the kernel Fisher classifier obtained in the previous step to classify every pair of source image blocks. If block A_h is clearer than B_h, Sign(m, n) is set to 1, otherwise to 0.
(5) Apply a majority filter to the obtained sign matrix Sign(m, n) as a consistency check: each fused block should come from the same source image as the majority of blocks in the check window centered on it. The invention uses a 3 × 3 check window. From the checked sign matrix Sign(m, n), the preliminary fused image Z is obtained as

Z(i,j) = \begin{cases} A(i,j), & \mathrm{Sign}(m,n) = 1 \\ B(i,j), & \mathrm{Sign}(m,n) = 0 \end{cases}

where (m−1) × d + 1 ≤ i ≤ m × d and (n−1) × d + 1 ≤ j ≤ n × d.
(6) Find the image blocks lying at the junction of the clear and blurred regions of the source images. According to the consistency-checked sign matrix Sign(m, n), if a block comes from one source image while its 3 × 3 neighborhood contains a block from the other source image, the block is considered to lie at the junction of the clear and blurred regions. For this class of blocks the invention provides the following fusion strategy:
1. Decompose the source image blocks X_e and Y_e lying at the junction of the clear and blurred regions over L levels with the redundant wavelet transform (RWT):

X_e = \sum_{l=1}^{L} H_X^l + \sum_{l=1}^{L} V_X^l + \sum_{l=1}^{L} D_X^l + A_X^L

Y_e = \sum_{l=1}^{L} H_Y^l + \sum_{l=1}^{L} V_Y^l + \sum_{l=1}^{L} D_Y^l + A_Y^L

where H_X^l, V_X^l, D_X^l and H_Y^l, V_Y^l, D_Y^l are the level-l horizontal, vertical and diagonal high-frequency subimages of X_e and Y_e after RWT decomposition, and A_X^L and A_Y^L are the low-frequency subimages of X_e and Y_e after RWT decomposition.
2. Among the high-frequency coefficients produced by the RWT decomposition, select the one with the larger absolute value as the fused high-frequency coefficient:

H_F^l(i,j) = \begin{cases} H_X^l(i,j), & |H_X^l(i,j)| \ge |H_Y^l(i,j)| \\ H_Y^l(i,j), & |H_X^l(i,j)| < |H_Y^l(i,j)| \end{cases}

V_F^l(i,j) = \begin{cases} V_X^l(i,j), & |V_X^l(i,j)| \ge |V_Y^l(i,j)| \\ V_Y^l(i,j), & |V_X^l(i,j)| < |V_Y^l(i,j)| \end{cases}

D_F^l(i,j) = \begin{cases} D_X^l(i,j), & |D_X^l(i,j)| \ge |D_Y^l(i,j)| \\ D_Y^l(i,j), & |D_X^l(i,j)| < |D_Y^l(i,j)| \end{cases}

where H_F^l(i,j), V_F^l(i,j) and D_F^l(i,j) are the coefficients at (i,j) of the level-l horizontal, vertical and diagonal high-frequency subimages of the fused block F_e.
3. Among the low-frequency coefficients produced by the RWT decomposition, select the one with the larger modified-Laplacian (ML) value as the fused low-frequency coefficient:

A_F^L(i,j) = \begin{cases} A_X^L(i,j), & ML_X^L(i,j) \ge ML_Y^L(i,j) \\ A_Y^L(i,j), & ML_X^L(i,j) < ML_Y^L(i,j) \end{cases}

where ML_X^L(i,j) and ML_Y^L(i,j) are the ML values of the source image blocks X_e and Y_e at (i,j), and A_X^L(i,j), A_Y^L(i,j) and A_F^L(i,j) are the low-frequency coefficients of X_e, Y_e and the fused block F_e at (i,j).
4. After a consistency check on the fused high- and low-frequency coefficients, obtain the fused block F_e by the inverse redundant wavelet transform:

F_e = \sum_{l=1}^{L} H_F^l + \sum_{l=1}^{L} V_F^l + \sum_{l=1}^{L} D_F^l + A_F^L
5. The sharpness of the fused block F_e lies between those of the clear and blurred source blocks, but is closer to the clear one. Exploiting this property, the invention further refines this class of blocks and provides a criterion for deciding whether a source block contains both clear and blurred regions of the source images, as follows.

First compute the spatial correlation coefficients (sCC) between the source blocks X_e, Y_e and the fused block F_e. For each block satisfying the junction condition of step (6), i.e. its 3 × 3 neighborhood Q in the sign matrix contains blocks from both source images:

sCC(F'_e, X'_e) = \frac{\sum_{i=1}^{M}\sum_{j=1}^{N} [F'_e(i,j) - \overline{F'_e}][X'_e(i,j) - \overline{X'_e}]}{\sqrt{\{\sum_{i=1}^{M}\sum_{j=1}^{N} [F'_e(i,j) - \overline{F'_e}]^2\}\{\sum_{i=1}^{M}\sum_{j=1}^{N} [X'_e(i,j) - \overline{X'_e}]^2\}}}

sCC(F'_e, Y'_e) = \frac{\sum_{i=1}^{M}\sum_{j=1}^{N} [F'_e(i,j) - \overline{F'_e}][Y'_e(i,j) - \overline{Y'_e}]}{\sqrt{\{\sum_{i=1}^{M}\sum_{j=1}^{N} [F'_e(i,j) - \overline{F'_e}]^2\}\{\sum_{i=1}^{M}\sum_{j=1}^{N} [Y'_e(i,j) - \overline{Y'_e}]^2\}}}

(m−1) × d + 1 ≤ i ≤ m × d, (n−1) × d + 1 ≤ j ≤ n × d

In the above, X'_e, Y'_e and F'_e are the blocks obtained by high-pass filtering X_e, Y_e and F_e, and \overline{X'_e}, \overline{Y'_e} and \overline{F'_e} are their mean pixel gray levels; Q is the 3 × 3 neighborhood and "∧" denotes the logical AND operation. High-pass filtering uses the Laplacian template

\begin{pmatrix} -1 & -1 & -1 \\ -1 & 8 & -1 \\ -1 & -1 & -1 \end{pmatrix}
The following fusion strategy is then applied to each block lying at the junction of the clear and blurred regions of the source images (again identified through the sign matrix as in step (6)):

F(i,j) = \begin{cases} F_e(i,j), & |sCC(F'_e, X'_e) - sCC(F'_e, Y'_e)| < \beta \\ [F_e(i,j) + Z(i,j)]/2, & |sCC(F'_e, X'_e) - sCC(F'_e, Y'_e)| \ge \beta \end{cases}

(m−1) × d + 1 ≤ i ≤ m × d, (n−1) × d + 1 ≤ j ≤ n × d
where β is the median of the values |sCC(F'_e, X'_e) − sCC(F'_e, Y'_e)| computed, according to Sign(m, n), over all blocks of this class. After the above operation has been carried out on every block of the sign matrix Sign(m, n) that satisfies the junction condition, the complete final fused image F is obtained.
The invention realizes multi-focus image fusion with kernel Fisher classification and the redundant wavelet transform. Kernel Fisher classification, also called kernel Fisher discriminant analysis, is a nonlinear classification technique built on the linear Fisher discriminant. It does not depend on a choice of model, and it avoids the curse of dimensionality and the local-minimum problems that easily arise when classifying with neural networks. Compared with the support vector machine, kernel Fisher classification has two advantages. First, it has no notion of support vectors: its complexity is proportional to the number of training samples, whereas the complexity of a support vector machine is closely tied to the number of support vectors. Second, in some respects the kernel Fisher classifier outperforms the support vector machine, mainly because its training depends on all training samples while the latter relies mainly on the support vectors. The invention performs multi-focus image fusion by means of kernel Fisher classification.
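As an illustration of the classifier described above, the following is a minimal, generic binary kernel Fisher discriminant in NumPy. It is a sketch, not the patent's exact implementation: the radial basis kernel, the regularization constant `reg`, and the midpoint decision threshold are our assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (radial basis) kernel matrix between row-vector sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelFisherClassifier:
    """Minimal binary kernel Fisher discriminant (labels 0/1)."""
    def __init__(self, gamma=1.0, reg=1e-3):
        self.gamma, self.reg = gamma, reg

    def fit(self, X, y):
        self.X = np.asarray(X, float)
        y = np.asarray(y)
        K = rbf_kernel(self.X, self.X, self.gamma)
        n = len(y)
        m1 = K[:, y == 1].mean(axis=1)   # kernel-space class means
        m0 = K[:, y == 0].mean(axis=1)
        # within-class scatter in the kernel-induced feature space
        N = np.zeros((n, n))
        for c, m in ((1, m1), (0, m0)):
            Kc = K[:, y == c]
            N += (Kc - m[:, None]) @ (Kc - m[:, None]).T
        self.alpha = np.linalg.solve(N + self.reg * np.eye(n), m1 - m0)
        # decision threshold: midpoint of the projected class means
        proj = K @ self.alpha
        self.b = 0.5 * (proj[y == 1].mean() + proj[y == 0].mean())
        return self

    def predict(self, Xt):
        proj = rbf_kernel(np.asarray(Xt, float), self.X, self.gamma) @ self.alpha
        return (proj > self.b).astype(int)
```

In the fusion method the inputs would be the normalized feature-difference vectors of block pairs and the labels would indicate which source block is clearer.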
Compared with the prior art, the invention has the following effects:
1) Compared with traditional multi-focus fusion methods based on the wavelet transform and other novel multiresolution analyses, the invention has better fusion performance. The main reason is that in those methods the high- and low-frequency coefficients of the fused multiresolution pyramid are obtained by some fusion rule and usually do not come entirely from the clear regions of the source images, so the sharpness of the fused image lies between the clear and blurred regions of the sources. In the invention, by contrast, most image regions come entirely from the corresponding clear regions of the source images. In addition, because the invention does not need to decompose and reconstruct the entire image at multiple resolutions, its computational efficiency is better than that of fusion methods using undecimated multiresolution analysis.
2) Compared with traditional block-based multi-focus fusion methods, the invention has better visual quality. Common block-based methods often ignore the blocks that contain both clear and blurred regions of the source images, so the fused image easily shows jagged artifacts or visibly blurred regions. The invention instead processes the blocks near the boundary between the clear and blurred regions with the redundant wavelet transform under a dedicated fusion rule, improving fusion quality.
3) The invention combines the respective advantages of common multiresolution-based and block-based fusion methods, achieving a good trade-off between improving fusion quality and reducing computation.
Description of drawings
Fig. 1 is a schematic diagram of block-based multi-focus image fusion; the dashed curve marks the boundary between the clear and blurred regions of the source images.
Fig. 2 is a flow chart of the image fusion method proposed by the invention; the dashed curve marks the boundary between the clear and blurred regions of the source images.
Fig. 3 shows the simulation results of the invention for the source image pair Disk: 3(a) and 3(b) are the source images focused on different objects, with the region inside the white box used as the training set; 3(c) is the everywhere-in-focus reference image; 3(d) and 3(e) are the fused images obtained by the invention with three wavelet decomposition levels using the linear kernel and the radial basis kernel respectively; 3(f) and 3(g) are the fused images obtained by the invention with one decomposition level using the linear kernel and the radial basis kernel respectively; 3(h) and 3(i) are the fused images of the DWT-1 and DWT-2 methods; 3(j), 3(k) and 3(l) are the fused images of the RWT-1, RWT-2 and RWT-3 methods.
Fig. 4 shows the simulation results of the invention for the source image pair Lab, with subfigures 4(a) through 4(l) arranged as in Fig. 3.
Fig. 5 shows the simulation results of the invention for the source image pair Pepsi, with subfigures 5(a) through 5(l) arranged as in Fig. 3.
Embodiment
The invention is further illustrated below with reference to the accompanying drawings.
The invention uses the following three sharpness features.
1) Sum of modified Laplacian (SML)

ML(x,y) = |2I(x,y) - I(x-\mathrm{step},y) - I(x+\mathrm{step},y)| + |2I(x,y) - I(x,y-\mathrm{step}) - I(x,y+\mathrm{step})|

where step is the distance between coefficients, taken as 1 in the invention, and I(x,y) is the pixel gray level at (x,y) in the source image.

SML = \sum_{x=1}^{d} \sum_{y=1}^{d} [ML(x,y)]^2

The larger the SML value of a block, the clearer it is relative to the block at the same position in the other source image.
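Under the definitions above, the SML feature of a block can be computed as in the NumPy sketch below; the helper name `sml` and the choice of leaving ML at zero on the one-pixel border (where the neighbors are undefined) are ours.

```python
import numpy as np

def sml(block, step=1):
    """Sum of modified Laplacian of an image block; ML is zero on the border."""
    I = np.asarray(block, float)
    ml = np.zeros_like(I)
    c = I[step:-step, step:-step]
    ml[step:-step, step:-step] = (
        np.abs(2 * c - I[:-2 * step, step:-step] - I[2 * step:, step:-step]) +
        np.abs(2 * c - I[step:-step, :-2 * step] - I[step:-step, 2 * step:])
    )
    return float((ml ** 2).sum())
```

A flat block gives SML = 0, and an isolated bright pixel yields a large response, consistent with SML as a focus measure.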
2) Spatial frequency (SF)
The spatial frequency reflects the overall activity level of an image in the spatial domain. It comprises the row frequency (RF) and the column frequency (CF). For an image block of size d × d they are defined as

RF = \sqrt{\frac{1}{d \times d} \sum_{x=1}^{d} \sum_{y=2}^{d} [I(x,y) - I(x,y-1)]^2}

CF = \sqrt{\frac{1}{d \times d} \sum_{x=2}^{d} \sum_{y=1}^{d} [I(x,y) - I(x-1,y)]^2}

The overall spatial frequency is the root mean square of RF and CF:

SF = \sqrt{RF^2 + CF^2}

The larger the spatial frequency, the clearer the corresponding image block.
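The SF feature above translates directly into NumPy; the helper name `spatial_frequency` is ours.

```python
import numpy as np

def spatial_frequency(block):
    """Spatial frequency SF = sqrt(RF^2 + CF^2) of an image block."""
    I = np.asarray(block, float)
    d2 = I.size
    rf = np.sqrt(((I[:, 1:] - I[:, :-1]) ** 2).sum() / d2)  # row frequency
    cf = np.sqrt(((I[1:, :] - I[:-1, :]) ** 2).sum() / d2)  # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))
```

A constant block has SF = 0; vertical stripes activate only the row frequency.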
3) Energy of image gradient (EOG)
The energy of image gradient reflects the gradient information of the image and to some extent characterizes its focus and sharpness; the larger its value, the clearer the corresponding image block. For an image block of size d × d it is defined as

EOG = \sum_{x=1}^{d} \sum_{y=1}^{d} [G_x^2(x,y) + G_y^2(x,y)]

where G_x = I(x,y) - I(x-1,y) and G_y = I(x,y) - I(x,y-1).
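The EOG feature is a sum of squared finite differences; a NumPy sketch (helper name `eog` is ours):

```python
import numpy as np

def eog(block):
    """Energy of image gradient: sum of squared forward differences."""
    I = np.asarray(block, float)
    gx = I[1:, :] - I[:-1, :]   # vertical difference I(x,y) - I(x-1,y)
    gy = I[:, 1:] - I[:, :-1]   # horizontal difference I(x,y) - I(x,y-1)
    return float((gx ** 2).sum() + (gy ** 2).sum())
```
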
The invention first partitions the source images into blocks and computes the above three sharpness features of each block; it then takes part of the source images as a training set and obtains the parameters of a kernel Fisher classifier after training; the trained kernel Fisher classifier is used to produce a preliminary fused image; finally, the redundant wavelet transform and the spatial correlation coefficient are used to fuse the blocks lying at the junction of the clear and blurred regions of the source images, yielding the final fused image.
The concrete steps of the invention are as follows:
(1) Divide the source images A and B, each of size M × N, into image blocks of size d × d; the algorithm of the invention uses blocks of size 16 × 16. Define Sign(m, n) as the sign matrix of the image blocks of the final fused image F, where 0 ≤ m ≤ (M/d − 1) and 0 ≤ n ≤ (N/d − 1).
(2) Compute three sharpness features of each image block: the sum of modified Laplacian SML, the spatial frequency SF, and the energy of image gradient EOG. The feature vectors of corresponding source image blocks A_h and B_h are

V_{A_h} = (SML_{A_h}, SF_{A_h}, EOG_{A_h}) and V_{B_h} = (SML_{B_h}, SF_{B_h}, EOG_{B_h})

(3) Choose a suitable region of the source images as the training set and train a kernel Fisher classifier to judge which of the source image blocks A_h and B_h is clearer. The normalized difference vector V_{A_h} − V_{B_h} is taken as the input; the output is 1 when A_h is clearer than B_h and 0 otherwise.
(4) Use the kernel Fisher classifier obtained in the previous step to classify every pair of source image blocks. If block A_h is clearer than B_h, Sign(m, n) is set to 1, otherwise to 0.
(5) Apply a majority filter to the obtained sign matrix Sign(m, n) as a consistency check: each fused block should come from the same source image as the majority of blocks in the check window centered on it. The invention uses a 3 × 3 check window. From the checked sign matrix Sign(m, n), the preliminary fused image Z is obtained as

Z(i,j) = \begin{cases} A(i,j), & \mathrm{Sign}(m,n) = 1 \\ B(i,j), & \mathrm{Sign}(m,n) = 0 \end{cases}

where (m−1) × d + 1 ≤ i ≤ m × d and (n−1) × d + 1 ≤ j ≤ n × d.
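The consistency check and the assembly of the preliminary fused image Z can be sketched as below, assuming the sign matrix has already been produced by the classifier; the majority threshold of 5 out of 9 and the edge-replicating padding at the image border are our assumptions.

```python
import numpy as np

def majority_check(sign):
    """3x3 majority-filter consistency check on the binary sign matrix."""
    s = np.pad(sign, 1, mode='edge')
    out = np.zeros_like(sign)
    for m in range(sign.shape[0]):
        for n in range(sign.shape[1]):
            out[m, n] = 1 if s[m:m + 3, n:n + 3].sum() >= 5 else 0
    return out

def assemble(A, B, sign, d):
    """Build the preliminary fused image Z block by block from the sign matrix."""
    Z = np.empty_like(A)
    for m in range(sign.shape[0]):
        for n in range(sign.shape[1]):
            src = A if sign[m, n] == 1 else B
            Z[m * d:(m + 1) * d, n * d:(n + 1) * d] = \
                src[m * d:(m + 1) * d, n * d:(n + 1) * d]
    return Z
```

An isolated "0" surrounded by "1"s is flipped by `majority_check`, which is exactly the intended removal of inconsistent block labels.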
(6) Find the image blocks lying at the junction of the clear and blurred regions of the source images. Fig. 1 gives a schematic diagram of block-based multi-focus image fusion, in which "1" means the block comes from source image A, "0" means it comes from source image B, and the dashed curve marks the boundary between the clear and blurred regions of the source images. According to the consistency-checked sign matrix Sign(m, n), if a block comes from one source image while its 3 × 3 neighborhood contains a block from the other source image, the block is considered to lie at the junction of the clear and blurred regions.
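The junction test on the sign matrix can be sketched as follows (helper name `boundary_blocks` is ours; windows are clipped at the matrix border):

```python
import numpy as np

def boundary_blocks(sign):
    """Flag blocks whose 3x3 neighborhood in the sign matrix contains labels
    from both source images, i.e. blocks at the clear/blurred junction."""
    M, N = sign.shape
    flags = np.zeros((M, N), bool)
    for m in range(M):
        for n in range(N):
            win = sign[max(m - 1, 0):m + 2, max(n - 1, 0):n + 2]
            flags[m, n] = win.min() != win.max()   # both 0s and 1s present
    return flags
```
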
For this class of blocks the invention provides the following fusion strategy:
1. Decompose the source image blocks X_e and Y_e lying at the junction of the clear and blurred regions over L levels with the redundant wavelet transform (RWT):

X_e = \sum_{l=1}^{L} H_X^l + \sum_{l=1}^{L} V_X^l + \sum_{l=1}^{L} D_X^l + A_X^L

Y_e = \sum_{l=1}^{L} H_Y^l + \sum_{l=1}^{L} V_Y^l + \sum_{l=1}^{L} D_Y^l + A_Y^L

where H_X^l, V_X^l, D_X^l and H_Y^l, V_Y^l, D_Y^l are the level-l horizontal, vertical and diagonal high-frequency subimages of X_e and Y_e after RWT decomposition, and A_X^L and A_Y^L are the low-frequency subimages of X_e and Y_e after RWT decomposition. The RWT is adopted here for two main reasons: first, it is shift-invariant and effectively suppresses the Gibbs phenomenon easily caused by the decimated wavelet transform; second, because image blocks are usually small, the subimages produced by a decimated wavelet decomposition carry limited information, whereas the high- and low-frequency subimages of the redundant wavelet decomposition keep the same size as the source block.
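As an illustration of such a decomposition, the sketch below implements a one-level redundant (undecimated) Haar transform with periodic extension. The choice of Haar filters and the subband naming convention are our assumptions; the point is that it reproduces the additive reconstruction used in the formulas above: the block equals the sum of its subbands, all of which keep the block's size.

```python
import numpy as np

def rwt1(I):
    """One-level undecimated Haar decomposition with periodic extension.
    Returns (A, H, V, D) with I == A + H + V + D, all the same shape as I."""
    I = np.asarray(I, float)
    lo = lambda x, ax: (x + np.roll(x, -1, axis=ax)) / 2   # low-pass
    hi = lambda x, ax: (x - np.roll(x, -1, axis=ax)) / 2   # high-pass
    lr, hr = lo(I, 1), hi(I, 1)          # filter along rows
    A, V = lo(lr, 0), hi(lr, 0)          # then along columns
    H, D = lo(hr, 0), hi(hr, 0)
    return A, H, V, D
```

For multiple levels the filters would be upsampled "à trous" at each level; one level suffices to show the redundant structure.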
2. Among the high-frequency coefficients produced by the RWT decomposition, select the one with the larger absolute value as the fused high-frequency coefficient:

H_F^l(i,j) = \begin{cases} H_X^l(i,j), & |H_X^l(i,j)| \ge |H_Y^l(i,j)| \\ H_Y^l(i,j), & |H_X^l(i,j)| < |H_Y^l(i,j)| \end{cases}

V_F^l(i,j) = \begin{cases} V_X^l(i,j), & |V_X^l(i,j)| \ge |V_Y^l(i,j)| \\ V_Y^l(i,j), & |V_X^l(i,j)| < |V_Y^l(i,j)| \end{cases}

D_F^l(i,j) = \begin{cases} D_X^l(i,j), & |D_X^l(i,j)| \ge |D_Y^l(i,j)| \\ D_Y^l(i,j), & |D_X^l(i,j)| < |D_Y^l(i,j)| \end{cases}

where H_F^l(i,j), V_F^l(i,j) and D_F^l(i,j) are the coefficients at (i,j) of the level-l horizontal, vertical and diagonal high-frequency subimages of the fused block F_e.
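This choose-max rule is the same for all three detail subbands and reduces to one vectorized line; as in the formulas, ties go to the first source.

```python
import numpy as np

def fuse_highfreq(CX, CY):
    """Pick, coefficient-wise, the detail coefficient with the larger
    absolute value (ties go to the first source, as in the rule above)."""
    CX, CY = np.asarray(CX, float), np.asarray(CY, float)
    return np.where(np.abs(CX) >= np.abs(CY), CX, CY)
```
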
3. Among the low-frequency coefficients produced by the RWT decomposition, select the one with the larger modified-Laplacian (ML) value as the fused low-frequency coefficient:

A_F^L(i,j) = \begin{cases} A_X^L(i,j), & ML_X^L(i,j) \ge ML_Y^L(i,j) \\ A_Y^L(i,j), & ML_X^L(i,j) < ML_Y^L(i,j) \end{cases}

where ML_X^L(i,j) and ML_Y^L(i,j) are the ML values of the source image blocks X_e and Y_e at (i,j), and A_X^L(i,j), A_Y^L(i,j) and A_F^L(i,j) are the low-frequency coefficients of X_e, Y_e and the fused block F_e at (i,j).
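Step 3 can be sketched by computing a point-wise ML map for each low-frequency subimage and selecting coefficient-wise; setting ML to zero on the one-pixel border is our implementation choice.

```python
import numpy as np

def ml_map(A, step=1):
    """Point-wise modified-Laplacian value of a subimage (zero on the border)."""
    A = np.asarray(A, float)
    ml = np.zeros_like(A)
    c = A[step:-step, step:-step]
    ml[step:-step, step:-step] = (
        np.abs(2 * c - A[:-2 * step, step:-step] - A[2 * step:, step:-step]) +
        np.abs(2 * c - A[step:-step, :-2 * step] - A[step:-step, 2 * step:]))
    return ml

def fuse_lowfreq(AX, AY):
    """Select, per coefficient, the approximation value whose ML is larger."""
    return np.where(ml_map(AX) >= ml_map(AY),
                    np.asarray(AX, float), np.asarray(AY, float))
```
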
4. After a consistency check on the fused high- and low-frequency coefficients, obtain the fused block F_e by the inverse redundant wavelet transform:

F_e = \sum_{l=1}^{L} H_F^l + \sum_{l=1}^{L} V_F^l + \sum_{l=1}^{L} D_F^l + A_F^L
5. during multiple focussing image merges, be not that all are positioned at the clear and fuzzy region that clear and the source images piece fuzzy region intersection all comprise source images simultaneously.This part image block account for adopt that step of the present invention (6) judges be positioned at half of the clear image block sum with fuzzy region of source images, see Fig. 1.Image block F eReadability between the clear and fuzzy region of source images piece, but more approach the clear area.To this, the present invention utilizes this characteristic, with the further refinement of image block type, has provided a kind of method whether the source images piece comprises the clear and fuzzy region of source images simultaneously of differentiating, and is as follows:
Calculate source images piece X at first respectively e, Y eWith fused images piece F eSpace correlation coefficient (sCC), promptly
If Sign(m,n) = 1 ∧ ∃(m′,n′) ∈ Q: Sign(m′,n′) = 0, or Sign(m,n) = 0 ∧ ∃(m′,n′) ∈ Q: Sign(m′,n′) = 1 (i.e. the block lies at the junction of the clear and blurred regions of the source images), then
$$sCC(F'_e,X'_e)=\frac{\sum_{i=1}^{M}\sum_{j=1}^{N}[F'_e(i,j)-\overline{F'_e}][X'_e(i,j)-\overline{X'_e}]}{\sqrt{\left\{\sum_{i=1}^{M}\sum_{j=1}^{N}[F'_e(i,j)-\overline{F'_e}]^2\right\}\left\{\sum_{i=1}^{M}\sum_{j=1}^{N}[X'_e(i,j)-\overline{X'_e}]^2\right\}}}$$

$$sCC(F'_e,Y'_e)=\frac{\sum_{i=1}^{M}\sum_{j=1}^{N}[F'_e(i,j)-\overline{F'_e}][Y'_e(i,j)-\overline{Y'_e}]}{\sqrt{\left\{\sum_{i=1}^{M}\sum_{j=1}^{N}[F'_e(i,j)-\overline{F'_e}]^2\right\}\left\{\sum_{i=1}^{M}\sum_{j=1}^{N}[Y'_e(i,j)-\overline{Y'_e}]^2\right\}}}$$

$(m-1)\times d+1\le i\le m\times d,\quad (n-1)\times d+1\le j\le n\times d$
In the above formulas, X′_e, Y′_e and F′_e are the image blocks obtained by high-pass filtering the source image blocks X_e, Y_e and the fused image block F_e respectively; the overlined quantities are the mean pixel gray levels of the image blocks X′_e, Y′_e and F′_e; Q is a 3×3 neighborhood; and "∧" denotes the logical AND operation. The high-pass filtering here uses the Laplacian template, i.e.
$$\begin{bmatrix}-1 & -1 & -1\\ -1 & 8 & -1\\ -1 & -1 & -1\end{bmatrix}$$
From the nature of multi-focus image fusion it is known that, if the image block F_e obtained by wavelet-based fusion could be completely replaced by the corresponding clear source image block X_e, then the spatial detail of F_e is very close to that of X_e and the sCC between them is large, while the spatial detail of F_e differs considerably from that of Y_e and the sCC between them is small, so the value of sCC(F′_e,X′_e) − sCC(F′_e,Y′_e) is large. When the corresponding clear source image block is Y_e, the value of sCC(F′_e,Y′_e) − sCC(F′_e,X′_e) is large instead. It follows that when |sCC(F′_e,X′_e) − sCC(F′_e,Y′_e)| is large, F_e usually does not contain both clear and blurred regions of the source images simultaneously. Accordingly, the values |sCC(F′_e,X′_e) − sCC(F′_e,Y′_e)| of all such image blocks are computed according to Sign(m,n), and their median β is obtained by statistics. Such single-region blocks account for about half of all the blocks that step (6) of the invention judges to lie at the junction of the clear and blurred regions of the source images. Therefore, when |sCC(F′_e,X′_e) − sCC(F′_e,Y′_e)| is greater than β, the image block is considered not to contain both clear and blurred regions of the source images simultaneously; otherwise it is considered to contain both.
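The high-pass filtering and the sCC measure above can be sketched in Python. This is a hedged illustration: the helper names, the nested-list block representation and the zero-padded border handling are assumptions, not the patent's implementation.

```python
# Laplacian high-pass filtering with the 3x3 template [-1..8..-1] (zero
# padding at the border), followed by the Pearson-style spatial correlation
# coefficient (sCC) between two equally sized blocks.

def laplacian_highpass(img):
    """Apply the 3x3 Laplacian template with zero padding."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc = 8.0 * img[i][j]
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if di or dj:
                        ii, jj = i + di, j + dj
                        if 0 <= ii < h and 0 <= jj < w:
                            acc -= img[ii][jj]
            out[i][j] = acc
    return out

def scc(a, b):
    """Spatial correlation coefficient of two equally sized blocks."""
    fa = [v for row in a for v in row]
    fb = [v for row in b for v in row]
    ma, mb = sum(fa) / len(fa), sum(fb) / len(fb)
    num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    den = (sum((x - ma) ** 2 for x in fa) *
           sum((y - mb) ** 2 for y in fb)) ** 0.5
    return num / den if den else 0.0
```

The threshold β would then be the median (e.g. via `statistics.median`) of |scc(F′, X′) − scc(F′, Y′)| over all junction blocks.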
The following fusion strategy is applied to the image blocks located at the junction of the clear and blurred regions of the source images:
If Sign(m,n) = 1 ∧ ∃(m′,n′) ∈ Q: Sign(m′,n′) = 0, or Sign(m,n) = 0 ∧ ∃(m′,n′) ∈ Q: Sign(m′,n′) = 1, then

$$F(i,j)=\begin{cases}F_e(i,j), & |sCC(F'_e,X'_e)-sCC(F'_e,Y'_e)|<\beta\\ [F_e(i,j)+Z(i,j)]/2, & |sCC(F'_e,X'_e)-sCC(F'_e,Y'_e)|\ge\beta\end{cases}$$

$(m-1)\times d+1\le i\le m\times d,\quad (n-1)\times d+1\le j\le n\times d$
In the above formula, in order to obtain a better visual effect, for the image blocks whose |sCC(F′_e,X′_e) − sCC(F′_e,Y′_e)| value is greater than β, the final fused image block is obtained by averaging the multiresolution fused image block and the clear source image block identified by the kernel Fisher classifier. After the above operations have been carried out on all corresponding image blocks in the sign matrix Sign(m,n) that satisfy the above condition, the complete final fused image F is obtained. Fig. 2 shows the flow of the image fusion method proposed by the invention.
To verify the correctness and effectiveness of the invention, three pairs of 256-level grayscale source images are used in the experiments: Disk (480×640), Lab (480×640) and Pepsi (512×512). For each pair, an ideal everywhere-in-focus fused image, i.e. the reference image, is synthesized by cutting and pasting the clear parts of the two source images. The simulation experiments were run on a desktop computer with an Intel Core 2 Duo E7400 CPU (2.80 GHz) and 3 GB RAM, using Matlab 7.1 under Windows XP. The peak signal-to-noise ratio (PSNR) and the mutual information (MI) between the fused image and the reference image are adopted as the objective evaluation criteria; the larger the PSNR and MI values, the better the fusion quality.
The peak signal-to-noise ratio (PSNR) is defined as:
$$PSNR=10\log_{10}\frac{255^2}{\dfrac{1}{M\times N}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl[R(i,j)-F(i,j)\bigr]^2}$$
In the above formula, F is the fused image and R is the reference image, both of size M×N.
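A minimal Python rendering of this definition follows; the nested-list image representation and the 8-bit peak value of 255 are illustrative assumptions.

```python
import math

def psnr(ref, fused, peak=255.0):
    """PSNR between two equally sized images, following the definition above."""
    h, w = len(ref), len(ref[0])
    mse = sum((ref[i][j] - fused[i][j]) ** 2
              for i in range(h) for j in range(w)) / (h * w)
    # Identical images give zero MSE, i.e. infinite PSNR.
    return float('inf') if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)
```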
Mutual information (MI) is defined as:
$$MI_{FR}=\sum_{f=0}^{J-1}\sum_{r=0}^{J-1}p_{FR}(f,r)\log_2\frac{p_{FR}(f,r)}{p_F(f)\,p_R(r)}$$
In the above formula, p_FR(f,r) is the joint probability density of the fused image F and the reference image R, p_F(f) and p_R(r) are the probability densities of F and R respectively, and J is the number of gray levels of the images.
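In practice the probability densities are estimated from (joint) histograms. A hedged Python sketch, with the nested-list image representation and J = 256 gray levels as illustrative assumptions:

```python
import math

def mutual_information(f, r, levels=256):
    """MI of two equally sized integer-valued images via joint histogram."""
    h, w = len(f), len(f[0])
    n = h * w
    joint = {}
    pf = [0.0] * levels
    pr = [0.0] * levels
    for i in range(h):
        for j in range(w):
            a, b = f[i][j], r[i][j]
            joint[(a, b)] = joint.get((a, b), 0.0) + 1.0 / n
            pf[a] += 1.0 / n
            pr[b] += 1.0 / n
    # Only non-zero joint bins contribute to the double sum.
    return sum(p * math.log2(p / (pf[a] * pr[b]))
               for (a, b), p in joint.items())
```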
The simulation also applies commonly used fusion methods based on the decimated wavelet transform (DWT) and the redundant wavelet transform (RWT) to the multi-focus images: DWT-1 selects the high-frequency coefficient with the larger absolute value and averages the low-frequency coefficients; DWT-2 selects both high- and low-frequency coefficients by maximum local variance; RWT-1 selects the high-frequency coefficient with the larger absolute value and averages the low-frequency coefficients; RWT-2 selects both high- and low-frequency coefficients by maximum local variance; RWT-3 fuses both high- and low-frequency coefficients by the maximum local energy rule proposed in "An image fusion algorithm based on wavelet transform" (Acta Electronica Sinica, 2004, 32(5): 750-753). It is worth mentioning that whenever the wavelet transform is involved in the experiments, the "db8" wavelet basis is used and consistency verification is applied to all high- and low-frequency coefficients. In the experiments, the common fusion methods use three levels of wavelet decomposition, while the invention uses one level and three levels respectively.
The kernel Fisher classifier is trained only once in the experiments. A pair of regions containing 5×5 image blocks in the Disk source images is used to extract the training patterns; the whole training set has 50 training patterns, as shown in Fig. 3(a) and (b). Linear and radial basis kernel functions are used respectively in the kernel Fisher classification; the regularization parameter λ is set to 0.01, and the penalty parameter C of the one-dimensional linear support vector machine (SVM) is 100.
Table 1. Fusion performance of the invention
Table 1 gives the fusion results obtained by the invention with the linear kernel function and the radial basis kernel function at one and three levels of wavelet decomposition. Table 2 gives the results obtained by the common fusion methods. From the data of Tables 1 and 2, comparing the fusion results of the linear and radial basis kernel functions at the same number of decomposition levels: for Disk, the former fuses better than the latter; for Lab, the former has the higher PSNR but a slightly lower MI value; for Pepsi, the former's PSNR value is slightly lower but its MI value is higher. On the whole, the fusion results obtained with the two kernel functions differ little. Comparatively, the linear kernel function is more robust, since the fusion results change little for different values of the penalty parameter C, so the linear kernel function is recommended in practical applications. The fusion quality obtained by the invention with one level of wavelet decomposition is slightly below that with three levels, but is still slightly higher than that of the other common fusion methods. The data of Table 2 show that, among the common wavelet-based fusion methods using the same fusion rule, the redundant wavelet transform usually fuses better than the decimated wavelet transform, and among the common methods the RWT-3 method gives the best fusion results.
Table 2. Fusion performance of the common fusion algorithms
Table 3. Running time (seconds) of the invention and the common algorithms
Table 3 gives the running time of every algorithm in the simulation experiments. The data of Table 3 show that the redundant wavelet transform takes far longer than the other methods, and the more complex the fusion rule, the larger the computation. The running times of the invention with the linear and radial basis kernel functions differ little, and the computation with one-level wavelet decomposition is the smallest. With three-level wavelet decomposition, the computation of the invention is only slightly larger than that of the common methods based on the decimated wavelet transform and far smaller than that of the common methods based on the redundant wavelet transform. Therefore the one-level version of the invention can be chosen when real-time performance is demanded, and the three-level version when fusion quality is demanded.
Figs. 3-5 show the source image pairs Disk, Lab and Pepsi and their fusion results. As can be seen from Figs. 3-5, the fusion results obtained by the invention look very natural, without obvious blocking artifacts or ghosting. The fused images of the common methods show artifacts of various degrees and are not as clear overall as those of the invention, for example near the white book in the middle of Fig. 3, near the person's head in Fig. 4, and near the text in the upper-right corner of Fig. 5. Overall, the fusion performance of the invention is clearly better than that of the common fusion methods.
The fusion method of the invention can fuse two or more source images. When more than two source images are fused, two of them are fused first to obtain one fused image; this fused image is then taken as a new source image and fused with one of the remaining source images that has not yet participated in the fusion, giving a new fused image, and so on, until all source images have been fused.
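This sequential pairwise scheme is a left fold over the image list. A minimal sketch follows, in which `fuse_pair` is a hypothetical stand-in for the full two-image fusion method of the invention (here a simple pixelwise maximum for demonstration only):

```python
from functools import reduce

def fuse_pair(a, b):
    # Placeholder for the two-image fusion method; a pixelwise maximum is
    # used here purely so the folding structure can be demonstrated.
    return [[max(x, y) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def fuse_all(images):
    """Fuse a list of images pairwise, left to right."""
    return reduce(fuse_pair, images)
```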

Claims (4)

1. A multi-focus image fusion method using kernel Fisher classification and the redundant wavelet transform, characterized in that it comprises the steps of: first, partitioning the source images into image blocks and computing the sharpness features of each image block; then taking partial regions of the source images as training samples and obtaining the parameters of a kernel Fisher classifier after training; then obtaining a preliminary fused image with the trained kernel Fisher classifier; and finally, fusing the image blocks located at the junction of the clear and blurred regions of the source images by means of the redundant wavelet transform and the spatial correlation coefficient, obtaining the final fused image.
2. The multi-focus image fusion method using kernel Fisher classification and the redundant wavelet transform according to claim 1, characterized in that the specific fusion steps are:
(1) dividing the source images A and B of size M×N into image blocks of size d×d, and defining Sign(m,n) as the sign matrix corresponding to each image block of the final fused image F, where 0 ≤ m ≤ (M/d − 1) and 0 ≤ n ≤ (N/d − 1);
(2) computing three sharpness features of each image block: the sum-modified Laplacian energy (SML), the spatial frequency (SF) and the energy of image gradient (EOG);
(3) choosing suitable regions of the source images as the training set, training the kernel Fisher classifier to judge which of the source image blocks A_h and B_h is clearer, and obtaining the parameters of the kernel Fisher classifier after training;
(4) classifying all pairs of source image blocks with the kernel Fisher classifier obtained in the previous step: if the source image block A_h is clearer than B_h, the value of Sign(m,n) is 1, otherwise 0;
(5) performing consistency verification on the obtained sign matrix Sign(m,n) with a majority filter, and obtaining the preliminary fused image Z according to the verified sign matrix Sign(m,n), i.e.
$$Z(i,j)=\begin{cases}A(i,j), & Sign(m,n)=1\\ B(i,j), & Sign(m,n)=0\end{cases}$$
where (m−1)×d+1 ≤ i ≤ m×d and (n−1)×d+1 ≤ j ≤ n×d;
(6) fusing the image blocks of the preliminary fused image Z that correspond to the junction of the clear and blurred regions of the source images as follows:
1. decomposing the image blocks X_e and Y_e located at the junction of the clear and blurred regions of the source images into L levels with the redundant wavelet transform, i.e.
$$X_e=\sum_{l=1}^{L}H_X^l+\sum_{l=1}^{L}V_X^l+\sum_{l=1}^{L}D_X^l+A_X^L$$

$$Y_e=\sum_{l=1}^{L}H_Y^l+\sum_{l=1}^{L}V_Y^l+\sum_{l=1}^{L}D_Y^l+A_Y^L$$
where H_X^l, V_X^l, D_X^l and H_Y^l, V_Y^l, D_Y^l are the level-l horizontal, vertical and diagonal high-frequency subimages obtained by the redundant wavelet decomposition of the source image blocks X_e and Y_e respectively, and A_X^L and A_Y^L are the low-frequency subimages obtained by the redundant wavelet decomposition of X_e and Y_e respectively;
2. selecting, among the high-frequency coefficients obtained by the RWT decomposition, the one with the larger absolute value as the fused high-frequency coefficient, i.e.
$$H_F^l(i,j)=\begin{cases}H_X^l(i,j), & |H_X^l(i,j)|\ge|H_Y^l(i,j)|\\ H_Y^l(i,j), & |H_X^l(i,j)|<|H_Y^l(i,j)|\end{cases}$$

$$V_F^l(i,j)=\begin{cases}V_X^l(i,j), & |V_X^l(i,j)|\ge|V_Y^l(i,j)|\\ V_Y^l(i,j), & |V_X^l(i,j)|<|V_Y^l(i,j)|\end{cases}$$

$$D_F^l(i,j)=\begin{cases}D_X^l(i,j), & |D_X^l(i,j)|\ge|D_Y^l(i,j)|\\ D_Y^l(i,j), & |D_X^l(i,j)|<|D_Y^l(i,j)|\end{cases}$$
where H_F^l(i,j), V_F^l(i,j) and D_F^l(i,j) are the high-frequency coefficients at (i,j) of the fused image block F_e in the level-l horizontal, vertical and diagonal high-frequency subimages, respectively;
3. selecting, among the low-frequency coefficients obtained by the redundant wavelet decomposition, the one with the larger modified Laplacian energy value as the fused low-frequency coefficient, i.e.
$$A_F^L(i,j)=\begin{cases}A_X^L(i,j), & ML_X^L(i,j)\ge ML_Y^L(i,j)\\ A_Y^L(i,j), & ML_X^L(i,j)< ML_Y^L(i,j)\end{cases}$$
where ML_X^L(i,j) and ML_Y^L(i,j) are the ML values of the source image blocks X_e and Y_e at (i,j), and A_X^L(i,j), A_Y^L(i,j) and A_F^L(i,j) are the low-frequency coefficients at (i,j) of the source image blocks X_e, Y_e and the fused image block F_e, respectively;
4. after consistency verification of the fused high- and low-frequency coefficients, obtaining the fused image block F_e by the inverse redundant wavelet transform, i.e.
$$F_e=\sum_{l=1}^{L}H_F^l+\sum_{l=1}^{L}V_F^l+\sum_{l=1}^{L}D_F^l+A_F^L$$
replacing the image blocks of the preliminary fused image Z that correspond to the junction of the clear and blurred regions of the source images with the image blocks F_e yields the final fused image.
3. The multi-focus image fusion method using kernel Fisher classification and the redundant wavelet transform according to claim 2, characterized in that the following fusion method is further applied to the image blocks located at the junction of the clear and blurred regions of the source images:
First, the spatial correlation coefficients (sCC) between the image blocks X_e, Y_e located at the junction of the clear and blurred regions of the source images and the fused image block F_e are computed respectively, i.e.
If Sign(m,n) = 1 ∧ ∃(m′,n′) ∈ Q: Sign(m′,n′) = 0, or Sign(m,n) = 0 ∧ ∃(m′,n′) ∈ Q: Sign(m′,n′) = 1, then
$$sCC(F'_e,X'_e)=\frac{\sum_{i=1}^{M}\sum_{j=1}^{N}[F'_e(i,j)-\overline{F'_e}][X'_e(i,j)-\overline{X'_e}]}{\sqrt{\left\{\sum_{i=1}^{M}\sum_{j=1}^{N}[F'_e(i,j)-\overline{F'_e}]^2\right\}\left\{\sum_{i=1}^{M}\sum_{j=1}^{N}[X'_e(i,j)-\overline{X'_e}]^2\right\}}}$$

$$sCC(F'_e,Y'_e)=\frac{\sum_{i=1}^{M}\sum_{j=1}^{N}[F'_e(i,j)-\overline{F'_e}][Y'_e(i,j)-\overline{Y'_e}]}{\sqrt{\left\{\sum_{i=1}^{M}\sum_{j=1}^{N}[F'_e(i,j)-\overline{F'_e}]^2\right\}\left\{\sum_{i=1}^{M}\sum_{j=1}^{N}[Y'_e(i,j)-\overline{Y'_e}]^2\right\}}}$$

$(m-1)\times d+1\le i\le m\times d,\quad (n-1)\times d+1\le j\le n\times d$
In the above formulas, X′_e, Y′_e and F′_e are the image blocks obtained by high-pass filtering the source image blocks X_e, Y_e and the fused image block F_e respectively; the overlined quantities are the mean pixel gray levels of the image blocks X′_e, Y′_e and F′_e; Q is a 3×3 neighborhood; and "∧" denotes the logical AND operation.
The following fusion method is applied to the image blocks located at the junction of the clear and blurred regions of the source images:
If Sign(m,n) = 1 ∧ ∃(m′,n′) ∈ Q: Sign(m′,n′) = 0, or Sign(m,n) = 0 ∧ ∃(m′,n′) ∈ Q: Sign(m′,n′) = 1, then

$$F(i,j)=\begin{cases}F_e(i,j), & |sCC(F'_e,X'_e)-sCC(F'_e,Y'_e)|<\beta\\ [F_e(i,j)+Z(i,j)]/2, & |sCC(F'_e,X'_e)-sCC(F'_e,Y'_e)|\ge\beta\end{cases}$$

$(m-1)\times d+1\le i\le m\times d,\quad (n-1)\times d+1\le j\le n\times d$
wherein β is the median obtained by computing, according to Sign(m,n), the values |sCC(F′_e,X′_e) − sCC(F′_e,Y′_e)| of all such image blocks; after the above operations have been carried out on all corresponding image blocks in the sign matrix Sign(m,n) that satisfy the above condition, the complete final fused image F is obtained.
4. The multi-focus image fusion method using kernel Fisher classification and the redundant wavelet transform according to claim 2 or 3, characterized in that whether an image block of the preliminary fused image Z lies at the junction of the clear and blurred regions of the source images is determined as follows: according to the consistency-verified sign matrix Sign(m,n), if an image block of the preliminary fused image Z comes from one source image and its 3×3 neighborhood contains an image block from the other source image, the image block is regarded as lying at the junction of the clear and blurred regions of the source images.
CN2009101046321A 2009-08-14 2009-08-14 Multi-focusing image fusion method utilizing core Fisher classification and redundant wavelet transformation Expired - Fee Related CN101630405B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009101046321A CN101630405B (en) 2009-08-14 2009-08-14 Multi-focusing image fusion method utilizing core Fisher classification and redundant wavelet transformation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2009101046321A CN101630405B (en) 2009-08-14 2009-08-14 Multi-focusing image fusion method utilizing core Fisher classification and redundant wavelet transformation

Publications (2)

Publication Number Publication Date
CN101630405A true CN101630405A (en) 2010-01-20
CN101630405B CN101630405B (en) 2011-10-12

Family

ID=41575506

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009101046321A Expired - Fee Related CN101630405B (en) 2009-08-14 2009-08-14 Multi-focusing image fusion method utilizing core Fisher classification and redundant wavelet transformation

Country Status (1)

Country Link
CN (1) CN101630405B (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101887581A (en) * 2010-06-17 2010-11-17 东软集团股份有限公司 Image fusion method and device
CN101980290A (en) * 2010-10-29 2011-02-23 西安电子科技大学 Method for fusing multi-focus images in anti-noise environment
CN102542545A (en) * 2010-12-24 2012-07-04 方正国际软件(北京)有限公司 Multi-focal length photo fusion method and system and photographing device
CN102567977A (en) * 2011-12-31 2012-07-11 南京理工大学 Self-adaptive fusing method of infrared polarization image based on wavelets
CN102800080A (en) * 2011-05-23 2012-11-28 株式会社摩如富 Image identification device and image identification method
CN105245784A (en) * 2014-06-26 2016-01-13 深圳锐取信息技术股份有限公司 Shooting processing method and shooting processing device for projection region in multimedia classroom
CN105574820A (en) * 2015-12-04 2016-05-11 南京云石医疗科技有限公司 Deep learning-based adaptive ultrasound image enhancement method
CN106846287A (en) * 2017-01-13 2017-06-13 西京学院 A kind of multi-focus image fusing method based on biochemical ion exchange model
CN107194903A (en) * 2017-04-25 2017-09-22 阜阳师范学院 A kind of multi-focus image fusing method based on wavelet transformation
CN108665436A (en) * 2018-05-10 2018-10-16 湖北工业大学 A kind of multi-focus image fusing method and system based on gray average reference
CN108780571A (en) * 2015-12-31 2018-11-09 上海联影医疗科技有限公司 A kind of image processing method and system
CN109801248A (en) * 2018-12-18 2019-05-24 重庆邮电大学 One New Image fusion method based on non-lower sampling shear transformation
CN110031847A (en) * 2018-09-29 2019-07-19 浙江师范大学 The dynamic of wavelet transformation and support vector machines combination radar reflectivity is short to face Quantitative Precipitation estimating and measuring method
CN110136091A (en) * 2019-04-12 2019-08-16 深圳云天励飞技术有限公司 Image processing method and Related product
CN116665615A (en) * 2023-07-27 2023-08-29 深圳市安立信电子有限公司 Medical display control method, system, equipment and storage medium thereof

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5952957A (en) * 1998-05-01 1999-09-14 The United States Of America As Represented By The Secretary Of The Navy Wavelet transform of super-resolutions based on radar and infrared sensor fusion
US7054468B2 (en) * 2001-12-03 2006-05-30 Honda Motor Co., Ltd. Face recognition using kernel fisherfaces
CN1177298C (en) * 2002-09-19 2004-11-24 上海交通大学 Multiple focussing image fusion method based on block dividing
CN1286065C (en) * 2004-07-22 2006-11-22 上海交通大学 Image fusing method based on direction filter unit

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101887581A (en) * 2010-06-17 2010-11-17 东软集团股份有限公司 Image fusion method and device
CN101887581B (en) * 2010-06-17 2012-03-14 东软集团股份有限公司 Image fusion method and device
CN101980290A (en) * 2010-10-29 2011-02-23 西安电子科技大学 Method for fusing multi-focus images in anti-noise environment
CN101980290B (en) * 2010-10-29 2012-06-20 西安电子科技大学 Method for fusing multi-focus images in anti-noise environment
CN102542545A (en) * 2010-12-24 2012-07-04 方正国际软件(北京)有限公司 Multi-focal length photo fusion method and system and photographing device
CN102800080A (en) * 2011-05-23 2012-11-28 株式会社摩如富 Image identification device and image identification method
US8855368B2 (en) 2011-05-23 2014-10-07 Morpho, Inc. Image identification device, image identification method, and recording medium
CN102567977A (en) * 2011-12-31 2012-07-11 南京理工大学 Self-adaptive fusing method of infrared polarization image based on wavelets
CN102567977B (en) * 2011-12-31 2014-06-25 南京理工大学 Self-adaptive fusing method of infrared polarization image based on wavelets
CN105245784A (en) * 2014-06-26 2016-01-13 深圳锐取信息技术股份有限公司 Shooting processing method and shooting processing device for projection region in multimedia classroom
CN105574820A (en) * 2015-12-04 2016-05-11 南京云石医疗科技有限公司 Deep learning-based adaptive ultrasound image enhancement method
CN108780571A (en) * 2015-12-31 2018-11-09 上海联影医疗科技有限公司 A kind of image processing method and system
CN108780571B (en) * 2015-12-31 2022-05-31 上海联影医疗科技股份有限公司 Image processing method and system
US11880978B2 (en) 2015-12-31 2024-01-23 Shanghai United Imaging Healthcare Co., Ltd. Methods and systems for image processing
CN106846287A (en) * 2017-01-13 2017-06-13 西京学院 A kind of multi-focus image fusing method based on biochemical ion exchange model
CN107194903A (en) * 2017-04-25 2017-09-22 阜阳师范学院 A kind of multi-focus image fusing method based on wavelet transformation
CN108665436A (en) * 2018-05-10 2018-10-16 湖北工业大学 A kind of multi-focus image fusing method and system based on gray average reference
CN110031847A (en) * 2018-09-29 2019-07-19 浙江师范大学 The dynamic of wavelet transformation and support vector machines combination radar reflectivity is short to face Quantitative Precipitation estimating and measuring method
CN110031847B (en) * 2018-09-29 2022-08-23 浙江师范大学 Dynamic short-term quantitative rainfall estimation method combining wavelet transformation and support vector machine with radar reflectivity
CN109801248A (en) * 2018-12-18 2019-05-24 重庆邮电大学 One New Image fusion method based on non-lower sampling shear transformation
CN110136091A (en) * 2019-04-12 2019-08-16 深圳云天励飞技术有限公司 Image processing method and Related product
CN116665615A (en) * 2023-07-27 2023-08-29 深圳市安立信电子有限公司 Medical display control method, system, equipment and storage medium thereof
CN116665615B (en) * 2023-07-27 2023-11-14 深圳市安立信电子有限公司 Medical display control method, system, equipment and storage medium thereof

Also Published As

Publication number Publication date
CN101630405B (en) 2011-10-12

Similar Documents

Publication Publication Date Title
CN101630405B (en) Multi-focusing image fusion method utilizing core Fisher classification and redundant wavelet transformation
CN111008562B (en) Human-vehicle target detection method with feature map depth fusion
CN106846289B (en) A kind of infrared light intensity and polarization image fusion method
CN102063713B (en) Neighborhood normalized gradient and neighborhood standard deviation-based multi-focus image fusion method
CN101546428B (en) Image fusion of sequence infrared and visible light based on region segmentation
CN105719263B (en) Visible ray and infrared image fusion method based on NSCT domains bottom visual signature
CN109325550B (en) No-reference image quality evaluation method based on image entropy
CN106228528B (en) A kind of multi-focus image fusing method based on decision diagram and rarefaction representation
CN103279935B (en) Based on thermal remote sensing image super resolution ratio reconstruction method and the system of MAP algorithm
CN106339998A (en) Multi-focus image fusion method based on contrast pyramid transformation
CN107341786A (en) The infrared and visible light image fusion method that wavelet transformation represents with joint sparse
CN104408700A (en) Morphology and PCA (principal component analysis) based contourlet fusion method for infrared and visible light images
CN103186894B (en) A kind of multi-focus image fusing method of self-adaptation piecemeal
CN105894483B (en) A kind of multi-focus image fusing method based on multi-scale image analysis and block consistency checking
CN112733950A (en) Power equipment fault diagnosis method based on combination of image fusion and target detection
CN110097617B (en) Image fusion method based on convolutional neural network and significance weight
CN104616274A (en) Algorithm for fusing multi-focusing image based on salient region extraction
CN103235929B (en) Identification method and identification device on basis of hand vein images
CN109242834A (en) It is a kind of based on convolutional neural networks without reference stereo image quality evaluation method
CN114187520B (en) Building extraction model construction and application method
CN105893971A (en) Traffic signal lamp recognition method based on Gabor and sparse representation
CN103854265A (en) Novel multi-focus image fusion technology
CN104951800A (en) Resource exploitation-type area-oriented remote sensing image fusion method
Shrivastava et al. Bridging the semantic gap with human perception based features for scene categorization
Lin et al. Manifold learning via the principle bundle approach

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20111012

Termination date: 20170814