CN101231748A - Image fusion method based on singular value decomposition - Google Patents

Image fusion method based on singular value decomposition

Info

Publication number
CN101231748A
CN101231748A CNA2007101992748A CN200710199274A
Authority
CN
China
Prior art keywords
image
resolution layer
layer
images
width
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CNA2007101992748A
Other languages
Chinese (zh)
Inventor
梁继民
胡海虹
王静
赵恒
候彦宾
张毅
田捷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CNA2007101992748A priority Critical patent/CN101231748A/en
Publication of CN101231748A publication Critical patent/CN101231748A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses an image fusion method based on singular value decomposition (SVD). The method proceeds as follows: each image in a group of original infrared images and a group of visible-light images is preprocessed to a common mean and a common variance; SVD is used to divide each preprocessed original image into a low-resolution layer, a high-resolution layer and a super-high-resolution layer; according to the characteristics of each layer, the low-resolution layers are fused by weighted averaging with sharpening, the high-resolution layers are fused by gray-scale selection, by local-energy weighting or by a wavelet transform, and the super-high-resolution layers are discarded; the final fused image is reconstructed from the fused low-resolution layer and the fused high-resolution layer. Experimental results show that the fused images obtained with this method are highly similar to the original images and contain more edge and detail information. The method outperforms prior image fusion methods and can be used for accurate target recognition.

Description

Image fusion method based on singular value decomposition
Technical field
The invention belongs to the field of image processing, and specifically relates to image decomposition and image fusion methods for use in target recognition.
Background art
Image fusion is an advanced image processing technique that combines the information of several original images. Its purpose is to integrate the redundant and complementary information contained in the original images so as to enhance the reliability of the information in the image and to improve image understanding and recognition. Image fusion yields more accurate results and makes a system more practical. At the same time, a fused image is robust: fusion can, for example, increase confidence, reduce ambiguity, improve reliability and improve classification performance.
At present, image fusion is applied in digital image processing mainly for the following purposes:
1) image sharpening;
2) producing stereoscopic vision for stereophotogrammetry;
3) enhancing features that cannot be seen, or seen clearly, in a single-sensor image;
4) improving detection, classification, understanding and recognition performance, and obtaining additional image information;
5) detecting changes in a scene or target from image sequences taken at different times;
6) using images from other sensors to replace missing or faulty information in the image of a given sensor.
Image fusion was first applied in the military field, where its use continues to widen: large systems such as missile defense systems as well as small systems such as precision-guided missiles and autonomous shells all rely on it. In civilian use, image fusion is applied in fields such as remote sensing and intelligent robotics. In manufacturing it can be used for product inspection, material flaw detection, diagnosis of complex equipment and supervision of manufacturing processes; in medicine it can help doctors diagnose diseases more accurately; in image and information encryption, image fusion can also realize image hiding and digital-watermark embedding for digital images. As research on image fusion deepens, the technique will find still wider application.
Image fusion is generally considered to take place at three levels: pixel-level fusion, feature-level fusion and decision-level fusion. Pixel-level fusion is the lowest level and acts directly on the image pixels. It falls into two broad classes:
(1) Spatial-domain fusion methods
Spatial-domain fusion methods include weighted-average fusion, pixel gray-value selection fusion and fusion based on regional features. The basic principle of this class of methods is to apply no transform or decomposition to the original images, but to combine the corresponding pixels of the original images directly, after simple processing, into a new image. These methods are simple to implement and fast, and can give good fusion results in certain specific applications, but their shortcoming is that in most applications it is difficult to obtain a satisfactory fusion result.
(2) Frequency-domain fusion methods
Image fusion must follow the basic fusion principle: preferentially preserve the low-frequency information of the low-spatial-resolution image, and on that basis improve the spatial resolution of the fusion result as much as possible. Frequency-domain fusion methods separate the high- and low-frequency information of the original images effectively and treat different frequency bands differently, so they embody this basic principle more directly and give better fusion results. The methods usable for frequency-domain fusion are mainly those based on pyramid decomposition and those based on wavelets; the multiresolution data structures they build for the fused image share the same underlying mechanism, namely image fusion over multiple frequency bands.
Image fusion based on pyramid decomposition is a multiscale, multiresolution fusion method whose fusion process operates at different scales, different spatial resolutions and different decomposition levels. Pyramid-based fusion methods can mainly be divided into the following:
1) fusion based on Laplacian pyramid decomposition;
2) fusion based on ratio pyramid decomposition;
3) fusion based on contrast pyramid decomposition;
4) fusion based on gradient pyramid decomposition.
The purpose of applying a pyramid decomposition to an image is to decompose the original image onto different spatial frequency bands; the resulting pyramid structure is then used to apply different fusion rules to the different resolution layers with their different spatial resolutions.
Pyramid-based image fusion introduced multiscale, multiresolution fusion and greatly improved fusion quality. However, pyramid decomposition is redundant and its decomposition layers are correlated, so its shortcoming is that the total amount of data after decomposition is more than one third larger than the original data volume, and the decomposition has no directionality.
When fusing images with a wavelet method, each original image is first wavelet-transformed to establish the high-resolution and low-resolution layers of the image; each decomposition layer is then processed separately, with different fusion rules applicable to the different frequency components of each layer, finally yielding the fused wavelet coefficients; lastly the new wavelet coefficients are inverse-transformed, and the reconstructed image is the fused image. The wavelet decomposition is non-redundant, so the total amount of data after decomposition does not grow large; at the same time it is directional, so a fused image of better quality can be obtained.
Although multiscale, multiresolution image decomposition methods are now widely used, they still have limitations; for example, some edge information may be lost.
Summary of the invention
The object of the invention is to avoid the shortcomings of the prior art described above and to propose an image fusion method based on singular value decomposition that achieves good fusion performance while operating in the spatial domain.
To achieve this object, the invention performs image fusion through the following process:
(1) preprocess each image in a group of original infrared images and a group of visible-light images to a common mean and a common variance;
(2) use singular value decomposition to divide each preprocessed infrared and visible-light original image into three levels: a low-resolution layer, a high-resolution layer and a super-high-resolution layer;
(3) fuse the infrared image and the visible-light image level by level: fuse the low-resolution layers of the two corresponding images by weighted averaging with sharpening; fuse the high-resolution layers by gray-scale selection, by local-energy weighting, or by a wavelet transform; discard the super-high-resolution layers;
(4) reconstruct the final fused image from the fused low-resolution layer and the fused high-resolution layer as
$$f = f_L + f_H$$
where $f_L$ is the low-resolution layer and $f_H$ the high-resolution layer of the fused image.
The preprocessing of the original images to a common mean and variance uses the following rule:
$$g(i,j) = \big(g_0(i,j) - \mu_0\big)\,\frac{\sigma}{\sigma_0} + \mu$$
where $g_0$ is the image before preprocessing, $\mu_0$ and $\sigma_0$ are the mean and standard deviation of $g_0$, and $\mu$ and $\sigma$ are the mean and standard deviation of the preprocessed image $g$.
The weighted-average-with-sharpening fusion of the low-resolution layers is carried out as
$$f_L = \alpha\,\big[g_1^L + k\,g_2^L\big] + \beta\,\big|g_1^L - k\,g_2^L\big|$$
where $f_L$ is the fused low-resolution layer, $g_1^L$ and $g_2^L$ are the low-resolution layers of the two original images, and $\alpha$, $\beta$ and $k$ are constant coefficients.
The gray-scale-selection fusion of the high-resolution layers selects, from the high-resolution layers $g_1^H$ and $g_2^H$ of the two original images, the pixel value of larger magnitude as the new high-resolution layer $f_H$, that is:
$$f_H(i,j) = \max\big\{\,|g_1^H(i,j)|,\ |g_2^H(i,j)|\,\big\}$$
where $g_1^H(i,j)$ and $g_2^H(i,j)$ are the pixel values of the high-resolution layers of the two images to be fused, and $f_H(i,j)$ is the selected pixel value used to form the new high-resolution layer.
Because the invention layers images by singular values according to image energy, it performs better than the layering used in existing image fusion and can be applied to multispectral image fusion. Experimental results show that the fused images obtained with this method have high similarity to the original images and contain more edge and detail information, improving the accuracy of target recognition.
Description of drawings
Fig. 1 is the flow chart of the invention;
Fig. 2 is a schematic diagram of the layering of the invention, where:
(a) original image,
(b) low-resolution layer image,
(c) high-resolution layer image,
(d) super-high-resolution layer image;
Fig. 3 shows images fused with different fusion methods, where:
(a) original infrared image,
(b) original visible-light image,
(c) image fused by weighted averaging with sharpening,
(d) image fused by Laplacian pyramid decomposition,
(e) image fused by local-energy-based wavelet fusion,
(f) image fused by the SVD gray-scale selection of the invention,
(g) image fused by the SVD local-energy method of the invention,
(h) image fused by the SVD wavelet transform of the invention.
Embodiments
Referring to Fig. 1, the detailed process of the invention comprises:
1. Preprocess the original visible-light and infrared images
In multimodal image fusion, the illumination intensities of the input images may differ considerably, which affects the stability of the designed fusion strategy. To reduce the adverse effect of illumination differences, all input images are preprocessed to a common mean and variance before the SVD layering. After preprocessing, a standard visible-light gray image and a standard infrared gray image are obtained.
The preprocessing is carried out as
$$g(i,j) = \big(g_0(i,j) - \mu_0\big)\,\frac{\sigma}{\sigma_0} + \mu \qquad (1)$$
where $g_0$ is the image before preprocessing, $\mu_0$ and $\sigma_0$ are the mean and standard deviation of $g_0$, and $\mu$ and $\sigma$ are the mean and standard deviation of the preprocessed image $g$.
If $g_1 \in R^{m \times n}$ and $g_2 \in R^{m \times n}$ are two images of different spectra and both have been preprocessed to mean $\mu$ and variance $\sigma^2$, then $\|g_1\| = \|g_2\|$; that is, images preprocessed to the same mean and variance have equal image energy.
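For illustration, a minimal numpy sketch of this preprocessing step follows; the function name and the default target statistics are assumptions chosen for the example, not values given in the patent.

```python
import numpy as np

def normalize(img, mu=128.0, sigma=40.0):
    """Map an image to a prescribed mean and standard deviation, eq. (1).

    `mu` and `sigma` are the target statistics; their default values here
    are illustrative assumptions, not values from the patent.
    """
    img = img.astype(np.float64)
    mu0, sigma0 = img.mean(), img.std()
    return (img - mu0) * (sigma / sigma0) + mu
```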
2. SVD-based image layering by image energy
(1) Principle of SVD image layering
Singular value decomposition (SVD) is widely used in data compression, signal processing, pattern recognition and many other areas. The general SVD formula is
$$A = U S V^T = \sum_{i=1}^{r} \lambda_i^{1/2}\,\mu_i v_i^T \qquad (2)$$
where $U = [\mu_1, \mu_2, \ldots, \mu_m] \in R^{m \times m}$ and $V = [v_1, v_2, \ldots, v_n] \in R^{n \times n}$ are orthogonal matrices, and $S = \mathrm{diag}\big(\lambda_1^{1/2}, \lambda_2^{1/2}, \ldots, \lambda_r^{1/2}\big)$ is a diagonal matrix with $\lambda_1^{1/2} \ge \lambda_2^{1/2} \ge \cdots \ge \lambda_r^{1/2} \ge 0$. The vectors $\mu_i$ are the eigenvectors of $AA^T$, the vectors $v_i$ are the eigenvectors of $A^T A$, and the expansion coefficients $\lambda_i$ are the eigenvalues of the matrix.
Any gray image can be regarded as a two-dimensional matrix, so any gray image can be processed by SVD layering. Formula (2) is the basis of the SVD layering method; it shows that any image can be decomposed into $r$ layers, where the norm of the $i$-th layer is
$$\big\|\lambda_i^{1/2}\mu_i v_i^T\big\| = \lambda_i^{1/2}\big\|\mu_i v_i^T\big\| = \lambda_i^{1/2}\sqrt{\operatorname{tr}\!\big[(\mu_i v_i^T)^T(\mu_i v_i^T)\big]} = \lambda_i^{1/2}\sqrt{\operatorname{tr}\!\big[v_i \mu_i^T \mu_i v_i^T\big]} = \lambda_i^{1/2}\sqrt{\operatorname{tr}\!\big[v_i v_i^T\big]} = \lambda_i^{1/2}\sqrt{v_i^T v_i} = \lambda_i^{1/2}. \qquad (3)$$
Therefore the following holds:
$$\big\|\lambda_i^{1/2}\mu_i v_i^T\big\| = \lambda_i^{1/2} \qquad (4)$$
The squared norm of an image matrix can be regarded as the energy of the image, so formula (4) shows that the energy of the $i$-th layer equals its corresponding eigenvalue $\lambda_i$. Since the eigenvalues are arranged in decreasing order, the layers with small $i$ concentrate most of the energy of the image.
(2) Levels of the SVD image
The SVD decomposition yields $r$ layers, none of which individually carries obvious spatial structure, which is unfavorable for fusion design. The $r$ layers are therefore regrouped into a small number of levels; reconstruction into 3 levels is taken as the example and described in detail.
The original image $g$ is decomposed into the following three levels:
$$g = g^L + g^H + g^N \qquad (5)$$
where
$$g = \sum_{i=1}^{r} \lambda_i^{1/2}\mu_i v_i^T,$$
$$g^L = \sum_{i=I_l}^{I_h-1} \lambda_i^{1/2}\mu_i v_i^T \quad \text{is the low-resolution layer of the image;}$$
$$g^H = \sum_{i=I_h}^{I_n-1} \lambda_i^{1/2}\mu_i v_i^T \quad \text{is the high-resolution layer of the image;}$$
$$g^N = \sum_{i=I_n}^{r} \lambda_i^{1/2}\mu_i v_i^T \quad \text{is the super-high-resolution layer of the image.}$$
Let $I_l = 1$ and $I_l \le I_h \le I_n \le r$; then the following necessarily holds:
$$\|g^L\|^2 = \sum_{i=I_l}^{I_h-1}\lambda_i, \qquad \|g^H\|^2 = \sum_{i=I_h}^{I_n-1}\lambda_i, \qquad \|g^N\|^2 = \sum_{i=I_n}^{r}\lambda_i. \qquad (6)$$
By choosing the index $I_h$ separating the low-resolution layer from the high-resolution layer and the index $I_n$ separating the high-resolution layer from the super-high-resolution layer, a gray image can be decomposed into three levels of different energy. Because the eigenvalues in formula (5) are sorted in descending order, $g^L$ is formed from the layers with large eigenvalues, $g^H$ from the layers with medium eigenvalues and $g^N$ from the layers with small eigenvalues. The low-resolution layer $g^L$ therefore occupies a large percentage of the total image energy, the high-resolution layer $g^H$ a medium percentage and the super-high-resolution layer $g^N$ a small percentage. As shown in Fig. 2, the percentages of the total image energy held by the three levels are: 99% in the low-resolution layer, 0.99% in the high-resolution layer and 0.01% in the super-high-resolution layer.
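A minimal numpy sketch of this energy-based three-level split follows. The cut points $I_h$ and $I_n$ are chosen here by cumulative singular-value energy so that the levels hold roughly 99%, 0.99% and 0.01% of the total energy, as in Fig. 2; this particular search for the cut points is an illustrative implementation choice, not prescribed by the patent.

```python
import numpy as np

def svd_layers(g, e_low=0.99, e_high=0.0099):
    """Split a gray image into low-, high- and super-high-resolution
    layers by cumulative singular-value energy, eqs. (2)-(6)."""
    U, s, Vt = np.linalg.svd(g.astype(np.float64), full_matrices=False)
    energy = s ** 2                                 # layer energies lambda_i
    frac = np.cumsum(energy) / energy.sum()
    I_h = int(np.searchsorted(frac, e_low)) + 1     # first high-res layer index
    I_n = int(np.searchsorted(frac, e_low + e_high)) + 1  # first super-high-res index

    def partial(a, b):
        """Sum of the rank-1 layers with indices a .. b-1."""
        return (U[:, a:b] * s[a:b]) @ Vt[a:b, :]

    return partial(0, I_h), partial(I_h, I_n), partial(I_n, len(s))
```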
3. Process the different levels with different fusion rules
After the two original images $g_1$ and $g_2$ are SVD-layered by energy, their levels are denoted $g_1^L$, $g_1^H$, $g_1^N$ and $g_2^L$, $g_2^H$, $g_2^N$ respectively. The corresponding low-resolution, high-resolution and super-high-resolution layers of the two images then have approximately equal energy.
(1) Processing the super-high-resolution layers
The super-high-resolution layer can usually be regarded approximately as image noise, so this layer is normally discarded in the fusion process.
(2) Processing the high-resolution layers
For the fusion of the high-resolution layers, the invention proposes three fusion modes:
A. SVD gray-scale selection fusion
For the high-resolution layers of the two SVD-decomposed images, the pixel value of larger magnitude at each corresponding pixel is selected, and these pixel values form the new high-resolution layer. The concrete formula is
$$f_H(i,j) = \max\big\{\,|g_1^H(i,j)|,\ |g_2^H(i,j)|\,\big\} \qquad (7)$$
where $g_1^H(i,j)$ and $g_2^H(i,j)$ are the pixel values of the high-resolution layers of the two images to be fused, and $f_H(i,j)$ is the selected pixel value used to form the new high-resolution layer.
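A one-line numpy sketch of this rule follows. Formula (7) is read here as keeping the signed coefficient of larger magnitude, since the high-resolution layers contain signed values; that reading is an interpretation, not stated explicitly in the patent.

```python
import numpy as np

def fuse_gray_select(g1_H, g2_H):
    """Per-pixel selection of the larger-magnitude value, eq. (7).
    Keeps the signed coefficient whose absolute value is larger."""
    return np.where(np.abs(g1_H) >= np.abs(g2_H), g1_H, g2_H)
```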
B. SVD local-energy selection fusion
The high-resolution layers of the two SVD-decomposed images are fused as
$$f_H = W\,g_1^H + (1 - W)\,g_2^H \qquad (8)$$
where $f_H$ is the new high-resolution layer and $g_1^H$ and $g_2^H$ are the high-resolution layers of the two images to be fused. The weight $W$ is determined by (9)-(12):
when $|M_{AB}| \ge a$: if $E_A \ge E_B$, then
$$W = \frac{1}{2} + \frac{1}{2}\left(\frac{1 - M_{AB}}{1 - a}\right), \qquad (9)$$
and if $E_A < E_B$, then
$$W = \frac{1}{2} - \frac{1}{2}\left(\frac{1 - M_{AB}}{1 - a}\right); \qquad (10)$$
when $|M_{AB}| < a$: if $E_A \ge E_B$, then $W = 1$ (11), and if $E_A < E_B$, then $W = 0$ (12).
In these formulas, $E$ is the energy of a region, computed as
$$E(m,n) = \sum_{m',n'} p^2(m + m', n + n') \qquad (13)$$
where $p(m,n)$ is the pixel value at position $(m,n)$ and $(m',n')$ ranges over a window of size $m' \times n'$ centered at $p(m,n)$; and $M_{AB}$ is the matching degree of the two images over a region, computed as
$$M_{AB}(m,n) = \frac{2}{E_A + E_B} \sum_{m',n'} p_A(m + m', n + n')\, p_B(m + m', n + n'). \qquad (14)$$
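A numpy/scipy sketch of this local-energy rule follows. The window size and the match threshold `a` are illustrative choices (the patent leaves them as parameters), and `uniform_filter` computes windowed means rather than sums, which differs from eqs. (13)-(14) only by a constant factor that cancels in the energy comparison and in the ratio $M_{AB}$.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_local_energy(g1_H, g2_H, a=0.75, win=3):
    """Local-energy weighted fusion of two high-resolution layers, eqs. (8)-(14)."""
    g1_H = np.asarray(g1_H, dtype=np.float64)
    g2_H = np.asarray(g2_H, dtype=np.float64)
    E_A = uniform_filter(g1_H ** 2, size=win)               # regional energy, eq. (13)
    E_B = uniform_filter(g2_H ** 2, size=win)
    # matching degree, eq. (14); small epsilon guards against division by zero
    M = 2.0 * uniform_filter(g1_H * g2_H, size=win) / (E_A + E_B + 1e-12)
    W_hi = 0.5 + 0.5 * (1.0 - M) / (1.0 - a)                # eq. (9)
    W = np.where(np.abs(M) >= a,
                 np.where(E_A >= E_B, W_hi, 1.0 - W_hi),    # eqs. (9)-(10)
                 np.where(E_A >= E_B, 1.0, 0.0))            # eqs. (11)-(12)
    return W * g1_H + (1.0 - W) * g2_H                      # eq. (8)
```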
C. Wavelet-transform fusion
First, apply a wavelet decomposition to the high-resolution layers of the two images.
Then, take the mean of the two groups of low-frequency wavelet coefficients to obtain the new low-frequency wavelet coefficients, and for the two groups of high-frequency wavelet coefficients, take at each position the coefficient of larger absolute value to form the new high-frequency wavelet coefficients.
Finally, reconstruct from the new wavelet coefficients obtained in the previous step to produce the new high-resolution layer $f_H$ of the fused image.
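A sketch of these three steps using PyWavelets follows; the wavelet family and decomposition level are illustrative assumptions, since the patent does not fix them.

```python
import numpy as np
import pywt

def fuse_wavelet(g1_H, g2_H, wavelet="db2", level=2):
    """Wavelet fusion of two high-resolution layers: mean of the
    low-frequency coefficients, larger-magnitude selection among the
    high-frequency coefficients, then inverse wavelet transform."""
    c1 = pywt.wavedec2(g1_H, wavelet, level=level)
    c2 = pywt.wavedec2(g2_H, wavelet, level=level)
    fused = [(c1[0] + c2[0]) / 2.0]                  # new low-frequency coefficients
    for d1, d2 in zip(c1[1:], c2[1:]):               # (cH, cV, cD) per level
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(d1, d2)))
    rec = pywt.waverec2(fused, wavelet)
    return rec[:g1_H.shape[0], :g1_H.shape[1]]       # trim possible padding
```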
(3) Processing the low-resolution layers
The low-resolution layers are processed with a weighted-average-with-sharpening strategy, described by
$$f_L = \alpha\,\big[g_1^L + k\,g_2^L\big] + \beta\,\big|g_1^L - k\,g_2^L\big| \qquad (15)$$
where $f_L$ is the newly formed low-resolution layer, $g_1^L$ and $g_2^L$ are the low-resolution layers of the two images to be fused, $\alpha$ and $\beta$ are constant coefficients set to $\alpha = 0.8$ and $\beta = 0.05$, and $k$ is a constant coefficient whose value depends on the mean of the local variance ratios of the two images:
$$k = \begin{cases} 1 & \text{mlcr} \ge 0.8 \\ 2 & 0.6 \le \text{mlcr} < 0.8 \\ 3 & \text{otherwise} \end{cases} \qquad (16)$$
where mlcr denotes the mean of the local variance ratios of the two images.
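A numpy sketch of this rule follows, using the $\alpha$ and $\beta$ values given above; the piecewise choice of $k$ implements eq. (16).

```python
import numpy as np

def fuse_low(g1_L, g2_L, mlcr, alpha=0.8, beta=0.05):
    """Weighted-average-with-sharpening fusion of two low-resolution
    layers, eqs. (15)-(16); `mlcr` is the mean local variance ratio."""
    if mlcr >= 0.8:
        k = 1
    elif mlcr >= 0.6:
        k = 2
    else:
        k = 3
    return alpha * (g1_L + k * g2_L) + beta * np.abs(g1_L - k * g2_L)
```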
4. Form the fused image
Add the new low-resolution layer $f_L$ and the new high-resolution layer $f_H$ obtained in the previous steps to obtain the final fused image:
$$f = f_L + f_H \qquad (17)$$
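Pulling the sketches above together, an illustrative end-to-end run on two registered gray images `ir` and `vis` (hypothetical variable names) might look as follows; the `mlcr` value is a placeholder, since its computation from the local variance ratios is not spelled out in the patent.

```python
ir_n, vis_n = normalize(ir), normalize(vis)       # step 1: common mean/variance
ir_L, ir_H, _ = svd_layers(ir_n)                  # step 2: SVD layering;
vis_L, vis_H, _ = svd_layers(vis_n)               #   super-high-res layers discarded
f_H = fuse_gray_select(ir_H, vis_H)               # step 3: or fuse_local_energy(...)
f_L = fuse_low(ir_L, vis_L, mlcr=0.7)             #         or fuse_wavelet(...)
f = f_L + f_H                                     # step 4: eq. (17)
```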
The effect of the invention is further illustrated by the following simulation experiment:
The experiment adopted two objective fusion evaluation methods that require no reference fused image. The first is an objective evaluation method based on image saliency features; it defines three indices: the index $Q$, which characterizes the similarity between the fused image and the original images, the weighted index $Q_W$ and the edge-similarity index $Q_E$, the latter two being improvements of $Q$. The second evaluation method uses the index $Q_P$, which is weighted by image edge information. Under both methods, larger values of $Q_W$, $Q_E$ and $Q_P$ indicate that the fused image contains more detail information.
The experiment used 16 groups of infrared and visible-light images as test data. The 16 groups were fused with the weighted-average fusion method, the Laplacian-pyramid fusion method, the local-energy-based wavelet fusion method, the SVD gray-scale selection fusion method, the SVD local-energy fusion method and the SVD wavelet-transform fusion method; the results are shown in Fig. 3 and Table 1.
Referring to Fig. 3, the images depict a battlefield scene, where Fig. 3(a) is the original visible-light image and Fig. 3(b) is the original infrared image. Because the light is dim, Fig. 3(a) shows little information while Fig. 3(b) contains much more; this shows that under dim light an infrared image is more effective than a visible-light image, although in the gun smoke of Fig. 3(b) some information still cannot be seen clearly.
Fusing the original visible-light image 3(a) and the original infrared image 3(b) with the weighted-average fusion method, the Laplacian-pyramid fusion method and the wavelet-transform fusion method yields images 3(c), 3(d) and 3(e) respectively. Comparing Figs. 3(c), 3(d) and 3(e) with image 3(a), the fused images contain much more information, such as the thin gun smoke and the distant mountain peaks. Comparing them with Fig. 3(b), although they contain much information, the gun smoke in the fused images is fainter than in image 3(b) and the mountain peaks are blurrier.
Fusing the original visible-light image 3(a) and the original infrared image 3(b) with the three methods proposed by the invention yields the three images 3(f), 3(g) and 3(h). Comparing images 3(f), 3(g) and 3(h) with the original images 3(a) and 3(b) shows that the images fused with the methods of the invention not only contain much more detail information than images 3(a) and 3(b), but are also clearer than the fused images obtained with the other fusion methods, achieving a good fusion effect.
Table 1. Average evaluation indices for the 16 groups of multispectral image fusion results

Fusion method                      Q_W      Q_E      Q_P
Weighted-average fusion            0.6290   0.5949   0.2998
Laplacian-pyramid fusion           0.6378   0.6332   0.3915
Wavelet-transform fusion           0.5930   0.5752   0.2722
SVD gray-scale selection fusion    0.7195   0.6846   0.4501
SVD local-energy fusion            0.7157   0.6789   0.4425
SVD wavelet-transform fusion       0.7199   0.6852   0.4539

Table 1 shows the mean values of the evaluation indices of the fused images. The $Q_W$ index is based on local image saliency; the "saliency feature" used in the implementation is the local variance of the image. The $Q_E$ and $Q_P$ indices are both based on image edge information. The larger the values of these three indices, the richer the information contained and the better the fused image. The data in Table 1 show that all three indices of the images fused with the SVD-based methods proposed by the invention are clearly larger than those of the images fused with the existing weighted-average, Laplacian-pyramid and wavelet-transform fusion methods. Table 1 thus shows that the fused images obtained with the proposed method contain the most image information: they not only have good visual quality but also contain rich information, achieving a good fusion effect.

Claims (5)

1. An image fusion method based on singular value decomposition, comprising the following process:
(1) preprocessing each image in a group of original infrared images and a group of visible-light images to a common mean and a common variance;
(2) using singular value decomposition to divide the preprocessed infrared and visible-light original images into three levels: a low-resolution layer, a high-resolution layer and a super-high-resolution layer;
(3) fusing the infrared image and the visible-light image level by level: fusing the low-resolution layers of the two corresponding images by weighted averaging with sharpening; fusing the high-resolution layers by gray-scale selection, by local-energy weighting, or by a wavelet transform; discarding the super-high-resolution layers;
(4) reconstructing the final fused image from the fused low-resolution layer and the fused high-resolution layer as
$$f = f_L + f_H$$
where $f_L$ is the low-resolution layer and $f_H$ the high-resolution layer of the fused image.
2. The image fusion method according to claim 1, characterized in that the preprocessing of the original images to a common mean and variance in step (1) uses the following rule:
$$g(i,j) = \big(g_0(i,j) - \mu_0\big)\,\frac{\sigma}{\sigma_0} + \mu$$
where $g_0$ is the image before preprocessing, $\mu_0$ and $\sigma_0$ are the mean and standard deviation of $g_0$, and $\mu$ and $\sigma$ are the mean and standard deviation of the preprocessed image $g$.
3. The image fusion method according to claim 1, characterized in that the weighted-average-with-sharpening fusion of the low-resolution layers in step (3) is carried out as
$$f_L = \alpha\,\big[g_1^L + k\,g_2^L\big] + \beta\,\big|g_1^L - k\,g_2^L\big|$$
where $f_L$ is the fused low-resolution layer, $g_1^L$ and $g_2^L$ are the low-resolution layers of the two original images, and $\alpha$, $\beta$ and $k$ are constant coefficients.
4. The image fusion method according to claim 1, characterized in that the gray-scale-selection fusion of the high-resolution layers in step (3) selects, from the high-resolution layers $g_1^H$ and $g_2^H$ of the two original images, the pixel value of larger magnitude as the new high-resolution layer $f_H$, that is:
$$f_H(i,j) = \max\big\{\,|g_1^H(i,j)|,\ |g_2^H(i,j)|\,\big\}$$
where $g_1^H(i,j)$ and $g_2^H(i,j)$ are the pixel values of the high-resolution layers of the two images to be fused, and $f_H(i,j)$ is the selected pixel value used to form the new high-resolution layer.
5. The image fusion method according to claim 1, characterized in that the wavelet-transform fusion of the high-resolution layers in step (3) proceeds as follows:
first, apply a wavelet decomposition to the high-resolution layers of the two images;
then, take the mean of the two groups of low-frequency wavelet coefficients to obtain the new low-frequency wavelet coefficients, and for the two groups of high-frequency wavelet coefficients after decomposition, take at each position the coefficient of larger absolute value to form the new high-frequency wavelet coefficients;
finally, reconstruct from the new low-frequency and high-frequency wavelet coefficients to produce the new high-resolution layer $f_H$ of the fused image.
CNA2007101992748A 2007-12-18 2007-12-18 Image fusion method based on singular value decomposition Pending CN101231748A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNA2007101992748A CN101231748A (en) 2007-12-18 2007-12-18 Image fusion method based on singular value decomposition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNA2007101992748A CN101231748A (en) 2007-12-18 2007-12-18 Image fusion method based on singular value decomposition

Publications (1)

Publication Number Publication Date
CN101231748A true CN101231748A (en) 2008-07-30

Family

ID=39898193

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2007101992748A Pending CN101231748A (en) 2007-12-18 2007-12-18 Image anastomosing method based on singular value decomposition

Country Status (1)

Country Link
CN (1) CN101231748A (en)


Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101924869B (en) * 2009-06-11 2012-09-26 联咏科技股份有限公司 Image processing circuit and method
CN101846751B (en) * 2010-05-14 2012-11-14 中国科学院上海技术物理研究所 Real-time image fusion system and method for detecting concealed weapons
CN101846751A (en) * 2010-05-14 2010-09-29 中国科学院上海技术物理研究所 Real-time image fusion system and method for detecting concealed weapons
CN101872473A (en) * 2010-06-25 2010-10-27 清华大学 Multiscale image natural color fusion method and device based on over-segmentation and optimization
CN101916435A (en) * 2010-08-30 2010-12-15 武汉大学 Method for fusing multi-scale spectrum projection remote sensing images
CN101964111A (en) * 2010-09-27 2011-02-02 山东大学 Method for improving sight tracking accuracy based on super-resolution
CN104995910B (en) * 2012-12-21 2018-07-13 菲力尔系统公司 Utilize the infrared image enhancement of fusion
CN104995910A (en) * 2012-12-21 2015-10-21 菲力尔系统公司 Infrared imaging enhancement with fusion
US11030731B2 (en) 2016-12-27 2021-06-08 Zhejiang Dahua Technology Co., Ltd. Systems and methods for fusing infrared image and visible light image
CN106780392A (en) * 2016-12-27 2017-05-31 浙江大华技术股份有限公司 A kind of image interfusion method and device
CN106780392B (en) * 2016-12-27 2020-10-02 浙江大华技术股份有限公司 Image fusion method and device
CN109712102B (en) * 2017-10-25 2020-11-27 杭州海康威视数字技术股份有限公司 Image fusion method and device and image acquisition equipment
CN109712102A (en) * 2017-10-25 2019-05-03 杭州海康威视数字技术股份有限公司 A kind of image interfusion method, device and image capture device
CN109345470B (en) * 2018-09-07 2021-11-23 华南理工大学 Face image fusion method and system
CN109345470A (en) * 2018-09-07 2019-02-15 华南理工大学 Facial image fusion method and system
CN111340059A (en) * 2018-12-19 2020-06-26 北京嘀嘀无限科技发展有限公司 Image feature extraction method and device, electronic equipment and storage medium
WO2020237931A1 (en) * 2019-05-24 2020-12-03 Zhejiang Dahua Technology Co., Ltd. Systems and methods for image processing
CN110942424A (en) * 2019-11-07 2020-03-31 昆明理工大学 Composite network single image super-resolution reconstruction method based on deep learning
CN110942424B (en) * 2019-11-07 2023-04-18 昆明理工大学 Composite network single image super-resolution reconstruction method based on deep learning
CN111161176A (en) * 2019-12-24 2020-05-15 RealMe重庆移动通信有限公司 Image processing method and device, storage medium and electronic equipment
CN111161176B (en) * 2019-12-24 2022-11-08 RealMe重庆移动通信有限公司 Image processing method and device, storage medium and electronic equipment
CN111539008A (en) * 2020-05-22 2020-08-14 支付宝(杭州)信息技术有限公司 Image processing method and device for protecting privacy
CN111539008B (en) * 2020-05-22 2023-04-11 蚂蚁金服(杭州)网络技术有限公司 Image processing method and device for protecting privacy
CN113379640A (en) * 2021-06-25 2021-09-10 哈尔滨工业大学 Multistage filtering image denoising method fusing edge information

Similar Documents

Publication Publication Date Title
CN101231748A (en) Image fusion method based on singular value decomposition
Deshmukh et al. Image fusion and image quality assessment of fused images
Goshtasby et al. Similarity and dissimilarity measures
Wang A new multiwavelet-based approach to image fusion
CN107657217A (en) The fusion method of infrared and visible light video based on moving object detection
CN101546428B (en) Image fusion of sequence infrared and visible light based on region segmentation
Liu et al. Fusing synergistic information from multi-sensor images: an overview from implementation to performance assessment
Su et al. Two-step multitemporal nonlocal means for synthetic aperture radar images
CN106327459A (en) Visible light and infrared image fusion algorithm based on UDCT (Uniform Discrete Curvelet Transform) and PCNN (Pulse Coupled Neural Network)
Hou et al. Infrared and visible images fusion using visual saliency and optimized spiking cortical model in non-subsampled shearlet transform domain
CN103366353A (en) Infrared image and visible-light image fusion method based on saliency region segmentation
Xiao et al. Segmentation of multispectral high-resolution satellite imagery using log Gabor filters
CN108154094A (en) Hyperspectral image unsupervised waveband selection method based on subinterval division
Duan et al. Infrared and visible image fusion using multi-scale edge-preserving decomposition and multiple saliency features
Khare et al. Shearlet transform based technique for image fusion using median fusion rule
CN101052993A (en) Multi-scale filter synthesis for medical image registration
Pan et al. DenseNetFuse: A study of deep unsupervised DenseNet to infrared and visual image fusion
Kekre et al. Implementation and comparison of different transform techniques using Kekre's wavelet transform for image fusion
Ferris et al. Using ROC curves and AUC to evaluate performance of no-reference image fusion metrics
Luo et al. Multi-modal image fusion via deep laplacian pyramid hybrid network
Nercessian et al. Multiresolution decomposition schemes using the parameterized logarithmic image processing model with application to image fusion
ALEjaily et al. Fusion of remote sensing images using contourlet transform
CN107832793A (en) The sorting technique and system of a kind of high spectrum image
CN104392209A (en) Evaluation model for image complexity of target and background
CN114764880B (en) Multi-component GAN reconstructed remote sensing image scene classification method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Open date: 20080730