CN103500444A - Polarization image fusion method - Google Patents

Polarization image fusion method

Info

Publication number: CN103500444A
Application number: CN201310396089.3A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 袁艳, 张思远, 苏丽娟, 胡亮
Assignee (original and current): Beihang University
Application filed by Beihang University; priority and filing date 2013-09-04; published 2014-01-08
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a polarization image fusion method that combines non-negative matrix factorization (NMF) with a pulse-coupled neural network (PCNN). NMF is applied to the matrix formed from the polarization parameter images to obtain two feature basis images; these are then iterated through a PCNN, and the fused polarization image is obtained according to the corresponding fusion rule. Compared with other fusion methods, the method improves both the visual effect and the objective indicators: because the result incorporates more scene polarization information, cluttered backgrounds are effectively suppressed and the detail features of targets are made prominent, improving the ability to recognize targets by polarization imaging detection.

Description

Polarization image fusion method
Technical field
The invention belongs to the field of information fusion and relates to a polarization image fusion method, specifically a polarization image fusion method based on non-negative matrix factorization and a pulse-coupled neural network.
Background art
Polarization is one of the inherent characteristics of optical radiation, and polarization imaging detection is a strong complement to traditional imaging detection technology. Polarization detection captures the polarization information of the light reflected or radiated by a target, which ordinary detection means do not provide; the resulting polarization images carry richer information about the scene than ordinary intensity images or spectral images, and are therefore widely used in fields such as geological prospecting, materials classification, and machine vision.
Polarized light generally takes the form of elliptically polarized light. In practical applications, the Stokes parameters, a group of parameters with identical physical dimensions, are introduced to characterize the polarization state of polarized light.
The Stokes parameters are often expressed as:

$$S = [I, Q, U, V]^T$$
where I denotes the total intensity of the light wave, Q the intensity of linear polarization along the horizontal direction, U the intensity of linear polarization along the 45° direction, and V the circular polarization intensity. The Stokes vector is easy to measure and also covers the randomly polarized component contained in partially polarized light. Two further parameters are essential for expressing the polarization state: the degree of polarization P, the ratio of the fully polarized intensity to the total intensity, and the polarization angle θ, the angle between the vibration direction of the polarized light and the reference direction (taken as the horizontal x axis). They are given by:
$$P = \frac{\sqrt{Q^2 + U^2 + V^2}}{I}, \qquad \theta = \frac{1}{2}\tan^{-1}\frac{U}{Q}$$
The polarization parameter images are partly redundant and partly complementary in how they express polarization information, and image fusion methods exploit this complementarity and redundancy. Polarization image fusion can improve the contrast and sharpness of low-contrast images, and it can greatly compress the natural background while detecting man-made targets within complex natural scenes. Research on polarization information fusion methods therefore has significant research value and practical value.
The document Shutao Li, Bin Yang, Jianwen Hu, "Performance comparison of different multi-resolution transforms for image fusion", Information Fusion, 2011, 12(2): 74-84, discloses image fusion based on the wavelet transform. The method applies a wavelet transform to each original image, decomposing it into feature domains of different frequency bands and building a wavelet pyramid for each image; each decomposition level is then fused separately, with different fusion rules applied to the different frequency components, yielding a fused wavelet pyramid; finally, an inverse wavelet transform (image reconstruction) of the fused pyramid gives the fused image. An N-level wavelet decomposition of a two-dimensional image ultimately yields 3N+1 frequency bands: 3N high-frequency bands and one low-frequency band. This method has the following shortcomings (a sketch of the scheme follows the list):
(1) the fused image contains more background noise, and the improvement in visual effect is poor;
(2) it is not shift-invariant: a very small displacement of the input image causes large changes in the energy distribution among the discrete wavelet transform coefficients at different scales;
(3) its directional selectivity is limited: in an ordinary wavelet decomposition, each scale space is decomposed into only three directions (horizontal, vertical, diagonal).
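For concreteness, the following is a minimal sketch of such a wavelet fusion scheme using the PyWavelets package; the wavelet ('db2'), the decomposition level, and the max-absolute selection rule for the high-frequency bands are illustrative assumptions, not choices made in the cited paper.

```python
import numpy as np
import pywt

def wavelet_fuse(img_a, img_b, wavelet="db2", level=3):
    """Fuse two registered grayscale images via an N-level wavelet pyramid."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]  # average the single low-frequency band
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        # in each of the 3N high-frequency bands, keep the larger-magnitude coefficient
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in ((ha, hb), (va, vb), (da, db))))
    return pywt.waverec2(fused, wavelet)  # inverse transform reconstructs the fused image
```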
The document P. J. Burt and E. H. Adelson, "The Laplacian pyramid as a compact image code", IEEE Trans. Comm. 31(4), 532-540 (1983), discloses image fusion based on the Laplacian pyramid transform. The method first applies Gaussian low-pass filtering and downsampling by alternate rows and columns to each original input image, obtaining the first level of a Gaussian pyramid; low-pass filtering and downsampling the first level gives the second level; repeating this process forms the Gaussian pyramid of each image. Each level of the Gaussian pyramid is then differenced with the interpolated (upsampled) next level, giving the Laplacian pyramid of each image. The Laplacian pyramid represents the edge details of the image at each level, so by comparing corresponding levels of the two images' Laplacian pyramids, a fusion rule can select the more salient image details into the fused image, making the information content of the fused image as rich as possible. However, operations such as quantization and thresholding in the transform process introduce noise, which reduces the sharpness and signal-to-noise ratio of the fused image and degrades the fusion result.
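A minimal sketch of such a Laplacian-pyramid fusion using OpenCV follows; the pyramid depth and the max-absolute selection rule for the detail levels are illustrative assumptions.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Build a Laplacian pyramid: detail levels plus the coarsest residual."""
    gauss = [img.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))           # Gaussian low-pass + downsample
    lap = [g - cv2.pyrUp(gauss[i + 1], dstsize=g.shape[1::-1])
           for i, g in enumerate(gauss[:-1])]          # edge-detail (band-pass) levels
    lap.append(gauss[-1])                              # coarsest residual
    return lap

def pyramid_fuse(img_a, img_b, levels=4):
    la = laplacian_pyramid(img_a, levels)
    lb = laplacian_pyramid(img_b, levels)
    # keep the more salient detail at each level; average the coarse residual
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(la[:-1], lb[:-1])]
    fused.append((la[-1] + lb[-1]) / 2.0)
    out = fused[-1]
    for layer in reversed(fused[:-1]):                 # collapse the pyramid
        out = cv2.pyrUp(out, dstsize=layer.shape[1::-1]) + layer
    return out
```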
Summary of the invention
To address the above problems, the invention provides a polarization image fusion method that combines non-negative matrix factorization with a pulse-coupled neural network, which can effectively highlight the detail features of an object and improve the ability to recognize objects by polarization imaging detection.
The polarization image fusion method of the present invention is realized by the following steps:
Step 1: obtain the polarization parameter image data of the target polarization image;
A polarization imaging system images the target with the transmission axis of its polarizer set at angles of 0°, 45°, 90°, and 135° to the horizontal, yielding the four output intensity polarization images I_0, I_45, I_90, and I_135. From these, the polarization parameter image data are obtained: the Stokes parameters I, Q, and U of the polarized light, and the parameter images of the degree of polarization P and the polarization angle θ.
Step 2: obtain the two feature basis images;
The polarization parameter image data I, Q, U, P, and θ obtained in step 1 are arranged into a data vector matrix V_{p×q} with p rows and q columns:
$$V_{p\times q} = \begin{bmatrix} I & Q & U & P & \theta \end{bmatrix} = \begin{bmatrix} I_{11} & Q_{11} & U_{11} & P_{11} & \theta_{11} \\ I_{12} & Q_{12} & U_{12} & P_{12} & \theta_{12} \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ I_{MN} & Q_{MN} & U_{MN} & P_{MN} & \theta_{MN} \end{bmatrix} \qquad (1)$$
where M and N are the number of rows and columns of the polarization parameter data, each parameter image being flattened into one column of V (so p = M·N and q = 5).
The data vector matrix V is subjected to non-negative matrix factorization, approximately factorizing V into the product of a non-negative basis vector matrix D_{p×r} with p rows and r columns and a non-negative coefficient matrix H_{r×q} with r rows and q columns, with r = 2:
$$\min_{D,H} f(D,H) \equiv \frac{1}{2}\sum_{i=1}^{p}\sum_{j=1}^{q}\left[V_{ij} - (DH)_{ij}\right]^2 \quad \text{s.t.}\; D_{ia} \ge 0,\; H_{bj} \ge 0,\; \forall\, i, a, b, j \qquad (2)$$
In formula (2), V_{ij} is the element in row i, column j of V; i = 1, 2, 3, ..., p; j = 1, 2, 3, ..., q; D_{ia} denotes the element in row i, column a of D, and H_{aj} denotes the element in row a, column j of H; a = 1, 2.
a. Randomly initialize the matrices D and H.
b. Update D and H with the alternating multiplicative update rules:
$$H_{aj} \leftarrow H_{aj}\,\frac{(D^T V)_{aj}}{(D^T D H)_{aj}}, \qquad D_{ia} \leftarrow D_{ia}\,\frac{(V H^T)_{ia}}{(D H H^T)_{ia}} \qquad (3)$$
This yields D = [D_1, D_2]. The matrix D contains approximately the complete information of all the polarization parameter images participating in the fusion and can be used to approximately reproduce the polarization information of the object. D_1, the first column of D, serves as the first image matrix; D_2, the second column, serves as the second image matrix.
c. Compute the sharpness g and variance C of D_1 and D_2:
$$g = \frac{1}{MN}\sum_{i'=1}^{M}\sum_{j'=1}^{N}\sqrt{\frac{\Delta f_x^2(i',j') + \Delta f_y^2(i',j')}{2}} \qquad (4)$$

$$C = \frac{1}{MN}\sum_{i'=1}^{M}\sum_{j'=1}^{N}\left[I_{i'j'} - \bar{I}\right]^2 \qquad (5)$$
In formula (4), Δf_x and Δf_y denote the differences of the image matrix along the x and y directions, respectively; i' and j' index the pixel in row i' and column j' of the polarization parameter image data, i' = 1, 2, 3, ..., M, j' = 1, 2, 3, ..., N.
In formula (5), I_{i'j'} is the gray value of the pixel in row i', column j' of the image matrix, and Ī is the mean gray value of the image matrix. The feature basis vector of whichever of D_1 and D_2 has the larger sharpness and variance is chosen as the reference basis vector, and the feature basis vector of the other image matrix is subjected to histogram specification against it, finally yielding the two processed feature basis images A and B.
Step 3: obtain the fused polarization image;
The two feature basis images obtained in step 2 are fed into two identical PCNN networks and iterated N times with the PCNN model equations, giving for each feature basis image the output Y_{i'j'} of the neuron at row i', column j'. Then, according to R_{i'j'}(n) = R_{i'j'}(n-1) + Y_{i'j'}(n), the firing count R_{i'j'} of the neuron at row i', column j' of each feature basis image is accumulated iteratively, where n is the iteration index.
The matching degree Z of the PCNN pulse counts of the two feature basis images is obtained:
$$Z_{i'j'}(n_{\max}) = \frac{2\,R_{A_{i'j'}}(n_{\max})\,R_{B_{i'j'}}(n_{\max})}{\left[R_{A_{i'j'}}(n_{\max})\right]^2 + \left[R_{B_{i'j'}}(n_{\max})\right]^2} \qquad (6)$$
In formula (6), Z_{i'j'} is the matching degree of the neuron at row i', column j' of the feature basis images; n_max is the maximum number of iterations; R_{A_{i'j'}}(n_max) and R_{B_{i'j'}}(n_max) are the firing counts of the neuron at row i', column j' of feature basis images A and B, respectively.
Finally, the fused image is obtained by selecting between the corresponding neurons of the two feature basis images according to the following formula:
$$I'_{i'j'} = \begin{cases} G_A A_{i'j'} + G_B B_{i'j'}, & Z_{i'j'}(n_{\max}) \ge R_{th} \\ A_{i'j'}, & Z_{i'j'}(n_{\max}) < R_{th} \text{ and } R_{A_{i'j'}}(n_{\max}) \ge R_{B_{i'j'}}(n_{\max}) \\ B_{i'j'}, & Z_{i'j'}(n_{\max}) < R_{th} \text{ and } R_{A_{i'j'}}(n_{\max}) < R_{B_{i'j'}}(n_{\max}) \end{cases} \qquad (7)$$
In formula (7), I'_{i'j'} denotes the gray value of the pixel at row i', column j' of the fused polarization image; R_th is a constant between 0 and 1, preferably 0.95 in the present invention; A_{i'j'} and B_{i'j'} are the neurons at row i', column j' of feature basis images A and B, respectively; and

$$G_A = \frac{R_{A_{i'j'}}(n_{\max})}{R_{A_{i'j'}}(n_{\max}) + R_{B_{i'j'}}(n_{\max})}, \qquad G_B = \frac{R_{B_{i'j'}}(n_{\max})}{R_{A_{i'j'}}(n_{\max}) + R_{B_{i'j'}}(n_{\max})}.$$
The advantages of the invention are:
1. The polarization image fusion method of the invention improves on other methods in both visual effect and objective indicators;
2. The fusion result incorporates more scene polarization information, effectively suppresses cluttered backgrounds, and highlights the detail features of the object, improving the ability to recognize objects by polarization imaging detection;
3. The method has strong applicability and can be applied in many settings.
Brief description of the drawings
Fig. 1 is the flow chart of the polarization image fusion method of the invention;
Fig. 2 is the fusion result for the target image obtained with the wavelet transform method;
Fig. 3 is the fusion result for the target image obtained with the Laplacian pyramid method;
Fig. 4 is the fusion result for the target image obtained with the method of the invention.
Detailed description of the embodiments
The present invention is further described below with reference to the accompanying drawings.
The polarization image fusion method of the present invention, as shown in Fig. 1, is realized by the following steps:
Step 1: obtain the polarization parameter image data of the target polarization image;
A polarization imaging system images the target with the transmission axis of its polarizer set at angles of 0°, 45°, 90°, and 135° to the horizontal, yielding the four output intensity polarization images I_0, I_45, I_90, and I_135. From these, the polarization parameter image data are obtained: the Stokes parameters I, Q, U, and V of the polarized light (in engineering measurement and calculation, V is commonly set to 0), and the parameter images of the degree of polarization P and the polarization angle θ:
$$I = \frac{1}{2}(I_0 + I_{45} + I_{90} + I_{135}), \qquad Q = I_0 - I_{90}, \qquad U = I_{45} - I_{135}, \qquad V = 0 \qquad (1)$$
$$P = \frac{\sqrt{Q^2 + U^2}}{I}, \qquad \theta = \frac{1}{2}\tan^{-1}\frac{U}{Q} \qquad (2)$$
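As a minimal sketch, step 1 can be implemented as follows with NumPy; the epsilon guard against division by zero and the use of arctan2 (a numerically robust form of the tan⁻¹(U/Q) in equation (2)) are implementation assumptions.

```python
import numpy as np

def polarization_parameters(i0, i45, i90, i135, eps=1e-12):
    """Compute the polarization parameter images per equations (1) and (2)."""
    I = 0.5 * (i0 + i45 + i90 + i135)      # total intensity
    Q = i0 - i90                           # horizontal linear component
    U = i45 - i135                         # 45-degree linear component
    P = np.sqrt(Q**2 + U**2) / (I + eps)   # degree of polarization (V taken as 0)
    theta = 0.5 * np.arctan2(U, Q)         # polarization angle
    return I, Q, U, P, theta
```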
Step 2: obtain the two feature basis images;
The polarization parameter image data I, Q, U, P, and θ obtained in step 1 are arranged into the data vector matrix V_{p×q} with p rows and q columns; each column corresponds to one polarization parameter, so q = 5:
$$V_{p\times q} = \begin{bmatrix} I & Q & U & P & \theta \end{bmatrix} = \begin{bmatrix} I_{11} & Q_{11} & U_{11} & P_{11} & \theta_{11} \\ I_{12} & Q_{12} & U_{12} & P_{12} & \theta_{12} \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ I_{MN} & Q_{MN} & U_{MN} & P_{MN} & \theta_{MN} \end{bmatrix} \qquad (3)$$
where M and N are the number of rows and columns of the polarization parameter data;
The data vector matrix V is subjected to non-negative matrix factorization, approximately factorizing V into the product of a non-negative basis vector matrix D_{p×r} with p rows and r columns and a non-negative coefficient matrix H_{r×q} with r rows and q columns:
$$\min_{D,H} f(D,H) \equiv \frac{1}{2}\sum_{i=1}^{p}\sum_{j=1}^{q}\left[V_{ij} - (DH)_{ij}\right]^2 \quad \text{s.t.}\; D_{ia} \ge 0,\; H_{bj} \ge 0,\; \forall\, i, a, b, j \qquad (4)$$
In formula (4), V_{ij} is the element in row i, column j of V; i = 1, 2, 3, ..., p; j = 1, 2, 3, ..., q; D_{ia} denotes the element in row i, column a of D, and H_{aj} denotes the element in row a, column j of H; a = 1, 2, 3, ..., r. If the dimension r of the basis vectors is chosen smaller than p or q, the resulting D and H are smaller than the data vector matrix V, which amounts to representing the original polarization data vectors with relatively few basis vectors; the resulting D then has a degree of linear independence and sparsity, giving it good expressive power for the features and structure of the original polarization data. Provided the dimension r is chosen appropriately, most of the characteristic information of the original polarization data can be captured; the present invention takes r = 2.
a. Randomly initialize the matrices D and H;
b. Update D and H with the alternating multiplicative update rules:
$$H_{aj} \leftarrow H_{aj}\,\frac{(D^T V)_{aj}}{(D^T D H)_{aj}}, \qquad D_{ia} \leftarrow D_{ia}\,\frac{(V H^T)_{ia}}{(D H H^T)_{ia}} \qquad (5)$$
This yields D = [D_1, D_2]. The matrix D contains approximately the complete information of all the polarization parameter images participating in the fusion and can be used to approximately reproduce the polarization information of the object. D_1, the first column of D, serves as the first image matrix; D_2, the second column, serves as the second image matrix.
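A minimal sketch of this factorization with NumPy follows; the iteration count, the epsilon guard, and the prior rescaling of all parameter images to non-negative gray values (which NMF requires, since Q, U, and θ can be negative) are implementation assumptions.

```python
import numpy as np

def nmf_basis_images(params, shape, r=2, iters=500, eps=1e-12, seed=0):
    """Factor V (one flattened parameter image per column) into D and H using
    the multiplicative updates of equation (5); return D1, D2 as images.
    params: list of 5 M x N arrays, assumed pre-scaled to non-negative values."""
    M, N = shape
    V = np.stack([p.reshape(-1) for p in params], axis=1)  # p x q matrix, q = 5
    rng = np.random.default_rng(seed)
    D = rng.random((V.shape[0], r))    # step a: random initialization
    H = rng.random((r, V.shape[1]))
    for _ in range(iters):             # step b: alternating multiplicative updates
        H *= (D.T @ V) / (D.T @ D @ H + eps)
        D *= (V @ H.T) / (D @ H @ H.T + eps)
    return D[:, 0].reshape(M, N), D[:, 1].reshape(M, N)
```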
c. Compute the sharpness g and variance C of D_1 and D_2:
$$g = \frac{1}{MN}\sum_{i'=1}^{M}\sum_{j'=1}^{N}\sqrt{\frac{\Delta f_x^2(i',j') + \Delta f_y^2(i',j')}{2}} \qquad (6)$$

$$C = \frac{1}{MN}\sum_{i'=1}^{M}\sum_{j'=1}^{N}\left[I_{i'j'} - \bar{I}\right]^2 \qquad (7)$$
In formula (6), Δf_x and Δf_y denote the differences of the image matrix along the x and y directions, respectively; i' and j' index the pixel in row i' and column j' of the polarization parameter image data, i' = 1, 2, 3, ..., M, j' = 1, 2, 3, ..., N.
In formula (7), I_{i'j'} is the gray value of the pixel in row i', column j' of the image matrix, and Ī is the mean gray value of the image matrix. The feature basis vector of whichever of D_1 and D_2 has the larger sharpness and variance is chosen as the reference basis vector, and the feature basis vector of the other image matrix is subjected to histogram specification against it, finally yielding the two processed feature basis images A and B.
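A minimal sketch of this selection and histogram specification follows; combining sharpness and variance into a single score by summation, and using scikit-image's match_histograms for the histogram specification step, are assumptions made here for illustration.

```python
import numpy as np
from skimage.exposure import match_histograms

def sharpness(img):
    """Sharpness per equation (6): mean gradient magnitude over the image."""
    gx = np.zeros_like(img, dtype=np.float64)
    gy = np.zeros_like(img, dtype=np.float64)
    gx[:, 1:] = np.diff(img, axis=1)   # difference along x
    gy[1:, :] = np.diff(img, axis=0)   # difference along y
    return np.mean(np.sqrt((gx**2 + gy**2) / 2.0))

def prepare_basis_images(d1, d2):
    """Pick the sharper, higher-variance basis image as the reference (A) and
    histogram-match the other one to it (B)."""
    s1 = sharpness(d1) + np.var(d1)    # np.var implements equation (7)
    s2 = sharpness(d2) + np.var(d2)
    ref, other = (d1, d2) if s1 >= s2 else (d2, d1)
    return ref, match_histograms(other, ref)
```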
Step 3: obtain the fused polarization image.
The two feature basis images obtained in step 2 are fed into two identical PCNN networks and iterated N times with the PCNN model equations, specifically:
$$\begin{aligned} F_{i'j'}(n) &= I_{i'j'} \\ L_{i'j'}(n) &= e^{-\alpha_L}\,L_{i'j'}(n-1) + V_L \sum_{k,l} W_{i'j',kl}\,Y_{kl}(n-1) \\ U_{i'j'}(n) &= F_{i'j'}(n)\left(1 + \beta L_{i'j'}(n)\right) \\ Y_{i'j'}(n) &= \begin{cases} 1, & U_{i'j'}(n) > \theta_{i'j'}(n) \\ 0, & U_{i'j'}(n) \le \theta_{i'j'}(n) \end{cases} \\ \theta_{i'j'}(n) &= e^{-\alpha_\theta}\,\theta_{i'j'}(n-1) + V_\theta\,Y_{i'j'}(n) \end{aligned} \qquad (8)$$
where n is the iteration index; F_{i'j'} is the feed input of the neuron at row i', column j' of the feature basis image; L_{i'j'} is the linking input of that neuron; I_{i'j'} is the gray value at row i', column j' of the image matrix; U_{i'j'} is the internal state of the neuron; Y_{i'j'} is the output of the neuron; W_{i'j',kl} is the weighting coefficient matrix applied to Y_{kl}(n-1) in L_{i'j'}(n); Y_{kl} are the neuron outputs in the linking neighborhood of L_{i'j'} (k and l denote the range of surrounding connected neurons, usually a 3×3 or 5×5 neighborhood); θ_{i'j'} is the dynamic threshold of the neuron; α_L is the time decay constant of the linking input; α_θ is the time constant of the variable threshold function; V_L is the linking input constant; V_θ is the threshold amplification coefficient; and β is the linking modulation constant. At each iteration, if U_{i'j'} exceeds the threshold θ_{i'j'}, then Y_{i'j'} takes the value 1 and the neuron at row i', column j' is said to fire; otherwise Y_{i'j'} takes the value 0 and the neuron does not fire.
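A minimal sketch of one such PCNN, returning the firing counts R needed below, is given here; the 3×3 linking kernel and the parameter values (α_L, α_θ, V_L, V_θ, β) are illustrative assumptions, as the patent does not fix them at this point.

```python
import numpy as np
from scipy.ndimage import convolve

def pcnn_firing_counts(img, n_iter=200, alpha_L=0.1, alpha_T=0.2,
                       V_L=1.0, V_T=20.0, beta=0.1):
    """Iterate the PCNN of equation (8) and accumulate firing counts R."""
    W = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])             # assumed 3x3 linking weights
    F = img.astype(np.float64)                  # feed input: the gray values
    L = np.zeros_like(F)                        # linking input
    Y = np.zeros_like(F)                        # neuron outputs
    T = np.ones_like(F)                         # dynamic threshold theta
    R = np.zeros_like(F)                        # accumulated firing counts
    for _ in range(n_iter):
        L = np.exp(-alpha_L) * L + V_L * convolve(Y, W, mode="constant")
        U = F * (1.0 + beta * L)                # internal state
        Y = (U > T).astype(np.float64)          # fire where U exceeds the threshold
        T = np.exp(-alpha_T) * T + V_T * Y      # raise the threshold where fired
        R += Y                                  # R(n) = R(n-1) + Y(n)
    return R
```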
Subsequently, according to the formula R_{i'j'}(n) = R_{i'j'}(n-1) + Y_{i'j'}(n), the firing count R_{i'j'} of the neuron at row i', column j' in each of the two feature basis images is computed iteratively;
The matching degree Z of the PCNN pulse counts of the two feature basis images is obtained through formula (9):
$$Z_{i'j'}(n_{\max}) = \frac{2\,R_{A_{i'j'}}(n_{\max})\,R_{B_{i'j'}}(n_{\max})}{\left[R_{A_{i'j'}}(n_{\max})\right]^2 + \left[R_{B_{i'j'}}(n_{\max})\right]^2} \qquad (9)$$
In formula (9), Z_{i'j'} is the matching degree of the neuron at row i', column j' of the feature basis images; n_max is the maximum number of iterations; R_{A_{i'j'}}(n_max) and R_{B_{i'j'}}(n_max) are the firing counts of the neuron at row i', column j' of feature basis images A and B, respectively.
Finally, formula (10) selects between the corresponding neurons of the two feature basis images to obtain the fused image:
$$I'_{i'j'} = \begin{cases} G_A A_{i'j'} + G_B B_{i'j'}, & Z_{i'j'}(n_{\max}) \ge R_{th} \\ A_{i'j'}, & Z_{i'j'}(n_{\max}) < R_{th} \text{ and } R_{A_{i'j'}}(n_{\max}) \ge R_{B_{i'j'}}(n_{\max}) \\ B_{i'j'}, & Z_{i'j'}(n_{\max}) < R_{th} \text{ and } R_{A_{i'j'}}(n_{\max}) < R_{B_{i'j'}}(n_{\max}) \end{cases} \qquad (10)$$
In formula (10), I'_{i'j'} denotes the gray value of the pixel at row i', column j' of the fused polarization image; R_th is a constant between 0 and 1, preferably 0.95 in the present invention; A_{i'j'} and B_{i'j'} are the neurons at row i', column j' of feature basis images A and B, respectively; and

$$G_A = \frac{R_{A_{i'j'}}(n_{\max})}{R_{A_{i'j'}}(n_{\max}) + R_{B_{i'j'}}(n_{\max})}, \qquad G_B = \frac{R_{B_{i'j'}}(n_{\max})}{R_{A_{i'j'}}(n_{\max}) + R_{B_{i'j'}}(n_{\max})}.$$
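Putting the pieces together, the fusion rule can be sketched as follows; it assumes the reading of formula (10) reconstructed above (weighted averaging where the firing patterns agree, otherwise taking the pixel from the image that fired more), with an epsilon guard for pixels where neither network fired.

```python
import numpy as np

def fuse(A, B, RA, RB, r_th=0.95, eps=1e-12):
    """Fuse feature basis images A and B from their PCNN firing counts RA, RB."""
    Z = 2.0 * RA * RB / (RA**2 + RB**2 + eps)   # matching degree, equation (9)
    GA = RA / (RA + RB + eps)                   # normalized firing weights
    GB = RB / (RA + RB + eps)
    weighted = GA * A + GB * B                  # used where firing patterns match
    winner = np.where(RA >= RB, A, B)           # otherwise keep the stronger firer
    return np.where(Z >= r_th, weighted, winner)
```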
Figs. 2, 3, and 4 show the fusion results for the same target image obtained with the existing wavelet transform method, the Laplacian pyramid method, and the method of the invention, respectively. It can be seen that the fused image obtained with the method of the invention improves on the other methods in both visual effect and objective indicators; because the result incorporates more scene polarization information, it effectively suppresses cluttered background and highlights the detail features of the target, and can thereby improve the ability to recognize targets by polarization imaging detection.

Claims (2)

1. A polarization image fusion method, characterized in that it is realized by the following steps:
Step 1: obtain the polarization parameter image data of the target polarization image;
A polarization imaging system images the target with the transmission axis of its polarizer set at angles of 0°, 45°, 90°, and 135° to the horizontal, yielding the four output intensity polarization images I_0, I_45, I_90, and I_135; from these, the polarization parameter image data are obtained: the Stokes parameters I, Q, and U of the polarized light, and the parameter images of the degree of polarization P and the polarization angle θ;
Step 2: obtain the two feature basis images;
The polarization parameter image data I, Q, U, P, and θ obtained in step 1 are arranged into a data vector matrix V_{p×q} with p rows and q columns:
$$V_{p\times q} = \begin{bmatrix} I & Q & U & P & \theta \end{bmatrix} = \begin{bmatrix} I_{11} & Q_{11} & U_{11} & P_{11} & \theta_{11} \\ I_{12} & Q_{12} & U_{12} & P_{12} & \theta_{12} \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ I_{MN} & Q_{MN} & U_{MN} & P_{MN} & \theta_{MN} \end{bmatrix} \qquad (1)$$
where M and N are the number of rows and columns of the polarization parameter data;
The data vector matrix V is subjected to non-negative matrix factorization, approximately factorizing V into the product of a non-negative basis vector matrix D_{p×r} with p rows and r columns and a non-negative coefficient matrix H_{r×q} with r rows and q columns, with r = 2:
$$\min_{D,H} f(D,H) \equiv \frac{1}{2}\sum_{i=1}^{p}\sum_{j=1}^{q}\left[V_{ij} - (DH)_{ij}\right]^2 \quad \text{s.t.}\; D_{ia} \ge 0,\; H_{bj} \ge 0,\; \forall\, i, a, b, j \qquad (2)$$
In formula (2), V_{ij} is the element in row i, column j of V; i = 1, 2, 3, ..., p; j = 1, 2, 3, ..., q; D_{ia} denotes the element in row i, column a of D, and H_{aj} denotes the element in row a, column j of H; a = 1, 2;
a. Randomly initialize the matrices D and H;
b. Update D and H with the alternating multiplicative update rules:
$$H_{aj} \leftarrow H_{aj}\,\frac{(D^T V)_{aj}}{(D^T D H)_{aj}}, \qquad D_{ia} \leftarrow D_{ia}\,\frac{(V H^T)_{ia}}{(D H H^T)_{ia}} \qquad (3)$$
This yields D = [D_1, D_2]; the matrix D contains approximately the complete information of all the polarization parameter images participating in the fusion and can be used to approximately reproduce the polarization information of the object; D_1, the first column of D, serves as the first image matrix; D_2, the second column, serves as the second image matrix;
c. Compute the sharpness g and variance C of D_1 and D_2:
$$g = \frac{1}{MN}\sum_{i'=1}^{M}\sum_{j'=1}^{N}\sqrt{\frac{\Delta f_x^2(i',j') + \Delta f_y^2(i',j')}{2}} \qquad (4)$$

$$C = \frac{1}{MN}\sum_{i'=1}^{M}\sum_{j'=1}^{N}\left[I_{i'j'} - \bar{I}\right]^2 \qquad (5)$$
In formula (4), Δf_x and Δf_y denote the differences of the image matrix along the x and y directions, respectively; i' and j' index the pixel in row i' and column j' of the polarization parameter image data, i' = 1, 2, 3, ..., M, j' = 1, 2, 3, ..., N;
In formula (5), I_{i'j'} is the gray value of the pixel in row i', column j' of the image matrix, and Ī is the mean gray value of the image matrix; the feature basis vector of whichever of D_1 and D_2 has the larger sharpness and variance is chosen as the reference basis vector, and the feature basis vector of the other image matrix is subjected to histogram specification against it, finally yielding the two processed feature basis images A and B;
Step 3: obtain the fused polarization image;
The two feature basis images obtained in step 2 are fed into two identical PCNN networks and iterated N times with the PCNN model equations, giving for each feature basis image the output Y_{i'j'} of the neuron at row i', column j'; subsequently, according to the formula R_{i'j'}(n) = R_{i'j'}(n-1) + Y_{i'j'}(n), the firing count R_{i'j'} of the neuron at row i', column j' of each feature basis image is computed iteratively, where n is the iteration index;
The matching degree Z of the PCNN pulse counts of the two feature basis images is obtained:
$$Z_{i'j'}(n_{\max}) = \frac{2\,R_{A_{i'j'}}(n_{\max})\,R_{B_{i'j'}}(n_{\max})}{\left[R_{A_{i'j'}}(n_{\max})\right]^2 + \left[R_{B_{i'j'}}(n_{\max})\right]^2} \qquad (6)$$
In formula (6), Z_{i'j'} is the matching degree of the neuron at row i', column j' of the feature basis images; n_max is the maximum number of iterations; R_{A_{i'j'}}(n_max) and R_{B_{i'j'}}(n_max) are the firing counts of the neuron at row i', column j' of feature basis images A and B, respectively;
Finally, the fused image is obtained by selecting between the corresponding neurons of the two feature basis images according to the following formula:
$$I'_{i'j'} = \begin{cases} G_A A_{i'j'} + G_B B_{i'j'}, & Z_{i'j'}(n_{\max}) \ge R_{th} \\ A_{i'j'}, & Z_{i'j'}(n_{\max}) < R_{th} \text{ and } R_{A_{i'j'}}(n_{\max}) \ge R_{B_{i'j'}}(n_{\max}) \\ B_{i'j'}, & Z_{i'j'}(n_{\max}) < R_{th} \text{ and } R_{A_{i'j'}}(n_{\max}) < R_{B_{i'j'}}(n_{\max}) \end{cases} \qquad (7)$$
In formula (7), I'_{i'j'} denotes the gray value of the pixel at row i', column j' of the fused polarization image; R_th is a constant between 0 and 1, preferably 0.95 in the present invention; A_{i'j'} and B_{i'j'} are the neurons at row i', column j' of feature basis images A and B, respectively; and

$$G_A = \frac{R_{A_{i'j'}}(n_{\max})}{R_{A_{i'j'}}(n_{\max}) + R_{B_{i'j'}}(n_{\max})}, \qquad G_B = \frac{R_{B_{i'j'}}(n_{\max})}{R_{A_{i'j'}}(n_{\max}) + R_{B_{i'j'}}(n_{\max})}.$$
2. The polarization image fusion method according to claim 1, characterized in that in step 3, said R_th is preferably 0.95.
Priority Applications (1)

Application: CN201310396089.3A; priority date: 2013-09-04; filing date: 2013-09-04; title: Polarization image fusion method; status: Pending

Publications (1)

Publication number: CN103500444A; publication date: 2014-01-08

Family ID: 49865647

Family Applications (1): CN201310396089.3A (pending), filed 2013-09-04, Polarization image fusion method

Country Status (1): CN, CN103500444A



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090295933A1 (en) * 2006-05-09 2009-12-03 Yoav Schechner Imaging Systems and Methods for Recovering Object Visibility
CN101894364A (en) * 2010-05-31 2010-11-24 重庆大学 Image fusion method and device based on optical non-down sampling contourlet transform
CN102567977A (en) * 2011-12-31 2012-07-11 南京理工大学 Self-adaptive fusing method of infrared polarization image based on wavelets
CN102682443A (en) * 2012-05-10 2012-09-19 合肥工业大学 Rapid defogging algorithm based on polarization image guide

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Siyuan Zhang et al., "A Polarization Imagery Fusion Algorithm Based on NMF and PCNN", Imaging and Applied Optics, 30 June 2013 (2013-06-30) *
周浦城 et al., "Polarization image fusion method based on non-negative matrix factorization and the IHS color model", Acta Photonica Sinica (光子学报), vol. 39, no. 9, 30 September 2010 (2010-09-30) *
闫敬文 et al., "An improved wavelet-transform-based PCNN image fusion algorithm", Journal of Xiamen University of Technology (厦门理工学院学报), vol. 14, no. 4, 31 December 2006 (2006-12-31) *
陈浩 et al., "Image fusion using pulse coupled neural networks", Optics and Precision Engineering (光学精密工程), vol. 18, no. 4, 30 April 2010 (2010-04-30) *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105787499A (en) * 2014-12-26 2016-07-20 南京理工大学 Camouflaged target identification method based on K-means cluster and polarization information extraction
CN105787499B (en) * 2014-12-26 2019-04-16 南京理工大学 The camouflaged target recognition methods extracted based on K-means cluster and polarization information
CN106033599A (en) * 2015-03-20 2016-10-19 南京理工大学 Visible light enhancement method based on polarized imaging
CN106033599B (en) * 2015-03-20 2019-01-18 南京理工大学 Visible light Enhancement Method based on polarization imaging
CN108447025A (en) * 2018-01-31 2018-08-24 天津大学 A kind of polarization image defogging method based on single image acquisition
CN108447025B (en) * 2018-01-31 2021-07-27 天津大学 Polarization image defogging method based on single image acquisition
CN113421206A (en) * 2021-07-16 2021-09-21 合肥工业大学 Image enhancement method based on infrared polarization imaging
CN113421206B (en) * 2021-07-16 2022-11-15 合肥工业大学 Image enhancement method based on infrared polarization imaging
WO2024012117A1 (en) * 2022-07-14 2024-01-18 中国科学院长春光学精密机械与物理研究所 Polarization intelligent sensing system and sensing method
CN115265786A (en) * 2022-09-30 2022-11-01 长春理工大学 Strong light polarization detection device capable of automatically adjusting exposure value and detection method thereof
CN116091361A (en) * 2023-03-23 2023-05-09 长春理工大学 Multi-polarization parameter image fusion method, system and terrain exploration monitor


Legal Events

C06 / PB01: Publication (application publication date: 2014-01-08)
C10 / SE01: Entry into force of request for substantive examination
WD01: Invention patent application deemed withdrawn after publication