CN104240212A - ISAR image fusion method based on target characteristics - Google Patents

ISAR image fusion method based on target characteristics

Info

Publication number
CN104240212A
Authority
CN
China
Prior art keywords
image
maximum value pixel
vector
transformed image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410445675.7A
Other languages
Chinese (zh)
Other versions
CN104240212B (en)
Inventor
张磊 (Zhang Lei)
许志伟 (Xu Zhiwei)
邢孟道 (Xing Mengdao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University
Priority to CN201410445675.7A
Publication of CN104240212A
Application granted
Publication of CN104240212B
Status: Expired - Fee Related

Abstract

The invention discloses an ISAR image fusion method based on target features. First, the full-aperture data are segmented and sub-images are formed; spatial feature points and spatial feature descriptor vectors are extracted from the sub-images, and the spatial feature points are matched; the preferred spatial feature points are then selected; azimuth scaling and de-centering are performed according to an azimuth-scale-factor search; the rotation angle between sub-images is computed, the transformed image is rotated, and the entropy of the mean-amplitude image is obtained; the fusion sub-image is determined by minimum entropy; finally, the base vector of the fused image is obtained by non-negative matrix factorization, from which the final fused image is determined. The fused image retains the complete information of the target, and the method can be used for the fusion of different sub-images and for real-time processing.

Description

ISAR image fusion method based on target features
Technical field
The invention belongs to the field of radar technology and relates to the fusion of inverse synthetic aperture radar (ISAR) images, in particular to an ISAR image fusion method based on target features.
Background art
Inverse synthetic aperture radar (ISAR) imaging has been widely used in civil and military fields, including target imaging and the identification of target geometry. When a moving space target undergoes translation and complicated multi-directional rotation over a long coherent processing time, the local scattering points and structure of the target change between ISAR images formed over shorter coherent processing times, causing the loss of part of the target's structural information. It is therefore necessary to perform multiple short-time processings and then register and fuse the short-time ISAR images, so as to obtain an ISAR image covering a long time and multiple aspect angles.
In ISAR image fusion, directly fusing all the images obtained from the short-time processings at once is difficult; adjacent images are therefore first fused pairwise, and the final fused image is obtained accurately through multiple levels of fusion. For the pairwise fusion of ISAR images, however, methods that operate directly on the whole image suffer from long processing time and low fusion accuracy.
To address this problem, some researchers have proposed methods based on direct image registration and methods based on high-energy points of the image target. For registration-based fusion, CHEN Fulong proposed a shape-matrix estimation method that obtains the translation component from feature descriptors and then registers the images using an estimated rotation angle and geometric invariants. This shape-matrix rotation estimation relies mainly on edge feature points, however, and is unsuitable for rotation estimation in long-coherence-time ISAR images; moreover, when the ISAR target moves over a long coherence time its azimuth dimension undergoes geometric deformation, a situation this method does not consider. For that problem, high-energy point extraction combined with a rotation cost function can be used. Vignaud used the RELAX algorithm to extract a large number of high-energy points from many consecutive high-precision ISAR images, counter-rotated the high-energy points of the target in the different images through a cost function built on a counter-rotation structure, obtained a series of target images with consistent attitude, and then superposed these high-energy points to form the final fused image. This method requires the original ISAR images to have very high precision, so that the extracted high-energy points favor fusion based on such scattering points. It has two drawbacks: first, an ordinary ISAR image generally does not contain especially many high-energy points; second, the fused image obtained in this way suffers severe information loss.
None of the above methods can extract enough feature points to estimate the rotation angle accurately; at the same time they neglect both the local structural variation of the ISAR target and the scale variation in azimuth, and therefore cannot effectively fuse long-coherence-time ISAR images.
Summary of the invention
The object of the invention is to overcome the above deficiencies of the prior art by providing an ISAR image fusion method based on target features that fuses different sub-images while retaining the complete information of the target in the fused image.
To achieve the above object, the invention adopts the following technical solution.
An ISAR image fusion method based on target features, characterized by comprising the following steps:
Step 1: acquire full-aperture radar ISAR data having b range cells in the azimuth direction, b being a positive integer; divide the full-aperture ISAR data evenly over the b range cells in azimuth to obtain X segments of sub-aperture data, X being greater than 1 and at most b;
perform ISAR imaging on the X segments of sub-aperture data to obtain X ISAR sub-images, each having N_1 rows and N_2 columns of pixels, N_1 and N_2 being natural numbers;
take the first sub-image T_1 of the X ISAR sub-images as the reference image P and the second sub-image T_2 as the transformed image S;
Step 2: given an amplitude threshold on pixel amplitude, define each pixel whose amplitude exceeds the threshold as a maximum pixel;
determine the number A_1 of maximum pixels in the reference image P;
perform spatial feature extraction on the a-th maximum pixel in P, 1 ≤ a ≤ A_1, to obtain its spatial feature descriptor vector; the A_1 maximum pixels in P then form the reference spatial feature descriptor matrix D_1 of dimension A_1 × 2^n, where A_1, the number of maximum pixels in P, is the number of rows of D_1 and [·]^T denotes vector transpose;
determine the number A_2 of maximum pixels in the transformed image S;
perform spatial feature extraction on the a′-th maximum pixel in S, 1 ≤ a′ ≤ A_2, to obtain its spatial feature descriptor vector; the A_2 maximum pixels in S then form the transform spatial feature descriptor matrix D_2 of dimension A_2 × 2^n, where A_2, the number of maximum pixels in S, is the number of rows of D_2; A_1, A_2 and n are positive integers;
Step 3: compute the Euclidean distance between the i-th row of the reference spatial feature descriptor matrix D_1 and each row of the transform spatial feature descriptor matrix D_2, 1 ≤ i ≤ A_1, obtaining the A_2 Euclidean distances of the i-th row of D_1; from these A_2 Euclidean distances select the minimum O_{i,f} and the second minimum O_{i,f_0}, 1 ≤ f ≤ A_2, 1 ≤ f_0 ≤ A_2;
set a distance threshold G; if the ratio of the minimum O_{i,f} to the second minimum O_{i,f_0} selected from the A_2 Euclidean distances of the i-th row of D_1 is less than G, the match of the i-th maximum pixel in the reference image P is the f-th maximum pixel of the transformed image S;
let i traverse from 1 to A_1 to determine U matched pairs of maximum pixels among the A_1 maximum pixels of P and the A_2 maximum pixels of S, 1 ≤ U ≤ min[A_1, A_2];
Step 4: compute the Euclidean distance between the position coordinates of each matched pair of maximum pixels, obtaining U Euclidean distances;
set a noise tolerance threshold range F; if the Euclidean distance of the u-th pair of position coordinates lies within F, retain the matched pair of maximum pixels corresponding to the u-th Euclidean distance; otherwise discard it, 1 ≤ u ≤ U; this yields K preferred pairs of maximum pixels, 1 ≤ K ≤ U;
the position coordinates of the K preferred maximum pixels in the reference image P are (x_k^1, y_k^1) and those in the transformed image S are (x_k^2, y_k^2), 1 ≤ k ≤ K;
Step 5: set the m-th azimuth scale factor σ_m of the transformed image S, 1 ≤ m ≤ M, M being the maximum number of azimuth scale factors of S; apply the azimuth scale transform to the position coordinates (x_k^2, y_k^2) of the K preferred maximum pixels in S to obtain the corrected maximum pixel position coordinates (x̃_k^2, ỹ_k^2);
de-center the position coordinates (x_k^1, y_k^1) of the K preferred maximum pixels in the reference image P to obtain the K centered preferred coordinates (x_k^{*1}, y_k^{*1}) of P; de-center the K corrected maximum pixel position coordinates (x̃_k^2, ỹ_k^2) in S to obtain the K centered preferred coordinates (x̃_k^{*2}, ỹ_k^{*2}) of S;
Step 6: using the K centered preferred coordinates of the reference image P and the K centered preferred coordinates of the transformed image S, compute the rotation angle θ_m between P and S;
rotate the transformed image S by the rotation angle θ_m to obtain the rotated image S̃_m; superpose the amplitudes of corresponding pixels of S̃_m and P, then take the mean amplitude of the corresponding pixels to obtain the mean-amplitude image I^{1,2}, and compute the entropy λ_m of I^{1,2};
Step 7: following steps 5 and 6, apply the azimuth scale transform and rotation to the transformed image S with each of the M azimuth scale factors, obtaining the M entropies λ_1, …, λ_m, …, λ_M; M is the maximum number of azimuth scale factors of S;
select the minimum entropy λ_e from the M entropies λ_1, …, λ_m, …, λ_M and determine the rotated image S̃_e corresponding to λ_e, 1 ≤ e ≤ M; take the rotated image S̃_e corresponding to the minimum entropy λ_e as the optimized sub-image of S;
Step 8: take the first sub-image T_1 of the X ISAR sub-images as the 1st fusion sub-image Z_1;
take the optimized sub-image of the transformed image S as the 2nd fusion sub-image Z_2;
for each integer c, 3 ≤ c ≤ X, take the (c−1)-th fusion sub-image as the reference image P and the c-th sub-image T_c of the X sub-images as the transformed image S, obtain the optimized sub-image of S according to steps 2 to 7, and take it as the c-th fusion sub-image Z_c; this yields the X fusion sub-images, namely the 1st fusion sub-image Z_1, the 2nd fusion sub-image Z_2, …, the c-th fusion sub-image Z_c, …, the X-th fusion sub-image Z_X, where the c-th fusion sub-image Z_c is an N_1 × N_2 pixel matrix;
Step 9: decompose the X fusion sub-images by non-negative matrix factorization to obtain the base vector of the X fusion sub-images, W = [w_1, w_2, …, w_{N_1 N_2}]^T, of dimension N_1 N_2 × 1;
Step 10: arrange every N_2 consecutive elements of the base vector W of the X fusion sub-images as one row to obtain the rearranged base image, of dimension N_1 × N_2; the rearranged base image is the final fused image.
Features and further improvements of the above technical solution are as follows:
(1) Step 3 comprises the following sub-steps:
(3a) compute the Euclidean distance O_{i,l} between the i-th row of the spatial feature descriptor matrix D_1 of the reference image P and the l-th row of the spatial feature descriptor matrix D_2 of the transformed image S;
letting l take the values 1, 2, 3, …, A_2 in turn yields the A_2 Euclidean distances between the i-th row of D_1 and each row of D_2;
these A_2 Euclidean distances form the i-th distance vector O_i = [O_{i,1}, O_{i,2}, …, O_{i,l}, …, O_{i,A_2}], 1 ≤ l ≤ A_2;
(3b) select the minimum O_{i,f} and the second minimum in the i-th distance vector O_i and compute the ratio of the minimum O_{i,f} to the second minimum among the elements of O_i, where f is an integer, 1 ≤ f ≤ A_2;
if the ratio of the minimum O_{i,f} to the second minimum in the i-th distance vector O_i is less than the distance threshold G, the i-th maximum pixel of the reference image P matches the f-th maximum pixel of the transformed image S, 1 ≤ i ≤ A_1, 1 ≤ f ≤ A_2;
if the ratio of the minimum O_{i,f} to the second minimum in the i-th distance vector O_i is greater than or equal to the distance threshold G, the i-th maximum pixel of P matches none of the A_2 maximum pixels of S;
(3c) following sub-steps (3a) and (3b), let i traverse from 1 to A_1 to obtain the U matched pairs of maximum pixels in P and S, where U ≤ min[A_1, A_2].
(2) Step 5 comprises the following sub-steps:
(5a) construct the search region of azimuth scale factors σ = [σ_1, σ_2, …, σ_m, …, σ_M], 1 ≤ m ≤ M, where σ_{m+1} = σ_m + Δ_σ and Δ_σ is the step interval; M is the maximum number of azimuth scale factors of the transformed image S;
(5b) select the m-th azimuth scale factor σ_m and apply the following azimuth scale transform to the k-th preferred maximum pixel in the transformed image S, obtaining the corrected maximum pixel position coordinates:

$$[\tilde{x}_k^2,\ \tilde{y}_k^2] = [x_k^2,\ y_k^2]\begin{bmatrix} 1/\sigma_m & 0 \\ 0 & 1 \end{bmatrix}, \qquad 1 \le k \le K$$

where (x_k^2, y_k^2) is the coordinate position of the k-th preferred maximum pixel in S before correction;
(5c) following sub-step (5b), apply the azimuth scale transform to the position coordinates of the K preferred maximum pixels in S, obtaining the K corrected maximum pixel position coordinates (x̃_k^2, ỹ_k^2);
(5d) compute the mean (x̄^1, ȳ^1) of the K preferred maximum pixel position coordinates in the reference image P;
de-center the K preferred maximum pixel position coordinates of P to obtain the K centered preferred coordinates of P, where de-centering the k-th preferred coordinate gives the k-th centered preferred coordinate

$$[x_k^{*1},\ y_k^{*1}] = [x_k^1 - \bar{x}^1,\ y_k^1 - \bar{y}^1]$$

(5e) de-center the K corrected maximum pixel position coordinates (x̃_k^2, ỹ_k^2) in S to obtain the K centered preferred coordinates (x̃_k^{*2}, ỹ_k^{*2}) of S.
(3) Step 6 comprises the following sub-steps:
(6a) rotate the transformed image S by the rotation angle θ_m to obtain the rotated image S̃_m:

$$\tilde{S}_m = \begin{bmatrix} \cos\theta_m & \sin\theta_m \\ -\sin\theta_m & \cos\theta_m \end{bmatrix} S$$

(6b) superpose the amplitudes of corresponding pixels of the reference image P and the rotated image S̃_m, then take the mean amplitude of the corresponding pixels to obtain the mean-amplitude image I^{1,2}; the rotated image S̃_m has N_1 rows and N_2 columns of pixels;
(6c) compute the entropy λ_m of the mean-amplitude image I^{1,2}:

$$\lambda_m = -\sum_{\eta=1}^{N_1}\sum_{\kappa=1}^{N_2} u_{\eta,\kappa}\,\ln u_{\eta,\kappa}$$

$$u_{\eta,\kappa} = \bigl|I_{\eta,\kappa}^{1,2}\bigr| \Big/ \sum_{\eta=1}^{N_1}\sum_{\kappa=1}^{N_2} \bigl|I_{\eta,\kappa}^{1,2}\bigr|$$

where ln[·] is the logarithm to base e, η is the pixel row index of the mean-amplitude image I^{1,2}, 1 ≤ η ≤ N_1, κ is the pixel column index of I^{1,2}, 1 ≤ κ ≤ N_2, N_1 is the maximum number of rows of I^{1,2}, and N_2 is its maximum number of columns.
(4) Step 9 comprises the following sub-steps:
(9a) for the c-th fusion sub-image Z_c among the X fusion sub-images Z_1, Z_2, …, Z_c, …, Z_X, join the rows of elements of the corresponding matrix head to tail, converting Z_c into the N_1 N_2-dimensional column vector V_c;
(9b) following sub-step (9a), obtain the X converted column vectors V_1, V_2, …, V_c, …, V_X of the X fusion sub-images, and set the conversion matrix V = [V_1, V_2, …, V_c, …, V_X], whose number of rows is N_1 N_2 and number of columns is X;
(9c) perform Q iterations on the base vector W with the non-negative matrix factorization method to obtain the base vector of the Q-th iteration, W^Q = [w_1^Q, w_2^Q, …, w_{N_1 N_2}^Q], Q being the preset total number of iterations;
the q-th iteration of the base vector, W^q, is given by

$$w_\alpha^q = w_\alpha^{q-1}\,\frac{\sum_{\beta=1}^{X} h_\beta^{q-1}\, v_{\alpha,\beta}\big/(W^{q-1}H^{q-1})_{\alpha,\beta}}{\sum_{\beta=1}^{X} h_\beta^{q-1}}$$

$$h_\beta^q = h_\beta^{q-1}\,\frac{\sum_{\alpha=1}^{N_1 N_2} w_\alpha^{q-1}\, v_{\alpha,\beta}\big/(W^{q-1}H^{q-1})_{\alpha,\beta}}{\sum_{\alpha=1}^{N_1 N_2} w_\alpha^{q-1}}$$

where 1 ≤ q ≤ Q, Q is the preset total number of iterations, and α is the element index of the base vector W, 1 ≤ α ≤ N_1 N_2; H^q is the auxiliary vector of the q-th iteration and β is the element index of the auxiliary vector H, 1 ≤ β ≤ X; w_α^q is the α-th element of the base vector W^q of the q-th iteration; h_β^q is the β-th element of the auxiliary vector H^q of the q-th iteration; v_{α,β} is the element in row α, column β of the matrix V; the initial value W^0 of the base vector W is a random column vector with N_1 N_2 elements, and the initial value H^0 of the auxiliary vector H is a random row vector with X elements; N_1 N_2 is the number of rows of the conversion matrix V and X is its number of columns;
(9d) take the base vector W^Q of the Q-th iteration as the base vector of the X fusion sub-images, W = [w_1, w_2, …, w_{N_1 N_2}].
Compared with the prior art, the invention has prominent substantive features and significant advantages:
1) Compared with prior-art salient-point fusion schemes, the invention segments the full-aperture ISAR data so that each data segment corresponds to a different attitude and provides the scattering-point information of the target under that attitude; the complete information of the target is thus well preserved, which can be used to study structural changes of the target and to perform target detection;
2) The invention registers local maximum pixels, that is, it determines the matched maximum pixels in the two images and then estimates the rotation angle from the position coordinates of the matched maximum pixels, so the rotation angle between adjacent target images is estimated accurately and stably. Prior-art whole-image-coefficient methods traverse all translations of all pixels in the image to evaluate the image coefficient under every pixel shift, whereas the inventive method uses only a relatively small number of extracted maximum pixels and a scale-transform search interval to estimate the rotation angle, and therefore has higher computational efficiency;
3) The invention fuses adjacent images by non-negative matrix factorization, which effectively fuses the target structure across images of different energy levels; compared with prior-art pixel-sum or energy-sum methods, it avoids the loss of part of the structure caused by overly large energy jumps.
Brief description of the drawings
The invention is further described below with reference to the drawings and specific embodiments.
Fig. 1 is the flow chart of the overall operation of the invention;
Fig. 2 shows the two adjacent ISAR images with different rotation angular velocities used in the simulation experiment, where Fig. 2(a) is the reference image and Fig. 2(b) is the transformed image; the abscissa is the azimuth sample index and the ordinate is the range sample index;
Fig. 3 is the entropy curve obtained by applying the azimuth scale search of the invention to Fig. 2; the abscissa is the azimuth scale factor and the ordinate is the entropy;
Fig. 4 shows the maximum-pixel registration result of the invention for the two ISAR sub-images of Fig. 2; the abscissa is the azimuth sample index and the ordinate is the range sample index;
Fig. 5 shows the result of correcting the transformed image of Fig. 2(b) with the azimuth scale factor and the rotation angle; the abscissa is the azimuth sample index and the ordinate is the range sample index;
Fig. 6 shows the fusion result of the invention; the abscissa is the azimuth sample index and the ordinate is the range sample index.
Detailed description of the embodiments
With reference to Fig. 1, an ISAR image fusion method based on target features is described, comprising the following steps:
Step 1: acquire full-aperture radar ISAR data having b range cells in the azimuth direction, b being a positive integer; divide the full-aperture ISAR data evenly over the b range cells in azimuth to obtain X segments of sub-aperture data, X being greater than 1 and at most b;
perform ISAR imaging on the X segments of sub-aperture data to obtain X ISAR sub-images, each having N_1 rows and N_2 columns of pixels, N_1 and N_2 being natural numbers;
take the first sub-image T_1 of the X ISAR sub-images as the reference image P and the second sub-image T_2 as the transformed image S.
Step 2: given an amplitude threshold on pixel amplitude, define each pixel whose amplitude exceeds the threshold as a maximum pixel;
determine the A_1 maximum pixels in the reference image P;
perform spatial feature extraction on the a-th maximum pixel in P, 1 ≤ a ≤ A_1, to obtain its spatial feature descriptor vector; the A_1 maximum pixels in P then form the reference spatial feature descriptor matrix D_1 of dimension A_1 × 2^n, where [·]^T denotes vector transpose;
determine the A_2 maximum pixels in the transformed image S;
perform spatial feature extraction on the a′-th maximum pixel in S, 1 ≤ a′ ≤ A_2, to obtain its spatial feature descriptor vector; the A_2 maximum pixels in S then form the transform spatial feature descriptor matrix D_2 of dimension A_2 × 2^n; A_1, A_2 and n are positive integers.
(2a) Extract from the reference image P the A_1 maximum pixels and the A_1 × 2^n reference spatial feature descriptor matrix D_1; n is a positive integer with an empirical value of 7.
A maximum pixel is a pixel at an extremum of the amplitudes of neighboring pixels in the image. Maximum pixels can be extracted in several ways, for example with the scale-invariant feature transform (SIFT) or the speeded-up robust features (SURF) method.
The definition of the spatial feature descriptor matrix depends on the chosen spatial feature extraction method: the descriptor matrix extracted with SIFT represents the amplitude-gradient variation of the scattering points, while the descriptor matrix extracted with SURF represents the Haar-wavelet responses of the scattering points.
(2b) Following the spatial feature extraction applied to the reference image P, extract from the transformed image S the A_2 maximum pixels and the A_2 × 2^n transform spatial feature descriptor matrix D_2; n is a positive integer with an empirical value of 7.
The target feature used in the invention is the amplitude characteristic of the pixels.
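By way of illustration, the maximum-pixel extraction of step 2 can be sketched in a few lines of Python; the 3 × 3 neighbourhood, the function name and the use of scipy are assumptions of this sketch rather than part of the patent, and in practice a SIFT or SURF detector would also return the 2^n-dimensional descriptor vectors:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def extract_maximum_pixels(img, amp_thresh):
    """Return (row, col) positions of local amplitude maxima above amp_thresh."""
    mag = np.abs(img)                              # ISAR sub-images are complex-valued
    is_peak = mag == maximum_filter(mag, size=3)   # extremum within a 3x3 neighbourhood
    rows, cols = np.nonzero(is_peak & (mag > amp_thresh))
    return np.stack([rows, cols], axis=1)          # A x 2 array, one row per maximum pixel
```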
Step 3: compute the Euclidean distance between the i-th row of the reference spatial feature descriptor matrix D_1 and each row of the transform spatial feature descriptor matrix D_2, 1 ≤ i ≤ A_1, obtaining the A_2 Euclidean distances of the i-th row of D_1; from these A_2 Euclidean distances select the minimum O_{i,f} and the second minimum O_{i,f_0}, 1 ≤ f ≤ A_2, 1 ≤ f_0 ≤ A_2;
set a distance threshold G; if the ratio of the minimum O_{i,f} to the second minimum O_{i,f_0} selected from the A_2 Euclidean distances of the i-th row of D_1 is less than G, the match of the i-th maximum pixel in the reference image P is the f-th maximum pixel of the transformed image S;
let i traverse from 1 to A_1 to determine U matched pairs of maximum pixels among the A_1 maximum pixels of P and the A_2 maximum pixels of S, 1 ≤ U ≤ min[A_1, A_2].
(3a) compute the Euclidean distance O_{i,l} between the i-th row of the reference spatial feature descriptor matrix D_1 of the reference image P and the l-th row of the transform spatial feature descriptor matrix D_2 of the transformed image S;
letting l take the values 1, 2, 3, …, A_2 in turn yields the A_2 Euclidean distances between the i-th row of D_1 and each row of D_2;
these A_2 Euclidean distances form the i-th distance vector O_i = [O_{i,1}, O_{i,2}, …, O_{i,l}, …, O_{i,A_2}], 1 ≤ l ≤ A_2;
(3b) select the minimum O_{i,f} and the second minimum in the i-th distance vector O_i and compute the ratio of the minimum O_{i,f} to the second minimum among the elements of O_i, where f is an integer, 1 ≤ f ≤ A_2;
if the ratio of the minimum O_{i,f} to the second minimum in the i-th distance vector O_i is less than the distance threshold G, the i-th maximum pixel of the reference image P matches the f-th maximum pixel of the transformed image S, 1 ≤ i ≤ A_1, 1 ≤ f ≤ A_2;
if the ratio of the minimum O_{i,f} to the second minimum in the i-th distance vector O_i is greater than or equal to the distance threshold G, the i-th maximum pixel of P matches none of the A_2 maximum pixels of S;
(3c) following sub-steps (3a) and (3b), let i traverse from 1 to A_1 to obtain the U matched pairs of maximum pixels in P and S, where U ≤ min[A_1, A_2].
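The ratio test of sub-steps (3a) to (3c) can be sketched as follows, assuming the descriptor matrices D_1 and D_2 are numpy arrays of shape A_1 × 2^n and A_2 × 2^n; the value G = 0.7 is illustrative, since the patent does not fix the distance threshold:

```python
import numpy as np

def match_descriptors(D1, D2, G=0.7):
    """Nearest/second-nearest ratio test of step 3; returns matched (i, f) pairs."""
    matches = []
    for i, d in enumerate(D1):
        dists = np.linalg.norm(D2 - d, axis=1)  # the A2 Euclidean distances O_i
        f, f0 = np.argsort(dists)[:2]           # minimum and second minimum
        if dists[f] / dists[f0] < G:            # ratio below threshold G -> match
            matches.append((i, f))
    return matches                              # U matched pairs of maximum pixels
```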
Step 4: compute the Euclidean distance between the position coordinates of each matched pair of maximum pixels, obtaining U Euclidean distances;
set a noise tolerance threshold range F; if the Euclidean distance of the u-th pair of position coordinates lies within F, retain the matched pair of maximum pixels corresponding to the u-th Euclidean distance; otherwise discard it, 1 ≤ u ≤ U; this yields K preferred pairs of maximum pixels, 1 ≤ K ≤ U;
the position coordinates of the K preferred maximum pixels in the reference image P are (x_k^1, y_k^1) and those in the transformed image S are (x_k^2, y_k^2), 1 ≤ k ≤ K.
(4a) Compute the Euclidean distance between the position coordinates of each of the U matched pairs of maximum pixels, obtaining the U Euclidean distances p_1, p_2, …, p_u, …, p_U;
(4b) using the U Euclidean distances p_1, p_2, …, p_u, …, p_U and the noise tolerance threshold range F, apply the random sample consensus (RANSAC) method to obtain K valid Euclidean distances, and take the K matched pairs of maximum pixels corresponding to these K valid distances as the K preferred pairs of maximum pixels.
Preferably, the noise tolerance threshold range F is 2 to 2.5, which ensures that the preferred maximum pixel pairs obey a consistent linear relationship. If F is less than 2, too few matched maximum pixels are retained and the linear-model error becomes large; if F is greater than 2.5, too many badly matched maximum pixels are retained and the linear-model error likewise becomes large.
The random sample consensus method is described in Chum, O. and Matas, J.: 'Optimal randomized RANSAC', IEEE Trans. Pattern Anal. Mach. Intell., 2008, 30, (8), pp. 1472-1482.
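A sketch of the screening in sub-step (4b), here realized with scikit-image's generic RANSAC and a similarity-transform model; the model choice and the parameter values other than F are assumptions of this sketch, the patent itself only fixing the noise tolerance F between 2 and 2.5:

```python
import numpy as np
from skimage.measure import ransac
from skimage.transform import SimilarityTransform

def select_preferred_pairs(src_xy, dst_xy, F=2.5):
    """Step 4: keep the K matched pairs consistent with a single linear model.
    src_xy, dst_xy: U x 2 coordinates of matched maxima in P and S."""
    model, inliers = ransac((src_xy, dst_xy), SimilarityTransform,
                            min_samples=3, residual_threshold=F, max_trials=1000)
    return src_xy[inliers], dst_xy[inliers]     # the K preferred pairs
```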
Step 5: set the m-th azimuth scale factor σ_m of the transformed image S, 1 ≤ m ≤ M, M being the maximum number of azimuth scale factors of S; apply the azimuth scale transform to the position coordinates (x_k^2, y_k^2) of the K preferred maximum pixels in S to obtain the corrected maximum pixel position coordinates (x̃_k^2, ỹ_k^2);
de-center the position coordinates (x_k^1, y_k^1) of the K preferred maximum pixels in the reference image P to obtain the K centered preferred coordinates (x_k^{*1}, y_k^{*1}) of P; de-center the K corrected maximum pixel position coordinates (x̃_k^2, ỹ_k^2) in S to obtain the K centered preferred coordinates (x̃_k^{*2}, ỹ_k^{*2}) of S.
The implementation steps are as follows:
(5a) construct the search region of azimuth scale factors σ = [σ_1, σ_2, …, σ_m, …, σ_M], 1 ≤ m ≤ M, where σ_{m+1} = σ_m + Δ_σ and Δ_σ is the step interval; M is the maximum number of azimuth scale factors of the transformed image S;
(5b) select the m-th azimuth scale factor σ_m and apply the following azimuth scale transform to the k-th preferred maximum pixel in the transformed image S, obtaining the corrected maximum pixel position coordinates:

$$[\tilde{x}_k^2,\ \tilde{y}_k^2] = [x_k^2,\ y_k^2]\begin{bmatrix} 1/\sigma_m & 0 \\ 0 & 1 \end{bmatrix}, \qquad 1 \le k \le K$$

where (x_k^2, y_k^2) is the coordinate position of the k-th preferred maximum pixel in S before correction;
(5c) following sub-step (5b), apply the azimuth scale transform to the position coordinates of the K preferred maximum pixels in S, obtaining the K corrected maximum pixel position coordinates (x̃_k^2, ỹ_k^2);
(5d) compute the mean (x̄^1, ȳ^1) of the K preferred maximum pixel position coordinates in the reference image P;
de-center the K preferred maximum pixel position coordinates of P to obtain the K centered preferred coordinates of P, where de-centering the k-th preferred coordinate gives the k-th centered preferred coordinate

$$[x_k^{*1},\ y_k^{*1}] = [x_k^1 - \bar{x}^1,\ y_k^1 - \bar{y}^1]$$

(5e) de-center the K corrected maximum pixel position coordinates (x̃_k^2, ỹ_k^2) in S to obtain the K centered preferred coordinates (x̃_k^{*2}, ỹ_k^{*2}) of S.
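The coordinate operations of sub-steps (5b) to (5e) amount to one column scaling and two mean subtractions; a minimal sketch assuming K × 2 coordinate arrays whose first column is the azimuth coordinate x:

```python
import numpy as np

def scale_and_center(P_xy, S_xy, sigma_m):
    """Sub-steps (5b)-(5e): azimuth scaling of S's coordinates, then de-centering."""
    S_scaled = S_xy.astype(float).copy()
    S_scaled[:, 0] /= sigma_m               # x (azimuth) coordinate scaled by 1/sigma_m
    P_c = P_xy - P_xy.mean(axis=0)          # centered preferred coordinates of P
    S_c = S_scaled - S_scaled.mean(axis=0)  # centered corrected coordinates of S
    return P_c, S_c
```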
Step 6: using the K centered preferred coordinates of the reference image P and the K centered preferred coordinates of the transformed image S, compute the rotation angle θ_m between P and S;
rotate the transformed image S by the rotation angle θ_m to obtain the rotated image S̃_m; superpose the amplitudes of corresponding pixels of S̃_m and P, then take the mean amplitude of the corresponding pixels to obtain the mean-amplitude image I^{1,2}, and compute the entropy λ_m of I^{1,2}.
(6a) rotate the transformed image S by the rotation angle θ_m to obtain the rotated image S̃_m:

$$\tilde{S}_m = \begin{bmatrix} \cos\theta_m & \sin\theta_m \\ -\sin\theta_m & \cos\theta_m \end{bmatrix} S$$

(6b) superpose the amplitudes of corresponding pixels of the reference image P and the rotated image S̃_m, then take the mean amplitude of the corresponding pixels to obtain the mean-amplitude image I^{1,2}; the rotated image S̃_m has N_1 rows and N_2 columns of pixels;
(6c) compute the entropy λ_m of the mean-amplitude image I^{1,2}:

$$\lambda_m = -\sum_{\eta=1}^{N_1}\sum_{\kappa=1}^{N_2} u_{\eta,\kappa}\,\ln u_{\eta,\kappa}$$

$$u_{\eta,\kappa} = \bigl|I_{\eta,\kappa}^{1,2}\bigr| \Big/ \sum_{\eta=1}^{N_1}\sum_{\kappa=1}^{N_2} \bigl|I_{\eta,\kappa}^{1,2}\bigr|$$

where ln[·] is the logarithm to base e, η is the pixel row index of the mean-amplitude image I^{1,2}, 1 ≤ η ≤ N_1, κ is the pixel column index of I^{1,2}, 1 ≤ κ ≤ N_2, N_1 is the maximum number of rows of I^{1,2}, and N_2 is its maximum number of columns.
Several methods can compute the rotation angle θ_m between two sub-images in step 6, for example:
1. The singular value decomposition method, described in Horn, B.K.P.: 'Closed-form solution of absolute orientation using unit quaternions', Journal of the Optical Society of America A, 1987, 4, (4), pp. 629-642.
2. The phase gradient method, described in Peng, S.B., Xu, J., Peng, Y.N., and Xiang, J.B.: 'ISAR rotation velocity estimation based on phase slope difference of two prominent scatterers on complex image', IET Radar, Sonar, Navig., 2011, 5, (9), pp. 1002-1009.
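A sketch of step 6 using the first of these options, the singular value decomposition (Kabsch/Procrustes) solution, together with the entropy of sub-step (6c); the scipy-based rotation and the sign convention of the angle are implementation assumptions of this sketch:

```python
import numpy as np
from scipy.ndimage import rotate

def rotation_angle(P_c, S_c):
    """SVD estimate of the rotation aligning the centered points of S to those of P."""
    H = S_c.T @ P_c                          # 2x2 cross-covariance of the point sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, d]) @ U.T       # R @ s ~= p
    return np.degrees(np.arctan2(R[1, 0], R[0, 0]))

def mean_image_entropy(P, S, theta_deg):
    """Rotate S, average amplitudes with P, and return the entropy lambda_m."""
    S_rot = rotate(np.abs(S), theta_deg, reshape=False)
    I = (np.abs(P) + S_rot) / 2.0            # mean-amplitude image I^{1,2}
    u = I / I.sum()                          # normalized amplitudes u_eta,kappa
    u = u[u > 0]                             # ln is undefined at zero amplitude
    return float(-(u * np.log(u)).sum())
```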
Step 7: following steps 5 and 6, apply the azimuth scale transform and rotation to the transformed image S with each of the M azimuth scale factors, obtaining the M entropies λ_1, …, λ_m, …, λ_M; M is the maximum number of azimuth scale factors of S;
select the minimum entropy λ_e from the M entropies λ_1, …, λ_m, …, λ_M and determine the rotated image S̃_e corresponding to λ_e, 1 ≤ e ≤ M; take the rotated image S̃_e corresponding to the minimum entropy λ_e as the optimized sub-image of S.
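Step 7 is then a one-dimensional search over the scale factors; a sketch reusing the helpers from the previous sketches, where rescale_azimuth is a hypothetical helper that stretches the azimuth axis of the image itself:

```python
import numpy as np
from scipy.ndimage import affine_transform

def rescale_azimuth(S, sigma_m):
    """Hypothetical helper: stretch the azimuth (column) axis of |S| by sigma_m."""
    return affine_transform(np.abs(S), np.diag([1.0, 1.0 / sigma_m]))

def minimum_entropy_search(P, S, P_xy, S_xy, sigmas):
    """Step 7: try each azimuth scale factor and keep the minimum-entropy result."""
    best = None
    for sigma_m in sigmas:                   # e.g. np.arange(0.7, 1.31, 0.1), as in Fig. 3
        P_c, S_c = scale_and_center(P_xy, S_xy, sigma_m)
        theta_m = rotation_angle(P_c, S_c)
        lam = mean_image_entropy(P, rescale_azimuth(S, sigma_m), theta_m)
        if best is None or lam < best[0]:
            best = (lam, sigma_m, theta_m)
    return best                              # (lambda_e, sigma_e, theta_e)
```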
Step 8: take the first sub-image T_1 of the X ISAR sub-images as the 1st fusion sub-image Z_1;
take the optimized sub-image of the transformed image S as the 2nd fusion sub-image Z_2;
for each integer c, 3 ≤ c ≤ X, take the (c−1)-th fusion sub-image as the reference image P and the c-th sub-image T_c of the X sub-images as the transformed image S, obtain the optimized sub-image of S according to steps 2 to 7, and take it as the c-th fusion sub-image Z_c; this yields the X fusion sub-images, namely the 1st fusion sub-image Z_1, the 2nd fusion sub-image Z_2, …, the c-th fusion sub-image Z_c, …, the X-th fusion sub-image Z_X, where the c-th fusion sub-image Z_c is an N_1 × N_2 pixel matrix.
Step 9: decompose the X fusion sub-images by non-negative matrix factorization to obtain the base vector of the X fusion sub-images, W = [w_1, w_2, …, w_{N_1 N_2}]^T, of dimension N_1 N_2 × 1.
(9a) for the c-th fusion sub-image Z_c among the X fusion sub-images Z_1, Z_2, …, Z_c, …, Z_X, join the rows of elements of the corresponding matrix head to tail, converting Z_c into the N_1 N_2-dimensional column vector V_c;
(9b) following sub-step (9a), obtain the X converted column vectors V_1, V_2, …, V_c, …, V_X of the X fusion sub-images, and set the conversion matrix V = [V_1, V_2, …, V_c, …, V_X], whose number of rows is N_1 N_2 and number of columns is X;
(9c) perform Q iterations on the base vector W with the non-negative matrix factorization method to obtain the base vector of the Q-th iteration, W^Q = [w_1^Q, w_2^Q, …, w_{N_1 N_2}^Q], Q being the preset total number of iterations;
the q-th iteration of the base vector, W^q, is given by

$$w_\alpha^q = w_\alpha^{q-1}\,\frac{\sum_{\beta=1}^{X} h_\beta^{q-1}\, v_{\alpha,\beta}\big/(W^{q-1}H^{q-1})_{\alpha,\beta}}{\sum_{\beta=1}^{X} h_\beta^{q-1}}$$

$$h_\beta^q = h_\beta^{q-1}\,\frac{\sum_{\alpha=1}^{N_1 N_2} w_\alpha^{q-1}\, v_{\alpha,\beta}\big/(W^{q-1}H^{q-1})_{\alpha,\beta}}{\sum_{\alpha=1}^{N_1 N_2} w_\alpha^{q-1}}$$

where 1 ≤ q ≤ Q, Q is the preset total number of iterations, and α is the element index of the base vector W, 1 ≤ α ≤ N_1 N_2; H^q is the auxiliary vector of the q-th iteration and β is the element index of the auxiliary vector H, 1 ≤ β ≤ X; w_α^q is the α-th element of the base vector W^q of the q-th iteration; h_β^q is the β-th element of the auxiliary vector H^q of the q-th iteration; v_{α,β} is the element in row α, column β of the matrix V; the initial value W^0 of the base vector W is a random column vector with N_1 N_2 elements, and the initial value H^0 of the auxiliary vector H is a random row vector with X elements; N_1 N_2 is the number of rows of the conversion matrix V and X is its number of columns;
(9d) take the base vector W^Q of the Q-th iteration as the base vector of the X fusion sub-images, W = [w_1, w_2, …, w_{N_1 N_2}].
The value of Q can be any positive integer; its empirical value is 500. If Q is less than 500, the value of the resulting cost function D is too large, causing large errors; if Q is greater than 500, the computational load is excessive, which is unfavorable for real-time processing.
The iterative formulas of sub-step (9c) derive from the non-negative matrix factorization method.
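The multiplicative updates of sub-step (9c) translate directly into numpy; a minimal rank-one sketch of the iteration, with a small eps added purely to avoid division by zero (an implementation detail the patent does not address):

```python
import numpy as np

def nmf_base_vector(V, Q=500, eps=1e-12):
    """Sub-steps (9c)-(9d): Q multiplicative updates of W and H for V ~= W H,
    V of shape (N1*N2, X); returns the base vector W of length N1*N2."""
    rng = np.random.default_rng(0)
    W = rng.random(V.shape[0]) + eps         # W^0: random column vector
    H = rng.random(V.shape[1]) + eps         # H^0: random row vector
    for _ in range(Q):
        WH = np.outer(W, H) + eps
        W = W * ((V / WH) @ H) / H.sum()     # update of w_alpha^q
        WH = np.outer(W, H) + eps
        H = H * (W @ (V / WH)) / W.sum()     # update of h_beta^q
    return W
```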
Step 10: arrange every N_2 consecutive elements of the base vector W of the X fusion sub-images as one row to obtain the rearranged base image, of dimension N_1 × N_2; the rearranged base image is the final fused image.
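Sub-steps (9a), (9b) and step 10 then reduce to stacking and reshaping; a usage sketch built on nmf_base_vector above, assuming a list of X complex N_1 × N_2 fusion sub-images:

```python
import numpy as np

def fuse_subimages(fusion_subimages, Q=500):
    """Stack the X fusion sub-images into V, factorize, reshape the base vector."""
    N1, N2 = fusion_subimages[0].shape
    V = np.stack([np.abs(Z).ravel() for Z in fusion_subimages], axis=1)  # N1*N2 x X
    W = nmf_base_vector(V, Q=Q)          # base vector of the X fusion sub-images
    return W.reshape(N1, N2)             # every N2 elements form one row (step 10)
```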
At this point, the ISAR image fusion method based on target features is essentially complete.
The technical idea of the invention is as follows: first segment the full-aperture ISAR data and perform imaging to obtain the ISAR sub-images; match the maximum pixels of the ISAR images and use the matched maximum pixels for rotation estimation to obtain a high-precision rotation angle; then, through the azimuth-scale-factor search combined with the minimum entropy of the combined image, obtain the fusion sub-images; finally, use non-negative matrix factorization to obtain the base vector of the reference and transformed images, and hence the final fused image.
The effect of the invention is further illustrated below in conjunction with simulation experiments.
1. Simulation conditions:
The two ISAR images with different rotation angular velocities used in the simulation experiment are shown in Fig. 2, where Fig. 2(a) is the reference image P and Fig. 2(b) is the transformed image S;
The main parameters of the simulation are listed in Table 1:
Table 1
Sub-aperture data size: 64 × 64
ISAR sub-image size: 512 × 512
Wavelength λ: 0.015 m
Transmitted bandwidth: 500 MHz
Reference image rotation angular velocity: 1°/s
Transformed image rotation angular velocity: 1.2°/s
2. Simulation contents:
Simulation 1: the azimuth scale factor search of the inventive method is applied to Fig. 2(a) and Fig. 2(b) over the search range [0.7, 0.8, …, 1.3], and the entropy of the corresponding mean-amplitude image is computed, yielding the entropy curve of the mean-amplitude image versus the azimuth scale factor; the result is shown in Fig. 3.
Simulation 2: maximum-pixel matching is applied to Fig. 2 with the inventive method; the result is shown in Fig. 4.
Simulation 3: the position coordinates of the maximum pixels matched in Fig. 4 are de-centered with the inventive method, and the singular value decomposition method gives a rotation angle of 2.1231°. Fig. 2(b) is then rotated; the result is shown in Fig. 5.
Simulation 4: image fusion is applied to Fig. 2(a) and Fig. 5 with the non-negative matrix factorization method; the result is shown in Fig. 6.
3. Analysis of simulation results:
As can be seen from Fig. 2, the target in the two ISAR sub-images undergoes a rotational change, and in the transformed image some scattering points on the head and wings are missing. The azimuth scale of the transformed image is inconsistent with that of the reference image, which is caused by their different rotation angular velocities;
As can be seen from Fig. 3, the entropies of the mean-amplitude images under different azimuth scale factors differ noticeably; the entropy of the mean-amplitude image is minimal at an azimuth scale factor of 1.2, where the mean-amplitude image is best focused;
As can be seen from Fig. 4, the spatial feature extraction method, combined with RANSAC rejection of erroneous points, achieves correct maximum-pixel matching between the two sub-images, and the resulting rotation angle is 2.1231°;
As can be seen from Fig. 5, after azimuth scale and rotation correction of Fig. 2(b) with the azimuth scale factor 1.2 and the rotation angle 2.1231°, the common structure is consistent with that of Fig. 2(a);
As can be seen from Fig. 6, the fusion result obtained with the non-negative matrix factorization method reflects well how the two sub-images complement each other's missing structure, and shows good scattering-point focusing on the common structure.

Claims (5)

1. An ISAR image fusion method based on target features, characterized by comprising the following steps:
Step 1: acquire full-aperture radar ISAR data having b range cells in the azimuth direction, b being a positive integer; divide the full-aperture ISAR data evenly over the b range cells in azimuth to obtain X segments of sub-aperture data, X being greater than 1 and at most b;
perform ISAR imaging on the X segments of sub-aperture data to obtain X ISAR sub-images, each having N_1 rows and N_2 columns of pixels, N_1 and N_2 being natural numbers;
take the first sub-image T_1 of the X ISAR sub-images as the reference image P and the second sub-image T_2 as the transformed image S;
Step 2: given an amplitude threshold on pixel amplitude, define each pixel whose amplitude exceeds the threshold as a maximum pixel;
determine the number A_1 of maximum pixels in the reference image P;
perform spatial feature extraction on the a-th maximum pixel in P, 1 ≤ a ≤ A_1, to obtain its spatial feature descriptor vector; the A_1 maximum pixels in P then form the reference spatial feature descriptor matrix D_1 of dimension A_1 × 2^n, where A_1, the number of maximum pixels in P, is the number of rows of D_1 and [·]^T denotes vector transpose;
determine the number A_2 of maximum pixels in the transformed image S;
perform spatial feature extraction on the a′-th maximum pixel in S, 1 ≤ a′ ≤ A_2, to obtain its spatial feature descriptor vector; the A_2 maximum pixels in S then form the transform spatial feature descriptor matrix D_2 of dimension A_2 × 2^n, where A_2, the number of maximum pixels in S, is the number of rows of D_2; A_1, A_2 and n are positive integers;
Step 3: compute the Euclidean distance between the i-th row of the reference spatial feature descriptor matrix D_1 and each row of the transform spatial feature descriptor matrix D_2, 1 ≤ i ≤ A_1, obtaining the A_2 Euclidean distances of the i-th row of D_1; from these A_2 Euclidean distances select the minimum O_{i,f} and the second minimum O_{i,f_0}, 1 ≤ f ≤ A_2, 1 ≤ f_0 ≤ A_2;
set a distance threshold G; if the ratio of the minimum O_{i,f} to the second minimum O_{i,f_0} selected from the A_2 Euclidean distances of the i-th row of D_1 is less than G, the match of the i-th maximum pixel in the reference image P is the f-th maximum pixel of the transformed image S;
let i traverse from 1 to A_1 to determine U matched pairs of maximum pixels among the A_1 maximum pixels of P and the A_2 maximum pixels of S, 1 ≤ U ≤ min[A_1, A_2];
Step 4: compute the Euclidean distance between the position coordinates of each matched pair of maximum pixels, obtaining U Euclidean distances of position coordinates;
set a noise tolerance threshold range F; if the Euclidean distance of the u-th pair of position coordinates lies within F, retain the matched pair of maximum pixels corresponding to the u-th Euclidean distance; otherwise discard it, 1 ≤ u ≤ U; this yields K preferred pairs of maximum pixels, 1 ≤ K ≤ U;
the position coordinates of the K preferred maximum pixels in the reference image P are (x_k^1, y_k^1) and those in the transformed image S are (x_k^2, y_k^2), 1 ≤ k ≤ K;
Step 5: set the m-th azimuth scale factor σ_m of the transformed image S, 1 ≤ m ≤ M, M being the maximum number of azimuth scale factors of S; apply the azimuth scale transform to the position coordinates (x_k^2, y_k^2) of the K preferred maximum pixels in S to obtain the corrected maximum pixel position coordinates (x̃_k^2, ỹ_k^2);
de-center the position coordinates (x_k^1, y_k^1) of the K preferred maximum pixels in the reference image P to obtain the K centered preferred coordinates (x_k^{*1}, y_k^{*1}) of P; de-center the K corrected maximum pixel position coordinates (x̃_k^2, ỹ_k^2) in S to obtain the K centered preferred coordinates (x̃_k^{*2}, ỹ_k^{*2}) of S;
Step 6: using the K centered preferred coordinates of the reference image P and the K centered preferred coordinates of the transformed image S, compute the rotation angle θ_m between P and S;
rotate the transformed image S by the rotation angle θ_m to obtain the rotated image S̃_m; superpose the amplitudes of corresponding pixels of S̃_m and P, then take the mean amplitude of the corresponding pixels to obtain the mean-amplitude image I^{1,2}, and compute the entropy λ_m of I^{1,2};
Step 7: following steps 5 and 6, apply the azimuth scale transform and rotation to the transformed image S with each of the M azimuth scale factors, obtaining the M entropies λ_1, …, λ_m, …, λ_M; M is the maximum number of azimuth scale factors of S;
select the minimum entropy λ_e from the M entropies λ_1, …, λ_m, …, λ_M and determine the rotated image S̃_e corresponding to λ_e, 1 ≤ e ≤ M; take the rotated image S̃_e corresponding to the minimum entropy λ_e as the optimized sub-image of S;
Step 8: take the first sub-image T_1 of the X ISAR sub-images as the 1st fusion sub-image Z_1;
take the optimized sub-image of the transformed image S as the 2nd fusion sub-image Z_2;
for each integer c, 3 ≤ c ≤ X, take the (c−1)-th fusion sub-image as the reference image P and the c-th sub-image T_c of the X sub-images as the transformed image S, obtain the optimized sub-image of S according to steps 2 to 7, and take it as the c-th fusion sub-image Z_c; this yields the X fusion sub-images, namely the 1st fusion sub-image Z_1, the 2nd fusion sub-image Z_2, …, the c-th fusion sub-image Z_c, …, the X-th fusion sub-image Z_X, where the c-th fusion sub-image Z_c is an N_1 × N_2 pixel matrix;
Step 9: decompose the X fusion sub-images by non-negative matrix factorization to obtain the base vector of the X fusion sub-images, W = [w_1, w_2, …, w_{N_1 N_2}]^T, of dimension N_1 N_2 × 1;
Step 10: arrange every N_2 consecutive elements of the base vector W of the X fusion sub-images as one row to obtain the rearranged base image, of dimension N_1 × N_2; the rearranged base image is the final fused image.
2. The ISAR image fusion method based on target features according to claim 1, characterized in that step 3 comprises the following sub-steps:
(3a) compute the Euclidean distance O_{i,l} between the i-th row of the spatial feature descriptor matrix D_1 of the reference image P and the l-th row of the spatial feature descriptor matrix D_2 of the transformed image S;
letting l take the values 1, 2, 3, …, A_2 in turn yields the A_2 Euclidean distances between the i-th row of D_1 and each row of D_2;
these A_2 Euclidean distances form the i-th distance vector O_i = [O_{i,1}, O_{i,2}, …, O_{i,l}, …, O_{i,A_2}], 1 ≤ l ≤ A_2;
(3b) select the minimum O_{i,f} and the second minimum in the i-th distance vector O_i and compute the ratio of the minimum O_{i,f} to the second minimum among the elements of O_i, where f is an integer, 1 ≤ f ≤ A_2;
if the ratio of the minimum O_{i,f} to the second minimum in the i-th distance vector O_i is less than the distance threshold G, the i-th maximum pixel of the reference image P matches the f-th maximum pixel of the transformed image S, 1 ≤ i ≤ A_1, 1 ≤ f ≤ A_2;
if the ratio of the minimum O_{i,f} to the second minimum in the i-th distance vector O_i is greater than or equal to the distance threshold G, the i-th maximum pixel of P matches none of the A_2 maximum pixels of S;
(3c) following sub-steps (3a) and (3b), let i traverse from 1 to A_1 to obtain the U matched pairs of maximum pixels in P and S, where U ≤ min[A_1, A_2].
3. The ISAR image fusion method based on target features according to claim 1, characterized in that step 5 comprises the following sub-steps:
(5a) construct the search region of azimuth scale factors σ = [σ_1, σ_2, …, σ_m, …, σ_M], 1 ≤ m ≤ M, where σ_{m+1} = σ_m + Δ_σ and Δ_σ is the step interval; M is the maximum number of azimuth scale factors of the transformed image S;
(5b) select the m-th azimuth scale factor σ_m and apply the following azimuth scale transform to the k-th preferred maximum pixel in the transformed image S, obtaining the corrected maximum pixel position coordinates:

$$[\tilde{x}_k^2,\ \tilde{y}_k^2] = [x_k^2,\ y_k^2]\begin{bmatrix} 1/\sigma_m & 0 \\ 0 & 1 \end{bmatrix}, \qquad 1 \le k \le K$$

where (x_k^2, y_k^2) is the coordinate position of the k-th preferred maximum pixel in S before correction;
(5c) following sub-step (5b), apply the azimuth scale transform to the position coordinates of the K preferred maximum pixels in S, obtaining the K corrected maximum pixel position coordinates (x̃_k^2, ỹ_k^2);
(5d) compute the mean (x̄^1, ȳ^1) of the K preferred maximum pixel position coordinates in the reference image P;
de-center the K preferred maximum pixel position coordinates of P to obtain the K centered preferred coordinates of P, where de-centering the k-th preferred coordinate gives the k-th centered preferred coordinate

$$[x_k^{*1},\ y_k^{*1}] = [x_k^1 - \bar{x}^1,\ y_k^1 - \bar{y}^1]$$

(5e) de-center the K corrected maximum pixel position coordinates (x̃_k^2, ỹ_k^2) in S to obtain the K centered preferred coordinates (x̃_k^{*2}, ỹ_k^{*2}) of S.
4. The ISAR image fusion method based on target features according to claim 1, characterized in that step 6 comprises the following sub-steps:
(6a) rotate the transformed image S by the rotation angle θ_m to obtain the rotated image S̃_m:

$$\tilde{S}_m = \begin{bmatrix} \cos\theta_m & \sin\theta_m \\ -\sin\theta_m & \cos\theta_m \end{bmatrix} S$$

(6b) superpose the amplitudes of corresponding pixels of the reference image P and the rotated image S̃_m, then take the mean amplitude of the corresponding pixels to obtain the mean-amplitude image I^{1,2}; the rotated image S̃_m has N_1 rows and N_2 columns of pixels;
(6c) compute the entropy λ_m of the mean-amplitude image I^{1,2}:

$$\lambda_m = -\sum_{\eta=1}^{N_1}\sum_{\kappa=1}^{N_2} u_{\eta,\kappa}\,\ln u_{\eta,\kappa}$$

$$u_{\eta,\kappa} = \bigl|I_{\eta,\kappa}^{1,2}\bigr| \Big/ \sum_{\eta=1}^{N_1}\sum_{\kappa=1}^{N_2} \bigl|I_{\eta,\kappa}^{1,2}\bigr|$$

where ln[·] is the logarithm to base e, η is the pixel row index of the mean-amplitude image I^{1,2}, 1 ≤ η ≤ N_1, κ is the pixel column index of I^{1,2}, 1 ≤ κ ≤ N_2, N_1 is the maximum number of rows of I^{1,2}, and N_2 is its maximum number of columns.
5. the ISAR image interfusion method of a kind of based target feature according to claim 1, it is characterized in that, step 9 comprises following sub-step:
(9a) by X width fusant image Z 1, Z 2..., Z c..., Z xin c width fusant image Z cin corresponding matrix, each row headtotail of element, is converted into N 1n 2the c width fusant image Z of dimension ccolumn vector V c;
(9b) according to step 9a) obtain X width fusant image conversion column vector V 1, V 2..., V c..., V x, setting transition matrix V=[V 1, V 2..., V c..., V x], wherein, the line number of transition matrix V is N 1n 2, columns is X;
(9c) adopt non-negative matrix factorization method that base vector W is carried out Q iteration, obtain the base vector of the Q time iteration W Q = [ w 1 Q , w 2 Q , · · · , w N 1 N 2 Q ] , Q is the iteration total degree of setting;
Wherein, the q time iteration base vector W qfor:
w α q = w α q - 1 Σ β = 1 X [ h β q - 1 v α , β / ( W q - 1 H q - 1 ) α , β ] Σ β = 1 X h β q - 1
h β q = h β q - 1 Σ α = 1 N 1 N 2 [ w α q - 1 v α , β / ( W q - 1 H q - 1 ) α , β ] Σ α = 1 N 1 N 2 w α q - 1
In formula, 1≤q≤Q, Q is the iteration total degree of setting, and α is the element sequence number of base vector W, 1≤α≤N 1n 2; H qrepresent the auxiliary vector of the q time iteration, β is the element sequence number of auxiliary vector H, 1≤β≤X; be base vector W in the q time iteration qα element; be the auxiliary vector H of the q time iteration qβ element; v α, βfor the capable β row of a matrix V α element; The initial value W of base vector W 0for element number is N 1n 2random column vector, the initial value H of auxiliary vector H 0for the random row vector that element number is X; N 1n 2for the line number of transition matrix V, X is the columns of transition matrix V;
(9d) Set the base vector of the Q-th iteration as the base vector of the X fusion sub-images: $W = [w_1, w_2, \cdots, w_{N_1 N_2}]$.
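The multiplicative updates of step (9c) admit a compact rank-one NMF sketch. The following NumPy code is an assumption-laden illustration (the function name, epsilon guard, and seeded random initialization are ours), with both W and H updated from the previous iterates as the claim specifies:

```python
import numpy as np

def nmf_base_vector(V, Q=100, seed=0):
    """Rank-1 non-negative matrix factorization by multiplicative updates.
    V: (N1*N2, X) non-negative conversion matrix whose columns are the
    vectorized fusion sub-images. Returns (W, H) after Q iterations."""
    rng = np.random.default_rng(seed)
    n, X = V.shape
    W = rng.random((n, 1))   # W^0: random column vector with N1*N2 elements
    H = rng.random((1, X))   # H^0: random row vector with X elements
    eps = 1e-12              # guard against division by zero
    for _ in range(Q):
        WH = W @ H + eps
        W_new = W * ((V / WH) @ H.T) / (H.sum() + eps)  # update of w_alpha
        H_new = H * (W.T @ (V / WH)) / (W.sum() + eps)  # update of h_beta
        W, H = W_new, H_new  # both updates use the (q-1)-th iterates
    return W, H

# Hypothetical usage: stack X vectorized sub-images as columns of V
# V = np.column_stack([np.abs(Z_c).ravel() for Z_c in sub_images])
# W, H = nmf_base_vector(V, Q=200)
```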
CN201410445675.7A 2014-09-03 2014-09-03 ISAR image fusion method based on target characteristics Expired - Fee Related CN104240212B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410445675.7A CN104240212B (en) 2014-09-03 2014-09-03 ISAR image fusion method based on target characteristics

Publications (2)

Publication Number Publication Date
CN104240212A true CN104240212A (en) 2014-12-24
CN104240212B CN104240212B (en) 2017-03-29

Family

ID=52228221

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410445675.7A Expired - Fee Related CN104240212B (en) 2014-09-03 2014-09-03 ISAR image interfusion methods based on target characteristic

Country Status (1)

Country Link
CN (1) CN104240212B (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101685154B * 2008-09-27 2012-12-26 Tsinghua University Image fusion method for bistatic/multistatic inverse synthetic aperture radar
CN102288963B * 2011-07-21 2013-06-12 Xidian University Bistatic inverse synthetic aperture radar (ISAR) image fusion method based on sub-aperture parameter estimation

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHIWEI XU ET AL: "Azimuth Scaling for Inverse Synthetic Aperture Radar Images with Feature Registration", 2013 6th International Congress on Image and Signal Processing *
XU Ran et al.: "Research on a bistatic ISAR image fusion method based on sub-aperture parameter estimation", Journal of Electronics & Information Technology *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106204437B * 2016-06-28 2019-05-28 Shenzhen Lingyun Shixun Technology Co., Ltd. Image fusion method
CN106204437A * 2016-06-28 2016-12-07 Shenzhen Lingyun Shixun Technology Co., Ltd. Image fusion method
CN107274363B * 2017-06-02 2020-09-22 Beijing Institute of Technology Edge-preserving image filtering method with scale-sensitivity characteristic
CN107274363A * 2017-06-02 2017-10-20 Beijing Institute of Technology Edge-preserving image filtering method with scale-sensitivity characteristic
CN109146001A * 2018-09-14 2019-01-04 Xidian University Multi-view ISAR image fusion method
CN109146001B * 2018-09-14 2021-09-10 Xidian University Multi-view ISAR image fusion method
CN110058247A * 2019-03-29 2019-07-26 Hangzhou Dianzi University Real-time imaging method for synthetic aperture sonar
CN110428369A * 2019-06-20 2019-11-08 China University of Geosciences (Wuhan) CHNMF remote sensing image unmixing algorithm based on information entropy
CN110428369B * 2019-06-20 2021-10-08 China University of Geosciences (Wuhan) CHNMF remote sensing image unmixing method based on information entropy
CN112069651A * 2020-07-23 2020-12-11 Xi'an Institute of Space Radio Technology Spin-stabilized target rotation axis estimation method based on ISAR imaging
CN112069651B * 2020-07-23 2024-04-09 Xi'an Institute of Space Radio Technology Method for estimating spin-stabilized target rotation axis based on ISAR imaging
CN113020428A * 2021-03-24 2021-06-25 Beijing Institute of Technology Progressive die machining monitoring method, device, equipment and storage medium
CN113020428B * 2021-03-24 2022-06-28 Beijing Institute of Technology Progressive die machining monitoring method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN104240212B (en) 2017-03-29

Similar Documents

Publication Publication Date Title
CN104240212A (en) ISAR image fusion method based on target characteristics
US11238602B2 (en) Method for estimating high-quality depth maps based on depth prediction and enhancement subnetworks
CN108596248B (en) Remote sensing image classification method based on improved deep convolutional neural network
Wu et al. Inshore ship detection based on convolutional neural network in optical satellite images
CN107563438A Fast and robust multi-modal remote sensing image matching method and system
CN105989604A (en) Target object three-dimensional color point cloud generation method based on KINECT
CN106485690A Automatic registration and fusion method of point cloud data and optical image based on point features
CN103822616A Remote-sensing image matching method combining feature segmentation with terrain undulation constraint
CN104318569A (en) Space salient region extraction method based on depth variation model
Scharstein et al. Semi-global stereo matching with surface orientation priors
Cai et al. MHA-Net: Multipath Hybrid Attention Network for building footprint extraction from high-resolution remote sensing imagery
CN103955701A (en) Multi-level-combined multi-look synthetic aperture radar image target recognition method
CN105931264A (en) Sea-surface infrared small object detection method
Shen et al. Coupling model-and data-driven methods for remote sensing image restoration and fusion: Improving physical interpretability
CN110197503A Non-rigid point set registration method based on enhanced affine transformation
CN105652271A Augmented Lagrangian angular super-resolution processing method for real-beam radar
Zhao et al. Aliked: A lighter keypoint and descriptor extraction network via deformable transformation
Tang et al. Research on 3D human pose estimation using RGBD camera
Zhao et al. Joint learning of salient object detection, depth estimation and contour extraction
CN104318552A (en) Convex hull projection graph matching based model registration method
CN114442092B (en) SAR deep learning three-dimensional imaging method for distributed unmanned aerial vehicle
CN104463962A (en) Three-dimensional scene reconstruction method based on GPS information video
CN102663453B Human motion tracking method based on second-generation bandelet transform and extreme learning machine
Chen et al. Learning shape priors for single view reconstruction
CN104123719B Infrared image segmentation method using active contours

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170329

Termination date: 20170903