CN105551013A - SAR image sequence registering method based on movement platform parameters - Google Patents

SAR image sequence registering method based on movement platform parameters

Info

Publication number
CN105551013A
Authority
CN
China
Prior art keywords
image
enclosed region
registration
transform
subject
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510738155.XA
Other languages
Chinese (zh)
Other versions
CN105551013B (en)
Inventor
杨志伟
唐光龙
廖桂生
郭永霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201510738155.XA priority Critical patent/CN105551013B/en
Publication of CN105551013A publication Critical patent/CN105551013A/en
Application granted granted Critical
Publication of CN105551013B publication Critical patent/CN105551013B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10032 Satellite or aerial image; Remote sensing
    • G06T 2207/10044 Radar image

Landscapes

  • Image Analysis (AREA)

Abstract

The invention belongs to the field of radar data processing and discloses an SAR image sequence registering method based on movement platform parameters. The method comprises the following steps: a reference image and an image to be registered are obtained; a preliminary transformation image of the reference image is obtained according to the movement platform parameters; multiple closed regions are extracted from the image to be registered and from the preliminary transformation image; the affine invariant moment features of each closed region, together with the Euclidean distances and angles between these features, are calculated; the closed regions of the image to be registered are matched against those of the preliminary transformation image; transformation parameters from the image to be registered to the preliminary transformation image are calculated from the matching result; transformation parameters from the image to be registered to the reference image are then obtained by combining the transformation parameters from the reference image to the preliminary transformation image with those from the image to be registered to the preliminary transformation image; and the image to be registered is registered according to the transformation parameters from the image to be registered to the reference image.

Description

SAR image sequence registration method based on motion platform parameters
Technical field
The present invention relates to the field of radar data processing, and in particular to an SAR image sequence registration method based on motion platform parameters, which can be used for target detection and tracking under wide-area surveillance.
Background technology
Modern high-technology warfare is characterized by suddenness, great depth and comprehensiveness, so battlefield surveillance and reconnaissance systems must be able to monitor and reconnoiter wide areas while providing efficient, accurate and timely information. SAR imaging is therefore combined with GMTI technology to realize ground surveillance and moving-target detection and tracking. Because images of the same scene acquired from different view angles or by different sensors suffer from spatial mismatch, registration of SAR image sequences is a key problem in SAR image processing. Existing SAR image registration methods can be divided into two broad classes: methods based on gray-level information and methods based on image features.
Registration methods based on gray-level information mainly use the gray-level statistics of the images themselves to compute image similarity; common methods include the correlation-coefficient method, the mutual-information method, the phase-correlation method and wavelet-transform-based methods. These methods are simple to implement, but because SAR images are acquired from different view angles, the scattering coefficients of the ground scene vary greatly, and gray-level-based registration is sensitive to noise and to complex deformations, so it is not suitable for registering SAR images with large view angles.
Feature-based registration methods first extract image features or structural information (points, lines, regions, contours, structural information, etc.), then match these features according to some principle to obtain the transformation parameters. Such methods are more widely applicable and more efficient. However, because SAR images with large view-angle changes contain abundant detail, feature-based registration is time-consuming and the resulting registration parameters have large errors, so it needs to be improved for the registration of large-view-angle SAR images.
Summary of the invention
In view of the deficiencies of the prior art, embodiments of the present invention provide an SAR image sequence registration method based on motion platform parameters, which overcomes the shortcomings of existing SAR image registration methods, realizes real-time registration of large-view-angle SAR image sequences for wide-area surveillance, reduces the amount of computation, and improves registration efficiency and accuracy.
The technical idea of the present invention is as follows: first, a pre-transform is applied to the reference image according to the platform motion parameters, yielding a pre-transform image that is already close to the image to be registered, so that finding the transformation from the image to be registered to the reference image is converted into finding the transformation from the image to be registered to the pre-transform image; second, the pre-transform image and the image to be registered are each segmented, the corresponding enclosed region contours are extracted, the regions containing more pixels are retained, and the affine invariant moments of each region are computed; finally, the regions are matched and the transformation parameters from the image to be registered to the pre-transform image are calculated.
To achieve the above object, embodiments of the present invention adopt the following technical scheme.
An SAR image sequence registration method based on motion platform parameters comprises the following steps:
Step 1: obtain a reference image and an image to be registered;
Step 2: obtain motion platform parameters, and obtain a pre-transform image of the reference image according to the motion platform parameters, wherein the transformation parameters from the reference image to the pre-transform image are determined by the motion platform parameters;
Step 3: extract the enclosed regions of the image to be registered whose pixel counts exceed a first pixel threshold and the enclosed regions of the pre-transform image whose pixel counts exceed a second pixel threshold;
Step 4: calculate the affine invariant moment features of each enclosed region of the image to be registered and of each enclosed region of the pre-transform image;
Step 5: calculate the Euclidean distance and the angle between the affine invariant moment features of each enclosed region of the image to be registered and those of each enclosed region of the pre-transform image;
Step 6: according to the Euclidean distances and angles between the affine invariant moment features of the enclosed regions of the image to be registered and those of the pre-transform image, match the enclosed regions of the two images, obtaining all mutually matched enclosed regions of the image to be registered and the pre-transform image;
Step 7: according to all the mutually matched enclosed regions, calculate the transformation parameters from the image to be registered to the pre-transform image;
Step 8: from the transformation parameters from the reference image to the pre-transform image and the transformation parameters from the image to be registered to the pre-transform image, obtain the transformation parameters from the image to be registered to the reference image;
Step 9: register the image to be registered according to the transformation parameters from the image to be registered to the reference image.
Features and further improvements of the present invention are as follows:
(1) In step 2, the motion platform parameters specifically comprise:
the closest distance R_0 from the scene center to the flight path; the platform flying height h; the range sampling frequency f_s; the pulse repetition frequency PRF; and the platform velocity v.
(2) Step 3 specifically comprises:
(3a) setting a first pixel threshold and a second pixel threshold;
(3b) according to the first pixel threshold, extracting the enclosed regions of the image to be registered whose pixel counts exceed the first pixel threshold;
(3c) according to the second pixel threshold, extracting the enclosed regions of the pre-transform image whose pixel counts exceed the second pixel threshold.
(3) In step 4, the affine invariant moment features of an enclosed region are described by six affine invariant moments Inv_1 to Inv_6 (their explicit expressions are given in step 4 of the embodiment and in claim 4), where μ_pq denotes the (p+q)-th order central moment of the enclosed region and f(x, y) denotes the gray value of the gray-level image at (x, y).
(4) In step 5, the Euclidean distance dis(m, n) between the m-th enclosed region of the image to be registered and the n-th enclosed region of the pre-transform image, and the angle cos θ between them, are computed from their affine invariant moment features as defined in step 5 of the embodiment and in claim 5, where inv_k(m) denotes the k-th affine invariant moment of the m-th enclosed region of the image to be registered and inv_k(n) denotes the k-th affine invariant moment of the n-th enclosed region of the pre-transform image.
(5) In step 6, the following matching principle is used to match the enclosed regions of the image to be registered with those of the pre-transform image:
bi-directional matching is adopted; the m-th enclosed region of the image to be registered and the n-th enclosed region of the pre-transform image are declared a match if and only if the Euclidean distance from the m-th enclosed region of the image to be registered to the n-th enclosed region of the pre-transform image is the minimum over all enclosed regions of the pre-transform image, and at the same time the Euclidean distance from the n-th enclosed region of the pre-transform image to the m-th enclosed region of the image to be registered is the minimum over all enclosed regions of the image to be registered.
Compared with existing feature-point-based image registration techniques, the present invention has the following advantages: because the contour information of enclosed image regions is used, the influence of noise on the extraction of individual feature points is effectively avoided and feature extraction is more stable; because the number of enclosed regions obtained is far smaller than the number of individual feature points, the computational load of image registration is greatly reduced and real-time registration becomes feasible; because affine invariant moments of the enclosed regions are constructed, a statistical description of the region features is obtained and the registration accuracy is improved; and because the transformation parameters are solved by least squares, the registration accuracy reaches the sub-pixel level.
Brief description of the drawings
In order to explain the technical solutions of the embodiments of the present invention or of the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of the SAR image registration provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of the SAR imaging geometry model provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of closed-contour extraction provided by an embodiment of the present invention;
Fig. 4 shows the reference image, the pre-transform image and the image to be registered provided by an embodiment of the present invention;
Fig. 5 is a schematic diagram of the image registration result provided by an embodiment of the present invention;
Fig. 6 is a schematic diagram of the image registration result obtained with the SIFT algorithm after the pre-transform, provided by an embodiment of the present invention;
Fig. 7 is a schematic diagram of the image registration result obtained with the SIFT algorithm without the pre-transform, provided by an embodiment of the present invention.
Embodiment
The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
An embodiment of the present invention provides an SAR image sequence registration method based on motion platform parameters. As shown in Fig. 1, the registration method comprises the following steps:
Step 1: obtain a reference image and an image to be registered.
Step 2: obtain motion platform parameters, and obtain the pre-transform image of the reference image according to the motion platform parameters.
The transformation parameters from the reference image to the pre-transform image are determined by the motion platform parameters.
Specifically, as shown in Fig. 2:
(2a) From the pixel coordinates (i, j) of each feature point extracted from the reference image, determine the coordinates x_0(i, j) and y_0(i, j) of the corresponding scattering point in the plane coordinate system X'O'Y', where (i, j) denotes the pixel in the i-th azimuth row and the j-th range column.
SAR determines the range position of a target from the echo time delay. In the SAR slant-range image, the slant range corresponding to the j-th range pixel is:
where R_min is the distance from the radar antenna center to the nearest edge of the swath and f_s is the A/D sampling frequency. The azimuth position of the target is determined from its Doppler characteristics; the Doppler frequency f_d of the target is:
where θ is the angle between the radar antenna line of sight and the platform heading. The position of the target relative to the platform can then be determined from the slant range and the Doppler frequency. For the j-th point of the i-th azimuth scan line of the slant-range image, the following range-Doppler equations hold:
where (X_p, Y_p, Z_p) are the coordinates of the ground scattering point corresponding to pixel (i, j) of the SAR image, (X_s, Y_s, Z_s) are the coordinates of the synthetic aperture center, R_{i,j} is the slant range of pixel (i, j), and V_sx, V_sy and V_sz are the components of the platform velocity V_s along the x-, y- and z-axes. Solving this system of equations gives the coordinates of the scattering point.
Assume that during imaging the platform moves in a straight line at constant velocity and neglect the topographic relief of the scene. Take the platform position at the start of imaging as the origin, the platform heading as the Y-axis and the upward vertical as the Z-axis, and determine the X-axis by the left-hand rule; then V_sx ≈ 0, V_sz ≈ 0, X_s = 0 and ΔZ = -h. Under these assumptions, the coordinates of the ground scattering point corresponding to pixel (i, j) in the platform coordinate system are obtained.
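Since the explicit expressions above are not reproduced in this text, the sketch below only illustrates how the two range-Doppler equations can be solved under the stated simplifications (level, uniform motion along the Y-axis). The slant-range sampling relation and the radar wavelength used here are assumptions introduced for illustration, not values taken from the patent.

```python
import numpy as np

C = 3.0e8  # speed of light in m/s (value given in the experimental conditions)

def slant_range(j, r_min, f_s):
    """Slant range of range bin j, assuming the standard sampling relation
    R_j = R_min + j * c / (2 * f_s); the patent's own formula 1) is not
    reproduced in this text, so this exact form is an assumption."""
    return r_min + j * C / (2.0 * f_s)

def ground_coordinates(r, f_d, h, v, wavelength, y_s=0.0, right_looking=True):
    """Ground coordinates (X_p, Y_p) of a scatterer from slant range r and
    Doppler frequency f_d, for a level platform at height h moving along Y
    with speed v (V_sx ~ 0, V_sz ~ 0, X_s = 0), i.e. the simplified geometry
    of step (2a). The wavelength is an assumed extra input."""
    dy = f_d * wavelength * r / (2.0 * v)   # along-track offset from the Doppler equation
    cross_sq = r**2 - h**2 - dy**2          # squared cross-track ground range
    x = np.sqrt(max(cross_sq, 0.0))         # positive root: right-looking geometry assumed
    if not right_looking:
        x = -x
    return x, y_s + dy
```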
(2b) Obtain the coordinates x_ob(i, j) and y_ob(i, j) of the ground scattering point in the plane coordinate system X_ob O' Y_ob. They are obtained by rotating the coordinates x_0(i, j) and y_0(i, j) of the scattering point in X'O'Y', where the rotation angle θ is determined by the imaging geometry.
(2c) Using formula 1) and formula 6), back-calculate the pixel indices I(i, j) and J(i, j) of the ground scattering point on the imaging plane.
(2d) Using the pixel indices obtained in (2c), resample the image at I(i, j) and J(i, j) to obtain the pre-transform image.
Bilinear interpolation is applied at the transformed pixel positions, as follows:
where u = I(i, j) - floor(I(i, j)) and floor(I(i, j)) denotes rounding I(i, j) down; v = J(i, j) - floor(J(i, j)); f(0, 0) denotes the gray value of the original image at the point [floor(I(i, j)), floor(J(i, j))]; and f(0, 1), f(1, 0) and f(1, 1) denote the gray values of the neighbouring points of f(0, 0).
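A minimal sketch of the bilinear resampling of step (2d); clamping out-of-range positions is an implementation choice not specified in the text.

```python
import numpy as np

def bilinear_sample(img, I, J):
    """Bilinearly interpolate img at fractional row positions I and column
    positions J (the back-calculated indices of step (2c))."""
    img = np.asarray(img, dtype=float)
    I = np.clip(np.asarray(I, dtype=float), 0, img.shape[0] - 1)
    J = np.clip(np.asarray(J, dtype=float), 0, img.shape[1] - 1)
    i0 = np.floor(I).astype(int)               # floor(I(i, j))
    j0 = np.floor(J).astype(int)               # floor(J(i, j))
    i1 = np.minimum(i0 + 1, img.shape[0] - 1)
    j1 = np.minimum(j0 + 1, img.shape[1] - 1)
    u = I - i0                                  # fractional row offset
    v = J - j0                                  # fractional column offset
    # Weighted sum of the four neighbours f(0,0), f(0,1), f(1,0), f(1,1).
    return ((1 - u) * (1 - v) * img[i0, j0] + (1 - u) * v * img[i0, j1]
            + u * (1 - v) * img[i1, j0] + u * v * img[i1, j1])
```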
Step 3: extract the enclosed regions of the image to be registered whose pixel counts exceed the first pixel threshold and the enclosed regions of the pre-transform image whose pixel counts exceed the second pixel threshold.
Note that the first and second pixel thresholds are relative values; in particular, the first pixel threshold may be set equal to the second pixel threshold.
The specific implementation is as follows:
(3a) Image preprocessing.
Because the original image may contain abundant detail and have low contrast, it is preprocessed first: the image is filtered with a two-dimensional Gaussian filter, and the contrast of the filtered image is then enhanced.
(3b) Image binarization.
Image binarization sets to 1 the pixels whose gray value is greater than a certain threshold and to 0 the pixels whose gray value is less than that threshold. Its purpose is to remove the large amount of unnecessary detail in the image while retaining the salient contour features.
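A short sketch of steps (3a)-(3b); the Gaussian width sigma and the global threshold are illustrative defaults, since the text does not give concrete values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess_and_binarize(img, sigma=1.5, threshold=None):
    """2-D Gaussian filtering, contrast stretching and global thresholding."""
    smoothed = gaussian_filter(np.asarray(img, dtype=float), sigma=sigma)
    # Linear contrast stretch of the filtered image to [0, 1].
    smoothed = (smoothed - smoothed.min()) / (np.ptp(smoothed) + 1e-12)
    if threshold is None:
        threshold = smoothed.mean()  # placeholder rule; the patent does not give one
    return (smoothed > threshold).astype(np.uint8)
```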
(3c) Removal of useless image borders.
Useless image borders are the image edges and the boundary points near the image edges. Because the image to be registered and the pre-transform image differ in extent, their borders also differ, and these boundary points interfere with the extraction of useful closed contours; they are therefore removed.
(3d) Extraction of enclosed regions.
Referring to Fig. 3: in Fig. 3(a), the next position along the contour is chosen preferentially from the 4-neighbourhood of the current center pixel, and otherwise from its 8-neighbourhood; in Fig. 3(b), when the configuration shown on the left of Fig. 3(b) occurs, the center pixel is set to zero, giving the result shown on the right; in Fig. 3(c), when the configuration shown on the left of Fig. 3(c) occurs, the center pixel is likewise set to zero, giving the result shown on the right.
A starting position is selected and the contour is traced according to the rules of Fig. 3; after the direction is determined, the value of the current position is set to zero, and the operation is repeated with the next point as the center. When, at some step, the current position coincides with the starting position, the current region is considered an enclosed region.
(3e) geometric center of each enclosed region is calculated.
The geometric center of enclosed region can as the equivalent matched point of enclosed region, and its computing formula is as follows:
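As a simplified stand-in for the contour-tracing procedure of step (3d), the sketch below extracts the connected regions of the binary image whose pixel counts exceed the threshold and takes the mean of the member pixel coordinates as the geometric center of step (3e); since the patent's own center formula is not reproduced in this text, that definition is an assumption.

```python
import numpy as np
from scipy.ndimage import label

def extract_regions(binary, min_pixels):
    """Connected regions of a binary image with more than min_pixels pixels,
    together with their geometric centers (equivalent matching points)."""
    labels, num = label(binary)
    regions = []
    for k in range(1, num + 1):
        rows, cols = np.nonzero(labels == k)
        if rows.size > min_pixels:
            regions.append({
                "mask": labels == k,                   # pixels belonging to the region
                "center": (rows.mean(), cols.mean()),  # geometric center (row, column)
            })
    return regions
```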
Step 4: calculate the affine invariant moment features of each enclosed region of the image to be registered and of each enclosed region of the pre-transform image.
For a given enclosed region, the (p+q)-th order raw moment of the image is expressed as:
m_{pq} = \iint x^{p} y^{q} f(x, y)\,dx\,dy    11)
where f(x, y) is the gray value of the gray-level image at (x, y). The gray centroid of the image can be expressed as:
\bar{x} = m_{10}/m_{00}, \qquad \bar{y} = m_{01}/m_{00}
Then the (p+q)-th order central moment of the image is:
\mu_{pq} = \iint (x-\bar{x})^{p} (y-\bar{y})^{q} f(x, y)\,dx\,dy
The statistical characteristics of an enclosed region can then be described by the following six affine invariant moments:
Inv_1 = \frac{1}{\mu_{00}^{4}}\left(\mu_{20}\mu_{02}-\mu_{11}^{2}\right)
Inv_2 = \frac{1}{\mu_{00}^{6}}\left(\mu_{40}\mu_{04}-4\mu_{31}\mu_{13}+3\mu_{22}^{2}\right)
Inv_3 = \frac{1}{\mu_{00}^{7}}\left(\mu_{20}\mu_{21}\mu_{30}-\mu_{20}\mu_{12}^{2}-\mu_{11}\mu_{03}\mu_{30}+\mu_{11}\mu_{21}\mu_{12}+\mu_{02}\mu_{30}\mu_{12}-\mu_{02}\mu_{21}^{2}\right)
Inv_4 = \frac{1}{\mu_{00}^{10}}\left(\mu_{30}^{2}\mu_{03}^{2}-6\mu_{21}\mu_{12}\mu_{03}\mu_{30}+4\mu_{30}\mu_{12}^{3}+4\mu_{03}\mu_{21}^{3}-3\mu_{21}^{2}\mu_{12}^{2}\right)
Inv_5 = \frac{1}{\mu_{00}^{9}}\left(\mu_{40}\mu_{04}\mu_{22}+2\mu_{22}\mu_{13}\mu_{31}-\mu_{40}\mu_{13}^{2}-\mu_{04}\mu_{31}^{2}-\mu_{22}^{3}\right)
Inv_6 = \frac{1}{\mu_{00}^{11}}\left(\mu_{03}^{2}\mu_{20}^{3}-6\mu_{20}^{2}\mu_{02}\mu_{21}\mu_{03}+9\mu_{20}^{2}\mu_{02}\mu_{12}^{2}-6\mu_{20}^{2}\mu_{11}\mu_{12}\mu_{03}-6\mu_{02}^{2}\mu_{20}\mu_{30}\mu_{12}+9\mu_{02}^{2}\mu_{20}\mu_{21}^{2}+6\mu_{20}\mu_{02}\mu_{30}\mu_{11}\mu_{03}-18\mu_{20}\mu_{02}\mu_{21}\mu_{11}\mu_{12}+12\mu_{11}^{2}\mu_{20}\mu_{21}\mu_{03}+\mu_{30}^{2}\mu_{02}^{3}-6\mu_{02}^{2}\mu_{30}\mu_{11}\mu_{21}+12\mu_{11}^{2}\mu_{02}\mu_{12}\mu_{30}-8\mu_{11}^{3}\mu_{30}\mu_{03}\right)
In this way each enclosed region and its affine invariant moment features are obtained.
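The sketch below computes the central moments of a region and the six invariants exactly as printed in claim 4 of this text. Note that the first term of Inv_3 appears there as μ20·μ21·μ30, whereas the more common Flusser form uses μ20·μ21·μ03; the discrepancy may be a transcription artifact.

```python
import numpy as np

def central_moments(patch, max_order=4):
    """Central moments mu[p, q] (p + q <= max_order) of a gray-level patch
    (a 2-D array that is zero outside the enclosed region)."""
    patch = np.asarray(patch, dtype=float)
    rows, cols = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
    m00 = patch.sum()
    xb = (cols * patch).sum() / m00            # gray centroid, x taken along columns
    yb = (rows * patch).sum() / m00
    mu = {}
    for p in range(max_order + 1):
        for q in range(max_order + 1 - p):
            mu[p, q] = (((cols - xb) ** p) * ((rows - yb) ** q) * patch).sum()
    return mu

def affine_invariants(mu):
    """Six affine moment invariants, transcribed from claim 4."""
    u = mu[0, 0]
    inv1 = (mu[2, 0] * mu[0, 2] - mu[1, 1] ** 2) / u ** 4
    inv2 = (mu[4, 0] * mu[0, 4] - 4 * mu[3, 1] * mu[1, 3] + 3 * mu[2, 2] ** 2) / u ** 6
    inv3 = (mu[2, 0] * mu[2, 1] * mu[3, 0] - mu[2, 0] * mu[1, 2] ** 2
            - mu[1, 1] * mu[0, 3] * mu[3, 0] + mu[1, 1] * mu[2, 1] * mu[1, 2]
            + mu[0, 2] * mu[3, 0] * mu[1, 2] - mu[0, 2] * mu[2, 1] ** 2) / u ** 7
    inv4 = (mu[3, 0] ** 2 * mu[0, 3] ** 2 - 6 * mu[2, 1] * mu[1, 2] * mu[0, 3] * mu[3, 0]
            + 4 * mu[3, 0] * mu[1, 2] ** 3 + 4 * mu[0, 3] * mu[2, 1] ** 3
            - 3 * mu[2, 1] ** 2 * mu[1, 2] ** 2) / u ** 10
    inv5 = (mu[4, 0] * mu[0, 4] * mu[2, 2] + 2 * mu[2, 2] * mu[1, 3] * mu[3, 1]
            - mu[4, 0] * mu[1, 3] ** 2 - mu[0, 4] * mu[3, 1] ** 2 - mu[2, 2] ** 3) / u ** 9
    inv6 = (mu[0, 3] ** 2 * mu[2, 0] ** 3 - 6 * mu[2, 0] ** 2 * mu[0, 2] * mu[2, 1] * mu[0, 3]
            + 9 * mu[2, 0] ** 2 * mu[0, 2] * mu[1, 2] ** 2
            - 6 * mu[2, 0] ** 2 * mu[1, 1] * mu[1, 2] * mu[0, 3]
            - 6 * mu[0, 2] ** 2 * mu[2, 0] * mu[3, 0] * mu[1, 2]
            + 9 * mu[0, 2] ** 2 * mu[2, 0] * mu[2, 1] ** 2
            + 6 * mu[2, 0] * mu[0, 2] * mu[3, 0] * mu[1, 1] * mu[0, 3]
            - 18 * mu[2, 0] * mu[0, 2] * mu[2, 1] * mu[1, 1] * mu[1, 2]
            + 12 * mu[1, 1] ** 2 * mu[2, 0] * mu[2, 1] * mu[0, 3]
            + mu[3, 0] ** 2 * mu[0, 2] ** 3 - 6 * mu[0, 2] ** 2 * mu[3, 0] * mu[1, 1] * mu[2, 1]
            + 12 * mu[1, 1] ** 2 * mu[0, 2] * mu[1, 2] * mu[3, 0]
            - 8 * mu[1, 1] ** 3 * mu[3, 0] * mu[0, 3]) / u ** 11
    return np.array([inv1, inv2, inv3, inv4, inv5, inv6])
```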
Step 5: calculate the Euclidean distance and the angle between the affine invariant moment features of each enclosed region of the image to be registered and those of each enclosed region of the pre-transform image.
The Euclidean distance dis(m, n) between the m-th enclosed region of the image to be registered and the n-th enclosed region of the pre-transform image is:
\mathrm{dis}(m, n) = \sqrt{\sum_{k=1}^{6}\left(\mathrm{inv}_k(m)-\mathrm{inv}_k(n)\right)^{2}}
The angle cos θ between the m-th enclosed region of the image to be registered and the n-th enclosed region of the pre-transform image is:
\cos\theta = \frac{\left\langle \mathrm{inv}_1(m),\ \mathrm{inv}_2(n)\right\rangle}{\left|\mathrm{inv}_1(m)\right|\cdot\left|\mathrm{inv}_2(n)\right|}
where inv_k(m) denotes the k-th affine invariant moment of the m-th enclosed region of the image to be registered, and inv_k(n) denotes the k-th affine invariant moment of the n-th enclosed region of the pre-transform image.
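As an illustration, the two measures can be computed by treating the six invariants of each region as a feature vector; the square root in the distance follows the standard Euclidean definition.

```python
import numpy as np

def region_similarity(inv_m, inv_n):
    """Euclidean distance dis(m, n) and angle cosine cos(theta) between the
    6-dimensional affine-invariant-moment feature vectors of two regions."""
    inv_m = np.asarray(inv_m, dtype=float)
    inv_n = np.asarray(inv_n, dtype=float)
    dis = np.sqrt(np.sum((inv_m - inv_n) ** 2))
    cos_theta = np.dot(inv_m, inv_n) / (np.linalg.norm(inv_m) * np.linalg.norm(inv_n))
    return dis, cos_theta
```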
Step 6: according to the Euclidean distances and angles between the affine invariant moment features of the enclosed regions of the image to be registered and those of the pre-transform image, match the enclosed regions of the two images and obtain all mutually matched enclosed regions.
Ideally, if two enclosed regions match perfectly, the distance between them is dis(m, n) = 0 and the cosine of the angle between them is cos θ = 1. In actual matching, these two quantities are used as similarity measures, and candidate matching regions must satisfy the following principles:
because the enclosed regions of the pre-transform image and of the image to be registered are highly similar and no strong deformation exists between them, the numbers of edge points making up two matched regions must not differ too much;
because a given enclosed region may be absent from the image to be registered, the distance between an enclosed region of the pre-transform image and an enclosed region of the image to be registered must be below a certain threshold and the cosine of the angle between them must be sufficiently close to 1;
bi-directional matching is adopted: the m-th enclosed region of the image to be registered and the n-th enclosed region of the pre-transform image are declared a match if and only if the Euclidean distance from the m-th enclosed region of the image to be registered to the n-th enclosed region of the pre-transform image is the minimum over all enclosed regions of the pre-transform image, and at the same time the Euclidean distance from the n-th enclosed region of the pre-transform image to the m-th enclosed region of the image to be registered is the minimum over all enclosed regions of the image to be registered.
Step 7: according to all the mutually matched enclosed regions, calculate the transformation parameters from the image to be registered to the pre-transform image.
Step 8: from the transformation parameters from the reference image to the pre-transform image and the transformation parameters from the image to be registered to the pre-transform image, obtain the transformation parameters from the image to be registered to the reference image.
Step 9: register the image to be registered according to the transformation parameters from the image to be registered to the reference image.
The specific implementation is as follows:
Let (x_0, y_0) be the position of a ground point in the reference image, (x_1, y_1) its position in the image to be registered, and (x_2, y_2) its position after the pre-transform. Assume that the platform moves in a straight line at constant altitude and that the reference image and the image to be registered are acquired at different times; then an affine transformation relation exists between the reference image and the image to be registered, so that:
where the coefficients denote the pre-transform parameters of the point (x_0, y_0) of the reference image. Similarly, the transformation from the pre-transformed point to the image to be registered is:
where the coefficients denote the transformation parameters to the image to be registered. For the same point, the relation between its position in the reference image and its position in the image to be registered is then:
Since the pre-transform parameters of every point of the reference image are known, finding the transformation from the image to be registered to the reference image can be converted into finding the transformation from the image to be registered to the pre-transform image.
The affine transformation from the image to be registered to the pre-transform image is solved as follows.
According to the matching result of step 6, at least 3 pairs of matched points are needed to solve for the affine transformation parameters. If more than 3 matched pairs are available, the affine transformation from the image to be registered to the pre-transform image is solved by least squares.
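A minimal least-squares sketch of step 7, assuming the matched region centers are used as the point pairs; src holds points from the image to be registered and dst the corresponding points of the pre-transform image.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src points onto dst points.

    src, dst: (N, 2) arrays of matched points with N >= 3. Returns a 2 x 3
    matrix A such that dst ~ [x, y, 1] @ A.T for each source point (x, y)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    design = np.hstack([src, np.ones((src.shape[0], 1))])   # N x 3 design matrix
    # Solve design @ X = dst in the least-squares sense; X is 3 x 2.
    X, _, _, _ = np.linalg.lstsq(design, dst, rcond=None)
    return X.T
```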
The transformation from the image to be registered to the reference image is then obtained using formula 23).
The effect of the present invention is further illustrated by the following processing of simulated data:
1. Experimental environment and conditions
Experimental environment: the experiment uses a standard image of a given area and a deformed version of the same image.
Experimental conditions:
a) speed of light c = 3 × 10^8 m/s;
b) closest distance from the scene center to the flight path R_0 = 30 × 10^3 m;
c) platform height h = 10 × 10^3 m;
d) range sampling frequency f_s = 80 × 10^6 Hz;
e) pulse repetition frequency PRF = 100;
f) platform velocity v = 150 m/s;
g) squint angle of the image to be registered: 30°;
h) squint angle of the pre-transform image: 20°.
2. Experiment content and results
The SIFT-based image registration algorithm is a representative feature-based registration algorithm, so the SIFT algorithm and the algorithm of the present invention are each applied to the same group of images to verify the validity and real-time performance of the method of the present invention.
Experiment 1: the algorithm of the present invention is used to register the image to be registered with the pre-transform image; after the transformation parameters are obtained, the transformation parameters from the image to be registered to the reference image are calculated, and the feature extraction time, number of features, feature description time, matching time, number of correct matching pairs and matching accuracy are recorded.
Experiment 2: the SIFT algorithm is used to register the image to be registered with the pre-transform image; after the transformation parameters are obtained, the transformation parameters from the image to be registered to the reference image are calculated, and the feature extraction time, number of features, feature description time, matching time, number of correct matching pairs and matching accuracy are recorded.
Experiment 3: the SIFT algorithm is used to register the image to be registered directly with the reference image; the transformation parameters are obtained, and the feature extraction time, number of features, feature description time, matching time, number of correct matching pairs and matching accuracy are recorded.
Fig. 4 shows the reference image, the image to be registered and the pre-transform image used in the invention, where (a) is the reference image (size: 401 × 501), (b) is the image to be registered (size: 401 × 501) and (c) is the pre-transform image (size: 401 × 501).
Fig. 5 shows the matching result of the algorithm of the present invention; the left part is the reference image and the right part is the image to be registered. The centers of the enclosed image regions are marked and the matched point pairs are connected by line segments.
As can be seen from Fig. 5, the number of correct matching pairs obtained by the proposed method satisfies the condition for solving the transformation parameter matrix.
Fig. 6 shows the matching result of the SIFT algorithm (with pre-transform); the left part is the reference image and the right part is the image to be registered. The feature points of the two images are marked with circles and the matched point pairs are connected by line segments.
As can be seen from Fig. 6, the number of correct matching pairs obtained by applying SIFT after the image pre-transform satisfies the condition for solving the transformation parameter matrix.
Fig. 7 shows the matching result of the SIFT algorithm (without pre-transform); the left part is the reference image and the right part is the image to be registered. The feature points of the two images are marked with circles and the matched point pairs are connected by line segments.
As can be seen from Fig. 7, the number of correct matching pairs obtained by applying SIFT directly satisfies the condition for solving the transformation parameter matrix.
Table 1
As can be seen from Table 1, the number of enclosed regions extracted by the algorithm of the present invention is far smaller than the number of feature points extracted by the SIFT algorithm, and the time consumed by feature description and matching is far smaller than that of the SIFT algorithm; the matching efficiency and accuracy are both high.
The above is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that can readily occur to those skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the appended claims.

Claims (6)

1. An SAR image sequence registration method based on motion platform parameters, characterized in that the method comprises the following steps:
Step 1: obtain a reference image and an image to be registered;
Step 2: obtain motion platform parameters, and obtain a pre-transform image of the reference image according to the motion platform parameters, wherein the transformation parameters from the reference image to the pre-transform image are determined by the motion platform parameters;
Step 3: extract the enclosed regions of the image to be registered whose pixel counts exceed a first pixel threshold and the enclosed regions of the pre-transform image whose pixel counts exceed a second pixel threshold;
Step 4: calculate the affine invariant moment features of each enclosed region of the image to be registered and of each enclosed region of the pre-transform image;
Step 5: calculate the Euclidean distance and the angle between the affine invariant moment features of each enclosed region of the image to be registered and those of each enclosed region of the pre-transform image;
Step 6: according to the Euclidean distances and angles between the affine invariant moment features of the enclosed regions of the image to be registered and those of the pre-transform image, match the enclosed regions of the two images, obtaining all mutually matched enclosed regions of the image to be registered and the pre-transform image;
Step 7: according to all the mutually matched enclosed regions, calculate the transformation parameters from the image to be registered to the pre-transform image;
Step 8: from the transformation parameters from the reference image to the pre-transform image and the transformation parameters from the image to be registered to the pre-transform image, obtain the transformation parameters from the image to be registered to the reference image;
Step 9: register the image to be registered according to the transformation parameters from the image to be registered to the reference image.
2. The SAR image sequence registration method based on motion platform parameters according to claim 1, characterized in that the motion platform parameters in step 2 specifically comprise:
the closest distance R_0 from the scene center to the flight path; the platform flying height h; the range sampling frequency f_s; the pulse repetition frequency PRF; and the platform velocity v.
3. The SAR image sequence registration method based on motion platform parameters according to claim 1, characterized in that step 3 specifically comprises:
(3a) setting a first pixel threshold and a second pixel threshold;
(3b) according to the first pixel threshold, extracting the enclosed regions of the image to be registered whose pixel counts exceed the first pixel threshold;
(3c) according to the second pixel threshold, extracting the enclosed regions of the pre-transform image whose pixel counts exceed the second pixel threshold.
4. The SAR image sequence registration method based on motion platform parameters according to claim 1, characterized in that in step 4 the affine invariant moment features of an enclosed region are described by the following six affine invariant moments:
Inv_1 = \frac{1}{\mu_{00}^{4}}\left(\mu_{20}\mu_{02}-\mu_{11}^{2}\right)
Inv_2 = \frac{1}{\mu_{00}^{6}}\left(\mu_{40}\mu_{04}-4\mu_{31}\mu_{13}+3\mu_{22}^{2}\right)
Inv_3 = \frac{1}{\mu_{00}^{7}}\left(\mu_{20}\mu_{21}\mu_{30}-\mu_{20}\mu_{12}^{2}-\mu_{11}\mu_{03}\mu_{30}+\mu_{11}\mu_{21}\mu_{12}+\mu_{02}\mu_{30}\mu_{12}-\mu_{02}\mu_{21}^{2}\right)
Inv_4 = \frac{1}{\mu_{00}^{10}}\left(\mu_{30}^{2}\mu_{03}^{2}-6\mu_{21}\mu_{12}\mu_{03}\mu_{30}+4\mu_{30}\mu_{12}^{3}+4\mu_{03}\mu_{21}^{3}-3\mu_{21}^{2}\mu_{12}^{2}\right)
Inv_5 = \frac{1}{\mu_{00}^{9}}\left(\mu_{40}\mu_{04}\mu_{22}+2\mu_{22}\mu_{13}\mu_{31}-\mu_{40}\mu_{13}^{2}-\mu_{04}\mu_{31}^{2}-\mu_{22}^{3}\right)
Inv_6 = \frac{1}{\mu_{00}^{11}}\left(\mu_{03}^{2}\mu_{20}^{3}-6\mu_{20}^{2}\mu_{02}\mu_{21}\mu_{03}+9\mu_{20}^{2}\mu_{02}\mu_{12}^{2}-6\mu_{20}^{2}\mu_{11}\mu_{12}\mu_{03}-6\mu_{02}^{2}\mu_{20}\mu_{30}\mu_{12}+9\mu_{02}^{2}\mu_{20}\mu_{21}^{2}+6\mu_{20}\mu_{02}\mu_{30}\mu_{11}\mu_{03}-18\mu_{20}\mu_{02}\mu_{21}\mu_{11}\mu_{12}+12\mu_{11}^{2}\mu_{20}\mu_{21}\mu_{03}+\mu_{30}^{2}\mu_{02}^{3}-6\mu_{02}^{2}\mu_{30}\mu_{11}\mu_{21}+12\mu_{11}^{2}\mu_{02}\mu_{12}\mu_{30}-8\mu_{11}^{3}\mu_{30}\mu_{03}\right)
where μ_pq denotes the (p+q)-th order central moment of the enclosed region, and f(x, y) denotes the gray value of the gray-level image at (x, y).
5. The SAR image sequence registration method based on motion platform parameters according to claim 1, characterized in that in step 5
the Euclidean distance dis(m, n) between the m-th enclosed region of the image to be registered and the n-th enclosed region of the pre-transform image is:
\mathrm{dis}(m, n) = \sqrt{\sum_{k=1}^{6}\left(\mathrm{inv}_k(m)-\mathrm{inv}_k(n)\right)^{2}}
and the angle cos θ between the m-th enclosed region of the image to be registered and the n-th enclosed region of the pre-transform image is:
\cos\theta = \frac{\left\langle \mathrm{inv}_1(m),\ \mathrm{inv}_2(n)\right\rangle}{\left|\mathrm{inv}_1(m)\right|\cdot\left|\mathrm{inv}_2(n)\right|}
where inv_k(m) denotes the k-th affine invariant moment of the m-th enclosed region of the image to be registered, and inv_k(n) denotes the k-th affine invariant moment of the n-th enclosed region of the pre-transform image.
6. The SAR image sequence registration method based on motion platform parameters according to claim 1, characterized in that in step 6 the enclosed regions of the image to be registered and of the pre-transform image are matched according to the following matching principle:
bi-directional matching is adopted: the m-th enclosed region of the image to be registered and the n-th enclosed region of the pre-transform image are declared a match if and only if the Euclidean distance from the m-th enclosed region of the image to be registered to the n-th enclosed region of the pre-transform image is the minimum over all enclosed regions of the pre-transform image, and at the same time the Euclidean distance from the n-th enclosed region of the pre-transform image to the m-th enclosed region of the image to be registered is the minimum over all enclosed regions of the image to be registered.
CN201510738155.XA 2015-11-03 2015-11-03 SAR image sequence registration method based on motion platform parameters Active CN105551013B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510738155.XA CN105551013B (en) 2015-11-03 2015-11-03 SAR image sequence registration method based on motion platform parameters

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510738155.XA CN105551013B (en) 2015-11-03 2015-11-03 SAR image sequence registration method based on motion platform parameters

Publications (2)

Publication Number Publication Date
CN105551013A true CN105551013A (en) 2016-05-04
CN105551013B CN105551013B (en) 2018-09-25

Family

ID=55830189

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510738155.XA Active CN105551013B (en) SAR image sequence registration method based on motion platform parameters

Country Status (1)

Country Link
CN (1) CN105551013B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101126809A (en) * 2007-09-20 2008-02-20 西安电子科技大学 Method for interfering synthetic aperture radar interferometric phase estimation based on related weighing
CN101214851A (en) * 2008-01-10 2008-07-09 黄席樾 Intelligent all-weather actively safety early warning system and early warning method thereof for ship running
US20120194646A1 (en) * 2011-02-02 2012-08-02 National Tsing Hua University Method of Enhancing 3D Image Information Density
CN104851097A (en) * 2015-05-19 2015-08-19 西安电子科技大学 Multichannel SAR-GMTI method based on target shape and shadow assistance

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Morel J. M., Yu G. S.: "A New Framework for Fully Affine Invariant Image Comparison", IEEE Trans. Pattern Analysis and Machine Intelligence *
夏桂琴 (Xia Guiqin): "Research on registration methods for large-view-angle image sequences of WAS SAR-GMTI radar", China Excellent Master's Degree Theses Database, Information Science and Technology *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110073402A (en) * 2016-11-28 2019-07-30 Smr专利责任有限公司 For obtaining the vehicle imaging systems and method of anti-flashing super-resolution image
CN110073402B (en) * 2016-11-28 2023-05-12 Smr专利责任有限公司 Vehicle imaging system and method for obtaining anti-flicker super-resolution images
CN106772373A (en) * 2016-12-15 2017-05-31 西安电子科技大学 For the SAR imaging methods of any ground moving object

Also Published As

Publication number Publication date
CN105551013B (en) 2018-09-25

Similar Documents

Publication Publication Date Title
CN107067415B (en) A kind of object localization method based on images match
CN109241976B (en) Method for estimating oil spilling area based on image processing and laser ranging
CN104851097B (en) The multichannel SAR GMTI methods aided in based on target shape and shade
CN102865859B (en) Aviation sequence image position estimating method based on SURF (Speeded Up Robust Features)
CN105974412B A kind of target's feature-extraction method for synthetic aperture radar
CN104463877B (en) A kind of water front method for registering based on radar image Yu electronic chart information
CN103839265A (en) SAR image registration method based on SIFT and normalized mutual information
CN104318548A (en) Rapid image registration implementation method based on space sparsity and SIFT feature extraction
CN102778680B (en) Method for imaging uniformly accelerated motion rigid group targets based on parameterization
CN105809693A (en) SAR image registration method based on deep neural networks
CN103217674A (en) Method for reconstructing target three-dimensional scattering center of inverse synthetic aperture radar
CN103268616A (en) Multi-feature multi-sensor method for mobile robot to track moving body
CN104536009A (en) Laser infrared composite ground building recognition and navigation method
CN103345757A (en) Optical image and SAR image automatic registration method within multilevel multi-feature constraint
CN105427301B (en) Based on DC component than the extra large land clutter Scene Segmentation estimated
CN104036523A (en) Improved mean shift target tracking method based on surf features
CN102903109B (en) A kind of optical image and SAR image integration segmentation method for registering
CN101964060B (en) SAR variant target identification method based on local textural feature
CN105447867B (en) Spatial target posture method of estimation based on ISAR images
CN103714547A (en) Image registration method combined with edge regions and cross-correlation
CN103839262A (en) SAR image registration method based on straight lines and FFT
CN108802725A (en) A kind of shallow-layer penetrating radar synthetic aperture imaging method
CN105467373B (en) A kind of broadband is combined bistatic radar cone target physical size estimation method
CN103913166A (en) Star extraction method based on energy distribution
CN106127258A (en) A kind of target matching method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant