CN104268866A - Video sequence registering method based on combination of motion information and background information - Google Patents


Publication number
CN104268866A
CN104268866A (application CN201410482399.1A; granted as CN104268866B)
Authority
CN
China
Prior art keywords
point
video sequence
registration
subject
movement locus
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410482399.1A
Other languages
Chinese (zh)
Other versions
CN104268866B (en)
Inventor
张强
毕菲
相朋
王亚彬
王龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201410482399.1A priority Critical patent/CN104268866B/en
Publication of CN104268866A publication Critical patent/CN104268866A/en
Application granted
Publication of CN104268866B publication Critical patent/CN104268866B/en
Legal status: Active

Links

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 — Scenes; Scene-specific elements
    • G06V20/40 — Scenes; Scene-specific elements in video content
    • G06V20/48 — Matching video sequences
    • G06V2201/00 — Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 — Target detection

Abstract

The invention discloses a video sequence registration method based on the combination of motion information and background information, which mainly solves the prior-art problem that video sequences taken from different viewpoints cannot be registered accurately. The method comprises the steps of: (1) separating the background image and the moving targets in each of the two input video sequences; (2) obtaining feature point matches between the background images and computing the fundamental matrix between them; (3) selecting from the background images four matched point pairs corresponding to four spatial points that are not coplanar in three-dimensional space; (4) finding, for each motion-trajectory point, the intersection of its projection line in the other video sequence with the epipolar line; (5) matching the trajectory points to obtain the set of candidate corresponding time points; (6) fitting a timeline and recovering the time transformation parameters. The method recovers the time relationship between video sequences accurately, improves registration accuracy, and can be used to register video sequences with static backgrounds.

Description

Video sequence registration method based on the combination of motion information and background information
Technical field
The invention belongs to the technical field of image processing and further relates to a video sequence registration method whose goal is to calibrate video sequences taken from different viewing angles or at different times. It is applicable to the registration of video sequences with static backgrounds.
Background technology
Video sequence registration is a key area of image processing. It computes the spatio-temporal transformation parameters between similar videos of the same scene acquired by multiple sensors from different viewing angles or at different times, and then synchronizes these videos in time and calibrates them geometrically in space. In general, feature-based methods can be adopted, in particular methods that take moving-target trajectories as features. Such trajectory-based registration methods mainly consider the shape of the trajectory itself or the epipolar geometry between views, but often ignore the spatial relationship between the background features of the video sequences and the moving targets. As a result, the localization of trajectory points is not accurate enough, and corresponding matching trajectory points cannot be obtained accurately and effectively.
For example, the paper "Linear sequence-to-sequence alignment" by Padua F L C, Carceroni R L, et al. (IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 32(2): 304-320) discloses a video sequence registration method based on a timeline constraint. The method first extracts the background and uses background matching points to estimate the fundamental matrix between the background images; it then extracts the trajectories of the moving targets and uses the epipolar geometry between views to find candidate trajectory point pairs; finally, it fits the timeline with the RANSAC algorithm. The deficiency of this method is that it considers only the epipolar geometry between views, finding corresponding trajectory points as intersections of the trajectory with epipolar lines. The localization of trajectory points is therefore not accurate enough; moreover, when the motion trajectory is approximately a straight line coinciding with its epipolar line, many intersections between the epipolar line and the trajectory are found, producing a large number of mismatched trajectory point pairs and reducing registration accuracy.
The paper "An Invariant Representation for Matching Trajectories Across Uncalibrated Video Streams" by Nunziati W, Sclaroff S, and Del Bimbo (Springer Berlin Heidelberg, 2005, pp. 318-327) discloses a video sequence registration method based on a projection-invariant representation of motion trajectory points. The method first extracts the trajectories of the moving targets in the videos and locally approximates each trajectory by cubic-spline interpolation with least-squares fitting. Then, for each sampled point of the resulting trajectory curve, the projection-invariant cross-ratio formed by the point and four sampled points in its neighborhood is computed, yielding a projection-invariant descriptor of the trajectory itself. Finally, the statistical properties of the cross-ratios are used as a distance metric between the described trajectories to obtain matched trajectory segments. The deficiency of this method is that it considers only the information of the trajectory itself, describing each trajectory point by local features in its neighborhood. Therefore, when the scene contains multiple moving targets with similar trajectories, or when a trajectory contains many segments with similar local features, the resulting trajectory descriptors are not sufficiently distinctive; in these cases the registration error is large, or registration cannot be completed at all.
Summary of the invention
The object of the invention is to overcome the above shortcomings of the prior art by proposing a video sequence registration method based on the combination of motion information and background information, so as to obtain better-matched trajectory point pairs from the input video sequences, reduce registration error, and recover the temporal and spatial relationships between video sequences more accurately.
To achieve this object, the technical scheme of the present invention combines the spatial relationship between background features and moving targets: it uses the corresponding-point information of the background features and establishes a spatial coordinate system and epipolar geometry relations through spatial points that are not coplanar in three-dimensional space, thereby obtaining the time transformation parameters. The concrete steps are as follows:
(1) Input a reference video sequence l and a video sequence l′ to be registered;
(2) Separate the background image and the moving targets in each of the two input video sequences, obtaining a reference motion trajectory P and a motion trajectory P′ to be registered;
(3) Using feature point detection and matching, extract feature point matches (X_i, X′_i), 300 ≤ i ≤ 500, between the background images;
(4) From the background feature matches (X_i, X′_i), compute the fundamental matrix F between the background images;
(5) From the background feature matches (X_i, X′_i), select four matched point pairs (x_j, x′_j), j = 1, 2, 3, 4, whose corresponding spatial points are not coplanar in three-dimensional space:
5.1) Compute the homography H induced by three of the matched pairs among the background feature matches (X_i, X′_i);
5.2) Compute the transfer error Er of the remaining matched pair under the induced homography H;
5.3) Compare the transfer error Er with the set threshold E = 10; if Er > E, retain the matched pairs; otherwise, repeat steps 5.1) and 5.2) until Er exceeds the set threshold E;
(6) For the motion trajectory points (p, q), find the intersections (p(t), q(t)) of their projection lines in the other video sequence with the epipolar lines:
6.1) Obtain the projection line k of the reference trajectory point p in the video sequence to be registered;
6.2) Obtain the epipolar line l of the reference trajectory point p: l = Fp, where F is the fundamental matrix;
6.3) Obtain the intersection q(t) of the projection line of p in the video sequence to be registered with the epipolar line;
6.4) Obtain the intersection p(t) of the projection line of the to-be-registered trajectory point q in the reference video sequence with the epipolar line:
6.4.1) Obtain the projection line k′ of q in the reference video sequence;
6.4.2) Obtain the epipolar line l′ of q: l′ = F⁻¹q, where F⁻¹ is the inverse of the fundamental matrix;
6.4.3) Obtain the intersection p(t) of the projection line of q in the reference video sequence with the epipolar line;
(7) Obtain the set of candidate corresponding time points between the reference trajectory and the trajectory to be registered:
7.1) Compute the distance between the intersection q(t), of the projection line of each reference trajectory point p in the video sequence to be registered with the epipolar line, and every trajectory point q to be registered; select the trajectory points q(t_i) whose distance is below the set threshold e as candidate trajectory points of the video sequence to be registered, and record the time points t_i;
7.2) For each candidate trajectory point q(t_i), compute the distance between the intersection p(t) of its projection line in the reference video sequence with the epipolar line and the corresponding reference trajectory point p; select the trajectory points p(t_j) whose distance is below the set threshold e as candidate trajectory points of the reference video sequence, and record the time points t_j;
(8) Taking the time points t_i as ordinate and the time points t_j as abscissa, fit the timeline with the RANSAC algorithm and recover the time transformation parameters [α, t], where α is the slope of the timeline and t is its intercept on the t_i axis.
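Step (6) above rests on two elementary operations: forming the epipolar line l = Fp and intersecting it with the projection line. A minimal numpy sketch under the usual convention that points and lines are homogeneous 3-vectors (the helper names are ours, not from the patent):

```python
import numpy as np

def epipolar_line(F, p):
    """Epipolar line l = F p of homogeneous point p in the other view."""
    return F @ p

def line_intersection(l1, l2):
    """Intersection of two homogeneous image lines (their cross product)."""
    x = np.cross(l1, l2)
    if abs(x[2]) < 1e-12:
        return None          # lines are parallel, no finite intersection
    return x / x[2]          # dehomogenize to (X, Y, 1)
```

Because image lines are homogeneous 3-vectors, the join of two points and the meet of two lines are both simple cross products, so the q(t) and p(t) intersections of steps 6.3) and 6.4.3) need no special casing beyond the parallel-line check.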
The present invention has the following advantages:
First, the invention exploits the description of trajectory points in a three-dimensional coordinate system and its projective invariance. When the motion trajectory is approximately a straight line coinciding with its epipolar line, the invention can still accurately find, in the trajectory to be registered, the point matching each reference trajectory point, remedying the prior-art defect of producing a large number of mismatched pairs in this situation; compared with the prior art, the invention therefore achieves a more accurate video sequence registration result.
Second, the invention adopts the strategy of describing trajectory points by the background features of the video sequences. When the scene contains multiple moving targets with similar trajectories, or when a trajectory contains many segments with similar local features, a unique description can still be established for each trajectory point, overcoming the prior-art defect that methods relying only on the trajectory itself cannot describe such points uniquely. The description of trajectory points is therefore more discriminative than in the prior art, and a more accurate video sequence registration result can be obtained.
Brief description of the drawings
Fig. 1 is the flow chart of the present invention;
Fig. 2 shows simulated registration results of the present method on the back-up video sequences;
Fig. 3 shows simulated registration results of the present method on another group of back-up video sequences;
Fig. 4 shows simulated registration results on the back-up video sequences obtained with the existing timeline-constraint algorithm;
Fig. 5 shows simulated registration results on the back-up video sequences obtained with the existing trajectory-point projection-invariant representation algorithm;
Fig. 6 shows simulated registration results of the present method on a synthetic scene in which the motion trajectory is approximately a straight line coinciding with its epipolar line;
Fig. 7 shows simulated registration results of the present method on another such synthetic scene in which the motion trajectory is approximately a straight line coinciding with its epipolar line;
Fig. 8 shows simulated registration results of the existing timeline-constraint algorithm on a synthetic scene in which the motion trajectory is approximately a straight line coinciding with its epipolar line;
Fig. 9 shows simulated registration results of the existing trajectory-point projection-invariant representation algorithm on a synthetic scene in which the motion trajectory is approximately a straight line coinciding with its epipolar line.
Detailed description
The present invention is described further below with reference to the accompanying drawings.
With reference to Fig. 1, the concrete steps of the present invention are as follows:
Step 1: Input a reference video sequence l and a video sequence l′ to be registered, separate the background image and the moving targets in each, and obtain the reference motion trajectory P and the motion trajectory P′ to be registered.
Step 2: Apply feature point detection and matching to the obtained background images and extract feature point matches (X_i, X′_i), 300 ≤ i ≤ 500.
Step 3: Use the background feature matches (X_i, X′_i) to compute the fundamental matrix F between the background images.
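The text does not spell out how F (the "basis matrix", i.e. the fundamental matrix) is estimated from the matches; the standard choice is the normalized eight-point algorithm. A sketch under that assumption, not necessarily the procedure used by the inventors:

```python
import numpy as np

def _normalize(pts):
    # Similarity transform T centering the points and scaling their
    # mean distance from the origin to sqrt(2) (Hartley normalization).
    c = pts.mean(axis=0)
    d = np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    s = np.sqrt(2) / d
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
    return T, np.c_[pts, np.ones(len(pts))] @ T.T

def fundamental_matrix(x, xp):
    """Normalized eight-point estimate of F with x'^T F x = 0, N >= 8 matches (N,2)."""
    T1, a = _normalize(np.asarray(x, float))
    T2, b = _normalize(np.asarray(xp, float))
    # One row of the linear system per match: kron(x', x) . vec(F) = 0.
    A = np.stack([np.kron(b[i], a[i]) for i in range(len(a))])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)
    # Enforce the rank-2 constraint of a fundamental matrix.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    F = T2.T @ F @ T1            # undo the normalization
    return F / np.linalg.norm(F)
```

On noise-free synthetic correspondences the epipolar residuals x′ᵀFx vanish to machine precision, which is a convenient sanity check before moving on to the trajectory steps.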
Step 4: From the background feature matches (X_i, X′_i), select the four matched point pairs (x_j, x′_j), j = 1, 2, 3, 4, that correspond to four spatial points not coplanar in three-dimensional space:
4.1) Choose four matched pairs (X_j, X′_j) arbitrarily from the background matches and compute the homography induced by three of them:
H = [x′_1, x′_2, x′_3][x_1, x_2, x_3]⁻¹,
where x_1, x_2, x_3 are three distinct background feature points of the reference video sequence and x′_1, x′_2, x′_3 are three distinct background feature points of the video sequence to be registered;
4.2) Compute the transfer error of the remaining matched pair (x_4, x′_4) under this induced homography:
Er = d(x′_4, Hx_4),
where d(x′_4, Hx_4) denotes the Euclidean distance from the point x′_4 to the point Hx_4, and x_4, x′_4 are background feature points of the reference video sequence and of the video sequence to be registered, respectively;
4.3) Compare the transfer error Er with the set threshold E = 10; if Er > E, retain the matched pairs; otherwise, repeat steps 4.1) and 4.2) until Er exceeds the set threshold E.
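Steps 4.1)–4.3) follow directly from the two formulas above. A sketch with homogeneous 3-vectors as points (function names are ours); the threshold E = 10 is the value given in the text:

```python
import numpy as np

def induced_homography(x1, x2, x3, xp1, xp2, xp3):
    """H = [x'1, x'2, x'3][x1, x2, x3]^-1, points stacked as matrix columns."""
    return np.column_stack([xp1, xp2, xp3]) @ np.linalg.inv(np.column_stack([x1, x2, x3]))

def transfer_error(H, x4, xp4):
    """Er = d(x'4, H x4): Euclidean distance after dehomogenizing both points."""
    y = H @ x4
    return float(np.linalg.norm(y[:2] / y[2] - xp4[:2] / xp4[2]))

def keep_quadruple(H, x4, xp4, E=10.0):
    """Step 4.3): keep the four pairs only when the fourth pair violates the
    three-pair homography by more than E, i.e. the four back-projections
    cannot lie on a common plane."""
    return transfer_error(H, x4, xp4) > E
```

The logic is that three correspondences always fit some homography; only a fourth point off the induced plane produces a large transfer error, which is exactly the non-coplanarity test the step describes.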
Step 5: For the motion trajectory points (p, q), find the intersections (p(t), q(t)) of their projection lines in the other video sequence with the epipolar lines.
5.1) Obtain the projection line k of the reference trajectory point p in the video sequence to be registered.
Let X_i, i = 1, 2, 3, 4, be the back-projection points in space of the four selected matched pairs. These four non-coplanar points establish a spatial coordinate system: with X_1 as the origin, the remaining three points give three non-coplanar axes (X_2 − X_1), (X_3 − X_1) and (X_4 − X_1). The reference trajectory point p can then be expressed under the established coordinate system as
p = x_1 + x(x_2 − x_1) + y(x_3 − x_1) + z(x_4 − x_1),
where x_1, x_2, x_3, x_4 are the four corresponding imaging points in the reference video sequence and (x, y, z) are the coordinates of the reference trajectory point p in the established spatial coordinate system;
5.2) From the relation in 5.1), derive the relations among the coordinates (x, y, z):
x = (Δ34/Δ23)z + Δ3/Δ23,  y = (Δ24/Δ32)z + Δ2/Δ32,
where
Δ34 = det[ x3x − x1x, x4x − x1x ; x3y − x1y, x4y − x1y ],
Δ23 = det[ x2x − x1x, x3x − x1x ; x2y − x1y, x3y − x1y ],
Δ32 = det[ x3x − x1x, x2x − x1x ; x3y − x1y, x2y − x1y ],
Δ2 = det[ px − x1x, x2x − x1x ; py − x1y, x2y − x1y ],
Δ3 = det[ px − x1x, x3x − x1x ; py − x1y, x3y − x1y ],
Δ24 = det[ x2x − x1x, x4x − x1x ; x2y − x1y, x4y − x1y ],
and (x1x, x1y), (x2x, x2y), (x3x, x3y), (x4x, x4y) and (px, py) are the homogeneous coordinates of the imaging points x_j and p in the reference video sequence;
5.3) Substitute the relations for (x, y, z) into
X = x′1x + x(x′2x − x′1x) + y(x′3x − x′1x) + z(x′4x − x′1x),
Y = x′1y + x(x′2y − x′1y) + y(x′3y − x′1y) + z(x′4y − x′1y),
where x′1x, x′2x, x′3x, x′4x, x′1y, x′2y, x′3y, x′4y are the homogeneous coordinates of the projection points in the video sequence to be registered, and (X, Y) are the coordinates along the projection line k in the image to be registered;
5.4) From 5.3), obtain the expression of the projection line k in the video sequence to be registered:
Y = (g3/g1)X + (g1·x′1y − g3·x′1x + g1·g4 − g2·g3)/g1,
where
g1 = (Δ34/Δ23)(x′2x − x′1x) + (Δ24/Δ32)(x′3x − x′1x) + (x′4x − x′1x),
g2 = (Δ3/Δ23)(x′2x − x′1x) + (Δ2/Δ32)(x′3x − x′1x),
g3 = (Δ34/Δ23)(x′2y − x′1y) + (Δ24/Δ32)(x′3y − x′1y) + (x′4y − x′1y),
g4 = (Δ3/Δ23)(x′2y − x′1y) + (Δ2/Δ32)(x′3y − x′1y);
5.5) Obtain the epipolar line l of the reference trajectory point p: l = Fp, where F is the fundamental matrix;
5.6) Obtain the intersection q(t) of the projection line of p in the video sequence to be registered with the epipolar line;
5.7) Obtain the intersection p(t) of the projection line of the to-be-registered trajectory point q in the reference video sequence with the epipolar line:
5.7.1) Obtain the projection line k′ of q in the reference video sequence;
5.7.2) Obtain the epipolar line l′ of q: l′ = F⁻¹q, where F⁻¹ is the inverse of the fundamental matrix;
5.7.3) Obtain the intersection p(t) of the projection line of q in the reference video sequence with the epipolar line.
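Substeps 5.2)–5.4) reduce to 2×2 determinants of point differences. A direct numpy transcription of those Δ and g formulas (helper names are ours; it assumes the four image points are in general position so that Δ23 ≠ 0 and g1 ≠ 0):

```python
import numpy as np

def projection_line(x, xp, p):
    """Slope and intercept of the projection line k of reference trajectory
    point p in the other view, from the four basis image points x[0..3] of
    the reference view and their matches xp[0..3] in the view to be registered."""
    x = np.asarray(x, float); xp = np.asarray(xp, float); p = np.asarray(p, float)
    x1 = x[0]
    # det[u - x1, v - x1] for 2-D points u, v: the Delta terms of step 5.2).
    D = lambda u, v: (u[0]-x1[0])*(v[1]-x1[1]) - (v[0]-x1[0])*(u[1]-x1[1])
    d34, d23, d24 = D(x[2], x[3]), D(x[1], x[2]), D(x[1], x[3])
    d2, d3, d32 = D(p, x[1]), D(p, x[2]), -D(x[1], x[2])
    e = xp - xp[0]                       # rows: x'_j - x'_1
    g1 = d34/d23*e[1, 0] + d24/d32*e[2, 0] + e[3, 0]
    g2 = d3/d23*e[1, 0] + d2/d32*e[2, 0]
    g3 = d34/d23*e[1, 1] + d24/d32*e[2, 1] + e[3, 1]
    g4 = d3/d23*e[1, 1] + d2/d32*e[2, 1]
    slope = g3 / g1
    intercept = (g1*xp[0, 1] - g3*xp[0, 0] + g1*g4 - g2*g3) / g1
    return slope, intercept
```

A convenient self-check: if p coincides with x_1 then Δ2 = Δ3 = 0, so g2 = g4 = 0 and the returned line must pass through x′_1; likewise, for p = x_2 the line must pass through x′_2.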
Step 6: Match the motion trajectory points and obtain the set of candidate corresponding time points.
6.1) Compute the distance between the intersection q(t), of the projection line of each reference trajectory point p in the video sequence to be registered with the epipolar line, and every trajectory point q to be registered; select the trajectory points q(t_i) whose distance is below the threshold e as candidate trajectory points of the video sequence to be registered, and record the time points t_i;
6.2) For each candidate trajectory point q(t_i), compute the distance between the intersection p(t) of its projection line in the reference video sequence with the epipolar line and the corresponding reference trajectory point p; select the trajectory points p(t_j) whose distance is below the threshold e as candidate trajectory points of the reference video sequence, and record the time points t_j.
The threshold e is computed as
e = (1/N) Σ_{i=1}^{N} [ d(x′_i, x^c_i) + d(x_i, x′^c_i) ],
where N is the number of manually selected matched pairs on the static background images, 60 ≤ N ≤ 80; x_i and x′_i are background feature points of the reference video sequence and of the video sequence to be registered, respectively; x^c_i and x′^c_i are the intersections of the projection lines of x_i and x′_i in the other video sequence with the corresponding epipolar lines; and d(·, ·) denotes Euclidean distance.
Step 7: Taking the to-be-registered trajectory time points t_i as ordinate and the reference trajectory time points t_j as abscissa, fit the timeline with the RANSAC algorithm and recover the time transformation parameters [α, t], where α is the slope of the timeline and t is its intercept on the t_i axis.
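Step 7's line fit can be sketched with a textbook two-point RANSAC; parameters such as the iteration count and inlier tolerance below are our choices for illustration, not values from the text:

```python
import random

def ransac_timeline(tj, ti, iters=500, tol=2.0, seed=0):
    """Fit t_i = alpha * t_j + t over candidate time-point pairs (t_j, t_i),
    robust to mismatched pairs: repeatedly fit a line through two random
    samples and keep the hypothesis with the most inliers."""
    rng = random.Random(seed)
    best_alpha, best_t, best_count = None, None, -1
    for _ in range(iters):
        a, b = rng.sample(range(len(tj)), 2)
        if tj[a] == tj[b]:
            continue                      # vertical line, skip the sample
        alpha = (ti[a] - ti[b]) / (tj[a] - tj[b])
        t0 = ti[a] - alpha * tj[a]
        count = sum(abs(alpha * x + t0 - y) < tol for x, y in zip(tj, ti))
        if count > best_count:
            best_alpha, best_t, best_count = alpha, t0, count
    return best_alpha, best_t
```

Because the candidate set of step 6 still contains mismatched time pairs, a least-squares fit would be skewed by them; the consensus step discards any hypothesis built on a mismatch, which is why the recovered [α, t] stays accurate.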
The effect of the present invention is further described below with reference to the simulation figures.
To verify the validity and correctness of the present invention, registration simulation experiments were carried out on video sequences of two different scenes. All simulations were implemented with Matlab R2010a under the Windows 7 operating system.
Simulation 1: Registration of the back-up videos taken from different viewing angles by the present method; the results are shown in Fig. 2, where:
Fig. 2(a) is the static background image of the reference video sequence,
Fig. 2(b) is the static background image of the video sequence to be registered. The white dot markers represent the extracted moving-target trajectories, and the white circle markers represent the four matched pairs corresponding to the four selected spatial points not coplanar in three-dimensional space;
Fig. 2(c) is the timeline recovered by the present method, with slope 1.0009 and intercept 5.2483 on the vertical axis.
Simulation 2: On the basis of Simulation 1, another group of four spatial points not coplanar in three-dimensional space was selected and the simulation was repeated with the present method; the results are shown in Fig. 3, where:
Fig. 3(a) is the static background image of the reference video sequence,
Fig. 3(b) is the static background image of the video sequence to be registered,
Fig. 3(c) is the timeline recovered by the present method, with slope 1.0028 and intercept 4.7441 on the vertical axis.
As can be seen from Figs. 2 and 3, as long as the back-projections of the four selected background matches are not coplanar in three-dimensional space, the trajectory points can be described accurately and more accurate time transformation parameters can be obtained.
Simulation 3: Registration of the back-up videos taken from different viewing angles with the existing timeline-constraint algorithm; the results are shown in Fig. 4. The slope of the timeline recovered by this algorithm is 1.0081, and its intercept on the vertical axis is 2.5264.
As can be seen from Fig. 4, the timeline recovered by the timeline-constraint algorithm has a relatively large error, because the algorithm considers only the epipolar geometry between views, so the localization of trajectory points is not accurate enough.
Simulation 4: Registration of the back-up videos taken from different viewing angles with the existing trajectory-point projection-invariant representation algorithm; the results are shown in Fig. 5. The slope of the timeline recovered by this algorithm is 1.0037, and its intercept on the vertical axis is 4.2787.
As can be seen from Fig. 5, although the timeline recovered by the projection-invariant representation algorithm has a small error, the obtained trajectory point pairs contain a large number of mismatches, because the trajectory contains some segments with similar local features, so the description of the trajectory points is not sufficiently distinctive.
Simulation 5: Registration, by the present method, of synthetic-scene video sequences in which the motion trajectory is approximately a straight line coinciding with its epipolar line; the results are shown in Fig. 6, where:
Fig. 6(a) is the static background image of the reference video sequence, and Fig. 6(b) is the static background image of the video sequence to be registered. The white dot markers represent the extracted moving-target trajectories, the white circle markers represent the four matched pairs corresponding to the four selected spatial points not coplanar in three-dimensional space, and the black line markers represent the epipolar line of the first trajectory point;
Fig. 6(c) is the timeline recovered by the present method, with slope 1.0031 and intercept 14.9148 on the vertical axis.
Simulation 6: On the basis of Simulation 5, another group of four spatial points not coplanar in three-dimensional space was selected and the simulation was repeated with the present method; the results are shown in Fig. 7, where:
Fig. 7(a) is the static background image of the reference video sequence,
Fig. 7(b) is the static background image of the video sequence to be registered,
Fig. 7(c) is the timeline recovered by the present method, with slope 1.0080 and intercept 14.8106 on the vertical axis.
As can be seen from Figs. 6 and 7, the background features extracted by the present method allow a unique description to be established for each trajectory point, and more accurate time transformation parameters can be obtained.
Simulation 7: Registration, with the existing timeline-constraint algorithm, of the synthetic-scene video sequences in which the motion trajectory is approximately a straight line coinciding with its epipolar line; the results are shown in Fig. 8. The slope of the timeline recovered by this algorithm is 1.0352, and its intercept on the vertical axis is 12.7555.
Simulation 8: Registration, with the existing trajectory-point projection-invariant representation algorithm, of the same synthetic-scene video sequences; the results are shown in Fig. 9. The slope of the timeline recovered by this algorithm is 1.0362, and its intercept on the vertical axis is 14.2077.
As can be seen from Figs. 8 and 9, when the motion trajectory is approximately a straight line coinciding with its epipolar line, the trajectory point pairs obtained by both algorithms contain a large number of mismatches, because the trajectory has too many intersections with its epipolar line and contains many segments with similar local features, which reduces registration accuracy. The algorithm proposed by the present invention can still describe the trajectory points uniquely in this case, eliminating a large number of mismatched trajectory point pairs and yielding more accurate time transformation parameters.

Claims (5)

1., based on the video sequence method for registering that movable information combines with background information, comprise the steps:
(1) reference video sequence l and video sequence l ' subject to registration is inputted respectively;
(2) being respectively separated of background image and moving target is carried out to two input video sequence, obtain with reference to movement locus P and movement locus P ' subject to registration;
(3) adopt feature point detection and the method for mating, the Feature Points Matching of background extraction image is to (X i, X ' i), 300≤i≤500;
(4) utilize the Feature Points Matching of the background image obtained to (X i, X ' i), calculate the basis matrix F between background image;
(5) from the background image Feature Points Matching obtained to (X i, X ' i) in, choose four groups of matching double points (x not coplanar in three dimensions j, x ' j), j=1,2,3,4:
5.1) background image Feature Points Matching is calculated to (X i, X ' i) in the induction of three groups of matching double points singly answer H;
5.2) the Transfer Error Er of residue one group of matching double points under this homography induced H is calculated;
5.3) above-mentioned Transfer Error Er and setting threshold value E=10 are compared, if Er > is E, retain matching double points, otherwise, repeat step 5.1) and step 5.2), until meet the threshold value E that Er is greater than setting;
(6) projection line of point (p, q) in another video sequence in movement locus and the intersection point (p (t), q (t)) to polar curve is obtained respectively:
6.1) obtain with reference to the projection line k of some p in video sequence subject to registration in movement locus;
6.2) obtain with reference to the some p in movement locus to polar curve l:
L=Fp, wherein, F is basis matrix;
6.3) obtain with reference to the projection line of some p in video sequence subject to registration in movement locus and intersection point q (t) to polar curve;
6.4) projection line of some q in reference video sequence in movement locus subject to registration and intersection point p (t) to polar curve is obtained:
6.4.1) the projection line k ' of some q in reference video sequence in video motion track subject to registration is obtained;
6.4.2) obtain the some q in movement locus subject to registration to polar curve l ':
l' = F^T q, where F^T is the transpose of the fundamental matrix F;
6.4.3) Obtain the intersection p(t) of the projection line of point q in the reference video sequence with the epipolar line;
(7) Obtain the set of candidate temporally corresponding points on the reference motion trajectory and the motion trajectory to be registered:
7.1) Compute the distance between the intersection q(t), formed by the projection line of each reference trajectory point p in the video sequence to be registered and the epipolar line, and every trajectory point q to be registered; select the trajectory points q(t_i) whose distance is below the preset threshold e as candidate trajectory points of the video sequence to be registered, and record the trajectory time points t_i;
7.2) For each candidate trajectory point q(t_i), compute the distance between the intersection p(t), formed by its projection line in the reference video sequence and the epipolar line, and the corresponding reference trajectory point p; select the trajectory points p(t_j) whose distance is below the preset threshold e as candidate trajectory points of the reference video sequence, and record the reference trajectory time points t_j;
(8) Taking the trajectory time points t_i to be registered as the ordinate and the reference trajectory time points t_j as the abscissa, fit the timeline with the RANSAC algorithm and recover the time-varying parameters [α, t], where α is the slope of the timeline and t is its intercept on the t_i axis.
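Step (8) above fits a linear timeline t_i = α·t_j + t to the candidate time-point pairs with RANSAC. A minimal sketch of that fit follows; the function name, iteration count, and inlier tolerance are assumptions for illustration, not values from the patent:

```python
import numpy as np

def fit_timeline_ransac(tj, ti, iters=500, tol=0.5, seed=0):
    """RANSAC fit of the timeline t_i = alpha * t_j + t over candidate
    time-point pairs (t_j, t_i); returns (alpha, t)."""
    tj, ti = np.asarray(tj, float), np.asarray(ti, float)
    rng = np.random.default_rng(seed)
    best, best_inliers = (1.0, 0.0), -1
    for _ in range(iters):
        a, b = rng.choice(len(tj), size=2, replace=False)
        if tj[a] == tj[b]:
            continue  # degenerate sample, cannot define a slope
        alpha = (ti[a] - ti[b]) / (tj[a] - tj[b])
        t0 = ti[a] - alpha * tj[a]
        inliers = np.abs(ti - (alpha * tj + t0)) < tol
        if inliers.sum() > best_inliers:
            best_inliers, best = inliers.sum(), (alpha, t0)
    # least-squares refinement on the consensus set of the best model
    alpha, t0 = best
    inliers = np.abs(ti - (alpha * tj + t0)) < tol
    A = np.stack([tj[inliers], np.ones(int(inliers.sum()))], axis=1)
    alpha, t0 = np.linalg.lstsq(A, ti[inliers], rcond=None)[0]
    return float(alpha), float(t0)
```

Because many of the candidate pairs from step (7) are spurious, the two-point sampling plus consensus count is what makes the recovered slope α (frame-rate ratio) and offset t robust.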
2. The video sequence registration method combining motion information and background information according to claim 1, wherein the homography H induced by three pairs of matched points among the background-image feature point matches (X_i, X'_i) in step 5.1) is computed by the formula:
H = [x'_1, x'_2, x'_3][x_1, x_2, x_3]^(-1),
where x_1, x_2, x_3 are three different background-image feature points in the reference video sequence, and x'_1, x'_2, x'_3 are three different background-image feature points in the video sequence to be registered.
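The claim-2 formula builds H directly from three matched background points written as homogeneous column vectors. A minimal numpy sketch (the function name is an illustration; the three points must be non-collinear so the column matrix is invertible):

```python
import numpy as np

def induced_homography(x, xp):
    """H = [x'_1, x'_2, x'_3] [x_1, x_2, x_3]^-1 per claim 2: the columns are
    three matched background points in homogeneous coordinates."""
    A = np.column_stack([np.asarray(v, float) for v in x])    # [x_1 x_2 x_3]
    B = np.column_stack([np.asarray(v, float) for v in xp])   # [x'_1 x'_2 x'_3]
    return B @ np.linalg.inv(A)   # maps each x_k exactly onto x'_k
```

By construction H x_k = x'_k for the three chosen representatives; the result depends on the homogeneous scaling of each point, which is one reason step 5.3) validates H against a fourth, independent match.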
3. The video sequence registration method combining motion information and background information according to claim 1, wherein the transfer error of the remaining pair of matched points (x_4, x'_4) under the induced homography H in step 5.2) is computed by the formula:
Er = d(x'_4, Hx_4),
where d(x'_4, Hx_4) denotes the Euclidean distance from point x'_4 to point Hx_4, and x_4 and x'_4 are, respectively, a background-image feature point of the reference video sequence and a feature point of the background image of the video sequence to be registered.
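The claim-3 check is a one-liner once H is available: map x_4 through H, dehomogenize, and take the Euclidean distance to x'_4. A small sketch with an assumed function name:

```python
import numpy as np

def transfer_error(H, x4, x4p):
    """Er = d(x'_4, H x_4): Euclidean distance between x'_4 and the image of
    x_4 under the induced homography, both dehomogenized (claim 3)."""
    y = np.asarray(H, float) @ np.asarray(x4, float)
    y = y[:2] / y[2]                    # dehomogenize H x_4
    xp = np.asarray(x4p, float)
    xp = xp[:2] / xp[2]                 # dehomogenize x'_4
    return float(np.hypot(*(y - xp)))
```

With the patent's threshold E = 10, the four pairs are kept only when Er > E, i.e. when no single plane homography explains all four matches, which is exactly the non-coplanarity test of step (5).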
4. The video sequence registration method combining motion information and background information according to claim 1, wherein the projection line k, in the video sequence to be registered, of point p on the reference motion trajectory in step 6.1) is obtained as follows:
6.1a) Let the back-projections in space of the four obtained pairs of matched points be X_i, i = 1, 2, 3, 4. These four non-coplanar points establish a spatial coordinate frame: with X_1 as the origin, the remaining three points form three non-coplanar basis vectors (X_2 - X_1), (X_3 - X_1) and (X_4 - X_1);
6.1b) In the established frame, point p on the reference motion trajectory is expressed as:
p = x_1 + x(x_2 - x_1) + y(x_3 - x_1) + z(x_4 - x_1),
where x_1, x_2, x_3, x_4 are the four different imaging points in the reference video sequence and (x, y, z) are the coordinates of the reference trajectory point p in the established frame;
6.1c) Compute the relation satisfied by the coordinates (x, y, z):
p_x = x_1x + x(x_2x - x_1x) + y(x_3x - x_1x) + z(x_4x - x_1x),
p_y = x_1y + x(x_2y - x_1y) + y(x_3y - x_1y) + z(x_4y - x_1y),
where (x_1x, x_1y), (x_2x, x_2y), (x_3x, x_3y), (x_4x, x_4y) and (p_x, p_y) are the homogeneous coordinates of the imaging points x_1, ..., x_4 and of p in the reference video sequence;
6.1d) Substitute the relation for (x, y, z) into:
X = x'_1x + x(x'_2x - x'_1x) + y(x'_3x - x'_1x) + z(x'_4x - x'_1x),
Y = x'_1y + x(x'_2y - x'_1y) + y(x'_3y - x'_1y) + z(x'_4y - x'_1y),
where x'_1x, x'_2x, x'_3x, x'_4x, x'_1y, x'_2y, x'_3y, x'_4y are the homogeneous coordinates of the projection points in the sequence to be registered, and X, Y are the coordinates along the projection line k in the image to be registered;
6.1e) From the relation in 6.1d), obtain the expression of the projection line k in the video sequence to be registered.
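Claim 4 can be sketched as follows: the affine combination of 6.1b) yields two image equations in the three unknowns (x, y, z), so p corresponds to a one-parameter family of space points; transferring two members of that family through the matched basis points x'_1..x'_4 and joining their images gives the projection line k. This is an assumption-laden reading of the claim (the function name and the choice to parameterize by the third coordinate are mine):

```python
import numpy as np

def projection_line(p, x, xp):
    """Sketch of step 6.1: write p = x1 + a(x2-x1) + b(x3-x1) + c(x4-x1),
    solve for (a, b) at two values of c, transfer both solutions through the
    basis x'1..x'4, and return the homogeneous line through the two images."""
    p = np.asarray(p, float)
    x = [np.asarray(v, float) for v in x]      # x1..x4, 2D image points
    xp = [np.asarray(v, float) for v in xp]    # x'1..x'4, 2D image points
    M = np.column_stack([x[1] - x[0], x[2] - x[0]])   # 2x2 system in (a, b)
    d = x[3] - x[0]
    pts = []
    for c in (0.0, 1.0):                        # two members of the family
        a, b = np.linalg.solve(M, p - x[0] - c * d)
        q = xp[0] + a * (xp[1] - xp[0]) + b * (xp[2] - xp[0]) + c * (xp[3] - xp[0])
        pts.append(np.append(q, 1.0))           # homogeneous image point
    return np.cross(pts[0], pts[1])             # line through both points
```

If the four correspondences happen to be related by a single 2D affine map, the two transferred points coincide and the cross product vanishes, so the basis pairs must be chosen in general position.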
5. The video sequence registration method combining motion information and background information according to claim 1, wherein the threshold e in steps 7.1) and 7.2) is set by the formula:
e = (1 / 2N) Σ_{i=1..N} [ d(x'_i, q̂_i) + d(x_i, p̂_i) ],
where N, 60 ≤ N ≤ 80, denotes the number of manually selected pairs of matched points on the static background image; x_i and x'_i denote background-image feature points of the reference video sequence and of the video sequence to be registered, respectively; q̂_i and p̂_i denote the intersections of the projection line of point x_i (respectively x'_i) in the other video sequence with the corresponding epipolar line; d(x'_i, q̂_i) denotes the Euclidean distance from point x'_i to point q̂_i, and d(x_i, p̂_i) the Euclidean distance from point x_i to point p̂_i.
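Claim 5 derives e from the residuals of the hand-picked static-background matches themselves. The sketch below is a deliberate simplification of that idea: instead of the exact projection-line/epipolar-line intersections of step (6), it scores each pair by its symmetric point-to-epipolar-line distance under the fundamental matrix F (function name assumed):

```python
import numpy as np

def background_threshold(F, X, Xp):
    """Average symmetric epipolar residual over the N manually selected
    background matches (homogeneous 3-vectors), used as the threshold e."""
    F = np.asarray(F, float)
    total = 0.0
    for x, xp in zip(X, Xp):
        x, xp = np.asarray(x, float), np.asarray(xp, float)
        l = F @ x                 # epipolar line of x in the second image
        lp = F.T @ xp             # epipolar line of x' in the first image
        total += abs(xp @ l) / np.hypot(l[0], l[1])
        total += abs(x @ lp) / np.hypot(lp[0], lp[1])
    return total / (2 * len(X))
```

Tying e to the background residuals makes the candidate selection in steps 7.1) and 7.2) self-calibrating: a noisier fundamental matrix automatically loosens the acceptance radius.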
CN201410482399.1A 2014-09-19 2014-09-19 Video sequence registration method combining motion information and background information Active CN104268866B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410482399.1A CN104268866B (en) 2014-09-19 2014-09-19 The video sequence method for registering being combined with background information based on movable information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410482399.1A CN104268866B (en) 2014-09-19 2014-09-19 The video sequence method for registering being combined with background information based on movable information

Publications (2)

Publication Number Publication Date
CN104268866A true CN104268866A (en) 2015-01-07
CN104268866B CN104268866B (en) 2017-03-01

Family

ID=52160385

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410482399.1A Active CN104268866B (en) 2014-09-19 2014-09-19 The video sequence method for registering being combined with background information based on movable information

Country Status (1)

Country Link
CN (1) CN104268866B (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105654421A (en) * 2015-12-21 2016-06-08 Xidian University Projective transformation image matching method based on transform-invariant low-rank texture
CN106991690A (en) * 2017-04-01 2017-07-28 University of Electronic Science and Technology of China Video sequence synchronization method based on moving-target timing information
CN107133969A (en) * 2017-05-02 2017-09-05 Rocket Force University of Engineering, PLA Moving target detection method for mobile platforms based on background back-projection
CN107316008A (en) * 2017-06-09 2017-11-03 Xidian University Video synchronization method based on projection-invariant description
CN107886515A (en) * 2017-11-10 2018-04-06 Tsinghua University Image segmentation method and device
CN107949849A (en) * 2015-04-17 2018-04-20 Inscape Data, Inc. System and method for reducing data density in large data sets
CN108234819A (en) * 2018-01-30 2018-06-29 Xidian University Video synchronization method based on homography transformation
CN110288639A (en) * 2019-06-21 2019-09-27 Shenzhen Polytechnic Computer-image-assisted virtual stitching system
CN110544203A (en) * 2019-07-30 2019-12-06 East China Normal University Parallax image stitching method combining moving least squares and line constraints
CN111630559A (en) * 2017-10-27 2020-09-04 Safran Electronics & Defense Image restoration method
WO2021036275A1 (en) * 2019-08-29 2021-03-04 Huawei Technologies Co., Ltd. Multi-channel video synchronization method, system and device
CN112601029A (en) * 2020-11-25 2021-04-02 Shanghai Weisha Network Technology Co., Ltd. Video segmentation method, terminal and storage medium with known background prior information

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8185334B2 (en) * 2009-03-05 2012-05-22 Tektronix, Inc. Methods and systems for filtering a digital signal
CN101937565B (en) * 2010-09-16 2013-04-24 上海交通大学 Dynamic image registration method based on moving target track
CN103914847B (en) * 2014-04-10 2017-03-29 西安电子科技大学 Based on phase equalization and the SAR image registration method of SIFT

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
RODRIGO L. CARCERONI et al.: "Linear Sequence-to-Sequence Alignment", Computer Vision and Pattern Recognition *
YI Meng et al.: "Aerial video registration combining optimized gradient filtering and projective invariance", Optics and Precision Engineering *
YANG Ru: "Research and implementation of a multi-sensor video registration algorithm based on spatio-temporal phase congruency", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107949849A (en) * 2015-04-17 2018-04-20 Inscape Data, Inc. System and method for reducing data density in large data sets
CN107949849B (en) * 2015-04-17 2021-10-08 Inscape Data, Inc. System and method for reducing data density in large data sets
CN105654421B (en) * 2015-12-21 2019-03-26 Xidian University Projective transformation image matching method based on transform-invariant low-rank texture
CN105654421A (en) * 2015-12-21 2016-06-08 Xidian University Projective transformation image matching method based on transform-invariant low-rank texture
CN106991690A (en) * 2017-04-01 2017-07-28 University of Electronic Science and Technology of China Video sequence synchronization method based on moving-target timing information
CN107133969B (en) * 2017-05-02 2018-03-06 Rocket Force University of Engineering, PLA Moving target detection method for mobile platforms based on background back-projection
CN107133969A (en) * 2017-05-02 2017-09-05 Rocket Force University of Engineering, PLA Moving target detection method for mobile platforms based on background back-projection
CN107316008A (en) * 2017-06-09 2017-11-03 Xidian University Video synchronization method based on projection-invariant description
CN111630559A (en) * 2017-10-27 2020-09-04 Safran Electronics & Defense Image restoration method
CN107886515A (en) * 2017-11-10 2018-04-06 Tsinghua University Image segmentation method and device
CN107886515B (en) * 2017-11-10 2020-04-21 Tsinghua University Image segmentation method and device using optical flow field
CN108234819A (en) * 2018-01-30 2018-06-29 Xidian University Video synchronization method based on homography transformation
CN108234819B (en) * 2018-01-30 2019-08-13 Xidian University Video synchronization method based on homography transformation
CN110288639A (en) * 2019-06-21 2019-09-27 Shenzhen Polytechnic Computer-image-assisted virtual stitching system
CN110544203A (en) * 2019-07-30 2019-12-06 East China Normal University Parallax image stitching method combining moving least squares and line constraints
WO2021036275A1 (en) * 2019-08-29 2021-03-04 Huawei Technologies Co., Ltd. Multi-channel video synchronization method, system and device
CN112601029A (en) * 2020-11-25 2021-04-02 Shanghai Weisha Network Technology Co., Ltd. Video segmentation method, terminal and storage medium with known background prior information

Also Published As

Publication number Publication date
CN104268866B (en) 2017-03-01

Similar Documents

Publication Publication Date Title
CN104268866B (en) Video sequence registration method combining motion information and background information
Banerjee et al. Online camera lidar fusion and object detection on hybrid data for autonomous driving
Heng et al. Leveraging image‐based localization for infrastructure‐based calibration of a multi‐camera rig
CN102646275B (en) The method of virtual three-dimensional superposition is realized by tracking and location algorithm
Kaess et al. Probabilistic structure matching for visual SLAM with a multi-camera rig
CN104167016B (en) A kind of three-dimensional motion method for reconstructing based on RGB color and depth image
CN111462200A (en) Cross-video pedestrian positioning and tracking method, system and equipment
CN102609941A (en) Three-dimensional registering method based on ToF (Time-of-Flight) depth camera
CN102750537B (en) Automatic registering method of high accuracy images
Wang et al. CubemapSLAM: A piecewise-pinhole monocular fisheye SLAM system
CN103337094A (en) Method for realizing three-dimensional reconstruction of movement by using binocular camera
CN106803270A Multi-keyframe collaborative ground target localization method based on monocular SLAM for unmanned aerial vehicle platforms
CN104359464A (en) Mobile robot positioning method based on stereoscopic vision
CN101650828B (en) Method for reducing random error of round object location in camera calibration
CN105654421B Projective transformation image matching method based on transform-invariant low-rank texture
CN102075686A (en) Robust real-time on-line camera tracking method
CN105513094A (en) Stereo vision tracking method and stereo vision tracking system based on 3D Delaunay triangulation
CN104616247A (en) Method for aerial photography map splicing based on super-pixels and SIFT
CN104484881A (en) Image capture-based Visual Map database construction method and indoor positioning method using database
CN104318552A (en) Convex hull projection graph matching based model registration method
CN104574443A (en) Method for cooperative tracking of moving object by means of panoramic cameras
Xiao et al. Monocular ORB SLAM based on initialization by marker pose estimation
CN104809720A (en) Small cross view field-based double-camera target associating method
CN105118061A (en) Method used for registering video stream into scene in three-dimensional geographic information space
Das et al. Extrinsic calibration and verification of multiple non-overlapping field of view lidar sensors

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant