CN102072706A - Multi-camera positioning and tracking method and system - Google Patents


Publication number
CN102072706A
Authority
CN
China
Prior art keywords: camera, coordinate, sigma, sin, dimensional
Legal status: Granted
Application number: CN 200910189525
Other languages: Chinese (zh)
Other versions: CN102072706B (en)
Inventors: 胡超 (Hu Chao), 刘伟 (Liu Wei), 贺庆 (He Qing)
Current Assignee: Shenzhen City Rui Kede Intelligent Technology Co Ltd
Original Assignee: Shenzhen Institute of Advanced Technology of CAS
Application filed by Shenzhen Institute of Advanced Technology of CAS
Priority to CN 200910189525 (granted as CN102072706B)
Publication of CN102072706A; application granted; publication of CN102072706B
Legal status: Active

Classifications

  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a multi-camera positioning and tracking method and system. The method comprises the following steps: continuously acquiring image signals of a measured object in real time from different orientations with a plurality of cameras; interpolating the two-dimensional (2D) image-plane coordinates of the measured object in each camera at successive times by an optimization algorithm; and computing the spatial three-dimensional (3D) coordinates of the measured object by applying a 3D coordinate positioning algorithm to the skew spatial lines formed from the 2D plane coordinates on all camera imaging planes. By exploiting the information redundancy of multi-camera imaging, the method and system largely overcome the systematic measurement errors caused by errors in the intrinsic and extrinsic camera parameters and by factors such as illumination. High positioning accuracy can be achieved, namely 5 to 20 mm; the measurement result is the 3D coordinate information of the target, realizing three-dimensional spatial coordinate measurement of the target.

Description

Multi-camera positioning and tracking method and system
Technical field
The present invention relates to the technical field of computer vision, and in particular to a multi-camera positioning and tracking method and system.
Background art
In the 1980s, Marr of the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology proposed a computational theory of vision in which a solid figure carrying depth information is generated from two two-dimensional images with parallax, establishing the theoretical basis for the development of computer stereo vision. Compared with other stereoscopic methods, such as holography, projected 3D display and lens-plate three-dimensional imaging, computer stereo vision imitates the way human eyes process a scene; it measures easily and reliably and has application value in many fields.
At present, the most popular computer stereo vision measuring technique is three-dimensional measurement based on binocular stereo vision. The binocular vision ranging principle is shown in Figure 1: the point P(x, y, z) represents the spatial point to be measured, P_l and P_r represent the projections of P on the two camera imaging planes, O_1 and O_2 represent the optical centres of the two cameras, and O-xyz represents the world coordinate system. The focal length f of camera 101 is identical to that of camera 102, the optical axes of the two cameras are parallel, and the two-dimensional imaging planes coincide in the coordinate systems of cameras 101 and 102. If the baseline distance between the two cameras is b, the depth Z of the spatial target can be expressed by the following equation:
$$Z = \frac{fb}{d}$$
where d is the disparity of the target between the two camera imaging planes, $d = x_l - x_r$.
In practical applications, errors exist in the manufacture of camera imaging devices and optical lenses, in the illumination of the test environment, and in the disparity computation; these accumulated errors strongly affect the measurement accuracy of a binocular vision system. Taking the MVC360SAM_GE60_STEREO binocular vision product of Microvision Inc. as an example, its measurement yields only the depth of the object, with an accuracy of about 150 mm. Existing measurement techniques based on binocular vision therefore suffer from low measurement accuracy, poor stability and susceptibility to external noise interference.
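The depth-from-disparity relation above can be sketched in a few lines of Python; the focal length, baseline and pixel coordinates below are illustrative values, not taken from the patent:

```python
# Minimal sketch of the binocular depth equation Z = f*b/d described above.
# All numeric values are illustrative, not from the patent.

def binocular_depth(f: float, b: float, x_left: float, x_right: float) -> float:
    """Depth Z of a point from its horizontal disparity d = x_left - x_right,
    for two parallel cameras with focal length f and baseline b."""
    d = x_left - x_right
    if d == 0:
        raise ValueError("zero disparity: point at infinity")
    return f * b / d

# A point imaged at x_l = 12.0 and x_r = 8.0 pixels, with f = 800 px, b = 0.1 m:
print(binocular_depth(800.0, 0.1, 12.0, 8.0))  # 20.0 (metres)
```

Note how a small error in the disparity d propagates directly into Z, which is exactly the accumulated-error problem the passage describes.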
Therefore, prior art has yet to be improved and developed.
Summary of the invention
To overcome the susceptibility of previous binocular vision measurement methods to external noise, the object of the present invention is to provide a multi-camera positioning and tracking method and system that is insensitive to external interference, achieves good positioning accuracy and has high measurement stability.
The technical scheme of the present invention is as follows:
A multi-camera positioning and tracking method, comprising the steps of:
A. a plurality of cameras continuously acquire image signals of the measured object in real time from different orientations;
B. each acquired image signal is rectified by image-distortion correction;
C. target recognition based on a colour-space model is applied to each rectified image signal to find the two-dimensional image-plane coordinates of the measured object in each camera at successive times;
D. the 2D image-plane coordinates of the measured object in each camera at successive times are interpolated by an optimization algorithm;
E. from the coordinates optimized in this way, the spatial three-dimensional coordinates of the measured object are computed by applying a 3D coordinate positioning algorithm to the skew spatial lines formed from the 2D plane coordinates on all camera imaging planes.
In the described method, step C specifically comprises:
C1. extracting, by averaging over repeated captures, the colour-component model [R, G, B] of the measured object from the rectified multi-channel image signals;
C2. comparing the colour components [R_p, G_p, B_p] of each pixel of the image to be identified with the extracted colour model [R, G, B] as follows:

$$|R-R_p|\le\sigma_r,\qquad |G-G_p|\le\sigma_g,\qquad |B-B_p|\le\sigma_b$$

where σ_r, σ_g and σ_b are the thresholds of the three colour components; if the above inequalities hold, the current pixel is considered to belong to the measured object;
C3. once all target pixels in the image have been found, the measured object is identified in that image, and its two-dimensional plane coordinate is represented by the centroid position (x, y).
In the described method, step E specifically comprises:
E1. obtaining the 2D plane coordinates (x_i, y_i) of the measured object on all camera imaging planes after 2D coordinate interpolation, where i = 1, 2, …, N and N is the number of cameras in the system;
E2. in the world coordinate system O_w-X_wY_wZ_w, defining the extrinsic parameters of each camera by the translation vector (t_x, t_y, t_z) and the rotation angles θ, ψ and φ, and constructing the rotation matrix R and translation vector t of the camera:
$$R=\begin{bmatrix}r_{11}&r_{12}&r_{13}\\r_{21}&r_{22}&r_{23}\\r_{31}&r_{32}&r_{33}\end{bmatrix},\qquad t=\begin{bmatrix}t_x\\t_y\\t_z\end{bmatrix}$$

where

$$\begin{aligned}
r_{11}&=\cos\psi\cos\phi, & r_{12}&=\sin\theta\sin\psi\cos\phi-\cos\theta\sin\phi, & r_{13}&=\cos\theta\sin\psi\cos\phi+\sin\theta\sin\phi,\\
r_{21}&=\cos\psi\sin\phi, & r_{22}&=\sin\theta\sin\psi\sin\phi+\cos\theta\cos\phi, & r_{23}&=\cos\theta\sin\psi\sin\phi-\sin\theta\cos\phi,\\
r_{31}&=-\sin\psi, & r_{32}&=\sin\theta\cos\psi, & r_{33}&=\cos\theta\cos\psi.
\end{aligned}$$
E3. defining the straight line through each camera's optical centre and the image-plane point (x, y) as the polar line; with camera focal length f, the polar-line equation [X_w, Y_w, Z_w] through the optical centre and the point (x, y) is expressed as:

$$\begin{bmatrix}X_w\\Y_w\\Z_w\end{bmatrix}=k\begin{bmatrix}x_0\\y_0\\z_0\end{bmatrix}+\begin{bmatrix}\Delta x\\\Delta y\\\Delta z\end{bmatrix},\qquad [x_0,y_0,z_0]^T=R^T[x,y,f]^T,\quad [\Delta x,\Delta y,\Delta z]^T=-R^T t,\quad k>0$$

where [x_0, y_0, z_0]^T is called the direction vector v of the polar line and [Δx, Δy, Δz]^T its translation vector r;
E4. obtaining the direction vector v_i and translation vector r_i of each polar line, and then the pairwise intersection points between the polar lines;
E5. obtaining the pairwise intersection points P_ij between all the skew lines, where 1 ≤ i < j ≤ N and N is the number of cameras; the spatial 3D coordinate P of the measured object is then expressed as:

$$P=\frac{1}{N(N-1)}\sum_{1\le i<j\le N}P_{ij}$$

E6. thus, from the 2D coordinates (x_i, y_i) of the target on the camera imaging planes, where i = 1, 2, …, N and N is the number of cameras in the system, the spatial 3D coordinate P(X, Y, Z) of the target is obtained.
In the described method, the optimization algorithm in step D adopts the least-squares method, the averaging method or the median method.
In the described method, when the least-squares method is adopted, step D specifically comprises the steps of:
D1. computing the 2D image-plane coordinates (X_mk, Y_mk) of the measured object in each camera at successive times, where m denotes camera m and k = 0, 1, 2, …, n indexes the plane coordinates of the target computed in camera m at successive times;
D2. for the coordinate x, letting Φ(k) = ak² + bk + c; substituting the n+1 x-coordinates gives:

X_m0 = a·0² + b·0 + c
X_m1 = a·1² + b·1 + c
X_m2 = a·2² + b·2 + c
……
X_mn = a·n² + b·n + c
D3. minimizing the sum of squared errors to obtain the parameters a, b, c:

$$\begin{bmatrix}a\\b\\c\end{bmatrix}=
\begin{bmatrix}\sum_{i=0}^{n}i^4&\sum_{i=0}^{n}i^3&\sum_{i=0}^{n}i^2\\
\sum_{i=0}^{n}i^3&\sum_{i=0}^{n}i^2&\sum_{i=0}^{n}i\\
\sum_{i=0}^{n}i^2&\sum_{i=0}^{n}i&\sum_{i=0}^{n}1\end{bmatrix}^{-1}
\begin{bmatrix}\sum_{i=0}^{n}X_{mi}\,i^2\\\sum_{i=0}^{n}X_{mi}\,i\\\sum_{i=0}^{n}X_{mi}\end{bmatrix}$$

D4. after a, b and c are obtained, taking a(n/2)² + b(n/2) + c as the optimized plane coordinate x;
D5. by the same method as D1 to D4 for the coordinate y, letting Ψ(k) = dk² + ek + f, obtaining the parameters d, e, f, and taking d(n/2)² + e(n/2) + f as the optimized plane coordinate y.
In the described method, when the averaging method is adopted, step D specifically comprises the steps of:
D11. computing the 2D image-plane coordinates (X_mk, Y_mk) of the measured object in each camera at successive times, where m denotes camera m and k = 0, 1, 2, …, n indexes the plane coordinates of the target computed in camera m at successive times;
D12. adopting the means $X_m=\frac{1}{n}\sum_{i=0}^{n}X_{mi}$ and $Y_m=\frac{1}{n}\sum_{i=0}^{n}Y_{mi}$ as the plane coordinate.
A multi-camera positioning and tracking system, comprising:
a plurality of image/video acquisition devices, for continuously acquiring image signals of the measured object in real time from different orientations and sending them to the image rectification module by wired or wireless means;
an image rectification module, connected to the plurality of image/video acquisition devices, for rectifying each acquired image signal by image-distortion correction;
a target recognition module, connected to the image rectification module, for applying target recognition based on a colour-space model to each rectified image signal to find the 2D image-plane coordinates of the measured object in each camera at successive times;
a 2D coordinate interpolation module, connected to the target recognition module, for interpolating the 2D image-plane coordinates of the measured object in each camera at successive times by an optimization algorithm; and
a three-dimensional positioning module, connected to the 2D coordinate interpolation module, for computing, from the optimized coordinates, the spatial 3D coordinates of the measured object by applying a 3D coordinate positioning algorithm to the skew spatial lines formed from the 2D plane coordinates on all camera imaging planes.
The multi-camera positioning and tracking method and system provided by the present invention exploit the information redundancy of multi-camera imaging to overcome, to the greatest extent, the systematic measurement errors caused by errors in the intrinsic and extrinsic camera parameters and by factors such as illumination; they achieve a positioning accuracy of 5 to 20 mm, and the measurement result is the 3D coordinate information of the target, realizing three-dimensional spatial coordinate measurement of the target.
Description of drawings
Fig. 1 is the binocular ranging principle diagram of the prior art;
Fig. 2 is a block diagram of the multi-camera measurement apparatus for positioning and tracking in an embodiment of the present invention;
Fig. 3 is a structural diagram of one embodiment of the invention;
Fig. 4 is the flow chart of the positioning and tracking method based on multi-camera technology provided by an embodiment of the invention;
Fig. 5 is the algorithm-processing flow chart of an embodiment of the invention;
Fig. 6 illustrates the camera extrinsic parameters in an embodiment of the invention;
Fig. 7 is a schematic diagram of the intersection method for multiple skew spatial lines provided by an embodiment of the invention;
Fig. 8 is a block diagram of the multi-camera positioning and tracking system provided by an embodiment of the invention.
Detailed description
Aiming at the positioning and tracking of spatial targets, the present invention proposes a positioning and tracking method and system based on multi-camera technology, realizing three-dimensional spatial coordinate measurement of targets.
To make the purpose, technical scheme and advantages of the present invention clearer and more explicit, the invention is described in further detail below with reference to the accompanying drawings and embodiments.
The multi-camera positioning and tracking method provided by the embodiment of the invention is mainly composed of a multi-camera measurement apparatus for positioning and tracking, two-dimensional coordinate measurement of the spatial target, and a 3D coordinate positioning algorithm. As shown in Figure 2, the multi-camera measurement apparatus comprises a plurality of image/video acquisition devices 110, a data processing terminal 120 and a display terminal 130.
The image/video acquisition devices are connected to the data processing terminal; they acquire images or video of the measured object and send the image and/or video information to the data processing terminal by wired or wireless means. The display terminal displays the processed object information.
In a preferred embodiment, as shown in Figure 3, each image/video acquisition device 110 may be an analog or digital CCD (or CMOS) video camera (camera for short), and the number N of cameras is determined by the user's needs; the data processing terminal is usually a PC, but may also be an embedded processing device; the display device is generally a monitor. As shown in Figure 3, this embodiment adopts four cameras 111, 112, 113 and 114 to acquire images or video signals of the measured object 900 from different orientations and send them to the data processing terminal 120 by wired or wireless means. After the processing described below, the final result is shown on the display terminal 130, which can display the 3D coordinate information of the spatial target as well as the images and video containing the target.
As shown in the algorithm flow chart of Figure 4, the processing system first completes the relevant initialization routines to guarantee smooth acquisition of the multi-channel image and video signals. The multi-camera positioning and tracking method of the invention is described in detail below with reference to Figures 4 and 5; the method comprises the following steps:
Step 201: a plurality of cameras continuously acquire image signals of the measured object in real time from different orientations.
This is the image acquisition of Figure 5. To acquire multiple signals in real time, the invention obtains image and video signals by means of hardware interrupts. After a single camera finishes acquiring one image, an interrupt signal triggers the corresponding image-processing routine, and the acquired digital image data are saved into the relevant data structures; image-denoising techniques are then applied to reduce the noise of the collected digital images.
Step 202: each acquired image signal is rectified by image-distortion correction.
This is the image rectification of Figure 5. Ordinary optical lens devices readily produce image distortion, caused by differences in the refraction of incident light at different parts of the optical lens. A distorted image cannot truly reflect the actual position of the spatial target on the camera imaging plane; therefore, if the collected images are not rectified, the positioning and tracking accuracy of the system will be affected.
Image-distortion correction technology is fairly mature and mainly comprises two parts: first, calibration of the geometric distortion parameters of the optical system; second, rectification of each image according to the calibrated distortion parameters. The concrete distortion correction is not repeated here.
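The patent leaves the concrete rectification method unspecified. Purely as an illustration, the following sketch applies the common first-order radial distortion model around the principal point; the principal point and the coefficients k1 and k2 are assumed values, not the patent's calibration:

```python
# Hedged sketch only: the patent does not specify a distortion model, so this
# uses the common radial model x_u = x_d * (1 + k1*r^2 + k2*r^4) about an
# assumed principal point (cx, cy). k1, k2 and (cx, cy) are made-up values.

def undistort_point(xd, yd, cx=320.0, cy=240.0, k1=-1e-7, k2=0.0):
    """Map a distorted pixel (xd, yd) to its corrected position (first order)."""
    x, y = xd - cx, yd - cy
    r2 = x * x + y * y                       # squared radius from the centre
    scale = 1.0 + k1 * r2 + k2 * r2 * r2     # radial correction factor
    return cx + x * scale, cy + y * scale

# The principal point itself is unchanged; off-centre pixels shift radially:
print(undistort_point(320.0, 240.0))  # (320.0, 240.0)
```

In a real system k1 and k2 would come from the calibration step the passage mentions.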
Step 203: target recognition based on a colour-space model is applied to each rectified image signal to find the two-dimensional image-plane coordinates of the measured object in each camera at successive times.
This is the target recognition of Figure 5. The invention adopts target recognition based on a colour-space model. First, the colour-component model [R, G, B] of the target object is extracted by averaging over repeated captures; then the colour components [R_p, G_p, B_p] of each pixel of the image to be identified are compared with the extracted model [R, G, B] as follows:

$$|R-R_p|\le\sigma_r,\qquad |G-G_p|\le\sigma_g,\qquad |B-B_p|\le\sigma_b$$

where σ_r, σ_g and σ_b are the thresholds of the three colour components. If the above inequalities hold, the current pixel is considered to belong to the target object. Once all target pixels in the image have been found, the target is identified in that image, and its two-dimensional plane coordinate is represented by the centroid position (x, y).
After the image-plane coordinate (x, y) of the target in a single camera has been obtained by the colour-space-model recognition of step 203, the method proceeds to step 204.
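The colour-model thresholding and centroid computation of step 203 can be sketched as follows; the image, colour model and thresholds are made-up illustrative values:

```python
import numpy as np

# Sketch of step 203: a pixel belongs to the target when each channel lies
# within a threshold of the reference model [R, G, B]; the target's 2D
# position is the centroid of the matched pixels. All values are illustrative.

def find_target(img: np.ndarray, model, sigma):
    """img: HxWx3 array; model: reference [R, G, B]; sigma: per-channel
    thresholds. Returns the centroid (x, y) of matching pixels, or None."""
    mask = np.all(np.abs(img.astype(int) - model) <= sigma, axis=2)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# A 4x4 image with a red 2x2 patch covering rows 1-2 and columns 1-2:
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1:3, 1:3] = [200, 30, 30]
print(find_target(img, model=[200, 30, 30], sigma=[20, 20, 20]))  # (1.5, 1.5)
```

The centroid is exactly the (x, y) representation the passage describes, and it remains well defined even when the target covers many pixels.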
Step 204: the 2D image-plane coordinates of the measured object in each camera at successive times are interpolated by an optimization algorithm.
This is the 2D coordinate interpolation of Figure 5. Because of illumination errors, colour-space-model errors and other influences, the 2D coordinate (x, y) of the target determined by the recognition algorithm of step 203 is neither accurate nor stable and usually varies considerably; if left unprocessed, it adversely affects the subsequent positioning and tracking algorithm. The invention therefore fits repeated measurements of (x, y) to overcome the error: an optimization algorithm interpolates the successive 2D plane coordinates (x, y) obtained by a single camera, realizing the optimization.
In the embodiment of the invention, the optimization algorithm may adopt the least-squares method, the averaging method or the median method; each is described in detail below.
1) Least-squares method: a mathematical optimization method that finds the best-fitting function for the data by minimizing the sum of squared errors.
After a single camera obtains images at successive times, the 2D plane coordinates (X_mk, Y_mk) of the target at those times are computed, written out as (X_m0, Y_m0), (X_m1, Y_m1), (X_m2, Y_m2), …, (X_mn, Y_mn), where m denotes camera m and k = 0, 1, 2, …, n indexes the plane coordinates of the target computed in camera m at successive times. Then:
For the coordinate x, let Φ(k) = ak² + bk + c; substituting the n+1 x-coordinates gives:

X_m0 = a·0² + b·0 + c
X_m1 = a·1² + b·1 + c
X_m2 = a·2² + b·2 + c
……
X_mn = a·n² + b·n + c
Minimizing the sum of squared errors yields the parameters a, b, c:

$$\begin{bmatrix}a\\b\\c\end{bmatrix}=
\begin{bmatrix}\sum_{i=0}^{n}i^4&\sum_{i=0}^{n}i^3&\sum_{i=0}^{n}i^2\\
\sum_{i=0}^{n}i^3&\sum_{i=0}^{n}i^2&\sum_{i=0}^{n}i\\
\sum_{i=0}^{n}i^2&\sum_{i=0}^{n}i&\sum_{i=0}^{n}1\end{bmatrix}^{-1}
\begin{bmatrix}\sum_{i=0}^{n}X_{mi}\,i^2\\\sum_{i=0}^{n}X_{mi}\,i\\\sum_{i=0}^{n}X_{mi}\end{bmatrix}$$

After a, b and c are obtained, a(n/2)² + b(n/2) + c is taken as the optimized plane coordinate x.
By the same method for the coordinate y, let Ψ(k) = dk² + ek + f; after the parameters d, e and f are obtained, d(n/2)² + e(n/2) + f is taken as the optimized plane coordinate y.
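The least-squares interpolation above (quadratic fit via the normal equations, evaluated at k = n/2) can be sketched as follows; the sample sequence is illustrative:

```python
import numpy as np

# Sketch of the step-204 least-squares interpolation: fit phi(k) = a*k^2 + b*k + c
# to the n+1 successive coordinate samples X_0..X_n via the normal equations
# given above, then return phi(n/2) as the smoothed value. Samples are illustrative.

def smooth_coordinate(samples: np.ndarray) -> float:
    n = len(samples) - 1
    k = np.arange(n + 1, dtype=float)
    # Normal-equation matrix [[sum k^4, sum k^3, sum k^2], ...] and right-hand side
    A = np.array([[np.sum(k**4), np.sum(k**3), np.sum(k**2)],
                  [np.sum(k**3), np.sum(k**2), np.sum(k)],
                  [np.sum(k**2), np.sum(k),    n + 1.0]])
    rhs = np.array([np.sum(samples * k**2), np.sum(samples * k), np.sum(samples)])
    a, b, c = np.linalg.solve(A, rhs)
    m = n / 2.0
    return a * m * m + b * m + c

# Samples lying exactly on k^2 are reproduced by the fit at k = n/2 = 2:
print(smooth_coordinate(np.array([0.0, 1.0, 4.0, 9.0, 16.0])))  # ≈ 4.0
```

The same routine applied to the y-samples gives the Ψ fit of step D5.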
2) Averaging method:
after obtaining the coordinates (X_m0, Y_m0), (X_m1, Y_m1), (X_m2, Y_m2), …, (X_mn, Y_mn), adopt the means

$$X_m=\frac{1}{n}\sum_{i=0}^{n}X_{mi},\qquad Y_m=\frac{1}{n}\sum_{i=0}^{n}Y_{mi}$$

as the optimized result.
3) Median method:
among X_m0, X_m1, X_m2, …, X_mn, select the median X_mm as the optimized 2D plane coordinate X;
among Y_m0, Y_m1, Y_m2, …, Y_mn, select the median Y_mm as the optimized 2D plane coordinate Y.
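The two simpler alternatives can be sketched in the same spirit; the sample values, including the outlier, are illustrative:

```python
# Sketch of the two simpler step-204 alternatives: the mean and the median of
# the successive coordinate measurements. Sample values are illustrative.

def mean_coordinate(samples):
    return sum(samples) / len(samples)

def median_coordinate(samples):
    ordered = sorted(samples)
    return ordered[len(ordered) // 2]  # middle value (upper middle if even count)

xs = [100.2, 99.8, 100.1, 130.0, 100.0]  # one outlier at 130.0
print(mean_coordinate(xs))    # ≈ 106.02, pulled up by the outlier
print(median_coordinate(xs))  # 100.1, robust to the outlier
```

The comparison shows why the median can be preferable when the recognition step occasionally produces a gross error, while the mean and least-squares fits suppress small zero-mean noise better.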
The denoising method for the imaging coordinates of a single camera is not limited to the least-squares, averaging or median methods; other methods may also be adopted. The final purpose of all these methods is to obtain accurate and stable 2D imaging coordinates of the target, affected by noise as little as possible.
As shown in Figure 5, once a single camera has acquired its signal and completed the steps of image acquisition, image rectification and target recognition, the 2D image coordinates of the target in a single image are obtained. The "2D coordinate interpolation" step in Figure 5 means that, in a single camera, the 2D image coordinates of the target are obtained from successive frames (16 successive 2D coordinates are adopted in this embodiment), and the least-squares method is finally used to compute the 2D coordinate of the target. When all cameras have finished the above 2D coordinate computation, the program begins the 3D coordinate computation of the spatial target. Tracking of the spatial target is implemented through continuous 3D coordinate computation and a conic-fitting algorithm.
After the image signals acquired by all cameras have passed through the 2D coordinate interpolation, the method proceeds to step 205.
Step 205: from the optimized coordinates, the spatial three-dimensional coordinates of the measured object are computed by applying a 3D coordinate positioning algorithm to the skew spatial lines formed from the 2D plane coordinates on all camera imaging planes.
This is the three-dimensional spatial positioning and tracking algorithm of Figure 5. Having obtained through the above steps the stable 2D plane coordinates (x_i, y_i) of the target on all camera imaging planes (i = 1, 2, …, N, where N is the number of cameras in the system), the 3D spatial coordinates of the target can be obtained as follows.
As shown in Figure 6, O_w-X_wY_wZ_w represents the world coordinate system, (t_x, t_y, t_z) represents the translation vector of the camera with respect to the world coordinate system, and θ, ψ and φ represent the rotation angles of the camera about the x, y and z axes with respect to the world coordinate system. That is, in the world coordinate system O_w-X_wY_wZ_w, the extrinsic parameters of the camera can be expressed by the translation vector (t_x, t_y, t_z) and the rotation angles θ, ψ and φ. The invention constructs the rotation matrix R and the translation vector t of the camera:
$$R=\begin{bmatrix}r_{11}&r_{12}&r_{13}\\r_{21}&r_{22}&r_{23}\\r_{31}&r_{32}&r_{33}\end{bmatrix},\qquad t=\begin{bmatrix}t_x\\t_y\\t_z\end{bmatrix}$$

where

$$\begin{aligned}
r_{11}&=\cos\psi\cos\phi, & r_{12}&=\sin\theta\sin\psi\cos\phi-\cos\theta\sin\phi, & r_{13}&=\cos\theta\sin\psi\cos\phi+\sin\theta\sin\phi,\\
r_{21}&=\cos\psi\sin\phi, & r_{22}&=\sin\theta\sin\psi\sin\phi+\cos\theta\cos\phi, & r_{23}&=\cos\theta\sin\psi\sin\phi-\sin\theta\cos\phi,\\
r_{31}&=-\sin\psi, & r_{32}&=\sin\theta\cos\psi, & r_{33}&=\cos\theta\cos\psi.
\end{aligned}$$
Define: the straight line passing through the camera optical centre and a point on the camera imaging plane is called a polar line.
Four skew spatial lines are taken as an example, as shown in Figure 7. The four parallelograms represent the imaging planes of four cameras, and O_C1, O_C2, O_C3 and O_C4 represent their respective optical centres. The straight lines L1, L2, L3 and L4 are the projection axes of the spatial target point in the four cameras; in theory these projection axes should intersect at one point in space, which is exactly the spatial image point. In the actual measurement process, however, because of various errors the four projection axes do not meet at one point but constitute the skew spatial lines shown in Figure 7. To obtain the position of the spatial image point, the pairwise intersection points of the lines are found, marked r1 through r12 in Figure 7, and the best spatial image point position is obtained by taking a weighted mean of the 3D coordinates of these intersection points, computed as follows.
Let the 2D plane coordinate of the target on a camera imaging plane, obtained in step 204, be denoted (x, y), and let the focal length of the camera be f; then the polar-line equation [X_w, Y_w, Z_w] through this camera's optical centre and the point (x, y) can be expressed as:
$$\begin{bmatrix}X_w\\Y_w\\Z_w\end{bmatrix}=k\begin{bmatrix}x_0\\y_0\\z_0\end{bmatrix}+\begin{bmatrix}\Delta x\\\Delta y\\\Delta z\end{bmatrix},\qquad [x_0,y_0,z_0]^T=R^T[x,y,f]^T,\quad [\Delta x,\Delta y,\Delta z]^T=-R^T t,\quad k>0$$

Here [x_0, y_0, z_0]^T is called the direction vector v of the polar line and [Δx, Δy, Δz]^T its translation vector r.
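The construction of R and of the polar line can be sketched as follows. The rotation order (about the x, y and z axes in turn) is an assumption consistent with the matrix entries above, and all numeric values are illustrative:

```python
import numpy as np

# Sketch of the extrinsic-parameter setup and the polar line: build R from the
# angles (theta, psi, phi) and express the ray through the camera centre and
# image point (x, y) as [Xw,Yw,Zw]^T = k*v + r, with v = R^T [x,y,f]^T and
# r = -R^T t. The rotation convention and all values are assumptions.

def rotation(theta, psi, phi):
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(psi), np.sin(psi)
    cf, sf = np.cos(phi), np.sin(phi)
    Rx = np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cf, -sf, 0], [sf, cf, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx  # rotations about the x, y and z axes in turn

def polar_line(x, y, f, theta, psi, phi, t):
    R = rotation(theta, psi, phi)
    v = R.T @ np.array([x, y, f], dtype=float)  # direction vector of the ray
    r = -R.T @ np.asarray(t, dtype=float)       # translation vector of the ray
    return v, r
```

With zero angles and zero translation, the ray direction is simply [x, y, f], i.e. the pixel back-projected through the optical centre.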
Using the above formula, the direction vectors v_i (i = 1, 2, 3, 4) and translation vectors r_i (i = 1, 2, 3, 4) of the four polar lines L1, L2, L3 and L4 in Figure 7 are obtained. Next, the pairwise intersection points between the polar lines are needed. As shown in Figure 7, taking L1 and L2 as an example, their polar-line equations can be expressed as:

L_1 = r_1 + t·v_1
L_2 = r_2 + s·v_2

where t and s are the parameters to be found. The mutually closest points P_12 and P_21 of the two skew lines L1 and L2 can then be expressed as:

$$a=\langle v_1,v_1\rangle,\quad b=\langle v_2,v_2\rangle,\quad c=\langle v_1,v_2\rangle,\quad d=\langle v_1,\,r_1-r_2\rangle,\quad e=\langle v_2,\,r_1-r_2\rangle$$

$$\Rightarrow\quad t=\frac{bd-ce}{c^2-ab},\quad s=\frac{cd-ae}{c^2-ab}\quad\Rightarrow\quad P_{12}=r_1+t\,v_1,\quad P_{21}=r_2+s\,v_2$$
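The closest-point computation for one pair of skew polar lines can be sketched as follows; the example lines are illustrative:

```python
import numpy as np

# Sketch of the pairwise "intersection" of two skew rays L1 = r1 + t*v1 and
# L2 = r2 + s*v2: the mutually closest points, from the inner products a..e.
# The example lines below are illustrative.

def closest_points(r1, v1, r2, v2):
    r1, v1 = np.asarray(r1, float), np.asarray(v1, float)
    r2, v2 = np.asarray(r2, float), np.asarray(v2, float)
    w = r1 - r2
    a, b, c = v1 @ v1, v2 @ v2, v1 @ v2
    d, e = v1 @ w, v2 @ w
    den = c * c - a * b
    if den == 0:
        raise ValueError("parallel lines have no unique closest points")
    t = (b * d - c * e) / den
    s = (c * d - a * e) / den
    return r1 + t * v1, r2 + s * v2

# The x-axis and a line parallel to the y-axis through (2, -1, 3):
p1, p2 = closest_points([0, 0, 0], [1, 0, 0], [2, -1, 3], [0, 1, 0])
print(p1, p2)  # [2. 0. 0.] [2. 0. 3.]
```

Averaging the closest points obtained for every camera pair then yields the spatial coordinate P described in the text.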
By the above method, the pairwise intersection points P_ij between all skew lines are obtained (1 ≤ i < j ≤ N, where N is the number of cameras); the spatial coordinate P of the target to be measured can then be expressed as:

$$P=\frac{1}{N(N-1)}\sum_{1\le i<j\le N}P_{ij}$$
In this way, from the 2D coordinates (x_i, y_i) of the spatial target on the imaging planes of the cameras (i = 1, 2, …, N, where N is the number of cameras in the system), the proposed 3D positioning algorithm based on the intersection of skew spatial lines finally yields the 3D spatial coordinates P(X, Y, Z) of the imaged target.
The multi-camera positioning and tracking method of the present invention also has the following advantages:
First, from an information-theoretic point of view, the images captured by multiple cameras carry far more redundant information than those of a binocular vision system; the redundant information can be used to handle noise and thereby improve the positioning and measurement accuracy.
Second, the invention interpolates the 2D target coordinates (x, y) obtained by each camera with the least-squares method. The experimental results of the least-squares fit of the 2D plane coordinates of the imaged target in Table 1 show that the standard deviations of both coordinates x and y decrease, meaning that after processing the plane coordinate (x, y) is more stable and less affected by noise; this greatly improves the stability of the result and provides a good guarantee for the subsequent 3D positioning algorithm.
Table 1: standard deviation of the 2D coordinates of the imaged target (tabular data appear as an image in the original and are not reproduced here)
Finally, the innovative 3D positioning algorithm based on the intersection of skew lines proposed by the invention realizes the computation of the spatial coordinates of an imaged target from the 2D coordinates (x, y) of several two-dimensional images.
Based on the above method, an embodiment of the present invention also provides a multi-camera positioning and tracking system. As shown in Figure 8, the system comprises:
a plurality of image/video acquisition devices 110, configured to continuously acquire image signals of the measured object in real time from different orientations, and to send the image signals of the measured object to the image rectification module by wired or wireless means;
an image rectification module 140, connected to the plurality of image/video acquisition devices and configured to rectify each acquired image signal using an image-distortion-correction technique;
a target recognition module 150, connected to the image rectification module and configured to apply a color-space-model-based target recognition technique to each rectified image signal, perform target recognition, and find the two-dimensional image-position coordinates of the measured object in each camera at different times;
a 2D coordinate interpolation module 160, connected to the target recognition module and configured to perform 2D coordinate interpolation, by an optimization algorithm, on the two-dimensional image-position coordinates of the measured object in each camera at different times; and
a three-dimensional localization module 170, connected to the 2D coordinate interpolation module and configured to compute the three-dimensional space coordinate of the measured object, via the three-dimensional coordinate localization algorithm, from the several spatial skew lines formed by the object's optimization-processed two-dimensional plane coordinates on all the camera imaging planes.
In summary, the multi-camera positioning and tracking method and system provided by the present invention exploit the information redundancy of multi-camera imaging to overcome, to the greatest extent, systematic measurement errors caused by errors in the cameras' intrinsic and extrinsic parameters and by other factors such as illumination. A positioning accuracy of 5-20 mm can be achieved, and the measurement result is the three-dimensional coordinate information of the target to be measured, realizing three-dimensional spatial coordinate measurement of the target.
It should be understood that the application of the present invention is not limited to the above examples. Those of ordinary skill in the art may make improvements or variations in light of the above description; for example, the method for computing the three-dimensional space coordinate is not limited to the skew-line intersection method presented herein, and other methods, such as least squares, may also be used. In short, any such variant shares the basic idea of the present invention, namely computing the spatial coordinate of the target from the several spatial skew lines formed by a plurality of two-dimensional plane coordinates. All such improvements and variations shall fall within the protection scope of the appended claims.

Claims (7)

1. A multi-camera positioning and tracking method, characterized by comprising the steps of:
A. acquiring, by a plurality of cameras, image signals of a measured object continuously in real time from different orientations;
B. rectifying each acquired image signal using an image-distortion-correction technique;
C. applying a color-space-model-based target recognition technique to each rectified image signal, performing target recognition, and finding the two-dimensional image-position coordinates of the measured object in each camera at different times;
D. performing 2D coordinate interpolation, by an optimization algorithm, on the two-dimensional image-position coordinates of the measured object in each camera at different times; and
E. computing the three-dimensional space coordinate of the measured object, via a three-dimensional coordinate localization algorithm, from the several spatial skew lines formed by the object's optimization-processed two-dimensional plane coordinates on all the camera imaging planes.
2. The multi-camera positioning and tracking method according to claim 1, characterized in that step C specifically comprises:
C1. extracting the color-component model [R, G, B] of the rectified multi-channel image signals of the measured object by averaging over repeated shots;
C2. comparing the color components [Rp, Gp, Bp] of each pixel of the image containing the measured object to be recognized against the extracted color-component model [R, G, B] as follows:

$$|R - R_p| \le \sigma_r, \qquad |G - G_p| \le \sigma_g, \qquad |B - B_p| \le \sigma_b$$

where σ_r, σ_g and σ_b are the thresholds of the three color components; if the above inequalities hold, the current pixel is considered to belong to the measured object;
C3. upon finding all the target pixels in the image, the measured object is thereby identified in that image, and its two-dimensional plane coordinate is represented by the centroid position (x, y).
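Purely as an illustration of steps C2 and C3 (not part of the claimed method), the per-pixel color-threshold test and centroid extraction can be sketched in NumPy; the function name, array layout, and RGB channel order are assumptions of this sketch:

```python
import numpy as np

def find_target_centroid(image, color_model, thresholds):
    """Locate the target by per-channel color thresholding (step C2) and
    return the centroid (x, y) of the matching pixels (step C3).

    image       : H x W x 3 uint8 array in (R, G, B) order
    color_model : reference components [R, G, B], extracted by averaging
                  several shots of the target (step C1)
    thresholds  : per-channel tolerances (sigma_r, sigma_g, sigma_b)
    """
    diff = np.abs(image.astype(np.int32) - np.asarray(color_model))
    # A pixel belongs to the target only if all three channels satisfy
    # |C - C_p| <= sigma_c for C in {R, G, B}.
    mask = np.all(diff <= np.asarray(thresholds), axis=2)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # target not visible in this camera
    # The 2D plane coordinate is the center of gravity of the target pixels.
    return xs.mean(), ys.mean()
```

In a real system the thresholds σ_r, σ_g, σ_b would have to be tuned to the lighting conditions, since the description identifies illumination as a major error source.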
3. The multi-camera positioning and tracking method according to claim 1, characterized in that step E specifically comprises:
E1. obtaining the two-dimensional plane coordinates (x_i, y_i) of the measured object on all the camera imaging planes after 2D coordinate interpolation, where i = 1, 2, ..., N and N is the number of cameras in the system;
E2. in the world coordinate system O_w-X_wY_wZ_w, representing the extrinsic parameters of a camera by the translation vector (t_x, t_y, t_z) and the rotation angles θ, ψ, φ, and establishing the camera's rotation matrix R and translation vector t:

$$R = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix}, \qquad t = \begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix},$$

where

$$\begin{aligned}
r_{11} &= \cos\psi\cos\phi, & r_{12} &= \sin\theta\sin\psi\cos\phi - \cos\theta\sin\phi, & r_{13} &= \cos\theta\sin\psi\cos\phi + \sin\theta\sin\phi, \\
r_{21} &= \cos\psi\sin\phi, & r_{22} &= \sin\theta\sin\psi\sin\phi + \cos\theta\cos\phi, & r_{23} &= \cos\theta\sin\psi\sin\phi - \sin\theta\cos\phi, \\
r_{31} &= -\sin\psi, & r_{32} &= \sin\theta\cos\psi, & r_{33} &= \cos\theta\cos\psi;
\end{aligned}$$
E3. defining, for each camera, the line connecting the camera's optical center and the point (x, y) on its imaging plane as the polar line;
letting f denote the focal length of the camera, the equation of the polar line [X_w, Y_w, Z_w]^T through the optical center and the point (x, y) is expressed as:

$$\begin{bmatrix} X_w \\ Y_w \\ Z_w \end{bmatrix} = k \begin{bmatrix} x_0 \\ y_0 \\ z_0 \end{bmatrix} + \begin{bmatrix} \Delta x \\ \Delta y \\ \Delta z \end{bmatrix}, \qquad \text{where } [x_0, y_0, z_0]^T = R^T [x, y, f]^T, \quad [\Delta x, \Delta y, \Delta z]^T = -R^T t, \quad k > 0;$$

[x_0, y_0, z_0]^T is called the direction vector v of the polar line, and [Δx, Δy, Δz]^T is called its translation vector r;
E4. obtaining the direction vector v_i and translation vector r_i of each polar line, and then obtaining the pairwise intersection points between the polar lines;
E5. obtaining the pairwise intersection points P_ij between all the skew lines, where 1 ≤ i < j ≤ N and N is the number of cameras; the three-dimensional space coordinate P of the measured object is then expressed by the following equation:

$$P = \frac{1}{k(k-1)} \sum_{1 \le i < j \le k} P_{ij}$$

where k = N is the number of cameras;
E6. obtaining the target's three-dimensional space coordinate P(X, Y, Z) from the target's two-dimensional coordinates (x_i, y_i) on the imaging planes of the cameras, where i = 1, 2, ..., N and N is the number of cameras in the system.
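As a non-authoritative numerical sketch of steps E3 to E5: each image point is back-projected into a world-space line with direction v = R^T[x, y, f]^T and translation r = -R^T t, and each pairwise "intersection" P_ij is computed as the midpoint of the common perpendicular of the two (generally skew) lines. Function names are illustrative, and the final combination here is the plain mean over all camera pairs:

```python
import numpy as np
from itertools import combinations

def ray_through_pixel(R, t, x, y, f):
    """Back-project image point (x, y) of a camera with rotation R,
    translation t and focal length f into a world-space polar line
    P(k) = k*v + r (step E3)."""
    v = R.T @ np.array([x, y, f], dtype=float)   # direction vector v
    r = -R.T @ np.asarray(t, dtype=float)        # translation vector r
    return v, r

def closest_point_between_lines(v1, r1, v2, r2):
    """Midpoint of the common perpendicular of two (possibly skew) lines,
    used here as the 'intersection point' P_ij of step E4."""
    w = r1 - r2
    a, b, c = v1 @ v1, v1 @ v2, v2 @ v2
    d, e = v1 @ w, v2 @ w
    denom = a * c - b * b               # zero only for parallel lines
    s = (b * e - c * d) / denom         # parameter on line 1
    u = (a * e - b * d) / denom         # parameter on line 2
    return 0.5 * ((r1 + s * v1) + (r2 + u * v2))

def triangulate(rays):
    """Combine the pairwise closest points over all camera pairs (step E5);
    a plain mean is used here as the combination rule."""
    pts = [closest_point_between_lines(v1, r1, v2, r2)
           for (v1, r1), (v2, r2) in combinations(rays, 2)]
    return np.mean(pts, axis=0)
```

With exact, noise-free rays all P_ij coincide with the true target point, so the averaged result recovers P(X, Y, Z) exactly; with noisy calibration the averaging exploits the redundancy of the extra cameras, as the description argues.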
4. The multi-camera positioning and tracking method according to claim 1, characterized in that the optimization algorithm in step D is the least-squares method, the averaging method, or the median method.
5. The multi-camera positioning and tracking method according to claim 4, characterized in that, when the least-squares method is used, step D specifically comprises the steps of:
D1. computing the image-position two-dimensional coordinates (X_mk, Y_mk) of the measured object in each camera at different times, where m is the index of the camera and k = 0, 1, 2, ..., n indexes the plane coordinates of the target computed in camera m at the different times;
D2. for the coordinate x, letting Φ(x) = ax² + bx + c; substituting the n+1 x-coordinates yields:

X_m0 = a·0² + b·0 + c
X_m1 = a·1² + b·1 + c
X_m2 = a·2² + b·2 + c
......
X_mn = a·n² + b·n + c
D3. solving for the parameters a, b and c by minimizing the least-squares error sum:

$$\begin{bmatrix} a \\ b \\ c \end{bmatrix} = \begin{bmatrix} \sum_{i=0}^{n} i^4 & \sum_{i=0}^{n} i^3 & \sum_{i=0}^{n} i^2 \\ \sum_{i=0}^{n} i^3 & \sum_{i=0}^{n} i^2 & \sum_{i=0}^{n} i \\ \sum_{i=0}^{n} i^2 & \sum_{i=0}^{n} i & \sum_{i=0}^{n} 1 \end{bmatrix}^{-1} \begin{bmatrix} \sum_{i=0}^{n} X_{mi} \cdot i^2 \\ \sum_{i=0}^{n} X_{mi} \cdot i \\ \sum_{i=0}^{n} X_{mi} \end{bmatrix}$$
D4. after obtaining a, b and c, taking a(n/2)² + b(n/2) + c as the optimized plane coordinate x;
D5. for the coordinate y, following the method of D1 to D4 with Ψ(y) = dy² + ey + f to obtain the parameters d, e and f, and taking d(n/2)² + e(n/2) + f as the optimized plane coordinate y.
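Steps D2 to D4 amount to fitting a quadratic in the time index k by least squares and evaluating the fit at the window midpoint k = n/2. A compact sketch, using a numerically stable solver in place of the explicit normal-equation inverse of step D3 (the function name is illustrative):

```python
import numpy as np

def smooth_coordinate(samples):
    """Fit x_k ~ a*k^2 + b*k + c to the n+1 coordinate samples by least
    squares (steps D2-D3) and return the fitted value at the window
    midpoint k = n/2 (step D4)."""
    samples = np.asarray(samples, dtype=float)
    n = len(samples) - 1
    k = np.arange(n + 1)
    # Design matrix with columns [k^2, k, 1]; lstsq solves the same
    # normal equations as step D3 but without forming the inverse.
    A = np.stack([k**2, k, np.ones(n + 1)], axis=1)
    a, b, c = np.linalg.lstsq(A, samples, rcond=None)[0]
    m = n / 2.0
    return a * m * m + b * m + c
```

The same routine is applied independently to the x and y coordinate streams of each camera (step D5); for samples lying exactly on a parabola the fit, and hence the midpoint value, is exact, while for noisy tracks the returned value is the smoothed coordinate.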
6. The multi-camera positioning and tracking method according to claim 4, characterized in that, when the averaging method is used, step D specifically comprises the steps of:
D11. computing the image-position two-dimensional coordinates (X_mk, Y_mk) of the measured object in each camera at different times, where m is the index of the camera and k = 0, 1, 2, ..., n indexes the plane coordinates of the target computed in camera m at the different times;
D12. taking the averages

$$X_m = \frac{1}{n} \sum_{i=0}^{n} X_{mi}, \qquad Y_m = \frac{1}{n} \sum_{i=0}^{n} Y_{mi}$$

as the plane coordinate.
7. A multi-camera positioning and tracking system, characterized by comprising:
a plurality of image/video acquisition devices, configured to continuously acquire image signals of the measured object in real time from different orientations, and to send the image signals of the measured object to the image rectification module by wired or wireless means;
an image rectification module, connected to the plurality of image/video acquisition devices and configured to rectify each acquired image signal using an image-distortion-correction technique;
a target recognition module, connected to the image rectification module and configured to apply a color-space-model-based target recognition technique to each rectified image signal, perform target recognition, and find the two-dimensional image-position coordinates of the measured object in each camera at different times;
a 2D coordinate interpolation module, connected to the target recognition module and configured to perform 2D coordinate interpolation, by an optimization algorithm, on the two-dimensional image-position coordinates of the measured object in each camera at different times; and
a three-dimensional localization module, connected to the 2D coordinate interpolation module and configured to compute the three-dimensional space coordinate of the measured object, via a three-dimensional coordinate localization algorithm, from the several spatial skew lines formed by the object's optimization-processed two-dimensional plane coordinates on all the camera imaging planes.
CN 200910189525 2009-11-20 2009-11-20 Multi-camera positioning and tracking method and system Active CN102072706B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200910189525 CN102072706B (en) 2009-11-20 2009-11-20 Multi-camera positioning and tracking method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 200910189525 CN102072706B (en) 2009-11-20 2009-11-20 Multi-camera positioning and tracking method and system

Publications (2)

Publication Number Publication Date
CN102072706A true CN102072706A (en) 2011-05-25
CN102072706B CN102072706B (en) 2013-04-17

Family

ID=44031343

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200910189525 Active CN102072706B (en) 2009-11-20 2009-11-20 Multi-camera positioning and tracking method and system

Country Status (1)

Country Link
CN (1) CN102072706B (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102566831A (en) * 2011-12-16 2012-07-11 Tcl集团股份有限公司 Target locating method and device as well as image display device
CN102798456A (en) * 2012-07-10 2012-11-28 中联重科股份有限公司 Method, device and system for measuring working amplitude of engineering mechanical arm support system
CN103712604A (en) * 2013-12-20 2014-04-09 清华大学深圳研究生院 Method and system for optically positioning multi-target three-dimensional space
CN104715587A (en) * 2013-12-13 2015-06-17 广州中国科学院先进技术研究所 Intelligent video recognition eyesight protection device and method
CN105081719A (en) * 2015-07-31 2015-11-25 北京星航机电装备有限公司 Spacecraft cabin automatic assembly system based on visual measurement and assembly method thereof
CN104541304B (en) * 2012-08-23 2017-09-12 微软技术许可有限责任公司 Use the destination object angle-determining of multiple cameras
CN107205145A (en) * 2016-03-17 2017-09-26 中航华东光电(上海)有限公司 Terminal guidance video image three dimensional data collection system
CN109758756A (en) * 2019-02-28 2019-05-17 国家体育总局体育科学研究所 Gymnastics video analysis method and system based on 3D camera
CN109920003A (en) * 2017-12-12 2019-06-21 广东虚拟现实科技有限公司 Camera calibration detection method, device and equipment
CN110020624A (en) * 2019-04-08 2019-07-16 石家庄铁道大学 image recognition method, terminal device and storage medium
CN110045740A (en) * 2019-05-15 2019-07-23 长春师范大学 A kind of Mobile Robot Real-time Motion planing method based on human behavior simulation
CN110889870A (en) * 2019-11-15 2020-03-17 深圳市吉祥云科技有限公司 Method and system for accurately positioning large-format product
CN113053057A (en) * 2019-12-26 2021-06-29 杭州海康微影传感科技有限公司 Fire point positioning system and method
CN117893989A (en) * 2024-03-14 2024-04-16 盯盯拍(深圳)技术股份有限公司 Sequential picture tracing method and system based on panoramic automobile data recorder

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2253085A1 (en) * 1998-11-06 2000-05-06 Industrial Metrics Inc. Methods and system for measuring three dimensional spatial coordinates and for external camera calibration necessary for that measurement
CN1268892C (en) * 2004-01-09 2006-08-09 中国科学院沈阳自动化研究所 Three-dimensional measurement method based on position sensor PSD
CN101464134B (en) * 2009-01-16 2010-08-11 哈尔滨工业大学 Vision measuring method for three-dimensional pose of spacing target

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102566831B (en) * 2011-12-16 2014-07-30 Tcl集团股份有限公司 Target locating method and device as well as image display device
CN102566831A (en) * 2011-12-16 2012-07-11 Tcl集团股份有限公司 Target locating method and device as well as image display device
CN102798456A (en) * 2012-07-10 2012-11-28 中联重科股份有限公司 Method, device and system for measuring working amplitude of engineering mechanical arm support system
CN102798456B (en) * 2012-07-10 2015-01-07 中联重科股份有限公司 Method, device and system for measuring working amplitude of engineering mechanical arm support system
CN104541304B (en) * 2012-08-23 2017-09-12 微软技术许可有限责任公司 Use the destination object angle-determining of multiple cameras
CN104715587A (en) * 2013-12-13 2015-06-17 广州中国科学院先进技术研究所 Intelligent video recognition eyesight protection device and method
CN103712604A (en) * 2013-12-20 2014-04-09 清华大学深圳研究生院 Method and system for optically positioning multi-target three-dimensional space
CN103712604B (en) * 2013-12-20 2016-04-06 清华大学深圳研究生院 A kind of Optical Multi-Objects three-dimensional fix method and system
CN105081719A (en) * 2015-07-31 2015-11-25 北京星航机电装备有限公司 Spacecraft cabin automatic assembly system based on visual measurement and assembly method thereof
CN107205145A (en) * 2016-03-17 2017-09-26 中航华东光电(上海)有限公司 Terminal guidance video image three dimensional data collection system
CN109920003B (en) * 2017-12-12 2023-09-15 广东虚拟现实科技有限公司 Camera calibration detection method, device and equipment
CN109920003A (en) * 2017-12-12 2019-06-21 广东虚拟现实科技有限公司 Camera calibration detection method, device and equipment
CN109758756A (en) * 2019-02-28 2019-05-17 国家体育总局体育科学研究所 Gymnastics video analysis method and system based on 3D camera
CN109758756B (en) * 2019-02-28 2021-03-23 国家体育总局体育科学研究所 Gymnastics video analysis method and system based on 3D camera
CN110020624A (en) * 2019-04-08 2019-07-16 石家庄铁道大学 image recognition method, terminal device and storage medium
CN110045740A (en) * 2019-05-15 2019-07-23 长春师范大学 A kind of Mobile Robot Real-time Motion planing method based on human behavior simulation
CN110889870A (en) * 2019-11-15 2020-03-17 深圳市吉祥云科技有限公司 Method and system for accurately positioning large-format product
CN110889870B (en) * 2019-11-15 2023-05-12 深圳市吉祥云科技有限公司 Large-format product accurate positioning method and system
CN113053057A (en) * 2019-12-26 2021-06-29 杭州海康微影传感科技有限公司 Fire point positioning system and method
CN117893989A (en) * 2024-03-14 2024-04-16 盯盯拍(深圳)技术股份有限公司 Sequential picture tracing method and system based on panoramic automobile data recorder
CN117893989B (en) * 2024-03-14 2024-06-04 盯盯拍(深圳)技术股份有限公司 Sequential picture tracing method and system based on panoramic automobile data recorder

Also Published As

Publication number Publication date
CN102072706B (en) 2013-04-17

Similar Documents

Publication Publication Date Title
CN102072706B (en) Multi-camera positioning and tracking method and system
CN110296691B (en) IMU calibration-fused binocular stereo vision measurement method and system
CN112894832B (en) Three-dimensional modeling method, three-dimensional modeling device, electronic equipment and storage medium
US9965870B2 (en) Camera calibration method using a calibration target
CN108269279B (en) Three-dimensional reconstruction method and device based on monocular 3 D scanning system
US10176595B2 (en) Image processing apparatus having automatic compensation function for image obtained from camera, and method thereof
CN102376089B (en) Target correction method and system
JP6192853B2 (en) Optical flow imaging system and method using ultrasonic depth detection
CN108288292A (en) A kind of three-dimensional rebuilding method, device and equipment
CN108510551B (en) Method and system for calibrating camera parameters under long-distance large-field-of-view condition
CN102692214B (en) Narrow space binocular vision measuring and positioning device and method
CN105654547B (en) Three-dimensional rebuilding method
CN107560592B (en) Precise distance measurement method for photoelectric tracker linkage target
CN110889873A (en) Target positioning method and device, electronic equipment and storage medium
CN109559349A (en) A kind of method and apparatus for calibration
CN104155765A (en) Method and equipment for correcting three-dimensional image in tiled integral imaging display
CN110738703B (en) Positioning method and device, terminal and storage medium
Crispel et al. All-sky photogrammetry techniques to georeference a cloud field
CN110021035B (en) Marker of Kinect depth camera and virtual marker tracking method based on marker
CN107941241B (en) Resolution board for aerial photogrammetry quality evaluation and use method thereof
JP2016114445A (en) Three-dimensional position calculation device, program for the same, and cg composition apparatus
CN113432611B (en) Orientation device and method based on all-sky-domain atmospheric polarization mode imaging
Chen et al. A structured-light-based panoramic depth camera
CN113223163A (en) Point cloud map construction method and device, equipment and storage medium
Agrawal et al. RWU3D: Real World ToF and Stereo Dataset with High Quality Ground Truth

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20170517

Address after: Room office building No. 1068 Shenzhen Institute of advanced technology A-301 518000 in Guangdong city of Shenzhen province Nanshan District Shenzhen University city academy Avenue

Patentee after: Shenzhen shen-tech advanced Cci Capital Ltd

Address before: 1068 No. 518055 Guangdong city in Shenzhen Province, Nanshan District City Xili Road School of Shenzhen University

Patentee before: Shenzhen Advanced Technology Research Inst.

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20170605

Address after: 518000 Guangdong city of Shenzhen province Qianhai Shenzhen Hong Kong cooperation zone before Bay Road No. 1 building 201 room A (located in Shenzhen Qianhai business secretary Co. Ltd.)

Patentee after: Shenzhen City Rui Kede Intelligent Technology Co Ltd

Address before: Room office building No. 1068 Shenzhen Institute of advanced technology A-301 518000 in Guangdong city of Shenzhen province Nanshan District Shenzhen University city academy Avenue

Patentee before: Shenzhen shen-tech advanced Cci Capital Ltd