CN102034238B - Multi-camera system calibrating method based on optical imaging probe and visual graph structure - Google Patents

Multi-camera system calibrating method based on optical imaging probe and visual graph structure

Info

Publication number
CN102034238B
CN102034238B (application CN2010105852616A)
Authority
CN
China
Prior art keywords
camera
point
video camera
gauge head
optical imagery
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN2010105852616A
Other languages
Chinese (zh)
Other versions
CN102034238A (en)
Inventor
赵宏
李进军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou cartesan Testing Technology Co. Ltd
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN2010105852616A priority Critical patent/CN102034238B/en
Publication of CN102034238A publication Critical patent/CN102034238A/en
Application granted granted Critical
Publication of CN102034238B publication Critical patent/CN102034238B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention provides a multi-camera system calibration method based on an optical imaging probe and a vision graph structure. The method comprises the following steps: calibrating each camera independently with the optical imaging probe to obtain initial values of its intrinsic and distortion parameters; calibrating the cameras pairwise, obtaining by linear estimation the fundamental matrix, epipolar constraint, rotation matrix and translation vector between every two cameras whose fields of view overlap; building the connection relations among the cameras as a vision graph according to graph theory, and estimating the initial rotation and translation of each camera relative to a reference camera by a shortest-path method; and jointly optimizing all intrinsic and extrinsic parameters of all cameras together with the three-dimensional marker point set acquired with the optical imaging probe by a sparse bundle adjustment algorithm to obtain a high-precision calibration result. The calibration process proceeds from local to global and from coarse to fine; it is simple, guarantees high-precision and robust calibration, and is applicable to calibrating multi-camera systems with different measurement ranges and different distribution structures.

Description

Multi-camera system calibration method based on optical imaging probe and vision graph structure
Technical field
The invention belongs to the technical field of visual measurement and relates to a multi-camera system calibration method based on an optical imaging probe and a vision graph structure.
Background technology
To perform three-dimensional measurement with a multi-camera system, one must first calibrate the intrinsic and distortion parameters of each camera, the relative poses among the cameras, and their poses with respect to a reference coordinate system. The robustness and precision of this calibration directly determine the measuring accuracy of the multi-camera system; robust, high-precision calibration is therefore the prerequisite for high-precision measurement.
Camera calibration has a long history, and many effective methods have emerged over several decades of development. In recent years, because of their unique advantages for large-scale, wide-range measurement, multi-camera systems have received extensive attention at home and abroad, and efficient, high-precision calibration of such systems has become a research focus. Yet among traditional multi-camera calibration methods, some employ electronic theodolites to establish the world coordinate system, which increases the cost and complexity of the system; others use three-, two-, one- or zero-dimensional calibration objects and estimate the multi-camera calibration parameters directly from the single-camera intrinsic parameters and the pose relations between cameras. On the one hand this requires the calibration object to be visible in the fields of view of all cameras; on the other hand the result is affected by noise, computational error and initial parameter error, so the precision is low. Although some methods also apply bundle adjustment in the final calibration stage, the acquisition of the three-dimensional calibration point set is rather complicated and outlier points must be discarded, so the efficiency is low and the robustness poor.
Summary of the invention
The object of the invention is to address the defects and deficiencies of the existing calibration techniques described above by providing a multi-camera system calibration method based on an optical imaging probe and a vision graph structure. The method needs only one pre-calibrated optical imaging probe to accomplish both the calibration of each camera's intrinsic and extrinsic parameters and the calibration of the multi-camera system; it is simple, and it does not require the optical imaging probe to be visible in the fields of view of all cameras, only within the overlapping fields of view of two or more cameras at a time, so it is applicable to various measurement fields. By adopting a vision graph structure and bundle adjustment optimization, the multi-camera system is calibrated from coarse to fine, guaranteeing a robust and high-precision result.
To achieve the above object, the present invention adopts the following technical scheme:
Step 1, establishment of the hardware:
A plurality of distributed cameras and one optical imaging probe are adopted; the fields of view of all cameras need not overlap jointly, but overlapping fields of view are required between at least every two cameras;
Step 2, determine the spatial position of each camera and select one or more spatial positioning points for each camera, following the principle that, during single-camera or multi-camera calibration, the optical imaging probe can always be placed within the field of view of the camera being calibrated. For single-camera calibration, first install the reference locating block (1) on the corresponding positioning point, place the optical imaging probe within the camera's field of view and seat it against the reference locating block (1), and rotate the probe about its tip (2); the seven marker points (5) on the probe target body (4) then rotate concentrically about the tip (2) while the camera acquires multiple images of the probe in different attitudes. Next, adopt threshold segmentation and ellipse fitting to extract the sub-pixel center coordinates of each LED in every image, and use the centroid method to compute the center coordinates of each marker point (5). Finally, recover the intrinsic parameter matrix and distortion parameters of each camera from the concentric rotation relation of the marker points (5) and the calibrated invariance of their spatial distances;
Step 3, reselect and place the spatial positioning points and adjust the probe attitude so that the optical imaging probe lies within the overlapping fields of view of two or more cameras. Within the overlapping field of view, rotate the optical imaging probe while the cameras sharing that field of view simultaneously acquire images of the probe's LED marker points; through image processing, center identification and marker point matching, and from the geometric constraints among the marker points, recover the fundamental matrix, essential matrix, epipolar geometry, rotation matrix and translation vector between each local camera pair;
Step 4, take each camera as a node and each camera pair with overlapping fields of view as an edge to build the vision graph of the multi-camera system; the direction of each edge is determined by the local rotation matrix and translation vector of the initial calibration, pointing from one camera to the other camera it transforms into. In the vision graph, first select a reference camera and, starting from it, establish the connection relations among all cameras; then compute the rotation matrix and translation vector of each camera relative to the reference camera by the shortest-path method, achieving the global initial calibration of the multi-camera system;
Step 5, according to the initial calibration result, back-project the marker points acquired and identified in steps 2 and 3 into the reference camera coordinate system, thereby establishing the three-dimensional calibration point set in the world coordinate system; jointly optimize all calibration parameters and the three-dimensional calibration point set with a sparse bundle adjustment algorithm to obtain a robust and high-precision calibration result;
Step 6, if the calibration result meets the measuring accuracy requirement, the calibration ends; if not, increase the number of probe positioning points, acquire images again, add the newly identified and reconstructed three-dimensional calibration points to the original marker point set, optimize all calibration parameters and the three-dimensional calibration point set again with the sparse bundle adjustment algorithm, and re-calibrate each parameter.
Each marker point (5) is composed of six LEDs, whose center of gravity is the marker point center.
Said step 2 comprises the following steps:
Step 21, establish the camera pinhole projection model:
First, establish the world coordinate system X_w Y_w Z_w, the probe coordinate system X_t Y_t Z_t, the physical image coordinate system xy and the pixel coordinate system uv. Let there be N = 7 marker points on the optical imaging probe, let X_t^j denote the coordinate of marker point j in the probe coordinate system and X_w^j its coordinate in the world coordinate system, let (R, T) be the rotation and translation of the camera with respect to the world coordinate system, and let K be the camera intrinsic parameter matrix, comprising 5 parameters: the horizontal and vertical focal lengths (f_x, f_y), the image skew factor s and the principal point coordinates (u0, v0); (k1, k2) and (d1, d2) are the radial and tangential distortion parameters, respectively. The 7 marker points (5) on the probe target body (4) satisfy the following projection relation:

λ_j x̃_j = K R X_t^j + K T,  j = 1, 2, …, 7   (1)

where λ_j is the projection depth of marker point j and x̃_j is the normalized coordinate of marker point j in the image;
Step 22, calibration image acquisition and marker point recognition: select the spatial positioning points according to the positioning point selection principle and fix the reference locating block (1); rotate the optical imaging probe about the tip (2) and acquire I1 images of the probe in different attitudes with the camera, the acquired images covering the camera's field of view as fully as possible; adopt threshold segmentation and ellipse fitting to extract the sub-pixel center of each LED spot, and use the centroid method to estimate the center of each marker point (a code sketch of this extraction follows these sub-steps);
Step 23, establish the error function for the distance of each marker point to the probe tip during rotation, using the invariance of the concentric circle radii: when the optical imaging probe rotates about the tip (2), the 7 marker points rotate concentrically about the tip (2) with radii r_j; because the positions of the marker points have been accurately calibrated, all 7 turning radii r_j are constant, and each marker point keeps a constant distance to the tip throughout the rotation. For the I1 images acquired during rotation, each marker therefore yields the radius error equation:

Σ_{i=1}^{I1} (r_j^i − r_j) = 0   (2)

where r_j^i is the turning radius of the j-th marker point (j = 1, …, 7) estimated in the i-th rotation, and r_j is the calibrated value of the j-th marker point's turning radius;
Step 24, estimate the initial values of the camera's intrinsic and extrinsic parameters: because of errors, solve for the initial values of the camera intrinsic parameter matrix K, rotation matrix R and translation vector T by minimizing the radius error function (3), and estimate the initial distortion parameters by minimizing the back-projection error:

Σ_{j=1}^{7} Σ_{i=1}^{I1} (r_j^i − r_j)   (3)
Step 25, adopt the sparse bundle adjustment algorithm to further improve the calibration accuracy.
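As an illustration of steps 22–24, the following is a minimal sketch of the marker extraction and radius-residual computation, assuming Python with OpenCV 4 and NumPy; the k-means grouping of LED spots into markers and the helper names are our own assumptions, not prescribed by the method.

import cv2
import numpy as np

def marker_centers(image, thresh=200, min_area=10):
    """Step 22: extract sub-pixel LED spot centers by threshold
    segmentation and ellipse fitting, then take the centroid of each
    group of 6 spots as the marker point center."""
    _, binary = cv2.threshold(image, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    spots = []
    for c in contours:
        if cv2.contourArea(c) < min_area or len(c) < 5:
            continue                        # fitEllipse needs >= 5 points
        (cx, cy), _, _ = cv2.fitEllipse(c)  # sub-pixel ellipse center
        spots.append((cx, cy))
    spots = np.float32(spots)
    # Group the spots into the 7 markers of 6 LEDs each; k-means is one
    # possible grouping, chosen here for brevity.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 50, 1e-3)
    _, labels, _ = cv2.kmeans(spots, 7, None, criteria, 10,
                              cv2.KMEANS_PP_CENTERS)
    labels = labels.ravel()
    return np.array([spots[labels == k].mean(axis=0) for k in range(7)])

def radius_residuals(r_est, r_cal):
    """Equations (2)/(3): r_est is an (I1, 7) array of turning radii of
    the 7 markers recovered through projection model (1) in each of the
    I1 images; r_cal holds the 7 calibrated radii.  The stacked
    residuals r_j^i - r_j form the objective minimized in step 24."""
    return (np.asarray(r_est) - np.asarray(r_cal)[None, :]).ravel()

In use, marker_centers would run on each of the I1 images, and the resulting radii would be fed through a nonlinear least-squares solver over (K, R, T).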
Said step 3 comprises the following steps:
Step 31, probe marker image acquisition in the overlapping field of view: reselect and place the spatial positioning points (L1, L2, …) and adjust the probe attitude so that the probe lies within the overlapping fields of view of two or more cameras; rotate the optical imaging probe in this region while the cameras sharing the field of view simultaneously acquire stereo images of the probe marker points;
Step 32, stereo image marker point recognition and matching: adopt threshold segmentation and ellipse fitting to extract the sub-pixel center of each LED spot, and use the centroid method to estimate the center of each marker point; match the marker points of the stereo image pair quickly according to the marker point position constraints and the invariance of geometric moments;
Step 33, estimate the fundamental matrix and epipolar geometry:
Suppose two cameras C1 and C2 have overlapping fields of view. Without loss of generality, let the origin of the world coordinate system lie at the center of camera C1, so that the pose of C1 is (R_1, T_1) = (I, 0), the pose of C2 is (R_2, T_2), and the relative pose of the two cameras is (R, T). The image coordinates x̃_1^j and x̃_2^j of the same marker point j observed by the two cameras then satisfy:

λ_2^j x̃_2^j = λ_1^j K_2 R_2 K_1^{-1} x̃_1^j + K_2 T_2   (4)

Estimating and eliminating the unknown scale parameters λ_1^j and λ_2^j from this formula yields the required fundamental matrix F = K_2^{-T} [T_2]_× R_2 K_1^{-1}; given enough marker point correspondences, the 9 parameters of the fundamental matrix can be solved linearly. Meanwhile, since the intrinsic parameter matrices K_1 and K_2 of the two cameras C1 and C2 were calibrated in step 2, the normalized coordinates x̃_i^j (i = 1, 2) are used to estimate the epipolar constraint of the stereo image pair:

x̃_2^{jT} E x̃_1^j = 0   (5)

where E = [T_2]_× R_2 is the essential matrix, which can be estimated with the seven-point algorithm;
Step 34, estimate the pairwise relative pose between cameras and the scale factor:
Because the positions of the 7 marker points on the probe are known, the rotation matrix R and translation vector T between every two cameras can be determined uniquely. To estimate the scale factor, the calibrated actual radius r_j and the normalized radius r̃_j of each marker point's rotation about the probe tip give:

λ_j = r_j / r̃_j   (6)

Because of errors, the mean of the 7 point estimates, or the mean over N measurements, is taken as the final scale factor λ (a code sketch of this pairwise calibration follows these sub-steps):

λ = Σ_{j=1}^{7} λ_j / 7   (7)

λ = Σ_{i=1}^{N} (Σ_{j=1}^{7} λ_j / 7) / N   (8)
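A minimal sketch of steps 33–34 for one camera pair, assuming Python with OpenCV and NumPy; OpenCV's five-point RANSAC solver stands in here for the seven-point algorithm named above, and the turning-radius inputs r_cal, r_norm are assumed precomputed.

import cv2
import numpy as np

def calibrate_pair(x1, x2, K1, K2, r_cal, r_norm):
    """Steps 33-34 for cameras C1, C2: estimate the essential matrix from
    matched marker centers x1, x2 (N x 2 pixel arrays), recover the
    relative rotation R and unit-norm translation t, then fix the metric
    scale via equations (6)-(7) from the calibrated turning radii r_cal
    and the radii r_norm measured in the normalized reconstruction."""
    # Normalize pixel coordinates with the intrinsics from step 2.
    p1 = cv2.undistortPoints(x1.reshape(-1, 1, 2).astype(np.float64), K1, None)
    p2 = cv2.undistortPoints(x2.reshape(-1, 1, 2).astype(np.float64), K2, None)
    # Essential matrix satisfying equation (5); threshold is in
    # normalized image units because the points are already normalized.
    E, mask = cv2.findEssentialMat(p1, p2, np.eye(3), method=cv2.RANSAC,
                                   prob=0.999, threshold=1e-3)
    # Chirality check selects the physically valid (R, t) decomposition.
    _, R, t, _ = cv2.recoverPose(E, p1, p2, np.eye(3), mask=mask)
    # Equations (6)-(7): per-marker scale estimates, averaged.
    lam = float(np.mean(np.asarray(r_cal) / np.asarray(r_norm)))
    return R, lam * t.ravel(), E

Returning lam * t realizes equations (6)–(7): the unit-norm translation from the essential-matrix decomposition is rescaled to metric units by the averaged radius ratio.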
Said step 4 comprises the following steps:
Step 41, construct the vision graph of the multi-camera system based on graph theory:
Take each camera as a node and each camera pair with overlapping fields of view as an edge to build the vision graph of the multi-camera system; in the vision graph, the direction of each edge is provisionally determined by the local rotation matrix and translation vector of the initial calibration, pointing from one camera to the other camera it transforms into;
Step 42, select the reference camera and re-establish the connection relations among the cameras:
Select one reference camera in the vision graph, taking into account its relations with adjacent cameras; starting from the reference camera, readjust and establish the connection relations among the cameras;
Step 43, adopt the shortest-path method to determine the rotation and translation of each camera in the reference camera coordinate system (a code sketch follows these sub-steps):
Solving a shortest path from the reference camera to each camera gives that camera's absolute pose. Let C_i, C_j, C_k be cameras linked along some path of the vision graph; from the pairwise calibration, the transformations (R_ij, T_ij) from C_i to C_j and (R_jk, T_jk) from C_j to C_k are known, and the relation from C_i to C_k is computed as:

R_ik = R_ij R_jk,  T_ik = T_ij + R_ij T_jk   (9)

If the path from the reference camera contains more than two nodes, the above formula is applied repeatedly, completing the absolute calibration of all cameras with respect to the reference camera.
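A minimal sketch of step 4, assuming Python with NetworkX and NumPy; pairwise maps each calibrated camera pair (i, j) to its (R_ij, T_ij), and the uniform edge weights are our own simplification (weights could instead reflect pairwise calibration error).

import networkx as nx
import numpy as np

def edge_transform(pairwise, a, b):
    """Return the a->b transform, inverting the stored direction when
    the pair was calibrated as (b, a)."""
    if (a, b) in pairwise:
        R, T = pairwise[(a, b)]
        return R, T
    R, T = pairwise[(b, a)]
    return R.T, -R.T @ T

def global_poses(pairwise, ref):
    """Step 4: build the vision graph from the pairwise results
    {(i, j): (R_ij, T_ij)} and chain transforms along shortest paths
    from the reference camera, applying equation (9) edge by edge."""
    G = nx.Graph()
    for (i, j) in pairwise:
        G.add_edge(i, j, weight=1.0)
    poses = {}
    for cam, path in nx.shortest_path(G, source=ref, weight="weight").items():
        R_ik, T_ik = np.eye(3), np.zeros(3)
        for a, b in zip(path, path[1:]):
            R_ab, T_ab = edge_transform(pairwise, a, b)
            # Equation (9): R_ik = R_ij R_jk, T_ik = T_ij + R_ij T_jk
            R_ik, T_ik = R_ik @ R_ab, T_ik + R_ik @ T_ab
        poses[cam] = (R_ik, T_ik)
    return poses

For the 8-camera example of Figs. 3–5, pairwise holds one entry per overlapping camera pair and ref is the index of camera C8.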
Said step 5 comprises the following steps:
Step 51, obtain the initial three-dimensional calibration point set:
From steps 22 and 32, collect and identify the image coordinates of the marker points at the different calibration positions in all cameras; with all calibration parameters obtained in the preceding 4 steps, back-projection yields a sparse three-dimensional calibration point set. Because only the simple connection relations among the cameras were used in the calibration, both this reconstructed sparse point set and the calibration parameters contain errors;
Step 52, construct the back-projection error function:
Suppose there are n three-dimensional points and m cameras, and the projection of point i on the image of camera j is x_ij; the parameters of each camera are expressed by a vector a_j, each three-dimensional point by a vector b_i, the function Q(a_j, b_i) defines the projection of a three-dimensional point onto the camera image plane, and d(x, y) denotes the Euclidean distance between image points x and y. According to the projective geometry relations, the following back-projection error function is established:

min_{a_j, b_i} Σ_{i=1}^{n} Σ_{j=1}^{m} d(Q(a_j, b_i), x_ij)^2   (10)

Step 53, use bundle adjustment to minimize the back-projection error function and obtain an accurate calibration result (a code sketch follows).
The back-projection error function is a nonlinear minimization problem over a parameter vector P ∈ R^M composed of the pose parameters of all cameras and the three-dimensional measurement point set X ∈ R^N.
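A minimal sketch of steps 51–53, assuming Python with NumPy and SciPy; the projection callback project (the function Q of formula (10), i.e. pinhole model (1) with distortion) and the flat parameter layout are our own assumptions. The Jacobian sparsity pattern, in which each residual depends on only one camera vector a_j and one point b_i, is what makes the adjustment sparse.

import numpy as np
from scipy.optimize import least_squares
from scipy.sparse import lil_matrix

def bundle_adjust(cam_params, points3d, cam_idx, pt_idx, obs, project):
    """Minimize formula (10): jointly refine the m camera vectors a_j
    (rows of cam_params) and n points b_i (rows of points3d) so that
    projections Q(a_j, b_i) match the observed 2-D points obs; cam_idx
    and pt_idx say which camera and point produced each observation."""
    m, cp = cam_params.shape
    n = points3d.shape[0]

    def residuals(p):
        cams = p[:m * cp].reshape(m, cp)
        pts = p[m * cp:].reshape(n, 3)
        proj = project(cams[cam_idx], pts[pt_idx])   # (n_obs, 2)
        return (proj - obs).ravel()

    # Each residual pair depends on one camera block and one point block.
    n_obs = obs.shape[0]
    A = lil_matrix((2 * n_obs, m * cp + 3 * n), dtype=int)
    rows = np.arange(n_obs)
    for k in range(cp):
        A[2 * rows, cam_idx * cp + k] = 1
        A[2 * rows + 1, cam_idx * cp + k] = 1
    for k in range(3):
        A[2 * rows, m * cp + pt_idx * 3 + k] = 1
        A[2 * rows + 1, m * cp + pt_idx * 3 + k] = 1

    x0 = np.hstack([cam_params.ravel(), points3d.ravel()])
    res = least_squares(residuals, x0, jac_sparsity=A, method="trf",
                        x_scale="jac")
    return res.x[:m * cp].reshape(m, cp), res.x[m * cp:].reshape(n, 3)

scipy.optimize.least_squares exploits the supplied sparsity structure, so the joint refinement stays tractable even for many cameras and points.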
To calibrate the intrinsic and extrinsic parameters of each camera and the pairwise relative poses, the present invention adopts an optical imaging probe as the calibration object. Seven marker points with calibrated positions are mounted on the probe (each marker point composed of 6 LEDs, its center of gravity being the marker point center), and the intrinsic and extrinsic camera parameters are recovered from the concentric rotation of the marker points on the probe target body about the probe tip, achieving fast calibration. Because the measurement space is large and each camera's field of view is limited, the invention regards the multi-camera system as a vision graph: each camera is a node, each edge indicates an overlapping field of view between two nodes (cameras), and each edge is directed, representing the pose transformation from one camera to another; the initial pose parameters of the whole multi-camera system are then obtained on the vision graph by the shortest-path method. Meanwhile, because noise and coordinate transformation errors affect the accuracy of the entire system, the invention further applies a sparse bundle adjustment algorithm to jointly optimize the obtained three-dimensional marker points and all initial calibration parameters, improving the precision of the calibration parameters. Using this method, the global calibration error of the system is greatly reduced and the robustness of each parameter is significantly improved.
Description of drawings
Fig. 1 is a schematic diagram of the projection relation between the optical imaging probe and a camera.
The labels denote: 1, reference locating block; 2, probe tip; 3, extension rod; 4, probe target body; 5, the 7 marker points (each composed of 6 LED spots). X_w Y_w Z_w denotes the world coordinate system, X_t Y_t Z_t the probe coordinate system, xy the physical image coordinate system, and uv the image pixel coordinate system; P1–P7 denote the 7 marker points on the probe, and r1–r7 are their respective distances to the probe tip.
Fig. 2 is a schematic diagram of the relative pose calibration of two cameras with overlapping fields of view.
Labels: C1–C8 denote the 8 cameras, L1 and L2 the positioning point locations, and R and T the relative pose transformation.
Fig. 3 is a schematic diagram of the construction of the vision graph of the multi-camera system.
Labels: C1–C8 denote the 8 cameras.
Fig. 4 is a schematic diagram of the connection relations of the vision graph before and after the reference camera is selected.
Labels: C1–C8 denote the 8 cameras.
Fig. 5 is a schematic diagram of the global calibration in the reference camera coordinate system.
Labels: C1–C8 denote the 8 cameras, L1 and L2 the positioning point locations, and R and T the relative pose transformation.
Embodiment
The present invention is further described below with reference to the accompanying drawings.
Take a vision system composed of 8 cameras as an example for detailed explanation. The present invention uses the optical imaging probe shown in Fig. 1, mainly composed of the probe tip (2), the extension rod (3), the target body (4), and the 7 marker points (5) with their LED spots. The distribution of the 8 cameras is shown in Fig. 3; the fields of view of all cameras need not overlap jointly, but overlapping fields of view exist between at least every two cameras. According to the spatial distribution of the 8 cameras, the method first selects and fixes the spatial positioning points and, exploiting the rotation-invariance of the optical imaging probe about its tip, completes the independent calibration of each camera and the relative pose calibration of every camera pair. Then, the connection relations of the cameras are built as a vision graph according to graph theory, and the multi-camera calibration is completed in the reference camera coordinate system. Finally, to eliminate the initial calibration errors, a sparse bundle adjustment algorithm globally optimizes the three-dimensional geometric structure and all calibration parameters.
Each step is now described in detail in calibration order.
Phase one: independent calibration of each camera's intrinsic and distortion parameters.
Let the cameras be numbered C1–C8, as shown in Fig. 2.
Step 11, according to the projection relation between the optical imaging probe of Fig. 1 and the camera, establish the world coordinate system X_w Y_w Z_w, the probe coordinate system X_t Y_t Z_t, the physical image coordinate system xy and the pixel coordinate system uv.
Step 12, determine the spatial position of each camera and select one or more spatial positioning points for each camera, following the principle that, during single-camera or multi-camera calibration, the optical imaging probe can always be placed within the field of view of the camera being calibrated. For single-camera calibration, first fix the reference locating block (1) on the corresponding positioning point, rotate the optical imaging probe about the tip (2), and acquire I1 images of the probe in different attitudes with the camera, the acquired images covering the camera's field of view as fully as possible. Adopt threshold segmentation and ellipse fitting to extract the sub-pixel center of each LED spot, and use the centroid method to estimate the center coordinates x̃_j^i (i = 1, …, I1; j = 1, …, 7) of each marker point (each composed of 6 LED spots). Meanwhile, let the coordinates of the N = 7 marker points on the probe be X_t^j in the probe coordinate system and X_w^j in the world coordinate system, and establish the camera projection relation according to formula (1).
Step 13, construct the radius error equation (2) of each marker point according to the geometric constraint that the concentric circle radii are constant.
Step 14, minimize the radius error function (3) to recover the initial values of each camera's intrinsic parameter matrix K, rotation matrix R and translation vector T, and estimate each camera's distortion parameters (k1, k2, d1, d2) by minimizing the back-projection error (a code sketch follows this phase).
Step 15, adopt the sparse bundle adjustment algorithm to optimize the intrinsic and extrinsic parameters of each camera, improving the calibration accuracy.
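For step 14, a minimal sketch of the distortion initialization, assuming Python with NumPy and SciPy and a Brown–Conrady form for the radial (k1, k2) and tangential (d1, d2) terms of model (1); the exact distortion parameterization is our assumption, as the text does not spell it out.

import numpy as np
from scipy.optimize import least_squares

def distort(xn, k1, k2, d1, d2):
    """Apply radial (k1, k2) and tangential (d1, d2) distortion to
    normalized image coordinates xn (N x 2)."""
    x, y = xn[:, 0], xn[:, 1]
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2 * d1 * x * y + d2 * (r2 + 2 * x * x)
    yd = y * radial + d1 * (r2 + 2 * y * y) + 2 * d2 * x * y
    return np.stack([xd, yd], axis=1)

def estimate_distortion(xn_ideal, uv_obs, K):
    """Step 14: with K, R, T already recovered from the radius error
    function, choose (k1, k2, d1, d2) minimizing the back-projection
    error between the distorted ideal projections and the observed
    pixel centers uv_obs."""
    def residuals(p):
        xd = distort(xn_ideal, *p)
        uv = (K @ np.vstack([xd.T, np.ones(len(xd))]))[:2].T  # to pixels
        return (uv - uv_obs).ravel()
    return least_squares(residuals, np.zeros(4)).x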
Phase two: relative pose calibration of every two cameras with overlapping fields of view.
Take any pair of the 8 cameras, ordered with the smaller camera number first and the larger second.
Step 21, as shown in Fig. 2, select and place suitable spatial positioning points (L1, L2, …) and adjust the probe attitude so that the probe lies within the overlapping fields of view of two or more cameras; rotate the optical imaging probe in this region while the cameras sharing the field of view simultaneously acquire stereo images of the probe marker points.
Step 22, adopt threshold segmentation and ellipse fitting to extract the sub-pixel center of each LED spot, and use the centroid method to estimate the center of each marker point; match the marker points of the stereo image pair quickly according to the marker point position constraints and the invariance of geometric moments.
Step 23, estimate the unknown scale parameters λ_1^j and λ_2^j of each camera according to formula (4), recover the fundamental matrix of every camera pair from the relation F = K_2^{-T} [T_2]_× R_2 K_1^{-1}, and recover the epipolar constraint of each camera pair according to formula (5).
Step 24, recover the rotation matrix R and translation vector T of every camera pair, and compute the scale factor λ according to formulas (7) and (8).
Phase three: global calibration of the multi-camera system based on the vision graph.
Step 31, as shown in Fig. 3, take the 8 cameras as nodes and every two cameras with overlapping fields of view as an edge to build the vision graph of the multi-camera system. In the vision graph, the direction of each edge is provisionally determined by the local rotation matrix and translation vector of the initial calibration, producing the directed vision graph shown in Fig. 4(a).
Step 32, select C8 as the reference camera in the vision graph, readjust and establish the connection relations among the cameras, and adopt the shortest-path method to determine the new directed vision graph, as shown in Fig. 4(b).
Step 33, according to formula (9), determine the rotation and translation of each camera in the reference camera coordinate system, as shown in Fig. 5, thereby preliminarily completing the absolute calibration of all cameras with respect to the reference camera.
Phase four: global optimization calibration based on sparse bundle adjustment.
Step 41, from the initial calibration parameters, reconstruct the sparse three-dimensional calibration point set X ∈ R^N by back-projection.
Step 42, establish the back-projection error minimization function (10) from all the calibration data and the three-dimensional point set.
Step 43, use bundle adjustment to minimize the back-projection error function and obtain an accurate calibration result.
Phase five: further optimization of the calibration parameter estimates until the three-dimensional measurement accuracy requirement is satisfied.
The characteristics of the present invention are:
This calibration method uses an imaging probe with fixed optical marker points to calibrate the intrinsic and extrinsic camera parameters and the relative poses between cameras; it needs no other equipment, is simple, and is low in cost.
This calibration method does not require the optical imaging probe to be visible in the fields of view of all cameras, only within the overlapping field of view of each camera pair; it is applicable to multi-camera systems of different distribution structures and can be used for both small-scale and large-scale space calibration.
The vision graph and bundle adjustment optimization adopted by this calibration method can significantly improve calibration efficiency, robustness and precision.

Claims (4)

1. A multi-camera system calibration method based on an optical imaging probe and a vision graph structure, characterized by comprising the following steps:
Step 1, establishment of the hardware:
a plurality of distributed cameras and one pre-calibrated optical imaging probe are adopted; the fields of view of all cameras need not overlap jointly, but overlapping fields of view are required between at least every two cameras;
Step 2, determine the spatial position of each camera and select one or more spatial positioning points for each camera, following the principle that, during single-camera or multi-camera calibration, the optical imaging probe can always be placed within the field of view of the camera being calibrated; for single-camera calibration, first install the reference locating block (1) on the corresponding positioning point, place the optical imaging probe within the camera's field of view and seat it against the reference locating block (1), and rotate the probe about its tip (2), the seven marker points (5) on the probe target body (4) rotating concentrically about the tip (2) while the camera acquires multiple images of the probe in different attitudes, wherein each marker point (5) is composed of six LEDs whose center of gravity is the marker point center; next, adopt threshold segmentation and ellipse fitting to extract the sub-pixel center coordinates of each LED in every image, and use the centroid method to compute the center coordinates of each marker point (5); finally, recover the intrinsic parameter matrix and distortion parameters of each camera from the concentric rotation relation of the marker points (5) and the calibrated invariance of their spatial distances;
Step 3, reselect and place the spatial positioning points and adjust the probe attitude so that the optical imaging probe lies within the overlapping fields of view of two or more cameras; within the overlapping field of view, rotate the optical imaging probe while the cameras sharing that field of view simultaneously acquire images of the probe's LED marker points; through image processing, center identification and marker point matching, and from the geometric constraints among the marker points, recover the fundamental matrix, essential matrix, epipolar geometry, rotation matrix and translation vector between each local camera pair;
Step 4, take each camera as a node and each camera pair with overlapping fields of view as an edge to build the vision graph of the multi-camera system, the direction of each edge being determined by the local rotation matrix and translation vector of the initial calibration and pointing from one camera to the other camera it transforms into; in the vision graph, first select a reference camera and, starting from it, establish the connection relations among all cameras; compute the rotation matrix and translation vector of each camera relative to the reference camera by the shortest-path method, achieving the global initial calibration of the multi-camera system;
Step 5, according to the global initial calibration result of the multi-camera system, back-project the marker points acquired and identified in steps 2 and 3 into the reference camera coordinate system, thereby establishing the three-dimensional calibration point set in the world coordinate system; jointly optimize all calibration parameters and the three-dimensional calibration point set with a sparse bundle adjustment algorithm to obtain a robust and high-precision calibration result;
Step 6, if the calibration result meets the measuring accuracy requirement, the calibration ends; if not, increase the number of probe positioning points, acquire images again, add the newly identified and reconstructed three-dimensional calibration points to the original marker point set, optimize all calibration parameters and the three-dimensional calibration point set again with the sparse bundle adjustment algorithm, and re-calibrate each parameter.
2. The multi-camera system calibration method based on an optical imaging probe and a vision graph structure as claimed in claim 1, characterized in that said step 2 comprises the following steps:
Step 21, establish the camera pinhole projection model:
First, establish the world coordinate system X_w Y_w Z_w, the probe coordinate system X_t Y_t Z_t, the physical image coordinate system xy and the pixel coordinate system uv. Let there be N = 7 marker points on the optical imaging probe, let X_t^j denote the coordinate of marker point j in the probe coordinate system and X_w^j its coordinate in the world coordinate system, let (R, T) be the rotation and translation of the camera with respect to the world coordinate system, and let K be the camera intrinsic parameter matrix, comprising 5 parameters: the horizontal and vertical focal lengths (f_x, f_y), the image skew factor s and the principal point coordinates (u0, v0); (k1, k2) and (d1, d2) are the radial and tangential distortion parameters, respectively. The 7 marker points (5) on the probe target body (4) satisfy the following projection relation:

λ_j x̃_j = K R X_t^j + K T,  j = 1, 2, …, 7   (1)

where λ_j is the projection depth of marker point j and x̃_j is the normalized coordinate of marker point j in the image;
Step 22, calibration image acquisition and marker point recognition: select the spatial positioning points according to the positioning point selection principle and fix the reference locating block (1); rotate the optical imaging probe about the tip (2) and acquire I1 images of the probe in different attitudes with the camera, the acquired images covering the camera's field of view as fully as possible; adopt threshold segmentation and ellipse fitting to extract the sub-pixel center of each LED spot, and use the centroid method to estimate the center of each marker point;
Step 23, establish the error function for the distance of each marker point to the probe tip during rotation, using the invariance of the concentric circle radii: when the optical imaging probe rotates about the tip (2), the 7 marker points rotate concentrically about the tip (2) with radii r_j; because the positions of the marker points have been accurately calibrated, all 7 turning radii r_j are constant, and each marker point keeps a constant distance to the tip throughout the rotation; for the I1 images acquired during rotation, each marker yields the radius error equation:

Σ_{i=1}^{I1} (r_j^i − r_j) = 0   (2)

where r_j^i is the turning radius of the j-th marker point (j = 1, …, 7) estimated in the i-th rotation, and r_j is the calibrated value of the j-th marker point's turning radius;
Step 24, estimate the initial values of the camera's intrinsic and extrinsic parameters: because of errors, solve for the initial values of the camera intrinsic parameter matrix K, rotation matrix R and translation vector T by minimizing the radius error function (3), and estimate the initial distortion parameters by minimizing the back-projection error:

Σ_{j=1}^{7} Σ_{i=1}^{I1} (r_j^i − r_j)   (3)

Step 25, adopt the sparse bundle adjustment algorithm to further improve the calibration accuracy.
3. The multi-camera system calibration method based on an optical imaging probe and a vision graph structure as claimed in claim 1, characterized in that said step 4 comprises the following steps:
Step 41, construct the vision graph of the multi-camera system based on graph theory:
take each camera as a node and each camera pair with overlapping fields of view as an edge to build the vision graph of the multi-camera system; in the vision graph, the direction of each edge is provisionally determined by the local rotation matrix and translation vector of the initial calibration, pointing from one camera to the other camera it transforms into;
Step 42, select the reference camera and re-establish the connection relations among the cameras:
select one reference camera in the vision graph, taking into account its relations with adjacent cameras; starting from the reference camera, readjust and establish the connection relations among the cameras;
Step 43, adopt the shortest-path method to determine the rotation and translation of each camera in the reference camera coordinate system:
solving a shortest path from the reference camera to each camera gives that camera's absolute pose. Let C_i, C_j, C_k be cameras linked along some path of the vision graph; from the pairwise calibration, the transformations (R_ij, T_ij) from C_i to C_j and (R_jk, T_jk) from C_j to C_k are known, and the relation from C_i to C_k is computed as:

R_ik = R_ij R_jk,  T_ik = T_ij + R_ij T_jk   (9)

If the path from the reference camera contains more than two nodes, the above formula is applied repeatedly, completing the absolute calibration of all cameras with respect to the reference camera.
4. The multi-camera system calibration method based on an optical imaging probe and a vision graph structure as claimed in claim 2, characterized in that said step 5 comprises the following steps:
Step 51, obtain the initial three-dimensional calibration point set:
from step 22, collect and identify the image coordinates of the marker points at the different calibration positions in all cameras; with all calibration parameters obtained in the preceding 4 steps, back-projection yields a sparse three-dimensional calibration point set; because only the simple connection relations among the cameras were used in the calibration, both this reconstructed sparse point set and the calibration parameters contain errors;
Step 52, construct the back-projection error function:
suppose there are n three-dimensional points and m cameras, and the projection of point i on the image of camera j is x_ij; the parameters of each camera are expressed by a vector a_j, each three-dimensional point by a vector b_i, the function Q(a_j, b_i) defines the projection of a three-dimensional point onto the camera image plane, and d(x, y) denotes the Euclidean distance between image points x and y. According to the projective geometry relations, the following back-projection error function is established:

min_{a_j, b_i} Σ_{i=1}^{n} Σ_{j=1}^{m} d(Q(a_j, b_i), x_ij)^2   (10)

Step 53, use bundle adjustment to minimize the back-projection error function and obtain an accurate calibration result.
CN2010105852616A 2010-12-13 2010-12-13 Multi-camera system calibrating method based on optical imaging probe and visual graph structure Active CN102034238B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010105852616A CN102034238B (en) 2010-12-13 2010-12-13 Multi-camera system calibrating method based on optical imaging probe and visual graph structure

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2010105852616A CN102034238B (en) 2010-12-13 2010-12-13 Multi-camera system calibrating method based on optical imaging probe and visual graph structure

Publications (2)

Publication Number Publication Date
CN102034238A CN102034238A (en) 2011-04-27
CN102034238B true CN102034238B (en) 2012-07-18

Family

ID=43887091

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010105852616A Active CN102034238B (en) 2010-12-13 2010-12-13 Multi-camera system calibrating method based on optical imaging probe and visual graph structure

Country Status (1)

Country Link
CN (1) CN102034238B (en)

Families Citing this family (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8965057B2 (en) * 2012-03-02 2015-02-24 Qualcomm Incorporated Scene structure-based self-pose estimation
CN103377471B (en) * 2012-04-16 2016-08-03 株式会社理光 Object positioning method and device, optimum video camera are to determining method and apparatus
CN103196370B (en) * 2013-04-01 2015-05-27 北京理工大学 Measuring method and measuring device of conduit connector space pose parameters
CN103617622A (en) * 2013-12-10 2014-03-05 云南大学 Pose estimation orthogonal iterative optimization algorithm
CN104766291B (en) * 2014-01-02 2018-04-10 株式会社理光 Multiple cameras scaling method and system
CN103792950B (en) * 2014-01-06 2016-05-18 中国航空无线电电子研究所 A kind of method that uses the stereoscopic shooting optical parallax deviation correcting device based on piezoelectric ceramics to carry out error correction
CN105072414B (en) * 2015-08-19 2019-03-12 浙江宇视科技有限公司 A kind of target detection and tracking and system
CN105894505A (en) * 2016-03-30 2016-08-24 南京邮电大学 Quick pedestrian positioning method based on multi-camera geometrical constraint
CN106060524B (en) * 2016-06-30 2017-12-29 北京邮电大学 The method to set up and device of a kind of video camera
US10037626B2 (en) * 2016-06-30 2018-07-31 Microsoft Technology Licensing, Llc Interaction with virtual objects based on determined restrictions
CN106408614B (en) * 2016-09-27 2019-03-15 中国船舶工业系统工程研究院 Camera intrinsic parameter Calibration Method and system suitable for field application
CN106799732A (en) * 2016-12-07 2017-06-06 中国科学院自动化研究所 For the control system and its localization method of the motion of binocular head eye coordination
CN106780630A (en) * 2017-01-09 2017-05-31 上海商泰汽车信息系统有限公司 Demarcate panel assembly, vehicle-mounted camera scaling method and device, system
CN106843224B (en) * 2017-03-15 2020-03-10 广东工业大学 Method and device for cooperatively guiding transport vehicle through multi-view visual positioning
CN109813335B (en) * 2017-11-21 2021-02-09 武汉四维图新科技有限公司 Calibration method, device and system of data acquisition system and storage medium
CN109099883A (en) * 2018-06-15 2018-12-28 哈尔滨工业大学 The big visual field machine vision metrology of high-precision and caliberating device and method
CN108827156B (en) * 2018-08-24 2021-08-10 合肥工业大学 Industrial photogrammetry reference scale
CN110969662B (en) * 2018-09-28 2023-09-26 杭州海康威视数字技术股份有限公司 Method and device for calibrating internal parameters of fish-eye camera, calibration device controller and system
CN111308448B (en) * 2018-12-10 2022-12-06 杭州海康威视数字技术股份有限公司 External parameter determining method and device for image acquisition equipment and radar
CN109785373B (en) * 2019-01-22 2022-12-23 东北大学 Speckle-based six-degree-of-freedom pose estimation system and method
CN110176035B (en) * 2019-05-08 2021-09-28 深圳市易尚展示股份有限公司 Method and device for positioning mark point, computer equipment and storage medium
CN110310337B (en) * 2019-06-24 2022-09-06 西北工业大学 Multi-view light field imaging system full-parameter estimation method based on light field fundamental matrix
CN110782498B (en) * 2019-09-26 2022-03-15 北京航空航天大学 Rapid universal calibration method for visual sensing network
CN111127560B (en) * 2019-11-11 2022-05-03 江苏濠汉信息技术有限公司 Calibration method and system for three-dimensional reconstruction binocular vision system
CN110889901B (en) * 2019-11-19 2023-08-08 北京航空航天大学青岛研究院 Large-scene sparse point cloud BA optimization method based on distributed system
CN111243021A (en) * 2020-01-06 2020-06-05 武汉理工大学 Vehicle-mounted visual positioning method and system based on multiple combined cameras and storage medium
CN111598954A (en) * 2020-04-21 2020-08-28 哈尔滨拓博科技有限公司 Rapid high-precision camera parameter calculation method
CN111890354B (en) * 2020-06-29 2022-01-11 北京大学 Robot hand-eye calibration method, device and system
CN112781496B (en) * 2021-01-20 2022-03-08 湘潭大学 Measuring head pose calibration method of non-contact measuring system
CN113077519B (en) * 2021-03-18 2022-12-09 中国电子科技集团公司第五十四研究所 Multi-phase external parameter automatic calibration method based on human skeleton extraction
CN113963058B (en) * 2021-09-07 2022-11-29 于留青 On-line calibration method and device for CT (computed tomography) of preset track, electronic equipment and storage medium
CN114705122B (en) * 2022-04-13 2023-05-05 成都飞机工业(集团)有限责任公司 Large-view-field stereoscopic vision calibration method
CN115578694A (en) * 2022-11-18 2023-01-06 合肥英特灵达信息技术有限公司 Video analysis computing power scheduling method, system, electronic equipment and storage medium
CN116188602A (en) * 2023-04-26 2023-05-30 西北工业大学青岛研究院 High-precision calibration method for underwater multi-vision three-dimensional imaging system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101231749A (en) * 2007-12-20 2008-07-30 昆山华恒工程技术中心有限公司 Method for calibrating industry robot
CN101285680A (en) * 2007-12-12 2008-10-15 中国海洋大学 Line structure optical sensor outer parameter calibration method
CN101334267A (en) * 2008-07-25 2008-12-31 西安交通大学 Digital image feeler vector coordinate transform calibration and error correction method and its device
US7554575B2 (en) * 2005-10-28 2009-06-30 Seiko Epson Corporation Fast imaging system calibration

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7961936B2 (en) * 2007-03-30 2011-06-14 Intel Corporation Non-overlap region based automatic global alignment for ring camera image mosaic

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7554575B2 (en) * 2005-10-28 2009-06-30 Seiko Epson Corporation Fast imaging system calibration
CN101285680A (en) * 2007-12-12 2008-10-15 中国海洋大学 Line structure optical sensor outer parameter calibration method
CN101231749A (en) * 2007-12-20 2008-07-30 昆山华恒工程技术中心有限公司 Method for calibrating industry robot
CN101334267A (en) * 2008-07-25 2008-12-31 西安交通大学 Digital image feeler vector coordinate transform calibration and error correction method and its device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Bent David Olsen et al. Calibrating a camera network using a domino grid. Pattern Recognition. 2001. *
孙佳星 et al. Research on a dual-camera light-pen 3D coordinate measuring system. 《计测技术》 (Metrology & Measurement Technology). 2009, vol. 29, no. 1. *

Also Published As

Publication number Publication date
CN102034238A (en) 2011-04-27

Similar Documents

Publication Publication Date Title
CN102034238B (en) Multi-camera system calibrating method based on optical imaging probe and visual graph structure
US9134127B2 (en) Determining tilt angle and tilt direction using image processing
CN102376089B (en) Target correction method and system
US8315425B2 (en) Method for comparison of 3D computer model and as-built situation of an industrial plant
CN101852623B (en) On-track calibration method for internal element of satellite optical remote sensing camera
CN106408601B (en) A kind of binocular fusion localization method and device based on GPS
Gerke Using horizontal and vertical building structure to constrain indirect sensor orientation
CN103256920A (en) Determining tilt angle and tilt direction using image processing
CN104200086A (en) Wide-baseline visible light camera pose estimation method
CN104835159A (en) Digital image correction method for continuous variable-focal-length optical imaging system
CN103226840B (en) Full-view image splicing and measurement system and method
US20130113897A1 (en) Process and arrangement for determining the position of a measuring point in geometrical space
CN104240262A (en) Calibration device and calibration method for outer parameters of camera for photogrammetry
CN109859269B (en) Shore-based video auxiliary positioning unmanned aerial vehicle large-range flow field measuring method and device
CN104613929A (en) Method for automatic collimation of cubic mirror based on machine vision
CN112013830A (en) Accurate positioning method for unmanned aerial vehicle inspection image detection defects of power transmission line
CN108154535B (en) Camera calibration method based on collimator
Crispel et al. All-sky photogrammetry techniques to georeference a cloud field
Yu et al. Automatic extrinsic self-calibration of mobile LiDAR systems based on planar and spherical features
CN110986888A (en) Aerial photography integrated method
Cavegn et al. A systematic comparison of direct and image-based georeferencing in challenging urban areas
Eugster et al. Integrated georeferencing of stereo image sequences captured with a stereovision mobile mapping system–approaches and practical results
CN106940185A (en) A kind of localization for Mobile Robot and air navigation aid based on depth camera
CN105203024A (en) Multiple sensor integrated icing photogrammetric method for power transmission line
CN108195359A (en) The acquisition method and system of spatial data

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: SUZHOU DIKA TESTING TECHNOLOGY CO., LTD.

Free format text: FORMER OWNER: XI AN JIAOTONG UNIV.

Effective date: 20150528

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 710049 XI AN, SHAANXI PROVINCE TO: 215505 SUZHOU, JIANGSU PROVINCE

TR01 Transfer of patent right

Effective date of registration: 20150528

Address after: No. 58 Lianfeng Road, Changshu, Suzhou, Jiangsu 215505

Patentee after: Suzhou cartesan Testing Technology Co. Ltd

Address before: No. 28 Xianning West Road, Xi'an, Shaanxi 710049, China

Patentee before: Xi'an Jiaotong University

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20110427

Assignee: Xi'an like Photoelectric Technology Co., Ltd.

Assignor: Suzhou cartesan Testing Technology Co. Ltd

Contract record no.: 2015610000089

Denomination of invention: Multi-camera system calibrating method based on optical imaging test head and visual graph structure

Granted publication date: 20120718

License type: Exclusive License

Record date: 20150902

LICC Enforcement, change and cancellation of record of contracts on the licence for exploitation of a patent or utility model