CN105118055A - Camera positioning correction calibration method and system - Google Patents

Camera positioning correction calibration method and system

Info

Publication number
CN105118055A
CN105118055A
Authority
CN
China
Prior art keywords
camera
focusing
video camera
theta
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510489677.0A
Other languages
Chinese (zh)
Other versions
CN105118055B (en)
Inventor
刘戈三
顾晓娟
王春水
唐修文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BEIJING FILM ACADEMY
Original Assignee
BEIJING FILM ACADEMY
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BEIJING FILM ACADEMY filed Critical BEIJING FILM ACADEMY
Priority to CN201510489677.0A priority Critical patent/CN105118055B/en
Publication of CN105118055A publication Critical patent/CN105118055A/en
Application granted granted Critical
Publication of CN105118055B publication Critical patent/CN105118055B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Abstract

The present invention discloses a camera position correction calibration method and a system implementing the method, belonging to the technical field of virtual production. Using the intrinsic relationships among the lens parameters, the imaging plane, and an optical tracking device, the method takes the world coordinates and image point coordinates of N marker points on a background screen together with the lens intrinsic parameters and lens distortion parameters, and computes the rotation matrix between the camera coordinate system and the world coordinate system and the translation vector of the camera perspective center in the world coordinate system. Combined with the current position information reported by an external camera attitude tracking device in the current state, this yields the camera correction calibration information and the field of view, from which a lookup table relating focus distance and focal length is built. When the camera position, lens focal length, or focus distance changes, the position of the virtual camera of the virtual production system is located and corrected fully automatically, so that the live-action video frame and the computer-generated virtual frame match exactly.

Description

Camera position correction calibration method and system
Technical field
The present invention relates to a camera position correction calibration method and a system implementing the method, and belongs to the technical field of virtual production.
Background art
The virtual camera attitude and field of view are two items of information essential to a computer graphics rendering engine. In a virtual production workflow, to obtain the desired visual effect, the live-action picture must match the virtual picture. Matching means that the virtual picture can reproduce the shooting effect of the real camera: if the focal length or focus distance of the lens changes, the field of view of the virtual camera changes correspondingly in real time; when the camera position changes, the virtual camera position in the graphics engine changes accordingly. Newauto (Beijing) Video Technology Co., Ltd. proposed a method and device for obtaining calibration parameters (application No. 200810223256.3), which uses a biplane coordinate calibration method or a linear camera calibration method to compute the intrinsic and extrinsic parameters of a camera from the spatial coordinates of sufficiently many known reference points; the intrinsic parameters cover the optical and geometric properties of the camera, such as focal length, scale factor, and lens distortion, and the extrinsic parameters cover the position and orientation of the camera coordinate system relative to the world coordinate system; an ant colony algorithm then optimizes the parameters and generates an evaluation function. However, that document gives no concrete embodiment or technical solution, nor does it state which parameters are used in the computation. The camera parameter calibration method and device proposed by the Material Evidence Identification Center of the Ministry of Public Security (application No. 201010144764.X) is also a camera calibration method, but the problem it solves differs from the present invention: it builds a virtual grid from the pixel coordinates of a straight-line region near the principal point in the calibration target image to compute the camera distortion coefficients and complete the intrinsic calibration, and has nothing to do with the camera extrinsic parameters. The video image positional relationship correction apparatus, and the steering assistance device and video image positional relationship correction method using it, proposed by Toyota Industries Corporation (application No. 200480041109.4) computes coordinate conversion parameters from the on-monitor coordinate deviation between reference points of the video image actually captured and displayed by the camera and virtual target points, and thereby derives the monitor coordinates of the virtual targets.
Beijing Jingwei Hirain Technologies Co., Ltd. proposed a simulation bench camera calibration method and real-time machine (application No. 201510134249.6): a virtual camera is set up to capture and image a calibration target, establishing the relation between any point of the target and the virtual camera image plane coordinates; the imaging is then captured by a real camera to establish the correspondence between the image points of any target point in the virtual camera and in the real camera image plane coordinates; the two relations are combined to obtain the correspondence between any target point and the real camera image plane coordinates, thereby eliminating the parameter differences between the virtual and real cameras. That method requires a virtual camera to be set up first, whereas the present invention directly computes the field of view consistent with the actual camera and the attitude information of the camera in the world coordinate system (including rotation angles and camera position) and passes them to the image rendering engine.
To achieve this, the position, attitude, and field of view of the camera in a given coordinate system must be obtained in real time. The common practice is to mount an external camera attitude tracking device, such as an optical tracking device, on the camera, measure the positions of the camera and the sensor of the tracking device along with their offset, and estimate the camera's position and attitude from them. Camera tracking in existing real-time virtual preview systems relies mainly on manual measurement followed by manual adjustment against green-screen marker points or other feature points, and it ignores factors such as focus distance and field of view. The result is low efficiency and low precision: the video image from the camera becomes inconsistent with the computer-generated virtual scene, the virtual scene looks false, and the composite of the virtual and real scenes is unsatisfactory.
Summary of the invention
To make the virtual background and the live-action image match exactly, the camera position used must be identical to the virtual camera position accepted by the computer graphics rendering engine. The virtual camera of the rendering engine images according to the pinhole model, so the matching real camera position is the camera's perspective center, i.e. the intersection of the entrance pupil and the optical axis. The camera correction calibration information finally obtained is therefore the positional difference between the external camera attitude tracking device (such as an optical tracking device) and the camera perspective center. Based on this principle, the present invention provides a method and system for correction calibration of the camera position. Through the intrinsic relationships among the lens parameters, the imaging plane, and the optical tracking device, the position of the virtual camera of the virtual production system is corrected automatically, and the field of view, camera position, and camera attitude consistent with the actual camera are computed and passed to the image rendering engine. Thus, whenever the camera position, lens focal length, or focus distance changes, the camera correction calibration parameters are obtained automatically in real time, the virtual camera coordinates and attitude of the rendering engine are derived from them, and the live-action video frame matches the computer-generated virtual frame exactly.
To solve the above technical problem, the invention provides a method for correction calibration of a camera position, comprising the following steps:
S1. Extract the coordinates of N marker points A_1, ..., A_N on the background screen in the world coordinate system: A_1(x_1, y_1, z_1), ..., A_N(x_N, y_N, z_N), where the coordinate of the i-th marker point in the world coordinate system is A_i(x_i, y_i, z_i), i = 1..N. The N marker points include at least 3 non-collinear marker points; each marker point coordinate is the center position of the marker, and all marker points lie within the picture captured by the camera;
S2. Determine focus distance sample points in increasing order: FD_1, FD_2, ..., FD_j, ..., FD_J, where j = 1..J and FD_1 and FD_J are the minimum and maximum focus distances of the lens, respectively;
If the lens is a zoom lens, also determine focal length sample points in increasing order: FL_1, FL_2, ..., FL_k, ..., FL_K, where k = 1..K and FL_1 and FL_K are the minimum and maximum focal lengths of the lens, respectively;
S3. If the lens is a prime lens, obtain the lens intrinsic parameters and lens distortion parameters corresponding to each focus distance sample point FD_j;
If the lens is a zoom lens, obtain the lens intrinsic parameters and lens distortion parameters corresponding to each focus distance sample point FD_j and each focal length sample point FL_k;
The lens intrinsic parameters and lens distortion parameters are obtained by a lens calibration process;
S4. If the lens is a prime lens, when the focus distance takes the j-th sample point FD_j, adjust the camera position so that the marker points on the background screen are imaged sharply;
If the lens is a zoom lens, when the focus distance takes the j-th sample point FD_j and the focal length is at the k-th sample point FL_k, adjust the camera position so that the marker points on the background screen are imaged sharply;
S5. Obtain the current position information provided by the external camera attitude tracking device in the current state, comprising the rotation Euler angles that make the world coordinate system, after translation, coincide with the tracking device's self-defined coordinate system through successive rotations about its X, Y, and Z axes, and the position coordinates of the tracking device in the world coordinate system;
S6. Using the world coordinates of the N marker points obtained in S1, the image point coordinates of the marker points in the camera imaging coordinate system, and the lens intrinsic parameters and lens distortion parameters obtained in S3, obtain the rotation matrix between the camera coordinate system and the world coordinate system and the translation vector of the camera perspective center in the world coordinate system, i.e. the attitude information of the camera in the world coordinate system;
S7. Using the current position information provided by the external camera attitude tracking device obtained in S5 and the camera attitude information in the world coordinate system obtained in S6, obtain the camera correction calibration information and the field of view;
S8. If the lens is a prime lens: adjust the focus distance and perform steps S4-S7 for each focus distance sample point FD_j until all sample points have been traversed, obtaining the correction calibration information and field of view corresponding to each sample point, and build the focus distance - correction calibration information - field of view lookup table LUT1;
If the lens is a zoom lens: adjust the focus distance and focal length and perform steps S4-S7 for each focus distance sample point FD_j and each focal length sample point FL_k until all sample points have been traversed, obtaining the correction calibration information and field of view corresponding to each pair of sample points, and build the focus distance - focal length - correction calibration information - field of view lookup table LUT2;
S9. If the lens is a prime lens: according to lookup table LUT1, obtain the corresponding correction calibration information and field of view from the focus distance.
If the lens is a zoom lens: according to lookup table LUT2, obtain the corresponding correction calibration information and field of view from the focus distance and focal length.
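For concreteness, the lookup tables LUT1 and LUT2 can be viewed as maps from the sample points to the correction calibration information and field of view. A minimal Python sketch of this structure (the field names and example values are illustrative, not taken from the patent):

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class CorrectionEntry:
    quaternion: Tuple[float, float, float, float]   # correction quaternion (theta, nx, ny, nz)
    translation: Tuple[float, float, float]          # translation vector (tx, ty, tz)
    fov: float                                       # field of view angle (radians)

# LUT1 for a prime lens: keyed by focus distance sample point FD_j
LUT1: Dict[float, CorrectionEntry] = {}

# LUT2 for a zoom lens: keyed by (focus distance FD_j, focal length FL_k)
LUT2: Dict[Tuple[float, float], CorrectionEntry] = {}

# After steps S4-S7 have been run at one sample point (illustrative values):
LUT1[1.2] = CorrectionEntry((0.02, 0.0, 0.0, 1.0), (0.05, -0.02, 0.11), 0.83)
```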
Further, the camera position correction calibration method proposed by the invention also includes the following preferred technical solutions:
One. Preferably, in step S3:
(1) When the lens is a prime lens:
For the j-th focus distance sample point FD_j, the corresponding lens intrinsic parameters are f_xj, f_yj, c_xj, c_yj, where:
f_xj: when the focus distance is FD_j, the ratio of the distance f_j from the intersection of the lens exit pupil and the optical axis to the imaging plane, to the horizontal width dx of each cell of the camera imager, i.e. f_xj = f_j / dx;
f_yj: when the focus distance is FD_j, the ratio of the distance f_j from the intersection of the lens exit pupil and the optical axis to the imaging plane, to the vertical height dy of each cell of the camera imager, i.e. f_yj = f_j / dy;
c_xj: when the focus distance is FD_j, the horizontal offset, in pixels, of the intersection of the lens optical axis and the imaging plane from the center of the imaging plane;
c_yj: when the focus distance is FD_j, the vertical offset, in pixels, of the intersection of the lens optical axis and the imaging plane from the center of the imaging plane;
For the j-th focus distance sample point FD_j, the corresponding lens distortion parameters are k_1j, k_2j, k_3j, p_1j, and p_2j, where k_1j, k_2j, k_3j are the radial distortion parameters at focus distance FD_j; k_3j is an optional parameter: all three radial distortion parameters k_1j, k_2j, k_3j are used when the lens is a fisheye lens, while for a non-fisheye lens k_3j = 0, i.e. other lenses use only the two radial distortion parameters k_1j and k_2j; p_1j and p_2j are the tangential distortion parameters at focus distance FD_j;
Further, the intrinsic matrix M_j at each focus distance FD_j is obtained:

$$M_j = \begin{bmatrix} f_{xj} & 0 & c_{xj} \\ 0 & f_{yj} & c_{yj} \\ 0 & 0 & 1 \end{bmatrix}.$$
(2) When the lens is a zoom lens:
For the j-th focus distance sample point FD_j and the k-th focal length sample point FL_k, the corresponding lens intrinsic parameters are f_xjk, f_yjk, c_xjk, c_yjk, where:
f_xjk: when the focus distance is FD_j and the focal length is FL_k, the ratio of the distance f_jk from the intersection of the lens exit pupil and the optical axis to the imaging plane, to the horizontal width dx of each imager cell, i.e. f_xjk = f_jk / dx;
f_yjk: when the focus distance is FD_j and the focal length is FL_k, the ratio of the distance f_jk from the intersection of the lens exit pupil and the optical axis to the imaging plane, to the vertical height dy of each imager cell, i.e. f_yjk = f_jk / dy;
c_xjk: when the focus distance is FD_j and the focal length is FL_k, the horizontal offset, in pixels, of the intersection of the optical axis and the imaging plane from the center of the imaging plane;
c_yjk: when the focus distance is FD_j and the focal length is FL_k, the vertical offset, in pixels, of the intersection of the optical axis and the imaging plane from the center of the imaging plane;
For the j-th focus distance sample point FD_j and the k-th focal length sample point FL_k, the corresponding lens distortion parameters are k_1jk, k_2jk, k_3jk, p_1jk, and p_2jk, where k_1jk, k_2jk, and k_3jk are the radial distortion parameters at focus distance FD_j and focal length FL_k; k_3jk is an optional parameter: all three radial distortion parameters k_1jk, k_2jk, k_3jk are used when the lens is a fisheye lens, while for a non-fisheye lens k_3jk = 0, i.e. other lenses use only the two radial distortion parameters k_1jk and k_2jk; p_1jk and p_2jk are the tangential distortion parameters at focus distance FD_j and focal length FL_k;
And the intrinsic matrix M_jk at each focus distance FD_j and focal length FL_k is obtained:

$$M_{jk} = \begin{bmatrix} f_{xjk} & 0 & c_{xjk} \\ 0 & f_{yjk} & c_{yjk} \\ 0 & 0 & 1 \end{bmatrix}.$$
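The intrinsic matrices M_j and M_jk are assembled directly from the calibrated parameters above. A minimal numpy sketch (the function name and argument layout are illustrative):

```python
import numpy as np

def intrinsic_matrix(f: float, dx: float, dy: float, cx: float, cy: float) -> np.ndarray:
    """Intrinsic matrix for one (focus distance, focal length) sample point.

    f      : distance from the exit pupil / optical axis intersection to the imaging plane
    dx, dy : horizontal width and vertical height of one imager cell
    cx, cy : pixel offset of the optical axis / imaging plane intersection
             from the imaging plane center
    """
    fx, fy = f / dx, f / dy
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])
```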
Two. Preferably, in step S5:
(1) When the lens is a prime lens:
The current position information provided by the external camera attitude tracking device is [θ_xj, θ_yj, θ_zj, T_xj, T_yj, T_zj], where [θ_xj, θ_yj, θ_zj] are the rotation Euler angles that make the world coordinate system, after translation, coincide with the tracking device's self-defined coordinate system through successive rotations about the X, Y, and Z axes, and [T_xj, T_yj, T_zj] are the position coordinates of the tracking device in the world coordinate system. The current position information provided by the tracking device is expressed in matrix form:

$$H_{Ij} = \begin{bmatrix} R_{Ij} & -R_{Ij} T_{Ij} \\ \mathbf{0} & 1 \end{bmatrix};$$

where

$$R_{Ij} = \begin{bmatrix} \cos\theta_{zj} & \sin\theta_{zj} & 0 \\ -\sin\theta_{zj} & \cos\theta_{zj} & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\theta_{yj} & 0 & -\sin\theta_{yj} \\ 0 & 1 & 0 \\ \sin\theta_{yj} & 0 & \cos\theta_{yj} \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta_{xj} & \sin\theta_{xj} \\ 0 & -\sin\theta_{xj} & \cos\theta_{xj} \end{bmatrix}$$

$$T_{Ij} = [T_{xj}\ T_{yj}\ T_{zj}]^T$$

$$\mathbf{0} = [0, 0, 0];$$
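For illustration, H_Ij can be assembled from one tracker reading as follows; a sketch assuming angles in radians and the rotation order R_Ij = R_z R_y R_x defined above (the function name is illustrative):

```python
import numpy as np

def tracker_pose_matrix(theta_x, theta_y, theta_z, T):
    """H_I from a tracker reading [theta_x, theta_y, theta_z, Tx, Ty, Tz]."""
    cx, sx = np.cos(theta_x), np.sin(theta_x)
    cy, sy = np.cos(theta_y), np.sin(theta_y)
    cz, sz = np.cos(theta_z), np.sin(theta_z)
    Rz = np.array([[cz, sz, 0.0], [-sz, cz, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, -sy], [0.0, 1.0, 0.0], [sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, sx], [0.0, -sx, cx]])
    R_I = Rz @ Ry @ Rx
    T_I = np.asarray(T, dtype=float).reshape(3)
    H_I = np.eye(4)
    H_I[:3, :3] = R_I
    H_I[:3, 3] = -R_I @ T_I          # upper-right block is -R_I * T_I
    return H_I
```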
(2) When the lens is a zoom lens:
The current position information provided by the external camera attitude tracking device is [θ_xjk, θ_yjk, θ_zjk, T_xjk, T_yjk, T_zjk], where [θ_xjk, θ_yjk, θ_zjk] are the rotation Euler angles that make the world coordinate system, after translation, coincide with the tracking device's self-defined coordinate system through successive rotations about the X, Y, and Z axes, and [T_xjk, T_yjk, T_zjk] are the position coordinates of the tracking device in the world coordinate system. The current position information is expressed in matrix form:

$$H_{Ijk} = \begin{bmatrix} R_{Ijk} & -R_{Ijk} T_{Ijk} \\ \mathbf{0} & 1 \end{bmatrix};$$

where

$$R_{Ijk} = \begin{bmatrix} \cos\theta_{zjk} & \sin\theta_{zjk} & 0 \\ -\sin\theta_{zjk} & \cos\theta_{zjk} & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\theta_{yjk} & 0 & -\sin\theta_{yjk} \\ 0 & 1 & 0 \\ \sin\theta_{yjk} & 0 & \cos\theta_{yjk} \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta_{xjk} & \sin\theta_{xjk} \\ 0 & -\sin\theta_{xjk} & \cos\theta_{xjk} \end{bmatrix}$$

$$T_{Ijk} = [T_{xjk}\ T_{yjk}\ T_{zjk}]^T$$

$$\mathbf{0} = [0, 0, 0]$$
Three. Preferably, in step S6:
(1) When the lens is a prime lens:
When the current focus distance is FD_j, the rotation matrix R_pj between the camera coordinate system and the world coordinate system, the translation vector T_pj of the camera perspective center in the world coordinate system, and the attitude matrix H_pj of the camera in the world coordinate system are obtained as follows:
From the image point coordinates (x^c_ij, y^c_ij) of the i-th marker point and the lens distortion parameters at the current focus distance FD_j, obtain the corrected image point coordinates (x'_ij, y'_ij) of the i-th marker point:

$$x'_{ij} = x^c_{ij} + x^c_{ij}(1 + k_{1j} r_{ij}^2 + k_{2j} r_{ij}^4 + k_{3j} r_{ij}^6) + 2 p_{1j} y^c_{ij} + p_{2j}(r_{ij}^2 + 2 (x^c_{ij})^2)$$

$$y'_{ij} = y^c_{ij} + y^c_{ij}(1 + k_{1j} r_{ij}^2 + k_{2j} r_{ij}^4 + k_{3j} r_{ij}^6) + p_{1j}(r_{ij}^2 + 2 (y^c_{ij})^2) + 2 p_{2j} x^c_{ij}$$

$$r_{ij} = \sqrt{(x^c_{ij})^2 + (y^c_{ij})^2}$$

where k_3j = 0 when the lens is not a fisheye lens;
The relation between the world coordinates (x_i, y_i, z_i) of the i-th marker point and its corrected image point coordinates (x'_ij, y'_ij) is expressed as:

$$\begin{bmatrix} x'_{ij} \\ y'_{ij} \\ 1 \end{bmatrix} = \lambda_j M_j [R_{pj}, T_{pj}] \begin{bmatrix} x_i \\ y_i \\ z_i \\ 1 \end{bmatrix} \quad (1)$$

where λ_j is called the scale factor,

$$R_{pj} = \begin{bmatrix} \cos\theta^c_{zj} & \sin\theta^c_{zj} & 0 \\ -\sin\theta^c_{zj} & \cos\theta^c_{zj} & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\theta^c_{yj} & 0 & -\sin\theta^c_{yj} \\ 0 & 1 & 0 \\ \sin\theta^c_{yj} & 0 & \cos\theta^c_{yj} \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta^c_{xj} & \sin\theta^c_{xj} \\ 0 & -\sin\theta^c_{xj} & \cos\theta^c_{xj} \end{bmatrix} \quad (2)$$

The rotation matrix R_pj between the camera coordinate system and the world coordinate system is an orthogonal matrix, and T_pj is the translation vector of the camera perspective center in the world coordinate system:

$$T_{pj} = [T^c_{xj}\ T^c_{yj}\ T^c_{zj}]^T$$

Through equations (1) and (2) and the world coordinates (x_i, y_i, z_i) of each marker point with the corresponding corrected image point coordinates (x'_ij, y'_ij), obtain the matrix O_j = λ_j [R_pj, T_pj] and the attitude information of the camera in the world coordinate system (θ^c_xj, θ^c_yj, θ^c_zj, T^c_xj, T^c_yj, T^c_zj), where (θ^c_xj, θ^c_yj, θ^c_zj) are the rotation Euler angles that make the world coordinate system, after translation, coincide with the camera coordinate system through successive rotations about the X, Y, and Z axes, and [T^c_xj T^c_yj T^c_zj]^T is the translation vector of the camera perspective center in the world coordinate system;
Further, since the matrix R_pj is a unit orthogonal matrix:

$$\lambda_j = 1 / \lVert O_j(:,1) \rVert;$$

where O_j(:,1) is the first column of the matrix O_j and ‖·‖ denotes the Euclidean norm of a vector;
From the obtained rotation matrix R_pj between the camera coordinate system and the world coordinate system and the translation vector T_pj of the camera perspective center in the world coordinate system, the attitude information of the camera in the world coordinate system is expressed as the matrix H_pj:

$$H_{pj} = \begin{bmatrix} R_{pj} & T_{pj} \\ \mathbf{0} & 1 \end{bmatrix}.$$
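Equations (1) and (2) pose a standard perspective-n-point (PnP) problem, so R_pj and T_pj can in practice be recovered with an off-the-shelf solver. A hedged sketch using OpenCV's solvePnP, which handles the distortion internally (note that OpenCV's coefficient order is (k1, k2, p1, p2, k3), not the (k1, k2, k3, p1, p2) listing used in the text, and its default iterative solver expects at least 4 points):

```python
import numpy as np
import cv2

def camera_pose(world_pts, image_pts, M, dist_coeffs):
    """Solve eq. (1) for the camera attitude at the current sample point.

    world_pts   : (N, 3) marker world coordinates A_i
    image_pts   : (N, 2) raw image point coordinates of the markers
    M           : 3x3 intrinsic matrix M_j for the current focus distance
    dist_coeffs : (k1, k2, p1, p2, k3) in OpenCV ordering
    """
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(world_pts, dtype=np.float64),
        np.asarray(image_pts, dtype=np.float64),
        np.asarray(M, dtype=np.float64),
        np.asarray(dist_coeffs, dtype=np.float64))
    if not ok:
        raise RuntimeError("PnP failed; check marker visibility")
    R_p, _ = cv2.Rodrigues(rvec)     # rotation matrix R_pj
    H_p = np.eye(4)                  # attitude matrix H_pj
    H_p[:3, :3] = R_p
    H_p[:3, 3] = tvec.reshape(3)     # translation vector T_pj
    return H_p
```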
(2) When the lens is a zoom lens:
When the current focus distance is FD_j and the current focal length is FL_k, the rotation matrix R_pjk between the camera coordinate system and the world coordinate system, the translation vector T_pjk of the camera perspective center in the world coordinate system, and the attitude matrix H_pjk of the camera in the world coordinate system are obtained as follows:
From the image point coordinates (x^c_ijk, y^c_ijk) of the i-th marker point at the current focus distance FD_j and current focal length FL_k and the lens distortion parameters, obtain the corrected image point coordinates (x'_ijk, y'_ijk):

$$x'_{ijk} = x^c_{ijk} + x^c_{ijk}(1 + k_{1jk} r_{ijk}^2 + k_{2jk} r_{ijk}^4 + k_{3jk} r_{ijk}^6) + 2 p_{1jk} y^c_{ijk} + p_{2jk}(r_{ijk}^2 + 2 (x^c_{ijk})^2)$$

$$y'_{ijk} = y^c_{ijk} + y^c_{ijk}(1 + k_{1jk} r_{ijk}^2 + k_{2jk} r_{ijk}^4 + k_{3jk} r_{ijk}^6) + p_{1jk}(r_{ijk}^2 + 2 (y^c_{ijk})^2) + 2 p_{2jk} x^c_{ijk}$$

$$r_{ijk} = \sqrt{(x^c_{ijk})^2 + (y^c_{ijk})^2}$$

where k_3jk = 0 when the lens is not a fisheye lens;
The relation between the world coordinates (x_i, y_i, z_i) of the i-th marker point and its corrected image point coordinates (x'_ijk, y'_ijk) is expressed as:

$$\begin{bmatrix} x'_{ijk} \\ y'_{ijk} \\ 1 \end{bmatrix} = \lambda_{jk} M_{jk} [R_{pjk}, T_{pjk}] \begin{bmatrix} x_i \\ y_i \\ z_i \\ 1 \end{bmatrix} \quad (3)$$

where λ_jk is called the scale factor,

$$R_{pjk} = \begin{bmatrix} \cos\theta^c_{zjk} & \sin\theta^c_{zjk} & 0 \\ -\sin\theta^c_{zjk} & \cos\theta^c_{zjk} & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\theta^c_{yjk} & 0 & -\sin\theta^c_{yjk} \\ 0 & 1 & 0 \\ \sin\theta^c_{yjk} & 0 & \cos\theta^c_{yjk} \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta^c_{xjk} & \sin\theta^c_{xjk} \\ 0 & -\sin\theta^c_{xjk} & \cos\theta^c_{xjk} \end{bmatrix} \quad (4)$$

The rotation matrix R_pjk between the camera coordinate system and the world coordinate system is an orthogonal matrix, and T_pjk is the translation vector of the camera perspective center in the world coordinate system:

$$T_{pjk} = [T^c_{xjk}\ T^c_{yjk}\ T^c_{zjk}]^T$$

Through equations (3) and (4) and the world coordinates (x_i, y_i, z_i) of each marker point with the corresponding corrected image point coordinates (x'_ijk, y'_ijk), obtain the matrix O_jk = λ_jk [R_pjk, T_pjk] and the attitude information of the camera in the world coordinate system (θ^c_xjk, θ^c_yjk, θ^c_zjk, T^c_xjk, T^c_yjk, T^c_zjk), where (θ^c_xjk, θ^c_yjk, θ^c_zjk) are the rotation Euler angles that make the world coordinate system, after translation, coincide with the camera coordinate system through successive rotations about the X, Y, and Z axes, and [T^c_xjk T^c_yjk T^c_zjk]^T is the translation vector of the camera perspective center in the world coordinate system;
Since the matrix R_pjk is a unit orthogonal matrix:

$$\lambda_{jk} = 1 / \lVert O_{jk}(:,1) \rVert;$$

where O_jk(:,1) is the first column of the matrix O_jk and ‖·‖ denotes the Euclidean norm of a vector;
From the obtained rotation matrix R_pjk between the camera coordinate system and the world coordinate system and the translation vector T_pjk of the camera perspective center in the world coordinate system, the attitude information of the camera in the world coordinate system is expressed as the matrix H_pjk:

$$H_{pjk} = \begin{bmatrix} R_{pjk} & T_{pjk} \\ \mathbf{0} & 1 \end{bmatrix}.$$
Four. Preferably, step S7 comprises the following steps:
(1) When the lens is a prime lens:
S7.1 Compute the camera attitude correction matrix H_j:

$$H_j = H_{pj} \cdot H_{Ij}^{-1} \quad (5)$$

S7.2 Transform the camera attitude correction matrix H_j into the correction quaternion (θ_j, n_xj, n_yj, n_zj) and the translation vector T_j:
The camera attitude correction matrix H_j is a 4 × 4 matrix whose effective part is H_j(1:3, 1:4), i.e. the first three rows of H_j; H_j(1:3, 1:4) is expressed in the form:

H_j(1:3, 1:4) = [R_j, T_j];

where the rotation matrix R_j is a 3 × 3 real matrix and the translation vector T_j = [t_xj, t_yj, t_zj]^T is a 3-dimensional vector;
The rotation matrix R_j is converted to the correction quaternion (θ_j, n_xj, n_yj, n_zj) by the standard rotation-matrix-to-quaternion conversion, where 2θ_j is the rotation angle of R_j and n_j = (n_xj, n_yj, n_zj), a 3-dimensional row vector, is its unit rotation axis;
Thus storing R_j only requires storing the correction quaternion (θ_j, n_xj, n_yj, n_zj), so the camera attitude correction matrix H_j is converted into the camera correction calibration information (θ_j, n_xj, n_yj, n_zj, t_xj, t_yj, t_zj);
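A sketch of the standard rotation matrix to axis-angle conversion that yields (θ_j, n_xj, n_yj, n_zj); it assumes R is a proper rotation matrix and ignores the 180° edge case:

```python
import numpy as np

def matrix_to_correction_quaternion(R):
    """R_j -> (theta, nx, ny, nz), where 2*theta is the rotation angle of R_j
    and (nx, ny, nz) is its unit rotation axis; q = (cos(theta), n*sin(theta))."""
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))  # rotation angle 2*theta
    if np.isclose(angle, 0.0):
        n = np.array([1.0, 0.0, 0.0])      # axis is arbitrary for the identity rotation
    else:
        n = np.array([R[2, 1] - R[1, 2],
                      R[0, 2] - R[2, 0],
                      R[1, 0] - R[0, 1]]) / (2.0 * np.sin(angle))
    theta = angle / 2.0
    return theta, n[0], n[1], n[2]
```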
S7.3 Compute the camera field of view: when the focus distance is FD_j, compute the field of view α_j by:

$$\alpha_j = 2 \arctan\left( \frac{1}{2} \sqrt{ \frac{W^2}{f_{xj}^2} + \frac{H^2}{f_{yj}^2} } \right)$$

where W and H are the horizontal and vertical resolution of the camera, respectively.
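A direct transcription of the field of view formula (W, H in pixels; f_xj, f_yj as defined in step S3):

```python
import numpy as np

def field_of_view(W: float, H: float, fx: float, fy: float) -> float:
    """Diagonal field of view alpha per the formula above, in radians."""
    return 2.0 * np.arctan(0.5 * np.sqrt((W / fx) ** 2 + (H / fy) ** 2))
```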
(2) When the lens is a zoom lens:
S7.1 Compute the camera attitude correction matrix H_jk:

$$H_{jk} = H_{pjk} \cdot H_{Ijk}^{-1} \quad (6)$$

S7.2 Transform the camera attitude correction matrix H_jk into the correction quaternion (θ_jk, n_xjk, n_yjk, n_zjk) and the translation vector T_jk:
The camera attitude correction matrix H_jk is a 4 × 4 matrix whose effective part is H_jk(1:3, 1:4), i.e. the first three rows of H_jk; H_jk(1:3, 1:4) is expressed in the form:

H_jk(1:3, 1:4) = [R_jk, T_jk];

where the rotation matrix R_jk is a 3 × 3 real matrix and the translation vector T_jk = [t_xjk, t_yjk, t_zjk]^T is a 3-dimensional vector;
The rotation matrix R_jk is converted to the correction quaternion (θ_jk, n_xjk, n_yjk, n_zjk) by the same standard conversion, where n_jk = (n_xjk, n_yjk, n_zjk) is a 3-dimensional row vector;
Thus storing R_jk only requires storing the correction quaternion (θ_jk, n_xjk, n_yjk, n_zjk), so the camera attitude correction matrix H_jk is converted into the camera correction calibration information (θ_jk, n_xjk, n_yjk, n_zjk, t_xjk, t_yjk, t_zjk);
S7.3 Compute the camera field of view: when the focus distance is FD_j and the focal length is FL_k, compute the field of view α_jk by:

$$\alpha_{jk} = 2 \arctan\left( \frac{1}{2} \sqrt{ \frac{W^2}{f_{xjk}^2} + \frac{H^2}{f_{yjk}^2} } \right)$$

where W and H are the horizontal and vertical resolution of the camera, respectively.
Five. Preferably, in step S9:
When the actual focus distance FD is not at any focus distance sample point FD_j, or the actual focal length FL is not at any focal length sample point FL_k, or the actual focus distance FD is not at any focus distance sample point FD_j and the actual focal length FL is also not at any focal length sample point FL_k, the corresponding correction calibration information and field of view are obtained by interpolation as follows:
The correction quaternion in the correction calibration information is interpolated with the SLERP method, the translation vector with linear interpolation, and the field of view with linear interpolation.
Further,
(1) When the lens is a prime lens:
When the actual focus distance FD is not at any focus distance sample point FD_j, the SLERP interpolation of the correction quaternion proceeds as follows:
If FD_j < FD < FD_{j+1}, the camera attitude correction matrices at focus distances FD_j and FD_{j+1} are H_j and H_{j+1}, with rotation matrices R_j and R_{j+1}, expressed as the correction quaternions q_j = (cos θ_j, n_j sin θ_j) and q_{j+1} = (cos θ_{j+1}, n_{j+1} sin θ_{j+1}). The correction quaternion q at the current actual focus distance FD is then computed by:

$$q = \mathrm{slerp}(q_j, q_{j+1}, t) = q_j (q_j^{-1} q_{j+1})^t;$$

where q_j^{-1} is the inverse of q_j,

$$q_j^{-1} = (\cos\theta_j, -n_j \sin\theta_j); \quad n_j = (n_{xj}, n_{yj}, n_{zj}),$$

$$t = \frac{FD_{j+1} - FD}{FD_{j+1} - FD_j};$$
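For unit quaternions, q_j (q_j^{-1} q_{j+1})^t is equivalent to the familiar geometric form of SLERP; a self-contained sketch with quaternions stored as (w, x, y, z):

```python
import numpy as np

def to_unit_quaternion(theta, n):
    """(theta, n) -> q = (cos(theta), n*sin(theta)) as an array (w, x, y, z)."""
    n = np.asarray(n, dtype=float)
    return np.concatenate(([np.cos(theta)], n * np.sin(theta)))

def slerp(q0, q1, t):
    """Spherical linear interpolation; equals q0 (q0^-1 q1)^t for unit quaternions."""
    dot = np.clip(np.dot(q0, q1), -1.0, 1.0)
    if dot < 0.0:                     # take the shorter arc
        q1, dot = -q1, -dot
    omega = np.arccos(dot)            # angle between the two quaternions
    if np.isclose(omega, 0.0):
        return q0
    return (np.sin((1.0 - t) * omega) * q0 + np.sin(t * omega) * q1) / np.sin(omega)

# t = (FD_{j+1} - FD) / (FD_{j+1} - FD_j), as defined in the text
```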
(2) When the lens is a zoom lens:
When the actual focus distance FD is not at any focus distance sample point FD_j, or the actual focal length FL is not at any focal length sample point FL_k, or both, the SLERP interpolation of the correction quaternion proceeds as follows (a sketch of the two-stage interpolation follows below):
1. If FD_j < FD < FD_{j+1} and FL_k < FL < FL_{k+1}, i.e. the actual focus distance FD is not at any focus distance sample point FD_j and the actual focal length FL is also not at any focal length sample point FL_k, take the correction quaternions q_{j,k}, q_{j,k+1}, q_{j+1,k}, q_{j+1,k+1} corresponding to the nearest sample point combinations (FD_j, FL_k), (FD_j, FL_{k+1}), (FD_{j+1}, FL_k), (FD_{j+1}, FL_{k+1}) and interpolate as follows; the correction quaternion q_{l,d} at the current actual focus distance FD and actual focal length FL is computed by:

$$q_{i,d} = \mathrm{slerp}(q_{j,k}, q_{j,k+1}, t_2) = q_{j,k} (q_{j,k}^{-1} q_{j,k+1})^{t_2}$$

$$q_{i+1,d} = \mathrm{slerp}(q_{j+1,k}, q_{j+1,k+1}, t_2) = q_{j+1,k} (q_{j+1,k}^{-1} q_{j+1,k+1})^{t_2}$$

$$q_{l,d} = \mathrm{slerp}(q_{i,d}, q_{i+1,d}, t_1) = q_{i,d} (q_{i,d}^{-1} q_{i+1,d})^{t_1}$$

where:

$$t_1 = \frac{FD_{j+1} - FD}{FD_{j+1} - FD_j}; \quad t_2 = \frac{FL_{k+1} - FL}{FL_{k+1} - FL_k};$$

2. If FD_j < FD < FD_{j+1} and FL = FL_k, i.e. the actual focus distance FD is not at any focus distance sample point FD_j and the actual focal length FL is at a focal length sample point FL_k, take the correction quaternions q_{j,k}, q_{j+1,k} corresponding to the nearest focus distance sample point combinations (FD_j, FL_k), (FD_{j+1}, FL_k) and interpolate as follows; the correction quaternion q_{l,d} at the current actual focus distance FD and actual focal length FL_k is computed by:

$$q_{l,d} = \mathrm{slerp}(q_{j,k}, q_{j+1,k}, t) = q_{j,k} (q_{j,k}^{-1} q_{j+1,k})^t$$

where

$$t = \frac{FD_{j+1} - FD}{FD_{j+1} - FD_j};$$

3. If FD = FD_j and FL_k < FL < FL_{k+1}, i.e. the actual focus distance FD is at the focus distance sample point FD_j and the actual focal length FL is not at any focal length sample point FL_k, take the correction quaternions q_{j,k}, q_{j,k+1} corresponding to the nearest focal length sample point combinations (FD_j, FL_k), (FD_j, FL_{k+1}) and interpolate as follows; the correction quaternion q_{l,d} at the current actual focus distance FD_j and actual focal length FL is computed by:

$$q_{l,d} = \mathrm{slerp}(q_{j,k}, q_{j,k+1}, t) = q_{j,k} (q_{j,k}^{-1} q_{j,k+1})^t$$

where

$$t = \frac{FL_{k+1} - FL}{FL_{k+1} - FL_k}.$$
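Case 1 is a two-stage SLERP over the (FD, FL) sample grid: first along the focal length axis at FD_j and FD_{j+1}, then along the focus distance axis. A sketch reusing slerp() from the prime-lens sketch above:

```python
def bilinear_slerp(q_jk, q_jk1, q_j1k, q_j1k1, t1, t2):
    """Correction quaternion q_{l,d} for FD_j < FD < FD_{j+1}, FL_k < FL < FL_{k+1}."""
    q_low  = slerp(q_jk,  q_jk1,  t2)   # between FL_k and FL_{k+1} at FD_j
    q_high = slerp(q_j1k, q_j1k1, t2)   # between FL_k and FL_{k+1} at FD_{j+1}
    return slerp(q_low, q_high, t1)     # between FD_j and FD_{j+1}
```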
In addition, according to the above method, the present invention also proposes a camera position correction calibration system, comprising: a servo motor control system, a camera, an external camera attitude tracking device, a background screen, marker points, spatial measurement equipment, a data processing device, and a computer graphics rendering engine;
The servo motor control system is connected to the camera and adjusts the focal length and focus distance of the lens; it is also connected to the data processing device and sends the lens focal length and focus distance information to it, so that the data processing device can compute the correction calibration information and field of view and build the lookup tables;
The data processing device is also connected to the camera to read the video stream data in real time;
The external camera attitude tracking device is mounted on the outside of the camera; it measures the positions of the camera and the tracking device and their offset to estimate the position and attitude information of the camera, and is connected to the data processing device to send the measurement data;
At least 3 non-collinear marker points are arranged on the background screen; the markers have identical radii, and their color contrasts with the background screen;
The spatial measurement equipment and the image rendering engine are each connected to the data processing device; the spatial measurement equipment measures the world coordinates of the marker point centers and sends them to the data processing device, and the image rendering engine obtains the corresponding correction calibration information and field of view from the lookup tables built by the data processing device.
Compared with the prior art, the beneficial effects of the present invention are that the camera correction calibration process is fully automated without manual intervention; the position, attitude, and field of view of the camera in a given coordinate system are computed in real time; and the image generation engine renders images accurately from this information, so that the live-action video frame matches the computer-generated virtual frame exactly.
Brief description of the drawings
Fig. 1 is a schematic diagram of the composition of the camera position correction calibration system proposed by the present invention, in which: 1 - servo motor control system; 2 - camera; 3 - lens; 4 - external camera attitude tracking device; 5 - background screen; 6 - marker point; 7 - data processing device;
Fig. 2 is a flow chart of the proposed method for correction calibration of the position of a camera with a prime lens;
Fig. 3 is a flow chart of the proposed method for correction calibration of the position of a camera with a zoom lens.
Embodiments
The present invention is described in detail below with reference to the drawings and embodiments, which also illustrate the technical problems solved by the technical solution of the present invention, its principles, and its beneficial effects.
As shown in Fig. 1, the structural diagram of the camera position correction calibration system proposed by the present invention, the system comprises: a servo motor control system, a camera, an external camera attitude tracking device, a background screen, marker points, spatial measurement equipment (not shown), a data processing device, and an image rendering engine (not shown);
The servo motor control system is connected to the camera and adjusts the focal length and focus distance of the lens; it is also connected to the data processing device and sends the lens focal length and focus distance information to it, so that the data processing device can compute the correction calibration information and field of view and build the lookup tables;
The data processing device is also connected to the camera to read the video stream data in real time;
The external camera attitude tracking device is mounted on the outside of the camera; it measures the positions of the camera and the tracking device and their offset to estimate the position and attitude information of the camera, and is connected to the data processing device to send the measurement data;
At least 3 non-collinear marker points are arranged on the background screen; the markers have identical radii, and their color contrasts with the background screen;
The spatial measurement equipment and the image rendering engine are each connected to the data processing device; the spatial measurement equipment measures the world coordinates of the marker point centers and sends them to the data processing device, and the image rendering engine obtains the corresponding correction calibration information and field of view from the lookup tables built by the data processing device.
Preferably, the background screen is a green screen or a blue screen.
Preferably, the external camera attitude tracking device is an optical tracking device or a mechanical arm.
The workflow of the proposed camera position correction calibration system comprises the following steps:
S1. Use the spatial measurement equipment to extract the coordinates A_1(x_1, y_1, z_1), ..., A_N(x_N, y_N, z_N) of the N marker points on the background screen in the world coordinate system, and send the world coordinates of the marker points to the data processing device; the N marker points include at least 3 non-collinear marker points;
S2. Determine several focus distance sample points in increasing order: FD_1, FD_2, ..., FD_j, ..., FD_J (j = 1..J), where FD_1 and FD_J are the minimum and maximum focus distances of the lens, respectively;
If the lens is a zoom lens, also determine focal length sample points in increasing order: FL_1, FL_2, ..., FL_k, ..., FL_K (k = 1..K), where FL_1 and FL_K are the minimum and maximum focal lengths of the lens, respectively;
Preferably, the focus distance sample points and focal length sample points are the sample points used in the lens calibration process; for lens calibration, see the following documents: [1] Duane C B. Close-range camera calibration [J]. Photogram. Eng. Remote Sens., 1971, 37: 855-866. [2] Zhang Z. Flexible camera calibration by viewing a plane from unknown orientations [C] // Computer Vision, 1999. The Proceedings of the Seventh IEEE International Conference on. IEEE, 1999, 1: 666-673.
Method of choosing the focus distance sample points: divide the rotation interval of the servo motor control system that adjusts the lens focus (e.g. [0, 1]) into J-1 segments with J end points in total; these end points correspond to the focus distance sample values; J is generally an integer between 10 and 30.
Method of choosing the focal length sample points: divide the rotation interval of the servo motor control system that adjusts the lens focal length (e.g. [0, 1]) into K-1 segments with K end points in total; these end points correspond to the focal length sample values; K is generally chosen according to the zoom range of the lens.
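A sketch of this sample point choice over the normalized servo rotation interval [0, 1] (the values of J and K are illustrative):

```python
import numpy as np

J = 20                                           # typically an integer between 10 and 30
focus_ring_samples = np.linspace(0.0, 1.0, J)    # J end points of J-1 equal segments

K = 8                                            # chosen according to the lens zoom range
zoom_ring_samples = np.linspace(0.0, 1.0, K)     # K end points of K-1 equal segments
```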
S3. The data processing device obtains the camera intrinsic parameters and distortion parameters corresponding to each focus distance sample point; the lens intrinsic parameters and lens distortion parameters are obtained by the lens calibration process;
S4. The servo motor control system sends the lens focal length and focus distance information to the data processing device; when the focus distance is at the j-th sample point FD_j, adjust the camera position so that the marker points on the background screen are imaged sharply;
If the lens is a zoom lens, when the focus distance takes the j-th sample point FD_j and the focal length is at the k-th sample point FL_k, adjust the camera position so that the marker points on the background screen are imaged sharply;
S5. The external camera attitude tracking device, such as an optical tracking device or a mechanical arm, transmits its current position information to the data processing device, comprising the rotation Euler angles that make the world coordinate system, after translation, coincide with the tracking device's self-defined coordinate system through successive rotations about its X, Y, and Z axes, and the position coordinates of the tracking device in the world coordinate system;
The optical tracking device is, for example, a total station;
S6. The data processing device uses the world coordinates of the N marker points obtained in S1, the image point coordinates of the marker points in the camera imaging coordinate system, and the lens intrinsic parameters and lens distortion parameters obtained in S3 to obtain the rotation matrix between the camera coordinate system and the world coordinate system and the translation vector of the camera perspective center in the world coordinate system, i.e. the attitude information of the camera in the world coordinate system;
Preferably, the origin of the camera imaging coordinate system is set at the center of the field of view.
S7. The data processing device uses the current position information provided by the external camera attitude tracking device obtained in S5 and the camera attitude information in the world coordinate system obtained in S6 to obtain the camera correction calibration information and the field of view;
S8. If the lens is a prime lens: adjust the focus distance and perform steps S4-S7 for each focus distance sample point FD_j until all sample points have been traversed, obtaining the correction calibration information and field of view corresponding to each sample point; the data processing device builds the focus distance - correction calibration information - field of view lookup table LUT1;
If the lens is a zoom lens: adjust the focus distance and focal length and perform steps S4-S7 for each focus distance sample point FD_j and each focal length sample point FL_k until all sample points have been traversed, obtaining the correction calibration information and field of view corresponding to each pair of sample points; the data processing device builds the focus distance - focal length - correction calibration information - field of view lookup table LUT2;
S9. If the lens is a prime lens: according to the lookup table LUT1 built by the data processing device, the image rendering engine obtains the corresponding correction calibration information and field of view from the focus distance.
If the lens is a zoom lens: according to the lookup table LUT2 built by the data processing device, the image rendering engine obtains the corresponding correction calibration information and field of view from the focus distance and focal length.
The camera position correction calibration system of the present invention is applicable both to prime lenses and to zoom lenses. The technical solution of the camera position correction calibration method of the present invention is described below for a system with a prime lens and for a system with a zoom lens, respectively:
One. Prime lens (see Fig. 2)
S1. Obtain the world coordinates of the marker points:
Use the spatial measurement equipment to measure the world coordinates A_1(x_1, y_1, z_1), ..., A_N(x_N, y_N, z_N) of the N marker points on the background screen; the N marker points include at least 3 non-collinear marker points; each marker point coordinate is the center position of the marker, and all marker points lie within the picture captured by the camera; the coordinate of the i-th marker point in the world coordinate system is A_i(x_i, y_i, z_i), i = 1..N;
S2. Determine the focus distance sample points:
Through the servo motor control system, determine the focus distance sample points FD_1, FD_2, ..., FD_j, ..., FD_J (j = 1..J) in increasing order according to the lens calibration process, where FD_1 and FD_J are the minimum and maximum focus distances of the lens, respectively;
S3. Obtain and record the camera intrinsic parameters and distortion parameters corresponding to each focus distance sample point; the lens intrinsic parameters and lens distortion parameters are obtained by the lens calibration process;
Specifically, for the j-th focus distance sample point FD_j, the corresponding lens intrinsic parameters are f_xj, f_yj, c_xj, c_yj, where:
f_xj: when the focus distance is FD_j, the ratio of the distance f_j from the intersection of the lens exit pupil and the optical axis to the imaging plane, to the horizontal width dx of each cell of the camera imager, i.e. f_xj = f_j / dx;
f_yj: when the focus distance is FD_j, the ratio of the distance f_j from the intersection of the lens exit pupil and the optical axis to the imaging plane, to the vertical height dy of each cell of the camera imager, i.e. f_yj = f_j / dy;
Preferably, the camera imager described here is a CCD or CMOS sensor.
c_xj: when the focus distance is FD_j, the horizontal offset, in pixels, of the intersection of the lens optical axis and the imaging plane from the center of the imaging plane;
c_yj: when the focus distance is FD_j, the vertical offset, in pixels, of the intersection of the lens optical axis and the imaging plane from the center of the imaging plane;
For the j-th focus distance sample point FD_j, the corresponding lens distortion parameters are k_1j, k_2j, k_3j, p_1j, and p_2j, where k_1j, k_2j, k_3j are the radial distortion parameters at focus distance FD_j; k_3j is an optional parameter: all three radial distortion parameters k_1j, k_2j, k_3j are used when the lens is a fisheye lens, while for a non-fisheye lens k_3j = 0, i.e. other lenses use only the two radial distortion parameters k_1j and k_2j; p_1j and p_2j are the tangential distortion parameters at focus distance FD_j;
Further, the intrinsic matrix M_j at each focus distance FD_j is obtained:

$$M_j = \begin{bmatrix} f_{xj} & 0 & c_{xj} \\ 0 & f_{yj} & c_{yj} \\ 0 & 0 & 1 \end{bmatrix}.$$
S4. When the focus distance takes the j-th sample point FD_j, adjust the camera position so that the marker points on the background screen are imaged sharply;
S5. The data processing device obtains the current position information [θ_xj, θ_yj, θ_zj, T_xj, T_yj, T_zj] transmitted by the external camera attitude tracking device, such as an optical tracking device, where [θ_xj, θ_yj, θ_zj] are the rotation Euler angles that make the world coordinate system, after translation, coincide with the tracking device's self-defined coordinate system through successive rotations about the X, Y, and Z axes, and [T_xj, T_yj, T_zj] are the position coordinates of the tracking device in the world coordinate system. The current position information provided by the tracking device is expressed in matrix form:

$$H_{Ij} = \begin{bmatrix} R_{Ij} & -R_{Ij} T_{Ij} \\ \mathbf{0} & 1 \end{bmatrix};$$

where

$$R_{Ij} = \begin{bmatrix} \cos\theta_{zj} & \sin\theta_{zj} & 0 \\ -\sin\theta_{zj} & \cos\theta_{zj} & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\theta_{yj} & 0 & -\sin\theta_{yj} \\ 0 & 1 & 0 \\ \sin\theta_{yj} & 0 & \cos\theta_{yj} \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta_{xj} & \sin\theta_{xj} \\ 0 & -\sin\theta_{xj} & \cos\theta_{xj} \end{bmatrix}$$

$$T_{Ij} = [T_{xj}\ T_{yj}\ T_{zj}]^T$$

$$\mathbf{0} = [0, 0, 0];$$
S6. Using the world coordinates of the N marker points obtained in S1, the image point coordinates of the marker points in the camera imaging coordinate system, and the lens intrinsic parameters and lens distortion parameters obtained in S3, obtain the rotation matrix R_pj between the camera coordinate system and the world coordinate system and the translation vector T_pj of the camera perspective center in the world coordinate system, i.e. the attitude information of the camera in the world coordinate system;
Preferably, when the current focus distance is FD_j, the rotation matrix R_pj between the camera coordinate system and the world coordinate system, the translation vector T_pj of the camera perspective center in the world coordinate system, and the attitude matrix H_pj of the camera in the world coordinate system are obtained as follows:
From the image point coordinates (x^c_ij, y^c_ij) of the i-th marker point and the lens distortion parameters at the current focus distance FD_j, obtain the corrected image point coordinates (x'_ij, y'_ij) of the i-th marker point:

$$x'_{ij} = x^c_{ij} + x^c_{ij}(1 + k_{1j} r_{ij}^2 + k_{2j} r_{ij}^4 + k_{3j} r_{ij}^6) + 2 p_{1j} y^c_{ij} + p_{2j}(r_{ij}^2 + 2 (x^c_{ij})^2)$$

$$y'_{ij} = y^c_{ij} + y^c_{ij}(1 + k_{1j} r_{ij}^2 + k_{2j} r_{ij}^4 + k_{3j} r_{ij}^6) + p_{1j}(r_{ij}^2 + 2 (y^c_{ij})^2) + 2 p_{2j} x^c_{ij}$$

$$r_{ij} = \sqrt{(x^c_{ij})^2 + (y^c_{ij})^2}$$

where k_3j = 0 when the lens is not a fisheye lens;
The relation between the world coordinates (x_i, y_i, z_i) of the i-th marker point and its corrected image point coordinates (x'_ij, y'_ij) is expressed as:

$$\begin{bmatrix} x'_{ij} \\ y'_{ij} \\ 1 \end{bmatrix} = \lambda_j M_j [R_{pj}, T_{pj}] \begin{bmatrix} x_i \\ y_i \\ z_i \\ 1 \end{bmatrix} \quad (1)$$

where λ_j is called the scale factor,

$$R_{pj} = \begin{bmatrix} \cos\theta^c_{zj} & \sin\theta^c_{zj} & 0 \\ -\sin\theta^c_{zj} & \cos\theta^c_{zj} & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\theta^c_{yj} & 0 & -\sin\theta^c_{yj} \\ 0 & 1 & 0 \\ \sin\theta^c_{yj} & 0 & \cos\theta^c_{yj} \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta^c_{xj} & \sin\theta^c_{xj} \\ 0 & -\sin\theta^c_{xj} & \cos\theta^c_{xj} \end{bmatrix} \quad (2)$$

The rotation matrix R_pj between the camera coordinate system and the world coordinate system is an orthogonal matrix, and T_pj is the translation vector of the camera perspective center in the world coordinate system:

$$T_{pj} = [T^c_{xj}\ T^c_{yj}\ T^c_{zj}]^T$$

Through equations (1) and (2) and the world coordinates (x_i, y_i, z_i) of each marker point with the corresponding corrected image point coordinates (x'_ij, y'_ij), obtain the matrix O_j = λ_j [R_pj, T_pj] and the attitude information of the camera in the world coordinate system (θ^c_xj, θ^c_yj, θ^c_zj, T^c_xj, T^c_yj, T^c_zj), where (θ^c_xj, θ^c_yj, θ^c_zj) are the rotation Euler angles that make the world coordinate system, after translation, coincide with the camera coordinate system through successive rotations about the X, Y, and Z axes, and [T^c_xj T^c_yj T^c_zj]^T is the translation vector of the camera perspective center in the world coordinate system;
Further, since the matrix R_pj is a unit orthogonal matrix:

$$\lambda_j = 1 / \lVert O_j(:,1) \rVert;$$

where O_j(:,1) is the first column of the matrix O_j and ‖·‖ denotes the Euclidean norm of a vector;
From the obtained rotation matrix R_pj between the camera coordinate system and the world coordinate system and the translation vector T_pj of the camera perspective center in the world coordinate system, the attitude information of the camera in the world coordinate system is expressed as the matrix H_pj:

$$H_{pj} = \begin{bmatrix} R_{pj} & T_{pj} \\ \mathbf{0} & 1 \end{bmatrix}.$$
S7. Using the current position information provided by the external camera attitude tracking device obtained in S5 and the camera attitude information in the world coordinate system obtained in S6, obtain the camera correction calibration information and the field of view;
S7.1 First compute the camera attitude correction matrix. The computation exploits the invariance of the relative position relation between the camera and the optical tracking device; this relation remains constant only for a single installation, and if the optical tracking device is reinstalled, the camera position must be calibrated anew. The camera attitude correction matrix H_j is computed by:

$$H_j = H_{pj} \cdot H_{Ij}^{-1} \quad (5)$$

S7.2 Transform the camera attitude correction matrix H_j into the correction quaternion (θ_j, n_xj, n_yj, n_zj) and the translation vector T_j:
Since the camera attitude correction matrix H_j obtained from equation (5) is a 4 × 4 matrix, it is inconvenient to store and to interpolate in subsequent operations, and therefore needs to be transformed into a form convenient for storage and computation.
The camera attitude correction matrix H_j is a 4 × 4 matrix whose effective part is H_j(1:3, 1:4), i.e. the first three rows of H_j; H_j(1:3, 1:4) is expressed in the form:

H_j(1:3, 1:4) = [R_j, T_j];

where the rotation matrix R_j is a 3 × 3 real matrix and the translation vector T_j = [t_xj, t_yj, t_zj]^T is a 3-dimensional vector;
Since the rotation matrix R_j and the quaternion are mutually convertible, R_j is converted to the correction quaternion (θ_j, n_xj, n_yj, n_zj) by the standard rotation-matrix-to-quaternion conversion, where 2θ_j is the rotation angle of R_j and n_j = (n_xj, n_yj, n_zj), a 3-dimensional row vector, is its unit rotation axis;
Thus storing R_j only requires storing the correction quaternion (θ_j, n_xj, n_yj, n_zj), so the camera attitude correction matrix H_j is converted into the camera correction calibration information (θ_j, n_xj, n_yj, n_zj, t_xj, t_yj, t_zj); the correction quaternion and the translation vector together are called the correction calibration information.
S7.3 Compute the camera field of view: when the focus distance is FD_j, compute the field of view α_j by:

$$\alpha_j = 2 \arctan\left( \frac{1}{2} \sqrt{ \frac{W^2}{f_{xj}^2} + \frac{H^2}{f_{yj}^2} } \right)$$

where W and H are the horizontal and vertical resolution of the camera, respectively.
S8. Adjust the focus distance and perform steps S4-S7 for each focus distance sample point FD_j until all sample points have been traversed, obtaining the correction calibration information (θ_j, n_xj, n_yj, n_zj, t_xj, t_yj, t_zj) and the field of view α_j corresponding to each sample point, and build the focus distance - correction calibration information - field of view lookup table LUT1;
S9. Output LUT1 to the image rendering engine; according to lookup table LUT1, the image rendering engine obtains the corresponding correction calibration information and field of view from the focus distance.
Table 1. Format of the focus distance - correction calibration information - field of view lookup table LUT1
When the lens is a prime lens and the image generation engine uses lookup table LUT1 to obtain the correction calibration information and field of view from the focus distance, the focus distance encountered is not always exactly a recorded sample point, since LUT1 holds only finitely many focus distance sample points. For a focus distance between sample points, the corresponding correction calibration information is obtained by interpolation. Because the correction quaternion, the translation vector, and the field of view have different mathematical characteristics, different interpolation algorithms are used for accuracy: the correction quaternion in the correction calibration information is interpolated with the SLERP method, the translation vector (t_x, t_y, t_z) with linear interpolation, and the field of view with linear interpolation or another suitable form.
Preferably, the method of interpolating the correction quaternion with the SLERP algorithm is as follows:
If FD_j < FD < FD_{j+1}, let the camera attitude correction matrices at focusing distances FD_j and FD_{j+1} be H_j and H_{j+1}, with rotation matrices R_j and R_{j+1} respectively, expressed as the correction quaternions q_j = (\cos\theta_j, \vec{n}_j \sin\theta_j) and q_{j+1} = (\cos\theta_{j+1}, \vec{n}_{j+1} \sin\theta_{j+1}). The correction quaternion q at the current actual focusing distance FD is then calculated by:
q = \mathrm{slerp}(q_j, q_{j+1}, t) = q_j (q_j^{-1} q_{j+1})^t;
where q_j^{-1} is the inverse of q_j:
q_j^{-1} = (\cos\theta_j, -\vec{n}_j \sin\theta_j); \quad \vec{n}_j = (n_{xj}, n_{yj}, n_{zj}),
t = \frac{FD_{j+1} - FD}{FD_{j+1} - FD_j}.
In this way, the corresponding correction calibration information and field angle can be obtained for any value of the focusing distance.
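A minimal Python sketch of this SLERP step, assuming unit quaternions stored as (w, x, y, z) = (cos θ, n sin θ); the helper names are ours, and t follows the document's definition:

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation q0 * (q0^-1 * q1)^t for unit quaternions.

    Quaternions are stored as (w, x, y, z) = (cos(theta), n*sin(theta)).
    """
    q0 = np.asarray(q0, dtype=float)
    q1 = np.asarray(q1, dtype=float)
    dot = float(np.clip(np.dot(q0, q1), -1.0, 1.0))
    if dot < 0.0:               # flip one endpoint to take the shorter arc
        q1, dot = -q1, -dot
    omega = np.arccos(dot)      # angle between the two unit quaternions
    if np.isclose(omega, 0.0):  # nearly identical: linear blend is sufficient
        q = (1.0 - t) * q0 + t * q1
    else:
        q = (np.sin((1.0 - t) * omega) * q0 + np.sin(t * omega) * q1) / np.sin(omega)
    return q / np.linalg.norm(q)

def interp_correction_quaternion(q_j, q_j1, FD, FD_j, FD_j1):
    """Correction quaternion at focusing distance FD with FD_j < FD < FD_j1."""
    t = (FD_j1 - FD) / (FD_j1 - FD_j)   # t as defined in the document
    return slerp(q_j, q_j1, t)
```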
Two: Zoom lens (see Fig. 3)
For a camera position correction calibration system using a zoom lens, the change of focal length must be considered in addition to the change of focusing distance: the correction calibration information and field angle should be obtained for every combination of focusing distance sample and focal length sample. See Fig. 3 for details:
S1. Obtain the world coordinates of the marker points:
Use the space measurement device to measure the coordinates of the N marker points A_1, …, A_N on the background screen under the world coordinate system: A_1(x_1, y_1, z_1), …, A_N(x_N, y_N, z_N), where the coordinate of the i-th marker point under the world coordinate system is A_i(x_i, y_i, z_i), i = 1 ~ N. The N marker points include at least 3 non-collinear marker points; each marker point coordinate is the centre position of the marker point, and all marker points lie within the picture captured by the camera.
S2. Determine the focusing distance samples and focal length samples:
Through the servo-motor control system, determine, according to the lens calibration process, the focusing distance samples FD_1, FD_2, …, FD_j, …, FD_J (j = 1 ~ J) in ascending numerical order, where FD_1 and FD_J are respectively the minimum and maximum focusing distances of the lens. In addition, determine the focal length samples FL_1, FL_2, …, FL_k, …, FL_K (k = 1 ~ K) in ascending numerical order, where FL_1 and FL_K are respectively the minimum and maximum focal lengths of the lens.
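A trivial sketch of laying out the two sample grids (even spacing is our own assumption here; per claim 12, the patent takes its samples from the lens calibration process):

```python
import numpy as np

def sample_points(lo, hi, n):
    """Ascending sample points from the minimum to the maximum value."""
    return np.linspace(lo, hi, n).tolist()

FDs = sample_points(0.6, 10.0, 8)    # e.g. focusing distances in metres
FLs = sample_points(24.0, 70.0, 6)   # e.g. focal lengths in millimetres
```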
S3. Obtain and record the lens internal parameters and lens distortion parameters corresponding to each focusing distance sample and focal length sample; these are obtained through the lens calibration process.
Preferably, for the j-th focusing distance sample FD_j and the k-th focal length sample FL_k, the corresponding lens internal parameters are f_xjk, f_yjk, c_xjk, c_yjk, where:
f_xjk: when the focusing distance is FD_j and the focal length is FL_k, the ratio of the distance f_jk from the intersection of the lens exit pupil with the lens optical axis to the imaging surface, to the horizontal width dx of each imager unit, i.e. f_xjk = f_jk / dx;
f_yjk: when the focusing distance is FD_j and the focal length is FL_k, the ratio of the same distance f_jk to the vertical height dy of each imager unit, i.e. f_yjk = f_jk / dy;
c_xjk: when the focusing distance is FD_j and the focal length is FL_k, the horizontal offset, in pixels, of the intersection of the optical axis with the imaging surface from the centre of the imaging surface;
c_yjk: when the focusing distance is FD_j and the focal length is FL_k, the vertical offset, in pixels, of the intersection of the optical axis with the imaging surface from the centre of the imaging surface.
For the j-th focusing distance sample FD_j and the k-th focal length sample FL_k, the corresponding lens distortion parameters are k_1jk, k_2jk, k_3jk, p_1jk and p_2jk, where k_1jk, k_2jk and k_3jk are the radial distortion parameters when the focusing distance is FD_j and the focal length is FL_k; k_3jk is an optional parameter: when the lens is a fisheye lens, all three radial distortion parameters k_1jk, k_2jk and k_3jk are used; when the lens is not a fisheye lens, k_3jk = 0, i.e. other lenses only use the two radial distortion parameters k_1jk and k_2jk. p_1jk and p_2jk are the tangential distortion parameters when the focusing distance is FD_j and the focal length is FL_k.
The internal reference matrix for each focusing distance FD_j and focal length FL_k is then obtained as:
M_{jk} = \begin{bmatrix} f_{xjk} & 0 & c_{xjk} \\ 0 & f_{yjk} & c_{yjk} \\ 0 & 0 & 1 \end{bmatrix}.
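As a sketch (our own helper, not from the patent), assembling M_jk in numpy:

```python
import numpy as np

def intrinsic_matrix(fx, fy, cx, cy):
    """Internal reference matrix M_jk from the per-sample lens parameters."""
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])
```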
S4. When the focusing distance is at the j-th focusing distance sample FD_j and the focal length is at the k-th sample FL_k, adjust the camera position so that the marker points on the background screen are imaged sharply.
S5. Obtain the current position information [θ_xjk, θ_yjk, θ_zjk, T_xjk, T_yjk, T_zjk] transmitted in the current state by the camera attitude external tracking device, such as an optical tracking device or a mechanical arm. Here [θ_xjk, θ_yjk, θ_zjk] are the rotation Euler angles that make the world coordinate system, after rotating successively around its X, Y and Z axes and then translating, coincide with the coordinate system self-defined by the camera attitude external tracking device, and [T_xjk, T_yjk, T_zjk] is the position coordinate of the camera attitude external tracking device under the world coordinate system. The current position information provided by the camera attitude external tracking device is expressed in matrix form:
H_{Ijk} = \begin{bmatrix} R_{Ijk} & -R_{Ijk} T_{Ijk} \\ 0 & 1 \end{bmatrix};
where
R_{Ijk} = \begin{bmatrix} \cos\theta_{zjk} & \sin\theta_{zjk} & 0 \\ -\sin\theta_{zjk} & \cos\theta_{zjk} & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\theta_{yjk} & 0 & -\sin\theta_{yjk} \\ 0 & 1 & 0 \\ \sin\theta_{yjk} & 0 & \cos\theta_{yjk} \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta_{xjk} & \sin\theta_{xjk} \\ 0 & -\sin\theta_{xjk} & \cos\theta_{xjk} \end{bmatrix};
T_{Ijk} = [T_{xjk}, T_{yjk}, T_{zjk}]^T;
0 = [0, 0, 0].
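A minimal numpy sketch of building H_Ijk from a tracker reading (function name ours; rotation order and signs as in R_Ijk above):

```python
import numpy as np

def tracker_pose_matrix(angles, T):
    """H_I = [[R_I, -R_I @ T], [0, 1]] from Euler angles (x, y, z) and position T."""
    ax, ay, az = angles
    Rz = np.array([[ np.cos(az), np.sin(az), 0.0],
                   [-np.sin(az), np.cos(az), 0.0],
                   [        0.0,        0.0, 1.0]])
    Ry = np.array([[np.cos(ay), 0.0, -np.sin(ay)],
                   [       0.0, 1.0,         0.0],
                   [np.sin(ay), 0.0,  np.cos(ay)]])
    Rx = np.array([[1.0,         0.0,        0.0],
                   [0.0,  np.cos(ax), np.sin(ax)],
                   [0.0, -np.sin(ax), np.cos(ax)]])
    R_I = Rz @ Ry @ Rx
    H_I = np.eye(4)
    H_I[:3, :3] = R_I
    H_I[:3, 3] = -R_I @ np.asarray(T, dtype=float)
    return H_I
```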
S6. Using the world coordinates of the N marker points obtained in S1, the image-point coordinates of the marker points under the camera imaging coordinate system, and the lens internal parameters and lens distortion parameters obtained in S3, obtain the rotation matrix R_pjk between the camera coordinate system and the world coordinate system and the translation vector T_pjk of the camera perspective centre in the world coordinate system, i.e. the attitude information of the camera under the world coordinate system.
When the current focusing distance is FD_j and the current focal length is FL_k, the method for obtaining the rotation matrix R_pjk between the camera coordinate system and the world coordinate system, the translation vector T_pjk of the camera perspective centre in the world coordinate system, and the attitude matrix H_pjk of the camera under the world coordinate system is as follows:
According to the image-point coordinates (x^c_ijk, y^c_ijk) of the i-th marker point under the current focusing distance FD_j and the current focal length FL_k, and the lens distortion parameters, obtain its corrected image-point coordinates (x′_ijk, y′_ijk):
x′_{ijk} = x^c_{ijk}(1 + k_{1jk} r_{ijk}^2 + k_{2jk} r_{ijk}^4 + k_{3jk} r_{ijk}^6) + 2 p_{1jk} x^c_{ijk} y^c_{ijk} + p_{2jk}(r_{ijk}^2 + 2 (x^c_{ijk})^2)
y′_{ijk} = y^c_{ijk}(1 + k_{1jk} r_{ijk}^2 + k_{2jk} r_{ijk}^4 + k_{3jk} r_{ijk}^6) + p_{1jk}(r_{ijk}^2 + 2 (y^c_{ijk})^2) + 2 p_{2jk} x^c_{ijk} y^c_{ijk}
r_{ijk} = \sqrt{(x^c_{ijk})^2 + (y^c_{ijk})^2}
where, if the lens is not a fisheye lens, k_{3jk} = 0.
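A short Python sketch of this correction step, assuming the standard radial–tangential model reconstructed above (the original equations are partly garbled in extraction; function name ours):

```python
def correct_image_point(xc, yc, k1, k2, k3, p1, p2):
    """Apply radial (k1, k2, k3) and tangential (p1, p2) distortion correction
    to an image point (xc, yc); k3 = 0 for non-fisheye lenses."""
    r2 = xc * xc + yc * yc
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x = xc * radial + 2.0 * p1 * xc * yc + p2 * (r2 + 2.0 * xc * xc)
    y = yc * radial + p1 * (r2 + 2.0 * yc * yc) + 2.0 * p2 * xc * yc
    return x, y
```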
The relationship between the world coordinates (x_i, y_i, z_i) of the i-th marker point and its corresponding corrected image-point coordinates (x′_ijk, y′_ijk) is expressed as:
[x′_{ijk}, y′_{ijk}, 1]^T = \lambda_{jk} M_{jk} [R_{pjk}, T_{pjk}] [x_i, y_i, z_i, 1]^T    (3)
where λ_jk is called the scale factor, and
R_{pjk} = \begin{bmatrix} \cos\theta^c_{zjk} & \sin\theta^c_{zjk} & 0 \\ -\sin\theta^c_{zjk} & \cos\theta^c_{zjk} & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\theta^c_{yjk} & 0 & -\sin\theta^c_{yjk} \\ 0 & 1 & 0 \\ \sin\theta^c_{yjk} & 0 & \cos\theta^c_{yjk} \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta^c_{xjk} & \sin\theta^c_{xjk} \\ 0 & -\sin\theta^c_{xjk} & \cos\theta^c_{xjk} \end{bmatrix}    (4)
Here the rotation matrix R_pjk between the camera coordinate system and the world coordinate system is an orthogonal matrix, and T_pjk is the translation vector of the camera perspective centre in the world coordinate system:
T_{pjk} = [T^c_{xjk}, T^c_{yjk}, T^c_{zjk}]^T
Through formulas (3) and (4) and the world coordinates (x_i, y_i, z_i) of each marker point together with the corresponding corrected image-point coordinates (x′_ijk, y′_ijk), the matrix O_jk = λ_jk [R_pjk, T_pjk] and the attitude information of the camera under the world coordinate system are obtained, where (θ^c_xjk, θ^c_yjk, θ^c_zjk) are the rotation Euler angles that make the world coordinate system, after rotating successively around its X, Y and Z axes and then translating, coincide with the camera coordinate system, and [T^c_xjk, T^c_yjk, T^c_zjk]^T is the translation vector of the camera perspective centre under the world coordinate system.
Since the matrix R_pjk is a unit orthogonal matrix, we have:
λ_jk = 1 / ||O_jk(:, 1)||;
where O_jk(:, 1) is the 1st column of the matrix O_jk and || · || denotes the Euclidean norm of a vector.
From the rotation matrix R_pjk between the camera coordinate system and the world coordinate system and the translation vector T_pjk of the camera perspective centre in the world coordinate system thus obtained, the attitude information of the camera under the world coordinate system is expressed as the matrix H_pjk:
H_{pjk} = \begin{bmatrix} R_{pjk} & T_{pjk} \\ 0 & 1 \end{bmatrix}.
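In practice this pose recovery is the classical PnP problem. As a sketch, the following uses OpenCV's solvePnP as a stand-in for the patent's linear solution via O_jk (variable names ours; the camera matrix and distortion coefficients follow the definitions above):

```python
import numpy as np
import cv2

def camera_attitude_matrix(object_points, image_points, M_jk, dist_coeffs):
    """Recover H_pjk = [[R_pjk, T_pjk], [0, 1]] from the N marker points.

    object_points: (N, 3) world coordinates of the marker points (N >= 3,
                   not collinear); image_points: (N, 2) measured image points.
    dist_coeffs:   (k1, k2, p1, p2, k3) in OpenCV order; k3 = 0 if non-fisheye.
    """
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(object_points, np.float64),
        np.asarray(image_points, np.float64),
        np.asarray(M_jk, np.float64),
        np.asarray(dist_coeffs, np.float64))
    if not ok:
        raise RuntimeError("pose estimation failed")
    R_p, _ = cv2.Rodrigues(rvec)        # rotation matrix R_pjk
    H_p = np.eye(4)
    H_p[:3, :3] = R_p
    H_p[:3, 3] = tvec.ravel()           # translation vector T_pjk
    return H_p
```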
S7. Using the current position information provided by the camera attitude external tracking device in the current state obtained in S5, and the attitude information of the camera under the world coordinate system obtained in S6, obtain the camera correction calibration information and field angle.
S7.1 Calculate the camera attitude correction matrix:
Calculating the camera attitude correction matrix mainly exploits the invariance of the relative position between the camera and the optical tracking device. This relative position remains constant only for the duration of a single installation: if the optical tracking device is reinstalled, the camera position must be calibrated again. The camera attitude correction matrix H_jk is calculated using formula (6):
H_{jk} = H_{pjk} \cdot H_{Ijk}^{-1}    (6)
where
H_{pjk} = \begin{bmatrix} R_{pjk} & T_{pjk} \\ 0 & 1 \end{bmatrix}
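A one-line numpy sketch of formula (6), reusing the two helpers sketched above (names ours):

```python
import numpy as np

def attitude_correction_matrix(H_p, H_I):
    """Camera attitude correction matrix H_jk = H_pjk @ inv(H_Ijk), formula (6);
    H_p from camera_attitude_matrix(...), H_I from tracker_pose_matrix(...)."""
    return H_p @ np.linalg.inv(H_I)
```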
S7.2 Transform the camera attitude correction matrix H_jk into the correction quaternion (θ_jk, n_xjk, n_yjk, n_zjk) and the translation vector T_jk:
The camera attitude correction matrix H_jk obtained from formula (6) is a 4×4 matrix, which is inconvenient to store and awkward to interpolate in subsequent operations; it therefore needs to be converted into a form better suited to storage and computation.
The effective part of the 4×4 matrix H_jk is H_jk(1:3, 1:4), i.e. its first three rows, which can be written as
H_jk(1:3, 1:4) = [R_jk, T_jk],
where the rotation matrix R_jk is a 3×3 real matrix and the translation vector T_jk = [t_xjk, t_yjk, t_zjk]^T is a 3-dimensional vector.
Since a rotation matrix and a quaternion are mutually convertible, R_jk is converted by the standard rotation-matrix-to-quaternion relationship into the correction quaternion (θ_jk, n_xjk, n_yjk, n_zjk), where \vec{n}_{jk} = (n_{xjk}, n_{yjk}, n_{zjk}) is a 3-dimensional row vector.
Thus storing R_jk only requires storing (θ_jk, n_xjk, n_yjk, n_zjk), so the camera attitude correction matrix H_jk is converted into the camera correction calibration information (θ_jk, n_xjk, n_yjk, n_zjk, t_xjk, t_yjk, t_zjk); the correction quaternion and the translation vector together are referred to as the correction calibration information.
S7.3 Calculate the camera field angle: when the focusing distance is FD_j and the focal length is FL_k, the camera field angle α_jk is calculated according to the following formula:
\alpha_{jk} = 2 \arctan\left( \frac{1}{2} \sqrt{ \frac{W^2}{f_{xjk}^2} + \frac{H^2}{f_{yjk}^2} } \right)
where W and H are the horizontal and vertical resolutions of the camera.
S8. Adjust the focusing distance and the focal length, and perform steps S4–S7 for each focusing distance sample FD_j and each focal length sample FL_k until all focusing distance samples and focal length samples have been traversed, obtaining the correction calibration information and field angle corresponding to each pair of samples, and build the focusing distance – focal length – correction calibration information – field angle look-up table LUT2; a format example is given in Table 2.
Specifically, a focal-length-first principle can be adopted: first fix the focusing distance and repeat S4–S7 for each focal length sample under this focusing distance; then adjust the focusing distance and repeat the above, until all focusing distance samples have been traversed. Steps S4–S7 are thus performed for every focusing distance sample and focal length sample, yielding the corresponding correction quaternion (θ, n_x, n_y, n_z), translation vector (t_x, t_y, t_z) and field angle α.
S9. Output the LUT to the image generation engine, so that the image generation engine can obtain the corresponding correction calibration information and field angle from the focusing distance and focal length.
Table 2. Format of the focal length – focusing distance – correction calibration information – field angle look-up table LUT2
When the lens is a zoom lens and the image generation engine uses look-up table LUT2 to obtain the corresponding correction calibration information and field angle from the current focusing distance and focal length, the chosen focusing distance and focal length will not always coincide with sample points, because LUT2 records only a finite number of focusing distance samples and focal length samples. For a focusing distance and focal length not at sample points (for example, when the actual focusing distance FD is not at any focusing distance sample FD_j, or the actual focal length FL is not at any focal length sample FL_k, or both), the corresponding correction calibration information can be obtained by interpolation. Since the correction quaternion, the translation vector and the field angle have different mathematical characteristics, different interpolation algorithms can be used for each to achieve a more accurate result. Specifically, the correction quaternion is interpolated with the SLERP method, the translation vector T_jk = [t_xjk, t_yjk, t_zjk]^T is interpolated linearly, and the field angle is interpolated linearly or in another suitable form.
Preferably, the method of interpolating the correction quaternion with the SLERP algorithm is as follows:
1) If FD_j < FD < FD_{j+1} and FL_k < FL < FL_{k+1}, i.e. the actual focusing distance FD is not at any focusing distance sample FD_j and the actual focal length FL is not at any focal length sample FL_k, use the correction quaternions q_{j,k}, q_{j,k+1}, q_{j+1,k}, q_{j+1,k+1} corresponding to the nearest focusing distance and focal length sample combinations (FD_j, FL_k), (FD_j, FL_{k+1}), (FD_{j+1}, FL_k), (FD_{j+1}, FL_{k+1}) and interpolate according to the following formulas (a code sketch of this two-stage interpolation follows case 3) below); the correction quaternion q_{l,d} for the current actual focusing distance FD and actual focal length FL is calculated by:
q_{i,d} = \mathrm{slerp}(q_{j,k}, q_{j,k+1}, t_2) = q_{j,k} (q_{j,k}^{-1} q_{j,k+1})^{t_2}
q_{i+1,d} = \mathrm{slerp}(q_{j+1,k}, q_{j+1,k+1}, t_2) = q_{j+1,k} (q_{j+1,k}^{-1} q_{j+1,k+1})^{t_2}
q_{l,d} = \mathrm{slerp}(q_{i,d}, q_{i+1,d}, t_1) = q_{i,d} (q_{i,d}^{-1} q_{i+1,d})^{t_1}
where:
t_1 = \frac{FD_{j+1} - FD}{FD_{j+1} - FD_j};
t_2 = \frac{FL_{k+1} - FL}{FL_{k+1} - FL_k}.
2) If FD_j < FD < FD_{j+1} and FL = FL_k, i.e. the actual focusing distance FD is not at any focusing distance sample FD_j while the actual focal length FL is exactly the focal length sample FL_k, use the correction quaternions q_{j,k}, q_{j+1,k} corresponding to the nearest focusing distance sample combinations (FD_j, FL_k), (FD_{j+1}, FL_k) and interpolate according to the following formula; the correction quaternion q_{l,d} for the current actual focusing distance FD and actual focal length FL_k is calculated by:
q_{l,d} = \mathrm{slerp}(q_{j,k}, q_{j+1,k}, t) = q_{j,k} (q_{j,k}^{-1} q_{j+1,k})^{t}
where
t = \frac{FD_{j+1} - FD}{FD_{j+1} - FD_j}.
3) If FD = FD_j and FL_k < FL < FL_{k+1}, i.e. the actual focusing distance FD is exactly the focusing distance sample FD_j while the actual focal length FL is not at any focal length sample FL_k, use the correction quaternions q_{j,k}, q_{j,k+1} corresponding to the nearest focal length sample combinations (FD_j, FL_k), (FD_j, FL_{k+1}) and interpolate according to the following formula; the correction quaternion q_{l,d} for the current actual focusing distance FD_j and actual focal length FL is calculated by:
q_{l,d} = \mathrm{slerp}(q_{j,k}, q_{j,k+1}, t) = q_{j,k} (q_{j,k}^{-1} q_{j,k+1})^{t}
where
t = \frac{FL_{k+1} - FL}{FL_{k+1} - FL_k}
Thus, for any values of the focusing distance and focal length, the corresponding correction calibration information and field angle can be obtained by table look-up and interpolation.
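Putting the pieces together, a sketch of the LUT2 query that dispatches among the three cases above, reusing the slerp helper sketched in the prime-lens part (data layout and names are ours, a simplification rather than the patent's implementation):

```python
import bisect

def interp_zoom_quaternion(q_jk, q_jk1, q_j1k, q_j1k1,
                           FD, FD_j, FD_j1, FL, FL_k, FL_k1):
    """Case 1): two-stage SLERP over the four nearest (FD, FL) combinations."""
    t1 = (FD_j1 - FD) / (FD_j1 - FD_j)   # focusing-distance weight t_1
    t2 = (FL_k1 - FL) / (FL_k1 - FL_k)   # focal-length weight t_2
    q_a = slerp(q_jk, q_jk1, t2)         # along focal length at FD_j
    q_b = slerp(q_j1k, q_j1k1, t2)       # along focal length at FD_{j+1}
    return slerp(q_a, q_b, t1)           # then along focusing distance

def query_lut2(lut, FDs, FLs, FD, FL):
    """Interpolated correction quaternion for (FD, FL) from LUT2.

    FDs, FLs: ascending lists of samples; lut[j][k] is the correction
    quaternion at (FDs[j], FLs[k]). The translation vector and field angle
    would be interpolated linearly with the same weights (omitted here).
    """
    j = max(0, min(bisect.bisect_right(FDs, FD) - 1, len(FDs) - 2))
    k = max(0, min(bisect.bisect_right(FLs, FL) - 1, len(FLs) - 2))
    on_fd = FD in (FDs[j], FDs[j + 1])
    on_fl = FL in (FLs[k], FLs[k + 1])
    if on_fd and on_fl:                  # exact sample point: no interpolation
        return lut[FDs.index(FD)][FLs.index(FL)]
    if on_fl:                            # case 2): interpolate over FD only
        k = FLs.index(FL)
        t = (FDs[j + 1] - FD) / (FDs[j + 1] - FDs[j])
        return slerp(lut[j][k], lut[j + 1][k], t)
    if on_fd:                            # case 3): interpolate over FL only
        j = FDs.index(FD)
        t = (FLs[k + 1] - FL) / (FLs[k + 1] - FLs[k])
        return slerp(lut[j][k], lut[j][k + 1], t)
    return interp_zoom_quaternion(       # case 1): both between samples
        lut[j][k], lut[j][k + 1], lut[j + 1][k], lut[j + 1][k + 1],
        FD, FDs[j], FDs[j + 1], FL, FLs[k], FLs[k + 1])
```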
The above is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with this technology can readily conceive of transformations and replacements within the technical scope disclosed by the present invention, and all of these should be encompassed within the scope of the present invention. Therefore, the protection scope of the present invention should be determined by the protection scope of the appended claims.

Claims (20)

1. A camera position correction calibration method, characterized in that it comprises the following steps:
S1. Obtain the coordinates of the N marker points A_1, …, A_N on the background screen under the world coordinate system: A_1(x_1, y_1, z_1), …, A_N(x_N, y_N, z_N), where the coordinate of the i-th marker point under the world coordinate system is A_i(x_i, y_i, z_i), i = 1 ~ N; the N marker points include at least 3 non-collinear marker points; each marker point coordinate is the centre position of the marker point, and all marker points lie within the picture captured by the camera;
S2. Determine the focusing distance samples FD_1, FD_2, …, FD_j, …, FD_J in ascending numerical order, where j = 1 ~ J and FD_1 and FD_J are respectively the minimum and maximum focusing distances of the lens;
If the lens is a zoom lens, also determine the focal length samples FL_1, FL_2, …, FL_k, …, FL_K in ascending numerical order, where k = 1 ~ K and FL_1 and FL_K are respectively the minimum and maximum focal lengths of the lens;
S3. If the lens is a prime lens, obtain the lens internal parameters and lens distortion parameters corresponding to each focusing distance sample FD_j;
If the lens is a zoom lens, obtain the lens internal parameters and lens distortion parameters corresponding to each focusing distance sample FD_j and each focal length sample FL_k;
the lens internal parameters and lens distortion parameters are obtained through the lens calibration process;
S4. If the lens is a prime lens, when the focusing distance is at the j-th focusing distance sample FD_j, adjust the camera position so that the marker points on the background screen are imaged sharply;
If the lens is a zoom lens, when the focusing distance is at the j-th focusing distance sample FD_j and the focal length is at the k-th sample FL_k, adjust the camera position so that the marker points on the background screen are imaged sharply;
S5. Obtain the current position information provided by the camera attitude external tracking device in the current state, comprising the rotation Euler angles that make the world coordinate system, after rotating successively around its X, Y and Z axes and then translating, coincide with the coordinate system self-defined by the camera attitude external tracking device, and the position coordinate of the camera attitude external tracking device under the world coordinate system;
S6. Using the world coordinates of the N marker points obtained in S1, the image-point coordinates of the marker points under the camera imaging coordinate system, and the lens internal parameters and lens distortion parameters obtained in S3, obtain the rotation matrix between the camera coordinate system and the world coordinate system and the translation vector of the camera perspective centre in the world coordinate system, i.e. the attitude information of the camera under the world coordinate system;
S7. Using the current position information provided by the camera attitude external tracking device in the current state obtained in S5, and the attitude information of the camera under the world coordinate system obtained in S6, obtain the camera correction calibration information and field angle;
S8. If the lens is a prime lens: adjust the focusing distance and perform steps S4–S7 for each focusing distance sample FD_j until all focusing distance samples have been traversed, obtain the correction calibration information and field angle corresponding to each focusing distance sample, and build the focusing distance – correction calibration information – field angle look-up table LUT1;
If the lens is a zoom lens: adjust the focusing distance and focal length and perform steps S4–S7 for each focusing distance sample FD_j and each focal length sample FL_k until all focusing distance samples and focal length samples have been traversed, obtain the correction calibration information and field angle corresponding to each pair of samples, and build the focusing distance – focal length – correction calibration information – field angle look-up table LUT2;
S9. If the lens is a prime lens: obtain the corresponding correction calibration information and field angle from the focusing distance according to the look-up table LUT1;
If the lens is a zoom lens: obtain the corresponding correction calibration information and field angle from the focusing distance and focal length according to the look-up table LUT2.
2. The camera position correction calibration method according to claim 1, characterized in that, in step S3:
(1) When the lens is a prime lens:
For the j-th focusing distance sample FD_j, the corresponding lens internal parameters are f_xj, f_yj, c_xj, c_yj, where:
f_xj: when the focusing distance is FD_j, the ratio of the distance f_j from the intersection of the lens exit pupil with the lens optical axis to the imaging surface, to the horizontal width dx of each camera imager unit, i.e. f_xj = f_j / dx;
f_yj: when the focusing distance is FD_j, the ratio of the same distance f_j to the vertical height dy of each camera imager unit, i.e. f_yj = f_j / dy;
c_xj: when the focusing distance is FD_j, the horizontal offset, in pixels, of the intersection of the lens optical axis with the imaging surface from the centre of the imaging surface;
c_yj: when the focusing distance is FD_j, the vertical offset, in pixels, of the intersection of the lens optical axis with the imaging surface from the centre of the imaging surface;
For the j-th focusing distance sample FD_j, the corresponding lens distortion parameters are k_1j, k_2j, k_3j, p_1j and p_2j, where k_1j, k_2j, k_3j are the radial distortion parameters when the focusing distance is FD_j; k_3j is an optional parameter: when the lens is a fisheye lens, all three radial distortion parameters k_1j, k_2j, k_3j are used; when the lens is not a fisheye lens, k_3j = 0, i.e. other lenses only use the two radial distortion parameters k_1j, k_2j; p_1j, p_2j are the tangential distortion parameters when the focusing distance is FD_j;
Further, the internal reference matrix M_j for each focusing distance FD_j is obtained as:
M_j = \begin{bmatrix} f_{xj} & 0 & c_{xj} \\ 0 & f_{yj} & c_{yj} \\ 0 & 0 & 1 \end{bmatrix};
(2) When the lens is a zoom lens:
For the j-th focusing distance sample FD_j and the k-th focal length sample FL_k, the corresponding lens internal parameters are f_xjk, f_yjk, c_xjk, c_yjk, where:
f_xjk: when the focusing distance is FD_j and the focal length is FL_k, the ratio of the distance f_jk from the intersection of the lens exit pupil with the lens optical axis to the imaging surface, to the horizontal width dx of each imager unit, i.e. f_xjk = f_jk / dx;
f_yjk: when the focusing distance is FD_j and the focal length is FL_k, the ratio of the same distance f_jk to the vertical height dy of each imager unit, i.e. f_yjk = f_jk / dy;
c_xjk: when the focusing distance is FD_j and the focal length is FL_k, the horizontal offset, in pixels, of the intersection of the optical axis with the imaging surface from the centre of the imaging surface;
c_yjk: when the focusing distance is FD_j and the focal length is FL_k, the vertical offset, in pixels, of the intersection of the optical axis with the imaging surface from the centre of the imaging surface;
For the j-th focusing distance sample FD_j and the k-th focal length sample FL_k, the corresponding lens distortion parameters are k_1jk, k_2jk, k_3jk, p_1jk and p_2jk, where k_1jk, k_2jk and k_3jk are the radial distortion parameters when the focusing distance is FD_j and the focal length is FL_k; k_3jk is an optional parameter: when the lens is a fisheye lens, all three radial distortion parameters k_1jk, k_2jk and k_3jk are used; when the lens is not a fisheye lens, k_3jk = 0, i.e. other lenses only use the two radial distortion parameters k_1jk, k_2jk; p_1jk, p_2jk are the tangential distortion parameters when the focusing distance is FD_j and the focal length is FL_k;
And the internal reference matrix M_jk for each focusing distance FD_j and focal length FL_k is obtained as:
M_{jk} = \begin{bmatrix} f_{xjk} & 0 & c_{xjk} \\ 0 & f_{yjk} & c_{yjk} \\ 0 & 0 & 1 \end{bmatrix}.
3. The camera position correction calibration method according to claim 2, characterized in that, in step S5:
(1) When the lens is a prime lens:
The current position information provided by the camera attitude external tracking device is [θ_xj, θ_yj, θ_zj, T_xj, T_yj, T_zj], where [θ_xj, θ_yj, θ_zj] are the rotation Euler angles that make the world coordinate system, after rotating successively around its X, Y and Z axes and then translating, coincide with the coordinate system self-defined by the camera attitude external tracking device, and [T_xj, T_yj, T_zj] is the position coordinate of the camera attitude external tracking device under the world coordinate system; the current position information provided by the camera attitude external tracking device is expressed in matrix form:
H_{Ij} = \begin{bmatrix} R_{Ij} & -R_{Ij} T_{Ij} \\ 0 & 1 \end{bmatrix};
where
R_{Ij} = \begin{bmatrix} \cos\theta_{zj} & \sin\theta_{zj} & 0 \\ -\sin\theta_{zj} & \cos\theta_{zj} & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\theta_{yj} & 0 & -\sin\theta_{yj} \\ 0 & 1 & 0 \\ \sin\theta_{yj} & 0 & \cos\theta_{yj} \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta_{xj} & \sin\theta_{xj} \\ 0 & -\sin\theta_{xj} & \cos\theta_{xj} \end{bmatrix};
T_{Ij} = [T_{xj}, T_{yj}, T_{zj}]^T;
0 = [0, 0, 0];
(2) When the lens is a zoom lens:
The current position information provided by the camera attitude external tracking device is [θ_xjk, θ_yjk, θ_zjk, T_xjk, T_yjk, T_zjk], where [θ_xjk, θ_yjk, θ_zjk] are the rotation Euler angles that make the world coordinate system, after rotating successively around its X, Y and Z axes and then translating, coincide with the coordinate system self-defined by the camera attitude external tracking device, and [T_xjk, T_yjk, T_zjk] is the position coordinate of the camera attitude external tracking device under the world coordinate system; the current position information provided by the camera attitude external tracking device is expressed in matrix form:
H_{Ijk} = \begin{bmatrix} R_{Ijk} & -R_{Ijk} T_{Ijk} \\ 0 & 1 \end{bmatrix};
where
R_{Ijk} = \begin{bmatrix} \cos\theta_{zjk} & \sin\theta_{zjk} & 0 \\ -\sin\theta_{zjk} & \cos\theta_{zjk} & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\theta_{yjk} & 0 & -\sin\theta_{yjk} \\ 0 & 1 & 0 \\ \sin\theta_{yjk} & 0 & \cos\theta_{yjk} \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta_{xjk} & \sin\theta_{xjk} \\ 0 & -\sin\theta_{xjk} & \cos\theta_{xjk} \end{bmatrix};
T_{Ijk} = [T_{xjk}, T_{yjk}, T_{zjk}]^T;
0 = [0, 0, 0].
4. The camera position correction calibration method according to claim 3, characterized in that, in step S6:
(1) When the lens is a prime lens:
When the current focusing distance is FD_j, the method for obtaining the rotation matrix R_pj between the camera coordinate system and the world coordinate system, the translation vector T_pj of the camera perspective centre in the world coordinate system, and the attitude matrix H_pj of the camera under the world coordinate system is:
According to the image-point coordinates (x^c_ij, y^c_ij) of the i-th marker point and the lens distortion parameters under the current focusing distance FD_j, obtain the corrected image-point coordinates (x′_ij, y′_ij) of the i-th marker point:
x′_{ij} = x^c_{ij}(1 + k_{1j} r_{ij}^2 + k_{2j} r_{ij}^4 + k_{3j} r_{ij}^6) + 2 p_{1j} x^c_{ij} y^c_{ij} + p_{2j}(r_{ij}^2 + 2 (x^c_{ij})^2)
y′_{ij} = y^c_{ij}(1 + k_{1j} r_{ij}^2 + k_{2j} r_{ij}^4 + k_{3j} r_{ij}^6) + p_{1j}(r_{ij}^2 + 2 (y^c_{ij})^2) + 2 p_{2j} x^c_{ij} y^c_{ij}
r_{ij} = \sqrt{(x^c_{ij})^2 + (y^c_{ij})^2}
where, if the lens is not a fisheye lens, k_{3j} = 0;
The relationship between the world coordinates (x_i, y_i, z_i) of the i-th marker point and its corresponding corrected image-point coordinates (x′_ij, y′_ij) is expressed as:
[x′_{ij}, y′_{ij}, 1]^T = \lambda_j M_j [R_{pj}, T_{pj}] [x_i, y_i, z_i, 1]^T    (1)
where λ_j is called the scale factor, and
R_{pj} = \begin{bmatrix} \cos\theta^c_{zj} & \sin\theta^c_{zj} & 0 \\ -\sin\theta^c_{zj} & \cos\theta^c_{zj} & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\theta^c_{yj} & 0 & -\sin\theta^c_{yj} \\ 0 & 1 & 0 \\ \sin\theta^c_{yj} & 0 & \cos\theta^c_{yj} \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta^c_{xj} & \sin\theta^c_{xj} \\ 0 & -\sin\theta^c_{xj} & \cos\theta^c_{xj} \end{bmatrix}    (2)
Here the rotation matrix R_pj between the camera coordinate system and the world coordinate system is an orthogonal matrix, and T_pj is the translation vector of the camera perspective centre in the world coordinate system:
T_{pj} = [T^c_{xj}, T^c_{yj}, T^c_{zj}]^T;
Through formulas (1) and (2) and the world coordinates (x_i, y_i, z_i) of each marker point together with the corresponding corrected image-point coordinates (x′_ij, y′_ij), the matrix O_j = λ_j [R_pj, T_pj] and the attitude information of the camera under the world coordinate system are obtained, where (θ^c_xj, θ^c_yj, θ^c_zj) are the rotation Euler angles that make the world coordinate system, after rotating successively around its X, Y and Z axes and then translating, coincide with the camera coordinate system, and [T^c_xj, T^c_yj, T^c_zj]^T is the translation vector of the camera perspective centre under the world coordinate system;
Further, since the matrix R_pj is a unit orthogonal matrix, we have:
λ_j = 1 / ||O_j(:, 1)||;
where O_j(:, 1) is the 1st column of the matrix O_j and || · || denotes the Euclidean norm of a vector;
From the rotation matrix R_pj between the camera coordinate system and the world coordinate system and the translation vector T_pj of the camera perspective centre in the world coordinate system thus obtained, the attitude information of the camera under the world coordinate system is expressed as the matrix H_pj:
H_{pj} = \begin{bmatrix} R_{pj} & T_{pj} \\ 0 & 1 \end{bmatrix};
(2) When the lens is a zoom lens:
When the current focusing distance is FD_j and the current focal length is FL_k, the method for obtaining the rotation matrix R_pjk between the camera coordinate system and the world coordinate system, the translation vector T_pjk of the camera perspective centre in the world coordinate system, and the attitude matrix H_pjk of the camera under the world coordinate system is:
According to the image-point coordinates (x^c_ijk, y^c_ijk) of the i-th marker point under the current focusing distance FD_j and the current focal length FL_k, and the lens distortion parameters, obtain the corrected image-point coordinates (x′_ijk, y′_ijk):
x′_{ijk} = x^c_{ijk}(1 + k_{1jk} r_{ijk}^2 + k_{2jk} r_{ijk}^4 + k_{3jk} r_{ijk}^6) + 2 p_{1jk} x^c_{ijk} y^c_{ijk} + p_{2jk}(r_{ijk}^2 + 2 (x^c_{ijk})^2)
y′_{ijk} = y^c_{ijk}(1 + k_{1jk} r_{ijk}^2 + k_{2jk} r_{ijk}^4 + k_{3jk} r_{ijk}^6) + p_{1jk}(r_{ijk}^2 + 2 (y^c_{ijk})^2) + 2 p_{2jk} x^c_{ijk} y^c_{ijk}
r_{ijk} = \sqrt{(x^c_{ijk})^2 + (y^c_{ijk})^2}
where, if the lens is not a fisheye lens, k_{3jk} = 0;
The relationship between the world coordinates (x_i, y_i, z_i) of the i-th marker point and its corresponding corrected image-point coordinates (x′_ijk, y′_ijk) is expressed as:
[x′_{ijk}, y′_{ijk}, 1]^T = \lambda_{jk} M_{jk} [R_{pjk}, T_{pjk}] [x_i, y_i, z_i, 1]^T    (3)
where λ_jk is called the scale factor, and
R_{pjk} = \begin{bmatrix} \cos\theta^c_{zjk} & \sin\theta^c_{zjk} & 0 \\ -\sin\theta^c_{zjk} & \cos\theta^c_{zjk} & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\theta^c_{yjk} & 0 & -\sin\theta^c_{yjk} \\ 0 & 1 & 0 \\ \sin\theta^c_{yjk} & 0 & \cos\theta^c_{yjk} \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta^c_{xjk} & \sin\theta^c_{xjk} \\ 0 & -\sin\theta^c_{xjk} & \cos\theta^c_{xjk} \end{bmatrix}    (4)
Here the rotation matrix R_pjk between the camera coordinate system and the world coordinate system is an orthogonal matrix, and T_pjk is the translation vector of the camera perspective centre in the world coordinate system:
T_{pjk} = [T^c_{xjk}, T^c_{yjk}, T^c_{zjk}]^T
Through formulas (3) and (4) and the world coordinates (x_i, y_i, z_i) of each marker point together with the corresponding corrected image-point coordinates (x′_ijk, y′_ijk), the matrix O_jk = λ_jk [R_pjk, T_pjk] and the attitude information of the camera under the world coordinate system are obtained, where (θ^c_xjk, θ^c_yjk, θ^c_zjk) are the rotation Euler angles that make the world coordinate system, after rotating successively around its X, Y and Z axes and then translating, coincide with the camera coordinate system, and [T^c_xjk, T^c_yjk, T^c_zjk]^T is the translation vector of the camera perspective centre under the world coordinate system;
Since the matrix R_pjk is a unit orthogonal matrix, we have:
λ_jk = 1 / ||O_jk(:, 1)||;
where O_jk(:, 1) is the 1st column of the matrix O_jk and || · || denotes the Euclidean norm of a vector;
From the rotation matrix R_pjk between the camera coordinate system and the world coordinate system and the translation vector T_pjk of the camera perspective centre in the world coordinate system thus obtained, the attitude information of the camera under the world coordinate system is expressed as the matrix H_pjk:
H_{pjk} = \begin{bmatrix} R_{pjk} & T_{pjk} \\ 0 & 1 \end{bmatrix}.
5. The camera position correction calibration method according to claim 4, characterized in that step S7 comprises the following steps:
(1) When the lens is a prime lens:
S7.1 Calculate the camera attitude correction matrix H_j:
H_j = H_{pj} \cdot H_{Ij}^{-1}    (5)
S7.2 Transform the camera attitude correction matrix H_j into the correction quaternion (θ_j, n_xj, n_yj, n_zj) and the translation vector T_j:
The camera attitude correction matrix H_j is a 4×4 matrix whose effective part is H_j(1:3, 1:4), i.e. its first three rows, written as
H_j(1:3, 1:4) = [R_j, T_j];
where the rotation matrix R_j is a 3×3 real matrix and the translation vector T_j = [t_xj, t_yj, t_zj]^T is a 3-dimensional vector;
The rotation matrix R_j is converted into the correction quaternion (θ_j, n_xj, n_yj, n_zj), where \vec{n}_j = (n_{xj}, n_{yj}, n_{zj}) is a 3-dimensional row vector;
Thus storing R_j only requires storing the correction quaternion (θ_j, n_xj, n_yj, n_zj), so the camera attitude correction matrix H_j is converted into the camera correction calibration information (θ_j, n_xj, n_yj, n_zj, t_xj, t_yj, t_zj);
S7.3 Calculate the camera field angle: when the focusing distance is FD_j, calculate the camera field angle α_j according to the following formula:
\alpha_j = 2 \arctan\left( \frac{1}{2} \sqrt{ \frac{W^2}{f_{xj}^2} + \frac{H^2}{f_{yj}^2} } \right)
where W and H are respectively the horizontal and vertical resolutions of the camera;
(2) When the lens is a zoom lens:
S7.1 Calculate the camera attitude correction matrix H_jk:
H_{jk} = H_{pjk} \cdot H_{Ijk}^{-1}    (6)
S7.2 Transform the camera attitude correction matrix H_jk into the correction quaternion (θ_jk, n_xjk, n_yjk, n_zjk) and the translation vector T_jk:
The camera attitude correction matrix H_jk is a 4×4 matrix whose effective part is H_jk(1:3, 1:4), i.e. its first three rows, written as
H_jk(1:3, 1:4) = [R_jk, T_jk];
where the rotation matrix R_jk is a 3×3 real matrix and the translation vector T_jk = [t_xjk, t_yjk, t_zjk]^T is a 3-dimensional vector;
The rotation matrix R_jk is converted into the correction quaternion (θ_jk, n_xjk, n_yjk, n_zjk), where \vec{n}_{jk} = (n_{xjk}, n_{yjk}, n_{zjk}) is a 3-dimensional row vector;
Thus storing R_jk only requires storing the correction quaternion (θ_jk, n_xjk, n_yjk, n_zjk), so the camera attitude correction matrix H_jk is converted into the camera correction calibration information (θ_jk, n_xjk, n_yjk, n_zjk, t_xjk, t_yjk, t_zjk);
S7.3 Calculate the camera field angle: when the focusing distance is FD_j and the focal length is FL_k, calculate the camera field angle α_jk according to the following formula:
\alpha_{jk} = 2 \arctan\left( \frac{1}{2} \sqrt{ \frac{W^2}{f_{xjk}^2} + \frac{H^2}{f_{yjk}^2} } \right)
where W and H are respectively the horizontal and vertical resolutions of the camera.
6. The camera position correction calibration method according to claim 5, characterized in that, in step S9, when the actual focusing distance FD is not at any focusing distance sample FD_j, or the actual focal length FL is not at any focal length sample FL_k, or both, the corresponding correction calibration information and field angle are obtained by the following interpolation:
The correction quaternion in the correction calibration information is interpolated with the SLERP method, the translation vector is interpolated linearly, and the field angle is interpolated with the linear interpolation method.
7. The camera position correction calibration method according to claim 6, characterized in that, in step S9:
(1) When the lens is a prime lens:
When the actual focusing distance FD is not at any focusing distance sample FD_j, the method of interpolating the correction quaternion with the SLERP algorithm is as follows:
If FD_j < FD < FD_{j+1}, let the camera attitude correction matrices at focusing distances FD_j and FD_{j+1} be H_j and H_{j+1}, with rotation matrices R_j and R_{j+1} respectively, expressed as the correction quaternions q_j = (\cos\theta_j, \vec{n}_j \sin\theta_j) and q_{j+1} = (\cos\theta_{j+1}, \vec{n}_{j+1} \sin\theta_{j+1}); the correction quaternion q at the current actual focusing distance FD is then calculated by:
q = \mathrm{slerp}(q_j, q_{j+1}, t) = q_j (q_j^{-1} q_{j+1})^t;
where q_j^{-1} is the inverse of q_j:
q_j^{-1} = (\cos\theta_j, -\vec{n}_j \sin\theta_j); \quad \vec{n}_j = (n_{xj}, n_{yj}, n_{zj}),
t = \frac{FD_{j+1} - FD}{FD_{j+1} - FD_j};
(2) When the lens is a zoom lens:
When the actual focusing distance FD is not at any focusing distance sample FD_j, or the actual focal length FL is not at any focal length sample FL_k, or both, the method of interpolating the correction quaternion with the SLERP algorithm is as follows:
1) If FD_j < FD < FD_{j+1} and FL_k < FL < FL_{k+1}, i.e. the actual focusing distance FD is not at any focusing distance sample FD_j and the actual focal length FL is not at any focal length sample FL_k, use the correction quaternions q_{j,k}, q_{j,k+1}, q_{j+1,k}, q_{j+1,k+1} corresponding to the nearest focusing distance and focal length sample combinations (FD_j, FL_k), (FD_j, FL_{k+1}), (FD_{j+1}, FL_k), (FD_{j+1}, FL_{k+1}) and interpolate according to the following formulas; the correction quaternion q_{l,d} for the current actual focusing distance FD and actual focal length FL is calculated by:
q_{i,d} = \mathrm{slerp}(q_{j,k}, q_{j,k+1}, t_2) = q_{j,k} (q_{j,k}^{-1} q_{j,k+1})^{t_2}
q_{i+1,d} = \mathrm{slerp}(q_{j+1,k}, q_{j+1,k+1}, t_2) = q_{j+1,k} (q_{j+1,k}^{-1} q_{j+1,k+1})^{t_2}
q_{l,d} = \mathrm{slerp}(q_{i,d}, q_{i+1,d}, t_1) = q_{i,d} (q_{i,d}^{-1} q_{i+1,d})^{t_1}
where:
t_1 = \frac{FD_{j+1} - FD}{FD_{j+1} - FD_j};
t_2 = \frac{FL_{k+1} - FL}{FL_{k+1} - FL_k};
2) If FD_j < FD < FD_{j+1} and FL = FL_k, i.e. the actual focusing distance FD is not at any focusing distance sample FD_j while the actual focal length FL is exactly the focal length sample FL_k, use the correction quaternions q_{j,k}, q_{j+1,k} corresponding to the nearest focusing distance sample combinations (FD_j, FL_k), (FD_{j+1}, FL_k) and interpolate according to the following formula; the correction quaternion q_{l,d} for the current actual focusing distance FD and actual focal length FL_k is calculated by:
q_{l,d} = \mathrm{slerp}(q_{j,k}, q_{j+1,k}, t) = q_{j,k} (q_{j,k}^{-1} q_{j+1,k})^{t}
where
t = \frac{FD_{j+1} - FD}{FD_{j+1} - FD_j};
3) If FD = FD_j and FL_k < FL < FL_{k+1}, i.e. the actual focusing distance FD is exactly the focusing distance sample FD_j while the actual focal length FL is not at any focal length sample FL_k, use the correction quaternions q_{j,k}, q_{j,k+1} corresponding to the nearest focal length sample combinations (FD_j, FL_k), (FD_j, FL_{k+1}) and interpolate according to the following formula; the correction quaternion q_{l,d} for the current actual focusing distance FD_j and actual focal length FL is calculated by:
q_{l,d} = \mathrm{slerp}(q_{j,k}, q_{j,k+1}, t) = q_{j,k} (q_{j,k}^{-1} q_{j,k+1})^{t}
where
t = \frac{FL_{k+1} - FL}{FL_{k+1} - FL_k}.
8. A camera position correction calibration system, characterized in that it comprises: a servo-motor control system, a camera, a camera attitude external tracking device, a background screen, marker points, space measurement equipment, a data processing device and an image rendering engine;
wherein the servo-motor control system is connected with the camera for adjusting the focal length and focusing distance of the lens; the servo-motor control system is also connected with the data processing device for sending the focal length and focusing distance information of the lens to the data processing device, so that the data processing device calculates the correction calibration information and field angle and builds the look-up table;
the data processing device is also connected with the camera for reading the video stream data in real time;
the camera attitude external tracking device is arranged outside the camera and estimates the position and attitude information of the camera by measuring the position of the camera and the offset of the camera attitude external tracking device; the camera attitude external tracking device is also connected with the data processing device for sending measurement data;
at least 3 non-collinear marker points are arranged on the background screen; these marker points have identical radii and a colour that contrasts with the background screen;
the space measurement equipment and the image rendering engine are each connected with the data processing device; the space measurement equipment measures the world coordinates of the marker point centre positions and sends them to the data processing device, and the image rendering engine obtains the corresponding correction calibration information and field angle from the look-up table built by the data processing device.
9. The camera position correction calibration method according to claim 1, characterized in that in S1 the space measurement equipment is used to obtain the coordinates of the N marker points A_1, …, A_N on the background screen under the world coordinate system.
10. The camera position correction calibration method according to claim 1, characterized in that the camera attitude external tracking device is an optical tracking device or a mechanical arm.
11. The camera position correction calibration method according to claim 1, characterized in that the origin of the camera imaging coordinate system is set at the centre of the field of view.
12. The camera position correction calibration method according to claim 1, characterized in that the focusing distance samples and focal length samples adopt the sample points taken in the lens calibration process.
CN201510489677.0A 2015-08-11 2015-08-11 Camera position amendment scaling method and system Active CN105118055B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510489677.0A CN105118055B (en) 2015-08-11 2015-08-11 Camera position amendment scaling method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510489677.0A CN105118055B (en) 2015-08-11 2015-08-11 Camera position amendment scaling method and system

Publications (2)

Publication Number Publication Date
CN105118055A true CN105118055A (en) 2015-12-02
CN105118055B CN105118055B (en) 2017-12-15

Family

ID=54666029

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510489677.0A Active CN105118055B (en) 2015-08-11 2015-08-11 Camera position amendment scaling method and system

Country Status (1)

Country Link
CN (1) CN105118055B (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5991437A (en) * 1996-07-12 1999-11-23 Real-Time Geometry Corporation Modular digital audio system having individualized functional modules
JP2005003463A (en) * 2003-06-10 2005-01-06 Topcon Corp Calibration chart image display device, calibration apparatus, and calibration method
CN1906943A (en) * 2004-01-30 2007-01-31 株式会社丰田自动织机 Video image positional relationship correction apparatus, steering assist apparatus having the video image positional relationship correction apparatus and video image positional relationship correcti
CN101447073A (en) * 2007-11-26 2009-06-03 新奥特(北京)视频技术有限公司 Zoom lens calibration method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
C.S. FRASER ET AL: "Zoom-dependent camera calibration in digital close-range photogrammetry", Photogrammetric Engineering & Remote Sensing *
DONG Mingli et al.: "Distortion model and calibration method varying with focusing state and object distance", Chinese Journal of Scientific Instrument *
YAN Li et al.: "Space resection of linear-array CCD imagery using quaternion description", Geomatics and Information Science of Wuhan University *

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108702450B (en) * 2016-02-19 2020-10-27 快图有限公司 Camera module for image capture device
CN108702450A (en) * 2016-02-19 2018-10-23 快图有限公司 Stablize the method for image sequence
CN107306325A (en) * 2016-04-22 2017-10-31 宁波舜宇光电信息有限公司 The device of utilization space measurement of coordinates camera module visual field and its application
CN107809610B (en) * 2016-09-08 2021-06-11 松下知识产权经营株式会社 Camera parameter set calculation device, camera parameter set calculation method, and recording medium
US11233983B2 (en) 2016-09-08 2022-01-25 Panasonic Intellectual Property Management Co., Ltd. Camera-parameter-set calculation apparatus, camera-parameter-set calculation method, and recording medium
CN107809610A (en) * 2016-09-08 2018-03-16 松下知识产权经营株式会社 Camera parameter set calculating apparatus, camera parameter set calculation method and program
CN107223255A (en) * 2016-10-20 2017-09-29 深圳达闼科技控股有限公司 A kind of image preview method and device based on iris recognition
CN106990776A (en) * 2017-02-27 2017-07-28 广东省智能制造研究所 Robot goes home localization method and system
CN107562189A (en) * 2017-07-21 2018-01-09 广州励丰文化科技股份有限公司 A kind of space-location method and service equipment based on binocular camera
CN107492126A (en) * 2017-08-03 2017-12-19 厦门云感科技有限公司 Calibration method, device, system, medium and the equipment of camera central shaft
CN107492126B (en) * 2017-08-03 2019-11-05 厦门云感科技有限公司 Calibration method, device, system, medium and the equipment of camera central axis
CN109523597A (en) * 2017-09-18 2019-03-26 百度在线网络技术(北京)有限公司 The scaling method and device of Camera extrinsic
CN107665483B (en) * 2017-09-27 2020-05-05 天津智慧视通科技有限公司 Calibration-free convenient monocular head fisheye image distortion correction method
CN107665483A (en) * 2017-09-27 2018-02-06 天津智慧视通科技有限公司 Exempt from calibration easily monocular camera lens fish eye images distortion correction method
CN107610185A (en) * 2017-10-12 2018-01-19 长沙全度影像科技有限公司 A kind of fisheye camera fast calibration device and scaling method
CN107959794A (en) * 2017-11-29 2018-04-24 天津聚飞创新科技有限公司 Data Modeling Method, device and data capture method, device and electronic equipment
CN108282651A (en) * 2017-12-18 2018-07-13 北京小鸟看看科技有限公司 Antidote, device and the virtual reality device of camera parameter
CN110389349A (en) * 2018-04-17 2019-10-29 北京京东尚科信息技术有限公司 Localization method and device
CN110389349B (en) * 2018-04-17 2021-08-17 北京京东尚科信息技术有限公司 Positioning method and device
CN110858403A (en) * 2018-08-22 2020-03-03 杭州萤石软件有限公司 Method for determining scale factor in monocular vision reconstruction and mobile robot
CN109788277B (en) * 2019-01-08 2020-08-04 浙江大华技术股份有限公司 Method and device for compensating optical axis deviation of anti-shake movement and storage medium
CN109788277A (en) * 2019-01-08 2019-05-21 浙江大华技术股份有限公司 Compensation method, device and the storage medium of the optical axis deviation of stabilization machine core
CN109887041A (en) * 2019-03-05 2019-06-14 中测国检(北京)测绘仪器检测中心 A kind of method of mechanical arm control digital camera photo centre position and posture
CN110266944A (en) * 2019-06-21 2019-09-20 大庆安瑞达科技开发有限公司 A kind of calibration quick focusing method of remote optical monitoring system
CN110487249A (en) * 2019-07-17 2019-11-22 广东工业大学 A kind of unmanned plane scaling method for structure three-dimensional vibration measurement
CN111311671A (en) * 2020-05-12 2020-06-19 创新奇智(南京)科技有限公司 Workpiece measuring method and device, electronic equipment and storage medium
CN111311671B (en) * 2020-05-12 2020-08-07 创新奇智(南京)科技有限公司 Workpiece measuring method and device, electronic equipment and storage medium
CN113409391A (en) * 2021-06-25 2021-09-17 浙江商汤科技开发有限公司 Visual positioning method and related device, equipment and storage medium
CN113766131A (en) * 2021-09-15 2021-12-07 广州市明美光电技术有限公司 Multi-target-point focusing method and application thereof
CN114299167A (en) * 2022-03-11 2022-04-08 杭州灵西机器人智能科技有限公司 Monocular calibration method, system, device and medium for zoom lens
CN114299167B (en) * 2022-03-11 2022-07-26 杭州灵西机器人智能科技有限公司 Monocular calibration method, system, device and medium of zoom lens

Also Published As

Publication number Publication date
CN105118055B (en) 2017-12-15

Similar Documents

Publication Publication Date Title
CN105118055A (en) Camera positioning correction calibration method and system
CN106846415B (en) A kind of multichannel fisheye camera binocular calibration device and method
CN103198487B (en) A kind of automatic marking method for video monitoring system
US7479982B2 (en) Device and method of measuring data for calibration, program for measuring data for calibration, program recording medium readable with computer, and image data processing device
CN105243637B (en) One kind carrying out full-view image joining method based on three-dimensional laser point cloud
TWI555378B (en) An image calibration, composing and depth rebuilding method of a panoramic fish-eye camera and a system thereof
CN104574425B (en) A kind of demarcation of the master-slave camera chain based on rotating model and interlock method
CN107424118A (en) Based on the spherical panorama mosaic method for improving Lens Distortion Correction
CN107665483B (en) Calibration-free convenient monocular head fisheye image distortion correction method
CN109961485A (en) A method of target positioning is carried out based on monocular vision
CN103729839B (en) A kind of method and system of sensor-based outdoor camera tracking
CN105469412A (en) Calibration method of assembly error of PTZ camera
KR20090078463A (en) Distorted image correction apparatus and method
CN108020175A (en) A kind of more optical grating projection binocular vision tongue body surface three dimension entirety imaging methods
CN111461963A (en) Fisheye image splicing method and device
CN206460515U (en) A kind of multichannel fisheye camera caliberating device based on stereo calibration target
CN206460516U (en) A kind of multichannel fisheye camera binocular calibration device
CN107564070B (en) Large-scene binocular ranging and correcting method in overhead power transmission channel image monitoring
CN112907647B (en) Three-dimensional space size measurement method based on fixed monocular camera
CN112949478A (en) Target detection method based on holder camera
CN113724337A (en) Camera dynamic external parameter calibration method and device without depending on holder angle
CN105608706A (en) Structure design and geometric calibration method for polarization vision sensor
CN112258581A (en) On-site calibration method for multi-fish glasses head panoramic camera
CN112785655A (en) Method, device and equipment for automatically calibrating external parameters of all-round camera based on lane line detection and computer storage medium
CN102968784B (en) Method for aperture synthesis imaging through multi-view shooting

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant