CN105118055B - Camera position amendment scaling method and system - Google Patents

Camera position amendment scaling method and system Download PDF

Info

Publication number
CN105118055B
CN105118055B CN201510489677.0A CN201510489677A CN105118055B CN 105118055 B CN105118055 B CN 105118055B CN 201510489677 A CN201510489677 A CN 201510489677A CN 105118055 B CN105118055 B CN 105118055B
Authority
CN
China
Prior art keywords
mrow
mtd
msub
mtr
msubsup
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510489677.0A
Other languages
Chinese (zh)
Other versions
CN105118055A (en
Inventor
刘戈三
顾晓娟
王春水
唐修文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BEIJING FILM ACADEMY
Original Assignee
BEIJING FILM ACADEMY
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BEIJING FILM ACADEMY filed Critical BEIJING FILM ACADEMY
Priority to CN201510489677.0A priority Critical patent/CN105118055B/en
Publication of CN105118055A publication Critical patent/CN105118055A/en
Application granted granted Critical
Publication of CN105118055B publication Critical patent/CN105118055B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Landscapes

  • Lenses (AREA)
  • Studio Devices (AREA)

Abstract

The invention discloses a kind of camera position amendment scaling method and the system for realizing this method, belong to virtual manufacture technology field.Pass through lens parameters, internal relation between imaging surface and optictracking device, utilize the world coordinates of N number of mark point on background screen, picpointed coordinate and camera lens inner parameter and lens distortion parameter, obtain the translation vector of spin matrix between camera coordinates system and world coordinate system and the video camera centre of perspectivity in world coordinate system, with reference to the current location information that video camera posture external trace device provides under current state, try to achieve video camera amendment demarcation information and the angle of visual field, establish its look-up table with focal distance and with focal length relation, so that work as camera position, lens focus, when focal distance changes, automatic location amendment is carried out to the position of the virtual camera of virtual manufacturing system, so that real scene shooting frame of video picture and virtual two field picture perfect matching caused by computer.

Description

Camera position amendment scaling method and system
Technical field
The present invention relates to a kind of camera position amendment scaling method and the system for realizing this method, belong to virtual system Make technical field.
Background technology
Virtual camera posture and the angle of visual field are two extremely important information needed for computer picture rendering engine. In virtual photography Making programme, for the visual effect obtained, it is desirable to which real scene shooting picture matches with virtual screen.This matching is The shooting effect of real camera can be imitated by referring to the effect of virtual screen, if the focal length of camera lens, focal distance are sent out Raw to change, the angle of visual field of virtual camera is also corresponding to be changed in real time;When video camera seat in the plane changes, computer picture generates Change on position also accordingly occurs for the virtual camera seat in the plane in engine.Newauto (Beijing) Video Technology Co., Ltd. carries Go out the acquisition methods and device (application number 200810223256.3) of a kind of calibrating parameters, this method passes through known enough ginsengs The space coordinates of examination point calculates inner parameter and the outside of video camera using double flat areal coordinate scaling method or Linear Camaera Calibrating Method Parameter, the inner parameter of video camera include the optics and geometrical property, focal length of video camera, scale factor and lens distortion etc., The external parameter of video camera includes position and direction of the camera coordinates relative to world coordinate system, then using ant group algorithm pair Parameter optimizes, and generates evaluation function.But this document does not provide specific embodiment and technical scheme, and also not Which kind of illustrate specifically to be calculated using parameter.A kind of method for calibrating camera parameters that Material Evidence Identification Center, Ministry of Public Security proposes And device (application number 201010144764.X) is although be a kind of method of calibrating camera but it will be solved the problems, such as and this Invention is different, this application according in demarcation target image near principal point orthoscopic zone pixel point coordinates structure virtual grid so as to Distortion of camera coefficient is calculated, completes camera intrinsic parameter demarcation, and it is unrelated with video camera external parameter.Japanese Toyota is automatic A kind of video image positional relationship correction apparatus and set with the video image positional relationship amendment that loom Co., Ltd. proposes Standby steering assistance device and video image positional relationship modification method (application number 200480041109.4), this application are based on logical The video image reference point and virtual target point crossed video camera actual acquisition and the shown drift gage between coordinate on a monitor Coordinate conversion parameter is calculated, so as to the monitor coordinate for deriving virtual target.Beijing Jingwei Hirain Technologies Co., Ltd. proposes One kind emulation stand camera marking method and real-time machine (application number 201510134249.6), pass through the virtual video camera of foundation Collection scaling reference is simultaneously imaged, the relation established between its any point and virtual video camera plane of delineation coordinate, then by true Imaging point of the scaling reference any point in virtual video camera and real camera figure are established in imaging described in real camera acquisition Corresponding relation between image plane coordinate, solve to obtain scaling reference any point and real camera figure using the two relations Corresponding relation between image plane coordinate, so as to eliminate virtual video camera to existing parameter differences between real camera.Should Need first to establish virtual video camera in method, and the present invention directly calculates the angle of visual field being consistent with actual video camera, video camera Attitude information under world coordinate system gives image rendering engine (containing the anglec of rotation and camera position).
In order to reach such purpose, we allow for obtaining in real time under given coordinate system the position of video camera, Posture and visual field angle information.Usual way be on video camera install video camera posture external trace device such as optics with Track equipment, by the skew of the sensing station and optictracking device of photogrammetric camera, estimate that the position of video camera, posture are believed Breath.Existing real-time imaging virtually previews the video camera tracking of system, mainly measures in a manual manner, then according to green curtain Mark point or other characteristic points, are manually adjusted, and also do not account for the factor such as focal distance and the angle of visual field in addition, thus are led Cause less efficient and precision relatively low, cause the virtual scene that the video image from video camera and computer generate to misplace Phenomenon, virtual scene is caused to give people a kind of false sensation, the Overlay of virtual scene and real scene is undesirable.
The content of the invention
In order to reach the purpose of virtual background and real scene shooting image perfect matching, the position of video camera should be with computer picture wash with watercolours The position for contaminating the virtual camera that engine receives is identical.Aperture is obeyed in the imaging of the virtual camera of computer picture rendering engine Image-forming principle, so matching real camera position is then the intersection point of the centre of perspectivity, i.e. entrance pupil and optical axis of video camera. So it is video camera posture external trace device (such as optictracking device) and photography to finally obtain video camera amendment demarcation information The alternate position spike information of the machine centre of perspectivity.Based on above-mentioned principle, present invention offer is a kind of to be modified demarcation to camera position Method and system, by the internal relation between lens parameters, imaging surface and optictracking device, to the void of virtual manufacturing system Intend video camera position carry out automatic location amendment, and calculate be consistent with actual video camera the angle of visual field, camera position, Video camera posture gives image rendering engine, so that when camera position, lens focus, focal distance change, from It is dynamic to obtain video camera demarcation corrected parameter in real time, and the virtual of computer picture rendering engine is obtained according to demarcation corrected parameter Camera coordinates and posture so that real scene shooting frame of video picture and virtual two field picture perfect matching caused by computer.
In order to solve the above-mentioned technical problem, the invention provides a kind of method that demarcation is modified to camera position, Comprise the following steps:
S1. N number of mark point A on background screen is obtained1..., ANCoordinate under world coordinate system:A1(x1, y1, z1) ..., AN(xN, yN, zN), wherein coordinate of i-th of mark point under world coordinate system is Ai(xi, yi, zi), i=1~N;Institute State N number of mark point and comprise at least 3 not collinear mark points;The mark point coordinates is the home position of mark point and described Mark point is respectively positioned in the picture of video camera collection;
S2. the focal distance sampled point FD of numerical value from small to large is determined1, FD2..., FDj..., FDJ;Wherein j=1~J, FD1And FDJIt is the minimum focal distance of camera lens and maximum focal distance respectively;
If camera lens is zoom lens, also need to determine the focal length sampled point FL of numerical value from small to large1, FL2..., FLk..., FLK;Wherein k=1~K, FL1And FLKIt is the minimum focus and maximum focal length of camera lens respectively;
If S3. camera lens is tight shot, each focal distance sampled point FD is obtainedjIn corresponding camera lens Portion's parameter and lens distortion parameter;
If camera lens is zoom lens, each focal distance sampled point FD is obtainedjWith each focal length sampled point FLkIt is right The camera lens inner parameter and lens distortion parameter answered;
The camera lens inner parameter and lens distortion parameter are obtained by camera lens calibration process;
If S4. camera lens is tight shot, when focal distance takes j-th of focal distance sampled point FDjWhen, adjustment is taken the photograph Put the mark point blur-free imaging enabled on background screen in shadow seat in the plane;
If camera lens is zoom lens, when focal distance takes j-th of focal distance sampled point FDjAnd focal length is in kth Individual sampled point FLkDuring place, adjustment camera position enables the mark point blur-free imaging on background screen;
S5. the current location information that video camera posture external trace device provides under current state is obtained, including to make generation Boundary's coordinate system after translation successively respectively around its X-axis, Y-axis and Z axis rotation after with video camera posture external trace device from The rotation Eulerian angles that the coordinate system of definition overlaps, and position of the video camera posture external trace device under world coordinate system are sat Mark;
S6. sat using the picture point of the world coordinates of the S1 N number of mark points obtained, mark point under video camera imaging coordinate system The camera lens inner parameter and lens distortion parameter that mark and S3 are obtained, are obtained between camera coordinates system and world coordinate system Spin matrix and the video camera centre of perspectivity in the translation vector of world coordinate system, i.e. appearance of the video camera under world coordinate system State information;
S7. the current location information that video camera posture external trace device provides under the current state obtained using S5, and Attitude information of the video camera tried to achieve in S6 under world coordinate system, try to achieve video camera amendment demarcation information and the angle of visual field;
S8. if camera lens is tight shot:Focal distance is adjusted, to each focal distance sampled point FDjPerform step Rapid S4~S7, until traveling through all focal distance sampled points, obtain the amendment demarcation letter corresponding to each focal distance sampled point Breath and the angle of visual field, establish the look-up table LUT1 of focal distance-amendment demarcation information-angle of visual field;
If camera lens is zoom lens:Focal distance and focal length are adjusted, to each focal distance sampled point FDjWith it is every Individual focal length sampled point FLkStep S4~S7 is performed, until traveling through all focal distance sampled points and focal length sampled point, is obtained each Amendment demarcation information and the angle of visual field corresponding to focal distance sampled point and focal length sampled point, establish focal distance-focal length-amendment Demarcate the look-up table LUT2 of information-angle of visual field;
S9. if camera lens is tight shot:Then according to the look-up table LUT1, obtained by focal distance corresponding Amendment demarcation information and the angle of visual field.
If camera lens is zoom lens:Then according to the look-up table LUT2, phase is obtained by focal distance and focal length The amendment demarcation information and the angle of visual field answered.
Further, camera position amendment scaling method proposed by the invention also includes following preferable technical side Case:
First, preferably, in step S3,
(1) when camera lens is tight shot:
For j-th of focal distance sampled point FDj, corresponding camera lens inner parameter is:fxj, fyj, cxj, cyj, its In:
fxj:Expression focal distance is FDjWhen, the distance f of the emergent pupil and camera lens optical axis intersection point to imaging surface of camera lensjWith photography The ratio between horizontal width dx of each unit of machine imager, i.e.,
fyj:Expression focal distance is FDjWhen, the distance f of the emergent pupil and camera lens optical axis intersection point to imaging surface of camera lensjWith photography The ratio between vertical height dy of each unit of machine imager, i.e.,
cxj:Expression focal distance is FDjWhen, the optical axis of camera lens and the intersection point of imaging surface, with the level at imaging surface center partially The pixel count of shifting;
cyj:Expression focal distance is FDjWhen, the optical axis of camera lens and the intersection point of imaging surface, it is vertical with imaging surface center partially The pixel count of shifting;
For j-th of focal distance sampled point FDj, corresponding camera lens distortions parameter is:k1j, k2j, k3j, p1jWith p2j, wherein k1j, k2j, k3jIt is FD for focal distancejWhen radial distortion parameter, wherein k3jFor selected parameter, when camera lens is flake K is used during camera lens1j, k2j, k3jThese three radial distortion parameters, when camera lens is non-fish eye lens, k3j=0, i.e. other camera lenses Use k1j, k2jThis two radial distortion parameter;p1j, p2jIt is FD for focal distancejWhen tangential distortion parameter;
Also, obtain each focal distance FDjUnder internal reference matrix MjFor:
(2) when camera lens is zoom lens:
For j-th of focal distance sampled point FDjWith k-th of focal length sampled point FLk, corresponding camera lens inside is joined Number is:fxjk, fyjk, cxjk, cyjk, wherein:
fxjk:Expression focal distance is FDj, focal length FLkWhen, the emergent pupil of camera lens and camera lens optical axis intersection point to imaging surface away from From fjkThe ratio between with the horizontal width dx of each unit of imager, i.e.,
fyjk:Expression focal distance is FDj, focal length FLkWhen, the emergent pupil of camera lens and camera lens optical axis intersection point to imaging surface away from From fjkThe ratio between with the vertical height dy of each unit of imager, i.e.,
cxjk:Expression focal distance is FDj, focal length FLkWhen, the intersection point of optical axis and imaging surface, the water with imaging surface center The pixel count of flat skew;
cyjk:Expression focal distance is FDj, focal length FLkWhen, the intersection point of optical axis and imaging surface, with hanging down for imaging surface center The pixel count directly offset;
For j-th of focal distance sampled point FDjWith k-th of focal length sampled point FLk, corresponding camera lens distortions ginseng Number is:k1jk, k2jk, k3jk, p1jkAnd p2jk, wherein k1jk, k2jkAnd k3jkIt is FD for focal distancejAnd focal length is FLkWhen radial direction it is abnormal Variable element, wherein k3jkFor selected parameter, k is used when camera lens is fish eye lens1jk, k2jkAnd k3jkThese three radial distortions are joined Number;When camera lens is non-fish eye lens, k3jk=0, i.e. other camera lenses only use k1jk, k2jkThe two radial distortion parameters;p1jk, p2jkIt is FD for focal distancejAnd focal length is FLkWhen tangential distortion parameter;
And obtain each focal distance FDjWith focal length FLkUnder internal reference matrix be Mjk
2nd, preferably, in step S5,
(1) when camera lens is tight shot:
The current location information that the video camera posture external trace device provides is [θxj, θyj, θzj, Txj, Tyj, Tzj]; Wherein [θxj, θyj, θzj] for make world coordinate system after translation successively respectively around X-axis, Y-axis and Z axis rotation after with video camera The rotation Eulerian angles that the customized coordinate system of posture external trace device overlaps, [Txj, Tyj, Tzj] for outside video camera posture with Position coordinates of the track equipment under world coordinate system, the current location information that video camera posture external trace device is provided represent For following matrix form:
Here
TIj=[Txj Tyj Tzj]T
0=[0,0,0];
(2) when camera lens is zoom lens:
The current location information that the video camera posture external trace device provides is [θxjk, θyjk, θzjk, Txjk, Tyjk, Tzjk];Wherein [θxjk, θyjk, θzjk] it is after world coordinate system is surrounded X-axis, Y-axis and Z axis rotation respectively successively after translation The rotation Eulerian angles overlapped with the customized coordinate system of video camera posture external trace device, [Txjk, Tyjk, Tzjk] it is video camera Position coordinates of the posture external trace device under world coordinate system;The present bit that video camera posture external trace device is provided Confidence breath is expressed as matrix form:
Here
TIjk=[Txjk Tyjk Tzjk]T
0=[0,0,0]
3rd, preferably, in step S6:
(1) when camera lens is tight shot:
Current focal distance is FDjWhen, obtain the spin matrix R between camera coordinates system and world coordinate systempj, photography Translation vector T of the machine centre of perspectivity in world coordinate systempj, attitude matrix H of the video camera under world coordinate systempjMethod be:
According to the picpointed coordinate of i-th of mark pointAnd current focal distance FDjUnder lens distortion ginseng Number, obtain the picpointed coordinate (x ' of i-th of mark point after correctionij, y 'ij):
Wherein if camera lens is non-fish eye lens, k3j=0;
World coordinates (the x of i-th of mark pointi, yi, zi) picpointed coordinate (x ' after corresponding correctionij, y 'ij) it Between relation be expressed as:
Wherein λjReferred to as scale factor,
Spin matrix R wherein between camera coordinates system and world coordinate systempjFor orthogonal matrix, TpjIt is saturating for video camera Depending on center world coordinate system translation vector:
Pass through the world coordinates (x of formula (1) (2) and each mark pointi, yi, zi) and corresponding correction after picpointed coordinate (x 'ij, y′ij), obtain matrix Ojj[Rpj, Tpj] and attitude information of the video camera under world coordinate system WhereinFor make world coordinate system after translation successively respectively around X-axis, Y-axis and Z axis rotation after with photography The rotation Eulerian angles that machine coordinate system overlaps,For translation of the video camera centre of perspectivity under world coordinate system Vector;
Again because matrix RpjIt is unit orthogonal matrix, has:
λj=1/ | | Oj(:, 1) | |;
Wherein Oj(:, 1) and it is matrix OjThe 1st row, | | | | represent vector Euclid norm;
According to the spin matrix R between the camera coordinates system of acquisition and world coordinate systempjAnd the video camera centre of perspectivity In the translation vector T of world coordinate systempj, attitude information of the video camera under world coordinate system is expressed as matrix Hpj
(2) when camera lens is zoom lens:
Current focal distance is FDjAnd current focus is FLkWhen, obtain between camera coordinates system and world coordinate system Spin matrix Rpjk, the video camera centre of perspectivity world coordinate system translation vector Tpjk, appearance of the video camera under world coordinate system State matrix HpjkMethod be:
According to current focal distance FDjWith current focus FLkThe picpointed coordinate of lower i-th of mark point And Lens distortion parameter obtains the picpointed coordinate (x ' after its correctionijk, y 'ijk):
Wherein if camera lens is non-fish eye lens, k3jk=0;
World coordinates (the x of i-th of mark pointi, yi, zi) picpointed coordinate (x ' after corresponding correctionijk, y 'ijk) it Between relation be expressed as:
Wherein λjkReferred to as scale factor,
Spin matrix R wherein between camera coordinates system and world coordinate systempjkFor orthogonal matrix, TpjkIt is saturating for video camera Depending on center world coordinate system translation vector:
Pass through the world coordinates (x of formula (3) (4) and each mark pointi, yi, zi) and corresponding correction after picpointed coordinate (x′ijk, y 'ijk), obtain matrix Ojkjk[Rpjk, Tpjk] and attitude information of the video camera under world coordinate systemWhereinFor make world coordinate system after translation successively The rotation Eulerian angles overlapped respectively after X-axis, Y-axis and Z axis rotation with camera coordinates system, For translation vector of the video camera centre of perspectivity under world coordinate system;
Because matrix RpjkIt is unit orthogonal matrix, has:
λjk=1/ | | Ojk(:, 1) | |;
Wherein Ojk(:, 1) and it is matrix OjkThe 1st row, | | | | represent vector Euclid norm;
According to the spin matrix R between the camera coordinates system of acquisition and world coordinate systempjkAnd the video camera centre of perspectivity In the translation vector T of world coordinate systempjk, attitude information of the video camera under world coordinate system is expressed as matrix Hpjk
4th, preferably, step S7 comprises the following steps:
(1) when camera lens is tight shot:
S7.1 calculates video camera attitude rectification matrix Hj
S7.2 is by video camera attitude rectification matrix HjIt is transformed to correct quaternary number (θj, nxj, nyj, nzj) and translation vector Tj
The video camera attitude rectification matrix HjIt is 4 × 4 matrix, its live part is Hj(1:3,1:4), i.e. HjBefore Three rows, by Hj(1:3,1:4) it is expressed as form:
Hj(1:3,1:4)=[Rj, Tj];
Wherein spin matrix RjFor 3 × 3 real number matrix, translation vector Tj=[txj, tyj, tzj]TFor 3-dimensional column vector;
By spin matrix RjAmendment quaternary number (θ is converted to according to following formulaj, nxj, nyj, nzj):
HereFor 3-dimensional row vector;
Thus storage RjAmendment quaternary number (θ need to only be storedj, nxj, nyj, nzj), therefore video camera attitude rectification matrix HjBe converted to video camera amendment demarcation information (θj, nxj, nyj, nzj, txj, tyj, tzj);
S7.3 calculates camera coverage angle:When focal distance is FDjWhen, camera coverage angle α is calculated according to following formulaj
Wherein W, H are respectively the width of video camera, high-resolution.
(2) when camera lens is zoom lens:
S7.1 calculates video camera attitude rectification matrix Hjk
S7.2 is by video camera attitude rectification matrix HjkIt is transformed to correct quaternary number (θjk, nxjk, nyjk, nzjk) and translation vector Tjk
The video camera attitude rectification matrix HjkIt is 4 × 4 matrix, its live part is Hjk(1:3,1:4), i.e. Hjk's First three rows, by Hjk(1:3,1:4) it is expressed as form:
Hjk(1:3,1:4)=[Rjk, Tjk];
Wherein spin matrix RjkFor 3 × 3 real number matrix, translation vector Tjk=[txjk, tyjk, tzjk]TFor 3-dimensional column vector;
By spin matrix RjkAmendment quaternary number (θ is converted to according to following formulajk, nxjk, nyjk, nzjk):
HereFor 3-dimensional row vector;
Thus storage RjkAmendment quaternary number (θ need to only be storedjk, nxjk, nyjk, nzjk), therefore video camera attitude rectification Matrix HjkBe converted to video camera amendment demarcation information (θjk, nxjk, nyjk, nzjk, txjk, tyjk, tzjk);
S7.3 calculates camera coverage angle:When focal distance is FDjAnd focal length is FLkWhen, video camera is calculated according to following formula and regarded Rink corner αjk
Wherein W, H are respectively the width of video camera, high-resolution.
5th, preferably, in step S9,
When actual focal distance FD is not in any one focal distance sampled point FDjPlace, or real focal length FL be not in office What focal length sampled point FLkPlace, or when actual focal distance FD is not in any one focal distance sampled point FDjPlace and Real focal length FL is not also in any one focal length sampled point FLkDuring place, obtained by the way of following interpolation arithmetic corresponding to it Amendment demarcation information and the angle of visual field:
Row interpolation is entered using SLERP interpolation methods for the amendment quaternary number that amendment is demarcated in information, translation vector uses Linear interpolation;The angle of visual field enters row interpolation using linear interpolation method.
Further,
(1) when camera lens is tight shot:
When actual focal distance FD is not in any one focal distance sampled point FDjPlace, using SLERP algorithms to amendment four The method that first number enters row interpolation is as follows:
If FDj< FD < FDj+1, video camera is in focal distance FDjAnd FDj+1The video camera attitude rectification matrix at place is respectively HjWith Hj+1, its spin matrix is respectively RjAnd Rj+1, spin matrix is expressed as to correct quaternary number WithAmendment quaternary number q when then for currently practical focal distance FD is by following formula Calculate:
HereIt is qjIt is inverse,
(2) when camera lens is zoom lens:
When actual focal distance FD is not in any one focal distance sampled point FDjPlace, or real focal length FL be not in office What focal length sampled point FLkPlace, or when actual focal distance FD is not in any one focal distance sampled point FDjPlace and Real focal length FL is not also in any one focal length sampled point FLkDuring place, row interpolation is entered to amendment quaternary number using SLERP algorithms Method is as follows:
If 1. FDj< FD < FDj+1And FLk< FL < FLk+1, i.e., when actual focal distance FD is not in any one focusing From sampled point FDjPlace, and real focal length FL is not also in any one focal length sampled point FLkPlace, using with its closest to focusing Distance and focal length sampled point combination (FDj, FLk), (FDj, FLk+1), (FDj+1, FLk), (FDj+1, FLk+1) corresponding to amendment four First number qJ, k, qJ, k+1, qJ+1, k, qJ+1, k+1, enter row interpolation according to the following formula, then for currently practical focal distance FD and actual Jiao Away from for FL when amendment quaternary number qL, dCalculated by following formula:
Wherein:
If 2. FDj< FD < FDj+1, FL=FLkWhen, i.e., when actual focal distance FD does not sample in any one focal distance Point FDjPlace and real focal length FL is focal length sampled point FLkWhen, using with its closest to focal distance sampled point combine (FDj, FLk), (FDj+1, FLk) corresponding to amendment quaternary number information qJ, k, qJ+1, k, enter row interpolation according to the following formula, then for currently practical Focal distance FD and amendment quaternary number q when real focal length is FLL, dCalculated by following formula
Wherein
If 3. FD=FDj, FLk< FL < FLk+1When, i.e., when actual focal distance FD is focal distance sampled point FDjIt is and real Border focal length FL is not in any one focal length sampled point FLkDuring place, using with its closest to focal length sampled point combine (FDj, FLk), (FDj, FLk+1) corresponding to amendment quaternary number information qJ, k, qJ, k+1, enter row interpolation according to the following formula, then for currently practical focusing Distance FD and amendment quaternary number q when real focal length is FLL, dCalculated by following formula:
Wherein
In addition, according to as above method, the present invention also proposes a kind of camera position amendment calibration system, including:Servo horse Up at control system, video camera, video camera posture external trace device, background screen, mark point, space measurement equipment, data Manage equipment, computer picture rendering engine;
Wherein, servo-motor control system is connected with video camera, for adjusting the focal length and focal distance of camera lens;Servo horse Also it is connected up to control system with data processing equipment, the focal length for sending camera lens to data processing equipment is believed with focal distance Breath, so that data processing equipment calculates amendment demarcation information and the angle of visual field and establishes look-up table;
Data processing equipment is also connected with video camera, for reading video stream data in real time;
Video camera posture external trace device is arranged at outside video camera, for the position by photogrammetric camera and photography The skew of machine posture external trace device, estimate position and the attitude information of video camera;Video camera posture external trace device is also It is connected with data processing equipment, for sending measurement data information;
At least three not conllinear mark point is set on background screen, and these mark point radiuses are identical, color and background screen With contrast;
Space measurement equipment and image rendering engine are connected with data processing equipment respectively, and space measurement equipment is used to measure The world coordinates of mark point home position is simultaneously sent to data processing equipment, and image rendering engine is used for according to data processing equipment The look-up table of foundation obtains corresponding amendment demarcation information and the angle of visual field.
Compared with prior art, the beneficial effects of the present invention are not only realize video camera amendment calibration process completely certainly Dynamicization, without manual intervention, and the position of video camera, posture and angle of visual field letter under given coordinate system can be calculated in real time Breath, and engine is generated according to above- mentioned information accurately image by image so that real scene shooting frame of video picture and void caused by computer Intend two field picture perfect matching.
Brief description of the drawings
Fig. 1 is the composition schematic diagram of camera position amendment calibration system proposed by the present invention;Wherein, 1- servo motors control System processed;2- video cameras, 3- camera lens, 4- video camera posture external trace devices;5- background screens;6- mark points;7- Data processing equipment;
Fig. 2 is the method flow diagram proposed by the present invention that demarcation is modified to tight shot camera position;
Fig. 3 is the method flow diagram proposed by the present invention that demarcation is modified to zoom lens camera position.
Embodiment
The present invention is described in detail below in conjunction with drawings and examples, while also describes technical solution of the present invention Technical problem, principle and the beneficial effect of solution.
As shown in figure 1, be the structure chart according to camera position amendment calibration system proposed by the present invention, as illustrated, Camera position amendment calibration system proposed by the present invention includes:Servo-motor control system, video camera, outside video camera posture Tracking equipment, background screen, mark point, space measurement equipment (not shown), data processing equipment, image rendering engine (not shown);
Wherein, servo-motor control system is connected with video camera, for adjusting the focal length and focal distance of camera lens;Servo horse Also it is connected up to control system with data processing equipment, the focal length for sending camera lens to data processing equipment is believed with focal distance Breath, so that data processing equipment calculates amendment demarcation information and the angle of visual field and establishes look-up table;
Data processing equipment is also connected with video camera, for reading video stream data in real time;
Video camera posture external trace device is arranged at outside video camera, for the position by photogrammetric camera and photography The skew of machine posture external trace device, estimate position and the attitude information of video camera;Video camera posture external trace device is also It is connected with data processing equipment, for sending measurement data information;
At least three not conllinear mark point is set on background screen, and these mark point radiuses are identical, color and background screen With contrast;
Space measurement equipment and image rendering engine are connected with data processing equipment respectively, and space measurement equipment is used to measure The world coordinates of mark point home position is simultaneously sent to data processing equipment, and image rendering engine is used for according to data processing equipment The look-up table of foundation obtains corresponding amendment demarcation information and the angle of visual field.
Preferably, background screen is green curtain or blue curtain.
Preferably, the video camera posture external trace device is optictracking device or mechanical arm.
The workflow of camera position amendment calibration system proposed by the present invention comprises the following steps:
S1. coordinate of N number of mark point under world coordinate system on background screen, A are obtained using space measurement equipment1(x1, y1, z1) ..., AN(xN, yN, zN), and the world coordinates of mark point is sent to data processing equipment, N number of mark point is at least Including 3 not collinear mark points;
S2. the focal distance sampled point FD of multiple numerical value from small to large is determined1, FD2..., FDj..., FDJ(j=1~J), FD1And FDJIt is the minimum focal distance of camera lens and maximum focal distance respectively;
If camera lens is zoom lens, also need to determine the focal length sampled point FL of numerical value from small to large1, FL2..., FLk..., FLK;Wherein k=1~K, FL1And FLKIt is the minimum focus and maximum focal length of camera lens respectively;
Preferably, the focal distance sampled point and focal length sampled point use the sampled point in camera lens calibration process; Following document is refer on camera lens demarcation:[1]Duane C B.Close-range camera calibration[J] .Photogram.Eng.Remote Sens, 1971,37:855-866. and [2] Zhang Z.Flexible camera Calibration by viewing a plane from unknown orientations [C] //Computer Vision, 1999.The Proceedings of the Seventh IEEE International Conference on.IEEE, 1999,1:666-673.
Focal distance choosing method:It is placed on the rotation section of the servo-motor control system of the regulation focus of camera lens (such as [0,1]) J-1 sections are divided into, common J end points, these corresponding end points are focal distance sampling point value;J typically chooses 10-30 Between integer.
The choosing method of focal length sampled point:It is placed on the rotation section of the servo-motor control system focused of camera lens K-1 sections are divided into (such as [0,1]), common K end points, these corresponding end points are focal length sampling point value;K is typically according to camera lens Zooming range is suitably chosen.
S3. camera inner parameter corresponding to each focal distance sampled point is obtained by data processing equipment and distortion is joined Number;The camera lens inner parameter and lens distortion parameter are obtained by camera lens calibration process;
S4. servo-motor control system sends the focal length and focal distance information of camera lens to data processing equipment, works as focusing Distance is in j-th of sampled point FDjPlace, adjustment camera position enable the mark point blur-free imaging on background screen;
If camera lens is zoom lens, when focal distance takes j-th of focal distance sampled point FDjAnd focal length is in kth Individual sampled point FLkDuring place, adjustment camera position enables the mark point blur-free imaging on background screen;
S5. passed using video camera posture external trace device such as optictracking device or mechanical arm to data processing equipment Pass the positional information under its current state;Including to make world coordinate system surround its X-axis, Y-axis and Z respectively successively after translation The rotation Eulerian angles overlapped after axle rotation with the customized coordinate system of video camera posture external trace device, and video camera posture Position coordinates of the external trace device under world coordinate system;
The optictracking device such as total powerstation etc.;
S6. data processing equipment utilizes the world coordinates of N number of mark point of S1 acquisitions, mark point in video camera imager coordinate The camera lens inner parameter and lens distortion parameter that picpointed coordinate and S3 under system obtain, obtain camera coordinates system and generation Spin matrix and the video camera centre of perspectivity between boundary's coordinate system are in the translation vector of world coordinate system, i.e., video camera is in the world Attitude information under coordinate system;
Preferably, video camera imaging coordinate system origin is set in field of view center.
S7. video camera posture external trace device provides current under the current state that data processing equipment is obtained using S5 Attitude information of the video camera tried to achieve in positional information, and S6 under world coordinate system, try to achieve video camera amendment demarcation information and The angle of visual field;
S8. if camera lens is tight shot:Focal distance is adjusted, to each focal distance sampled point FDjPerform step Rapid S4~S7, until traveling through all focal distance sampled points, obtain the amendment demarcation letter corresponding to each focal distance sampled point Breath and the angle of visual field, data processing equipment establish the look-up table LUT1 of focal distance-amendment demarcation information-angle of visual field;
If camera lens is zoom lens:Focal distance and focal length are adjusted, to each focal distance sampled point FDjWith it is every Individual focal length sampled point FLkStep S4~S7 is performed, until traveling through all focal distance sampled points and focal length sampled point, is obtained each Amendment demarcation information and the angle of visual field, data processing equipment corresponding to focal distance sampled point and focal length sampled point establish focusing From the look-up table LUT2 of-focal length-amendment demarcation information-angle of visual field;
S9. if camera lens is tight shot:The look-up table that image rendering engine is established according to data processing equipment LUT1, corresponding amendment demarcation information and the angle of visual field are obtained by focal distance.
If camera lens is zoom lens:The look-up table that image rendering engine is established according to data processing equipment LUT2, corresponding amendment demarcation information and the angle of visual field are obtained by focal distance and focal length.
It goes for tight shot to the camera position amendment calibration system being related in the present invention, is equally applicable to become Zoom lens, illustrate this hair by taking the camera position amendment calibration system with tight shot and zoom lens as an example respectively here The technical scheme of the bright camera position amendment scaling method:
First, tight shot (see accompanying drawing 2)
S1. the world coordinates of mark point is obtained:
The world coordinates A of N number of mark point on utilization space measuring apparatus measurement background screen1(x1, y1, z1) ..., AN(xN, yN, zN), N number of mark point comprises at least 3 not collinear mark points;The mark point coordinates is the home position of mark point And the mark point is respectively positioned in the picture of video camera collection;Coordinate of wherein i-th of the mark point under world coordinate system be Ai(xi, yi, zi), i=1~N;
S2. focal distance sampled point is determined:
By servo-motor control system, the focal distance of numerical value from small to large is determined according to camera lens calibration process Sampled point FD1, FD2..., FDj..., FDJ(j=1~J), FD1And FDJIt is the minimum focal distance of camera lens and maximum focusing respectively Distance;
S3. camera inner parameter and distortion parameter corresponding to each focal distance sampled point are obtained and is recorded, institute State camera lens inner parameter and lens distortion parameter is obtained by camera lens calibration process;
Specifically, for j-th of focal distance sampled point FDj, corresponding camera lens inner parameter is:fxj, fyj, cxj, cyj, wherein:
fxj:Expression focal distance is FDjWhen, the distance f of the emergent pupil and camera lens optical axis intersection point to imaging surface of camera lensjWith photography The ratio between horizontal width dx of each unit of machine imager, i.e.,
fyj:Expression focal distance is FDjWhen, the distance f of the emergent pupil and camera lens optical axis intersection point to imaging surface of camera lensjWith photography The ratio between vertical height dy of each unit of machine imager, i.e.,
Preferably, video camera imager described herein is CCD or CMOS.
cxj:Expression focal distance is FDjWhen, the optical axis of camera lens and the intersection point of imaging surface, with the level at imaging surface center partially The pixel count of shifting;
cyj:Expression focal distance is FDjWhen, the optical axis of camera lens and the intersection point of imaging surface, it is vertical with imaging surface center partially The pixel count of shifting;
For j-th of focal distance sampled point FDj, corresponding camera lens distortions parameter is:k1j, k2j, k3j, p1jWith p2j, wherein k1j, k2j, k3jIt is FD for focal distancejWhen radial distortion parameter, wherein k3jFor selected parameter, when camera lens is flake K is used during camera lens1j, k2j, k3jThese three radial distortion parameters, when camera lens is non-fish eye lens, k3j=0, i.e. other camera lenses Use k1j, k2jThis two radial distortion parameter;p1j, p2jIt is FD for focal distancejWhen tangential distortion parameter;
Also, obtain each focal distance FDjUnder internal reference matrix MjFor:
S4. when focal distance takes j-th of focal distance sampled point FDjWhen, adjustment camera position causes on background screen Mark point being capable of blur-free imaging;
S5. data processing equipment obtains the current location of video camera posture external trace device such as optictracking device transmission Information [θxj, θyj, θzj, Txj, Tyj, Tzj].Wherein [θxj, θyj, θzj] it is world coordinate system is surrounded respectively successively after translation The rotation Eulerian angles overlapped after X-axis, Y-axis and Z axis rotation with the customized coordinate system of video camera posture external trace device, [Txj, Tyj, Tzj] it is position coordinates of the video camera posture external trace device under world coordinate system, by video camera posture external trace The current location information that equipment provides is expressed as matrix form:
Here
TIj=[Txj Tyj Tzj]T
0=[0,0,0];
S6. sat using the picture point of the world coordinates of the S1 N number of mark points obtained, mark point under video camera imaging coordinate system The camera lens inner parameter and lens distortion parameter that mark and S3 are obtained, are obtained between camera coordinates system and world coordinate system Spin matrix RpjAnd the video camera centre of perspectivity is in the translation vector T of world coordinate systempj, i.e., video camera is in world coordinate system Under attitude information;
Preferably, current focal distance is FDjWhen, obtain the spin moment between camera coordinates system and world coordinate system Battle array Rpj, the video camera centre of perspectivity world coordinate system translation vector Tpj, attitude matrix H of the video camera under world coordinate systempj Method be:
According to the picpointed coordinate of i-th of mark pointAnd current focal distance FDjUnder lens distortion ginseng Number, obtain the picpointed coordinate (x ' of i-th of mark point after correctionij, y 'ij):
Wherein if camera lens is non-fish eye lens, k3j=0;
World coordinates (the x of i-th of mark pointi, yi, zi) picpointed coordinate (x ' after corresponding correctionij, y 'ij) it Between relation be expressed as:
Wherein λjReferred to as scale factor,
Spin matrix R wherein between camera coordinates system and world coordinate systempjFor orthogonal matrix, TpjIt is saturating for video camera Depending on center world coordinate system translation vector:
Pass through the world coordinates (x of formula (1) (2) and each mark pointi, yi, zi) and corresponding correction after picpointed coordinate (x′ij, y 'ij), obtain matrix Ojj[Rpj, Tpj] and attitude information of the video camera under world coordinate systemWhereinTo make world coordinate system surround X respectively successively after translation The rotation Eulerian angles overlapped after axle, Y-axis and Z axis rotation with camera coordinates system,Had an X-rayed for video camera Translation vector of the center under world coordinate system;
Again because matrix RpjIt is unit orthogonal matrix, has:
λj=1/ | | Oj(:, 1) | |;
Wherein Oj(:, 1) and it is matrix OjThe 1st row, | | | | represent vector Euclid norm;
According to the spin matrix R between the camera coordinates system of acquisition and world coordinate systempjAnd the video camera centre of perspectivity In the translation vector T of world coordinate systempj, attitude information of the video camera under world coordinate system is expressed as matrix Hpj
S7. the current location information that video camera posture external trace device provides under the current state obtained using S5, and Attitude information of the video camera tried to achieve in S6 under world coordinate system, try to achieve video camera amendment demarcation information and the angle of visual field;
S7.1 calculates video camera attitude rectification matrix first.Calculate video camera attitude rectification matrix and mainly utilize video camera With the consistency of the relative position relation of optictracking device, constant relative position relation here is the effective of once mounting Phase, optictracking device is reinstalled, it is necessary to be re-scaled to camera position.Following formula can be utilized to calculate when calculating to take the photograph Shadow machine attitude rectification matrix Hj
S7.2 is by video camera attitude rectification matrix HjIt is transformed to correct quaternary number (θj, nxj, nyj, nzj) and translation vector Tj
The calculating video camera attitude rectification matrix H obtained due to formula (5)jIt is 4 × 4 matrix, it is not easy to store, and And be also not easy to carry out interpolation calculation in subsequent arithmetic, therefore, it is necessary to transform it into the form for being easy to storage and computing.
The video camera attitude rectification matrix HjIt is 4 × 4 matrix, its live part is Hj(1:3,1:4), i.e. HjBefore Three rows, by Hj(1:3,1:4) it is expressed as form:
Hj(1:3,1:4)=[Rj, Tj];
Wherein spin matrix RjFor 3 × 3 real number matrix, translation vector Tj=[txj, tyj, tzj]TFor 3-dimensional column vector;
Due to spin matrix RjCan mutually it be changed with quaternary number, therefore by spin matrix RjAmendment is converted to according to following formula Quaternary number (θj, nxj, nyj, nzj):
HereFor 3-dimensional row vector.
HereFor 3-dimensional row vector;
Thus storage RjAmendment quaternary number (θ need to only be storedj, nxj, nyj, nzj), therefore video camera attitude rectification matrix HjBe converted to video camera amendment demarcation information (θj, nxj, nyj, nzj, txj, tyj, tzj);Wherein correct quaternary number and translation vector It is collectively referred to as amendment demarcation information.
S7.3 calculates camera coverage angle:When focal distance is FDjWhen, camera coverage angle α is calculated according to following formulaj
Wherein W, H are the width of video camera, high-resolution.
S8. focal distance is adjusted, to each focal distance sampled point FDjStep S4~S7 is performed, it is all right until traveling through Defocus distance sampled point, obtain the amendment demarcation information (θ corresponding to each focal distance sampled pointj, nxj, nyj, nzj, txj, tyj, tzj) and angle of visual field αj, establish the look-up table LUT1 that information-angle of visual field is demarcated in focal distance-amendment;
S9. export LUT1 and focal distance is passed through according to the look-up table LUT1 to image rendering engine, image rendering engine Obtain corresponding amendment demarcation information and the angle of visual field.
1 focal distance of table-amendment demarcation information-angle of visual field look-up table LUT1 form
When camera lens are tight shot, image generation engine is being obtained using look-up table LUT1 according to focal distance It is not every because the focal distance sampled point recorded in look-up table LUT1 is limited when corresponding amendment demarcation information and the angle of visual field The focal distance of secondary selection is all precisely the focal distance of sample point.For the focal distance at non-sampled point, can use The mode of interpolation arithmetic obtains amendment demarcation information corresponding to its, due to correcting quaternary number, translation vector and the angle of visual field its number It is different to learn characteristic, therefore in order to reach more accurate interpolation arithmetic effect, different interpolation algorithms can be respectively adopted.It is specific next Say, the amendment quaternary number demarcated for amendment in information enters row interpolation, translation vector (t using SLERP interpolation methodsx, ty, tz) Using linear interpolation, the angle of visual field uses linear interpolation, or other forms to enter row interpolation.
Wherein, preferably, the method for entering row interpolation to amendment quaternary number using SLERP algorithms is as follows:
If FDj< FD < FDj+1, video camera is in focal distance FDjAnd FDj+1The video camera attitude rectification matrix at place is respectively HjAnd Hj+1, its spin matrix is respectively RjAnd Rj+1, spin matrix is expressed as to correct quaternary numberWithWhen then for currently practical focal distance FD Amendment quaternary number q calculated by following formula:
HereIt is qjIt is inverse,
It is possible thereby in the case where focal distance is any value, corresponding amendment demarcation information and the angle of visual field are obtained.
2nd, zoom lens (see accompanying drawing 3)
For the camera position amendment calibration system using zoom lens, the change except needing consideration focal distance, The change of focal length should also be considered simultaneously, the amendment mark under each focal distance sampled point, focal length sampled point should be obtained respectively Information and the angle of visual field are determined, referring specifically to Fig. 3:
S1. the world coordinates of mark point is obtained:
N number of mark point A on utilization space measuring apparatus measurement background screen1..., ANWorld coordinate system under coordinate A1 (x1, y1, z1) ..., AN(xN, yN, zN), N number of mark point comprises at least 3 not collinear mark points;The mark point is sat It is designated as the home position of mark point and the mark point is respectively positioned in the picture of video camera collection;Wherein i-th of mark point exists Coordinate under world coordinate system is Ai(xi, yi, zi), i=1~N;
S2. focal distance sampled point and focal length sampled point are determined:
By servo-motor control system, the focal distance of numerical value from small to large is determined according to camera lens calibration process Sampled point FD1, FD2..., FDj..., FDJ(j=1~J), FD1And FDJIt is the minimum focal distance of camera lens and maximum focusing respectively Distance;In addition, determine the focal length sampled point FL of numerical value from small to large1, FL2..., FLk..., FLK(k=1~K), FL1And FLKPoint It is not the minimum focus and maximum focal length of camera lens.
S3. it is abnormal that camera lens inner parameter and camera lens corresponding to each focal distance sampled point and focal length sampled point are obtained Variable element is simultaneously recorded, and the camera lens inner parameter and lens distortion parameter are obtained by camera lens calibration process;
Preferably, for j-th of focal distance sampled point FDjWith k-th of focal length sampled point FLk, corresponding video camera mirror Head inner parameter be:fxjk, fyjk, cxjk, cyjk, wherein:
fxjk:Expression focal distance is FDj, focal length FLkWhen, the emergent pupil of camera lens and camera lens optical axis intersection point to imaging surface away from From fjkThe ratio between with the horizontal width dx of each unit of imager, i.e.,
fyjk:Expression focal distance is FDj, focal length FLkWhen, the emergent pupil of camera lens and camera lens optical axis intersection point to imaging surface away from From fjkThe ratio between with the vertical height dy of each unit of imager, i.e.,
cxjk:Expression focal distance is FDj, focal length FLkWhen, the intersection point of optical axis and imaging surface, the water with imaging surface center The pixel count of flat skew;
cyjk:Expression focal distance is FDj, focal length FLkWhen, the intersection point of optical axis and imaging surface, with hanging down for imaging surface center The pixel count directly offset;
For j-th of focal distance sampled point FDjWith k-th of focal length sampled point FLk, corresponding camera lens distortions ginseng Number is:k1jk, k2jk, k3jk, p1jkAnd p2jk, wherein k1jk, k2jkAnd k3jkIt is FD for focal distancejAnd focal length is FLkWhen radial direction it is abnormal Variable element, wherein k3jkFor selected parameter, k is used when camera lens is fish eye lens1jk, k2jkAnd k3jkThese three radial distortions are joined Number;When camera lens is non-fish eye lens, k3jk=0, i.e. other camera lenses only use k1jk, k2jkThe two radial distortion parameters;p1jk, p2jkIt is FD for focal distancejAnd focal length is FLkWhen tangential distortion parameter;
And obtain each focal distance FDjWith focal length FLkUnder internal reference matrix be Mjk
S4. when focal distance takes j-th of focal distance sampled point FDjAnd focal length is in k-th of sampled point FLkDuring place, adjustment is taken the photograph Put the mark point blur-free imaging enabled on background screen in shadow seat in the plane;
S5. obtain current state under video camera posture external trace device such as optictracking device or mechanical arm transmission work as Front position information [θxjk, θyjk, θzjk, Txjk, Tyjk, Tzjk].Wherein [θxjk, θyjk, θzjk] it is to make world coordinate system after translation The rotation overlapped respectively after X-axis, Y-axis and Z axis rotation with the customized coordinate system of video camera posture external trace device successively Turn Eulerian angles, [Txjk, Tyjk, Tzjk] it is position coordinates of the video camera posture external trace device under world coordinate system;Will photography The current location information that machine posture external trace device provides is expressed as matrix form:
Here
TIjk=[Txjk Tyjk Tzjk]T
0=[0,0,0]
S6. sat using the picture point of the world coordinates of the S1 N number of mark points obtained, mark point under video camera imaging coordinate system The camera lens inner parameter and lens distortion parameter that mark and S3 are obtained, are obtained between camera coordinates system and world coordinate system Spin matrix RpjkAnd the video camera centre of perspectivity is in the translation vector T of world coordinate systempjk, i.e., video camera is in world coordinate system Under attitude information;
Current focal distance is FDjAnd current focus is FLkWhen, obtain between camera coordinates system and world coordinate system Spin matrix Rpjk, the video camera centre of perspectivity world coordinate system translation vector Tpjk, appearance of the video camera under world coordinate system State matrix HpjkMethod be:
According to current focal distance FDjWith current focus FLkThe picpointed coordinate of lower i-th of mark pointAnd Lens distortion parameter obtains the picpointed coordinate (x ' after its correctionijk, y 'ijk):
Wherein if camera lens is non-fish eye lens, k3jk=0;
World coordinates (the x of i-th of mark pointi, yi, zi) picpointed coordinate (x ' after corresponding correctionijk, y 'ijk) it Between relation be expressed as:
Wherein λjkReferred to as scale factor,
Spin matrix R wherein between camera coordinates system and world coordinate systempjkFor orthogonal matrix, TpjkIt is saturating for video camera Depending on center world coordinate system translation vector:
Pass through the world coordinates (x of formula (3) (4) and each mark pointi, yi, zi) and corresponding correction after picpointed coordinate (x′ijk, y 'ijk), obtain matrix Ojkjk[Rpjk, Tpjk] and attitude information of the video camera under world coordinate systemWhereinFor make world coordinate system after translation successively The rotation Eulerian angles overlapped respectively after X-axis, Y-axis and Z axis rotation with camera coordinates system, For translation vector of the video camera centre of perspectivity under world coordinate system;
Because matrix RpjkIt is unit orthogonal matrix, has:
λjk=1/ | | Ojk(:, 1) | |;
Wherein Ojk(:, 1) and it is matrix OjkThe 1st row, | | | | represent vector Euclid norm;
According to the spin matrix R between the camera coordinates system of acquisition and world coordinate systempjkAnd the video camera centre of perspectivity In the translation vector T of world coordinate systempjk, attitude information of the video camera under world coordinate system is expressed as matrix Hpjk
S7. the current location information that video camera posture external trace device provides under the current state obtained using S5, and Attitude information of the video camera tried to achieve in S6 under world coordinate system, try to achieve video camera amendment demarcation information and the angle of visual field;
S7.1 calculates video camera attitude rectification matrix:
It is mainly to utilize the relative position relation of video camera and optictracking device to calculate video camera attitude rectification matrix Consistency, it is the term of validity of once mounting that relative position relation here is constant, reinstalls optictracking device, it is necessary to taking the photograph The positioning of shadow machine is re-scaled.Formula (6) can be utilized to calculate video camera attitude rectification matrix H when calculatingjk
Wherein
S7.2 is by video camera attitude rectification matrix HjkIt is transformed to correct quaternary number (θjk, nxjk, nyjk, nzjk) and translation vector Tjk
The calculating video camera attitude rectification matrix H obtained due to formula (6)jkIt is 4 × 4 matrix, it is not easy to store, and And be also not easy to carry out interpolation calculation in subsequent arithmetic, therefore, it is necessary to transform it into the form for being easy to storage and computing.
The video camera attitude rectification matrix HjkIt is 4 × 4 matrix, its live part is Hjk(1:3,1:4), i.e. Hjk's First three rows, by Hjk(1:3,1:4) it is expressed as form:
Hjk(1:3,1:4)=[Rjk, Tjk],
Wherein spin matrix RjkFor 3 × 3 real number matrix, translation vector Tjk=[txjk, tyjk, tzjk]TFor 3-dimensional column vector;
Due to spin matrix RjkCan mutually it be changed with quaternary number, by spin matrix RjkAmendment four is converted to according to following formula First number (θjk, nxjk, nyjk, nzjk):
HereFor 3-dimensional row vector.
Thus storage RjkOnly need to store (θjk, nxjk, nyjk, nzjk), therefore video camera attitude rectification matrix HjkConversion Information (θ is demarcated for video camera amendmentjk, nxjk, nyjk, nzjk, txjk, tyjk, tzjk);Wherein correct quaternary number and translation vector is collectively referred to as Information is demarcated for amendment.
S7.3 calculates camera coverage angle:When focal distance is FDj, focal length FLkWhen, camera coverage angle αjkUnder Face formula is calculated:
Wherein W, H are the width of video camera, high-resolution.
S8. focal distance and focal length are adjusted, to each focal distance sampled point FDjWith each focal length sampled point FLkPerform step Rapid S4~S7, until traveling through all focal distance sampled points and focal length sampled point, obtain each focal distance sampled point and focal length Amendment demarcation information and the angle of visual field corresponding to sampled point, establish looking into for focal distance-focal length-amendment demarcation information-angle of visual field Look for table LUT2;Form example is shown in Table 2;
Specifically, focal length priority principle can be taken, i.e., focal distance is first fixed, for each under the focal distance Focal length sampled point repeats S4~S7, then adjusts focal distance, repeats above-mentioned steps, until traversal travels through all focal distances Sampled point, step S4~S7 thus is performed to each focal distance sampled point and focal length sampled point, obtain amendment four corresponding to it First number (θ, nx, ny, nz), translation vector (tx, ty, tz) and angle of visual field α.
S9. export LUT and generate engine to image, be easy to image generation engine to be obtained according to focal distance and focal length corresponding The amendment demarcation information and angle of visual field.
2 focal lengths of table-focal distance-amendment demarcation information-angle of visual field look-up table LUT2 form
In the case of camera lens is zoom lens, image is generated engine and focused using look-up table LUT2 according to current When distance and focal length obtain corresponding amendment demarcation information and the angle of visual field, due to the focal distance sampling recorded in look-up table LUT2 Point and focal length sampled point are limited, and the focal distance not chosen every time, focal length are all precisely focal distance and Jiao of sample point Away from.For the focal distance at non-sampled point and focal length (such as when actual focal distance FD does not adopt in any one focal distance Sampling point FDjPlace, or real focal length FL is not in any one focal distance sampled point FLkPlace, or when actual focal distance FD not In any one focal distance sampled point FDjPlace and real focal length FL is not also in any one focal distance sampled point FLk Place), amendment demarcation information corresponding to its can be obtained using interpolation arithmetic by the way of, due to amendment quaternary number, translation vector with And its mathematical characteristic of the angle of visual field is different, therefore in order to reach more accurate interpolation arithmetic effect, different insert can be respectively adopted Value-based algorithm.Specifically, amendment quaternary number enters row interpolation, translation vector T using SLERP interpolation methodsjk=[txjk, tyjk, tzjk ]TUsing linear interpolation, the angle of visual field uses linear interpolation, or other forms to enter row interpolation.
Wherein, preferably, the method for entering row interpolation to amendment quaternary number using SLERP algorithms is as follows:
If 1. FDj< FD < FDj+1And FLk< FL < FLk+1, i.e., when actual focal distance FD is not in any one focusing From sampled point FDjPlace, and real focal length FL is not also in any one focal length sampled point FLkPlace, using with its closest to focusing Distance and focal length sampled point combination (FDj, FLk), (FDj, FLk+1), (FDj+1, FLk), (FDj+1, FLk+1) corresponding to amendment four First number qJ, k, qJ, k+1, qJ+1, k, qJ+1, k+1, enter row interpolation according to the following formula, then for currently practical focal distance FD and actual Jiao Away from for FL when amendment quaternary number qL, dCalculated by following formula:
Wherein:
If 2. FDj< FD < FDj+1, FL=FLkWhen, i.e., when actual focal distance FD does not sample in any one focal distance Point FDjPlace and real focal length FL is focal length sampled point FLkWhen, using with its closest to focal distance sampled point combine (FDj, FLk), (FDj+1, FLk) corresponding to amendment quaternary number information qJ, k, qJ+1, k, enter row interpolation according to the following formula, then for currently practical Focal distance FD and real focal length are FLkWhen amendment quaternary number qL, dCalculated by following formula
Wherein
If 3. FD=FDj, FLk< FL < FLk+1When, i.e., actual focal distance FD is focal distance sampled point FDjIt is and actual Focal length FL is not in any one focal length sampled point FLkDuring place, using with its closest to focal length sampled point combine (FDj, FLk), (FDj, FLk+1) corresponding to amendment quaternary number information qJ, k, qJ, k+1, enter row interpolation according to the following formula, then for currently practical focusing Distance FDjAnd amendment quaternary number q when real focal length is FLL, dCalculated by following formula:
Wherein
It is possible thereby in the case where focal distance and focal length are any value, corresponding amendment demarcation is obtained with interpolation by tabling look-up Information and the angle of visual field.
The foregoing is only a specific embodiment of the invention, but protection scope of the present invention is not limited thereto, any Be familiar with the people of the technology disclosed herein technical scope in, it will be appreciated that the conversion and replacement expected, should all cover at this Within the scope of invention, therefore, protection scope of the present invention should be defined by the protection domain of claims.

Claims (12)

1. a kind of camera position amendment scaling method, it is characterised in that comprise the following steps:
S1. N number of mark point A on background screen is obtained1..., ANCoordinate under world coordinate system:A1(x1, y1, z1) ..., AN (xN, yN, zN), wherein coordinate of i-th of mark point under world coordinate system is Ai(xi, yi, zi), i=1~N;N number of mark Note point comprises at least 3 not collinear mark points;Home position and the mark point of the mark point coordinates for mark point It is respectively positioned in the picture of video camera collection;
S2. the focal distance sampled point FD of numerical value from small to large is determined1, FD2..., FDj..., FDJ;Wherein j=1~J, FD1With FDJIt is the minimum focal distance of camera lens and maximum focal distance respectively;
If camera lens is zoom lens, also need to determine the focal length sampled point FL of numerical value from small to large1, FL2..., FLk..., FLK;Wherein k=1~K, FL1And FLKIt is the minimum focus and maximum focal length of camera lens respectively;
If S3. camera lens is tight shot, each focal distance sampled point FD is obtainedjJoin inside corresponding camera lens Number and lens distortion parameter;
If camera lens is zoom lens, each focal distance sampled point FD is obtainedjWith each focal length sampled point FLkIt is corresponding Camera lens inner parameter and lens distortion parameter;
The camera lens inner parameter and lens distortion parameter are obtained by camera lens calibration process;
If S4. camera lens is tight shot, when focal distance takes j-th of focal distance sampled point FDjWhen, adjust video camera Position enables the mark point blur-free imaging on background screen;
If camera lens is zoom lens, when focal distance takes j-th of focal distance sampled point FDjAnd focal length is in k-th of sampling Point FLkDuring place, adjustment camera position enables the mark point blur-free imaging on background screen;
S5. the current location information that video camera posture external trace device provides under current state is obtained, including to sit the world Mark system is self-defined with video camera posture external trace device after its X-axis, Y-axis and Z axis rotation respectively successively after translation The rotation Eulerian angles that overlap of coordinate system, and position coordinates of the video camera posture external trace device under world coordinate system;
S6. using the S1 N number of mark points obtained picpointed coordinate under video camera imaging coordinate system of world coordinates, mark point and The camera lens inner parameter and lens distortion parameter that S3 is obtained, obtain the rotation between camera coordinates system and world coordinate system Torque battle array and the video camera centre of perspectivity are in the translation vector of world coordinate system, i.e. posture letter of the video camera under world coordinate system Breath;
S7. the current location information that video camera posture external trace device provides under the current state obtained using S5, and in S6 Attitude information of the video camera tried to achieve under world coordinate system, try to achieve video camera amendment demarcation information and the angle of visual field;
S8. if camera lens is tight shot:Focal distance is adjusted, to each focal distance sampled point FDjExecution step S4~ S7, until traveling through all focal distance sampled points, obtain the demarcation information of the amendment corresponding to each focal distance sampled point and regard Rink corner, establish the look-up table LUT1 of focal distance-amendment demarcation information-angle of visual field;
If camera lens is zoom lens:Focal distance and focal length are adjusted, to each focal distance sampled point FDjWith each Jiao Away from sampled point FLkStep S4~S7 is performed, until traveling through all focal distance sampled points and focal length sampled point, obtains each focusing Amendment demarcation information and the angle of visual field corresponding to distance sample and focal length sampled point, establish focal distance-focal length-amendment demarcation The look-up table LUT2 of information-angle of visual field;
S9. if camera lens is tight shot:Then according to the look-up table LUT1, corresponding amendment is obtained by focal distance Demarcate information and the angle of visual field;
If camera lens is zoom lens:Then according to the look-up table LUT2, obtained by focal distance and focal length corresponding Amendment demarcation information and the angle of visual field.
A kind of 2. camera position amendment scaling method according to claim 1, it is characterised in that in step S3,
(1) when camera lens is tight shot:
For j-th of focal distance sampled point FDj, corresponding camera lens inner parameter is:fxj, fyj, cxj, cyj, wherein:
fxj:Expression focal distance is FDjWhen, the distance f of the emergent pupil and camera lens optical axis intersection point to imaging surface of camera lensjWith video camera into The ratio between horizontal width dx as each unit of instrument, i.e.,
fyj:Expression focal distance is FDjWhen, the distance f of the emergent pupil and camera lens optical axis intersection point to imaging surface of camera lensjWith video camera into The ratio between vertical height dy as each unit of instrument, i.e.,
cxj:Expression focal distance is FDjWhen, the optical axis of camera lens and the intersection point of imaging surface, with the horizontal-shift at imaging surface center Pixel count;
cyj:Expression focal distance is FDjWhen, the optical axis of camera lens and the intersection point of imaging surface, with the vertical shift at imaging surface center Pixel count;
For j-th of focal distance sampled point FDj, corresponding camera lens distortions parameter is:k1j, k2j, k3j, p1jAnd p2j, its Middle k1j, k2j, k3jIt is FD for focal distancejWhen radial distortion parameter, wherein k3jFor selected parameter, when camera lens is fish eye lens When use k1j, k2j, k3jThese three radial distortion parameters, when camera lens is non-fish eye lens, k3j=0, i.e. other camera lenses only use k1j, k2jThe two radial distortion parameters;p1j, p2jIt is FD for focal distancejWhen tangential distortion parameter;
Also, obtain each focal distance FDjUnder internal reference matrix MjFor:
<mrow> <msub> <mi>M</mi> <mi>j</mi> </msub> <mo>=</mo> <mfenced open = "[" close = "]"> <mtable> <mtr> <mtd> <msub> <mi>f</mi> <mrow> <mi>x</mi> <mi>j</mi> </mrow> </msub> </mtd> <mtd> <mn>0</mn> </mtd> <mtd> <msub> <mi>c</mi> <mrow> <mi>x</mi> <mi>j</mi> </mrow> </msub> </mtd> </mtr> <mtr> <mtd> <mn>0</mn> </mtd> <mtd> <msub> <mi>f</mi> <mrow> <mi>y</mi> <mi>j</mi> </mrow> </msub> </mtd> <mtd> <msub> <mi>c</mi> <mrow> <mi>y</mi> <mi>j</mi> </mrow> </msub> </mtd> </mtr> <mtr> <mtd> <mn>0</mn> </mtd> <mtd> <mn>0</mn> </mtd> <mtd> <mn>1</mn> </mtd> </mtr> </mtable> </mfenced> <mo>;</mo> </mrow>
(2) when camera lens is zoom lens:
For j-th of focal distance sampled point FDjWith k-th of focal length sampled point FLk, corresponding camera lens inner parameter is: fxjk, fyjk, cxjk, cyjk, wherein:
fxjk:Expression focal distance is FDj, focal length FLkWhen, the distance f of the emergent pupil and camera lens optical axis intersection point to imaging surface of camera lensjk The ratio between with the horizontal width dx of each unit of imager, i.e.,
fyjk:Expression focal distance is FDj, focal length FLkWhen, the distance f of the emergent pupil and camera lens optical axis intersection point to imaging surface of camera lensjk The ratio between with the vertical height dy of each unit of imager, i.e.,
cxjk:Expression focal distance is FDj, focal length FLkWhen, the intersection point of optical axis and imaging surface is inclined with the level at imaging surface center The pixel count of shifting;
cyjk:Expression focal distance is FDj, focal length FLkWhen, the intersection point of optical axis and imaging surface is vertical with imaging surface center inclined The pixel count of shifting;
For j-th of focal distance sampled point FDjWith k-th of focal length sampled point FLk, corresponding camera lens distortions parameter is: k1jk, k2jk, k3jk, p1jkAnd p2jk, wherein k1jk, k2jkAnd k3jkIt is FD for focal distancejAnd focal length is FLkWhen radial distortion ginseng Number, wherein k3jkFor selected parameter, k is used when camera lens is fish eye lens1jk, k2jkAnd k3jkThese three radial distortion parameters;When When camera lens is non-fish eye lens, k3jk=0, i.e. other camera lenses only use k1jk, k2jkThe two radial distortion parameters;p1jk, p2jkFor Focal distance is FDjAnd focal length is FLkWhen tangential distortion parameter;
And obtain each focal distance FDjWith focal length FLkUnder internal reference matrix be Mjk
<mrow> <msub> <mi>M</mi> <mrow> <mi>j</mi> <mi>k</mi> </mrow> </msub> <mo>=</mo> <mfenced open = "[" close = "]"> <mtable> <mtr> <mtd> <msub> <mi>f</mi> <mrow> <mi>x</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> </mtd> <mtd> <mn>0</mn> </mtd> <mtd> <msub> <mi>c</mi> <mrow> <mi>x</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> </mtd> </mtr> <mtr> <mtd> <mn>0</mn> </mtd> <mtd> <msub> <mi>f</mi> <mrow> <mi>y</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> </mtd> <mtd> <msub> <mi>c</mi> <mrow> <mi>y</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> </mtd> </mtr> <mtr> <mtd> <mn>0</mn> </mtd> <mtd> <mn>0</mn> </mtd> <mtd> <mn>1</mn> </mtd> </mtr> </mtable> </mfenced> <mo>.</mo> </mrow>
A kind of 3. camera position amendment scaling method according to claim 2, it is characterised in that in step S5,
(1) when camera lens is tight shot:
The current location information that the video camera posture external trace device provides is [θxj, θyj, θzj, Txj, Tyj, Tzj];Wherein [θxj, θyj, θzj] for make world coordinate system after translation successively respectively around X-axis, Y-axis and Z axis rotation after with video camera posture The rotation Eulerian angles that the customized coordinate system of external trace device overlaps, [Txj, Tyj, Tzj] set for video camera posture external trace The standby position coordinates under world coordinate system, by the current location information that video camera posture external trace device provides be expressed as Lower matrix form:
<mrow> <msub> <mi>H</mi> <mrow> <mi>I</mi> <mi>j</mi> </mrow> </msub> <mo>=</mo> <mfenced open = "[" close = "]"> <mtable> <mtr> <mtd> <msub> <mi>R</mi> <mrow> <mi>I</mi> <mi>j</mi> </mrow> </msub> </mtd> <mtd> <mrow> <mo>-</mo> <msub> <mi>R</mi> <mrow> <mi>I</mi> <mi>j</mi> </mrow> </msub> <msub> <mi>T</mi> <mrow> <mi>I</mi> <mi>j</mi> </mrow> </msub> </mrow> </mtd> </mtr> <mtr> <mtd> <mn>0</mn> </mtd> <mtd> <mn>1</mn> </mtd> </mtr> </mtable> </mfenced> <mo>;</mo> </mrow>
Here
<mrow> <msub> <mi>R</mi> <mrow> <mi>I</mi> <mi>j</mi> </mrow> </msub> <mo>=</mo> <mfenced open = "[" close = "]"> <mtable> <mtr> <mtd> <mrow> <msub> <mi>cos&amp;theta;</mi> <mrow> <mi>z</mi> <mi>j</mi> </mrow> </msub> </mrow> </mtd> <mtd> <mrow> <msub> <mi>sin&amp;theta;</mi> <mrow> <mi>z</mi> <mi>j</mi> </mrow> </msub> </mrow> </mtd> <mtd> <mn>0</mn> </mtd> </mtr> <mtr> <mtd> <mrow> <mo>-</mo> <msub> <mi>sin&amp;theta;</mi> <mrow> <mi>z</mi> <mi>j</mi> </mrow> </msub> </mrow> </mtd> <mtd> <mrow> <msub> <mi>cos&amp;theta;</mi> <mrow> <mi>z</mi> <mi>j</mi> </mrow> </msub> </mrow> </mtd> <mtd> <mn>0</mn> </mtd> </mtr> <mtr> <mtd> <mn>0</mn> </mtd> <mtd> <mn>0</mn> </mtd> <mtd> <mn>1</mn> </mtd> </mtr> </mtable> </mfenced> <mfenced open = "[" close = "]"> <mtable> <mtr> <mtd> <mrow> <msub> <mi>cos&amp;theta;</mi> <mrow> <mi>y</mi> <mi>j</mi> </mrow> </msub> </mrow> </mtd> <mtd> <mn>0</mn> </mtd> <mtd> <mrow> <mo>-</mo> <msub> <mi>sin&amp;theta;</mi> <mrow> <mi>y</mi> <mi>j</mi> </mrow> </msub> </mrow> </mtd> </mtr> <mtr> <mtd> <mn>0</mn> </mtd> <mtd> <mn>1</mn> </mtd> <mtd> <mn>0</mn> </mtd> </mtr> <mtr> <mtd> <mrow> <msub> <mi>sin&amp;theta;</mi> <mrow> <mi>y</mi> <mi>j</mi> </mrow> </msub> </mrow> </mtd> <mtd> <mn>0</mn> </mtd> <mtd> <mrow> <msub> <mi>cos&amp;theta;</mi> <mrow> <mi>y</mi> <mi>j</mi> </mrow> </msub> </mrow> </mtd> </mtr> </mtable> </mfenced> <mfenced open = "[" close = "]"> <mtable> <mtr> <mtd> <mn>1</mn> </mtd> <mtd> <mn>0</mn> </mtd> <mtd> <mn>0</mn> </mtd> </mtr> <mtr> <mtd> <mn>0</mn> </mtd> <mtd> <mrow> <msub> <mi>cos&amp;theta;</mi> <mrow> <mi>x</mi> <mi>j</mi> </mrow> </msub> </mrow> </mtd> <mtd> <mrow> <msub> <mi>sin&amp;theta;</mi> <mrow> <mi>x</mi> <mi>j</mi> </mrow> </msub> </mrow> </mtd> </mtr> <mtr> <mtd> <mn>0</mn> </mtd> <mtd> <mrow> <mo>-</mo> <msub> <mi>sin&amp;theta;</mi> <mrow> <mi>x</mi> <mi>j</mi> </mrow> </msub> </mrow> </mtd> <mtd> <mrow> <msub> <mi>cos&amp;theta;</mi> <mrow> <mi>x</mi> <mi>j</mi> </mrow> </msub> </mrow> </mtd> </mtr> </mtable> </mfenced> </mrow>
TIj=[Txj Tyj Tzj]T
0=[0,0,0];
(2) when camera lens is zoom lens:
Current location information [the θ that the video camera posture external trace device providesxjk, θyjk, θzjk, Txjk, Tyjk, Tzjk];Its In [θxjk, θyjk, θzjk] for make world coordinate system after translation successively respectively around X-axis, Y-axis and Z axis rotation after with video camera The rotation Eulerian angles that the customized coordinate system of posture external trace device overlaps, [Txjk, Tyjk, Tzjk] for outside video camera posture Position coordinates of the tracking equipment under world coordinate system;The current location information table that video camera posture external trace device is provided It is shown as following matrix form:
<mrow> <msub> <mi>H</mi> <mrow> <mi>I</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> <mo>=</mo> <mfenced open = "[" close = "]"> <mtable> <mtr> <mtd> <msub> <mi>R</mi> <mrow> <mi>I</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> </mtd> <mtd> <mrow> <mo>-</mo> <msub> <mi>R</mi> <mrow> <mi>I</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> <msub> <mi>T</mi> <mrow> <mi>I</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> </mrow> </mtd> </mtr> <mtr> <mtd> <mn>0</mn> </mtd> <mtd> <mn>1</mn> </mtd> </mtr> </mtable> </mfenced> <mo>;</mo> </mrow>
Here
<mfenced open = "" close = ""> <mtable> <mtr> <mtd> <mrow> <msub> <mi>R</mi> <mrow> <mi>I</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> <mo>=</mo> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <mfenced open = "[" close = "]"> <mtable> <mtr> <mtd> <mrow> <msub> <mi>cos&amp;theta;</mi> <mrow> <mi>z</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> </mrow> </mtd> <mtd> <mrow> <msub> <mi>sin&amp;theta;</mi> <mrow> <mi>z</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> </mrow> </mtd> <mtd> <mn>0</mn> </mtd> </mtr> <mtr> <mtd> <mrow> <mo>-</mo> <msub> <mi>sin&amp;theta;</mi> <mrow> <mi>z</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> </mrow> </mtd> <mtd> <mrow> <msub> <mi>cos&amp;theta;</mi> <mrow> <mi>z</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> </mrow> </mtd> <mtd> <mn>0</mn> </mtd> </mtr> <mtr> <mtd> <mn>0</mn> </mtd> <mtd> <mn>0</mn> </mtd> <mtd> <mn>1</mn> </mtd> </mtr> </mtable> </mfenced> <mfenced open = "[" close = "]"> <mtable> <mtr> <mtd> <mrow> <msub> <mi>cos&amp;theta;</mi> <mrow> <mi>y</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> </mrow> </mtd> <mtd> <mn>0</mn> </mtd> <mtd> <mrow> <mo>-</mo> <msub> <mi>sin&amp;theta;</mi> <mrow> <mi>y</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> </mrow> </mtd> </mtr> <mtr> <mtd> <mn>0</mn> </mtd> <mtd> <mn>1</mn> </mtd> <mtd> <mn>0</mn> </mtd> </mtr> <mtr> <mtd> <mrow> <msub> <mi>sin&amp;theta;</mi> <mrow> <mi>y</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> </mrow> </mtd> <mtd> <mn>0</mn> </mtd> <mtd> <mrow> <msub> <mi>cos&amp;theta;</mi> <mrow> <mi>y</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> </mrow> </mtd> </mtr> </mtable> </mfenced> <mfenced open = "[" close = "]"> <mtable> <mtr> <mtd> <mn>1</mn> </mtd> <mtd> <mn>0</mn> </mtd> <mtd> <mn>0</mn> </mtd> </mtr> <mtr> <mtd> <mn>0</mn> </mtd> <mtd> <mrow> <msub> <mi>cos&amp;theta;</mi> <mrow> <mi>x</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> </mrow> </mtd> <mtd> <mrow> <msub> <mi>sin&amp;theta;</mi> <mrow> <mi>x</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> </mrow> </mtd> </mtr> <mtr> <mtd> <mn>0</mn> </mtd> <mtd> <mrow> <mo>-</mo> <msub> <mi>sin&amp;theta;</mi> <mrow> <mi>x</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> </mrow> </mtd> <mtd> <mrow> <msub> <mi>cos&amp;theta;</mi> <mrow> <mi>x</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> </mrow> </mtd> </mtr> </mtable> </mfenced> <mo>;</mo> </mrow> </mtd> </mtr> </mtable> </mfenced>
TIjk=[Txjk Tyjk Tzjk]T
0=[0,0,0].
A kind of 4. camera position amendment scaling method according to claim 3, it is characterised in that in step S6,
(1) when camera lens is tight shot:
Current focal distance is FDjWhen, obtain the spin matrix R between camera coordinates system and world coordinate systempj, video camera it is saturating Depending on center world coordinate system translation vector Tpj, attitude matrix H of the video camera under world coordinate systempjMethod be:
According to the picpointed coordinate of i-th of mark pointAnd current focal distance FDjUnder lens distortion parameter, obtain Picpointed coordinate (the x ' of i-th of mark point after correctionij, y 'ij):
<mrow> <msubsup> <mi>x</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> <mo>&amp;prime;</mo> </msubsup> <mo>=</mo> <msubsup> <mi>x</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> <mi>c</mi> </msubsup> <mo>+</mo> <msubsup> <mi>x</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> <mi>c</mi> </msubsup> <mrow> <mo>(</mo> <mn>1</mn> <mo>+</mo> <msub> <mi>k</mi> <mrow> <mn>1</mn> <mi>j</mi> </mrow> </msub> <msup> <msub> <mi>r</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> </msub> <mn>2</mn> </msup> <mo>+</mo> <msub> <mi>k</mi> <mrow> <mn>2</mn> <mi>j</mi> </mrow> </msub> <msup> <msub> <mi>r</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> </msub> <mn>4</mn> </msup> <mo>+</mo> <msub> <mi>k</mi> <mrow> <mn>3</mn> <mi>j</mi> </mrow> </msub> <msup> <msub> <mi>r</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> </msub> <mn>6</mn> </msup> <mo>)</mo> </mrow> <mo>+</mo> <mn>2</mn> <msub> <mi>p</mi> <mrow> <mn>1</mn> <mi>j</mi> </mrow> </msub> <msubsup> <mi>y</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> <mi>c</mi> </msubsup> <mo>+</mo> <msub> <mi>p</mi> <mrow> <mn>2</mn> <mi>j</mi> </mrow> </msub> <mrow> <mo>(</mo> <msup> <msub> <mi>r</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> </msub> <mn>2</mn> </msup> <mo>+</mo> <mn>2</mn> <msup> <mrow> <mo>(</mo> <msubsup> <mi>x</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> <mi>c</mi> </msubsup> <mo>)</mo> </mrow> <mn>2</mn> </msup> <mo>)</mo> </mrow> </mrow>
<mrow> <msubsup> <mi>y</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> <mo>&amp;prime;</mo> </msubsup> <mo>=</mo> <msubsup> <mi>y</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> <mi>c</mi> </msubsup> <mo>+</mo> <msubsup> <mi>y</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> <mi>c</mi> </msubsup> <mrow> <mo>(</mo> <mn>1</mn> <mo>+</mo> <msub> <mi>k</mi> <mrow> <mn>1</mn> <mi>j</mi> </mrow> </msub> <msup> <msub> <mi>r</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> </msub> <mn>2</mn> </msup> <mo>+</mo> <msub> <mi>k</mi> <mrow> <mn>2</mn> <mi>j</mi> </mrow> </msub> <msup> <msub> <mi>r</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> </msub> <mn>4</mn> </msup> <mo>+</mo> <msub> <mi>k</mi> <mrow> <mn>3</mn> <mi>j</mi> </mrow> </msub> <msup> <msub> <mi>r</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> </msub> <mn>6</mn> </msup> <mo>)</mo> </mrow> <mo>+</mo> <msub> <mi>p</mi> <mrow> <mn>1</mn> <mi>j</mi> </mrow> </msub> <mrow> <mo>(</mo> <msup> <msub> <mi>r</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> </msub> <mn>2</mn> </msup> <mo>+</mo> <mn>2</mn> <msup> <mrow> <mo>(</mo> <msubsup> <mi>y</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> <mi>c</mi> </msubsup> <mo>)</mo> </mrow> <mn>2</mn> </msup> <mo>)</mo> </mrow> <mo>+</mo> <mn>2</mn> <msub> <mi>p</mi> <mrow> <mn>2</mn> <mi>j</mi> </mrow> </msub> <msubsup> <mi>x</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> <mi>c</mi> </msubsup> </mrow>
<mrow> <msub> <mi>r</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> </msub> <mo>=</mo> <msqrt> <mrow> <msup> <mrow> <mo>(</mo> <msubsup> <mi>x</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> <mi>c</mi> </msubsup> <mo>)</mo> </mrow> <mn>2</mn> </msup> <mo>+</mo> <msup> <mrow> <mo>(</mo> <msubsup> <mi>y</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> <mi>c</mi> </msubsup> <mo>)</mo> </mrow> <mn>2</mn> </msup> </mrow> </msqrt> </mrow>
Wherein if camera lens is non-fish eye lens, k3j=0;
World coordinates (the x of i-th of mark pointi, yi, zi) picpointed coordinate (x ' after corresponding correctionij, y 'ij) between relation It is expressed as:
<mrow> <mfenced open = "[" close = "]"> <mtable> <mtr> <mtd> <msubsup> <mi>x</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> <mo>&amp;prime;</mo> </msubsup> </mtd> </mtr> <mtr> <mtd> <msubsup> <mi>y</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> <mo>&amp;prime;</mo> </msubsup> </mtd> </mtr> <mtr> <mtd> <mn>1</mn> </mtd> </mtr> </mtable> </mfenced> <mo>=</mo> <msub> <mi>&amp;lambda;</mi> <mi>j</mi> </msub> <msub> <mi>M</mi> <mi>j</mi> </msub> <mo>&amp;lsqb;</mo> <msub> <mi>R</mi> <mrow> <mi>p</mi> <mi>j</mi> </mrow> </msub> <mo>,</mo> <msub> <mi>T</mi> <mrow> <mi>p</mi> <mi>j</mi> </mrow> </msub> <mo>&amp;rsqb;</mo> <mfenced open = "[" close = "]"> <mtable> <mtr> <mtd> <msub> <mi>x</mi> <mi>i</mi> </msub> </mtd> </mtr> <mtr> <mtd> <msub> <mi>y</mi> <mi>i</mi> </msub> </mtd> </mtr> <mtr> <mtd> <msub> <mi>z</mi> <mi>i</mi> </msub> </mtd> </mtr> <mtr> <mtd> <mn>1</mn> </mtd> </mtr> </mtable> </mfenced> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>1</mn> <mo>)</mo> </mrow> </mrow>
Wherein λjReferred to as scale factor,
<mrow> <mtable> <mtr> <mtd> <mrow> <msub> <mi>R</mi> <mrow> <mi>p</mi> <mi>j</mi> </mrow> </msub> <mo>=</mo> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <mfenced open = "[" close = "]"> <mtable> <mtr> <mtd> <mrow> <msubsup> <mi>cos&amp;theta;</mi> <mrow> <mi>z</mi> <mi>j</mi> </mrow> <mi>c</mi> </msubsup> </mrow> </mtd> <mtd> <mrow> <msubsup> <mi>sin&amp;theta;</mi> <mrow> <mi>z</mi> <mi>j</mi> </mrow> <mi>c</mi> </msubsup> </mrow> </mtd> <mtd> <mn>0</mn> </mtd> </mtr> <mtr> <mtd> <mrow> <mo>-</mo> <msubsup> <mi>sin&amp;theta;</mi> <mrow> <mi>z</mi> <mi>j</mi> </mrow> <mi>c</mi> </msubsup> </mrow> </mtd> <mtd> <mrow> <msubsup> <mi>cos&amp;theta;</mi> <mrow> <mi>z</mi> <mi>j</mi> </mrow> <mi>c</mi> </msubsup> </mrow> </mtd> <mtd> <mn>0</mn> </mtd> </mtr> <mtr> <mtd> <mn>0</mn> </mtd> <mtd> <mn>0</mn> </mtd> <mtd> <mn>1</mn> </mtd> </mtr> </mtable> </mfenced> <mfenced open = "[" close = "]"> <mtable> <mtr> <mtd> <mrow> <msubsup> <mi>cos&amp;theta;</mi> <mrow> <mi>y</mi> <mi>j</mi> </mrow> <mi>c</mi> </msubsup> </mrow> </mtd> <mtd> <mn>0</mn> </mtd> <mtd> <mrow> <mo>-</mo> <msubsup> <mi>sin&amp;theta;</mi> <mrow> <mi>y</mi> <mi>j</mi> </mrow> <mi>c</mi> </msubsup> </mrow> </mtd> </mtr> <mtr> <mtd> <mn>0</mn> </mtd> <mtd> <mn>1</mn> </mtd> <mtd> <mn>0</mn> </mtd> </mtr> <mtr> <mtd> <mrow> <msubsup> <mi>sin&amp;theta;</mi> <mrow> <mi>y</mi> <mi>j</mi> </mrow> <mi>c</mi> </msubsup> </mrow> </mtd> <mtd> <mn>0</mn> </mtd> <mtd> <mrow> <msubsup> <mi>cos&amp;theta;</mi> <mrow> <mi>y</mi> <mi>j</mi> </mrow> <mi>c</mi> </msubsup> </mrow> </mtd> </mtr> </mtable> </mfenced> <mfenced open = "[" close = "]"> <mtable> <mtr> <mtd> <mn>1</mn> </mtd> <mtd> <mn>0</mn> </mtd> <mtd> <mn>0</mn> </mtd> </mtr> <mtr> <mtd> <mn>0</mn> </mtd> <mtd> <mrow> <msubsup> <mi>cos&amp;theta;</mi> <mrow> <mi>x</mi> <mi>j</mi> </mrow> <mi>c</mi> </msubsup> </mrow> </mtd> <mtd> <mrow> <msubsup> <mi>sin&amp;theta;</mi> <mrow> <mi>x</mi> <mi>j</mi> </mrow> <mi>c</mi> </msubsup> </mrow> </mtd> </mtr> <mtr> <mtd> <mn>0</mn> </mtd> <mtd> <mrow> <mo>-</mo> <msubsup> <mi>sin&amp;theta;</mi> <mrow> <mi>x</mi> <mi>j</mi> </mrow> <mi>c</mi> </msubsup> </mrow> </mtd> <mtd> <mrow> <msubsup> <mi>cos&amp;theta;</mi> <mrow> <mi>x</mi> <mi>j</mi> </mrow> <mi>c</mi> </msubsup> </mrow> </mtd> </mtr> </mtable> </mfenced> </mrow> </mtd> </mtr> </mtable> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>2</mn> <mo>)</mo> </mrow> </mrow>
Spin matrix R wherein between camera coordinates system and world coordinate systempjFor orthogonal matrix, TpjFor the video camera centre of perspectivity In the translation vector of world coordinate system:
<mrow> <msub> <mi>T</mi> <mrow> <mi>p</mi> <mi>j</mi> </mrow> </msub> <mo>=</mo> <msup> <mfenced open = "[" close = "]"> <mtable> <mtr> <mtd> <msubsup> <mi>T</mi> <mrow> <mi>x</mi> <mi>j</mi> </mrow> <mi>c</mi> </msubsup> </mtd> <mtd> <msubsup> <mi>T</mi> <mrow> <mi>y</mi> <mi>j</mi> </mrow> <mi>c</mi> </msubsup> </mtd> <mtd> <msubsup> <mi>T</mi> <mrow> <mi>z</mi> <mi>j</mi> </mrow> <mi>c</mi> </msubsup> </mtd> </mtr> </mtable> </mfenced> <mi>T</mi> </msup> <mo>;</mo> </mrow>
Pass through the world coordinates (x of formula (1) (2) and each mark pointi, yi, zi) and corresponding correction after picpointed coordinate (x 'ij, y 'ij), Obtain matrix Ojj[Rpj, Tpj] and attitude information of the video camera under world coordinate system WhereinFor make world coordinate system after translation successively respectively around X-axis, Y-axis and Z axis rotation after with photography The rotation Eulerian angles that machine coordinate system overlaps,For translation of the video camera centre of perspectivity under world coordinate system Vector;
Again because matrix RpjIt is unit orthogonal matrix, has:
λj=1/ | | Oj(:, 1) | |;
Wherein Oj(:, 1) and it is matrix OjThe 1st row, | | | | represent vector Euclid norm;
According to the spin matrix R between the camera coordinates system of acquisition and world coordinate systempjAnd the video camera centre of perspectivity is alive The translation vector T of boundary's coordinate systempj, attitude information of the video camera under world coordinate system is expressed as matrix Hpj
<mrow> <msub> <mi>H</mi> <mrow> <mi>p</mi> <mi>j</mi> </mrow> </msub> <mo>=</mo> <mfenced open = "[" close = "]"> <mtable> <mtr> <mtd> <msub> <mi>R</mi> <mrow> <mi>p</mi> <mi>j</mi> </mrow> </msub> </mtd> <mtd> <msub> <mi>T</mi> <mrow> <mi>p</mi> <mi>j</mi> </mrow> </msub> </mtd> </mtr> <mtr> <mtd> <mn>0</mn> </mtd> <mtd> <mn>1</mn> </mtd> </mtr> </mtable> </mfenced> <mo>;</mo> </mrow>
(2) when camera lens is zoom lens:
Current focal distance is FDjAnd current focus is FLkWhen, obtain the rotation between camera coordinates system and world coordinate system Matrix Rpjk, the video camera centre of perspectivity world coordinate system translation vector Tpjk, posture square of the video camera under world coordinate system Battle array HpjkMethod be:
According to current focal distance FDjWith current focus FLkThe picpointed coordinate of lower i-th of mark point And camera lens is abnormal Variable element obtains the picpointed coordinate (x ' after its correctionijk, y 'ijk):
<mfenced open = "" close = ""> <mtable> <mtr> <mtd> <mrow> <msubsup> <mi>x</mi> <mrow> <mi>i</mi> <mi>j</mi> <mi>k</mi> </mrow> <mo>&amp;prime;</mo> </msubsup> <mo>=</mo> <msubsup> <mi>x</mi> <mrow> <mi>i</mi> <mi>j</mi> <mi>k</mi> </mrow> <mi>c</mi> </msubsup> <mo>+</mo> <msubsup> <mi>x</mi> <mrow> <mi>i</mi> <mi>j</mi> <mi>k</mi> </mrow> <mi>c</mi> </msubsup> <mrow> <mo>(</mo> <mn>1</mn> <mo>+</mo> <msub> <mi>k</mi> <mrow> <mn>1</mn> <mi>j</mi> <mi>k</mi> </mrow> </msub> <msup> <msub> <mi>r</mi> <mrow> <mi>i</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> <mn>2</mn> </msup> <mo>+</mo> <msub> <mi>k</mi> <mrow> <mn>2</mn> <mi>j</mi> <mi>k</mi> </mrow> </msub> <msup> <msub> <mi>r</mi> <mrow> <mi>i</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> <mn>4</mn> </msup> <mo>+</mo> <msub> <mi>k</mi> <mrow> <mn>3</mn> <mi>j</mi> <mi>k</mi> </mrow> </msub> <msup> <msub> <mi>r</mi> <mrow> <mi>i</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> <mn>6</mn> </msup> <mo>)</mo> </mrow> <mo>+</mo> <mn>2</mn> <msub> <mi>p</mi> <mrow> <mn>1</mn> <mi>j</mi> <mi>k</mi> </mrow> </msub> <msubsup> <mi>y</mi> <mrow> <mi>i</mi> <mi>j</mi> <mi>k</mi> </mrow> <mi>c</mi> </msubsup> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <mo>+</mo> <msub> <mi>p</mi> <mrow> <mn>2</mn> <mi>j</mi> <mi>k</mi> </mrow> </msub> <mrow> <mo>(</mo> <msup> <msub> <mi>r</mi> <mrow> <mi>i</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> <mn>2</mn> </msup> <mo>+</mo> <mn>2</mn> <msup> <mrow> <mo>(</mo> <msubsup> <mi>x</mi> <mrow> <mi>i</mi> <mi>j</mi> <mi>k</mi> </mrow> <mi>c</mi> </msubsup> <mo>)</mo> </mrow> <mn>2</mn> </msup> <mo>)</mo> </mrow> </mrow> </mtd> </mtr> </mtable> </mfenced>
<mfenced open = "" close = ""> <mtable> <mtr> <mtd> <mrow> <msubsup> <mi>y</mi> <mrow> <mi>i</mi> <mi>j</mi> <mi>k</mi> </mrow> <mo>&amp;prime;</mo> </msubsup> <mo>=</mo> <msubsup> <mi>y</mi> <mrow> <mi>i</mi> <mi>j</mi> <mi>k</mi> </mrow> <mi>c</mi> </msubsup> <mo>+</mo> <msubsup> <mi>y</mi> <mrow> <mi>i</mi> <mi>j</mi> <mi>k</mi> </mrow> <mi>c</mi> </msubsup> <mrow> <mo>(</mo> <mn>1</mn> <mo>+</mo> <msub> <mi>k</mi> <mrow> <mn>1</mn> <mi>j</mi> <mi>k</mi> </mrow> </msub> <msup> <msub> <mi>r</mi> <mrow> <mi>i</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> <mn>2</mn> </msup> <mo>+</mo> <msub> <mi>k</mi> <mrow> <mn>2</mn> <mi>j</mi> <mi>k</mi> </mrow> </msub> <msup> <msub> <mi>r</mi> <mrow> <mi>i</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> <mn>4</mn> </msup> <mo>+</mo> <msub> <mi>k</mi> <mrow> <mn>3</mn> <mi>j</mi> <mi>k</mi> </mrow> </msub> <msup> <msub> <mi>r</mi> <mrow> <mi>i</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> <mn>6</mn> </msup> <mo>)</mo> </mrow> <mo>+</mo> <msub> <mi>p</mi> <mrow> <mn>1</mn> <mi>j</mi> <mi>k</mi> </mrow> </msub> <mrow> <mo>(</mo> <msup> <msub> <mi>y</mi> <mrow> <mi>i</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> <mn>2</mn> </msup> <mo>+</mo> <mn>2</mn> <msup> <mrow> <mo>(</mo> <msubsup> <mi>y</mi> <mrow> <mi>i</mi> <mi>j</mi> <mi>k</mi> </mrow> <mi>c</mi> </msubsup> <mo>)</mo> </mrow> <mn>2</mn> </msup> <mo>)</mo> </mrow> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <mo>+</mo> <mn>2</mn> <msub> <mi>p</mi> <mrow> <mn>2</mn> <mi>j</mi> <mi>k</mi> </mrow> </msub> <msubsup> <mi>x</mi> <mrow> <mi>i</mi> <mi>j</mi> <mi>k</mi> </mrow> <mi>c</mi> </msubsup> </mrow> </mtd> </mtr> </mtable> </mfenced>
<mrow> <msub> <mi>r</mi> <mrow> <mi>i</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> <mo>=</mo> <msqrt> <mrow> <msup> <mrow> <mo>(</mo> <msubsup> <mi>x</mi> <mrow> <mi>i</mi> <mi>j</mi> <mi>k</mi> </mrow> <mi>c</mi> </msubsup> <mo>)</mo> </mrow> <mn>2</mn> </msup> <mo>+</mo> <msup> <mrow> <mo>(</mo> <msubsup> <mi>y</mi> <mrow> <mi>i</mi> <mi>j</mi> <mi>k</mi> </mrow> <mi>c</mi> </msubsup> <mo>)</mo> </mrow> <mn>2</mn> </msup> </mrow> </msqrt> </mrow>
Wherein if camera lens is non-fish eye lens, k3jk=0;
World coordinates (the x of i-th of mark pointi, yi, zi) picpointed coordinate (x ' after corresponding correctionijk, y 'ijk) between close System is expressed as:
<mrow> <mfenced open = "[" close = "]"> <mtable> <mtr> <mtd> <msubsup> <mi>x</mi> <mrow> <mi>i</mi> <mi>j</mi> <mi>k</mi> </mrow> <mo>&amp;prime;</mo> </msubsup> </mtd> </mtr> <mtr> <mtd> <msubsup> <mi>y</mi> <mrow> <mi>i</mi> <mi>j</mi> <mi>k</mi> </mrow> <mo>&amp;prime;</mo> </msubsup> </mtd> </mtr> <mtr> <mtd> <mn>1</mn> </mtd> </mtr> </mtable> </mfenced> <mo>=</mo> <msub> <mi>&amp;lambda;</mi> <mrow> <mi>j</mi> <mi>k</mi> </mrow> </msub> <msub> <mi>M</mi> <mrow> <mi>j</mi> <mi>k</mi> </mrow> </msub> <mo>&amp;lsqb;</mo> <msub> <mi>R</mi> <mrow> <mi>p</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> <mo>,</mo> <msub> <mi>T</mi> <mrow> <mi>p</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> <mo>&amp;rsqb;</mo> <mfenced open = "[" close = "]"> <mtable> <mtr> <mtd> <msub> <mi>x</mi> <mi>i</mi> </msub> </mtd> </mtr> <mtr> <mtd> <msub> <mi>y</mi> <mi>i</mi> </msub> </mtd> </mtr> <mtr> <mtd> <msub> <mi>z</mi> <mi>i</mi> </msub> </mtd> </mtr> <mtr> <mtd> <mn>1</mn> </mtd> </mtr> </mtable> </mfenced> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>3</mn> <mo>)</mo> </mrow> </mrow>
Wherein λjkReferred to as scale factor,
<mrow> <mtable> <mtr> <mtd> <mrow> <msub> <mi>R</mi> <mrow> <mi>p</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> <mo>=</mo> </mrow> </mtd> </mtr> <mtr> <mtd> <mrow> <mfenced open = "[" close = "]"> <mtable> <mtr> <mtd> <mrow> <msubsup> <mi>cos&amp;theta;</mi> <mrow> <mi>z</mi> <mi>j</mi> <mi>k</mi> </mrow> <mi>c</mi> </msubsup> </mrow> </mtd> <mtd> <mrow> <msubsup> <mi>sin&amp;theta;</mi> <mrow> <mi>z</mi> <mi>j</mi> <mi>k</mi> </mrow> <mi>c</mi> </msubsup> </mrow> </mtd> <mtd> <mn>0</mn> </mtd> </mtr> <mtr> <mtd> <mrow> <mo>-</mo> <msubsup> <mi>sin&amp;theta;</mi> <mrow> <mi>z</mi> <mi>j</mi> <mi>k</mi> </mrow> <mi>c</mi> </msubsup> </mrow> </mtd> <mtd> <mrow> <msubsup> <mi>cos&amp;theta;</mi> <mrow> <mi>z</mi> <mi>j</mi> <mi>k</mi> </mrow> <mi>c</mi> </msubsup> </mrow> </mtd> <mtd> <mn>0</mn> </mtd> </mtr> <mtr> <mtd> <mn>0</mn> </mtd> <mtd> <mn>0</mn> </mtd> <mtd> <mn>1</mn> </mtd> </mtr> </mtable> </mfenced> <mfenced open = "[" close = "]"> <mtable> <mtr> <mtd> <mrow> <msubsup> <mi>cos&amp;theta;</mi> <mrow> <mi>y</mi> <mi>j</mi> <mi>k</mi> </mrow> <mi>c</mi> </msubsup> </mrow> </mtd> <mtd> <mn>0</mn> </mtd> <mtd> <mrow> <mo>-</mo> <msubsup> <mi>sin&amp;theta;</mi> <mrow> <mi>y</mi> <mi>j</mi> <mi>k</mi> </mrow> <mi>c</mi> </msubsup> </mrow> </mtd> </mtr> <mtr> <mtd> <mn>0</mn> </mtd> <mtd> <mn>1</mn> </mtd> <mtd> <mn>0</mn> </mtd> </mtr> <mtr> <mtd> <mrow> <msubsup> <mi>sin&amp;theta;</mi> <mrow> <mi>y</mi> <mi>j</mi> <mi>k</mi> </mrow> <mi>c</mi> </msubsup> </mrow> </mtd> <mtd> <mn>0</mn> </mtd> <mtd> <mrow> <msubsup> <mi>cos&amp;theta;</mi> <mrow> <mi>y</mi> <mi>j</mi> <mi>k</mi> </mrow> <mi>c</mi> </msubsup> </mrow> </mtd> </mtr> </mtable> </mfenced> <mfenced open = "[" close = "]"> <mtable> <mtr> <mtd> <mn>1</mn> </mtd> <mtd> <mn>0</mn> </mtd> <mtd> <mn>0</mn> </mtd> </mtr> <mtr> <mtd> <mn>0</mn> </mtd> <mtd> <mrow> <msubsup> <mi>cos&amp;theta;</mi> <mrow> <mi>x</mi> <mi>j</mi> <mi>k</mi> </mrow> <mi>c</mi> </msubsup> </mrow> </mtd> <mtd> <mrow> <msubsup> <mi>sin&amp;theta;</mi> <mrow> <mi>x</mi> <mi>j</mi> <mi>k</mi> </mrow> <mi>c</mi> </msubsup> </mrow> </mtd> </mtr> <mtr> <mtd> <mn>0</mn> </mtd> <mtd> <mrow> <mo>-</mo> <msubsup> <mi>sin&amp;theta;</mi> <mrow> <mi>x</mi> <mi>j</mi> <mi>k</mi> </mrow> <mi>c</mi> </msubsup> </mrow> </mtd> <mtd> <mrow> <msubsup> <mi>cos&amp;theta;</mi> <mrow> <mi>x</mi> <mi>j</mi> <mi>k</mi> </mrow> <mi>c</mi> </msubsup> </mrow> </mtd> </mtr> </mtable> </mfenced> </mrow> </mtd> </mtr> </mtable> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>4</mn> <mo>)</mo> </mrow> </mrow>
Spin matrix R wherein between camera coordinates system and world coordinate systempjkFor orthogonal matrix, TpjkIn being had an X-rayed for video camera Translation vector of the heart in world coordinate system:
<mrow> <msub> <mi>T</mi> <mrow> <mi>p</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> <mo>=</mo> <msup> <mfenced open = "[" close = "]"> <mtable> <mtr> <mtd> <msubsup> <mi>T</mi> <mrow> <mi>x</mi> <mi>j</mi> <mi>k</mi> </mrow> <mi>c</mi> </msubsup> </mtd> <mtd> <msubsup> <mi>T</mi> <mrow> <mi>y</mi> <mi>j</mi> <mi>k</mi> </mrow> <mi>c</mi> </msubsup> </mtd> <mtd> <msubsup> <mi>T</mi> <mrow> <mi>z</mi> <mi>j</mi> <mi>k</mi> </mrow> <mi>c</mi> </msubsup> </mtd> </mtr> </mtable> </mfenced> <mi>T</mi> </msup> </mrow>
Pass through the world coordinates (x of formula (3) (4) and each mark pointi, yi, zi) and corresponding correction after picpointed coordinate (x 'ijk, y 'ijk), Obtain matrix Ojkjk[Rpjk, Tpjk] and attitude information of the video camera under world coordinate system WhereinTo make world coordinate system be surrounded respectively successively after translation after X-axis, Y-axis and Z axis rotate with taking the photograph The rotation Eulerian angles that shadow machine coordinate system overlaps,It is the video camera centre of perspectivity under world coordinate system Translation vector;
Because matrix RpjkIt is unit orthogonal matrix, has:
λjk=1/ | | Ojk(:, 1) | |;
Wherein Ojk(:, 1) and it is matrix OjkThe 1st row, | | | | represent vector Euclid norm;
According to the spin matrix R between the camera coordinates system of acquisition and world coordinate systempjkAnd the video camera centre of perspectivity is alive The translation vector T of boundary's coordinate systempjk, attitude information of the video camera under world coordinate system is expressed as matrix Hpjk
<mrow> <msub> <mi>H</mi> <mrow> <mi>p</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> <mo>=</mo> <mfenced open = "[" close = "]"> <mtable> <mtr> <mtd> <msub> <mi>R</mi> <mrow> <mi>p</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> </mtd> <mtd> <msub> <mi>T</mi> <mrow> <mi>p</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> </mtd> </mtr> <mtr> <mtd> <mn>0</mn> </mtd> <mtd> <mn>1</mn> </mtd> </mtr> </mtable> </mfenced> <mo>.</mo> </mrow> 5
5. a kind of camera position amendment scaling method according to claim 4, it is characterised in that step S7 includes following step Suddenly:
(1) when camera lens is tight shot:
S7.1 calculates video camera attitude rectification matrix Hj
<mrow> <msub> <mi>H</mi> <mi>j</mi> </msub> <mo>=</mo> <msub> <mi>H</mi> <mrow> <mi>p</mi> <mi>j</mi> </mrow> </msub> <mo>&amp;CenterDot;</mo> <msubsup> <mi>H</mi> <mrow> <mi>I</mi> <mi>j</mi> </mrow> <mrow> <mo>-</mo> <mn>1</mn> </mrow> </msubsup> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>5</mn> <mo>)</mo> </mrow> </mrow>
S7.2 is by video camera attitude rectification matrix HjIt is transformed to correct quaternary number (θj, nxj, nyj, nzj) and translation vector Tj
The video camera attitude rectification matrix HjIt is 4 × 4 matrix, its live part is Hj(1: 3,1: 4), i.e. HjFirst three rows, By Hj(1: 3,1: 4) be expressed as form:
Hj(1: 3,1: 4)=[Rj, Tj];
Wherein spin matrix RjFor 3 × 3 real number matrix, translation vector Tj=[txj, tyj, tzj]TFor 3-dimensional column vector;
By spin matrix RjAmendment quaternary number (θ is converted to according to following formulaj, nxj, nyj, nzj):
HereFor 3-dimensional row vector;
Thus storage RjAmendment quaternary number (θ need to only be storedj, nxj, nyj, nzj), therefore video camera attitude rectification matrix HjTurn It is changed to video camera amendment demarcation information (θj, nxj, nyj, nzj, txj, tyj, tzj);
S7.3 calculates camera coverage angle:When focal distance is FDjWhen, camera coverage angle α is calculated according to following formulaj
<mrow> <msub> <mi>&amp;alpha;</mi> <mi>j</mi> </msub> <mo>=</mo> <mn>2</mn> <mi>arctan</mi> <mrow> <mo>(</mo> <mrow> <mfrac> <mn>1</mn> <mn>2</mn> </mfrac> <msqrt> <mrow> <mfrac> <msup> <mi>W</mi> <mn>2</mn> </msup> <msubsup> <mi>f</mi> <mrow> <mi>x</mi> <mi>j</mi> </mrow> <mn>2</mn> </msubsup> </mfrac> <mo>+</mo> <mfrac> <msup> <mi>H</mi> <mn>2</mn> </msup> <msubsup> <mi>f</mi> <mrow> <mi>y</mi> <mi>j</mi> </mrow> <mn>2</mn> </msubsup> </mfrac> </mrow> </msqrt> </mrow> <mo>)</mo> </mrow> </mrow>
Wherein W, H are respectively the width of video camera, high-resolution;
(2) when camera lens is zoom lens:
S7.1 calculates video camera attitude rectification matrix Hjk
<mrow> <msub> <mi>H</mi> <mrow> <mi>j</mi> <mi>k</mi> </mrow> </msub> <mo>=</mo> <msub> <mi>H</mi> <mrow> <mi>p</mi> <mi>j</mi> <mi>k</mi> </mrow> </msub> <mo>&amp;CenterDot;</mo> <msubsup> <mi>H</mi> <mrow> <mi>I</mi> <mi>j</mi> <mi>k</mi> </mrow> <mrow> <mo>-</mo> <mn>1</mn> </mrow> </msubsup> <mo>-</mo> <mo>-</mo> <mo>-</mo> <mrow> <mo>(</mo> <mn>6</mn> <mo>)</mo> </mrow> </mrow>
S7.2 is by video camera attitude rectification matrix HjkIt is transformed to correct quaternary number (θjk, nxjk, nyjk, nzjk) and translation vector Tjk
The video camera attitude rectification matrix HjkIt is 4 × 4 matrix, its live part is Hjk(1: 3,1: 4), i.e. HjkFirst three OK, by Hjk(1: 3,1: 4) be expressed as form:
Hjk(1: 3,1: 4)=[Rjk, Tjk];
Wherein spin matrix RjkFor 3 × 3 real number matrix, translation vector Tjk=[txjk, tyjk, tzjk]TFor 3-dimensional column vector;
By spin matrix RjkAmendment quaternary number (θ is converted to according to following formulajk, nxjk, nyjk, nzjk):
HereFor 3-dimensional row vector;
Thus storage RjkAmendment quaternary number (θ need to only be storedjk, nxjk, nyjk, nzjk), therefore video camera attitude rectification matrix HjkBe converted to video camera amendment demarcation information (θjk, nxjk, nyjk, nzjk, txjk, tyjk, tzjk);
S7.3 calculates camera coverage angle:When focal distance is FDjAnd focal length is FLkWhen, camera coverage angle is calculated according to following formula αjk
<mrow> <msub> <mi>&amp;alpha;</mi> <mrow> <mi>j</mi> <mi>k</mi> </mrow> </msub> <mo>=</mo> <mn>2</mn> <mi>arctan</mi> <mrow> <mo>(</mo> <mrow> <mfrac> <mn>1</mn> <mn>2</mn> </mfrac> <msqrt> <mrow> <mfrac> <msup> <mi>W</mi> <mn>2</mn> </msup> <msubsup> <mi>f</mi> <mrow> <mi>x</mi> <mi>j</mi> <mi>k</mi> </mrow> <mn>2</mn> </msubsup> </mfrac> <mo>+</mo> <mfrac> <msup> <mi>H</mi> <mn>2</mn> </msup> <msubsup> <mi>f</mi> <mrow> <mi>y</mi> <mi>j</mi> <mi>k</mi> </mrow> <mn>2</mn> </msubsup> </mfrac> </mrow> </msqrt> </mrow> <mo>)</mo> </mrow> </mrow>
Wherein W, H are respectively the width of video camera, high-resolution.
6. a kind of camera position amendment scaling method according to claim 5, it is characterised in that in step S9, work as reality Focal distance FD is not in any one focal distance sampled point FDjPlace, or real focal length FL do not sample in any one focal length Point FLkPlace, or when actual focal distance FD is not in any one focal distance sampled point FDjPlace and real focal length FL are not yet In any one focal length sampled point FLkDuring place, obtained by the way of following interpolation arithmetic amendment demarcation information corresponding to it and The angle of visual field:
Row interpolation is entered using SLERP interpolation methods for the amendment quaternary number that amendment is demarcated in information, translation vector is using three-dimensional Linear interpolation;The angle of visual field enters row interpolation using linear interpolation method.
A kind of 7. camera position amendment scaling method according to claim 6, it is characterised in that in step S9,
(1) when camera lens is tight shot:
When actual focal distance FD is not in any one focal distance sampled point FDjPlace, using SLERP algorithms to correcting quaternary number The method for entering row interpolation is as follows:
If FDj< FD < FDj+1, video camera is in focal distance FDjAnd FDj+1The video camera attitude rectification matrix at place is respectively HjAnd Hj+1, Its spin matrix is respectively RjAnd Rj+1, spin matrix is expressed as to correct quaternary numberWithAmendment quaternary number q when then for currently practical focal distance FD is by following formula meter Calculate:
<mrow> <mi>q</mi> <mo>=</mo> <mi>s</mi> <mi>l</mi> <mi>e</mi> <mi>r</mi> <mi>p</mi> <mrow> <mo>(</mo> <msub> <mi>q</mi> <mi>j</mi> </msub> <mo>,</mo> <msub> <mi>q</mi> <mrow> <mi>j</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> <mo>,</mo> <mi>t</mi> <mo>)</mo> </mrow> <mo>=</mo> <msub> <mi>q</mi> <mi>j</mi> </msub> <msup> <mrow> <mo>(</mo> <msubsup> <mi>q</mi> <mi>j</mi> <mrow> <mo>-</mo> <mn>1</mn> </mrow> </msubsup> <msub> <mi>q</mi> <mrow> <mi>j</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> <mo>)</mo> </mrow> <mi>t</mi> </msup> <mo>;</mo> </mrow>
HereIt is qjIt is inverse,
<mfenced open = "" close = ""> <mtable> <mtr> <mtd> <mrow> <msubsup> <mi>q</mi> <mi>j</mi> <mrow> <mo>-</mo> <mn>1</mn> </mrow> </msubsup> <mo>=</mo> <mrow> <mo>(</mo> <msub> <mi>cos&amp;theta;</mi> <mi>j</mi> </msub> <mo>,</mo> <mo>-</mo> <mover> <msub> <mi>n</mi> <mi>j</mi> </msub> <mo>&amp;RightArrow;</mo> </mover> <msub> <mi>sin&amp;theta;</mi> <mi>j</mi> </msub> <mo>)</mo> </mrow> <mo>;</mo> </mrow> </mtd> <mtd> <mrow> <mover> <msub> <mi>n</mi> <mi>j</mi> </msub> <mo>&amp;RightArrow;</mo> </mover> <mo>=</mo> <mrow> <mo>(</mo> <msub> <mi>n</mi> <mrow> <mi>x</mi> <mi>j</mi> </mrow> </msub> <mo>,</mo> <msub> <mi>n</mi> <mrow> <mi>y</mi> <mi>j</mi> </mrow> </msub> <mo>,</mo> <msub> <mi>n</mi> <mrow> <mi>z</mi> <mi>j</mi> </mrow> </msub> <mo>)</mo> </mrow> <mo>,</mo> </mrow> </mtd> </mtr> </mtable> </mfenced>
<mrow> <mi>t</mi> <mo>=</mo> <mfrac> <mrow> <msub> <mi>FD</mi> <mrow> <mi>j</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> <mo>-</mo> <mi>F</mi> <mi>D</mi> </mrow> <mrow> <msub> <mi>FD</mi> <mrow> <mi>j</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> <mo>-</mo> <msub> <mi>FD</mi> <mi>j</mi> </msub> </mrow> </mfrac> <mo>;</mo> </mrow>
(2) when camera lens is zoom lens:
When actual focal distance FD is not in any one focal distance sampled point FDjPlace, or real focal length FL is not at any one Focal length sampled point FLkPlace, or when actual focal distance FD is not in any one focal distance sampled point FDjPlace and actual Jiao Away from FL also not in any one focal length sampled point FLkDuring place, the method for row interpolation is entered such as to amendment quaternary number using SLERP algorithms Under:
If 1. FDj< FD < FDj+1And FLk< FL < FLk+1, i.e., when actual focal distance FD does not adopt in any one focal distance Sampling point FDjPlace, and real focal length FL is not also in any one focal length sampled point FLkPlace, using with its closest to focal distance (FD is combined with focal length sampled pointj, FLk), (FDj, FLk+1), (FDj+1, FLk), (FDj+1, FLk+1) corresponding to amendment quaternary number qJ, k, qJ, k+1, qJ+1, k, qJ+1, k+1, enter row interpolation according to the following formula, be then for currently practical focal distance FD and real focal length Amendment quaternary number q during FLL, dCalculated by following formula:
<mrow> <msub> <mi>q</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>d</mi> </mrow> </msub> <mo>=</mo> <mi>s</mi> <mi>l</mi> <mi>e</mi> <mi>r</mi> <mi>p</mi> <mrow> <mo>(</mo> <msub> <mi>q</mi> <mrow> <mi>j</mi> <mo>,</mo> <mi>k</mi> </mrow> </msub> <mo>,</mo> <msub> <mi>q</mi> <mrow> <mi>j</mi> <mo>,</mo> <mi>k</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> <mo>,</mo> <msub> <mi>t</mi> <mn>2</mn> </msub> <mo>)</mo> </mrow> <mo>=</mo> <msub> <mi>q</mi> <mrow> <mi>j</mi> <mo>,</mo> <mi>k</mi> </mrow> </msub> <msup> <mrow> <mo>(</mo> <msubsup> <mi>q</mi> <mrow> <mi>j</mi> <mo>,</mo> <mi>k</mi> </mrow> <mrow> <mo>-</mo> <mn>1</mn> </mrow> </msubsup> <msub> <mi>q</mi> <mrow> <mi>j</mi> <mo>,</mo> <mi>k</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> <mo>)</mo> </mrow> <msub> <mi>t</mi> <mn>2</mn> </msub> </msup> </mrow>
<mrow> <msub> <mi>q</mi> <mrow> <mi>i</mi> <mo>+</mo> <mn>1</mn> <mo>,</mo> <mi>d</mi> </mrow> </msub> <mo>=</mo> <mi>s</mi> <mi>l</mi> <mi>e</mi> <mi>r</mi> <mi>p</mi> <mrow> <mo>(</mo> <msub> <mi>q</mi> <mrow> <mi>j</mi> <mo>+</mo> <mn>1</mn> <mo>,</mo> <mi>k</mi> </mrow> </msub> <mo>,</mo> <msub> <mi>q</mi> <mrow> <mi>j</mi> <mo>+</mo> <mn>1</mn> <mo>,</mo> <mi>k</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> <mo>,</mo> <msub> <mi>t</mi> <mn>2</mn> </msub> <mo>)</mo> </mrow> <mo>=</mo> <msub> <mi>q</mi> <mrow> <mi>j</mi> <mo>+</mo> <mn>1</mn> <mo>,</mo> <mi>k</mi> </mrow> </msub> <msup> <mrow> <mo>(</mo> <msubsup> <mi>q</mi> <mrow> <mi>j</mi> <mo>+</mo> <mn>1</mn> <mo>,</mo> <mi>k</mi> </mrow> <mrow> <mo>-</mo> <mn>1</mn> </mrow> </msubsup> <msub> <mi>q</mi> <mrow> <mi>j</mi> <mo>+</mo> <mn>1</mn> <mo>,</mo> <mi>k</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> <mo>)</mo> </mrow> <msub> <mi>t</mi> <mn>2</mn> </msub> </msup> </mrow> 7
<mrow> <msub> <mi>q</mi> <mrow> <mi>l</mi> <mo>,</mo> <mi>d</mi> </mrow> </msub> <mo>=</mo> <mi>s</mi> <mi>l</mi> <mi>e</mi> <mi>r</mi> <mi>p</mi> <mrow> <mo>(</mo> <msub> <mi>q</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>d</mi> </mrow> </msub> <mo>,</mo> <msub> <mi>q</mi> <mrow> <mi>i</mi> <mo>+</mo> <mn>1</mn> <mo>,</mo> <mi>d</mi> </mrow> </msub> <mo>,</mo> <msub> <mi>t</mi> <mn>1</mn> </msub> <mo>)</mo> </mrow> <mo>=</mo> <msub> <mi>q</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>d</mi> </mrow> </msub> <msup> <mrow> <mo>(</mo> <msubsup> <mi>q</mi> <mrow> <mi>i</mi> <mo>,</mo> <mi>d</mi> </mrow> <mrow> <mo>-</mo> <mn>1</mn> </mrow> </msubsup> <msub> <mi>q</mi> <mrow> <mi>i</mi> <mo>+</mo> <mn>1</mn> <mo>,</mo> <mi>d</mi> </mrow> </msub> <mo>)</mo> </mrow> <msub> <mi>t</mi> <mn>1</mn> </msub> </msup> </mrow>
Wherein:
<mrow> <msub> <mi>t</mi> <mn>1</mn> </msub> <mo>=</mo> <mfrac> <mrow> <msub> <mi>FD</mi> <mrow> <mi>j</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> <mo>-</mo> <mi>F</mi> <mi>D</mi> </mrow> <mrow> <msub> <mi>FD</mi> <mrow> <mi>j</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> <mo>-</mo> <msub> <mi>FD</mi> <mi>j</mi> </msub> </mrow> </mfrac> <mo>;</mo> </mrow>
<mrow> <msub> <mi>t</mi> <mn>2</mn> </msub> <mo>=</mo> <mfrac> <mrow> <msub> <mi>FL</mi> <mrow> <mi>k</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> <mo>-</mo> <mi>F</mi> <mi>L</mi> </mrow> <mrow> <msub> <mi>FL</mi> <mrow> <mi>k</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> <mo>-</mo> <msub> <mi>FL</mi> <mi>k</mi> </msub> </mrow> </mfrac> <mo>;</mo> </mrow>
If 2. FDj< FD < FDj+1, FL=FLkWhen, i.e., when actual focal distance FD is not in any one focal distance sampled point FDjPlace and real focal length FL is focal length sampled point FLkWhen, using with its closest to focal distance sampled point combine (FDj, FLk), (FDj+1, FLk) corresponding to amendment quaternary number information qJ, k, qJ+1, k, enter row interpolation according to the following formula, then for currently practical Focal distance FD and amendment quaternary number q when real focal length is FLL, dCalculated by following formula
<mrow> <msub> <mi>q</mi> <mrow> <mi>l</mi> <mo>,</mo> <mi>d</mi> </mrow> </msub> <mo>=</mo> <mi>s</mi> <mi>l</mi> <mi>e</mi> <mi>r</mi> <mi>p</mi> <mrow> <mo>(</mo> <msub> <mi>q</mi> <mrow> <mi>j</mi> <mo>,</mo> <mi>k</mi> </mrow> </msub> <mo>,</mo> <msub> <mi>q</mi> <mrow> <mi>j</mi> <mo>+</mo> <mn>1</mn> <mo>,</mo> <mi>k</mi> </mrow> </msub> <mo>,</mo> <mi>t</mi> <mo>)</mo> </mrow> <mo>=</mo> <msub> <mi>q</mi> <mrow> <mi>j</mi> <mo>,</mo> <mi>k</mi> </mrow> </msub> <msup> <mrow> <mo>(</mo> <msubsup> <mi>q</mi> <mrow> <mi>j</mi> <mo>,</mo> <mi>k</mi> </mrow> <mrow> <mo>-</mo> <mn>1</mn> </mrow> </msubsup> <msub> <mi>q</mi> <mrow> <mi>j</mi> <mo>+</mo> <mn>1</mn> <mo>,</mo> <mi>k</mi> </mrow> </msub> <mo>)</mo> </mrow> <mi>t</mi> </msup> </mrow>
Wherein
<mrow> <mi>t</mi> <mo>=</mo> <mfrac> <mrow> <msub> <mi>FD</mi> <mrow> <mi>j</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> <mo>-</mo> <mi>F</mi> <mi>D</mi> </mrow> <mrow> <msub> <mi>FD</mi> <mrow> <mi>j</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> <mo>-</mo> <msub> <mi>FD</mi> <mi>j</mi> </msub> </mrow> </mfrac> <mo>;</mo> </mrow>
If 3. FD=FDj, FLk< FL < FLk+1When, i.e., actual focal distance FD is focal distance sampled point FDjAnd real focal length FL is not in any one focal length sampled point FLkDuring place, using with its closest to focal length sampled point combine (FDj, FLk), (FDj, FLk+1) corresponding to amendment quaternary number information qJ, k, qJ, k+1, enter row interpolation according to the following formula, then for currently practical focal distance FD And amendment quaternary number q when real focal length is FLL, dCalculated by following formula:
<mrow> <msub> <mi>q</mi> <mrow> <mi>l</mi> <mo>,</mo> <mi>d</mi> </mrow> </msub> <mo>=</mo> <mi>s</mi> <mi>l</mi> <mi>e</mi> <mi>r</mi> <mi>p</mi> <mrow> <mo>(</mo> <msub> <mi>q</mi> <mrow> <mi>j</mi> <mo>,</mo> <mi>k</mi> </mrow> </msub> <mo>,</mo> <msub> <mi>q</mi> <mrow> <mi>j</mi> <mo>,</mo> <mi>k</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> <mo>,</mo> <mi>t</mi> <mo>)</mo> </mrow> <mo>=</mo> <msub> <mi>q</mi> <mrow> <mi>j</mi> <mo>,</mo> <mi>k</mi> </mrow> </msub> <msup> <mrow> <mo>(</mo> <msubsup> <mi>q</mi> <mrow> <mi>j</mi> <mo>,</mo> <mi>k</mi> </mrow> <mrow> <mo>-</mo> <mn>1</mn> </mrow> </msubsup> <msub> <mi>q</mi> <mrow> <mi>j</mi> <mo>,</mo> <mi>k</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> <mo>)</mo> </mrow> <mi>t</mi> </msup> </mrow>
Wherein
<mrow> <mi>t</mi> <mo>=</mo> <mfrac> <mrow> <msub> <mi>FL</mi> <mrow> <mi>k</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> <mo>-</mo> <mi>F</mi> <mi>L</mi> </mrow> <mrow> <msub> <mi>FL</mi> <mrow> <mi>k</mi> <mo>+</mo> <mn>1</mn> </mrow> </msub> <mo>-</mo> <msub> <mi>FL</mi> <mi>k</mi> </msub> </mrow> </mfrac> <mo>.</mo> </mrow>
8. a kind of camera position amendment scaling method according to claim 1, it is characterised in that space measurement is used in S1 Equipment obtains N number of mark point A on background screen1..., ANCoordinate under world coordinate system.
9. any camera position amendment scaling method according to claim 1-5, it is characterised in that the video camera appearance State external trace device is optictracking device or mechanical arm.
A kind of 10. camera position amendment scaling method according to claim 1, it is characterised in that video camera imager coordinate It is that origin is set in field of view center.
11. a kind of camera position amendment scaling method according to claim 1, the focal distance sampled point and focal length Sampled point uses the sampled point taken in camera lens calibration process.
A kind of 12. camera position amendment calibration system, it is characterised in that including:Servo-motor control system, video camera, take the photograph Shadow machine posture external trace device, background screen, mark point, space measurement equipment, data processing equipment, image rendering engine;
Wherein, servo-motor control system is connected with video camera, for adjusting the focal length and focal distance of camera lens;Servo motor control System processed is also connected with data processing equipment, for sending the focal length and focal distance information of camera lens to data processing equipment, with Data processing equipment is calculated amendment demarcation information and the angle of visual field and establish look-up table;
Data processing equipment is also connected with video camera, for reading video stream data in real time;
Video camera posture external trace device is arranged at outside video camera, for the position by photogrammetric camera and video camera appearance The skew of state external trace device, estimate position and the attitude information of video camera;Video camera posture external trace device also with number It is connected according to processing equipment, for sending measurement data information;
At least three not conllinear mark point is set on background screen, and these mark point radiuses are identical, and color has with background screen Contrast;
Space measurement equipment and image rendering engine are connected with data processing equipment respectively, and space measurement equipment is used for measurement markers The world coordinates of point home position is simultaneously sent to data processing equipment, and image rendering engine is used to be established according to data processing equipment Look-up table obtain corresponding amendment demarcation information and the angle of visual field.
CN201510489677.0A 2015-08-11 2015-08-11 Camera position amendment scaling method and system Active CN105118055B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510489677.0A CN105118055B (en) 2015-08-11 2015-08-11 Camera position amendment scaling method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510489677.0A CN105118055B (en) 2015-08-11 2015-08-11 Camera position amendment scaling method and system

Publications (2)

Publication Number Publication Date
CN105118055A CN105118055A (en) 2015-12-02
CN105118055B true CN105118055B (en) 2017-12-15

Family

ID=54666029

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510489677.0A Active CN105118055B (en) 2015-08-11 2015-08-11 Camera position amendment scaling method and system

Country Status (1)

Country Link
CN (1) CN105118055B (en)

Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11477382B2 (en) 2016-02-19 2022-10-18 Fotonation Limited Method of stabilizing a sequence of images
CN108702450B (en) * 2016-02-19 2020-10-27 快图有限公司 Camera module for image capture device
CN107306325A (en) * 2016-04-22 2017-10-31 宁波舜宇光电信息有限公司 The device of utilization space measurement of coordinates camera module visual field and its application
CN107809610B (en) * 2016-09-08 2021-06-11 松下知识产权经营株式会社 Camera parameter set calculation device, camera parameter set calculation method, and recording medium
WO2018072179A1 (en) * 2016-10-20 2018-04-26 深圳达闼科技控股有限公司 Iris recognition-based image preview method and device
CN106485758B (en) * 2016-10-31 2023-08-22 成都通甲优博科技有限责任公司 Unmanned aerial vehicle camera calibration device, calibration method and assembly line calibration implementation method
CN106990776B (en) * 2017-02-27 2020-08-11 广东省智能制造研究所 Robot homing positioning method and system
JP7038345B2 (en) * 2017-04-20 2022-03-18 パナソニックIpマネジメント株式会社 Camera parameter set calculation method, camera parameter set calculation program and camera parameter set calculation device
CN107562189B (en) * 2017-07-21 2020-12-11 广州励丰文化科技股份有限公司 Space positioning method based on binocular camera and service equipment
CN107492126B (en) * 2017-08-03 2019-11-05 厦门云感科技有限公司 Calibration method, device, system, medium and the equipment of camera central axis
CN109523597B (en) * 2017-09-18 2022-06-03 百度在线网络技术(北京)有限公司 Method and device for calibrating external parameters of camera
CN107665483B (en) * 2017-09-27 2020-05-05 天津智慧视通科技有限公司 Calibration-free convenient monocular head fisheye image distortion correction method
CN107610185A (en) * 2017-10-12 2018-01-19 长沙全度影像科技有限公司 A kind of fisheye camera fast calibration device and scaling method
CN107959794A (en) * 2017-11-29 2018-04-24 天津聚飞创新科技有限公司 Data Modeling Method, device and data capture method, device and electronic equipment
CN108282651A (en) * 2017-12-18 2018-07-13 北京小鸟看看科技有限公司 Antidote, device and the virtual reality device of camera parameter
CN110389349B (en) * 2018-04-17 2021-08-17 北京京东尚科信息技术有限公司 Positioning method and device
CN110858403B (en) * 2018-08-22 2022-09-27 杭州萤石软件有限公司 Method for determining scale factor in monocular vision reconstruction and mobile robot
CN109788277B (en) * 2019-01-08 2020-08-04 浙江大华技术股份有限公司 Method and device for compensating optical axis deviation of anti-shake movement and storage medium
CN109949369B (en) * 2019-02-18 2022-11-29 先壤影视制作(上海)有限公司 Method for calibrating virtual picture and real space and computer readable storage medium
CN109887041B (en) * 2019-03-05 2020-11-20 中测国检(北京)测绘仪器检测中心 Method for controlling position and posture of shooting center of digital camera by mechanical arm
CN110266944A (en) * 2019-06-21 2019-09-20 大庆安瑞达科技开发有限公司 A kind of calibration quick focusing method of remote optical monitoring system
CN110487249A (en) * 2019-07-17 2019-11-22 广东工业大学 A kind of unmanned plane scaling method for structure three-dimensional vibration measurement
CN110969668B (en) * 2019-11-22 2023-05-02 大连理工大学 Stereo calibration algorithm of long-focus binocular camera
CN111080713B (en) * 2019-12-11 2023-03-28 四川深瑞视科技有限公司 Camera calibration system and method
CN111311671B (en) * 2020-05-12 2020-08-07 创新奇智(南京)科技有限公司 Workpiece measuring method and device, electronic equipment and storage medium
CN111986257A (en) * 2020-07-16 2020-11-24 南京模拟技术研究所 Bullet point identification automatic calibration method and system supporting variable distance
CN113409391B (en) * 2021-06-25 2023-03-03 浙江商汤科技开发有限公司 Visual positioning method and related device, equipment and storage medium
CN113766131B (en) * 2021-09-15 2022-05-13 广州市明美光电技术有限公司 Multi-target point focusing method and digital slice scanner applying same
CN115060229A (en) * 2021-09-30 2022-09-16 西安荣耀终端有限公司 Method and device for measuring a moving object
CN114299167B (en) * 2022-03-11 2022-07-26 杭州灵西机器人智能科技有限公司 Monocular calibration method, system, device and medium of zoom lens
CN116991298B (en) * 2023-09-27 2023-11-28 子亥科技(成都)有限公司 Virtual lens control method based on antagonistic neural network

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5991437A (en) * 1996-07-12 1999-11-23 Real-Time Geometry Corporation Modular digital audio system having individualized functional modules
CN1906943A (en) * 2004-01-30 2007-01-31 株式会社丰田自动织机 Video image positional relationship correction apparatus, steering assist apparatus having the video image positional relationship correction apparatus and video image positional relationship correcti
CN101447073A (en) * 2007-11-26 2009-06-03 新奥特(北京)视频技术有限公司 Zoom lens calibration method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4270949B2 (en) * 2003-06-10 2009-06-03 株式会社トプコン Calibration chart image display device, calibration device, and calibration method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5991437A (en) * 1996-07-12 1999-11-23 Real-Time Geometry Corporation Modular digital audio system having individualized functional modules
CN1906943A (en) * 2004-01-30 2007-01-31 株式会社丰田自动织机 Video image positional relationship correction apparatus, steering assist apparatus having the video image positional relationship correction apparatus and video image positional relationship correcti
CN101447073A (en) * 2007-11-26 2009-06-03 新奥特(北京)视频技术有限公司 Zoom lens calibration method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Zoom-Dependent camera calibration in digital close-range photogrammetry;C.S. Fraser et al;《Photogrammetric Engineering & Remote Sensing》;20060930;第72卷(第9期);第1017-1026页 *
利用四元数描述线阵CCD影像的空间后方交会;闫利 等;《武汉大学学报.信息科学版》;20100228;第35卷(第2期);第201-204、232页 *
随对焦状态与物距变化的畸变模型及标定方法;董明利 等;《仪器仪表学报》;20131231;第34卷(第12期);第2653-2659页 *

Also Published As

Publication number Publication date
CN105118055A (en) 2015-12-02

Similar Documents

Publication Publication Date Title
CN105118055B (en) Camera position amendment scaling method and system
CN110211043B (en) Registration method based on grid optimization for panoramic image stitching
CN103198487B (en) A kind of automatic marking method for video monitoring system
CN105447850B (en) A kind of Panoramagram montage synthetic method based on multi-view image
CN105488810B (en) A kind of focusing light-field camera inside and outside parameter scaling method
CN107424118A (en) Based on the spherical panorama mosaic method for improving Lens Distortion Correction
US9686479B2 (en) Method for combining multiple image fields
CN107274336B (en) A kind of Panorama Mosaic method for vehicle environment
CN110782394A (en) Panoramic video rapid splicing method and system
CN111243033B (en) Method for optimizing external parameters of binocular camera
CN108805801A (en) A kind of panoramic picture bearing calibration and system
CN104240262B (en) Calibration device and calibration method for outer parameters of camera for photogrammetry
CN109615663A (en) Panoramic video bearing calibration and terminal
CN109544484A (en) A kind of method for correcting image and device
CN109961485A (en) A method of target positioning is carried out based on monocular vision
CN105654476B (en) Binocular calibration method based on Chaos particle swarm optimization algorithm
CN107665483B (en) Calibration-free convenient monocular head fisheye image distortion correction method
CN108648241A (en) A kind of Pan/Tilt/Zoom camera field calibration and fixed-focus method
CN206460516U (en) A kind of multichannel fisheye camera binocular calibration device
CN107154014A (en) A kind of real-time color and depth Panorama Mosaic method
CN101505433A (en) Real acquisition real display multi-lens digital stereo system
CN103729839B (en) A kind of method and system of sensor-based outdoor camera tracking
CN112949478A (en) Target detection method based on holder camera
CN108200360A (en) A kind of real-time video joining method of more fish eye lens panoramic cameras
CN206460515U (en) A kind of multichannel fisheye camera caliberating device based on stereo calibration target

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant