CN105308627A - A method of calibrating a camera and a system therefor - Google Patents

A method of calibrating a camera and a system therefor

Info

Publication number
CN105308627A
Authority
CN
China
Prior art keywords
camera
point
pixel
energy source
focal length
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201280024667.4A
Other languages
Chinese (zh)
Other versions
CN105308627B (en)
Inventor
Jason Peter de Villiers
Jacques Cronje
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Council for Scientific and Industrial Research CSIR
Original Assignee
Council for Scientific and Industrial Research CSIR
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Council for Scientific and Industrial Research CSIR filed Critical Council for Scientific and Industrial Research CSIR
Publication of CN105308627A publication Critical patent/CN105308627A/en
Application granted granted Critical
Publication of CN105308627B publication Critical patent/CN105308627B/en
Legal status: Active

Classifications

    • G06T5/80
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00 Diagnosis, testing or measuring for television systems or their details
    • H04N17/002 Diagnosis, testing or measuring for television systems or their details for television cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/65 Control of camera operation in relation to power supply
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30204 Marker
    • G06T2207/30208 Marker matrix

Abstract

A system and method for calibrating a camera includes an energy source and a camera to be calibrated, with at least one of the energy source and the camera being mounted on a mechanical actuator so that it is movable relative to the other. A processor is connected to the energy source, the mechanical actuator and the camera and is programmed to control the mechanical actuator to move at least one of the energy source and the camera relative to the other through a plurality of discrete points on a calibration target pattern. At each of the discrete points, the processor further controls the camera to take a digital image, and a lens distortion characterisation is performed on each image. A focal length of the camera, including any lens connected to the camera, is determined, and an extrinsic camera position is then determined for each image.

Description

A method of calibrating a camera and a system therefor
Background
This application relates to a method of calibrating a camera and a system therefor.
The method first characterises the lens distortion, then determines the focal length, and finally determines the extrinsic camera position.
Using this approach, the invention provides an improved method of calibrating a camera and a corresponding system.
Summary of the invention
According to an exemplary embodiment, a system for calibrating a camera comprises:
an energy source and a camera to be calibrated, at least one of the energy source and the camera being mounted on a mechanical actuator so that it is movable relative to the other; and
a processor connected to the energy source, the mechanical actuator and the camera, the processor being programmed to:
control the mechanical actuator to move at least one of the energy source and the camera relative to the other through a plurality of discrete points on a calibration target pattern;
at each discrete point, control the camera to capture a digital image;
perform a lens distortion characterisation on each image;
determine the focal length of any lens connected to the camera; and
determine an extrinsic camera position for each image.
The processor performs the lens distortion characterisation by:
selecting a distortion characterisation model, and determining a first estimate of the model parameters to correct the observed distortion;
selecting a straightness metric to measure and quantify the alignment of points with sampled straight lines; and
using the straightness metric to numerically refine the initial parameter estimates until the straight lines in the distorted images are corrected.
The processor determines the focal length by:
selecting an initial focal length;
using an algorithm, together with the initial focal length, the physical pixel size, the undistorted image coordinates of the energy source at each point in turn, and the precise location of the mechanical actuator at each point in turn, to calculate the position of the camera relative to each discrete point;
determining how tightly the calculated camera positions are clustered; and
numerically refining the initial focal length until the determined positions are most tightly clustered.
The processor determines the extrinsic camera position by:
creating a bundle of vectors based on geometry;
creating a bundle of vectors based on image processing;
selecting a metric to measure the similarity of the two bundles of vectors; and
refining the estimated position of the camera relative to the energy source so that the similarity of the two bundles of vectors is maximised.
In one example, after a digital image has been captured, the processor further performs the following image processing steps:
determining which regions of adjacent pixels in the image have an intensity above a selected threshold value;
generating a list of these regions together with the pixel coordinates and intensity of each pixel in each region;
removing from the list regions with too few or too many constituent pixels, the acceptable number of constituent pixels being determined by the characteristics of the camera, lens and energy source;
removing from the list all regions that do not meet a shape criterion; and
determining the centre of the largest remaining region.
The processor determines the centre either by fitting an ellipse to the region's pixels and using the centre of the ellipse, or by calculating the centre of gravity of the pixels in the region.
The shape criterion is symmetry, which is tested by finding the cross-section through the region for which the distance from the first pixel encountered to the last pixel encountered is longest, and comparing this distance with the distance obtained along a line perpendicular to this longest axis.
In one example, the processor controls the mechanical actuator to move such that the series of points is divided into several groups, each group comprising at least three points lying in a common plane and at least one point lying outside that plane.
The precise relative displacements of these points are known to the processor from the position feedback of the mechanical actuator.
For example, each group of points is obtained by applying a different six degree of freedom (6 DOF) translation and rotation offset to an untransformed reference group of points, thereby producing a new group of discrete points having identical relative positions.
According to another exemplary embodiment, a method of calibrating a camera comprises:
controlling a mechanical actuator to move at least one of an energy source and the camera relative to the other through a plurality of discrete points on a calibration target pattern;
at each discrete point, capturing a digital image with the camera;
performing a lens distortion characterisation on each image;
determining the focal length of any lens connected to the camera; and
determining an extrinsic camera position for each image.
Brief description of the drawings
Fig. 1 shows an example system for calibrating a digital camera;
Fig. 2 is a block diagram of the processor of the system shown in Fig. 1.
Detailed description
The system and method described below relate to the calibration of a camera.
The intrinsic and extrinsic parameters of a camera of arbitrary (but known) sensitivity are characterised. The intrinsic parameters are those that affect how the real-world scene under study is projected onto the camera's imaging element, and include at least one of: the lens distortion parameters, the lens focal length, the pixel size, and the orthogonality of the imaging element to the optical axis of the lens.
The extrinsic parameters specify at least one of the position of the camera relative to a reference point and the orientation of the camera relative to a chosen set of axes.
As shown in the drawings, the system 10 comprises at least one camera 12 and/or at least one energy source 14.
Preferably, several cameras 12 and several energy sources 14 can be used.
The system also comprises a processor 16 connected to the energy source and/or the camera. The processor 16 comprises a number of modules whose functions are described in more detail below.
In an exemplary embodiment, the modules described below are implemented by a machine-readable medium embodying instructions which, when executed by a machine, cause the machine to perform any of the methods described above.
In another exemplary embodiment, the modules are implemented using firmware programmed specifically to execute the methods described herein.
It will be appreciated that embodiments of the invention are not limited to such architectures, and could equally find application in a distributed or peer-to-peer architecture. The modules illustrated could therefore be located on one or more servers operated by one or more organisations.
In any of these cases, the modules form a physical apparatus with physical modules specifically for executing the steps of the methods described herein.
A memory 28 is connected to the processor 16.
In an exemplary embodiment, for calibration purposes, a mechanical actuator 18 in the form of a robotic arm is used to move the energy source within the field of view of the camera.
In the illustrated exemplary embodiment, the mechanical actuator 18 is driven by servo motors (not shown) controlled by the processor 16.
The position of the robotic arm 18 is known at any given time, and the camera captures images of the energy source.
Each image containing a view of the robotic arm and the energy source is captured by the camera and transmitted to the processor 16. The camera 12 may be configured to reduce the exposure time and/or stop down the lens iris so that the energy source remains visible while most of the background is suppressed. Whether or not this is done, the algorithms detailed below remain applicable; all that is required is a single image in which the energy source is visible.
The processor 16 first characterises the lens distortion, then determines the focal length, and finally determines the extrinsic camera position. These steps are explained in more detail below.
After a digital image has been captured, the image processing module 20 performs the following image processing steps in order to implement the method described above:
1) Determine which regions of adjacent pixels in the image have an intensity above a selected threshold value, and generate a list of these regions together with the pixel coordinates and intensity of each pixel in each region.
2) Remove from the list regions with too few or too many constituent pixels; the acceptable number of constituent pixels is determined by the characteristics of the camera, lens and energy source.
3) Remove from the list all regions that do not meet a shape criterion (for example symmetry). Symmetry is tested by finding the cross-section through the region for which the distance from the first pixel encountered to the last pixel encountered is longest, and comparing this distance with the distance obtained along a line perpendicular to this longest axis. If the ratio of the two distances exceeds a specified limit, the region is removed.
4) Find the centre of the largest remaining region, either by fitting an ellipse to the region's pixels and using the centre of the ellipse, or by calculating the centre of gravity of the pixels in the region. An illustrative sketch of these steps is given below.
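By way of illustration only, the following is a minimal sketch of steps 1) to 4) in Python using NumPy and SciPy; the threshold, size limits and aspect-ratio limit are assumed values, and the symmetry test is approximated by a principal-axis aspect ratio rather than the cross-section test described above.

```python
import numpy as np
from scipy import ndimage

def find_energy_source_centre(image, threshold=200,
                              min_pixels=9, max_pixels=2000,
                              max_aspect_ratio=3.0):
    """Locate the centre of the energy source in a greyscale image:
    threshold, label connected regions, reject regions by size and shape,
    then return the intensity-weighted centroid of the largest survivor."""
    mask = image > threshold                    # step 1: bright pixels
    labels, n = ndimage.label(mask)             # connected-component labelling
    best_centre, best_size = None, 0
    for region_id in range(1, n + 1):
        ys, xs = np.nonzero(labels == region_id)
        size = xs.size
        if size < min_pixels or size > max_pixels:      # step 2: size criterion
            continue
        pts = np.column_stack([xs, ys]).astype(float)
        pts -= pts.mean(axis=0)
        eigvals = np.linalg.eigvalsh(np.cov(pts.T))      # step 3: crude shape test
        if eigvals[0] <= 0 or np.sqrt(eigvals[1] / eigvals[0]) > max_aspect_ratio:
            continue
        if size > best_size:                             # step 4: largest remaining region
            weights = image[ys, xs].astype(float)
            cx = np.sum(xs * weights) / weights.sum()    # centre of gravity
            cy = np.sum(ys * weights) / weights.sum()
            best_centre, best_size = (cx, cy), size
    return best_centre
```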
The lens distortion characterisation step is carried out by the lens distortion characterisation module 22 in the processor 16, as follows:
1) The camera is rigidly mounted in a position from which it has an unobstructed view of the robotic arm moving the energy source.
2) The robotic arm moves the energy source along a series of straight lines; although it is recommended that these lines cover the entire field of view of the camera, this is not strictly required.
3) At several points (at least 3) on each straight line, an image of the energy source is captured (by the camera being characterised) and processed as described above.
4) A distortion characterisation model is selected, and a first estimate of its parameters is determined to correct the observed distortion. This work used an enhanced form of Brown's lens distortion model, with an initial starting point obtained empirically or by means of a genetic algorithm.
5) A straightness metric is selected to measure and quantify the alignment of the points of step 2 with sampled straight lines. In this work a least-squares straight line was fitted to the sample points of each line, and the metric was the root mean square of the perpendicular distances of the sample points from their best-fit lines.
6) Using the metric of step 5, the initial parameter estimates of step 4 are numerically refined until the straight lines in the distortion-corrected images (produced using the distortion model and the current parameters) are maximally straight. In this work the metric of step 5 was minimised by multi-dimensional non-linear numerical optimisation techniques, specifically the Leapfrog algorithm and the Fletcher-Reeves conjugate gradient method. A sketch of the distortion model and the straightness metric follows below.
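The following is a minimal, illustrative sketch of a basic Brown radial/tangential correction and of the straightness metric of step 5, written in Python with NumPy. The parameter layout (two tangential and an arbitrary number of radial coefficients) is an assumption for illustration; the enhanced model with a radial gain function used in the prototype is not reproduced here.

```python
import numpy as np

def brown_undistort(points, centre, k, p):
    """Correct distorted pixel coordinates with a basic Brown model.
    points: (N, 2) array; centre: distortion centre (2,);
    k: radial coefficients (k1, k2, ...); p: tangential coefficients (p1, p2)."""
    xy = np.asarray(points, dtype=float) - centre
    x, y = xy[:, 0], xy[:, 1]
    r2 = x ** 2 + y ** 2
    radial = np.ones_like(r2)
    for i, ki in enumerate(k, start=1):
        radial += ki * r2 ** i
    dx = 2 * p[0] * x * y + p[1] * (r2 + 2 * x ** 2)
    dy = p[0] * (r2 + 2 * y ** 2) + 2 * p[1] * x * y
    return np.column_stack([x * radial + dx, y * radial + dy]) + centre

def straightness_rms(lines):
    """Straightness metric of step 5: RMS perpendicular distance of each point
    from the least-squares line fitted through its own set of points.
    lines: list of (M_i, 2) arrays of (undistorted) pixel coordinates."""
    sq_dists = []
    for pts in lines:
        centred = pts - pts.mean(axis=0)
        _, _, vt = np.linalg.svd(centred)        # vt[0] is the best-fit line direction
        residual = centred - np.outer(centred @ vt[0], vt[0])
        sq_dists.extend(np.sum(residual ** 2, axis=1))
    return float(np.sqrt(np.mean(sq_dists)))
```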
Once this has been done, the focal length determination module 24 in the processor 16 determines the focal length, as follows:
1) The lens distortion characteristics of the camera are determined.
2) The camera is rigidly mounted in a position from which it has an unobstructed view of the robotic arm moving the energy source.
3) The robotic arm is moved so that it stops at a series of discrete points. The series of points is divided into several groups, each group comprising at least three points lying in a common plane and at least one point lying outside that plane. The precise relative displacements of these points are obtained from the position feedback of the robotic arm. Each group of points is obtained by applying a different 6 DOF translation and rotation offset to an untransformed reference group of points, thereby producing a new group of discrete points with identical relative positions. For example, in a prototype embodiment each group of points consisted of four points arranged as the vertices of a tetrahedron. At each of five positions, four tetrahedra were created, angled towards the upper left, upper right, lower left and lower right, giving twenty tetrahedra in total.
4) At each discrete point described in step 3, the camera image is processed by the method described above to find the coordinates of the energy source.
5) For each captured coordinate produced by step 4, the undistorted pixel position is found using the distortion characterisation.
6) An initial focal length is selected; an empirical value or the manufacturer's stated nominal focal length can be used.
7) Using the algorithm described in the random sample consensus (RANSAC) paper (Martin A. Fischler and Robert C. Bolles, 1981, "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography", Commun. ACM 24, 6 (June 1981), 381-395, DOI=10.1145/358669.358692, http://doi.acm.org/10.1145/358669.358692) or in the paper by Kneip et al. (L. Kneip, D. Scaramuzza, R. Siegwart, "A Novel Parametrization of the Perspective-Three-Point Problem for a Direct Computation of Absolute Camera Position and Orientation", Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, USA, June 2011), together with the assumed focal length, the physical pixel size, the undistorted image coordinates of the energy source at each point in turn, and the precise location of the robotic arm at each point in turn, the position of the camera relative to each group of points is determined.
8) The tightness of the grouping of the determined 6 DOF camera positions is assessed. Since the camera is rigidly mounted in step 2, the camera positions should in theory be identical. This work used the sum of the standard deviations of each of the 6 DOF position components as the measure of tightness. Its sensitivity was improved by observing the series of points from two rigid camera locations and using the standard deviations of the relative camera position, which ought to be constant.
9) The assumed lens focal length is numerically refined until the camera positions determined as in step 8 are clustered together most tightly according to the metric. Owing to the discontinuous nature of the metric and its one-dimensionality, this work used a simple coarse-to-fine brute-force search. An illustrative sketch of steps 7) and 8) is given below.
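Purely as an illustration of steps 7) and 8), the sketch below recovers a camera position for each group of robot-arm points for a candidate focal length and measures how widely the recovered positions are spread. OpenCV's general-purpose solvePnP is used here as a stand-in for the P3P formulations cited above, the points are assumed to be expressed in the robot-arm reference frame, and the single-camera-station variant is shown (the two-station relative-pose refinement of step 8 is omitted). All names and units are illustrative assumptions.

```python
import numpy as np
import cv2

def camera_position_spread(focal_length_mm, pixel_size_mm, principal_point,
                           point_groups_3d, point_groups_2d):
    """For a candidate focal length, estimate the camera pose from every group
    of robot-arm points (undistorted image coordinates assumed) and return the
    spread of the recovered camera centres; a rigidly mounted camera should
    yield identical centres, so the spread is smallest near the true focal length."""
    f_pix = focal_length_mm / pixel_size_mm
    K = np.array([[f_pix, 0.0, principal_point[0]],
                  [0.0, f_pix, principal_point[1]],
                  [0.0, 0.0, 1.0]])
    centres = []
    for obj_pts, img_pts in zip(point_groups_3d, point_groups_2d):
        ok, rvec, tvec = cv2.solvePnP(
            np.asarray(obj_pts, dtype=np.float64),   # points in robot-arm frame
            np.asarray(img_pts, dtype=np.float64),   # undistorted pixel coordinates
            K, None, flags=cv2.SOLVEPNP_EPNP)
        if ok:
            R, _ = cv2.Rodrigues(rvec)
            centres.append((-R.T @ tvec).ravel())    # camera centre in robot-arm frame
    centres = np.asarray(centres)
    return float(np.sum(np.std(centres, axis=0)))    # sum of per-axis standard deviations
```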
Next, the extrinsic camera position is determined by the extrinsic camera position determination module 26 in the processor 16, as follows:
1) The lens distortion of the camera is characterised; at least the significant radial and tangential distortion parameters are required.
2) The focal length of the camera is determined (or it is added to the list of unknowns described below).
3) The camera is rigidly mounted on the mechanical platform required for the specific application, and the mechanical platform is rigidly placed so that the camera has an unobstructed view of the robotic arm moving the energy source.
4) The robotic arm is moved through a series of discrete points. At each discrete point an image is captured and the centre of the energy source in the camera image is found as described above. At each discrete point the precise location of the energy source, as fed back by the robotic arm, is also captured.
5) A first estimate of the position of the camera relative to the robotic arm is determined. This can be done using information from the physical set-up or by means of a genetic algorithm.
6) A bundle of image-based vectors is created. This is done using the distortion characterisation parameters together with the physical pixel size and the focal length. The focal length may already be known, or it may be added as a seventh unknown to be determined.
7) A bundle of geometry-based vectors is created. These vectors are created from the assumed 6 DOF camera position (step 5) and the known location of the energy source at each point in turn.
8) A metric is selected to measure the similarity of the two bundles of vectors. The sum of the angles between corresponding vectors can be used as the metric.
9) The estimated 6 DOF position of the camera relative to the robotic arm (and the focal length, if not determined beforehand) is refined so that the similarity of the two bundles of vectors is maximised. This can be done with the Fletcher-Reeves conjugate gradient algorithm (Fletcher, R. and Reeves, C., "Function minimization by conjugate gradients", Computer Journal 7, 149-154 (1964)) or the Leapfrog multi-dimensional non-linear numerical optimisation algorithm (Snyman, J., "An improved version of the original leap-frog dynamic method for unconstrained minimization: LFOP1(b)", Applied Mathematics and Modelling 7, 216-218 (1983)).
10) If more than one camera is to be calibrated, one camera or another known point can be selected as the reference coordinate system for the group of cameras, and the determined camera positions are expressed relative to this point.
The mathematical foundations required for the above processing steps are detailed below.
The mathematical notation used hereafter is as follows: v_ab^c is a three-dimensional vector pointing from point a to point b, stated in terms of its projections onto the axes of orthogonal coordinate system c; v_ab is used when the coordinate system is unknown or unimportant.
T_ab^c represents the translation, or displacement, of point b relative to point a. R_ab represents the 3-by-3 Euler rotation matrix that rotates orthogonal axes a onto (and projects onto) orthogonal axes b. The components of a three-dimensional vector are called x, y and z, and the components of a two-dimensional (2D) vector are called horizontal (h) and vertical (v), in order to avoid confusion.
The relevant module of the processor 16 performs the following steps on the captured image data:
1.1) Connected-object labelling is performed on all pixels brighter than the threshold value.
1.2) All objects that do not meet a size criterion are removed; the size criterion is determined by the type of energy source and the camera resolution.
1.3) All objects that do not meet the symmetry shape criterion are removed; that is, each connected object is processed as follows:
a) A best-fit straight line is fitted through the pixels of the object:
Equation 1
Equation 2
c) The width of the object perpendicular to the best-fit line is determined:
Equation 3
The processor then compares the ratio of the length (LA) of the object to its width (LP), and if the ratio is not within the specified limits the object is removed.
1.4) The centre of each object is determined by a suitable method, for example either of the following two illustrative methods:
a) the centre of gravity:
Equation 4
b) an ellipse fitted by minimising (for example) a least-squares metric:
Equation 5
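The shape test of step 1.3 can be sketched as follows in Python with NumPy; the direction of the best-fit line is taken from a singular value decomposition of the centred pixel coordinates, and the acceptance ratio is an illustrative assumption.

```python
import numpy as np

def passes_shape_criterion(xs, ys, max_ratio=2.0):
    """Fit a best-fit line through an object's pixels, measure its extent along
    the line (LA) and perpendicular to it (LP), and accept the object only if
    the ratio LA/LP is within the allowed limit."""
    pts = np.column_stack([xs, ys]).astype(float)
    centred = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centred)     # vt[0]: line direction, vt[1]: perpendicular
    la = np.ptp(centred @ vt[0])          # length along the best-fit line
    lp = np.ptp(centred @ vt[1])          # width perpendicular to the line
    return lp > 0 and la / lp <= max_ratio
```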
Lens distortion characterisation
The lens distortion characterisation exploits the fact that, once the distortion has been corrected, straight lines in the real world must project onto straight lines in image space. To achieve this, the robotic arm (with the energy source attached to it) is moved along a series of straight lines, stopping at several points on each line while an image is captured. This results in N lines being captured, with M_i points on line i, i ∈ (0, N-1). These points record the original (i.e. distorted) image position of the j-th point on the i-th line.
Thereafter, any number of parameters of Brown's lens distortion model (Brown D.C. (1966), "Decentering distortion of lenses", Photogrammetric Engineering, 444-462) can be determined numerically.
Any multi-dimensional numerical optimiser can be used, although some perform poorly owing to their mathematical technique, the high correlation between the parameters, and the noise inherent in the measured input data.
In a prototype of the present invention, an enhanced version of Brown's model was used in which a radial magnification (gain) factor is applied to the radial distortion parameters, to cater for changes that may be caused by non-orthogonality of the imaging element to the optical axis or by other manufacturing defects. This does not affect the generality of this work, since it remains consistent with the commonly published formulation. Without loss of generality it is assumed, for the purposes of this description, that the undistorted position has the following form:
Equation 6
In order to measure the straightness of a series of lines, a metric is used. This metric determines the best-fit straight line through the captured points of each line (Equation 1), and then determines the root-mean-square distance of the points from their lines. The metric is:
Equation 7
The following steps are used to determine the residual distortion caused by a given set of parameters:
2.1) Scale the parameters from the gradient-insensitive space received from the optimiser:
Equation 8
2.2) Use the scaled parameters to undistort each captured point on every line, i.e.
Equation 9
2.3) Determine, using Equation 1, the best-fit straight line through the undistorted points of each line in the input data set.
2.4) Determine, using Equation 7, the root-mean-square perpendicular distance of the points from their lines.
The steps for numerically optimising the distortion characterisation parameters are given below.
The numerical optimisation of the distortion characterisation parameters is carried out by the processor 16 as follows:
3.1) Decide which parameters are to be optimised. That is: select the number of radial and tangential distortion parameters; decide whether the centre point of the image is used or whether the optimal centre of distortion is to be found; and select a radial gain function.
3.2) Select an initial value for each parameter. Three common approaches are:
a) set all parameters to 0;
b) select approximate initial values empirically;
c) specify a range for each parameter and perform a coarse global optimisation, such as a brute-force or genetic algorithm.
3.3) Scale each input parameter so that the gradient is equally sensitive to a perturbation of constant size in each dimension. This allows more accurate gradient estimates, which are needed by the local optimisers described below. With reference to Equation 8, the scaling step is given in Equation 10:
Equation 10
3.4) Numerically refine the normalised initial parameters of Equation 10 with a local optimiser; the value to be minimised is specified in Algorithm 2. Specifically, the Leapfrog and Fletcher-Reeves algorithms can be used for the minimisation in this work.
3.5) Denormalise the returned distortion characterisation using Equation 8. An illustrative sketch of steps 3.2 to 3.5 follows below.
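A minimal sketch of the normalise-optimise-denormalise loop of steps 3.2 to 3.5 is shown below in Python. SciPy's conjugate-gradient minimiser is used here in place of the Leapfrog and Fletcher-Reeves optimisers named in the text, and residual_rms is a hypothetical callable standing in for the residual-distortion computation of steps 2.1 to 2.4.

```python
import numpy as np
from scipy.optimize import minimize

def characterise_distortion(lines, initial_params, scale_factors, residual_rms):
    """Numerically refine distortion parameters.
    lines          : list of (M_i, 2) arrays of distorted pixel coordinates
    initial_params : initial guess for the distortion parameters (step 3.2)
    scale_factors  : per-parameter scale factors equalising gradient
                     sensitivity (step 3.3)
    residual_rms   : callable(params, lines) returning the RMS perpendicular
                     distance of the undistorted points from their best-fit
                     lines (steps 2.1 to 2.4), assumed defined elsewhere."""
    p0 = np.asarray(initial_params, dtype=float)
    s = np.asarray(scale_factors, dtype=float)

    def cost(normalised):
        return residual_rms(normalised * s, lines)   # de-scale, then measure

    # step 3.4: local refinement ('CG' substitutes for Leapfrog / Fletcher-Reeves)
    result = minimize(cost, p0 / s, method="CG")
    return result.x * s                              # step 3.5: denormalise
```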
Focal length determination
The focal length parameter can be determined by the processor 16 without the extrinsic parameters having to be added to the set of parameters being numerically refined.
A method that determines the focal length at the same time as the extrinsic parameters is described later. Determining the focal length independently reduces the dimensionality of the extrinsic parameter calculation, and removes the ambiguity that hampers focal length determination when a planar pattern (or a series of robotic arm movements confined to a single plane) is observed orthogonally.
A. The tetrahedron perspective problem
The calibration performed here makes use of the perspective-three-point problem, which is formally stated in the random sample consensus (RANSAC) paper and in the paper by Kneip et al. (both referenced above), and which is briefly restated here for the purposes of explanation. When a camera observes three points whose mutual distances are known, the orientation and translation of the camera relative to the three points can be derived analytically.
This is done by computing the unit direction vectors of the rays from the camera to each point (Equation 20), and then computing the angles between these direction vectors (by means of dot products).
The law of cosines, given in Equation 11, is the generalisation of Pythagoras' theorem to non-right-angled triangles.
Equation 11
Using the law of cosines, and noting that the dot product of two unit vectors equals the cosine of the angle between them, a set of simultaneous equations can be set up which expresses the three unknown lengths from the camera to each observed point in terms of the unit vectors from the camera to the points and the known distances between the points of the triangle, as follows:
Equation 12
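Although the equation bodies are not reproduced in this text, the relationships referred to as Equation 11 and Equation 12 take the standard form used in the perspective-three-point literature, restated here for clarity (d_ij is the known distance between points i and j, s_i is the unknown range from the camera to point i, and theta_ij is the angle between the unit direction vectors towards points i and j):

\[ c^2 = a^2 + b^2 - 2ab\cos C \]

\[ d_{ij}^2 = s_i^2 + s_j^2 - 2\, s_i s_j \cos\theta_{ij}, \qquad (i,j) \in \{(1,2),\,(1,3),\,(2,3)\} \]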
Equation 12 has four sets of solutions (the method of calculating them is given in the RANSAC paper and in the Kneip paper, both referenced above), although not all of the solutions are real rather than lying in the complex plane. To establish which solution is correct, a fourth point is required which lies outside the plane of the first three points and whose translation relative to the other points is known. With the addition of this fourth point, the four points form a tetrahedron. For each set of solutions, the position of the camera relative to the points of the triangle (i.e. the base of the tetrahedron) is calculated first, and the position of the fourth point relative to the camera is then calculated. The vector to the calculated position of the fourth point is compared with the vector calculated from the image coordinates of the fourth point, and the solution for which the angle between these two vectors is smallest is the correct solution.
Equation 13 summarises the whole process, as follows.
Equation 13
The focal length determination makes use of the tetrahedron problem detailed above and summarised in Equation 13. Note that Equation 13 makes use of Equation 20 below, which depends on the lens distortion parameters, assumed to have been determined using the method described in Algorithm 3, and on the focal length, which is the quantity this section sets out to determine.
The robotic arm is used to place the energy source at a number of positions within the camera's field of view, so that several tetrahedra can be created. In a typical embodiment, twenty tetrahedra are created, in groups of four. Each group of tetrahedra is centred on a particular position, and the centres of the groups together form a '+'. At each position the tetrahedra are angularly offset so that the optical axis of the camera cannot be orthogonal to the base of a tetrahedron. From the camera's point of view, the tetrahedra within a group are angled towards the upper right, lower right, lower left and upper left respectively.
To determine the focal length, the camera observes the tetrahedra from two locations. The camera is first rigidly mounted at a first location and observes all the tetrahedra; it is then mounted at a second location and again observes the robotic arm moving through the same set of tetrahedra. The camera remains stationary while the tetrahedra are observed from each location, which means that the relative displacement of the two camera locations is constant. Because the robotic arm is used to move the energy source to each successive vertex of each tetrahedron, the translations of the points of each tetrahedron are known. If the distortion parameters are known, the focal length is the only remaining unknown parameter in Equation 13.
For an assumed focal length, the position and orientation of the camera relative to the robotic arm reference frame (i.e. a coordinate system in which the translations of the energy source in Equation 13 are known) can be calculated.
The spread of the calculated camera positions is smallest at the correct focal length. This could in itself be used as a metric for finding the ideal focal length, but the sensitivity to the focal length is increased by comparing the position and orientation of the camera relative to each tetrahedron at the first location with the corresponding position and orientation at the second location. This is possible owing to the high repeatability of the robotic arm used (an ABB IRB 120 in an exemplary embodiment). The variation of this relative position is the metric to be minimised; its calculation is given in Algorithm 4 below.
The variation of the camera's 6 DOF position relative to the tetrahedron groups, as a function of focal length, is subsequently calculated by the processor as follows:
4.1) Calculate the unit vectors from the processed energy source images, the distortion parameters and the specified focal length,
as described in Equation 20:
Equation 14
4.2) For all tetrahedra, calculate the position of the camera at each point (note that the robotic arm reference frame is used as the tetrahedron axes):
Equation 15
4.3) For each tetrahedron, calculate the position of the camera at location b relative to the position of the camera at location a:
Equation 16
4.4) Iterate over the results, and extract the roll, pitch and yaw angles from the Euler rotation matrix expressing the relative orientation of the camera at the two locations.
4.5) Over all the tetrahedra, calculate the standard deviation of the x coordinate of the camera's relative translation. Calculate likewise the standard deviations of the y and z coordinates and of the roll, pitch and yaw angles just extracted.
4.6) Calculate the weighted sum of the standard deviations and use it as the metric:
Equation 17
In an exemplary embodiment, the weights used in Equation 17 are K_0 = K_1 = K_2 = 1.0 and K_3 = K_4 = K_5 = 10.0.
To find the ideal focal length, Equation 17 must be minimised over a range centred on the nominal/design focal length of the lens. Any linear search technique can be used, for example Powell's method or Newton's method applied to finding the zero crossing of the derivative. Because the metric is discontinuous and one-dimensional, the exemplary embodiment consists of a coarse-to-fine brute-force search, sketched below.
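Purely as an illustration of this one-dimensional search, the sketch below repeatedly samples a shrinking interval around the best candidate focal length; the search range, number of stages and sample count are assumed values, and metric stands for a callable such as the weighted standard-deviation sum of Equation 17.

```python
import numpy as np

def brute_force_focal_length(metric, nominal_f, half_range=5.0,
                             stages=4, samples_per_stage=21):
    """Coarse-to-fine brute-force search for the focal length that minimises
    the supplied metric.  `nominal_f` (the lens's nominal/design focal length)
    is used as the centre of the initial search interval."""
    centre, span = float(nominal_f), float(half_range)
    best_f = centre
    for _ in range(stages):
        candidates = np.linspace(centre - span, centre + span, samples_per_stage)
        scores = [metric(f) for f in candidates]
        best_f = candidates[int(np.argmin(scores))]
        # narrow the interval to one coarse step around the best candidate
        centre, span = best_f, 2.0 * span / (samples_per_stage - 1)
    return best_f
```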
Extrinsic parameter determination
Required are the 6 DOF position of the camera relative to the robotic arm, the focal length (if it has not been determined as in Part VI) and the pixel position of the optical axis intersection (also known as the principal point). To determine them, either of the metrics in Equation 18 or Equation 19 can be numerically optimised with a robust algorithm such as Fletcher-Reeves or Leapfrog (both referenced above). The first metric is slower but more accurate, because the (computationally intensive) inverse cosine function adds sensitivity for two nearly parallel vectors.
Equations 18 and 19
The metric is a comparison between two bundles of vectors. Equation 20 below shows how one bundle of vectors is generated from the images of the energy source mounted on the robotic arm, once each image has been processed as described in Part IV and undistorted using the distortion characterisation finally determined in Equation 6 and Part V. Assuming that the pixel size is known from the data sheet, the only remaining unknowns are (possibly) the focal length and the optical axis intersection. For the numerical optimisation, reasonable initial guesses are the manufacturer's nominal focal length and the centre of distortion respectively.
Equation 20
The second bundle of vectors is calculated from Equation 21. The position of the energy source at each robotic arm position is assumed known. Thereafter, the unknown spatial offset of the robotic arm reference frame, together with the (equally unknown) Euler rotation of the robotic arm reference frame relative to the camera (R_rc), can be used to determine the vectors towards each of the energy source positions. It should be noted that if a planar calibration reference fixture is used and the fixture is mounted perpendicular to the camera's optical axis, then both metrics have a singularity.
Equation 21
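By way of illustration, the two bundles can be sketched as follows in Python with NumPy: the first from the undistorted pixel coordinates of the energy source, the pixel size, the focal length and the principal point (the role of Equation 20), and the second from the robot-arm feedback positions and the currently assumed pose of the robotic arm relative to the camera (the role of Equation 21). Axis conventions and argument names are illustrative assumptions.

```python
import numpy as np

def image_based_bundle(undistorted_pixels, principal_point,
                       focal_length_mm, pixel_size_mm):
    """Unit vectors from the camera towards the energy source, derived from
    the undistorted pixel coordinates of the source in each image."""
    offsets_mm = (np.asarray(undistorted_pixels, dtype=float)
                  - principal_point) * pixel_size_mm
    vecs = np.column_stack([offsets_mm,
                            np.full(len(offsets_mm), float(focal_length_mm))])
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

def geometry_based_bundle(source_positions_arm, R_arm_to_cam, t_arm_in_cam):
    """Unit vectors from the camera towards the energy source, derived from the
    robot-arm feedback positions and the assumed camera pose (rotation
    R_arm_to_cam and translation t_arm_in_cam of the arm frame in camera axes)."""
    vecs = (np.asarray(source_positions_arm, dtype=float) @ R_arm_to_cam.T
            + t_arm_in_cam)
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
```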
The following algorithm explains how the extrinsic parameters are found, given a series of corresponding energy source positions obtained from the robotic arm and the pixel coordinates of the energy source in the camera images obtained using the method described in Algorithm 3. The distortion characterisation is assumed to have been determined as described above.
The calculation of the camera's extrinsic parameters is completed by the processor as follows:
5.1) Select initial values for the nine parameters that need to be optimised. For the focal length, the nominal/design value is a good starting point, and for the principal point, the centre of distortion is a good starting point. For the three translation parameters and the three orientation parameters, rough physical measurements can be used as starting points. Alternatively, a coarse global optimisation technique, such as sparse brute-force sampling or (as in the exemplary embodiment) a genetic algorithm, can be used to produce the initial values.
5.2) Select a gradient-insensitivity scale factor for each parameter. The values used in the exemplary embodiment are as listed below.
5.3) Divide each parameter by its corresponding scale factor to produce normalised parameters (denoted with the subscript n).
5.4) Numerically refine the scaled parameters using the metric described in Algorithm 6. Any non-linear multi-dimensional numerical local optimiser can be used; the exemplary embodiment uses Leapfrog or Fletcher-Reeves (both referenced above).
5.5) Multiply the returned scaled parameters by their corresponding scale factors to obtain the orientation of the robotic arm relative to the camera, expressed in camera axes (R_rc), and the translation of the robotic arm relative to the camera, expressed in camera axes.
5.6) Calculate the position of the camera relative to the robotic arm:
Equation 22
The extrinsic parameter refinement metric is now calculated:
6.1) Multiply each received parameter by its corresponding scale factor:
Equation 23
6.2) Compute the Euler rotation matrix of the robotic arm relative to the camera (R_rc) from the yaw, pitch and roll angles.
6.3) Concatenate the x, y and z values to form the translation of the robotic arm relative to the camera.
6.4) Using Equation 20, the focal length, the principal point, the pixel size, the distortion correction parameters of Equation 6 and the pixel positions of the series of energy source centres, calculate the image-based bundle of unit vectors. (If the focal length and the principal point are not part of the parameter set being optimised, this need only be done once.)
6.5) Using Equation 21, the series of energy source positions in the robotic arm coordinate axes, the current estimate of the orientation of the robotic arm relative to the camera (R_rc) and the current estimate of the translation of the robotic arm relative to the camera, calculate the bundle of vectors based on the camera's position relative to the robotic arm.
6.6) Measure the similarity of the two bundles of vectors using Equation 18. A sketch of such a similarity measure, and of the refinement loop around it, follows below.
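The following is a minimal, illustrative sketch of the angle-sum similarity measure (the role of Equation 18) and of a simplified refinement loop over the six pose parameters only (scale factors omitted, focal length and principal point held fixed). The Euler angle convention and the use of SciPy's Powell minimiser in place of the Fletcher-Reeves and Leapfrog optimisers named in the text are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def bundle_misalignment(image_bundle, geometry_bundle):
    """Sum of the angles between corresponding unit vectors of the two
    bundles; smaller means the bundles are more similar."""
    dots = np.clip(np.sum(image_bundle * geometry_bundle, axis=1), -1.0, 1.0)
    return float(np.sum(np.arccos(dots)))

def euler_to_matrix(yaw, pitch, roll):
    """Z-Y-X Euler rotation (an assumed convention)."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def refine_extrinsics(initial_pose, image_bundle, source_positions_arm):
    """Refine (yaw, pitch, roll, x, y, z) of the robot-arm frame relative to
    the camera so that the geometry-based bundle best matches the
    image-based bundle."""
    def cost(pose):
        R = euler_to_matrix(*pose[:3])
        vecs = np.asarray(source_positions_arm, dtype=float) @ R.T + pose[3:]
        geometry_bundle = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
        return bundle_misalignment(image_bundle, geometry_bundle)

    result = minimize(cost, np.asarray(initial_pose, dtype=float), method="Powell")
    return result.x
```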
The methods and systems described above relate to the process of calibrating a camera and are best understood through the exemplary embodiments, which are possible instantiations of the intellectual property claimed in this patent, namely:
1) Calibrating the distortion parameters of a camera by using the robotic arm to capture points along straight lines, and using a suitable lens distortion model to ensure that straight lines in real space project onto straight lines in image space. The distortion model can include, but is not limited to, Brown's lens distortion model (referenced above), with or without a radial gain function, and neural networks.
2) Determining the focal length by minimising the spread of the camera's (3 or 6 dimensional) positions, the camera positions being calculated from the observation of three or more points lying in a common plane and one or more points lying outside that plane.
3) Determining the position of the camera by studying a series of images of an energy source moved by the robotic arm, and comparing a bundle of vectors based on image analysis with a second bundle of vectors calculated from an assumed (3 or 6 dimensional) camera position.
Once the camera has been calibrated, it can be used to find the exact position of an energy source.

Claims (22)

1. A system for calibrating a camera, the system comprising:
an energy source and a camera to be calibrated, at least one of the energy source and the camera being mounted on a mechanical actuator so that it is movable relative to the other; and
a processor connected to the energy source, the mechanical actuator and the camera, the processor being programmed to:
control the mechanical actuator to move at least one of the energy source and the camera relative to the other through a plurality of discrete points on a calibration target pattern;
at each discrete point, control the camera to capture a digital image;
perform a lens distortion characterisation on each image;
determine the focal length of any lens connected to the camera; and
determine an extrinsic camera position for each image.
2. The system of claim 1, wherein the processor performs the lens distortion characterisation by:
selecting a distortion characterisation model and determining a first estimate of the model parameters to correct the observed distortion;
selecting a straightness metric to measure and quantify the alignment of points with sampled straight lines; and
using the straightness metric to numerically refine the initial parameter estimates until the straight lines in the distorted images are corrected.
3. The system of claim 1, wherein the processor determines the focal length by:
selecting an initial focal length;
using an algorithm, together with the initial focal length, the physical pixel size, the undistorted image coordinates of the energy source at each point in turn, and the precise location of the mechanical actuator at each point in turn, to calculate the position of the camera relative to each discrete point;
determining how tightly the calculated camera positions are clustered; and
numerically refining the initial focal length until the determined positions are most tightly clustered.
4. The system of claim 1, wherein the processor determines the extrinsic camera position by:
creating a bundle of vectors based on geometry;
creating a bundle of vectors based on image processing;
selecting a metric to measure the similarity of the two bundles of vectors; and
refining the estimated position of the camera relative to the energy source so that the similarity of the two bundles of vectors is maximised.
5. The system of claim 1, wherein, after a digital image has been captured, the processor further performs the following image processing steps:
determining which regions of adjacent pixels in the image have an intensity above a selected threshold value;
generating a list of these regions together with the pixel coordinates and intensity of each pixel in each region;
removing from the list regions with too few or too many constituent pixels, the acceptable number of constituent pixels being determined by the characteristics of the camera, lens and energy source;
removing from the list all regions that do not meet a shape criterion; and
determining the centre of the largest remaining region.
6. The system of claim 5, wherein the processor determines the centre either by fitting an ellipse to the region's pixels and using the centre of the ellipse, or by calculating the centre of gravity of the pixels in the region.
7. The system of claim 5, wherein the shape criterion is symmetry.
8. The system of claim 7, wherein symmetry is tested by finding the cross-section through the region for which the distance from the first pixel encountered to the last pixel encountered is longest, and comparing this distance with the distance obtained along a line perpendicular to this longest axis.
9. The system of claim 1, wherein the processor controls the mechanical actuator to move such that the series of points is divided into several groups, each group comprising at least three points lying in a common plane and at least one point lying outside that plane.
10. The system of claim 9, wherein the precise relative displacements of the points are known to the processor from the position feedback of the mechanical actuator.
11. The system of claim 9, wherein each group of points is obtained by applying a different six degree of freedom translation and rotation offset to an untransformed reference group of points, thereby producing a new group of discrete points having identical relative positions.
12. A method of calibrating a camera, the method comprising:
controlling a mechanical actuator to move at least one of an energy source and the camera relative to the other through a plurality of discrete points on a calibration target pattern;
at each discrete point, capturing a digital image with the camera;
performing a lens distortion characterisation on each image;
determining the focal length of any lens connected to the camera; and
determining an extrinsic camera position for each image.
13. The method of claim 12, wherein the lens distortion characterisation is performed by:
selecting a distortion characterisation model and determining a first estimate of the model parameters to correct the observed distortion;
selecting a straightness metric to measure and quantify the alignment of points with sampled straight lines; and
using the straightness metric to numerically refine the initial parameter estimates until the straight lines in the distorted images are corrected.
14. The method of claim 12, wherein the focal length is determined by:
selecting an initial focal length;
using an algorithm, together with the initial focal length, the physical pixel size, the undistorted image coordinates of the energy source at each point in turn, and the precise location of the mechanical actuator at each point in turn, to calculate the position of the camera relative to each discrete point;
determining how tightly the calculated camera positions are clustered; and
numerically refining the initial focal length until the determined positions are most tightly clustered.
15. The method of claim 12, wherein the extrinsic camera position is determined by:
creating a bundle of vectors based on geometry;
creating a bundle of vectors based on image processing;
selecting a metric to measure the similarity of the two bundles of vectors; and
refining the estimated position of the camera relative to the energy source so that the similarity of the two bundles of vectors is maximised.
16. The method of claim 15, wherein, after a digital image has been captured, the method further comprises the following image processing steps:
determining which regions of adjacent pixels in the image have an intensity above a selected threshold value;
generating a list of these regions together with the pixel coordinates and intensity of each pixel in each region;
removing from the list regions with too few or too many constituent pixels, the acceptable number of constituent pixels being determined by the characteristics of the camera, lens and energy source;
removing from the list all regions that do not meet a shape criterion; and
determining the centre of the largest remaining region.
17. The method of claim 16, wherein the centre is determined either by fitting an ellipse to the region's pixels and using the centre of the ellipse, or by calculating the centre of gravity of the pixels in the region.
18. The method of claim 16, wherein the shape criterion is symmetry.
19. The method of claim 18, wherein symmetry is tested by finding the cross-section through the region for which the distance from the first pixel encountered to the last pixel encountered is longest, and comparing this distance with the distance obtained along a line perpendicular to this longest axis.
20. The method of claim 12, wherein the mechanical actuator moves such that the series of points is divided into several groups, each group comprising at least three points lying in a common plane and at least one point lying outside that plane.
21. The method of claim 20, wherein the precise relative displacements of the points are known from the position feedback of the mechanical actuator.
22. The method of claim 20, wherein each group of points is obtained by applying a different six degree of freedom translation and rotation offset to an untransformed reference group of points, thereby producing a new group of discrete points having identical relative positions.
CN201280024667.4A 2012-11-29 2012-11-29 A method of calibrating a camera and a system therefor Active CN105308627B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IB2012/056820 WO2014083386A2 (en) 2012-11-29 2012-11-29 A method of calibrating a camera and a system therefor

Publications (2)

Publication Number Publication Date
CN105308627A true CN105308627A (en) 2016-02-03
CN105308627B CN105308627B (en) 2018-10-30

Family

ID=50436405

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201280024667.4A Active CN105308627B (en) 2012-11-29 2012-11-29 A kind of method and its system of calibration for cameras

Country Status (8)

Country Link
US (1) US9330463B2 (en)
EP (1) EP2926543B1 (en)
KR (1) KR101857472B1 (en)
CN (1) CN105308627B (en)
IL (1) IL228659A (en)
RU (1) RU2601421C2 (en)
WO (1) WO2014083386A2 (en)
ZA (1) ZA201306469B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107871329A (en) * 2017-12-18 2018-04-03 横琴峰云视觉技术有限公司 A kind of quick calibrating method and device at camera opticses center
CN109146978A (en) * 2018-07-25 2019-01-04 南京富锐光电科技有限公司 A kind of high speed camera image deformation calibrating installation and method
CN111627073A (en) * 2020-04-30 2020-09-04 贝壳技术有限公司 Calibration method, calibration device and storage medium based on human-computer interaction

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3076657B1 (en) 2015-04-02 2017-05-24 Axis AB Method for determination of focal length for a zoom lens
CN107532881B (en) * 2015-05-15 2020-02-14 华为技术有限公司 Measurement method and terminal
US11820025B2 (en) 2017-02-07 2023-11-21 Veo Robotics, Inc. Safe motion planning for machinery operation
ES2927177T3 (en) 2017-02-07 2022-11-03 Veo Robotics Inc Workspace safety monitoring and equipment control
CN110495163B (en) 2017-03-31 2021-12-10 松下知识产权经营株式会社 Imaging system and correction method
JP7122694B2 (en) 2017-03-31 2022-08-22 パナソニックIpマネジメント株式会社 Imaging system and calibration method
GB2572956B (en) * 2018-04-16 2021-09-08 Sony Interactive Entertainment Inc Calibration system and method
RU2699401C1 (en) * 2018-08-03 2019-09-05 Общество с ограниченной ответственностью "ДиСиКон" (ООО "ДСК") Method and system for determining calibration parameters of a ptz camera
WO2020053240A1 (en) * 2018-09-12 2020-03-19 Brainlab Ag Intra-operative determination of a focal length of a camera for medical applications
US11074720B1 (en) * 2020-02-07 2021-07-27 Aptiv Technologies Limited System and method for calibrating intrinsic parameters of a camera using optical raytracing techniques
RU2749363C1 (en) * 2020-07-22 2021-06-09 Федеральное государственное бюджетное образовательное учреждение высшего образования "Рязанский государственный радиотехнический университет имени В.Ф. Уткина" Device for automated calibration of video cameras of various spectral ranges
US20220264072A1 (en) * 2021-02-12 2022-08-18 Sony Group Corporation Auto-calibrating n-configuration volumetric camera capture array
CN113223175B (en) * 2021-05-12 2023-05-05 武汉中仪物联技术股份有限公司 Pipeline three-dimensional nonlinear model construction method and system based on real attitude angle

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5444481A (en) * 1993-01-15 1995-08-22 Sanyo Machine Works, Ltd. Method of calibrating a CCD camera
CN1454366A (en) * 2000-03-27 2003-11-05 卢克动态公司 Apparatus and method for characterizing, encoding. storing, and searching images by shape
US20100295941A1 (en) * 2009-05-21 2010-11-25 Koh Young Technology Inc. Shape measurement apparatus and method
US20110063417A1 (en) * 2009-07-17 2011-03-17 Peters Ii Richard Alan System and method for automatic calibration of stereo images

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08268393A (en) * 1995-03-29 1996-10-15 Toshiba Tesco Kk Calibration method for aircraft parking position instruction device
AU2003239171A1 (en) * 2002-01-31 2003-09-02 Braintech Canada, Inc. Method and apparatus for single camera 3d vision guided robotics
JP3735344B2 (en) * 2002-12-27 2006-01-18 オリンパス株式会社 Calibration apparatus, calibration method, and calibration program
RU2289111C2 (en) * 2004-02-16 2006-12-10 Курский государственный технический университет Method of adaptive graduation of radial distortion of optical subsystem of technical vision system
JP4298757B2 (en) * 2007-02-05 2009-07-22 ファナック株式会社 Robot mechanism calibration apparatus and method
EP2147296A1 (en) 2007-04-18 2010-01-27 Micronic Laser Systems Ab Method and apparatus for mura detection and metrology

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5444481A (en) * 1993-01-15 1995-08-22 Sanyo Machine Works, Ltd. Method of calibrating a CCD camera
CN1454366A (en) * 2000-03-27 2003-11-05 卢克动态公司 Apparatus and method for characterizing, encoding. storing, and searching images by shape
US20100295941A1 (en) * 2009-05-21 2010-11-25 Koh Young Technology Inc. Shape measurement apparatus and method
US20110063417A1 (en) * 2009-07-17 2011-03-17 Peters Ii Richard Alan System and method for automatic calibration of stereo images

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
H. Chen, D. Ye, R.S. Chen and G. Chen: "A Technique for Binocular Stereo Vision System Calibration", Journal of Physics *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107871329A (en) * 2017-12-18 2018-04-03 横琴峰云视觉技术有限公司 A kind of quick calibrating method and device at camera opticses center
CN107871329B (en) * 2017-12-18 2021-09-07 北京峰云视觉技术有限公司 Method and device for quickly calibrating optical center of camera
CN109146978A (en) * 2018-07-25 2019-01-04 南京富锐光电科技有限公司 A kind of high speed camera image deformation calibrating installation and method
CN109146978B (en) * 2018-07-25 2021-12-07 南京富锐光电科技有限公司 High-speed camera imaging distortion calibration device and method
CN111627073A (en) * 2020-04-30 2020-09-04 贝壳技术有限公司 Calibration method, calibration device and storage medium based on human-computer interaction
CN111627073B (en) * 2020-04-30 2023-10-24 贝壳技术有限公司 Calibration method, calibration device and storage medium based on man-machine interaction

Also Published As

Publication number Publication date
EP2926543B1 (en) 2017-11-22
RU2013141224A (en) 2015-03-20
ZA201306469B (en) 2015-09-30
KR101857472B1 (en) 2018-05-14
IL228659A0 (en) 2014-03-31
WO2014083386A2 (en) 2014-06-05
EP2926543A4 (en) 2016-09-14
IL228659A (en) 2017-12-31
US20150287196A1 (en) 2015-10-08
US9330463B2 (en) 2016-05-03
CN105308627B (en) 2018-10-30
KR20150101484A (en) 2015-09-04
RU2601421C2 (en) 2016-11-10
EP2926543A2 (en) 2015-10-07
WO2014083386A3 (en) 2015-08-06

Similar Documents

Publication Publication Date Title
CN105308627A (en) A method of calibrating a camera and a system therefor
CN105069743B (en) Detector splices the method for real time image registration
CN101887585B (en) Method for calibrating camera based on non-coplanar characteristic point
Hui et al. Line-scan camera calibration in close-range photogrammetry
CN109859272A (en) A kind of auto-focusing binocular camera scaling method and device
CN102184545B (en) Single-chart self-calibration method of catadioptric omnibearing camera mirror plane pose
CN109523595A (en) A kind of architectural engineering straight line corner angle spacing vision measuring method
Muñoz et al. Environmental applications of camera images calibrated by means of the Levenberg–Marquardt method
Kochi et al. A 3D shape-measuring system for assessing strawberry fruits
CN108898629B (en) Projection coding method for enhancing aerial luggage surface texture in three-dimensional modeling
Hui et al. Determination of line scan camera parameters via the direct linear transformation
CN109741389A (en) One kind being based on the matched sectional perspective matching process of region base
CN114092388A (en) Obstacle detection method based on monocular camera and odometer
Tang et al. Algorithm of object localization applied on high-voltage power transmission lines based on line stereo matching
Koljonen et al. Searching strain field parameters by genetic algorithms
Fiedler et al. A Novel Method for Digitalisation of Test Fields by Laser Scanning
Osgood et al. Minimisation of alignment error between a camera and a laser range finder using Nelder-Mead simplex direct search
Chen et al. A Method of Calibration and Measuring Focal Length for Pan-Tilt-Zoom Camera
CN114184086B (en) Photoelectric tracking image alignment method for anti-sniper robot
De Villiers et al. Effects of lens distortion calibration patterns on the accuracy of monocular 3D measurements
Yang et al. Calibrating a Robot Camera.
JP5355377B2 (en) Image pattern matching apparatus and method
Koljonen Computer vision and optimization methods applied to the measurements of in-plane deformations
CN116320481A (en) Stereoscopic DIC method and system with camera motion correction and reference frame construction
Meng et al. Analysis of image motion on autofocus precision for aerial cameras

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant