WO2004092826A1 - Method and system for obtaining optical parameters of camera

Info

Publication number: WO2004092826A1
Application number: PCT/IB2004/001109
Authority: WIPO (PCT)
Other languages: French (fr)
Inventors: Gwo-Jen Jan, Chuang-Jan Chang
Original assignee: Appro Technology Inc.
Application filed by Appro Technology Inc.
Publication of WO2004092826A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 17/00: Diagnosis, testing or measuring for television systems or their details
    • H04N 17/002: Diagnosis, testing or measuring for television cameras
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01M: TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M 11/00: Testing of optical apparatus; testing structures by optical methods not otherwise provided for
    • G01M 11/02: Testing optical properties
    • G01M 11/0221: Testing optical properties by determining the optical axis or position of lenses


Abstract

The present invention is a method and system for analyzing the mapping mechanism of a camera and accordingly obtaining its optical parameters. A specific mapping characteristic, that one sight ray exclusively corresponds to one imaged point, is utilized; with reference to a particular imaged point, absolute spatial coordinates conforming to this characteristic are searched in order to analyze the mapping mechanism of the camera. A planar target with a physical central-symmetric pattern (PCP) is employed to locate the principal point and absolutely orient the optical axis, with the aid of the centers of both the PCP and its corresponding image with a similar geometric feature, termed the imaged central-symmetric pattern (ICP). The relative distance between the camera and the target is then actively adjusted along the optical axis so that the mapping traces cast from different calibration marks on the target overlap on the image plane. Based on this phenomenon, the sight ray can be analyzed through the overlapping mechanism, and a methodology is developed thereby to obtain the optical parameters of the camera. Because the invention employs solely the measured data to deduce the parameters, no postulation of a given mapping mechanism is necessary; it is therefore most suitable for application to cameras with an unknown optical model. Indeed, the bigger the deformation of the image, the higher the sensitivity of the invention in its operations. Hence, the applications of wide-angle cameras can be widely expanded and, furthermore, the invention can evaluate or determine the specifications of a camera. The operating procedure of the invention is simple and low-cost, giving it major commercial and industrial practicability.

Description

METHOD AND SYSTEM FOR OBTAINING OPTICAL PARAMETERS OF
CAMERA
BACKGROUND OF THE INVENTION
Field of Invention
The present invention relates to a method and system for obtaining the optical
parameters of a camera. Particularly, it is a method and system for analyzing a camera
whose lens diverges severely from the rectilinear projection mechanism, such as a
fisheye lens, to obtain the optical parameters comprising the principal point, the
viewpoint, the focal length constant and the projection function.
Related Art
The camera systems in the field of artificial vision have preferred using lenses with a
narrow field of view (FOV) in order to obtain images approaching an ideal perspective
projection mechanism for precise measurement and easy image processes. The pinhole
model is usually a basis to deduce the camera's parameters. The obtained intrinsic and
extrinsic parameters can be employed in visual applications in the quest for improved
precision, for instance in 3-D cubical inference, stereoscopy, automatic optical inspection,
etc. As for the image deformation, a polynomial function is used to describe the deviation
between original images and the ideal model or to conduct the job of calibration. These
applications, however, currently have a common limitation of narrow visual angles and
an insufficient depth of field. A fisheye camera (also termed a fisheye image sensor) mounted with a fisheye lens,
which focuses deeper and wider, can capture a clear image with a FOV of more than
180 degrees, but a severe barrel distortion develops. If the application is a surveillance
system and the request is only to monitor the movement of people or things, a partial
distortion in images can be tolerated. If the purpose is to take pictures for virtual reality
(VR), it is also acceptable that images "look like" normal ones. However, if the purpose
involves the measurement of an object's physical size or 3-D image metering, it must be
admitted that techniques for precisely obtaining the optical parameters of the fisheye camera
are still absent.
Because the optical geometry of the fisheye camera is far from the rectilinear
perspective projection model, the optical parameters are hard to deduce precisely by
those methods employed in the related art for normal cameras. Therefore, technologies
developed for the usual visual disciplines have not resulted in the capability to process
images of the fisheye camera (simplified as "fisheye images" hereinafter).
R. Y. Tsai ("A versatile camera calibration technique for high-accuracy
3D machine vision metrology using off-the-shelf TV cameras and lenses", IEEE Journal of
Robotics and Automation, Vol. RA-3, No. 4, Aug. 1987, pp. 323-344) brought up a radial
alignment constraint in the radial-symmetric projection mechanism to derive the
parameters of the camera. He employed five non-coplanar points of known absolute
coordinates in viewed space and the positions of their corresponding images in the image
plane, referring to the radial alignment of the optical axis constraining the vectors of the
image distance and absolute distance to deduce the coefficients of a rotation matrix and
translation matrix, which stand for the orientation, displacement and viewpoint of the camera. The focal length is obtained through the hypothesis of the rectilinear projection
geometry, but a non-linear function is taken to describe the distortion mechanism of the
image. Its chief merit is the ability to obtain the parameters of the camera with only
simple experimental devices. In cameras with little distortion, the results from Tsai's
model are quite accurate. But its demonstration is also based on the hypothesis of the
rectilinear projection; it will involve a large error under a severely nonlinear projection
system like the fisheye lens, and the results will be dependent on the arrangement of
calibration marks. Hence, Tsai's model cannot be directly applied in the case of wide-angle
cameras, such as ones mounted with the fisheye lens.
However, if an artificial vision system has the advantages of wide-angle views, clear
images and the capability of handling a cubical projection mechanism, it will be
substantially more functional and competitive with a wider application field. Moreover,
the excellent advantages of a nearly infinite view depth, simple structure and tiny volume
are strengths of the fisheye lens with which other kinds of lenses can scarcely compare.
However, the severe distortion is a vital disadvantage in some applications, so the
issue of identifying the features and irregular mapping mechanism of the fisheye lens and
accordingly developing the related methodology is extremely important. Further, the
applications depending on the accuracy of image calibration, for example in the
stereoscope or autonomous robotic vision, are difficult to accurately handle without the
precise optical parameters of the fisheye camera.
Owing to the poor accuracy of the optical parameters of a camera deduced based on
the rectilinear perspective projection model, some alternative solutions have been
advanced for handling the transformation of the fisheye image. Among these alternative approaches, an image-based algorithm aims at a specific camera which mounts a specific
lens conforming to a specific projection mechanism so as to deduce the optical parameters
based simply on the images displayed. With reference to FIGs. 1 A and IB, wherein FIG.
1A expresses the imageable area 1 of a fisheye image in a framed oval/circular region and
FIG. IB is the hemispherical spatial projecting geometry corresponding to FIG. 1 A, both
figures note the zenithal distance of α, which is the angle defined by an incident ray and
the optical axis 21, and the azimuthal distance of β, which is the angular vector in the
polar coordinate system whose origin is set at the principal point. Quoting the positioning
concept of a globe, β is the angle referring to the mapping domain 13' of the prime
meridian 13 on the equatorial plane in the polar coordinate system, shown in FIG. 1B.
Thus, π/2-α is regarded as latitude and β as longitude. Therefore, if several imaged points
are situated along the same radius of the imageable area 1, their corresponding spatial
incident rays would be on the same meridional plane (like the sector determined by the arc
C'E'G' and two spherical radii); namely, their azimuthal distances (β) are invariant, such
as points D, E, F, and G in FIG. 1A corresponding to points D', E', F', and G' in FIG. 1B.
(Note: the phenomenon utilized by the image-based algorithm is not only relevant to the
fisheye lens; actually, it is the radial alignment constraint in Tsai's model on condition of
using a rectilinear perspective projection lens.)
In addition to the specific projection mechanism, the image-based algorithm makes
several basic postulates: first, the imageable area 1 of the fisheye image is an analyzable
oval or circle, and the intersection of the major axis 11 and the minor axis 12 (or two
diameters instead) situates the principal point, which is cast by the optical axis 21 shown
in FIG. 1B; secondly, the boundary of the image is projected by the light rays of α=π/2; third, α and p are linearly related, wherein p, termed a principal distance, is the length
between an imaged point (such as point E) and the principal point (point C). For example,
the value of α at point E is supposed to be π/4 since it is located in the middle of the radius
of the imageable area 1 and, therefore, the sight ray corresponding to point E is destined to
pass through point E' in the hemispherical sight space, as shown in FIG. 1B. The same
occurs with points C and C', points D and D', points F and F', and so on. An imaged point
on the image plane can be denoted as (u, v) in the Cartesian coordinate system or as (p, β)
in the polar coordinate system, both taking the principal point as their origin; the vector
coordinate of its corresponding sight ray in space is denoted as (α, β).
Although the mapping mechanism was not really put on discussion in the image-
based algorithm, it is actually the equidistant projection (simplified as the EDP
hereinafter) with the postulation of a 180-degree visual angle (simplified totally as the
EDPπ hereinafter). The EDP's projection function is p = kα, wherein k is a constant and,
actually, the focal length constant f. In order to fit the postulations described above, a
qualified camera body mounted with a qualified lens is utterly necessary. Generally it is a
special combination with no room for flexibility. Based on the EDPπ postulation, the
focal length constant (f) can be obtained by dividing the radius of the imageable area 1
by π/2; the spatial angle (α, β) of the corresponding incident ray can also be analyzed
from the planar coordinates (u, v) in the imageable area 1.
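As a minimal sketch of this image-based recipe (the function and variable names are illustrative, not taken from the patent), the EDPπ postulates let the spatial direction (α, β) be recovered from a pixel (u, v) once the principal point and the radius of the imageable area are known:

```python
import math

def edp_pi_direction(u, v, uc, vc, radius_px):
    """Map a pixel (u, v) to a spatial direction (alpha, beta) under the
    EDP-pi postulates: f = R / (pi/2) and p = f * alpha.
    (uc, vc) is the principal point; radius_px is the radius of the
    imageable area in pixels. Illustrative names, not the patent's API."""
    f = radius_px / (math.pi / 2)      # focal length constant from the image border
    p = math.hypot(u - uc, v - vc)     # principal distance (image height)
    alpha = p / f                      # zenithal distance, linear in p by postulate
    beta = math.atan2(v - vc, u - uc)  # azimuthal distance
    return alpha, beta
```

Note that every weakness discussed below (an uncertain border radius, a FOV not equal to π, a lens that is not an EDP lens) enters through this single formula.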
Therefore, in light of the known skills of image-analysis, an "ideal EDPπ image" can
be transformed into the image remapped by the rectilinear perspective projection referring
to any projection line as a datum axis. This image-based algorithm is easy and no extra
calibrating object is needed. The US patent 5,185,667 accordingly developed a method to transform fisheye
images conforming to the rectilinear perspective projection model along the projection
mechanism shown in FIGs. 1A and 1B so as to monitor a hemispherical field of view (180
degrees by 360 degrees). This patented technology has been applied in endoscopy,
surveillance and remote control as disclosed in US patents 5,313,306, 5,359,363 and
5,384,588. However, it is worth noting that these serial US patents did not concretely
demonstrate a general fitness toward average fisheye lenses. Thus, the image-transformed
accuracy of the patented technology is a big question when no specific fisheye lens is used.
Currently, in practice, system application manufacturers ask for limited-specification
fisheye lenses combined with particular camera bodies and provide exclusive software;
only then does the patented technology (US patent 5,185,667) have practical and
commercial value.
Major parts of the image-based postulates mentioned, however, are unrealistic
because many essential factors or variations have not been taken into consideration. First,
the EDPπ might just be a special case among possible projection geometric models (note:
however, it is the most familiar projection model of the fisheye lens). Referring to FIG. 2,
three possible and typical projection curves of the fisheye lens are shown, implying
moreover that the natural projection mechanism of the fisheye lens might be the following:
the stereographic projection (or SGP, whose projection function is p = 2f·tan(α/2)) and
the orthographic projection (or OGP, whose projection function is p = f·sin(α)).
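A short numeric sketch (our own illustration, with an arbitrary f = 1) makes the point of FIG. 2 concrete: the three classic models agree near the optical axis and drift apart as the zenithal distance grows:

```python
import math

def image_height(alpha, f, model):
    """Image height p for zenithal distance alpha under the three classic
    fisheye projection models named above (comparison sketch only)."""
    if model == "EDP":   # equidistant: p = f * alpha
        return f * alpha
    if model == "SGP":   # stereographic: p = 2f * tan(alpha / 2)
        return 2 * f * math.tan(alpha / 2)
    if model == "OGP":   # orthographic: p = f * sin(alpha)
        return f * math.sin(alpha)
    raise ValueError(model)

for deg in (15, 45, 75):
    a = math.radians(deg)
    print(deg, [round(image_height(a, 1.0, m), 3) for m in ("EDP", "SGP", "OGP")])
```

At 15 degrees the three heights differ by under two percent; at 75 degrees they differ by tens of percent, which is why locking every projection geometry onto the EDPπ distorts the transformed images.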
Moreover, the coverage of the FOV is not constantly equal to π, perhaps being either
larger or smaller. From the curves in FIG. 2, the differences respectively between the
three projection models are obviously increasing along the growing zenithal distances (α). Thus, distortions will develop if all projection geometries are locked on the EDPπ to
transform images accordingly. Secondly, the FOV of π is hard to evaluate since the shape
of the imageable area 1 is always presented as a circle, irrespective of the angular scale of
the FOV. A third factor concerns the errors caused in locating the image border even
though the FOV is certainly equal to π. The radial decay caused by the radiometric
response is an unavoidable phenomenon in a lens, especially when dealing with a larger
FOV. This property will induce a radial decay on the image intensity, occurring especially
with some simple lenses, so that the actual boundary is extremely hard to set under that
bordering effect. Perhaps no real border feature even exists under the consideration of the
diffraction phenomenon of light. Finally, if the imageable area 1 of a camera is larger than
the sensitive zone of a CCD, only parts of the "boundary" of an image will show; hence
the image transformation cannot be effectively executed. Consequently, the image-based
algorithm depends extensively on the selected devices irrespective of whether the lens
conforms to the ideal EDPπ postulation or not. Otherwise, the method will result in
poor accuracy, modeling errors, a doubtful imageable area 1 extracted, an unstable
principal point situated, and practical limitations; these problems would keep the methods
in the related art from accurately solving the extrinsic and intrinsic parameters of the
camera in the interests of developing computer vision systems, not to mention the
viewpoint, which represents a camera's absolute position and plays a key role in 3-D
metering.
Furthermore, Margaret M. Fleck [Perspective Projection: The Wrong Image Model,
1994] has demonstrated that the projection mechanisms of lenses hardly fit a single ideal
model across the whole angular range in practice; otherwise, optics engineers could develop lenses with special projection functions, such as the fovea lens, in light of the
different requirements in applications. Thus, to force the postulation of the EDP onto all
fisheye cameras is an extreme imposition.
On the other hand, although a lens is usually designed with a specific projective
mechanism, the refractivity of light limited by the properties of the material in question
keeps the lens from a perfect design. Moreover, following manufacture it is difficult to
verify whether lenses match their expected specifications. Further, when a fisheye lens
is installed in a real system (such as a camera), its focal length constant may vary
accordingly (depending on the precision of the mechanical installation). Consequently, a
simple and common technology that can verify the optical features of
fabricated devices, so as to guarantee the quality of the
products at their sale, would significantly increase their value.
The Gaussian optics model is a convenient means for describing the imaging logic of
an optical system. It is usually the reference model in tracing a camera's errors. The
model regards an optical system (such as a camera) as a black box whose features have
been defined by several cardinal points. That is to say, the complicated projection
geometry is ignored and the projective behavior of light rays is logically analyzed directly
with the aid of the cardinal points. Referring to FIG. 3, the cardinal points defined by the
Gaussian optics model comprise the first and second focal points F1 and F2, the first and
second principal points P1 and P2, and the first and second nodal points. If the incident
medium of the optical system is air, the nodal points are regarded as the principal points;
at the same time, the first principal point P1 is also termed the front nodal point (FNP),
and the second principal point P2 is called the back nodal point (BNP). Otherwise, two
directions of light rays being projected into the optical system. The intersections
determined by the two principal planes 141 and 142 and the optical axis 224 are simply
the two principal points P1 and P2. In accordance with the cardinal points F1, F2, P1, and
P2 and the principal planes 141 and 142, the infinite light rays passing through the first
focal point F1 will turn to parallel the optical axis 224 at the first principal plane 141, like
the lines OC and CO'; conversely, if light rays are projected into the optical system in
parallel directions, they will turn to pass the second focal point F2 when meeting the
second principal plane 142, like the lines OB and BO'. A characteristic of this mapping
mechanism is that a light ray from the object point O projected toward the first principal
point P1 (i.e. the line OP1) will turn in the direction along the optical axis 224 after
passing through P1, and turn again in the direction parallel to the line OP1 after passing
through P2 (i.e. the line P2O') until it is mapped on the sensitive element to form the
imaged point O'. In other words, the incident ray passing through P1 is parallel to the
spatial traces of the light ray passing through P2. In the case of a single lens, the
phenomenon appears only in the paraxial zone of a thin lens. However, the Gaussian
optics model is an ideal imaging logic which average cameras seek to emulate. A wide-angle
lens has to attain this imaging mechanism and is thereby quite different from the fisheye
lens.
Regarding a lens such as the fisheye lens, specialists skilled in the art hold that there is no
"single viewpoint"; this is correct from the standpoint of Gaussian optics. However, if the limits
of Gaussian optics could be overcome, the inherent mapping mechanism of the fisheye
lens might be analyzable so that the "single viewpoint" could be logically positioned and the optical parameters can be deduced thereby. At this point, not only the reliability of
analyzing fisheye images is raised but the applications can also be largely expanded in the
field of 3-D metering and so forth. Thus, the present invention will carefully look into
these issues and free the procedures of camera-parameterization from the ideal image-
based postulations, such as the EDPπ and the image boundary, so as to precisely obtain
the optical parameters of the fisheye camera.
SUMMARY OF THE INVENTION
In view of the foregoing, it is an object of this invention to provide a method and
system aiming at cameras with lenses of the non-linear perspective projection mechanism,
in order to analyze the natural optical projection properties of an optical system.
Another object of this invention is to provide a method and system for obtaining
optical parameters (comprising the viewpoint, the orientation of the optical axis, the focal
length constant and the projection mechanism of the camera) based simply on the natural
optical projection phenomena of a lens so as to extend the applications of the fisheye
camera to the fields of stereographic measuring and 3-D metering.
Another object of this invention is to provide a method and system for analyzing
image distortion according to the coordinates on the image plane, which can directly
quantify image distortion by the zenithal distances (α) deduced from the coordinates of
imaged points.
Another object of this invention is to provide a method and system capable of
examining a lens or the spatial mapping mechanism of the camera mounted with the lens
to act as a basis for determining the specifications or testing the qualities of products. In accordance with the objects described above, the present invention refers to the
degree of distortion of an image projected from a target with a physical central-symmetric
pattern (PCP) so as to adjust the absolute coordinate of a camera in order to make the
features of the image similar to the PCP; namely, an imaged central-symmetric pattern
(ICP) appears on the image plane. Next, an object-to-image conjugate coordinate array,
composed of the spatial absolute coordinates of calibration marks on the target and the
corresponding image coordinates on the image plane, is sampled and utilized to describe
the projecting behavior between object space and the image plane. Accordingly, the
projection relationship between the object space (i.e. the sight rays) and the image (i.e. the
imaged coordinates) is deduced. Thus, the optical parameters of the camera system can be
obtained.
The present invention does not borrow any assumptions from existent ideal
projection functions. Deducing the optical projection mechanism and
quantifying the optical parameters of a camera solely from
the measured projection relationships between the given coordinates of the
calibration marks and their corresponding imaged coordinates is a
significant characteristic of the present invention. The invention makes a breakthrough
from the limitations and presumptions that those skilled in the related art have strongly
believed in. The invention is suitable for application to the fisheye camera or the kind with
special projection functions, and can even serve as reverse engineering to analyze camera
devices having unknown projection models.
Owing to the capability of the invention to precisely deduce the projection function
of the camera, its inverse projection function can calibrate the image distortions and further be applied in the fields of stereology and 3-D metering.
Further scope of applicability of the present invention will become apparent from the
detailed description given hereinafter. However, it should be understood that the detailed
description and specific examples, while indicating preferred embodiments of the
invention, are given by way of illustration only, since various changes and modifications
within the spirit and scope of the invention will become apparent to those skilled in the art
from this detailed description.
BRIEF DESCRIPTION OF THE DRAWINGS
The present invention will become more fully understood from the detailed
description given herein below by illustration only, which illustrations are not limitative
of the present invention, and wherein:
FIGs. 1A and 1B show the schematic view of a calibration method based on an
image-based algorithm aiming at the EDPπ of the fisheye images in the related art;
FIG. 2 sketches three typical projection functions of the fisheye lens;
FIG. 3 shows the schematic view of the mapping optical path of the Gaussian optics
model;
FIG. 4 cubically shows the 3-D optical paths between the PCP and the fisheye camera
in the invention;
FIG. 5 shows one embodiment of the PCP which is an octagonal symmetric pattern
defined by three concentric circles;
FIG. 6 shows the first embodiment of the theoretical model disclosed in the invention
where the specific sight ray is determined by three different calibration marks while the target is moved to three different positions;
FIG. 7 shows the first embodiment of the measuring system disclosed in the
invention and the related coordinate systems referred to;
FIG. 8 shows the second embodiment of the measuring system disclosed in the
invention and the related coordinate systems referred to;
FIG. 9 shows the second embodiment of the theoretical model disclosed in the
invention which takes the solid center of the PCP as the origin of the absolute coordinate
system and moves the camera to equivalently deduce the specific sight ray;
FIG. 10 statistically shows the moving traces of the camera in the platform
coordinate system measured in an experiment as the ICP is attained, the above traces
representing as well the spatial traces of the optical axis in the platform coordinate
system;
FIG. 11 statistically shows the pixel coordinates of the imaged center measured in the
experiment;
FIG. 12A statistically shows the profiles of the average image heights (p) defined by
the three concentric circles varied by the different locations (referring to FIG. 10) of the
camera in the experiment;
FIG. 12B shows the varying ranges and the overlapping situation of the average
image heights (p) corresponding to FIG. 12A;
FIG. 13 shows the closely overlapping profiles of the zenithal distances (α) to the
image heights (p) as the viewpoint is exactly set in the experiment;
FIG. 14 shows the divergent profiles of the zenithal distances (α) to the image heights
(p) when the location of the viewpoint is shifted from its exact position; FIG. 15 shows the closely overlapping profiles of the zenithal focal length (zFL) to
the image heights (p) as the viewpoint is exactly set in the experiment;
FIG. 16 shows the divergent profiles of the zenithal focal length (zFL) to the image
heights (p) when the location of the viewpoint is shifted from its exact position; and
FIG. 17 shows the divergent length composed of multiple traces of the zFL, which is
used to evaluate the overlapping degree of the profiles, an example taken from FIGs. 15
and 16.
DETAILED DESCRIPTION OF THE INVENTION
Several coordinate systems are defined in advance of the detailed technical disclosure,
for convenience of analysis:
1. The absolute coordinate system of W(X,Y,Z) places its origin at the geometric
center of a target, and defines the direction perpendicularly away from the target
as the positive of the Z-axis.
2. The image-plane coordinate system of C'(x,y) or P'(p, β) represents the image
plane of the camera in the Cartesian coordinate system or the polar coordinate
system in which its origin is set at the principal point.
3. The pixel coordinate system of I(u,v) represents images which can be directly
observed on a computer screen with a unit of "pixel". The principal point is
imaged at the coordinate denoted as I(uc,vc) on the computer screen. Basically,
the dimensions on the image plane, C'(x', y') or P'(p', β'), can correspond to the
pixel coordinate system of I(u,v). Therefore, the Cartesian coordinate system of
C(u,v) or the polar coordinate system of P(p, β) can represent as well the pixel coordinate system of I(u,v) in which I(uc, vc) is the origin.
4. The camera outer-space coordinate system of N(α,β,h) describes the geometry of
the sight rays in the field of view (FOV) of the camera.
5. The camera inner-space coordinate system of S(α', β', / ) describes the
projection geometry inside the camera.
The serial numbers of sampled points will be identified at the subscript positions and
the sampling sequence is indicated by means of an array. For example, Wn(a,b,c)[k]
expresses that calibration mark n is located at (a, b, c) in the absolute coordinate
system during the kth test. The other coordinates are denoted by a similar rule. Part of the
denotation could be omitted in the interests of fluent comprehension while readability is
not adversely affected. The coordinate denotations will be quoted in the invention
hereinafter.
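To make the denotation concrete, a minimal data-structure sketch (our own, purely illustrative) of one entry of the object-to-image conjugate-coordinate array sampled later in the procedure might look like this:

```python
from dataclasses import dataclass

@dataclass
class Sample:
    """One entry of the object-to-image conjugate-coordinate array:
    calibration mark n observed at test k, following the W_n[k] / I_n[k]
    denotation above. Purely illustrative, not the patent's software."""
    n: int    # calibration-mark number, e.g. 38 or 313
    k: int    # index of the kth test in the sampling sequence
    W: tuple  # absolute coordinate W_n(X, Y, Z)[k]
    I: tuple  # pixel coordinate I_n(u, v)[k]

samples = []  # filled as images are captured and features are extracted
```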
The fisheye lens diverges severely from the Gaussian optics model and is a non-linear
perspective projection lens, which means its projective behavior cannot be
interpreted by the well-known pinhole model following the rectilinear perspective
projection mechanism. Compared with other lenses, an image captured by the fisheye
lens (simplified as the fisheye image) is possessed of a severe barrel distortion. The
fisheye lens is frequently employed to create dramatic or extraordinary effects but it is
found lacking in the accurate mapping of the original dimensions and features of objects.
However, the mapping still follows a couple of rules. Rule 1: the
distortions throughout the fisheye image are distributed with a radial
symmetry whose point of origin is termed a principal point, and its optical projection
geometry in space symmetrically encircles the optical axis of the camera. Rule 2: all object points located on the same specific sight ray in object space are totally projected
onto a specific imaged point on the image plane. The projection mechanism in space can
be postulated as follows: incident rays (including active or inactive reflective light rays)
cast from an object in the FOV will logically converge on a unique spatial optical center
(termed the viewpoint, simplified as VP) and then divergently map onto the image plane
in light of a projection function. The rules and the postulation described above are well
known to specialists skilled in the related art of optical engineering.
The present invention designs a particular target according to the characteristic of the
radial symmetry of distortion across a fisheye image (rule 1) to locate the principal point
on the image plane and position the optical axis in space. Then, the specific projection
relationship between the sight ray and the imaged point (rule 2) is analyzed to obtain the
absolute coordinate of the VP on the optical axis as well as the absolute coordinate of the
sight ray in space; the focal length constant is deduced accordingly and the projection
model of the camera is induced consequently. The present invention needs no assumption
of any existent projection model, such as the equidistant projection (EDP), the
stereographic projection (SGP) or the orthographic projection (OGP); any camera
possessed of the mapping properties of fisheye images, or of similar ones, is
analyzable in the invention.
The spatial projecting symmetry of rule 1 is illustrated in FIG. 4, which shows the
optical projection paths between the fisheye camera and the planar target 30 placed
in the FOV thereof, wherein the fisheye lens 221 and the image plane 225 stand
equivalently for the fisheye camera. From the viewpoint of geometry, a planar drawing capable of
representing an axis-symmetric geometrical arrangement in space can form a center-symmetric image inside the camera. Therefore, referring to FIG. 5, a planar target 30 with
a physical central-symmetric pattern (PCP) 31 thereon is placed in the FOV of the camera.
The PCP 31 is composed of a central mark 38 located at the geometric center thereof and
a plurality of calibration marks 311-318, 321-328, 331-338 defined by a plurality of
center-symmetric geometric figures. The relative position of the target 30 and the camera
is adjusted in order to obtain an imaged central-symmetric pattern (ICP) 226 on the image
plane 225. Obtaining the ICP 226 means the optical axis 224 perpendicularly penetrates
both the principal point 227 on the image plane 225 and the central mark 38 of the PCP 31.
The position of the optical axis 224 in space can be determined absolutely by referring to
the target 30 because its absolute position is man-made and given in advance. The feature
coordinate of the blob imaged by the central mark 38 (or the center of gravity of the
imaged blob) is regarded as the principal point 227 on the image plane 225.
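As a small sketch of this blob-extraction step (assuming the central mark's blob has already been segmented to a boolean mask; the names are ours, not the patent's), the center of gravity can be computed as:

```python
import numpy as np

def blob_centroid(mask):
    """Center of gravity of an imaged blob: the feature coordinate taken
    as the principal point when the blob comes from the central mark 38.
    `mask` is a boolean image array (a segmentation we assume is given)."""
    vs, us = np.nonzero(mask)        # row indices map to v, column indices to u
    return us.mean(), vs.mean()      # (uc, vc) in the pixel coordinate system
```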
If the projection behavior of a camera conforms to any known circular-function
relationship (note: meaning the product of a circular function and a focal length), the
incident rays cast from the PCP 31 will certainly and essentially achieve a collimating
mechanism; namely, referring to FIG.4 again, all incident rays will converge at a logical
optical center of the fisheye lens 221, termed the front cardinal point (FCP) 222, and
divergently refract onto the image plane 225 (or the optical sensor) from the back cardinal
point (BCP) 223 according to the projection function so that two light cones whose
zeniths are separately at the FCP 222 and BCP 223 are formed. The FCP 222 and BCP
223 are two reference points for the two distinct spaces delimiting the projecting behaviors
inside and outside the fisheye camera. Sight rays refer to the FCP 222 and the image plane
225 refers to the BCP 223 while analyzing the projection mechanism of the fisheye camera. The distance between the two cardinal points 222 and 223 is arbitrary because it
is not a parameter of the camera system. The present invention therefore merges the two
cardinal points 222 and 223 at a single viewpoint (VP) or picks the FCP 222 on behalf of
the VP in order to simplify the imaging logic. Such a technique of expression is often seen
in volumes on optics in discussing lenses.
The equivalent mapping mechanism of rule 2 is shown in FIG. 6. As far as an optical
model is concerned, the different object points on the sight ray 80 (such as the three
calibration marks 313, 323, and 333 in FIG. 5 whose absolute coordinates are W313[p],
W323[q], and W333[r] respectively while the target 30 passes through the three
different locations of p, q, and r) can hardly be told apart simply by a single image
message (such as the imaged point 91) on the image plane 225. Another aspect is that if at
least two different object points simultaneously map at the same imaged blob, the sight
ray 80 defined by these object points can be determined by the spatial absolute
coordinates thereof. The intersection of the sight ray 80 and the optical axis 224 situates
the FCP 222 or, instead, the VP.
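A geometric sketch of this idea (our own illustration; in practice a least-squares fit over many marks would replace the closest-point shortcut used here): two object points known to map onto the same imaged blob define the sight ray, and its meeting with the optical axis, taken as the Z-axis of the absolute system as in the second embodiment, locates the FCP:

```python
import numpy as np

def fcp_on_axis(P1, P2):
    """Locate the FCP from two object points P1, P2 (absolute coordinates)
    that map onto the same imaged blob. The sight ray they define is
    intersected with the optical axis (assumed here to be the Z-axis);
    since measured rays never cross the axis exactly, the point of the
    ray closest to the axis is used. Illustrative sketch only."""
    P1, P2 = np.asarray(P1, float), np.asarray(P2, float)
    d = P2 - P1                          # ray direction
    denom = d[0] ** 2 + d[1] ** 2
    if denom == 0.0:
        raise ValueError("ray runs along the optical axis")
    t = -(P1[0] * d[0] + P1[1] * d[1]) / denom
    nearest = P1 + t * d                 # point of the ray nearest the axis
    return nearest[2]                    # Z-coordinate of the FCP
```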
The projection mechanism of any sight ray 80 (also called the incident ray) of the
fisheye lens can be explained by the Gaussian optics model. The sight ray 80 is assumed
to be refracted at the FCP 222 (that is the FNP 222' in the Gaussian optics model,
referring to FIG. 3) and then mapped on the image plane 225 to form an imaged point 91
whose coordinate is C'(u,v) after the sight ray 80 meets the optical axis 224; therefore a
track parallel to the sight ray 80 from the imaged point 91 can be inferred inversely to
obtain the corresponding BNP 223'. If the projection behavior of the sight ray 80
conforms to the Gaussian optics model, the BNP 223' matches the BCP 223, and the focal length constant (/) can be derived by an object distance, an object height and an image
height only by employing simple mathematic geometry. Only the lenses following the
Gaussian optics model can obtain the same focal length as a constant wherever the
imaged point 91 is located.
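A hedged sketch of that simple geometry (our formula, valid under the stated Gaussian assumptions, where p = f·tan(α) and tan(α) = object height / object distance):

```python
def gaussian_focal_length(object_distance, object_height, image_height):
    """Focal length constant from one conjugate pair under the Gaussian
    (rectilinear) model: p = f * tan(alpha) with tan(alpha) = H / D,
    hence f = p * D / H. For an ideal Gaussian lens this value is the
    same for every imaged point; for a fisheye lens it is not, which is
    exactly the divergence the following paragraphs build on."""
    return image_height * object_distance / object_height
```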
If the sight ray 80 corresponding to any coordinate on the image plane 225 is
analyzable, the mapping geometry of the camera is totally describable without the need
for the camera's projection function. This is a vital subject and basis of the invention and
will be disclosed hereinafter.
Due to the severe distortions caused by the fisheye lens, it is impossible to enable all
sight rays 80 to pass through a single BNP 223'. That is to say, there is no unique focal
length constant in view of the Gaussian optics model. However, the geometric projection
mechanism between a specific sight ray 80 and its corresponding imaged point 91 is still
separately describable by a Gaussian model. The invention terms the individual focal
length attained by this method as a zenithal focal length (simplified as the zFL
hereinafter), that is, the distance between the BNP 223' and the principal point 227 shown
in FIG. 6; wherein the location of the BNP 223' is determined by the line parallel to the
sight ray 80 and passing the imaged point 91 C'(u,v) observable on the image plane 225.
The zFL can also be called the image-height focal length because image heights of
equivalent values will correspond to the same zFL and each image height is determined
by an imaged point 91. Thus, the existence of a different but unique zFL corresponding to
every single imaged point 91 is definitely inferred in view of the Gaussian optics model,
but its values will be decreasing while the image heights are increasing. Based on the
one-to-one correspondence, the mapping/distortion mechanism of the fisheye camera can also be described by the zFL, one of the parameters of the camera.
If the projection function of the fisheye lens is describable by a circular function, the
relationship between the image height (p) and the zenithal distance (α) is deducible as
well; wherein the zenithal distance (α) is the angular distance of an incident ray away
from the optical axis 224 in space. Taking the EDP as an example, the image height (p) is
determined by the product of the zenithal distance (α) and the focal length constant (f),
namely p = f·α; the value of α is derivable when both p and f are given.
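The zFL construction of FIG. 6 reduces to one line of trigonometry: the line through the imaged point, parallel to a sight ray of zenithal distance α, meets the optical axis at p/tan(α) behind the image plane. A hedged sketch (our reading of that construction, with illustrative names):

```python
import math

def zenithal_focal_length(p, alpha):
    """zFL for one imaged point: distance from the inferred BNP to the
    principal point, given the image height p and the zenithal distance
    alpha of the corresponding sight ray. Sketch of the FIG. 6 geometry."""
    return p / math.tan(alpha)

# Under the EDP (p = f * alpha), for example, zFL = f * alpha / tan(alpha),
# which decreases as alpha (and hence the image height) grows.
```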
Referring to FIG. 4 again, the relationship between the zenithal distance (α) defined
by the outer light cone and the image height (p) which is the radius at the bottom of the
inner light cone is described by the projection function. Inversely, if the relationship can
be measured, the projection function can be inferred as well. This mechanism is not
limited to one single function with a closed form, such as trigonometric functions. The
present invention terms as an ideal lens the kind whose mapping mechanism throughout
the entire FOV can be described by a single circular function. Logically, once the natural
projection function of the lens is obtained, there does exist a BCP 223 in the model.
However, as far as the outer light cone is concerned, it is correct to say that there is only
one FCP 222 because the origin of the sight ray 80 utilized to describe the absolute
projection space is infinitely distant in the model, and it is reasonable to regard the camera as a point.
Referring to FIG. 4 again, if there is a camera mounting an ideal lens, the value of α
corresponding to a physical object point in the FOV can be obtained by a simple tangent
function if the absolute coordinate of the FCP 222 is given. Furthermore, the point of
intersection of the incident sight ray 80 and the optical axis 224 situates the FCP 222;
taking the image plane 225 as a base and referring to the focal length constant (f) as the height, the unique image height (p) corresponding to the sight ray 80 can infer the BCP
223, the zenith of the inner light cone.
The absolute coordinate of the FCP 222 and the orientation of the optical axis 224,
both of which are the extrinsic parameters of the camera, can represent the position of the
camera; the focal length constant and the projection function are regarded as the intrinsic
parameters of the camera. The invention develops a measuring system and an analyzing
methodology to verify that these parameters are deducible in logic without knowledge of
the camera's projection model.
The realization of the mapping mechanism described above is a key basis to design
the measuring system in the invention. The arrangement of the measuring system is
shown in FIG. 7, in which the movement of the target 30 refers to the one in FIG. 6. The
measuring system employs a computer program executed automatically for the
automated measuring procedures comprising the capture of images, the extraction of
feature coordinates of imaged blobs, and the deductions for the intrinsic and extrinsic
parameters of the camera 22.
Speaking generally, the measuring system is a composition of hardware devices and
software elements used to perform the mapping mechanism described above. Apart from
the devices and elements in operation, the qualities of measurement are also greatly
influenced by the surrounding factors of the laboratory such as the relative positions of
the devices, and both the specification and installation of lamps. The theoretical model in
FIG. 6 coupled with the measuring system in FIG. 7 presents the first embodiment of the
invention. However, in practice, the first embodiment will cause irregular illumination on
the surface of the target 30 cast from the illuminant 24 while the target 30 is moving at different locations; this can certainly affect the experimental accuracy. A second
embodiment of the invention is therefore introduced with the benefits of simplified
calculation and uniform illumination, as shown in FIG. 8, in which the target 30 is fixed
as the origin of reference of the absolute coordinate system 28 and the camera 22 is
moved instead. The corresponding theoretical model of the second embodiment is shown
in FIG. 9. The description hereinafter will take the second embodiment as a representative
example to disclose the details of the invention. However, it does not imply a limitation in
the invention; any variations or modifications following the same spirit are not to be
regarded as a departure from the spirit and scope of the invention.
The present invention defines four coordinate systems depending on each other in the
measuring system, referring to FIG. 8 for their relative positions: (1) the absolute
coordinate system 28 (denoted as W=(X,Y,Z)) defined by the target 30; (2) the platform
coordinate system 29 (denoted as W'=(X',Y',Z')) defined by the adjusting platform 23
driving the orientation and location of the camera 22; (3) the pixel coordinate system 27
(denoted as I=(u,v)) displayed on a computer screen and corresponding to the image-
plane coordinate system 27' (denoted as C'(x', y') or P'(p', β')) on the image plane 225;
and (4) the camera coordinate system 26 (denoted as N(α, β, h) and S(α', β', f)) utilized
to describe the imaging geometry of the camera 22.
The camera coordinate system 26 is composed of N(α, β, h) and S(α', β', f), in which
α and β have been defined hereinbefore while α' and β' are the corresponding angular
distances determined by virtual rays with reference to the image plane 225. Referring to
FIG. 4 again, S(α', β', f) defines the refractive light rays bounded on the inner light cone
placing the zenith at the BCP 223 while N( α, β, h) defines the corresponding sight ray 80 bounded on the outer light cone whose zenith is at the FCP 222. Owing to the irregular
refraction from the outer to the inner space of the camera 22, α' is not equal to α but β' is
usually the equivalent of β (or β+π). The functional relationship between α and α' can
represent the mapping mechanism of the camera 22; however, α' cannot be directly
observed.
The image-plane coordinate system 27' defines the dimensions of images on the
image plane 225 separately in the Cartesian coordinate system (C'(x', y')) or the polar
coordinate system (P'(p', β')), placing the origin at the principal point 227.
The pixel coordinate system 27 expresses the dimensions of images displayed on the
computer screen individually in the Cartesian coordinate system (C(x, y)) or the polar
coordinate system (P(p, β)), placing the origin at the feature coordinate imaged by the
principal point 227 (denoted as I(uc, vc)=C(0, 0)=P(0, β)) with a unit of a pixel.
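For clarity, the conversion between the Cartesian and polar forms about the principal point can be sketched as follows in Python; the helper name is an assumption for illustration, and the sample principal point anticipates the value I(318.1, 236.1) measured later in the experiment:

    import math

    def to_polar(u, v, uc, vc):
        # Re-express a pixel coordinate I(u, v) in the polar form P(p, beta)
        # whose origin is the principal point I(uc, vc).
        x, y = u - uc, v - vc
        p = math.hypot(x, y)         # image height in pixels
        beta = math.atan2(y, x)      # azimuthal angle on the image plane
        return p, beta

    p, beta = to_polar(400.0, 300.0, 318.1, 236.1)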
Referring to FIG. 5 again, the absolute coordinate system 28 regards the center of the
PCP 31 (i.e. the barycentric coordinate of the central mark 38) as the origin, and defines
the X-axis by the feature coordinates of the horizontal calibration marks 335, 325, 315, 38,
311, 321 and 331 and the Y-axis by the feature coordinates of the vertical calibration
marks 333, 323, 313, 38, 317, 327 and 337; accordingly W38=W(0,0,0).
During the experiment, the position of the target 30 is kept fixed so that the
absolute coordinates of all the calibration marks 38, 311-318, 321-328, and 331-338 are
consequently ensured. Then, the camera 22 is moved within a particular object space and
the image changes enable the mapping mechanism of the sight ray 80 defined by α and β
in the camera coordinate system 26 to be analyzable. The details of the analysis will be
disclosed hereinafter. Referring to FIG. 8 again, the camera 22 is fixed on the adjusting platform 23 which
is composed of three rigid axes perpendicular to one another, that is the X' rigid axis 231,
the Y' rigid axis 232 and the Z' rigid axis 233 correspondingly representing the X'-, Y'-
and Z'-axis in the platform coordinate system 29; wherein the positive direction of the
Z'-axis points away from the target 30. Ideally, the three axes (X', Y', Z') of the
platform coordinate system 29 have to be parallel with the ones (X, Y, Z) of the absolute
coordinate system 28. However, in practice, initially there is a six-dimensional difference
between the two coordinate systems 28 and 29. Therefore, in addition to freely driving the
three rigid axes 231, 232, and 233 of the adjusting platform 23 to position the camera 22
fixed thereon, an omnidirectional base 70 is installed on the Y'-axis (under the camera 22)
for panning, tilting or rotating the camera 22. The mechanical arrangement can collimate
the optical axis 224 to the Z-axis. The details of the operation will be disclosed
hereinafter.
The pixel coordinate system 27 is utilized to express the two-dimensional memory
coordinates of digital video signals captured by the camera 22, digitized by a frame
grabber 252 and then provided to a CPU 251 or a digital image processor 253. Logically,
the value in the pixel coordinate system 27 can represent the dimensions of images on the
image plane 225; however, the proportion between their units reveals a transformed
relationship, termed the aspect ratio. An image displayed on the screen might not
have an aspect ratio equal to 1; this turns an originally circular image into an elliptic one.
In practice, the image mapped on the image plane 225 is displayed on the screen for
observers who can only read the values in the pixel coordinate system 27 to indirectly
represent the dimensions of images. The value of the aspect ratio can also be ensured by the invention. The details will be disclosed hereinafter.
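A minimal sketch of such a correction is given below; it assumes the aspect ratio is expressed as the vertical-to-horizontal pixel-pitch ratio, which is a convention chosen here for illustration rather than one prescribed by the invention:

    def correct_aspect(u, v, uc, vc, aspect_ratio):
        # Rescale the horizontal offset from the principal point so that
        # equal physical image heights map to equal pixel radii; an
        # aspect ratio of 1 leaves the coordinate unchanged.
        return uc + (u - uc) * aspect_ratio, v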
The measuring system not only builds a mechanical structure for the coordinate
systems described above but also functionally serves as a device for capturing images,
calculating feature coordinates, and adjusting coordinate systems. The details of the rest
of the major devices are described as follows:
1. The camera 22: a BW camera applied to surveillance, which is equipped with a
1/2-inch CCD (charge coupled device) and a fisheye lens (with a vendor's
specification of 2.8mm focal length). It has the capability of focusing at infinity
and outputs video signals following the NTSC (National
Television System Committee) standard, transmitting them to the
frame grabber 252. In another embodiment, a CMOS (Complementary
Metal Oxide Semiconductor) camera, or any other camera mounting an image
sensor, is adopted instead of the CCD camera.
2. An illuminant 24: an important element in the invention. The category and
arrangement of the illuminant 24 totally affect the distribution of the
illumination and cause different results in the experiment. The invention takes
two lamps driven by high-frequency conversion as the illuminant 24 to light up the
target 30. The relative position of the illuminant 24 and the target 30 is fixed
for the full duration of the experiment in order to keep the illumination stable.
3. A platform controller 21: utilized to control the movement of the adjusting
platform 23 through the commands of software and provide power to and limit
the moving range of the adjusting platform 23. If necessary, users can
manually adjust the orientation of the camera 22 as well. 4. A processing unit 25: a normal personal computer (PC), which is employed to
retrieve, process and calculate the images of the camera 22 and command the
platform controller 21 to adjust the position of the camera 22. Wherein, the
CPU 251 is utilized to execute the software, handle the entire operation and
manage data; the digital image processor 253, which is connected with the
frame grabber 252, is employed to process digital signals in order to extract
pixel coordinates; the frame grabber 252 is utilized to turn analog signals
into digital ones and store them in a memory in order to supply the digital
image processor 253 and the CPU 251 in calculating the imaged feature
coordinates corresponding to the calibration marks 38, 311-318, 321-328, and
331-338 in real time. The frame grabber 252, the digital image processor 253
and the CPU 251 are integrated together in the PC running the MS Windows
operating system. The software developed for the experimental operation will be
disclosed hereinafter.
5. The target 30: fixed in the FOV of the camera 22 as a reference for analyzing
the sight ray 80. A physical central-symmetric pattern (PCP) 31 is illustrated
on the target 30. The PCP 31 is composed of a central origin located at the
geometric center thereof and a plurality of calibration marks defined by a
plurality of center-symmetric geometric figures. The embodiment of the PCP
31 shown in FIG. 5 takes the central mark 38 as its central origin to define three
concentric circles as the plurality of center-symmetric geometric figures. Eight
individual calibration marks (311-318, 321-328, and 331-338) are
symmetrically placed on each of the three concentric circles to form three symmetric regular octagons. The radii of the three concentric circles are 20mm, 40mm
and 60mm respectively. The locations of the calibration marks 311-318, 321-
328, and 331-338 begin at 0 degrees and shift by π/4 between successive marks, totaling 24 marks.
They are black squares, each 8mm wide and 8mm long. Further, take
the four extreme external calibration marks 331, 333, 335, and 337 as the
tangent points to form a square whose four vertexes are the test marks 341-344.
The PCP 31 is drafted by computer-aided design (CAD) and printed on a
piece of high-quality photo paper by an ink-jet printer to form the target 30.
In another embodiment, the marks 38, 311-318, 321-328,
331-338, and 341-344 are formed by LEDs (light emitting diodes) as active lighting
elements in order to attain better image quality; meanwhile, the illuminant 24
can be absent from the measuring system. During the experiment, the target 30
is fixed at a proper location on an experimental table and its absolute
coordinate can be precisely defined.
The embodiment of the PCP 31 is not limited to the one depicted in FIG. 5, i.e. three regular
octagons defined by three concentric circles. It performs well as long as the PCP 31 fits a
concentric and symmetric design. Hence triangles, rectangles, squares or any other
polygons shaped by a number of calibration marks are all possible forms for the PCP 31.
A better choice, however, is for each of the geometric figures to be composed of an even
number of calibration marks, which has the advantage of easy calculation. The extreme case of a polygon
is a circle, such as the PCP 31 shown in FIG. 4. Besides, a 3-D calibration target 30 might
have the same function if it can symmetrically surround the optical axis 224.
Before entering into the details of implementing the invention, the issues that the invention intends to solve are listed as follows:
1. deducing the principal point 227 on the image plane 225 and situating the
absolute position of the optical axis 224 in space;
2. deducing the absolute coordinate of the FCP 222 (also called the viewpoint);
3. deducing a length profile of the zFL (also termed the length profile of the
image-height focal length);
4. deducing the projection function from the absolute coordinate system 28 to the
camera coordinate system 26; and
5. deducing the distortive degree of the image and the calibration mechanism.
Relative to the above issues the invention discloses an experimental procedure and a
deductive method described as follows:
A. Locate the principal point 227 I(uc,vc) on the pixel coordinate system and collimate the
optical axis 224 to W(0,0,z) by regularizing the image of the PCP 31 to achieve an
ICP 226.
According to the axial-symmetric projection geometry of the fisheye lens, the
radial-symmetric distortion of the fisheye image and the centric-symmetric arrangement
of the PCP 31, if and only if the optical axis 224 is collimated to the Z-axis in the absolute
coordinate system 28, a concentric and symmetric image, i.e. the ICP 226, can be
achieved. The spatial disposition of the measuring system is adjusted in light of the
symmetry of the imaged blobs (actually the symmetry of their barycentric coordinates,
termed the feature coordinates) mapped by the calibration marks displayed on the
computer screen. A computer program, which controls the procedure to dynamically
adjust the absolute coordinate of the camera 22, performs the work of adjusting the camera's orientation with manual assistance. The camera coordinate system 26 is therefore
collimated to the absolute coordinate system 28 once the adjustment is completed. At the
time, the geometric center of the ICP 226 (i.e. the feature coordinate of the imaged blob
mapped by the central mark 38 if the PCP 31 is similar to the one shown in FIG. 5) is the
principal point 227; meanwhile, the optical axis 224 perpendicularly penetrates both the
principal point 227 and the geometric center of the PCP 31 (i.e. the feature coordinate of
the central mark 38). The details are described as follows:
1. Use one's eyesight to properly set the relative position between the adjusting
platform 23 and the target 30 in order to make the three rigid axes 231-233 of the
adjusting platform 23 as parallel as possible to the axes of the absolute
coordinate system 28.
2. Properly place the illuminant 24 to distribute uniform illumination on the target
30 and regard the center of the PCP 31 (the barycenter of the central mark 38) as
the origin W(0,0,0) of the absolute coordinate system 28.
3. Install the camera 22 on the Y' rigid axis 232 of the adjusting platform 23. An
omnidirectional base 70 is mounted at the bottom of the camera 22 to manually
pan, tilt or rotate the camera 22. The optical axis 224 denoted as S(0,0,f) in the
camera coordinate system 26 has to coincide with the Z'-axis denoted as
W'(0,0,z) in the platform coordinate system 29 so that the movement of the
camera 22 along the Z' rigid axis 233 can be regarded as the equivalent of the
one along the optical axis 224. Therefore, in practice, the utmost care is exerted
to collimate the Z-axis in the absolute coordinate system 28, the Z'-axis in the
platform coordinate system 29 and the optical axis 224 in the camera coordinate system 26 in order to align them along the same straight line.
4. Vary the coordinate of the camera 22 on the adjusting platform 23 to locate the
four test marks 341-344 beside the four corners of the computer screen in order
to maximize the calibrated range.
5. A symmetry-analyzing background program is employed to keep tracing the
geometric center of the ICP 226 (i.e. the feature coordinate of the imaged blob
mapped by the central mark 38), and by referring to the center I(u38,v38), to
calculate the "image-distortion indexes" and "horizontal/vertical deviation
indexes" of the imaged blobs mapped by the calibration marks 311-318, 321-328
and 331-338. These indexes are displayed on the computer screen and fed back
to the program to command the platform controller 21 to drive the
adjusting platform 23 to vary the coordinate of the camera 22 in the platform
coordinate system 29, that is W'(x', y', z'), until these indexes approach optimal
values. If these indexes displayed on the screen have reached satisfactory
standards, the program will go on to the next step, or repeat this step.
6. Record the "imaged-distortion indexes", the "horizontal/vertical deviation
indexes" and the "object-to-image conjugate coordinates", denoted as (Wc'
(x',y',z')[0], ln(u,v)[0]), obtained during the procedure. Wherein, Wc'(x',y',z')[0]
means the platform coordinate of the camera 22 while ln(u,v)[0] is the pixel
coordinate of the calibration mark of n; n may be equal to 38, 311-318, 321-328
or 331-338, representing any calibration mark of the PCP 31 in FIG. 5; k=0
means an initial position of the camera 22, increasing by 1 with each movement,
such as the pth, qth and rth measured in FIGs. 6 and 9. As shown here, the collimation procedure for the camera coordinate system 26 and
the absolute coordinate system 28 has been achieved. The "imaged-distortion indexes"
and the "horizontal/vertical deviation indexes" are going to be inteφreted before the
advanced discussion. The symmetry-analyzing background program is kept running
during the whole experiment. In order to enable software to guide the task of adjustment,
not only the image of the PCP 31 but these indexes representing the symmetry of the
image are also displayed on the screen. The measuring system actively adjusts the
position of the camera 22 in light of these symmetric indexes, sometimes with manual
assistance. The imaged-distortion indexes and the horizontal/vertical deviation
indexes are defined as follows:
a. The imaged-distortion indexes (su[m][k], sv[m][k]) are the summation of
imaged differences between the calibration marks 311-318, 321-328, 331-
338 and the central mark 38 individually in the u- vector and v- vector of the
pixel coordinate system 27. Referring to the serial numbers shown in FIG. 5,
the formulae of the indexes are given as follows:
su[m][k] = Σ(a=1..8) (u(300+m*10+a)[k] - u38[k])     (1)

sv[m][k] = Σ(a=1..8) (v(300+m*10+a)[k] - v38[k])     (2)

wherein 1 ≤ m ≤ 3, 1 ≤ a ≤ 8 and k=0; u(300+m*10+a) actually is un, standing for
the u-vector of In(u,v), and the same rule applies to v(300+m*10+a). The imaged-distortion indexes, denoted as (su[m][k], sv[m][k]), should both
approach zero if an ideal ICP 226 is obtained by reason of the symmetric
distribution of the calibration marks 311-318, 321-328, and 331-338.
b. The horizontal deviation index is the standard deviation of the v-vectors of
In(un, vn)[k], which are the feature coordinates of all horizontal imaged
blobs in the pixel coordinate system 27. Referring to the PCP 31 shown in
FIG. 5, n is equal to 335, 325, 315, 38, 311, 321 and 331, so the horizontal
deviation index is the standard deviation of the series composed of v335[k],
v325[k], v315[k], v38[k], v311[k], v321[k] and v331[k].
c. The vertical deviation index is the standard deviation of the u-vectors of
In(un, vn)[k], which are the feature coordinates of all vertical imaged blobs in
the pixel coordinate system 27. Referring to the PCP 31 shown in FIG. 5, n
is equal to 333, 323, 313, 38, 317, 327 and 337, so the vertical deviation
index is the standard deviation of the series composed of u333[k], u323[k],
u313[k], u38[k], u317[k], u327[k] and u337[k].
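A computational sketch of the three kinds of indexes follows; the dictionary layout keyed by mark serial numbers is an assumed data structure, not part of the disclosed software:

    import statistics

    # coords maps a mark number n to its feature coordinate (u_n, v_n)
    # at the current test k, e.g. coords[38] = (318.2, 236.0).

    def imaged_distortion_indexes(coords, m):
        # Formulae (1) and (2): sums of the u- and v-differences between
        # the eight marks on concentric circle m and the central mark 38.
        u38, v38 = coords[38]
        su = sum(coords[300 + m * 10 + a][0] - u38 for a in range(1, 9))
        sv = sum(coords[300 + m * 10 + a][1] - v38 for a in range(1, 9))
        return su, sv

    def horizontal_deviation_index(coords):
        # Standard deviation of the v-vectors of the horizontal marks.
        return statistics.pstdev(coords[n][1]
                                 for n in (335, 325, 315, 38, 311, 321, 331))

    def vertical_deviation_index(coords):
        # Standard deviation of the u-vectors of the vertical marks.
        return statistics.pstdev(coords[n][0]
                                 for n in (333, 323, 313, 38, 317, 327, 337))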
Minimizing the symmetric indexes described above can help collimate the optical
axis 224 (S(0,0,f)) to the Z-axis of the absolute coordinate system 28. This implies that the Z-axis
also perpendicularly passes through the principal point 227 (I(uc, vc)) on the image plane
225, and the optical axis 224 is traceable by referring to the given absolute coordinate of
the PCP 31. Nevertheless the absolute coordinate of the camera 22 (i.e. the absolute
coordinate of the viewpoint) remains unknown at this point.
The aspect ratio is also a parameter in the field of camera calibration. The invention
can easily attain the parameter because the horizontal vectors and the vertical vectors of In(u,v)[k] have reflected it directly. If the aspect ratio is equal to one, in an ideal
situation the image heights (p) of the vertexes of a regular polygon will be exactly the
same after calibration. This is the case in practice.
B. Deduce the absolute coordinate of an identical sight ray 80 by realizing an overlapping
mechanism of different calibration marks located in the same radial direction and
locate the viewpoint of the camera 22.
Deducing the mapping mechanism of the camera 22 from the analysis that different
absolute coordinates map onto the same imaged point 91 is a significant innovation of the
invention. These different coordinates construct a sight ray 80, termed the identical sight
ray 80. Any single imaged point on the image plane 225 can be modally analyzed to
obtain its corresponding identical sight ray 80.
Referring to the measuring system in FIG. 8, the camera 22 is moved further along
the optical axis 224 which is locked on the normal passing through the central mark 38.
The imaged blobs mapped by the calibration marks 311-318, 321-328, and 331-338 are
getting closer to the principal point 227 while the object distances are getting bigger.
During this period, different calibration marks may map at the same location of one
imaged blob. The relative offsets of the camera 22 (actively driven by the program) are
measurable. This offset data, coupled with the feature coordinate of the particular imaged
blob overlapped by different calibration marks and the given absolute coordinates of the
calibration marks 311-318, 321-328, and 331-338, can infer the absolute location in space of the
identical sight ray 80 corresponding to the particular imaged blob.
Referring to FIG. 6 again, the present invention takes the first embodiment as an
example to explain how to locate the identical sight ray 80 in theory; the camera 22 is fixed but the target 30 is being moved in this embodiment. If at least two different
calibration marks (like the three calibration marks 313, 323, and 333 in the vertical
direction on the target 30) jointly map at the same imaged point 91 (I(u,v)) while they
move to at least two different absolute coordinates in space (such as W313[p], W323[q], and
W333[r] in the figure), the identical sight ray 80 corresponding to the imaged point 91 can
be defined thereby. Because the line composed of the calibration marks lying on the same
diameter of the PCP 31 is constantly perpendicular to the optical axis 224, the particular
imaged point 91 overlapped is obtainable, that is I313(u, v)[p]= I323(u, v)[q]= I333(u, v)[r],
while driving the target 30 to move along the optical axis 224. Actually, it is identical to
Tsai's radial alignment constraint and is a characteristic of the radial-symmetric mapping
mechanism; it is well understood by those skilled in the art. The intersection
point of the identical sight ray 80 and the optical axis 224 is exactly the FCP 222, also
termed the viewpoint (VP), representing the absolute coordinate of the camera 22 in
space.
In addition to marking the imaged point 91 I(u, v) distorted by the fisheye lens, FIG.
6 also shows the imaged point 92 I(u', v') calibrated to conform to the rectilinear mapping
mechanism. The difference between the two imaged points 91 and 92 is customarily
called the distortion value of the imaged point 91 I(u, v).
Nevertheless, in practice, in order to keep the illumination uniform and simplify the
calculation, the invention adopts the second embodiment and implements it in an
experiment; wherein, on the contrary, the camera 22 is moved but the target 30 is fixed
during the experiment; it is also able to achieve the same mapping mechanism as in FIG.
6. Referring to FIG. 9, move the camera 22 (represented by the FCP 222) away from the target 30 along the optical axis 224, which is already collimated to the Z' rigid axis 233, in
order to change the relative offsets between the target 30, possessed of three calibration
marks 313, 323, and 333, and the camera 22 (represented by the FCP 222). Further,
enable the three calibration marks 313, 323, and 333, whose coordinates are separately
W313, W323 and W333, to map at the same coordinate of the imaged point 91 (i.e. I313[p] =
I323[q] = I333[r]) on the image plane 225 in three different tests numbered as p, q, and r
(which means the FCP 222 of the camera 22 is individually located at Wc[p], Wc[q] and
Wc[r]).
Denote the initial offset of the FCP 222 (Wc[p]) from the central mark 38 (W38(0,0,0))
as D. With unvarying direction the camera 22 is driven along the optical axis 224 with
several movements; meanwhile, the feature coordinates of imaged points (In[k]) mapped
by the calibration marks 38, 311-318, 321-328, and 331-338 are extracted and separately
coupled with the corresponding locations (Wc'[k]) of the camera 22 in the platform
coordinate system 29 to form an object-to-image conjugate-coordinate pair (Wc' [k], In[k]),
in which k is the sampled sequence.
The procedure for data extraction is divided into two parts: (1) each time after the
camera 22 moves a distance of dZ' along the Z' rigid axis 233, actively and finely adjust
the position but keep the direction of the camera 22 unvaried through the symmetry-
analyzing program in order to keep an optimal symmetry on the ICP 226; (2) extract the
data of the object-to-image conjugate-coordinate pair (Wc'(x',y',z')[k], In(u,v)[k]) in each
test, and pool it in each of a sequence of tests to form an object-to-image conjugate-
coordinate array. Continuing after the steps in section A, the details are described as
follows: 7. Keep the arrangement of the measuring system unchanged from the former
section A, while the optical axis 224 has been collimated to the Z-axis in the
absolute coordinate system 28 and the initial object-to-image conjugate-coordinate pair
of (Wc'[0], In[0]) is already obtained. Set the initial offset of the FCP 222 from
the target 30 as D, the aim of the calculation (note: generally the orientation of
the camera 22 needn't be adjusted in this procedure);
8. Raise the location index of k and actively control the camera 22 to increase an
offset of dZ' along the Z' rigid axis 233;
9. In light of the symmetric indexes (the imaged-distortion indexes and the
horizontal/vertical deviation indexes) displayed on the screen, the position of the
camera 22 (i.e. W'(X',Y'), while the z-vector is fixed) is finely adjusted until the
symmetry of the ICP 226 reaches a preset standard; then, record the object-to-
image conjugate-coordinate pair (Wc'[k], In [k]);
10. If the location of the camera 22 is still within the default experimental range, the
program will return to step 8; otherwise, it will go on to the next step;
11. Close the symmetry-analyzing background program;
12. Finish acquiring the data regarding the object-to-image conjugate-coordinate
array provided for calculation; and
13. Deduce the related coefficients of the parameters of the camera 22 (the details
will be described hereinafter).
The following will introduce the data obtained in a practical experiment to verify the
practicability of the invention. In the experiment, each offset of the camera 22 (i.e. the dZ')
moving along the Z' rigid axis 233 is increased by 10 mm. There are in total 19 offsets, plus the initial one, constructing an object-to-image conjugate-coordinate array
(Wc'[0..19], In[0..19]) composed of 20 object-to-image conjugate-coordinate pairs. After
the procedure described above, the data for deriving the parameters of the camera 22 is
already obtained and will be analyzed by the following steps:
1. The position profiles of the camera 22 (Wc'[0..19]) in the platform coordinate
system 29: FIG. 10 illustrates the distribution of the series of positions of the
camera 22 from Wc'[0]=W'(-7.5mm,-15mm,0mm) to Wc'[19]= W'(-8mm,-
19mm, 190mm), while the ICP 226 is reached in each test; the profiles also
suggest the traces of the optical axis 224 in the platform coordinate system 29.
Wc'[0]=W'(-7.7mm,-15.0mm,0mm) indicates the initial position in the platform
coordinate system 29 in which the x'-vector is -7.7 mm and the y'-vector -15.0
mm, and the z'-vector here is treated as the reference point during the experiment.
Although the profiles of Xc'[0..19] and Yc'[0..19] show slight deviations, they still
hold linearity. This reveals that the optical axis 224 can be efficiently traced by
means of the symmetry of the ICP 226. Nevertheless, it also emerges that the
platform coordinate system 29 is not perfectly collimated to the absolute
coordinate system 28 in the experiment, but the deviation is quite small; that is
0.3% in the x'-vector and 2% in the y'-vector. The result suggests that the
camera's offsets in the absolute coordinate system 28 can be replaced by the ones
just along the Z'-axis, and this is reliable because the percentage of the error is only
0.002%. Therefore, the absolute offsets of the camera 22 (Zc[k]) during the experiment are regarded as
Zc'[k]+D.
2. The position profiles of the imaged blob mapped by the central mark 38 (I38(u,v)[0..19]) in the pixel coordinate system 27: FIG. 11 illustrates the feature
coordinates of the imaged blobs mapped by the central mark 38; each data pair also stands for the principal point 227 practically measured in the pixel coordinate
system 27. In accordance with the spatial mapping symmetry of the camera 22
shown in FIG. 4, I38(u,v)[k] should be a constant and does not vary with the
position of the camera 22 while it is moving along the Z'-axis (Zc'[0..19]) in the
platform coordinate system 29. Based on these measured data, the standard
deviations of the principal point 227 are separately 0.25 pixel in the u- vector and
0.18 pixel in the v- vector. Further, the result in linear matching indicates the
location of the principal point 227 at I(uc,vc)=I(318.1, 236.1) pixels. In conclusion,
the slight values of the standard deviations attest to the reliability of the
experimental result, and verify the postulation that the coordinate of the
principal point 227 is a constant.
3. The featured image-height profiles (ρm[0..19]; m=[1..3]) of the ICP 226 in the
pixel coordinate system 27: FIG. 12 A illustrates the profiles of three average
image heights (individually p1, p2 and p3, each defined by the calibration marks
located on the same circle from the inside to the outside; for example, p1 is
determined by calibration marks 311-318) in the pixel coordinate system 27,
which are varied with Zc'[0..19] while the camera 22 is moving along the Z'-axis.
The formula is as follows:
pm[k] = (1/8) Σ(a=1..8) √((u(300+m*10+a)[k] - uc)² + (v(300+m*10+a)[k] - vc)²), 1 ≤ m ≤ 3     (3)

wherein k is the serial number of samplings, m the layer of the concentric circles
from the inside to the outside, n the number of the calibration marks 311-318,
321-328 and 331-338 shown in FIG. 5, and pm[k] the average image-height array
corresponding to the concentric circles in each test. FIG. 12B, redrawn from FIG.
12A, shows a clear overlapping phenomenon between each pair of the three
average image heights (p1[0..19], p2[0..19] and p3[0..19]) corresponding to the
three layers of the concentric circles. This supports the postulation described
above that measured image-height ranges hold the information for positioning the
identical sight ray 80. In an ideal model, the image heights defined by equi-radius
calibration marks should be equal to each other when the ICP 226 reaches perfect
symmetry. The experimental result shows that the statistical deviation of the image
heights of the equi-radius calibration marks on a selected circle is 0.22 pixel; this
proves that the measuring system can imitate the circular symmetric projection
mechanism and perform satisfactorily in practice.
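The average image height of formula (3) admits a direct transcription; the coords dictionary and the explicit principal-point arguments are assumptions for illustration:

    import math

    def average_image_height(coords, uc, vc, m):
        # Formula (3): average, over the eight marks on concentric circle m,
        # of the pixel distance between each feature coordinate and the
        # principal point I(uc, vc).
        return sum(math.hypot(coords[300 + m * 10 + a][0] - uc,
                              coords[300 + m * 10 + a][1] - vc)
                   for a in range(1, 9)) / 8.0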
The optical parameters of the camera are deduced according to the data measured in
the experiment. Taking the first embodiment as an example, referring to FIG. 6 again, if
Wc is the optical origin, the zenithal distance (α), i.e. the angular distance of the sight ray
80 away from the optical axis 224, is formulated as:
α[k] = tan⁻¹(R1/Z[p]) = tan⁻¹(R2/Z[q]) = tan⁻¹(R3/Z[r])     (4)

wherein R[1..3] denotes the radii of the three concentric circles on the target 30, namely the
object heights in absolute space; Z[p] represents the object distance of the target 30 on the Z-axis while k=p, that is to say Z[p]=D here, and Z[q] as well as Z[r] are the denotations
following the same rule. A line segment is determined by W313[p], W323[q] and W333[r] if
Wn[p..r] are given. The extension of the line segment toward the optical axis 224 will
intersect to determine the absolute coordinate of Wc. Similarly in FIG. 9, the positions
(Wc'[p..r]) of the moving camera 22 in the platform coordinate system 29 are observable
and controllable while the target 30 is fixed. The absolute coordinates (Wc[p], Wc[q] and
Wc[r]) of the camera 22 can be accordingly obtained by comparing similar triangles
bounded by both the optical axis 224 and the target 30, which are perpendicular to each
other if the three offsets of the camera 22 are given. These are the two theoretical models
of the invention for solving the FCP 222.
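The underlying geometry admits a short numerical sketch: two object-space points known to lie on one identical sight ray define a line, and the intersection of that line with the optical axis locates the viewpoint. The function below is illustrative only and works in the (object height R, axial coordinate Z) plane of FIGs. 6 and 9:

    def viewpoint_on_axis(r1, z1, r2, z2):
        # Extend the line through (r1, z1) and (r2, z2) to the optical
        # axis (R = 0) and return the axial coordinate of the
        # intersection, i.e. the FCP or viewpoint.
        if r1 == r2:
            raise ValueError("the two points do not define a ray crossing the axis")
        t = r1 / (r1 - r2)
        return z1 + t * (z2 - z1)

    # Example: marks of 20mm and 40mm object height observed at axial
    # distances 100mm and 200mm share one imaged point, so the
    # viewpoint sits at Z = 0.
    zc = viewpoint_on_axis(20.0, 100.0, 40.0, 200.0)    # 0.0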
However, considering the limitations of samplings in the experiment, it is hard to
obtain two image heights (or the imaged coordinates mapped by the calibration marks)
exactly coinciding with each other in practice. Moreover, the unavoidable errors caused
by the random quality of image signals while setting the feature coordinates also suggest
that the sight ray 80 should not be directly deduced and, accordingly, the FCP 222 will not
be located thereby, even though exactly coinciding feature coordinates are attained.
In view of the limitations in practice, the present invention proposes an alternative
method to analyze the measured data. There are three groups of data categorized from the
original one, including: the image heights (pm[0..19]; m=[1..3]), the object heights (Rm;
m=[1..3]) and the camera's offsets (Wc'[0..19]). The three groups of data are sufficiently
sampled so as to be over-determined, and employed to deduce the FCP 222 (also termed the
viewpoint) and the mapping mechanism of the camera 22.
First, the image heights have an inverse proportion to the object distances (i.e. the distances of the camera 22); FIG 12A shows this phenomenon, from which the image
heights cannot be directly related to the mapping mechanism of the camera 22. However,
on the basis of the postulation about the identical sight ray 80, if the object heights (i.e. the
physical lengths of the radii of the PCP 31) are represented by another kind of aspect, such
as the zenithal distance (α), the mutual meanings of the three profiles in FIG. 12A are
thereby related, namely the overlapping of image heights or/and the overlapping of
angular distances. The fact of one identical sight ray 80 holding one zenithal distance (α)
offers a consistent explanation for all the sampled image heights if the object heights are
replaced with α. Therefore, the FCP 222 (or the viewpoint) ought to be located first in
order to obtain accurate object distances and turn the object heights exactly into the
zenithal distances (α). The overlapping phenomenon of the ranges of different image
heights (p) revealed in the experiment implies that replacing the object heights with the
zenithal distances (α) also presents an overlapping mechanism. Therefore, the object
distance (the distance between the camera 22 and the target 30) is involved as a factor to
deduce the sight ray 80 of the camera 22; then the overlapping phenomenon of the
zenithal distances (α) similar to FIG. 12B would appear as well.
Therefore, on the basis of the conception described in FIG. 9, the method of trial-
and-error is utilized in searching for the FCP 222 along the optical axis 224 (note: at this
time the optical axis 224 has been positioned). In other words, taking the successive
target points one by one on the optical axis 224, each point is supposed to be the FCP 222,
so that the initial distance (D[p]) between Wc[p] and the target 30 is accordingly
determined. The offsets between Wc[p], Wc[q] and Wc[r] are already given so D[q] and
D[r] can be derived from D[p]. Based on the three given coordinates (Wc[p], Wc[q] and Wc[r]) and referring to an equi-length image height (I313[p]=I323[q]=I333[r]), only when D[p]
is accurate can the three corresponding zenithal distances (i.e. α313, α323 and α333),
transformed from the object heights by the tangent function, be equal to each other.
In the experiment, the object-to-image conjugate coordinate array
(Wc'(x',y',z')[0..19], In(u,v)[0..19]) is extracted at 20 positions from k=0 to k=19.
Namely, aiming at every object height (Rm), extract the image-height profile (pm[0..19]) at
twenty given positions of the camera 22 (Wc[0..19]). Twenty object distances (D[0..19])
would be obtained as well in the course of the experiment, while the distance of Wc[0] is
assumed to be D[0]. The α-profiles (αm[0..19]; m=[1..3]) are accordingly deduced by
referring to both D[0..19] and the object heights, or the radii of the concentric circles. The
task of posing the camera 22 is realized by the overlapping degree of the traces described
by the zenithal distances (αm[0..19]; m=[1..3]) with reference to the image heights (pm[0..19];
m=[1..3]); this is termed the first overlapping index in the invention. The overlapping
phenomenon will appear only if the FCP 222, i.e. the value of D[0], is accurately fixed.
This is the first method for posing the camera 22 disclosed in the invention. FIG. 13
illustrates the traces of the data points of αm[0..19] to pm[0..19] when D[0] is accurately
acquired; it reveals an extremely good overlapping phenomenon on the profiles
corresponding to the three concentric circles. The functional relationship between the
image height (p) and the zenithal distance (α) is exactly the projection function of the
camera 22, and hence the curve shown in FIG. 13 is the so-called projection curve or
projection function in optics. To date, the measurement for the projection function of the
camera 22 with a nonlinear perspective projection model is still unattainable in the related
art (note: not for the lens); however, the present invention can achieve the function only with the assistance of simple equipment. On the other hand, if D[0] is shifted from the
accurate value, taking 50 mm as an example, an obvious divergent phenomenon of the
α-traces will appear, as shown in FIG. 14.
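A minimal sketch of this trial-and-error construction is given below, assuming the camera offsets dZ'[k] are measured from the initial position; scoring the overlap of the resulting traces can reuse, for example, the divergent length introduced later in the disclosure:

    import math

    def alpha_profiles(radii, offsets, d0):
        # For a candidate initial distance D[0] = d0, turn the three
        # object heights (the radii of the PCP circles) into zenithal-
        # distance profiles alpha_m[k] = atan(R_m / (d0 + dZ'[k])).
        # Plotted against the measured image heights p_m[k], the three
        # traces overlap only when d0 is accurate.
        return [[math.atan(r / (d0 + dz)) for dz in offsets] for r in radii]

    # Example: circles of 20/40/60mm radius sampled at twenty 10mm steps.
    alphas = alpha_profiles([20.0, 40.0, 60.0],
                            [10.0 * k for k in range(20)], d0=250.0)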
In conclusion, the object-to-image conjugate-coordinate array is found to be capable
of deducing the projection function and posing the camera 22 (i.e. positioning the FCP
222). Although the experimental result shows that the projection function is close to the
EDP (equidistant projection), this is just a special case and will not cause any limitation in
the projection model in the invention. The method disclosed in the invention can be
widely applied in various sorts of projection curves.
The projection curve is able to describe the mapping mechanism of the camera 22,
but unable to quantify the distortion degree of the camera system. From the experimental
result, it is clear that the lens used in the embodiment is similar to the one with the EDP so
that its projective curve is supposedly a straight line, and the result is quite close. In the
aspect of a rectilinear projection model, there is a nonlinear relationship (beyond the
direct proportion) between the distortion degrees and the image heights. For the
convenience of further explaining the distortion mechanism of the camera system, the
invention defines an optical parameter, termed the zenithal focal length (simplified as the
zFL hereinafter). Referring to FIG. 6, the zFL (also termed the focal length constant in the
Gaussian optics model) is the distance between BNP 223' and the principal point 227,
which can express the specific mapping mechanism of the sight ray 80 of a particular
zenithal distance (α), and is formulated as follows:
zFLm[0..19] = pm[0..19]*cot(αm[0..19])     (5)

In the aspect of the one-to-one corresponding relationship between the imaged
coordinate I(u,v) and the identical sight ray 80, the zFL can be regarded as the focal length
constant in conformity with the rectilinear perspective projection model while only one
specific imaged coordinate is considered. The zFL will vary along with different image
heights (p); the bigger the margins between different zFLs, the more severe the radial distortion of
the camera system. Therefore, an image height can be explained as a zenithal distance (α)
in the object space; however, in the matter of the mapping mechanism, the image height is
also dependent on the zFL, so the function of zFL(p) can directly reveal the distortion
degrees of the camera system, subsequently presented in their totality as "the zFL-curve"
or "the zFL-function".
While turning the image heights (pm[0..19]) in FIG. 12A into the zFLs (zFLm[0..19]),
it is necessary to refer to the object distances, and the overlapping phenomenon of the
zFL-profiles also has the capability of locating the FCP 222 of the camera 22. This is the
second method for posing the camera 22 disclosed in the invention. The image heights
(pm[0..19]) shown in FIG. 12A can be replaced by the zFLs (zFLm[0..19]) with a
consistent explanation. Therefore, the method of trial-and-error is employed for searching
the FCP 222 along the optical axis 224; namely, by taking the successive target points on
the optical axis 224 one by one, each point is supposed to be the FCP 222 so that
the initial distance (D[0]) is accordingly determined. Further, the image heights
(pm[0..19]; m=[1..3]) are accordingly turned into the corresponding zFLs
(zFLm[0..19]; m=[1..3]), and the task of posing the camera 22 is realized by the overlapping
degree of the zFL-profiles; this is termed the second overlapping index in the invention.
FIG. 15 illustrates the profiles of the zFL-function showing a very good overlapping phenomenon. It also signifies that the FCP 222 on the optical axis 224 can be truly
positioned with the aid of the practical results of the experiment. On the other hand, while
D[0] is shifted from the accurate value, taking 5 mm as an example, an obvious divergent
phenomenon appears among the three zFL-profiles, as shown in FIG. 16. Besides, FIG.
15 also directly exposes the distortion degrees of the image or the distortion mechanism
of the camera system.
It is worth noting that, comparing FIG. 16 with FIG. 14, the divergent phenomenon
in FIG. 16, with only a 5-mm shift of D-value, is much more apparent than the one in FIG.
14 with a 50-mm shift of D-value. This proves that the sensitivity of zFL(p) to the position
of the FCP 222 of the camera 22 is much higher than that of α(p). It also reveals that, in
practice, utilizing the overlapping degree of the zFL-curve to fix the FCP 222 of the
camera 22 is a superior method. Moreover, when the image height (pm[0..19]) approaches
zero, the zFL will be the focal length of the lens mounted in the camera 22; generally ideal
lenses take this value as their focal length constant.
To enable the method for positioning the viewpoint in the invention to be more
multi-functional and suitable to any kind of projection function, the invention further
discloses a method for verifying the overlapping degree of the profiles. Owing to the high
sensitivity of the zFL-function, taking it as an example, a "divergent length" (also called
the "feature length") on the overlapping portion of the profiles is calculated after
rearranging the three groups of data in FIGs. 15 and 16 to evaluate the overlapping degree
of the curve of zFL(p), as shown in FIG. 17. The method connects all adjacent points
representing the relationships of the zFLs to the image heights (p), and then the total
length (i.e. the divergent length) of the overlapping portion is calculated. If the divergent length is the minimum, like the curve notated as zFL in FIG. 17, the overlapping degree of
the tracks of zFL(p) is supposed to be optimum, and accordingly, the tested target point
on the optical axis 224 is the FCP 222 (or the viewpoint) of the camera 22. Otherwise, the
tracks of zFL(p) reflect a longer divergent length, like the one notated as zFL_shift.
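A sketch of the divergent-length computation and of the trial-and-error posing built on it follows; profiles_for is an assumed callable that rebuilds the zFL(p) profiles, as lists of (p, zFL) pairs, for a candidate viewpoint distance:

    import math

    def divergent_length(profiles):
        # Pool the (p, zFL) data points of all circles, connect adjacent
        # points in order of increasing image height and total the
        # polyline length; optimal trace overlap minimizes this length.
        pts = sorted(pt for profile in profiles for pt in profile)
        return sum(math.hypot(p2 - p1, z2 - z1)
                   for (p1, z1), (p2, z2) in zip(pts, pts[1:]))

    def pose_camera(candidates, profiles_for):
        # Trial-and-error posing: keep the candidate D-value whose
        # zFL-profiles yield the minimal divergent length.
        return min(candidates,
                   key=lambda d0: divergent_length(profiles_for(d0)))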
Furthermore, the invention utilizes the nature of the image projected from the
innovative PCP 31 to estimate the quality of the arrangement of the measuring system, to
modify the arrangement accordingly and predict whether the camera system is capable of
being examined or not. The mapping mechanisms of some cameras with defects are
apparently below expectations because the distortion model of the camera 22 is
unpredictable. For example, if the optical axis 224 of a lens-set is not perpendicular to the
image plane 225 in the camera 22, it is impossible to get an utterly symmetric image no
matter how much effort is expended. However, the method disclosed in the invention can
eliminate these cameras, with all sorts of defects, from calibration beforehand.
In conclusion, the method disclosed in the invention attains the goal of evaluating the
specifications and obtaining the optical parameters of the camera 22 either by the
projection function or the zFL-function of the said camera 22.
Therefore, the method and the measuring system disclosed in the invention perform
most satisfactorily indeed in analyzing the mapping mechanism of the camera 22. Further,
the present invention can guide or modify the arrangement of the measuring system as
well as determine the reliability of the measured parameters by the distribution of the
measured data, and is finally applied in calibrating cameras or employed to develop
image-processing / image-transformation technologies.
Overall, the invention has the following advantages:
1. The capabilities of accurately locating the optical axis 224, posing the camera 22
(namely, fixing the absolute coordinate of the FCP 222) and evaluating the
projection function and the focal length constant of the camera 22.
2. The capabilities of quantifying the distortion of the imaged points through the
zFL-function.
3. The capability of verifying the reliability of the measuring system through the
measured data.
4. The capability of verifying the quality of the target camera 22 through the
measured data.
5. The capability of directly turning the imaged points into the corresponding
projection angles (i.e. the zenithal distances) in space.
6. The capability of being applied in stereoscopic applications.
7. The merits of simplicity and low cost make the method suitable for any kind of
nonlinear mapping mechanism of the camera 22.
The invention being thus described, it will be obvious that the same may be varied in
many ways. Such variations are not to be regarded as a departure from the spirit and scope
of the invention, and all such modifications as would be obvious to one skilled in the art
are intended to be included within the scope of the following claims.

Claims

What is claimed is:
1. A method for obtaining the optical parameters of a camera, which utilizes the
specific characteristic that one single sight ray in space corresponds to one single imaged
point on an image plane to obtain the optical parameters of a camera, the method
comprises:
placing a target with a physical central-symmetric pattern (PCP) thereon in
the field of view (FOV) of the camera, in which the PCP is composed of a
central mark, located at the geometric center thereof, and at least two
calibration marks, individually termed the first calibration mark and the
second calibration mark, located on a straight radial line centered at the
central mark;
collimating the target and the camera to enable an optical axis of the camera
to perpendicularly pass through the central mark;
recording the pixel coordinate of an imaged point imaged by the first
calibration mark;
moving the target along the optical axis by locking the central mark thereon
in order to enable the second calibration mark to image in an overlapping
manner at the same pixel coordinate of the imaged point;
extracting both the spatial absolute coordinates of the first and second
calibration marks and deducing a sight ray defined by the two spatial
absolute coordinates; and
regarding the point of intersection of the sight ray and the optical axis as a viewpoint of the camera.
2. The method according to claim 1, wherein the step of collimating the target and
the camera is fulfilled by locating a principal point on the image plane, hence a spatial
sight ray perpendicularly passing through both the principal point and the central mark
representing the optical axis.
3. The method according to claim 2, wherein the method of locating the principal
point comprises:
further providing a plurality of center-symmetric geometric figures to the
PCP;
placing the target in the FOV of the camera to allow the PCP to image on the
image plane;
adjusting the relative position between the target and the camera until the
image of the PCP turns into an imaged central-symmetric pattern (ICP); and
examining the symmetry of the ICP with at least one symmetric index to
ensure that the imaged traces of the plurality of geometric figures are
symmetrical, the feature coordinate of the imaged point mapped by the
central mark locating the principal point.
4. The method according to claim 3, wherein the plurality of geometric figures is
selected from the group comprising concentric circles, concentric rectangles, concentric
triangles and concentric polygons.
5. The method according to claim 3, wherein the plurality of geometric figures is a
combination of any number of concentric-and-symmetric circles, rectangles, triangles
and/or polygons.
6. The method according to claim 3, wherein the at least one symmetric index
comprises imaged-distortion indexes, a horizontal deviation index and a vertical
deviation index.
7. A method for obtaining the optical parameters of a camera, which utilizes the
specific characteristic that one single sight ray in space corresponds to one single imaged
point on an image plane to obtain the optical parameters of a camera, the method
comprises:
placing a target with a physical central-symmetric pattern (PCP) thereon in
the field of view (FOV) of the camera, in which the PCP is composed of a
central mark, located at the geometric center thereof, and a plurality of
calibration marks defined by a plurality of geometric figures;
collimating the target and the camera to enable an optical axis of the camera
to perpendicularly pass through the central mark;
varying the relative position between the target and the camera along the
optical axis and recording a plurality of object-to-image conjugate-
coordinate pairs corresponding to the plurality of calibration marks
separately in the different relative positions to form an object-to-image
conjugate-coordinate array; and
searching a target point along the optical axis to enable an overlapping
index to approach optimal trace-overlap by means of analyzing the object-
to-image conjugate-coordinate array on the basis of the target point, in
which the target point is a viewpoint of the camera.
8. The method according to claim 7, wherein the step of collimating the target and the camera is fulfilled by locating a principal point on the image plane, hence a spatial
sight ray peφendicularly passing through both the principal point and the central mark
representing the optical axis.
9. The method according to claim 8, wherein the method for locating the principal
point further comprises:
placing the target in the FOV of the camera to allow the PCP to image on the
image plane;
adjusting the relative position between the target and the camera until the
image of the PCP turns into an imaged central-symmetric pattern (ICP); and
examining the symmetry of the ICP with at least one symmetric index to
ensure that the imaged traces of the plurality of geometric figures achieve a
symmetry request, the feature coordinate of the imaged point mapped by the
central mark locating the principal point.
10. The method according to claim 9, wherein the at least one symmetric index
comprises imaged-distortion indexes, a horizontal deviation index and a vertical
deviation index.
11. The method according to claim 7, wherein the plurality of geometric figures is
selected from the group comprising concentric circles, concentric rectangles, concentric
triangles and concentric polygons.
12. The method according to claim 7, wherein the plurality of geometric figures is a
combination of any number of concentric-and-symmetric circles, rectangles, triangles
and/or polygons.
13. The method according to claim 7, wherein the plurality of object-to-image conjugate-coordinate pairs is composed of the absolute coordinates of the plurality of
calibration marks, or the coordinates of the camera, and the pixel coordinates
coπesponding to the plurality of calibration marks, in which three parameters, including
the image height, an object height and an object distance, can be deduced by means of
analyzing the plurality of object-to-image conjugate-coordinate pairs.
14. The method according to claim 7, wherein the overlapping index is a divergent
length, which is deduced by the steps comprising:
analyzing the object-to-image conjugate-coordinate aπay to obtain a
plurality of data points; and
adjacently connecting the plurality of data points to form the divergent
length which is minimized to make the overlapping index optimal.
15. The method according to claim 14, wherein the plurality of data points
expresses the relationship between two variables of the zenithal distance (α) and the
image height (p), representing a projection curve of the camera as a whole and being
obtained by means of analyzing the object-to-image conjugate-coordinate aπay and the
postulated locations of the target point.
16. The method according to claim 14, wherein the plurality of data points
expresses the relationship between two variables of the zenithal focal length (zFL) and the
image height (p), representing the level of distortion of the camera as a whole and being
obtained by means of analyzing the object-to-image conjugate-coordinate aπay and the
postulated locations of the target point.
17. The method according to claim 16, wherein the zenithal focal length (zFL) is
determined by the mathematic equation as follows: zFL = p*cot(α)
wherein:
p is the image height, which is the distance between an imaged point and a
principal point on the image plane; and
α is the zenithal distance, which is the angular distance of a sight ray away
from the optical axis.
18. A method for obtaining the optical parameters of a camera, which utilizes the
specific characteristic that one single sight ray in space coπesponds to one single imaged
point on an image plane to obtain the optical parameters of a camera, the method
comprises:
looking for at least two different absolute coordinates in space, all of which
project at the same imaged point, in order to define the sight ray;
deducing a zenithal distance (α) on behalf of the sight ray, which is the
angular distance of the sight ray away from an optical axis of the camera;
further deducing a plurality of zenithal distances (α) on behalf of a plurality
of sight rays separately coπesponding to a plurality of imaged points; and
obtaining a projection function describing the projecting behavior of the
camera from the relationship between the plurality of imaged points and the
plurality of zenithal distances (α).
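
For illustration only, a minimal sketch of the last step of claim 18: fitting a projection function p = f(α) to the paired data. The odd-polynomial model below is an assumption of this note, not a limitation of the claim.

    import numpy as np

    def fit_projection_function(alpha, p, terms=3):
        # Least-squares fit of p = c1*alpha + c2*alpha^3 + c3*alpha^5 + ...
        # (odd powers keep p = 0 at alpha = 0); for an ideal equidistant
        # projection the fit reduces to p = f * alpha.
        A = np.column_stack([np.asarray(alpha, float) ** (2 * k + 1)
                             for k in range(terms)])
        coeffs, *_ = np.linalg.lstsq(A, np.asarray(p, float), rcond=None)
        return coeffs  # coeffs[0] approximates a focal-length constant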
19. The method according to claim 18, wherein the method for defining the sight
ray further comprises:
placing a target with a physical central-symmetric pattern (PCP) thereon in
the field of view (FOV) of the camera, in which the PCP is composed of a central mark, located at the geometrical center thereof, and at least two
calibration marks, individually termed the first calibration mark and the
second calibration mark, located on a straight radial line centered at the
central mark;
collimating the target and the camera to enable the optical axis of the camera
to perpendicularly pass through the central mark;
recording the pixel coordinate of the imaged point imaged by the first
calibration mark;
moving the target along the optical axis by locking the central mark thereon
in order to enable the second calibration mark to image in an overlapping
manner at the same pixel coordinate of the imaged point; and
extracting both the spatial absolute coordinates of the first and second
calibration marks, the two spatial absolute coordinates defining the sight ray.
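
For illustration only: once the two absolute coordinates of claim 19 are in hand, the zenithal distance of claim 18 and the viewpoint of claim 20 follow from elementary geometry. The sketch below assumes the optical axis is the z-axis of the coordinate system; all names are hypothetical.

    import numpy as np

    def sight_ray_parameters(p1, p2):
        # p1, p2: (x, y, z) absolute coordinates of the first and second
        # calibration marks, both imaging at the same pixel coordinate.
        p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
        r1 = np.hypot(p1[0], p1[1])  # radial distance from the optical axis
        r2 = np.hypot(p2[0], p2[1])
        z1, z2 = p1[2], p2[2]
        # Zenithal distance: angle of the sight ray away from the axis.
        alpha = np.arctan2(abs(r2 - r1), abs(z2 - z1))
        # The marks lie on a radial line, so the ray stays in a meridional
        # plane and crosses the axis (r = 0) at the viewpoint of claim 20
        # (assumes r1 != r2, i.e. the ray is not parallel to the axis).
        z_view = z1 - r1 * (z2 - z1) / (r2 - r1)
        return alpha, z_view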
20. The method according to claim 19, wherein the point of intersection of the sight
ray and the optical axis is a viewpoint of the camera.
21. The method according to claim 19, wherein the step of collimating the target
and the camera is fulfilled by locating a principal point on the image plane, hence a spatial
sight ray perpendicularly passing through both the principal point and the central mark
representing the optical axis, the method further comprising:
further providing a plurality of center-symmetric geometric figures to the
PCP;
placing the target in the FOV of the camera to allow the PCP to image on the
image plane; adjusting the relative position between the target and the camera until the
image of the PCP turns into an imaged central-symmetric pattern (ICP);
examining the symmetry of the ICP with at least one symmetric index to
ensure that the imaged traces of the plurality of geometric figures are
symmetrical, the feature coordinate of the imaged point mapped by the
central mark locating the principal point; and
according to the given position of the target, picking the spatial sight ray
perpendicularly passing through both the principal point and the central mark
as the optical axis.
22. The method according to claim 21, wherein the at least one symmetric index
comprises imaged-distortion indexes, a horizontal deviation index and a vertical
deviation index.
23. The method according to claim 21, wherein the plurality of geometric figures is
selected from the group comprising concentric circles, concentric rectangles, concentric
triangles and concentric polygons.
24. The method according to claim 21, wherein the plurality of geometric figures is
a combination of any number of concentric-and-symmetric circles, rectangles, triangles
and/or polygons.
25. The method according to claim 18, wherein through analyzing the relationship
between the plurality of imaged points and the plurality of zenithal distances (α) a
viewpoint of the camera is obtained.
26. The method according to claim 25, wherein the viewpoint is obtained by the
steps comprising: placing a target with a physical central-symmetric pattern (PCP) thereon in
the field of view (FOV) of the camera, in which the PCP is composed of a
central mark, located at the geometric center thereof, and a plurality of
calibration marks defined by a plurality of geometric figures;
collimating the target and the camera to enable the optical axis of the camera
to perpendicularly pass through the central mark;
varying the relative position between the target and the camera along the
optical axis and recording a plurality of object-to-image conjugate-
coordinate pairs coπesponding to the plurality of calibration marks
separately at different relative positions to form an object-to-image
conjugate-coordinate array; and
searching a target point along the optical axis to enable an overlapping index
to approach optimal trace-overlap by means of analyzing the object-to-image
conjugate-coordinate array on the basis of the target point, in which the target
point is the viewpoint of the camera.
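
For illustration only, the search of claim 26 can be sketched as a one-dimensional scan: each candidate target point turns the conjugate-coordinate array into (α, p) data points (claims 15 and 29), and the divergent length of claims 14 and 28 is shortest when the candidate coincides with the viewpoint. The record layout and names below are assumptions.

    import numpy as np

    def divergent_length(z_cand, records):
        # records: (R, Z, p) per calibration mark -- object radial height,
        # object axial distance, and image height.  In practice p and alpha
        # should be normalized to comparable scales before summing lengths.
        R, Z, p = (np.asarray(c, float) for c in zip(*records))
        alpha = np.arctan2(R, Z - z_cand)   # zenithal distance per mark
        order = np.argsort(p)
        pts = np.column_stack([p[order], alpha[order]])
        seg = np.diff(pts, axis=0)          # adjacently connect data points
        return np.hypot(seg[:, 0], seg[:, 1]).sum()

    def search_viewpoint(records, z_lo, z_hi, steps=200):
        # Scan candidate positions along the optical axis; the minimizer of
        # the divergent length is taken as the viewpoint (claim 26).
        zs = np.linspace(z_lo, z_hi, steps)
        return zs[np.argmin([divergent_length(z, records) for z in zs])]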
27. The method according to claim 26, wherein the plurality of object-to-image
conjugate-coordinate pairs is composed of the absolute coordinates of the plurality of
calibration marks, or the coordinates of the camera, and the pixel coordinates
corresponding to the plurality of calibration marks, in which three parameters, including
the image height, object height and object distance, can be deduced by means of
analyzing the plurality of object-to-image conjugate-coordinate pairs.
28. The method according to claim 26, wherein the overlapping index is a divergent
length, which is deduced by the steps comprising: analyzing the object-to-image conjugate-coordinate array to obtain a
plurality of data points; and
adjacently connecting the plurality of data points to form the divergent length
which is minimized to make the overlapping index optimal.
29. The method according to claim 28, wherein the plurality of data points
expresses the relationship between two variables of the zenithal distance (α) and the
image height (p), representing a projection curve of the camera as a whole and being
obtained by means of analyzing the object-to-image conjugate-coordinate array and the
postulated locations of the target point.
30. The method according to claim 28, wherein the plurality of data points
expresses the relationship between two variables of the zenithal focal length (zFL) and the
image height (p), representing the level of distortion of the camera as a whole and being
obtained by means of analyzing the object-to-image conjugate-coordinate array and the
postulated locations of the target point.
31. The method according to claim 30, wherein the zenithal focal length (zFL) is
determined by the mathematical equation as follows:
zFL = p * cot(α)
wherein:
p is the image height, which is the distance between an imaged point and a
principal point on the image plane; and
α is the zenithal distance, which is the angular distance of a sight ray
proceeding away from the optical axis.
32. A system for obtaining the optical parameters of a camera, which is employed to analyze the relationship between a plurality of sight rays in object space and a plurality of
imaged points on an image plane, the system comprises:
a target possessed of a physical central-symmetric pattern (PCP) which is
composed of a central mark and a plurality of center-symmetric geometric
figures defining a plurality of calibration marks;
a camera equipped with a non-linear perspective projection lens used to
capture the rays from the PCP and form a corresponding image on the image
plane;
an adjusting platform possessed of three rigid axes which are perpendicular
to one another in order to define a coordinate system used to adjust the
relative position between the target and the camera;
a platform controller connected with the adjusting platform and used to
provide power to and limit the moving range of the adjusting platform; and
a processing unit connected with the camera and the platform controller,
which is used to command the platform controller to adjust the positions of
the three rigid axes and grab the absolute coordinates of the plurality of
calibration marks and their corresponding imaged pixel coordinates in order
to form an object-to-image conjugate-coordinate array as the basis of
calculation for obtaining the camera's projection function representing the
relationship between the object space and the image plane as the mapping
mechanism of the camera.
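
For illustration only, the data flow of the claim-32 system might be orchestrated as below; `platform.move_to` and `camera.locate_marks` are hypothetical stand-ins for the platform controller and the digital image processor, not a real API.

    import numpy as np

    def acquire_conjugate_array(platform, camera, marks, positions):
        rows = []
        for pos in positions:
            platform.move_to(pos)                # adjust the three rigid axes
            pixels = camera.locate_marks(marks)  # extract imaged pixel coords
            for mark_xyz, pixel_uv in zip(marks, pixels):
                rows.append((*mark_xyz, *pixel_uv))  # one conjugate pair
        return np.asarray(rows)  # object-to-image conjugate-coordinate array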
33. The system according to claim 32, wherein the system further comprises an
illuminant used to light up the target.
34. The system according to claim 32, wherein the plurality of calibration marks is
individually constructed by an active lighting element.
35. The system according to claim 34, wherein the active lighting element is an LED
(Light Emitting Diode).
36. The system according to claim 32, wherein the processing unit further
comprises:
a frame grabber connected with the camera, which is employed to turn the
analog signals captured by the camera into digital ones;
a digital image processor connected with the frame grabber, which is
employed to process the digital signals in order to extract the imaged pixel
coordinates; and
a CPU in charge of controlling the frame grabber and the digital image
processor.
37. The system according to claim 32, wherein the processing unit is a personal
computer (PC).
38. The system according to claim 32, wherein the camera is selected from the
group comprising a CCD camera, a CMOS camera and a camera mounted with an image
sensor.
39. The system according to claim 32, wherein the plurality of geometric figures is
selected from the group comprising concentric circles, concentric rectangles, concentric
triangles and concentric polygons.
40. The system according to claim 32, wherein the plurality of geometric figures is a
combination of any number of concentric-and-symmetric circles, rectangles, triangles and/or polygons.
PCT/IB2004/001109 2003-04-18 2004-04-12 Method and system for obtaining optical parameters of camera WO2004092826A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
TW92109159A TW565735B (en) 2003-04-18 2003-04-18 Method for determining the optical parameters of a camera
TW92109159 2003-04-18

Publications (1)

Publication Number Publication Date
WO2004092826A1 true WO2004092826A1 (en) 2004-10-28

Family

ID=32503978

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2004/001109 WO2004092826A1 (en) 2003-04-18 2004-04-12 Method and system for obtaining optical parameters of camera

Country Status (2)

Country Link
TW (1) TW565735B (en)
WO (1) WO2004092826A1 (en)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108931357B (en) * 2017-05-22 2020-10-23 宁波舜宇车载光学技术有限公司 Test target and corresponding lens MTF detection system and method
TWI738098B (en) * 2019-10-28 2021-09-01 阿丹電子企業股份有限公司 Optical volume-measuring device
TWI788838B (en) * 2021-05-07 2023-01-01 宏茂光電股份有限公司 Method for coordinate transformation from spherical to polar
TWI793702B (en) * 2021-08-05 2023-02-21 明志科技大學 Method for obtaining optical projection mechanism of camera


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5185667A (en) * 1991-05-13 1993-02-09 Telerobotics International, Inc. Omniview motionless camera orientation system
US5870135A (en) * 1995-07-27 1999-02-09 Sensormatic Electronics Corporation Image splitting forming and processing device and method for use with no moving parts camera
EP1028389A2 (en) * 1999-02-12 2000-08-16 Advanet, Inc. Arithmetic unit for image transformation
US20030090586A1 (en) * 2001-09-17 2003-05-15 Gwo-Jen Jan Method for exploring viewpoint and focal length of camera

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7893393B2 (en) 2006-04-21 2011-02-22 Mersive Technologies, Inc. System and method for calibrating an image projection system
WO2009140678A2 (en) * 2008-05-16 2009-11-19 Mersive Technologies, Inc. Systems and methods for generating images using radiometric response characterizations
WO2009140678A3 (en) * 2008-05-16 2010-01-07 Mersive Technologies, Inc. Systems and methods for generating images using radiometric response characterizations
US10151664B2 (en) 2014-06-27 2018-12-11 Qingdao Goertek Technology Co., Ltd. Method and system for measuring lens distortion
WO2015197019A1 (en) * 2014-06-27 2015-12-30 青岛歌尔声学科技有限公司 Method and system for measuring lens distortion
JP2017524920A (en) * 2014-06-27 2017-08-31 チンタオ ゴーアテック テクノロジー カンパニー リミテッドQingdao Goertek Technology Co., Ltd. Method and system for measuring lens distortion
US9810602B2 (en) 2014-06-27 2017-11-07 Qingdao Goertek Technology Co., Ltd. Method and system for measuring lens distortion
RU2635336C2 (en) * 2015-03-30 2017-11-10 Открытое Акционерное Общество "Пеленг" Method of calibrating optical-electronic device and device for its implementation
CN106780617B (en) * 2016-11-24 2023-11-10 北京小鸟看看科技有限公司 Virtual reality system and positioning method thereof
CN106780617A (en) * 2016-11-24 2017-05-31 北京小鸟看看科技有限公司 A kind of virtual reality system and its localization method
JP2020148700A (en) * 2019-03-15 2020-09-17 オムロン株式会社 Distance image sensor, and angle information acquisition method
WO2020189071A1 (en) * 2019-03-15 2020-09-24 オムロン株式会社 Distance image sensor and angle information acquisition method
CN113508309A (en) * 2019-03-15 2021-10-15 欧姆龙株式会社 Distance image sensor and angle information acquisition method
CN111105488A (en) * 2019-12-20 2020-05-05 成都纵横自动化技术股份有限公司 Imaging simulation method and device, electronic equipment and storage medium
CN111105488B (en) * 2019-12-20 2023-09-08 成都纵横自动化技术股份有限公司 Imaging simulation method, imaging simulation device, electronic equipment and storage medium
CN111445522A (en) * 2020-03-11 2020-07-24 上海大学 Passive night-vision intelligent mine detection system and intelligent mine detection method
CN111445522B (en) * 2020-03-11 2023-05-23 上海大学 Passive night vision intelligent lightning detection system and intelligent lightning detection method
CN111432204A (en) * 2020-03-30 2020-07-17 杭州栖金科技有限公司 Camera testing device and system
CN111612710A (en) * 2020-05-14 2020-09-01 中国人民解放军95859部队 Geometric imaging pixel number calculation method for target rectangular projection image
CN111612710B (en) * 2020-05-14 2022-10-04 中国人民解放军95859部队 Geometric imaging pixel number calculation method for target rectangular projection image
CN112950719A (en) * 2021-01-23 2021-06-11 西北工业大学 Passive target rapid positioning method based on unmanned aerial vehicle active photoelectric platform
CN112950719B (en) * 2021-01-23 2024-06-04 西北工业大学 Passive target rapid positioning method based on unmanned aerial vehicle active photoelectric platform
CN113310420A (en) * 2021-04-22 2021-08-27 中国工程物理研究院上海激光等离子体研究所 Method for measuring distance between two targets through image
CN116954011A (en) * 2023-09-18 2023-10-27 中国科学院长春光学精密机械与物理研究所 Mounting and adjusting method for high-precision optical reflection system calibration camera
CN116954011B (en) * 2023-09-18 2023-11-21 中国科学院长春光学精密机械与物理研究所 Mounting and adjusting method for high-precision optical reflection system calibration camera

Also Published As

Publication number Publication date
TW200422754A (en) 2004-11-01
TW565735B (en) 2003-12-11

Similar Documents

Publication Publication Date Title
Luhmann et al. Sensor modelling and camera calibration for close-range photogrammetry
WO2004092826A1 (en) Method and system for obtaining optical parameters of camera
US6985183B2 (en) Method for exploring viewpoint and focal length of camera
US5276546A (en) Three dimensional scanning system
CN102509261B (en) Distortion correction method for fisheye lens
CN106408556B (en) A kind of small items measuring system scaling method based on general imaging model
Schmalz et al. Camera calibration: active versus passive targets
JP4245963B2 (en) Method and system for calibrating multiple cameras using a calibration object
US7042508B2 (en) Method for presenting fisheye-camera images
US8619248B2 (en) System and method for calibrating ultra wide-angle lenses
CN110447220A (en) Calibrating installation, calibration method, Optical devices, camera and projection arrangement
US9881377B2 (en) Apparatus and method for determining the distinct location of an image-recording camera
CN103294886A (en) System for reproducing virtual objects
CN106643563B (en) A kind of Table top type wide view-field three-D scanning means and method
Shen et al. Multi-camera network calibration with a non-planar target
CN113298886B (en) Calibration method of projector
TW565736B (en) Method for determining the optical parameters of a camera
Orghidan et al. Omnidirectional depth computation from a single image
JP3704494B2 (en) How to check camera viewpoint and focal length
Gordon et al. A Single-Pixel Touchless Laser Tracker Probe
Tompkin et al. Joint 5d pen input for light field displays
Strobl et al. On the issue of camera calibration with narrow angular field of view
Orghidan Catadioptric stereo based on structured light projection
Xing et al. A method to verify internal parameters of camera calibration in world coordinate system
Feng et al. Precise measurement of fibers position using bundle adjustment algorithm

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase