WO2015000056A1 - System and method for imaging device modelling and calibration - Google Patents

System and method for imaging device modelling and calibration Download PDF

Info

Publication number
WO2015000056A1
WO2015000056A1 · PCT/CA2014/000534
Authority
WO
WIPO (PCT)
Prior art keywords
axis
coordinate system
image
point
plane
Prior art date
Application number
PCT/CA2014/000534
Other languages
French (fr)
Inventor
Guy Martin
Original Assignee
Guy Martin
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guy Martin filed Critical Guy Martin
Priority to BR112015033020A priority Critical patent/BR112015033020A2/en
Priority to US14/898,016 priority patent/US9792684B2/en
Priority to EP14820593.3A priority patent/EP3017599A4/en
Priority to KR1020167003009A priority patent/KR20160030228A/en
Priority to CN201480038248.5A priority patent/CN105379264B/en
Priority to JP2016522146A priority patent/JP2016531281A/en
Priority to RU2016103197A priority patent/RU2677562C2/en
Publication of WO2015000056A1 publication Critical patent/WO2015000056A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/02Affine transformations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • H04N17/002Diagnosis, testing or measuring for television systems or their details for television cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/84Camera processing pipelines; Components thereof for processing colour signals
    • H04N23/843Demosaicing, e.g. interpolating colour pixel values
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60Noise processing, e.g. detecting, correcting, reducing or removing noise
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/61Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/61Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
    • H04N25/611Correction of chromatic aberration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image

Definitions

  • the present invention relates to a system and method for imaging device modelling and calibration that compensates for imperfections in line of sight axis squareness with the image plane of the imaging device.
  • Calibration of digital cameras and other imaging devices seeks to create a mathematical model of how the image 'prints' through the lens on the imaging device's surface.
  • the procedure first uses a picture from a calibration target with accurately known tolerance, and extracts target elements from the image. Finally, a mathematical model relates the image information with the real three-dimensional (3D) target information.
  • the imaging device can then be used to map real world objects using a scale factor, the focal distance f.
  • the proposed calibration and modelling technique introduces an accurate perspective correction to account for assembly tolerances in the imaging device or camera/lens system, causing the lens axis to be off-squareness with the image plane.
  • Accurate knowledge of camera plane and lens assembly removes a systematic bias in telemetry systems and 3D scanning using a digital camera or a camera stereo pair, yields an accurate focal length (image scale) measurement, locates the true image center position on the camera plane, and increases accuracy in measuring distortion introduced by image curvature (geometric distortion) and rainbow light splitting in the lens optics (chromatic distortion).
  • Removing lens distortion increases the image compression ratio without adding any loss.
  • a computer-implemented method for modeling an imaging device for use in calibration and image correction comprising defining a first 3D orthogonal coordinate system having an origin located at a focal point of the imaging device, a first axis of the first coordinate system extending along a direction of a line of sight of the imaging device; defining a second 3D orthogonal coordinate system having an origin located at a unitary distance from the focal point, a first axis of the second coordinate system extending along the direction of the line of sight, a second and a third axis of the second coordinate system substantially parallel to a second and a third axis of the first coordinate system respectively, the second and the third axis of the second coordinate system thereby defining a true scale plane square with the line of sight; defining a third 3D coordinate system having an origin located at a focal distance from the focal point, a first axis of the third coordinate system extending along the direction of the line of sight, a second and a third
  • the second coordinate system is defined such that the true scale plane establishes an entry to a lens system of the imaging device and the projection on the true scale plane expresses an output of an external model of the imaging device and the third coordinate system is defined such that the image plane establishes an output to the lens system and the projection on the image plane expresses an output of an internal model of the imaging device.
  • the received set of 3D coordinates is [x y z 1]ᵀ and the projection of the point of the 3D object onto the true scale plane is computed as: P1 [x y z 1]ᵀ = [x y z z]ᵀ ≈ [x/z y/z 1 1]ᵀ, where ≈ is a scale equivalent operator and P1 defines a projection operation onto the true scale plane with respect to the first coordinate system.
  • the projection of the point of the 3D object onto the image plane is computed as: Pf R(y, β) R(x, α) [x y z 1]ᵀ ≈ [f(h11x + h12y + h13z)/(h31x + h32y + h33z), f(h22y + h23z)/(h31x + h32y + h33z), f, 1]ᵀ
  • Pf defines a projection operation onto the image plane, f is the focal distance, α is the first angle, β is the second angle, R(x, α) is an α rotation matrix with respect to an axis x of the image plane, the axis x defined as substantially parallel to the second axis of the first coordinate system before the α rotation is performed, R(y, β) is a β rotation matrix with respect to an axis y of the image plane, the axis y defined as substantially parallel to the third axis of the first coordinate system before the β rotation is performed, the α rotation computed rightmost such that the β rotation is performed relative to the axis x rotated by the angle α, and where h11 = cosβ, h12 = sinβ sinα, h13 = sinβ cosα, h22 = cosα, h23 = -sinα, h31 = -sinβ, h32 = cosβ sinα, and h33 = cosβ cosα
  • the method further comprises determining a homography H between the true scale plane and the image plane as: H = [f h11  f h12  f h13; 0  f h22  f h23; h31  h32  h33]
  • the homography H is determined as: H = [f cosβ  f sinβ sinα  f sinβ cosα; 0  f cosα  -f sinα; -sinβ  cosβ sinα  cosβ cosα] ≈ [f  fαβ  fβ; 0  f  -fα; -β  α  1], where ≈ is the scale equivalent operator and the approximations cosθ ≈ 1 and sinθ ≈ θ are used for small angles α and β.
  • a system for modeling an imaging device for use in calibration and image correction comprising a memory; a processor; and at least one application stored in the memory and executable by the processor for defining a first 3D orthogonal coordinate system having an origin located at a focal point of the imaging device, a first axis of the first coordinate system extending along a direction of a line of sight of the imaging device; defining a second 3D orthogonal coordinate system having an origin located at a unitary distance from the focal point, a first axis of the second coordinate system extending along the direction of the line of sight, a second and a third axis of the second coordinate system substantially parallel to a second and a third axis of the first coordinate system respectively, the second and the third axis of the second coordinate system thereby defining a true scale plane square with the line of sight; defining a third 3D coordinate system having an origin located at a focal distance from the focal point, a first axis of the third coordinate
  • the at least one application is executable by the processor for defining the second coordinate system such that the true scale plane establishes an entry to a lens system of the imaging device and the projection on the true scale plane expresses an output of an external model of the imaging device and defining the third coordinate system such that the image plane establishes an output to the lens system and the projection on the image plane expresses an output of an internal model of the imaging device.
  • the at least one application is executable by the processor for receiving the set of 3D coordinates as [x y z 1]ᵀ and computing the projection of the point of the 3D object onto the true scale plane as: P1 [x y z 1]ᵀ = [x y z z]ᵀ ≈ [x/z y/z 1 1]ᵀ
  • ≈ is a scale equivalent operator and P1 defines a projection operation onto the true scale plane with respect to the first coordinate system.
  • the at least one application is executable by the processor for computing the projection of the point of the 3D object onto the image plane as: Pf R(y, β) R(x, α) [x y z 1]ᵀ ≈ [f(h11x + h12y + h13z)/(h31x + h32y + h33z), f(h22y + h23z)/(h31x + h32y + h33z), f, 1]ᵀ
  • Pf defines a projection operation onto the image plane, f is the focal distance, α is the first angle, β is the second angle, R(x, α) is an α rotation matrix with respect to an axis x of the image plane, the axis x defined as substantially parallel to the second axis of the first coordinate system before the α rotation is performed, R(y, β) is a β rotation matrix with respect to an axis y of the image plane, the axis y defined as substantially parallel to the third axis of the first coordinate system before the β rotation is performed, the α rotation computed rightmost such that the β rotation is performed relative to the axis x rotated by the angle α, and where h11 = cosβ, h12 = sinβ sinα, h13 = sinβ cosα, h22 = cosα, h23 = -sinα, h31 = -sinβ, h32 = cosβ sinα, and h33 = cosβ cosα
  • the at least one application is executable by the processor for determining a homography H between the true scale plane and the image plane as: H = [f h11  f h12  f h13; 0  f h22  f h23; h31  h32  h33]
  • the at least one application is executable by the processor for determining the homography H as: H = [f cosβ  f sinβ sinα  f sinβ cosα; 0  f cosα  -f sinα; -sinβ  cosβ sinα  cosβ cosα] ≈ [f  fαβ  fβ; 0  f  -fα; -β  α  1], where the approximations cosθ ≈ 1 and sinθ ≈ θ are used for small angles α and β.
  • the imaging device comprises one of a zooming lens camera, a near- infrared imaging device, a short-wavelength infrared imaging device, a long-wavelength infrared imaging device, a radar device, a light detection and ranging device, a parabolic mirror telescope imager, a surgical endoscopic camera, a Computed tomography scanning device, a satellite imaging device, a sonar device, and a multi spectral sensor fusion system.
  • a computer readable medium having stored thereon program code executable by a processor for modeling an imaging device for use in calibration and image correction, the program code executable for defining a first 3D orthogonal coordinate system having an origin located at a focal point of the imaging device, a first axis of the first coordinate system extending along a direction of a line of sight of the imaging device; defining a second 3D orthogonal coordinate system having an origin located at a unitary distance from the focal point, a first axis of the second coordinate system extending along the direction of the line of sight, a second and a third axis of the second coordinate system substantially parallel to a second and a third axis of the first coordinate system respectively, the second and the third axis of the second coordinate system thereby defining a true scale plane square with the line of sight; defining a third 3D coordinate system having an origin located at a focal distance from the focal point, a first axis of the third coordinate system extending along the direction of
  • Figure 1 is a schematic diagram illustrating lens distortion
  • Figure 2 are schematic views illustrating barrel and pincushion lens geometric distortion
  • Figure 3 is a plan view illustrating edge dithering when two neighbouring pixel colours mix
  • Figure 4 is a schematic diagram illustrating the parameters that define the behaviour of a camera/lens combination in an ideal camera model representation assuming the image plane is square with the line of sight;
  • Figure 5 is a schematic diagram of the tilted axis assumption of a camera internal model where tilted axis compensation is added to the ideal camera representation of Figure 4;
  • Figure 6 is a schematic diagram of a new set of variables for a camera internal model, in accordance with an illustrative embodiment of the present invention.
  • Figure 7 is a schematic diagram of a radial distortion mode, in accordance with an illustrative embodiment of the present invention.
  • Figure 8a is a flowchart of a method for computing the location of an image point, in accordance with an illustrative embodiment of the present invention.
  • Figure 8c is a flowchart of the step of Figure 8a of applying a lens distortion model
  • Figure 8d is a flowchart of the step of Figure 8a of projecting on a tilted image plane / using an internal camera model
  • Figure 9a is a schematic diagram of a system for computing the location of an image point, in accordance with an illustrative embodiment of the present invention
  • Figure 9b is a block diagram showing an exemplary application running on the processor of Figure 9a;
  • Figure 10 is a distorted photograph view of a calibration target
  • Figure 11 are photographic views of a micro lens test camera with circuit board
  • Figure 12 is a combined illustration of target extraction
  • Figure 13 is a schematic diagram of a stereo pair used for measuring objects in 3D using two camera images simultaneously;
  • Figure 14 are photographs illustrating geometric distortion correction using a test camera;
  • Figure 16 is a graph illustrating red chromatic distortion, radial correction vs distance from image center (pixels);
  • Figure 17 is a graph illustrating blue chromatic distortion, radial correction vs distance from image center (pixels).
  • Figure 18 is a schematic illustration of the Bayer Pattern layout for a colour camera.
  • Lens distortion introduces the biggest error found in digital imaging. This is illustrated in Figures 1 and 2.
  • the fish eye effect is referred to as geometric distortion and curves straight lines.
  • Coloured shading at the edges of the image (referred to as « Blue Tinted Edge » and « Red Tinted Edge » in Figure 1 ) is referred to as chromatic distortion and is caused by the splitting of light in the lens of the imaging device (not shown).
  • dithering is the intermediate pixel colour encountered when an edge goes through a given pixel and both neighbouring colours mix.
  • the pixel colour is a weighted average of adjacent colour values, on either side of the edge, with respect to each colour's respective surface inside the pixel.
  • Edge dithering creates shading at object edges. Using colour images from a black and white target, colour edge shading is caused by chromatic distortion, whereas dithering appears in grey shades, as does geometric distortion. It is therefore desirable to isolate chromatic lens distortion from edge dithering or geometric distortion using edge colour.
  • Modelling a camera requires a mathematical model and a calibration procedure to measure the parameters that define the behaviour of a specific camera/lens combination.
  • While a camera is referred to herein, the proposed system and method also apply to other imaging devices, including, but not restricted to, zooming lens cameras; near-infrared (NIR), short-wavelength infrared (SWIR), and long-wavelength infrared (LWIR) imaging devices; Radar and Light Detection And Ranging (LIDAR) devices; parabolic mirror telescope imagers; surgical endoscopic cameras; computed tomography (CT) scanning devices; satellite imaging devices; sonar devices; and multispectral sensor fusion systems.
  • the ideal camera model has three components, as shown in Figure 4, namely:
  • Focal point O is the location in space where all images collapse to a single point; in front of the focal point O is the camera image plane (not shown).
  • Lens axis Z c crosses the image plane at two (2) right angles (i.e. is square therewith), defining the image center location (C x , C Y ).
  • the camera external model has proven accurate throughout the literature, defining two coordinate sets: 1- World (Xw Yw Zw) with origin set at (0,0,0); and 2- Camera (Xc Yc Zc) with origin at the focal point O.
  • the external camera model expresses the three rotations and the three translations (Tx Ty Tz) needed to align the camera coordinate set (Xc Yc Zc) with the world set of coordinates (Xw Yw Zw), and bring the focal point O to the world origin (0,0,0).
  • the external camera model therefore has six (6) degrees of freedom, namely the three rotation angles and the translations (Tx Ty Tz).
  • Parameter a is the horizontal image scale, perfectly aligned with the camera pixel grid array horizontal axis
  • the vertical scale is set to b, different from a;
  • the scale and orientation of the vertical axis of the image plane is tilted by skew parameter s relative to the axis Y c , where s is a scale measure of skew relative to the image scale.
  • skew parameter s is a scale measure of skew relative to the image scale.
  • the widespread tilted axis assumption however introduces a perspective bias, shifting all the other camera parameters, and should be replaced by a full 3D perspective model of the image plane that retains the camera image plane geometry. It is therefore proposed to introduce a new set of variables for the internal camera model, as shown in Figure 6 in which a model is represented in camera coordinates (starting from the focal point O).
  • the image center (C x> C Y ) remains the intersection between the lens axis Z c and the camera (i.e. image) plane.
  • Two scale independent simultaneous perspectives of an outside 3D world object (a point P thereof being located somewhere in the world at given coordinates relative to the axes (X c Y c Z c )) are considered.
  • This first plane represents the perfect 1 : 1 true scale projection of the 3D object on a plane having infinite dimensions in x and y.
  • point P [X Y Z 1] T in 3D world coordinates (X Y Z is given with respect to world coordinates (X w Y w Z w ), X' Y' Z' with respect to camera coordinates system (X c Yc Z c )) projects as:
  • the second perspective is the image plane itself, i.e. the output of the lens system.
  • the image plane is represented in Figure 6 by two axes intersecting at (Cx, CY). Since the camera plane at the focal distance f is off-squareness with the lens axis Zc, it needs five (5) parameters.
  • two rotation angles a and ⁇ with respect to both x and y axes are used to account for the tilting of the camera plane.
  • the x axis of the image plane is rotated by angle a while the y axis of the image plane is rotated by angle ⁇ , such that the image plane is tilted by angles a and ⁇ with respect to axes x and y, with the x and y axes of the image plane taken parallel to Xc and Yc at origin O initially, i.e. before any rotation.
  • the axes x and y are illustratively taken parallel to X c and Y c at origin O and reproduced on the image plane before any rotation is applied to tilt the image plane.
  • tilting of the image plane can be expressed by two (2) 3D rotations in space, namely a rotation about axis Yc by angle a and a second rotation about axis X c by angle ⁇ .
  • the x axis of the image plane being arbitrarily selected as aligned with the horizontal camera plane direction, there is therefore no need for a z axis rotation angle.
  • a z axis rotation angle may be desirable.
  • the three remaining degrees of freedom for the camera internal model are then the focal distance / (or camera image scale) and coordinates of the image center (C x , C Y ).
  • the top left 2x2 matrix partition in equation (3) represents the image plane x and y axes with skew parameter s, horizontal scale a, and vertical scale b.
  • the image plane x axis is aligned with the horizontal direction of the camera plane pixel array grid (not shown), accounting for the 0 value in position (2,1 ) of the K matrix.
  • the image plane y axis is tilted by s in the x direction as illustrated in Figure 5.
  • the last column represents the image center location (C x , C Y ).
  • the error in the tilted axis assumption of Figure 5 is visible in the lower left 1x2 matrix partition.
  • the two (2) terms of the lower left 1x2 partition should not be zero when the lens axis is off-squareness with the camera plane. When they are non-zero, these terms apply a perspective correction to x and y scales in the image plane as one moves away from the image center.
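
Equation (3) itself is not reproduced in this extract. For reference, the conventional tilted-axis internal matrix that the partition discussion above refers to can be written as follows; this is a reconstruction based on the surrounding description, not a quotation of the patent's own equation:

```latex
K =
\begin{bmatrix}
a & s & C_x \\
0 & b & C_Y \\
0 & 0 & 1
\end{bmatrix}
```

The top-left 2x2 partition carries the scales a, b and the skew s, the last column carries the image center (Cx, CY), and the lower-left 1x2 partition is identically zero, which is exactly the term the proposed perspective model makes non-zero (compare with the last row [-sinβ  cosβ sinα  cosβ cosα] of the homography H given further below).
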
  • the internal camera model is defined as a perspective transformation with five (5) degrees of freedom that relates the outside camera model projection in true 1 :1 scale to the image plane projection at focal distance / on a common line of sight Z c , and where the image plane is tilted by angles a and ⁇ with respect to axes x and y on the image plane, the x and y axes taken parallel to X c and Y c at origin O before any rotation.
  • Figure 6 shows (Cx, C Y ) at the line of sight Z c .
  • Z c is taken as the origin for all planes intersecting with the line of sight.
  • Initially, (Cx, CY) = (0, 0).
  • a shift of origin is applied to offset the image plane centre from (0, 0) to (C x , C Y ).
  • The last operation in equation (4) is the rescaling of the fourth (4th) coordinate to unity.
  • Pf defines the projection operation where element (4,3) is 1/f, f is the focal distance, R(y, β) is a β rotation matrix with respect to the y axis, and R(x, α) is an α rotation matrix with respect to the x axis.
  • The α rotation in equation (5) is computed rightmost, so the β rotation is performed relative to an image plane y axis rotated by angle α. It should be understood that the β rotation could be handled rightmost in equation (5), meaning that the α rotation would be performed relative to a β rotated x axis. Homogeneous equations read from right to left, and reversing the order of multiplication yields different mathematical formulations. Several models are possible.
  • Equation (8) is again the rescaling of the fourth (4th) coordinate to unity.
  • the tilted image plane coordinate (x", y") is a homographic transformation of the (x', y') coordinates in the true scale plane.
  • the image plane has five (5) degrees of freedom: plane tilting angles α and β, image center (Cx, CY) and focal distance f, giving the internal model.
  • lens distortion occurs between the two planes and has to be accounted for in the model.
  • calibration is finding a 3D correspondence between pairs of coordinates projected in the two planes, compensating for lens distortion.
  • the lens distortion model can be reduced to a purely radial function, both geometric and chromatic.
  • Many lens geometric distortion models were published. Some authors claim 1/20 pixel accuracy in removing geometric lens distortion. Overall, their basic criterion is more or less the same: lines that are straight in real life should appear straight in the image once geometric distortion is removed. Very few authors consider chromatic distortion in their lens model.
  • x' = x + x (k1 r² + k2 r⁴ + k3 r⁶) + p1 (r² + 2x²) + 2 p2 xy     (14)
  • (x', y') represents the new location of point (x, y), computed with respect to image center (Cx, CY)
  • k1, k2, and k3 are three terms of radial distortion
  • p1 and p2 are two terms of decentering distortion.
  • Calibration retrieves numerical values for parameters k1, k2, k3, p1, and p2.
  • Image analysis gives (x' y').
  • the undistorted (x y) position is found solving the two equations using a 2D search algorithm.
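
Only the x component of equation (14) survives in this extract; the y counterpart used below follows the usual Brown-Conrady form and is an assumption. The 2D search mentioned above can be as simple as a fixed-point iteration that repeatedly subtracts the estimated distortion displacement, sketched here:

```python
import numpy as np

def distort(xy, k1, k2, k3, p1, p2):
    """Forward distortion of equation (14) plus the (assumed) y counterpart,
    with (x, y) expressed relative to the image center (Cx, CY)."""
    x, y = xy
    r2 = x * x + y * y
    radial = k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x + x * radial + p1 * (r2 + 2 * x * x) + 2 * p2 * x * y
    yd = y + y * radial + p2 * (r2 + 2 * y * y) + 2 * p1 * x * y
    return np.array([xd, yd])

def undistort(xy_measured, k1, k2, k3, p1, p2, iters=20):
    """2D search for the undistorted (x, y): fixed-point iteration that
    repeatedly subtracts the current distortion displacement estimate."""
    xy_measured = np.asarray(xy_measured, dtype=float)
    xy = xy_measured.copy()
    for _ in range(iters):
        displacement = distort(xy, k1, k2, k3, p1, p2) - xy
        xy = xy_measured - displacement
    return xy
```
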
  • Referring now to Figure 8a, there is illustrated a method 100 for computing, using the proposed camera model, the location of an image point, as per the above.
  • a 3D point in space goes through three transformations to give an image point located on the image plane.
  • Once step 106 is performed, the location (x", y") of the camera image point corresponding to a 3D point (X, Y, Z) captured by the camera is obtained.
  • step 102 illustratively computes the proposed external camera model transformation and comprises receiving at step 108 the coordinates (X, Y, Z) of a 3D point P expressed with respect to the world coordinate system (Xw Yw Zw).
  • the external model image point is then output at step 110.
  • applying the lens distortion model at step 104 illustratively comprises receiving the external model image point (x, y) at step 112.
  • step 114 illustratively comprises computing r, r', and the distorted image point (x', y').
  • parameters k1 and k2 may be expanded. Indeed, as discussed above, in its simplest form, geometric distortion can be modelled as a fully radial displacement.
  • the new distorted distance r' knowing r is given by: r' = r + k1 r³ + k2 r⁵
  • the distorted image point (x', y') can be computed knowing θ or using similar triangle properties: x' = x (r'/r) and y' = y (r'/r)
  • the distorted image point (x', y') is then output at step 116.
  • obtaining 106 the internal camera model illustratively comprises receiving the distorted image point (x', y') at step 118. From the distorted image point (x', y') and from the five degrees of freedom of the internal camera model, namely α and β (image plane tilt angles), the focal distance f, and the image center coordinates (Cx, CY), the tilted image plane coordinates (x", y") are computed at step 120 using the homographic transformation given above.
  • lens distortion can be modeled with respect to the image plane scale.
  • an imaginary intermediate plane of projection has to be added to the model, located at f along Zc, with (0, 0) center, and perfectly square with lens axis Zc.
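
Putting the three steps of method 100 together, a minimal sketch follows, using the purely radial two-parameter distortion and the tilted-plane homography described above; all numeric parameter values in the example call are placeholders, not calibrated values:

```python
import numpy as np

def external_projection(P_world, R, T):
    """Step 102: rigid transform to camera coordinates, then 1:1 projection
    onto the true scale plane at unit distance along the line of sight."""
    Xc = R @ np.asarray(P_world, dtype=float) + T
    return Xc[:2] / Xc[2]                       # (x, y) on the true scale plane

def radial_distortion(xy, k1, k2):
    """Step 104: purely radial geometric distortion, r' = r + k1*r**3 + k2*r**5."""
    xy = np.asarray(xy, dtype=float)
    r = np.hypot(xy[0], xy[1])
    if r == 0.0:
        return xy
    r_d = r + k1 * r**3 + k2 * r**5
    return xy * (r_d / r)                       # same direction, new radius

def internal_projection(xy_d, f, alpha, beta, Cx, Cy):
    """Step 106: homographic projection onto the image plane tilted by alpha
    and beta, followed by the shift of origin to the image center (Cx, Cy)."""
    x, y = xy_d
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    u = f * (cb * x + sb * sa * y + sb * ca)
    v = f * (ca * y - sa)
    w = -sb * x + cb * sa * y + cb * ca
    return np.array([u / w + Cx, v / w + Cy])   # (x'', y'') on the image plane

# Illustrative call with placeholder values (identity pose, small tilt angles).
xy = external_projection([0.6, -0.3, 3.0], np.eye(3), np.zeros(3))
xy_d = radial_distortion(xy, k1=-0.25, k2=0.05)
print(internal_projection(xy_d, f=1000.0, alpha=0.002, beta=-0.001, Cx=640.0, Cy=480.0))
```
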
  • the system 200 comprises one or more server(s) 202 accessible via the network 204.
  • server(s) 202 accessible via the network 204.
  • a series of servers corresponding to a web server, an application server, and a database server may be used. These servers are all represented by server 202.
  • the server 202 may be accessed by a user using one of a plurality of devices 206 adapted to communicate over the network 204.
  • the devices 206 may comprise any device, such as a personal computer, a tablet computer, a personal digital assistant, a smart phone, or the like, which is configured to communicate over the network 204, such as the Internet, the Public Switched Telephone Network (PSTN), a cellular network, or others known to those skilled in the art.
  • the server 202 may also be integrated with the devices 206, either as a downloaded software application, a firmware application, or a combination thereof. It should also be understood that several devices as in 206 may access the server 202 at once.
  • Imaging data may be acquired by an imaging device 207 used for calibration and image correction.
  • the device 207 may be separate from (as illustrated) the devices 206 or integral therewith.
  • the imaging data may comprise one or more images of a real world 3D object (not shown), such as a calibration target as will be discussed further below.
  • the imaging data may then be processed at the server 202 to obtain a model of the imaging device 207 in the manner described above with reference to Figure 8a, Figure 8b, Figure 8c, and Figure 8d.
  • the imaging data is illustratively acquired in real-time (e.g. at a rate of 30 images per second) for an object, such as a moving object whose movement in space is being monitored.
  • the server 202 may then process the imaging data to determine an image point associated with each point of each acquired image.
  • the imaging data may be processed to determine an image point associated with each one of one or more points of interest in the image.
  • the server 202 may comprise, amongst other things, a processor 208 coupled to a memory 210 and having a plurality of applications 212a ... 212n running thereon. It should be understood that while the applications 212a ... 212n presented herein are illustrated and described as separate entities, they may be combined or separated in a variety of ways.
  • One or more databases 214 may be integrated directly into the memory 210 or may be provided separately therefrom and remotely from the server 202 (as illustrated). In the case of a remote access to the databases 214, access may occur via any type of network 204, as indicated above.
  • the various databases 214 described herein may be provided as collections of data or information organized for rapid search and retrieval by a computer.
  • the databases 214 may be structured to facilitate storage, retrieval, modification, and deletion of data in conjunction with various data-processing operations.
  • the databases 214 may consist of a file or sets of files that can be broken down into records, each of which consists of one or more fields. Database information may be retrieved through queries using keywords and sorting commands, in order to rapidly search, rearrange, group, and select the field.
  • the databases 214 may be any organization of data on a data storage medium, such as one or more servers.
  • the databases 214 are secure web servers and Hypertext Transport Protocol Secure (HTTPS) capable of supporting Transport Layer Security (TLS), which is a protocol used for access to the data.
  • Communications to and from the secure web servers may be secured using Secure Sockets Layer (SSL).
  • Identity verification of a user may be performed using usernames and passwords for all users.
  • Various levels of access rights may be provided to multiple levels of users.
  • any known communication protocols that enable devices within a computer network to exchange information may be used, for example the Internet Protocol (IP), User Datagram Protocol (UDP), Transmission Control Protocol (TCP), Dynamic Host Configuration Protocol (DHCP), Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), Telnet Remote Protocol (Telnet), and Secure Shell Remote Protocol (SSH).
  • the memory 210 accessible by the processor 208 may receive and store data.
  • the memory 210 may be a main memory, such as a high speed Random Access Memory (RAM), or an auxiliary storage unit, such as a hard disk, flash memory, or a magnetic tape drive.
  • the memory 210 may be any other type of memory, such as a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), or optical storage media such as a videodisc and a compact disc.
  • the processor 208 may access the memory 210 to retrieve data.
  • the processor 208 may be any device that can perform operations on data. Examples are a central processing unit (CPU), a front-end processor, a microprocessor, and a network processor.
  • the applications 212a ... 212n are coupled to the processor 208 and configured to perform various tasks as explained below in more detail.
  • An output may be transmitted to the devices 206.
  • Figure 9b is an exemplary embodiment of an application 212a running on the processor 208.
  • the application 212a may comprise a receiving module 302 for receiving the imaging data from the imaging device 207 and obtaining therefrom coordinates of a point of a real 3D world object as captured by the imaging device 207, an external model projection module 304 enabling the method illustrated and described in reference to Figure 8b, a lens distortion compensation module 306 enabling the method illustrated and described in reference to Figure 8c, an internal model projection module 308 enabling the method illustrated and described in reference to Figure 8d, and an output module 310 for outputting coordinates of a camera image point, as computed by the internal model defining module 308.
  • the experimental proposed setup is intended to be field usable, even with low resolution short-wavelength infrared (SWIR) imagers.
  • a Levenberg-Marquardt search algorithm may be used to compute the model parameters. It should be understood that algorithms other than the Levenberg-Marquardt algorithm may apply. For instance, the steepest descent or Newton algorithms may be used. The accuracy improvements achieved with the proposed technique allowed the use of a least square sum of errors criterion without bias.
  • the error is defined as the image predicted target position from the model and 3D data set, minus the corresponding real image measurement in 2D.
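
A sketch of that least-squares criterion driven by SciPy's Levenberg-Marquardt implementation is shown below; `project` stands for the full forward camera model (external model, lens distortion, internal model) and is assumed to be supplied by the caller, as are the initial guess `x0` and the measured target data:

```python
import numpy as np
from scipy.optimize import least_squares

def make_residuals(project, targets_3d, targets_2d):
    """Error vector: image positions predicted by the model for the 3D target
    data, minus the corresponding real 2D image measurements."""
    targets_2d = np.asarray(targets_2d, dtype=float)
    def residuals(params):
        pred = np.array([project(P, params) for P in targets_3d])
        return (pred - targets_2d).ravel()
    return residuals

# 'project(P, params)' and 'x0' are assumed to be provided by the caller;
# method="lm" selects SciPy's Levenberg-Marquardt driver.
# result = least_squares(make_residuals(project, pts3d, pts2d), x0, method="lm")
```
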
  • The calibration target uses 1" diameter circles at 2" center-to-center spacing. Using circles ensures that no corner should be detected even with a highly pixelized image, see Figure 12.
  • In step 1, an initial estimate for the edge points is recovered, adding compensation for edge orientation bias.
  • In step 2, the initial ellipse fit is used to estimate local curvature and correct the edge location.
  • the leftmost camera parameter set is obtained from the most accurate model published, tested on our own experimental data.
  • the rightmost set was computed from the proposed model, where the lens model was taken as a purely radial geometric distortion, and where the internal camera model used the proposed implementation.
  • the first six (6) lines of the above table are the external camera parameters, three (3) angles and three (3) positions needed to compute [R3x3 T3x1].
  • the next five (5) lines are the internal camera parameters; we modified our parameter representation to fit the generally used model from Figure 5.
  • Our degrees of freedom use a different mathematical formulation.
  • the remaining two (2) lines show the major lens geometric distortion parameters k1 and k2. These two are present in most models and account for most of fish eye geometric distortion.
  • Figure 13 shows a stereo pair typically used for measuring objects in 3D, using two (2) simultaneous camera images. A full discussion on triangulation is given in [5].
  • O and O' are the optical centers for the two cameras (not shown), and both lens axes project at right angles on the image planes at the image centers, respectively (Cx, CY, f) and (Cx', CY', f') (not shown for clarity), where (Cx, CY) is the origin of the image plane, and f the distance between O and the image plane, as shown in Figure 4. Similarly, (Cx', CY') is the origin of the image plane, and f' the distance between O' and the image plane.
  • Both cameras are seeing a common point M on the object (not shown). M projects in both camera images as points m and m'.
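
As an illustration of the triangulation step (not the patent's own formulation), the 3D point M can be recovered from the two back-projected rays through m and m' by taking the midpoint of their common perpendicular, assuming distortion has already been removed and each image point has been converted to a ray direction in world coordinates:

```python
import numpy as np

def triangulate_midpoint(O1, d1, O2, d2):
    """Recover M from two rays O + t*d (one per camera, through m and m'):
    return the midpoint of the shortest segment joining the two rays."""
    O1, d1 = np.asarray(O1, float), np.asarray(d1, float)
    O2, d2 = np.asarray(O2, float), np.asarray(d2, float)
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    b = O2 - O1
    a11, a12, a22 = d1 @ d1, d1 @ d2, d2 @ d2
    denom = a11 * a22 - a12 * a12              # ~0 only for (near-)parallel rays
    t1 = (a22 * (d1 @ b) - a12 * (d2 @ b)) / denom
    t2 = (a12 * (d1 @ b) - a11 * (d2 @ b)) / denom
    return 0.5 * ((O1 + t1 * d1) + (O2 + t2 * d2))

# Two cameras 200 mm apart, both aimed at a point roughly 1 m away.
print(triangulate_midpoint([0, 0, 0], [0.1, 0.0, 1.0], [200, 0, 0], [-0.1, 0.0, 1.0]))
```
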
  • the first four (4) requirements for 3D telemetric accuracy are found through camera calibration, the fifth from sub pixel image feature extraction. The last is the triangulation 3D recovery itself.
  • the first four (4) error dependencies described above, namely the optical centers O and O', the focal distances f and f', the image centers (Cx, CY) and (Cx', CY'), and the lens axis orientations Zc and Zc', are subject to the discovered camera model bias discussed above.
  • Feature point extraction (m and m') is subject to the edge orientation bias and corner detection bias that had to be dealt with at calibration.
  • bias sources include the following:
  • JPEG image filtering at sub pixel level (variable with the JPEG quality parameter)
  • every lens parameter is 'polluted' by the internal camera model bias referred to as the tilted axis assumption.
  • the bias can be removed by changing the tilted assumption for an accurate perspective model of the 3D internal camera image plane.
  • table 1 also shows that lens distortion parameters are under-evaluated, with the minus sign on k1 meaning barrel distortion.
  • Range and aim measurements are also biased and related to the error percentage on focal distance f since a camera gives a scaled measure. It also prevents the accurate modelling of the zooming lens camera.
  • focal point O moves along the lens axis Zc. From calibration, O is found by knowing the image center (Cx, CY), f away at a right angle from the image plane.
  • the proposed example shows a systematic bias in those parameters. It gets even worse when considering run out in the lens mechanism since it moves the lens axis Z c . Without the proposed modification to the camera model, it then becomes impossible to model a zooming lens.
  • Modeling of the zooming lens camera requires plotting the displacement of focal point O in space.
  • the only way to evaluate the mechanical quality of the zooming lens therefore depends on the accurate knowledge of image center (Cx, CY) and f.
  • the zooming lens trade-off is zooming in to gain added accuracy when needed, at the cost of losing accuracy to assembly tolerances in the lens mechanism.
  • Figure 14 shows how lens distortion is removed from the image. Chromatic distortion is not visible on a black and white image.
  • chromatic distortion target displacement is shown amplified by fifty (50) times.
  • Target positions are shown for the Red Green and Blue (RGB) camera colour channels, and are grouped by clusters of three (3).
  • the 'x' or cross sign marker symbol indicates the target extraction in Blue
  • the '+' or plus sign marker symbol indicates the target extraction in Red
  • the dot or point marker symbol indicates the target extraction in Green.
  • the visible spectrum spread pushes the Red target centres outwards, and the Blue target centers inwards with respect to Green.
  • the graph of Figure 15 shows a mostly radial behaviour.
  • the imaginary lines joining Red Green and Blue centers for any given target location tend to line up and aim towards the image center indicated by the circled plus sign marker symbol close to the (500, 400) pixel coordinate.
  • Bayer Pattern colour cameras give a single colour signal for each given pixel, Red, Green, or Blue, as indicated by an R, G, or B prefix in the pixel number of Figure 18. Missing colour information is interpolated using neighbouring pixel information.
  • the missing G13 value is computed as:
  • In step two, we compute missing B and R values using known G for edge sensing, assuming edges in B and R are geometrically found in the same image plane locations as G edges.
  • Bayer pattern recovery requires adapting to compensate for 'colour shifting' edge location as we scan from B to G to R pixels.
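
The exact interpolation formula for the missing G13 value is not reproduced in this extract. The sketch below shows a typical edge-sensing rule of the kind described (interpolate along whichever direction has the smaller green gradient); the neighbour indexing and the tie-breaking average are assumptions for illustration:

```python
import numpy as np

def green_at_rb(G, row, col):
    """Edge-sensing estimate of the missing G value at a red or blue Bayer site.
    Interpolate along the direction whose green gradient is smaller, so that
    edges are not blurred across. Illustrative only."""
    left, right = float(G[row, col - 1]), float(G[row, col + 1])
    up, down = float(G[row - 1, col]), float(G[row + 1, col])
    dh, dv = abs(left - right), abs(up - down)
    if dh < dv:
        return 0.5 * (left + right)    # intensity changes less left-right: interpolate horizontally
    if dv < dh:
        return 0.5 * (up + down)       # intensity changes less up-down: interpolate vertically
    return 0.25 * (left + right + up + down)
```
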
  • a software approach creates an open integration architecture
  • the computer generated image has ideal perspective and known focal length. Since a computer generated image is perfectly pinhole, created from a set value for f, it stands to reason to correct the camera image for distortion and fit it to the same scale as the synthetic image.
  • any lens system will exhibit distortion at some level.
  • the earth's atmosphere also adds distortion which can only be compensated for when the lens distortion is accurately known.
  • under-compensated geometric distortion will build up curvature, and biased perspective as caused by the tilted axis assumption will create a shape alteration: loss of squareness, loss of verticality...
  • the proposed approach is desirable for zooming lens telemetry, increases speed and accuracy in wide angle lens application, and allows system miniaturization in two ways. Firstly by providing added accuracy from smaller lens systems, and secondly, filtering through software allows for simpler optics. It provides the best trade-off for accuracy, speed, cost, bulk, weight, maintenance and upgradeability.
  • the tilted axis assumption creates a major bias and has to be replaced by a perspective model of the image plane that retains the camera image plane 3D geometry: horizontal and vertical image scales are equal and at right angle.
  • the tilted axis assumption introduces a calibration bias showing on 3D triangulation since the image center is out of position.
  • the two (2) pixel image center bias dominates every other error in the triangulation process since image features can be extracted to 1/4 pixel accuracy.
  • Sub pixel bias sources include, but are not restricted to:
  • the perspective model for the internal camera image plane is needed to locate the displacement of the lens focal point in a zooming lens.
  • a software correction approach increases speed and accuracy in wide angle lens application, and allows system miniaturization in two ways. Firstly by providing added accuracy from smaller lens systems, and secondly, filtering through software allows for simpler optics.
  • Software model/calibration is the only technique for improving camera performance beyond hardware limitations.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Studio Devices (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Geometry (AREA)
  • Measurement Of Optical Distance (AREA)
  • Endoscopes (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention relates to a camera modelling and calibration system and method using a new set of variables to compensate for imperfections in line of sight axis squareness with camera plane and which increases accuracy in measuring distortion introduced by image curvature caused by geometric and chromatic lens distortion and wherein the camera image plane is represented from a full 3D perspective projection.

Description

SYSTEM AND METHOD FOR IMAGING DEVICE MODELLING AND CALIBRATION
CROSS-REFERENCE TO RELATED APPLICATIONS
This patent application claims priority of Canadian Application Serial No. 2,819,956, filed on July 2, 2013.
TECHNICAL FIELD
The present invention relates to a system and method for imaging device modelling and calibration that compensates for imperfections in line of sight axis squareness with the image plane of the imaging device.
BACKGROUND
Calibration of digital cameras and other imaging devices seeks to create a mathematical model of how the image 'prints' through the lens on the imaging device's surface. The procedure first uses a picture from a calibration target with accurately known tolerance, and extracts target elements from the image. Finally, a mathematical model relates the image information with the real three-dimensional (3D) target information. Once calibrated, the imaging device can then be used to map real world objects using a scale factor, the focal distance f. When working from off-the-shelf cameras and lenses, we need to calibrate the camera to compensate for the tolerance on the lens focal distance, where the tolerance can be as high as 10%.
Moreover, once the model is accurately known, it can then be used to recreate a perfect camera image, also referred to as pinhole, needed for almost every high end automated imaging system. Through software image correction, we can compensate for image errors introduced by the imperfect nature of lenses, fish eye image deformation called geometric distortion, and rainbow light splitting in the lens optics called chromatic distortion. Several imaging devices will exhibit an off-squareness line of sight bias with respect to the image plane. In order to properly measure image distortion, the off-squareness of the image plane with respect to the lens line of sight needs to be compensated for. Known calibration techniques use the tilted axis assumption for this purpose. However, this assumption has proven to bias every camera parameter in the model, causing systematic measurement errors. As a result, a scale-size-distortion bias is introduced in the image that every other camera parameter seeks to compensate, biasing those camera parameters as well. In 3D scanning or telemetry, this translates into a geometry and location bias of 3D objects when reconstructing from a pair of simultaneous camera images. There is therefore a need to improve on existing calibration and modelling techniques for imaging devices.
SUMMARY OF INVENTION
The proposed calibration and modelling technique introduces an accurate perspective correction to account for assembly tolerances in the imaging device or camera/lens system, causing the lens axis to be off-squareness with the image plane. Accurate knowledge of camera plane and lens assembly removes a systematic bias in telemetry systems and 3D scanning using a digital camera or a camera stereo pair, yields an accurate focal length (image scale) measurement, locates the true image center position on the camera plane, and increases accuracy in measuring distortion introduced by image curvature (geometric distortion) and rainbow light splitting in the lens optics (chromatic distortion).
Accurate knowledge of camera plane and lens assembly increases the computational efficiency and accuracy in removing lens distortion, geometric and chromatic.
Removing lens distortion increases the image compression ratio without adding any loss.
According to a first broad aspect, there is described a computer-implemented method for modeling an imaging device for use in calibration and image correction, the method comprising defining a first 3D orthogonal coordinate system having an origin located at a focal point of the imaging device, a first axis of the first coordinate system extending along a direction of a line of sight of the imaging device; defining a second 3D orthogonal coordinate system having an origin located at a unitary distance from the focal point, a first axis of the second coordinate system extending along the direction of the line of sight, a second and a third axis of the second coordinate system substantially parallel to a second and a third axis of the first coordinate system respectively, the second and the third axis of the second coordinate system thereby defining a true scale plane square with the line of sight; defining a third 3D coordinate system having an origin located at a focal distance from the focal point, a first axis of the third coordinate system extending along the direction of the line of sight, a second and a third axis of the third coordinate system respectively tilted by a first and a second angle relative to an orientation of the second and the third axis of the first coordinate system, the second and the third axis of the third coordinate system thereby defining an image plane off-squareness relative to the line of sight; receiving a set of 3D coordinates associated with a point of a real world 3D object captured by the imaging device; computing a projection of the point onto the true scale plane, thereby obtaining a first set of planar coordinates, and onto the image plane, thereby obtaining a second set of planar coordinates; and outputting the second set of planar coordinates indicative of a location of an image point corresponding to the point of the 3D object..
In some embodiments, the second coordinate system is defined such that the true scale plane establishes an entry to a lens system of the imaging device and the projection on the true scale plane expresses an output of an external model of the imaging device and the third coordinate system is defined such that the image plane establishes an output to the lens system and the projection on the image plane expresses an output of an internal model of the imaging device.
In some embodiments, the received set of 3D coordinates is [x y z 1]T and the projection of the point of the 3D object onto the true scale plane is computed as:
P1 [x y z 1]ᵀ = [x  y  z  z]ᵀ ≈ [x/z  y/z  1  1]ᵀ
where ≈ is a scale equivalent operator and P1 defines a projection operation onto the true scale plane with respect to the first coordinate system.
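As a minimal sketch, the projection onto the true scale plane can be written in homogeneous coordinates as below, assuming P1 takes the 4x4 form implied by the equation above (names and layout are illustrative):

```python
import numpy as np

# Assumed 4x4 form of P1: copy z into the fourth (homogeneous) coordinate,
# so that rescaling that coordinate to unity yields [x/z, y/z, 1, 1].
P1 = np.array([[1.0, 0.0, 0.0, 0.0],
               [0.0, 1.0, 0.0, 0.0],
               [0.0, 0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0, 0.0]])

def project_true_scale(point_h):
    """point_h = [x, y, z, 1] in camera coordinates; returns the 1:1 projection
    on the true scale plane at unit distance along the line of sight."""
    q = P1 @ np.asarray(point_h, dtype=float)
    return q / q[3]   # scale equivalence: rescale the fourth coordinate to unity

# Example: a point three units along the line of sight.
print(project_true_scale([0.6, -0.3, 3.0, 1.0]))   # -> [0.2, -0.1, 1.0, 1.0]
```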
In some embodiments, the projection of the point of the 3D object onto the image plane is computed as:
Pf R(y, β) R(x, α) [x y z 1]ᵀ = [h11x + h12y + h13z,  h22y + h23z,  h31x + h32y + h33z,  1/f (h31x + h32y + h33z)]ᵀ
≈ [f(h11x + h12y + h13z)/(h31x + h32y + h33z),  f(h22y + h23z)/(h31x + h32y + h33z),  f,  1]ᵀ
where Pf defines a projection operation onto the image plane, f is the focal distance, α is the first angle, β is the second angle, R(x, α) is an α rotation matrix with respect to an axis x of the image plane, the axis x defined as substantially parallel to the second axis of the first coordinate system before the α rotation is performed, R(y, β) is a β rotation matrix with respect to an axis y of the image plane, the axis y defined as substantially parallel to the third axis of the first coordinate system before the β rotation is performed, the α rotation computed rightmost such that the β rotation is performed relative to the axis x rotated by the angle α, and where
h11 = cosβ,
h12 = sinβ sinα,
h13 = sinβ cosα,
h22 = cosα,
h23 = -sinα,
h31 = -sinβ,
h32 = cosβ sinα, and
h33 = cosβ cosα. In some embodiments, the method further comprises determining a homography H between the true scale plane and the image plane as:
H = [ f h11   f h12   f h13 ]
    [   0     f h22   f h23 ]
    [  h31     h32     h33  ]
where h31 and h32 are non-zero elements applying a perspective correction to x and y scales in the image plane and the second set of planar coordinates (x", y") is a homographic transformation of a distorted position (x', y') of an image of the point on the true scale plane, the homographic transformation expressed as:
[x" y" 1]ᵀ ≈ [u v w]ᵀ = H [x' y' 1]ᵀ
where u = f(cosβ x' + sinβ sinα y' + sinβ cosα),
v = f(cosα y' - sinα),
w = -sinβ x' + cosβ sinα y' + cosβ cosα,
x" = u/w + Cx, and
y" = v/w + CY with (Cx, CY) being a position of the origin of the third coordinate system. In some embodiments, the homography H is determined as:
H = [ f cosβ   f sinβ sinα   f sinβ cosα ]     [  f   fαβ   fβ ]
    [   0       f cosα        -f sinα    ]  ≈  [  0    f   -fα ]
    [ -sinβ    cosβ sinα      cosβ cosα  ]     [ -β    α     1 ]
where ≈ is the scale equivalent operator and the approximations cosθ ≈ 1 and sinθ ≈ θ are used for small angles α and β.
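The small-angle form can be checked numerically against the exact homography. The sketch below builds both forms of H from f, α and β and compares them; the numeric values are placeholders for illustration:

```python
import numpy as np

def homography_exact(f, alpha, beta):
    """Exact homography H between the true scale plane and the tilted image plane."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    return np.array([[f * cb, f * sb * sa, f * sb * ca],
                     [0.0,    f * ca,     -f * sa],
                     [-sb,    cb * sa,     cb * ca]])

def homography_small_angle(f, alpha, beta):
    """Approximated H using cos(theta) ~ 1 and sin(theta) ~ theta."""
    return np.array([[f,      f * alpha * beta, f * beta],
                     [0.0,    f,               -f * alpha],
                     [-beta,  alpha,            1.0]])

# For tilt angles of a fraction of a degree the two forms agree closely.
f, alpha, beta = 1000.0, np.radians(0.4), np.radians(-0.3)
print(np.max(np.abs(homography_exact(f, alpha, beta) - homography_small_angle(f, alpha, beta))))
```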
In some embodiments, the method further comprises compensating for a distortion of a lens of the imaging device at the true scale plane, the compensating comprising applying a lens distortion model defined by:
r' = r + k1 r³ + k2 r⁵
where the first set of planar coordinates comprises an undistorted position (x, y) of an image of the point on the true scale plane expressed in radial coordinates (r, θ), with r² = x² + y² and tanθ = y/x, (x', y') represents a distorted position of (x, y) at an output of the lens before projection of the point on the image plane, r' is a distorted radial distance computed on the basis of (x', y'), and k1 and k2 are geometric distortion parameters of the lens.
According to a second broad aspect, there is described a system for modeling an imaging device for use in calibration and image correction, the system comprising a memory; a processor; and at least one application stored in the memory and executable by the processor for defining a first 3D orthogonal coordinate system having an origin located at a focal point of the imaging device, a first axis of the first coordinate system extending along a direction of a line of sight of the imaging device; defining a second 3D orthogonal coordinate system having an origin located at a unitary distance from the focal point, a first axis of the second coordinate system extending along the direction of the line of sight, a second and a third axis of the second coordinate system substantially parallel to a second and a third axis of the first coordinate system respectively, the second and the third axis of the second coordinate system thereby defining a true scale plane square with the line of sight; defining a third 3D coordinate system having an origin located at a focal distance from the focal point, a first axis of the third coordinate system extending along the direction of the line of sight, a second and a third axis of the third coordinate system respectively tilted by a first and a second angle relative to an orientation of the second and the third axis of the first coordinate system, the second and the third axis of the third coordinate system thereby defining an image plane off-squareness relative to the line of sight; receiving a set of 3D coordinates associated with a point of a real world 3D object captured by the imaging device; computing a projection of the point onto the true scale plane, thereby obtaining a first set of planar coordinates, and onto the image plane, thereby obtaining a second set of planar coordinates; and outputting the second set of planar coordinates indicative of a location of an image point corresponding to the point of the 3D object.
In some embodiments, the at least one application is executable by the processor for defining the second coordinate system such that the true scale plane establishes an entry to a lens system of the imaging device and the projection on the true scale plane expresses an output of an external model of the imaging device and defining the third coordinate system such that the image plane establishes an output to the lens system and the projection on the image plane expresses an output of an internal model of the imaging device.
In some embodiments, the at least one application is executable by the processor for receiving the set of 3D coordinates as [x y z 1]T and computing the projection of the point of the 3D object onto the true scale plane as:
P1 [x y z 1]ᵀ = [x  y  z  z]ᵀ ≈ [x/z  y/z  1  1]ᵀ
where ≈ is a scale equivalent operator and P1 defines a projection operation onto the true scale plane with respect to the first coordinate system.
In some embodiments, the at least one application is executable by the processor for computing the projection of the point of the 3D object onto the image plane as:
Figure imgf000007_0001
hux + h 2y + l 3z h3 x + h32y + h33z
1 / / (h3lx + ^y + h33z)
f(hnx + hny +
f(h22y + h23
Figure imgf000007_0002
where Pf defines a projection operation onto the image plane, / is the focal distance, a is the first angle, β is the second angle, R(x, a) is an a rotation matrix with respect to an axis x of the image plane, the axis x defined as substantially parallel to the second axis of the first coordinate system before the arotation is performed, R(y, β) is a β rotation matrix with respect to an axis y of the image plane, the axis y defined as substantially parallel to the third axis of the first coordinate system before the β rotation is performed, the a rotation computed rightmost such that the β rotation is performed relative to the axis x rotated by the angle a, and where
h11 = cosβ,
h12 = sinβ sinα,
h13 = sinβ cosα,
h22 = cosα,
h23 = -sinα,
h31 = -sinβ,
h32 = cosβ sinα, and
h33 = cosβ cosα.
In some embodiments, the at least one application is executable by the processor for determining a homography H between the true scale plane and the image plane as:

H = [f h11   f h12   f h13]
    [0       f h22   f h23]
    [h31     h32     h33  ]

where h31 and h32 are non-zero elements applying a perspective correction to x and y scales in the image plane and the second set of planar coordinates (x", y") is a homographic transformation of a distorted position (x', y') of an image of the point on the true scale plane, the homographic transformation expressed as:

[u v w]^T = H [x' y' 1]^T
where u = f(cosβ x' + sinβ sinα y' + sinβ cosα),
v = f(cosα y' - sinα),
w = -sinβ x' + cosβ sinα y' + cosβ cosα,
x" = u/w + Cx, and
y" = v/w + CY, with (Cx, CY) being a position of the origin of the third coordinate system.
In some embodiments, the at least one application is executable by the processor for determining the homography H as:

H = [f cosβ   f sinβ sinα   f sinβ cosα]   [f    fαβ   fβ ]
    [0        f cosα        -f sinα    ] ≈ [0    f     -fα]
    [-sinβ    cosβ sinα     cosβ cosα  ]   [-β   α     1  ]

where the approximation cosθ ≈ 1 and sinθ ≈ θ is used for small angles α and β. In some embodiments, the at least one application is executable by the processor for compensating for a distortion of a lens of the imaging device at the true scale plane, the compensating comprising applying a lens distortion model defined by:
r' = r + k1 r^3 + k2 r^5
where the first set of planar coordinates comprises an undistorted position (x, y) of an image of the point on the true scale plane, expressed in radial coordinates (r, θ) with r^2 = x^2 + y^2 and tanθ = y/x, (x', y') represents a distorted position of (x, y) at an output of the lens before projection of the point on the image plane, r' is a distorted radial distance computed on the basis of (x', y'), and k1 and k2 are geometric distortion parameters of the lens.
In some embodiments, the imaging device comprises one of a zooming lens camera, a near- infrared imaging device, a short-wavelength infrared imaging device, a long-wavelength infrared imaging device, a radar device, a light detection and ranging device, a parabolic mirror telescope imager, a surgical endoscopic camera, a Computed tomography scanning device, a satellite imaging device, a sonar device, and a multi spectral sensor fusion system.
According to a third broad aspect, there is described a computer readable medium having stored thereon program code executable by a processor for modeling an imaging device for use in calibration and image correction, the program code executable for defining a first 3D orthogonal coordinate system having an origin located at a focal point of the imaging device, a first axis of the first coordinate system extending along a direction of a line of sight of the imaging device; defining a second 3D orthogonal coordinate system having an origin located at a unitary distance from the focal point, a first axis of the second coordinate system extending along the direction of the line of sight, a second and a third axis of the second coordinate system substantially parallel to a second and a third axis of the first coordinate system respectively, the second and the third axis of the second coordinate system thereby defining a true scale plane square with the line of sight; defining a third 3D coordinate system having an origin located at a focal distance from the focal point, a first axis of the third coordinate system extending along the direction of the line of sight, a second and a third axis of the third coordinate system respectively tilted by a first and a second angle relative to an orientation of the second and the third axis of the first coordinate system, the second and the third axis of the third coordinate system thereby defining an image plane off-squareness relative to the line of sight; receiving a set of 3D coordinates associated with a point of a real world 3D object captured by the imaging device; computing a projection of the point onto the true scale plane, thereby obtaining a first set of planar coordinates, and onto the image plane, thereby obtaining a second set of planar coordinates; and outputting the second set of planar coordinates indicative of a location of an image point corresponding to the point of the 3D object.
BRIEF DESCRIPTION OF THE DRAWINGS
Further features and advantages of the present invention will become apparent from the following detailed description, taken in combination with the appended drawings, in which:
Figure 1 is a schematic diagram illustrating lens distortion;
Figure 2 are schematic views illustrating barrel and pincushion lens geometric distortion;
Figure 3 is a plan view illustrating edge dithering when two neighbouring pixel colours mix;
Figure 4 is a schematic diagram illustrating the parameters that define the behaviour of a camera/lens combination in an ideal camera model representation assuming the image plane is square with the line of sight;
Figure 5 is a schematic diagram of the tilted axis assumption of a camera internal model where tilted axis compensation is added to the ideal camera representation of Figure 4;
Figure 6 is a schematic diagram of a new set of variables for a camera internal model, in accordance with an illustrative embodiment of the present invention;
Figure 7 is a schematic diagram of a radial distortion mode, in accordance with an illustrative embodiment of the present invention;
Figure 8a is a flowchart of a method for computing the location of an image point, in accordance with an illustrative embodiment of the present invention;
Figure 8b is a flowchart of the step of Figure 8a of projecting on plane f = 1 using an external camera model;
Figure 8c is a flowchart of the step of Figure 8a of applying a lens distortion model;
Figure 8d is a flowchart of the step of Figure 8a of projecting on a tilted image plane f using an internal camera model;
Figure 9a is a schematic diagram of a system for computing the location of an image point, in accordance with an illustrative embodiment of the present invention;
Figure 9b is a block diagram showing an exemplary application running on the processor of Figure 9a;
Figure 10 is a distorted photograph view of a calibration target;
Figure 11 are photographic views of a micro lens test camera with circuit board;
Figure 12 is a combined illustration of target extraction;
Figure 13 is a schematic diagram of a stereo pair used for measuring objects in 3D using two camera images simultaneously;
Figure 14 are photographs illustrating geometric distortion correction using a test camera;
Figure 15 is a graph illustrating chromatic distortion for an f = 4 mm Cosmicar® C Mount lens;
Figure 16 is a graph illustrating red chromatic distortion, radial correction vs distance from image center (pixels);
Figure 17 is a graph illustrating blue chromatic distortion, radial correction vs distance from image center (pixels); and
Figure 18 is a schematic illustration of the Bayer Pattern layout for a colour camera.
It will be noted that throughout the appended drawings, like features are identified by like reference numerals.
BRIEF DESCRIPTION OF PREFERRED EMBODIMENTS
1.1 Lens Distortion
Lens distortion introduces the biggest error found in digital imaging. This is illustrated in Figures 1 and 2. As can be seen in Figure 1, the fish eye effect is referred to as geometric distortion and curves straight lines. Coloured shading at the edges of the image (referred to as « Blue Tinted Edge » and « Red Tinted Edge » in Figure 1) is referred to as chromatic distortion and is caused by the splitting of light in the lens of the imaging device (not shown). These deviations from pinhole behaviour increase with the lens angle of view. Both distortions have to be modeled and compensated for to achieve sub-pixel accuracy, a compensation possible only through software, going beyond hardware capabilities. As can be seen in Figure 2, when geometric distortion compresses the image on itself, it is referred to as barrel distortion (see Figure 2 (a)); when the image expands, it is referred to as pincushion distortion (see Figure 2 (b)).
1.2 Dithering
With reference to Figure 3, dithering is the intermediate pixel colour encountered when an edge goes through a given pixel and both neighbouring colours mix. The pixel colour is a weighted average of adjacent colour values, on either side of the edge, with respect to each colour's respective surface inside the pixel.
In low definition images, edge dithering (shading at object edges) interferes with lens distortion, geometric and chromatic. Using colour images from a black and white target, colour edge shading is caused by chromatic distortion. For a black and white target, dithering appears in grey shades as does geometric distortion. It is therefore desirable to isolate chromatic lens distortion from edge dithering or geometric distortion using edge colour.
1.3 Camera Model
Modelling a camera (or other imaging device) requires a mathematical model and a calibration procedure to measure the parameters that define the behaviour of a specific camera/lens combination.
It should be understood that, although a camera is referred to herein, the proposed system and method also apply to other imaging devices. In particular, devices including, but not restricted to, zooming lens cameras; near-infrared (NIR), short-wavelength infrared (SWIR), and long-wavelength infrared (LWIR) imaging devices; Radar and Light Detection And Ranging (LIDAR) devices; parabolic mirror telescope imagers; surgical endoscopic cameras; computed tomography (CT) scanning devices; satellite imaging devices; sonar devices; and multi spectral sensor fusion systems may also apply.
According to the published literature on the subject, the ideal camera model has three components, as shown in Figure 4, namely:
1 - External Model: Relationship between Camera Coordinates at Focal Point O (point in space where all light collapses to a single point), and
World Coordinates in the world coordinate system (Xw Yw Zw);
2- Internal Model: Camera Plane Coordinate System (Xc Yc Zc), where Zc is the lens axis (i.e. the lens line of sight); and
3- Lens Model: Lens Geometric and Chromatic Distortion formulas. Focal point O is the location in space where all images collapse to a single point; in front of the focal point O is the camera image plane (not shown). Lens axis Zc crosses the image plane at two (2) right angles (i.e. is square therewith), defining the image center location (Cx, CY).
1.3.1 Camera External Model
The camera external model proves accurate throughout the literature. Defining two coordinate sets, 1- World (Xw Yw Zw) with origin set at (0,0,0); and
2- Camera (Xc Yc Zc) at focal point O. As can be seen in Figure 4, which illustrates the ideal camera model, the camera coordinate set (Xc Yc Zc) starts with the lens axis Zc and the focal point O as the origin; Xc is selected to line up with the horizontal axis (not shown) of the camera image plane. Geometrically, the Yc vertical axis should complete the set using the right hand rule. The external model writes as matrix [R3x3 T3x1] (discussed further below) and represents the World coordinate set (Xw Yw Zw) with origin set at (0, 0, 0).
The external camera model expresses rotations (κ φ Ω) and translations (Tx TY Tz) needed to align the camera coordinate set (Xc Yc Zc) with the world set of coordinates (Xw Yw Zw), and bring the focal point O to the world origin (0,0,0). The external camera model therefore has six (6) degrees of freedom, namely the (κ φ Ω) rotation angles and translations (Tx TY Tz).
1.3.2 Camera Internal Model
Referring now to Figure 5 in addition to Figure 4, a prior art camera internal model where compensation for off-squareness is performed will now be described. If the image plane were perfectly square with the lens axis Zc, the scale factor between world measurements Xw Yw and camera measurements Xc Yc would be f in both x and y directions. To account for the loss of squareness between the lens axis Zc and the image plane, which results from manufacturing errors, the research community introduces the tilted axis assumption shown in Figure 5. In the tilted axis assumption, a tilt of the image plane axis is assumed so that the image plane can be considered square with the lens axis.
Various formulations exist, essentially:
- Parameter a is the horizontal image scale, perfectly aligned with the camera pixel grid array horizontal axis,
- The vertical scale is set to b, different from a;
- The scale and orientation of the vertical axis of the image plane is tilted by skew parameter s relative to the axis Yc, where s is a scale measure of skew relative to the image scale. With the image center (Cx, CY) being the point where the lens axis Zc intersects the image plane, the coordinates (Cx, CY), a, b, and s would be, according to already published work using the tilted axis assumption, the five (5) internal camera parameters. The internal camera model therefore has five (5) degrees of freedom.
Since the image plane pixel grid array is manufactured by a very accurate process, calibration should retrieve a horizontal image scale equal to the vertical image scale, i.e. a = b = f, with no skew of the vertical axis, i.e. s = 0. The widespread tilted axis assumption however introduces a perspective bias, shifting all the other camera parameters, and should be replaced by a full 3D perspective model of the image plane that retains the camera image plane geometry. It is therefore proposed to introduce a new set of variables for the internal camera model, as shown in Figure 6, in which the model is represented in camera coordinates (starting from the focal point O).
In the proposed model, the image center (Cx, CY) remains the intersection between the lens axis Zc and the camera (i.e. image) plane. Two scale independent simultaneous perspectives of an outside 3D world object (a point P thereof being located somewhere in the world at given coordinates relative to the axes (Xc Yc Zc)) are considered.
Illustratively, the entry of the lens system is defined as a theoretical plane at f = 1 along the line of sight axis Zc, the theoretical plane being represented in Figure 6 by two axes, with unit focal distance f = 1, and perfectly square (i.e. at right angles) with the lens axis Zc. This first plane represents the perfect 1:1 true scale projection of the 3D object on a plane having infinite dimensions in x and y.
The projection on f = 1 is therefore expressed by the matrix transformation [R3x3 T3x1]:

[R3x3 T3x1] = [r11 r12 r13 Tx]
              [r21 r22 r23 TY]   (1)
              [r31 r32 r33 TZ]
Using homogeneous coordinates, point P = [X Y Z 1]T in 3D world coordinates (X Y Z is given with respect to world coordinates (Xw Yw Zw), X' Y' Z' with respect to camera coordinates system (Xc Yc Zc)) projects as:
[R3x3 T3x1][X Y Z 1]^T = [X' Y' Z']^T ≈ [X'/Z' Y'/Z' 1]^T   (2)

where the third component Z' of the resulting three-component vector is rescaled to unity, such that [X' Y' Z']^T is scale equivalent to [X'/Z' Y'/Z' 1]^T. The 3D point P = (X, Y, Z) is projected in two dimensions as (X'/Z', Y'/Z'). The symbol ≈ in equation (2) represents the scale equivalent operator in homogeneous coordinates.
The elements rij, i,j = 1, 2, 3 in equation (1) are functions of model parameters, namely the three (3) rotation angles (κ φ Ω) discussed above, and (Tx TY Tz), which is the position of the world origin with respect to focal point O.
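By way of illustration only, the following Python sketch (not part of the original disclosure) evaluates equations (1) and (2) numerically for a single point; the Euler rotation order chosen for (κ φ Ω) and the function names are assumptions, as the text does not impose a particular convention.

import numpy as np

def external_model(kappa, phi, omega, tx, ty, tz):
    # Build the [R3x3 | T3x1] matrix of equation (1).
    # The Z-Y-X rotation order is an assumption; any consistent convention may be substituted.
    ck, sk = np.cos(kappa), np.sin(kappa)
    cp, sp = np.cos(phi), np.sin(phi)
    co, so = np.cos(omega), np.sin(omega)
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    R = Rz @ Ry @ Rx
    T = np.array([[tx], [ty], [tz]])
    return np.hstack([R, T])                       # 3x4 matrix [R | T]

def project_true_scale(RT, P_world):
    # Equation (2): project world point P = (X, Y, Z) on the f = 1 true scale
    # plane and rescale the third component to unity.
    P = np.append(np.asarray(P_world, dtype=float), 1.0)   # homogeneous [X Y Z 1]
    Xp, Yp, Zp = RT @ P                                     # camera coordinates (X', Y', Z')
    return Xp / Zp, Yp / Zp                                 # (x, y) on the f = 1 plane

# usage sketch: x, y = project_true_scale(external_model(0.1, -0.05, 0.2, 10, 5, 1000), (100, 50, 0))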
In addition to the true scale perspective of the outside 3D world object, the second perspective is the image plane itself, i.e. the output of the lens system. The image plane is represented in Figure 6 by two axes intersecting at (Cx, CY). Since the camera plane at the focal distance f is off-squareness with the lens axis Zc, it needs five (5) parameters. Using on the image plane at f the same horizontal and vertical axis orientations as defined at the focal point O coordinate set (i.e. coordinate set (Xc Yc Zc)), two rotation angles α and β with respect to both x and y axes are used to account for the tilting of the camera plane. In particular and as shown in Figure 6, the x axis of the image plane is rotated by angle α while the y axis of the image plane is rotated by angle β, such that the image plane is tilted by angles α and β with respect to axes x and y, with the x and y axes of the image plane taken parallel to Xc and Yc at origin O initially, i.e. before any rotation. In other words, the axes x and y are illustratively taken parallel to Xc and Yc at origin O and reproduced on the image plane before any rotation is applied to tilt the image plane. As a result, tilting of the image plane can be expressed by two (2) 3D rotations in space, namely a rotation about axis Yc by angle α and a second rotation about axis Xc by angle β. The x axis of the image plane being arbitrarily selected as aligned with the horizontal camera plane direction, there is therefore no need for a z axis rotation angle. It should be understood that in some embodiments, for instance embodiments where two imaging devices sharing the same line of sight are used concurrently to capture images of the world object or embodiments where a three-CCD camera (with three separate charge-coupled devices (CCDs)) is used, a z axis rotation angle may be desirable. In addition to the rotation angles α and β, the three remaining degrees of freedom for the camera internal model are then the focal distance f (or camera image scale) and the coordinates of the image center (Cx, CY).
The internal K matrix corresponding to the widespread tilted axis assumption in Figure 5 is given by equation (3) below.
K = [a  s  Cx]
    [0  b  CY]   (3)
    [0  0  1 ]
The top left 2x2 matrix partition in equation (3) represents the image plane x and y axes with skew parameter s, horizontal scale a, and vertical scale b. Taken as column vectors, the image plane x axis is aligned with the horizontal direction of the camera plane pixel array grid (not shown), accounting for the 0 value in position (2,1 ) of the K matrix. The image plane y axis is tilted by s in the x direction as illustrated in Figure 5. The last column represents the image center location (Cx, CY).
The error in the tilted axis assumption of Figure 5 is visible in the lower left 1x2 matrix partition. The two (2) terms of the lower left 1x2 partition should not be zero when the lens axis is off-squareness with the camera plane. When they are non-zero, these terms apply a perspective correction to x and y scales in the image plane as one moves away from the image center.
To compute the projected x and y axes as they should be, taking perspective into account, it is therefore proposed to start with a camera image plane perfectly square with the lens axis Zc. The projected camera x and y axes are then computed as tilted respectively by angles α and β, as discussed above, thereby obtaining an image plane that is off-squareness with the lens axis Zc. In the proposed method, the internal camera model is defined as a perspective transformation with five (5) degrees of freedom that relates the outside camera model projection in true 1:1 scale to the image plane projection at focal distance f on a common line of sight Zc, and where the image plane is tilted by angles α and β with respect to axes x and y on the image plane, the x and y axes taken parallel to Xc and Yc at origin O before any rotation. Figure 6 shows (Cx, CY) at the line of sight Zc. In Figure 6, Zc is taken as the origin for all planes intersecting with the line of sight. We initially assume (Cx, CY) = (0, 0), and, after projection, a shift of origin is applied to offset the image plane centre from (0, 0) to (Cx, CY).
In homogeneous coordinates, any 3D point in space [x y z 1]^T is projected on the 1:1 f = 1 true scale plane as:

P1 [x y z 1]^T = [x y z z]^T ≈ [x/z y/z 1 1]^T   (4)

The last operation in equation (4) is the rescaling of the fourth (4th) coordinate to unity. The third coordinate shows that point (x/z, y/z) lies on a plane at z = 1.
For an image plane tilted by α and β with respect to axes x and y respectively and with focal distance f, the same point is also projected on the tilted image plane as:

Pf R(y, β) R(x, α) [x y z 1]^T =

[1 0 0   0] [cosβ   0  sinβ  0] [1  0     0      0] [x]
[0 1 0   0] [0      1  0     0] [0  cosα  -sinα  0] [y]   (5)
[0 0 1   0] [-sinβ  0  cosβ  0] [0  sinα  cosα   0] [z]
[0 0 1/f 0] [0      0  0     1] [0  0     0      1] [1]

where Pf defines the projection operation where element (4,3) is 1/f, f is the focal distance, R(y, β) is a β rotation matrix with respect to the y axis, and R(x, α) is an α rotation matrix with respect to the x axis.
The α rotation in equation (5) is computed rightmost, so the β rotation is performed relative to an image plane y axis rotated by angle α. It should be understood that the β rotation could be handled rightmost in equation (5), meaning that the α rotation would be performed relative to a β rotated x axis. Homogeneous equations read from right to left and reversing the order of multiplication yields different mathematical formulations. Several models are possible.
Multiplying the matrices of equation (5), we obtain:
Pf R(y, β) R(x, α) [x y z 1]^T =

[1 0 0   0] [cosβ   sinβ sinα  sinβ cosα  0] [x]
[0 1 0   0] [0      cosα       -sinα      0] [y]   (6)
[0 0 1   0] [-sinβ  cosβ sinα  cosβ cosα  0] [z]
[0 0 1/f 0] [0      0          0          1] [1]

=
[1 0 0   0] [h11  h12  h13  0] [x]
[0 1 0   0] [0    h22  h23  0] [y]   (7)
[0 0 1   0] [h31  h32  h33  0] [z]
[0 0 1/f 0] [0    0    0    1] [1]

=
[h11 x + h12 y + h13 z      ]   [f(h11 x + h12 y + h13 z)/(h31 x + h32 y + h33 z)]
[h22 y + h23 z              ] ≈ [f(h22 y + h23 z)/(h31 x + h32 y + h33 z)        ]   (8)
[h31 x + h32 y + h33 z      ]   [f                                               ]
[(h31 x + h32 y + h33 z) / f]   [1                                               ]
The last operation (equation (8)) is again the rescaling of the fourth (4th) coordinate to unity.
P1 [x y z 1]^T and Pf R(y, β) R(x, α) [x y z 1]^T are two projections of the same 3D point [x y z 1]^T, and are related by a simple homographic transformation. Substituting for x' = x/z and y' = y/z and noting that (x', y') is the projection P1 [x y z 1]^T where we discarded the z component (1, plane located at z = 1) and unit scale factor, Pf R(y, β) R(x, α) [x y z 1]^T can be written as:

Pf R(y, β) R(x, α) [x y z 1]^T ≈ [f(h11 x' + h12 y' + h13)/(h31 x' + h32 y' + h33)  f(h22 y' + h23)/(h31 x' + h32 y' + h33)  f  1]^T = [x" y" f 1]^T   (9)

where
h11 = cosβ
h12 = sinβ sinα
h13 = sinβ cosα
h22 = cosα
h23 = -sinα
h31 = -sinβ
h32 = cosβ sinα
h33 = cosβ cosα
Defining the homography H between both planes (at f = 1 and f) as:

H = [f h11  f h12  f h13]   [f cosβ  f sinβ sinα  f sinβ cosα]
    [0      f h22  f h23] = [0       f cosα       -f sinα    ]   (10)
    [h31    h32    h33  ]   [-sinβ   cosβ sinα    cosβ cosα  ]

The tilted image plane coordinate (x", y") is a homographic transformation of the (x', y') true scale plane projection, expressed in 2D homogeneous coordinates:

[x" y" 1]^T ≈ H [x' y' 1]^T   (11)

where the symbol ≈ represents the scale equivalent operator in homogeneous coordinates. As expected, element (2, 1) in H (equation (10)) is 0, meaning that the x axis is parallel to the camera plane horizontal grid. The perspective elements h31 and h32 in row 3 (see equation (10)) create the plane perspective scale change moving away from the image center. These elements vanish to zero when the camera plane is square with the lens axis, i.e. when α = β = 0.
For a non-zero (Cx, CY), since the internal camera model has to be handled in homogeneous coordinates, a perspective rescaling is needed before adding the image center (Cx, CY). In a two-step process, assuming no lens distortion, between external model point (x', y') in 1:1 true scale and image plane coordinates (x", y"), we obtain:

Step 1: [u v w]^T = H [x' y' 1]^T

u = f(cosβ x' + sinβ sinα y' + sinβ cosα)
v = f(cosα y' - sinα)   (12)
w = -sinβ x' + cosβ sinα y' + cosβ cosα

Step 2: rescaling to unity w = 1 and translating by (Cx, CY), giving image point (x", y"):

x" = u/w + Cx
y" = v/w + CY   (13)
In the absence of lens distortion, calibration is therefore finding the best match between two (2) projections. Every point in space maps to two (2) independent projection planes, i.e. the true scale plane and the tilted image plane. As discussed above, the f = 1 (i.e. true scale) plane is illustratively perfectly square with the lens axis and has six (6) degrees of freedom: (κ φ Ω) and (Tx TY Tz), giving our proposed external model (i.e. the relationship between camera and world coordinates). At f, the camera (i.e. image) plane is not a simple scaled copy of the true scale plane: the image at f cannot be a pure scale multiplication of the f = 1 true scale image. At f, the image plane has five (5) degrees of freedom: plane tilting angles α and β, image center (Cx, CY) and focal distance f, giving the internal model. A point in the f = 1 true scale plane then corresponds to a point in the tilted image plane, and all corresponding projection point pairs at f = 1 and f define lines converging at the focal point O (in the absence of lens distortion). Still, lens distortion occurs between the two planes and has to be accounted for in the model. Thus, knowing the exact 3D object geometry of the calibration target, calibration is finding a 3D correspondence between pairs of coordinates projected in the two planes, compensating for lens distortion. Alternate formulations for the internal camera model are possible using the same basic principle discussed above; for instance, rotations with respect to axes x and y on the image plane, taken parallel to Xc and Yc at origin O before any rotation, can be geometrically applied in reverse order. As discussed above, when two image planes share the same line of sight, a z axis rotation can be added to one of them in order to express the relative misalignment between both images' x horizontal axis.
1.3.3 Lens Distortion Model
Once the camera plane tilting angles α and β are properly accounted for, the camera image can be computed on a plane perfectly square with lens axis Zc, e.g. a plane at f = 1 or a plane at f corrected for squareness, the lens axis Zc being the line of sight for all planes under consideration, as discussed above. Although the proposed technique discussed herein models the imaging device (e.g. camera) using projections on a plane at f = 1 and a plane at f, it should be understood that more than two projection planes may be used to model lens distortion. Indeed, planes at f = 1 and at f are illustratively the minimal requirements as f = 1 is the projection in the external model and f locates the image plane. In some embodiments, one or more intermediary planes may be modeled between the plane at f = 1 and the plane at f. For instance, a third intermediary plane may be positioned at the minimum focal distance fmin of the imaging device, with a first homography being computed between the planes at f = 1 and fmin and a second homography being computed between the planes at fmin and f.
For a projection plane at right angle with Zc, the lens distortion model can be reduced to a purely radial function, both geometric and chromatic. Many lens geometric distortion models were published. Some authors claim 1/20 pixel accuracy in removing geometric lens distortion. Overall, their basic criterion is more or less the same: lines that are straight in real life should appear straight in the image once geometric distortion is removed. Very few authors consider chromatic distortion in their lens model. The most widespread lens geometric distortion model is the Shawn Becker model, as follows [3]:

x' = x + x(k1 r^2 + k2 r^4 + k3 r^6) + p1(r^2 + 2 x^2) + 2 p2 xy   (14)
y' = y + y(k1 r^2 + k2 r^4 + k3 r^6) + p2(r^2 + 2 y^2) + 2 p1 xy,   r^2 = x^2 + y^2   (15)

where (x', y') represents the new location of point (x, y), computed with respect to image center (Cx, CY), k1, k2, and k3 are three terms of radial distortion, and p1 and p2 are two terms of decentering distortion. Calibration retrieves numerical values for parameters k1, k2, k3, p1, and p2. Image analysis gives (x', y'). The undistorted (x, y) position is found solving the two equations using a 2D search algorithm.
Most lens distortion models were able to straighten curved lines. Modeling errors appeared when recovering 3D positions from a calibrated stereo pair. Straight lines looking straight is an insufficient criterion to guarantee accurate geometric distortion correction. A wrong perspective will cause a measurement error across the image, and the tilted axis assumption in Figure 5 creates a systematic perspective bias.
The proposed modification of the camera model increased calibration accuracy and reduced the lens geometric distortion model complexity. Only parameters k1 and k2 of Shawn Becker's lens geometric distortion model were retained, and Shawn Becker's two equations (14), (15) reduce to only one:

r' = r + k1 r^3 + k2 r^5 + ... (can be expanded using odd terms in r), find r knowing r'   (16)

from a fully radial displacement model, where r^2 = x^2 + y^2. As can be seen in Figure 7, a fully radial displacement model can be used where geometric distortion is modelled as a fully radial displacement. External model image point (x, y) (which is illustratively representative of a measurable quantity, e.g. in inches or millimeters, rather than pixels) can be expressed in radial coordinates (r, θ) where r^2 = x^2 + y^2 and tanθ = y/x, x and y taken with respect to the image center.
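For illustration, a minimal Python sketch of the fully radial model of equation (16) is given below; the Newton iteration used to recover r knowing r' is an assumption, since the text only requires that r be found from r' (a look-up table or any 1D search would equally apply).

import math

def distort(x, y, k1, k2):
    # Equation (16): fully radial geometric distortion, coordinates taken
    # with respect to the image center.
    r = math.hypot(x, y)
    rp = r + k1 * r**3 + k2 * r**5        # distorted radial distance r'
    s = rp / r if r > 0 else 1.0
    return x * s, y * s                   # (x', y') = (x r'/r, y r'/r)

def undistort(xp, yp, k1, k2, iters=10):
    # Recover (x, y) knowing (x', y'): solve r' = r + k1 r^3 + k2 r^5 for r.
    rp = math.hypot(xp, yp)
    r = rp                                # initial guess
    for _ in range(iters):
        fval = r + k1 * r**3 + k2 * r**5 - rp
        fder = 1.0 + 3.0 * k1 * r**2 + 5.0 * k2 * r**4
        r -= fval / fder
    s = r / rp if rp > 0 else 1.0
    return xp * s, yp * s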
Even from a look-up table (LUT), using equation (16) reduces computation by 4:1 and uses significantly less memory, making the proposed camera model better suited for real time computation. Even with this simplified model, from a 640x480 Bayer Pattern 1/3 CCD colour camera with an f = 4 mm micro lens (angle of view about 90°), the focal distance f was retrieved to an accuracy of 10^-10 mm. Once the true image center is known, chromatic distortion can be modelled from a single image center. Several formulations are possible for chromatic distortion:
1 - Single center from geometric calibration on green channel, using deviation of blue and red;
2- Calibration of red, green and blue channels independently;
3- Average of red, green and blue for geometric calibration, deviation of red and blue for chromatic;
1.3.4 Entire camera model
Referring now to Figure 8a, there is illustrated a method 100 for computing, using the proposed camera model, the location of an image point, as per the above. A 3D point in space goes through three transformations to give an image point located on the image plane. The method 100 illustratively comprises projection on plane f = 1 using the proposed external camera model at step 102, applying the lens distortion model at step 104, and projecting on the tilted image plane at f using the internal camera model at step 106. After step 106 is performed, the location (x", y") of the camera image point corresponding to a 3D point (X, Y, Z) captured by the camera is obtained.
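A compact, illustrative Python sketch of this three-step chain (steps 102, 104 and 106) is given below; it merely restates equations (2), (16) and (10) to (13) in code form, and the function name and argument layout are assumptions rather than part of the disclosure.

import numpy as np

def image_point(P_world, RT, k1, k2, alpha, beta, f, cx, cy):
    # Step 102: external model - project on the f = 1 true scale plane (equations (1)-(2)).
    Xp, Yp, Zp = RT @ np.append(np.asarray(P_world, dtype=float), 1.0)
    x, y = Xp / Zp, Yp / Zp
    # Step 104: fully radial lens distortion (equation (16)).
    r = np.hypot(x, y)
    scale = 1.0 if r == 0 else (r + k1 * r**3 + k2 * r**5) / r
    xd, yd = x * scale, y * scale
    # Step 106: internal model - homography to the tilted image plane (equations (10)-(13)).
    ca, sa, cb, sb = np.cos(alpha), np.sin(alpha), np.cos(beta), np.sin(beta)
    H = np.array([[f * cb, f * sb * sa, f * sb * ca],
                  [0.0,    f * ca,     -f * sa     ],
                  [-sb,    cb * sa,     cb * ca    ]])
    u, v, w = H @ np.array([xd, yd, 1.0])
    return u / w + cx, v / w + cy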
Referring now to Figure 8b, the step 102 illustratively computes the proposed external camera model transformation and comprises receiving at step 106 the coordinates (X, Y, Z) of a 3D point P expressed with respect to world coordinate system (Xw Yw Zw).
As discussed above, given the posture of the 3D object point P belongs to, i.e. the angles and position from which the camera sees the object, the f = 1 projection gives a unique 1:1 true scale image of the object from its six (6) degrees of freedom: three (3) angles (κ φ Ω), and position (Tx TY Tz) defining relative orientation and position between world and camera coordinate reference systems (Xw Yw Zw) and (Xc Yc Zc). Using the model parameters (κ φ Ω), (Tx TY Tz), the following is computed at step 108 in homogeneous coordinates, as discussed above: P = [X Y Z 1]^T
where the rij, i,j = 1, 2, 3 are functions of the target posture angles (κ φ Ω):

[R3x3 T3x1] = [r11 r12 r13 Tx]
              [r21 r22 r23 TY]
              [r31 r32 r33 TZ]

[X' Y' Z']^T = [R3x3 T3x1][X Y Z 1]^T

Scaling to unity Z' gives the external model image point (x, y), where:

[x y 1]^T = [X'/Z' Y'/Z' 1]^T ≈ [X' Y' Z']^T.
The external model image point is then output at step 110.
Referring now to Figure 8c, applying the lens distortion model at step 104 illustratively comprises receiving the external model image point (x, y) at step 112. Using model parameters, i.e. the lens geometric distortion parameters k1 and k2, step 114 illustratively comprises computing r, r', and the distorted image point (x', y'). It should be understood that, in some embodiments, e.g. depending on the fisheye of the imaging device's lens, parameters k1 and k2 may be expanded. Indeed, as discussed above, in its simplest form, geometric distortion can be modelled as a fully radial displacement. External model image point (x, y) can be expressed in radial coordinates (r, θ) where r^2 = x^2 + y^2 and tanθ = y/x, x and y taken with respect to image center (0, 0). The new distorted distance r' knowing r is given by:
r' = r + k1 r^3 + k2 r^5 + ... (can be expanded),
where k1 and k2 are the lens geometric distortion parameters. Distorted image point (x', y') can be computed knowing θ or using similar triangle properties:
(x', y') = (r' cosθ, r' sinθ),
or (x', y') = (x r'/r, y r'/r)
The distorted image point (x', y') is then output at step 116.
In one embodiment, lens distortion is modelled in 1:1 f = 1 scale with (0, 0) image center. Still, as will be discussed further below, it should be understood that f can be factored from the internal camera model and lens distortion handled in the f scale.
Referring now to Figure 8d, obtaining 106 the internal camera model illustratively comprises receiving the distorted image point (x', y') at step 118. From the distorted image point (x', y') and from the internal camera model's five degrees of freedom, namely α and β (image plane tilt angles), the focal distance f, and the image center coordinates (Cx, CY), the following is computed at step 120:
u = f(cosβ x' + sinβ sinα y' + sinβ cosα)
v = f(cosα y' - sinα)
w = -sinβ x' + cosβ sinα y' + cosβ cosα
x" = u/w + Cx
y" = v/w + CY
where (x", y") is the image point on the camera internal image plane, which may be output at step
As discussed above, f can be factored from the internal camera model. In order to create an approximation of the internal camera model, we can use, for small angles α and β, the approximation cosθ = 1 and sinθ = θ. It should be understood that other series approximations of sin and cos are possible. As can be seen from equation (17) below, substituting, h33 becomes unity,
h13 = fβ and h23 = -fα create a correction of the image center, h11 and h22 become f and give an identical scale in x and y, h12 = fαβ creates the equivalent of skew in the image. h31 = -β as well as h32 = α cannot become zero as indicated previously. They give the image a perspective correction moving away in x and y from the image center when rescaling by w ≈ 1 + yα - xβ, with x and y measured with respect to the image center.

H = [f cosβ   f sinβ sinα   f sinβ cosα]   [f    fαβ   fβ ]
    [0        f cosα        -f sinα    ] ≈ [0    f     -fα]   (17)
    [-sinβ    cosβ sinα     cosβ cosα  ]   [-β   α     1  ]
As also discussed above, lens distortion can be modeled with respect to the image plane scale. To model lens distortion according to the image plane scale, an imaginary intermediate plane of projection has to be added to the model, located at f along Zc, with (0, 0) center, and perfectly square with lens axis Zc.
The transformation is represented with a pure scaling homography Sf:

Sf = [f  0  0]
     [0  f  0]
     [0  0  1]
We can factor Sf from H in the internal camera model. We can apply the Sf scaling at the end of the external model to compute (fx, fy), use (fx, fy) in the lens distortion model, and therefore set f = 1 in the internal camera model. Lens distortion parameters will then be computed in the f scale, as if the image plane was corrected and tilted back at right angle with the line of sight Zc, with (0, 0) image center position.
The requirement for lens distortion modelling is to compute the radial distance r for a plane perfectly square with the line of sight, with respect to the image center. It provides added freedom in modelling fixed lenses, but f = 1, 1:1 true scale modelling of lens distortion is an advantage for zooming lens applications, making lens distortion parameters independent from f.
Referring now to Figure 9a, there is illustrated a system 200 for modelling and calibrating an imaging device. The system 200 comprises one or more server(s) 202 accessible via the network 204. For example, a series of servers corresponding to a web server, an application server, and a database server may be used. These servers are all represented by server 202. The server 202 may be accessed by a user using one of a plurality of devices 206 adapted to communicate over the network 204. The devices 206 may comprise any device, such as a personal computer, a tablet computer, a personal digital assistant, a smart phone, or the like, which is configured to communicate over the network 204, such as the Internet, the Public Switch Telephone Network (PSTN), a cellular network, or others known to those skilled in the art. Although illustrated as being separate and remote from the devices 206, it should be understood that the server 202 may also be integrated with the devices 206, either as a downloaded software application, a firmware application, or a combination thereof. It should also be understood that several devices as in 206 may access the server 202 at once.
Imaging data may be acquired by an imaging device 207 used for calibration and image correction. The device 207 may be separate from (as illustrated) the devices 206 or integral therewith. The imaging data may comprise one or more images of a real world 3D object (not shown), such as a calibration target as will be discussed further below. The imaging data may then be processed at the server 202 to obtain a model of the imaging device 207 in the manner described above with reference to Figure 8a, Figure 8b, Figure 8c, and Figure 8d. The imaging data is illustratively acquired in real-time (e.g. at a rate of 30 images per second) for an object, such as a moving object whose movement in space is being monitored. The server 202 may then process the imaging data to determine an image point associated with each point of each acquired image. Alternatively, the imaging data may be processed to determine an image point associated with each one of one or more points of interest in the image.
The server 202 may comprise, amongst other things, a processor 208 coupled to a memory 210 and having a plurality of applications 212a ... 212n running thereon. It should be understood that while the applications 212a ... 212n presented herein are illustrated and described as separate entities, they may be combined or separated in a variety of ways.
One or more databases 214 may be integrated directly into the memory 210 or may be provided separately therefrom and remotely from the server 202 (as illustrated). In the case of a remote access to the databases 214, access may occur via any type of network 204, as indicated above. The various databases 214 described herein may be provided as collections of data or information organized for rapid search and retrieval by a computer. The databases 214 may be structured to facilitate storage, retrieval, modification, and deletion of data in conjunction with various data-processing operations. The databases 214 may consist of a file or sets of files that can be broken down into records, each of which consists of one or more fields. Database information may be retrieved through queries using keywords and sorting commands, in order to rapidly search, rearrange, group, and select the field. The databases 214 may be any organization of data on a data storage medium, such as one or more servers.
In one embodiment, the databases 214 are secure web servers and Hypertext Transport Protocol Secure (HTTPS) capable of supporting Transport Layer Security (TLS), which is a protocol used for access to the data. Communications to and from the secure web servers may be secured using Secure Sockets Layer (SSL). Identity verification of a user may be performed using usernames and passwords for all users. Various levels of access rights may be provided to multiple levels of users. Alternatively, any known communication protocols that enable devices within a computer network to exchange information may be used. Examples of protocols are as follows: IP (Internet Protocol), UDP (User Datagram Protocol), TCP (Transmission Control Protocol), DHCP (Dynamic Host Configuration Protocol), HTTP (Hypertext Transfer Protocol), FTP (File Transfer Protocol), Telnet (Telnet Remote Protocol), SSH (Secure Shell Remote Protocol)
The memory 210 accessible by the processor 208 may receive and store data. The memory 210 may be a main memory, such as a high speed Random Access Memory (RAM), or an auxiliary storage unit, such as a hard disk, flash memory, or a magnetic tape drive. The memory 210 may be any other type of memory, such as a Read-Only Memory (ROM) or Erasable Programmable Read-Only Memory (EPROM), or optical storage media such as a videodisc and a compact disc.
The processor 208 may access the memory 210 to retrieve data. The processor 208 may be any device that can perform operations on data. Examples are a central processing unit (CPU), a front-end processor, a microprocessor, and a network processor. The applications 212a ... 212n are coupled to the processor 208 and configured to perform various tasks as explained below in more detail. An output may be transmitted to the devices 206.
Figure 9b is an exemplary embodiment of an application 212a running on the processor 208. The application 212a may comprise a receiving module 302 for receiving the imaging data from the imaging device 207 and obtaining therefrom coordinates of a point of a real 3D world object as captured by the imaging device 207, an external model projection module 304 enabling the method illustrated and described in reference to Figure 8b, a lens distortion compensation module 306 enabling the method illustrated and described in reference to Figure 8c, an internal model projection module 308 enabling the method illustrated and described in reference to Figure 8d, and an output module 310 for outputting coordinates of a camera image point, as computed by the internal model projection module 308.
2.0 CALIBRATION
Referring now to Figure 10, Figure 11 , and Figure 12, calibration models the 3D to 2D image creation process. From two (2) calibrated cameras, the 2D to 3D stereo pair inverse operation is used to validate model accuracy.
2.1 Experimental Setup
The experimental proposed setup is intended to be field usable, even with low resolution short- wavelength infrared (SWIR) imagers. On two (2) 90° planes of black anodized aluminium (not shown), two circle grids (not shown) are formed (e.g. engraved), changing the surface emissive properties in the SWIR spectrum, and providing black and white information for colour calibration, as illustrated in Figure 10.
Some published approaches use the center portion in the image to avoid distortion and isolate some camera parameters. Unfortunately, it also creates a parameter estimation bias. In the proposed approach, any ellipse center taken anywhere in the image should fit the model. Therefore, the proposed model is accurate across the entire image, even for a wide angle lens.
Once the ellipse centers are measured from the image, we have a data set that relates 3D real world target positions with their respective 2D locations in the image. Using a camera model to correlate them, a Levenberg-Marquardt search algorithm may be used to compute the model parameters. It should be understood that algorithms other than the Levenberg-Marquardt algorithm may apply. For instance, the steepest descent or Newton algorithms may be used. The accuracy improvements achieved with the proposed technique allowed the use of a least squares sum of errors criterion without bias. The error is defined as the image target position predicted from the model and the 3D data set, minus the corresponding real image measurement in 2D.
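By way of illustration only, the parameter search may be sketched as follows in Python using SciPy's least squares solver; the parameter ordering, the project callback and all names are assumptions, and the Levenberg-Marquardt option shown is only one of the algorithms mentioned above.

import numpy as np
from scipy.optimize import least_squares

def residuals(params, pts3d, pts2d, project):
    # Reprojection error: model-predicted image position minus the measured
    # 2D ellipse centers, stacked for all calibration targets.
    # project(params, P) is assumed to implement the full camera model
    # (external model, lens distortion, internal model) for one 3D point.
    res = []
    for P, m in zip(pts3d, pts2d):
        x_pred, y_pred = project(params, P)
        res.extend([x_pred - m[0], y_pred - m[1]])
    return np.asarray(res)

def calibrate(pts3d, pts2d, project, initial_guess):
    # Least squares search over the model parameters (e.g. kappa, phi, omega,
    # Tx, Ty, Tz, alpha, beta, f, Cx, Cy, k1, k2); method='lm' selects
    # Levenberg-Marquardt, other solvers also apply.
    fit = least_squares(residuals, initial_guess,
                        args=(pts3d, pts2d, project), method='lm')
    return fit.x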
The calibration target uses 1" diameter circles at 2" center-to-center spacing. Using circles ensures that no corner should be detected even with a highly pixelized image, see Figure 12.
Each circle gives a local estimate of the camera behaviour, without bias or any preferred edge orientation. We are more concerned with ellipse center location accuracy than with the signal-to-noise (S/N) ratio on edge detection. Significant work was needed to test various techniques for ellipse modelling and avoid a center bias estimation. Since the image is highly pixelized, the edge detector footprint is illustratively restricted to a 3 x 3 pixel area.
Since it is intended to use the proposed technique on low resolution cameras, a 640x480 Bayer Pattern Point Grey Research Firefly colour camera is illustratively chosen, with its supplied f = 4 mm micro lens for testing, as shown in Figure 11.
It was eventually concluded that moment techniques are unable to deal with glare and reflection, therefore unusable for field calibration. We found 1/4 to 1/2 pixel center bias in several cases. Those errors being so small, extensive mathematical analysis was required to remove them from the shape recovery process; they are invisible to the human eye. Edge gradient sensing techniques, on the other hand, exhibited a sub pixel location bias when the edge orientation did not line up with the horizontal or vertical camera grid pixel array. In the end, a sub pixel correction on the 'Non Maxima Suppression' sub pixel extension by Devernay [1] was used. In a two step process, step 1 recovered an initial estimate for the edge points, adding compensation for edge orientation bias. On that initial set, a first estimate of the ellipse geometry is computed. In step 2, the initial ellipse fit is used to estimate local curvature and correct the edge location.
2.2 Calibration Result
Using the same experimental data, the parameter estimation is compared for two (2) camera models, as shown in Table 1 below.

TABLE 1
[Table 1: calibration parameter comparison between the most accurate published camera model (leftmost column) and the proposed model (rightmost column)]
The leftmost camera parameter set is obtained from the most accurate model published, tested on our own experimental data. The rightmost set was computed from the proposed model, where the lens model was taken as a purely radial geometric distortion, and where the internal camera model used the proposed implementation.
The first six (6) lines of the above table are the external camera parameters, three (3) angles and three (3) positions needed to compute [R3x3 T3x1]. The next five (5) lines are the internal camera parameters; we modified our parameter representation to fit the generally used model from Figure 5. Our degrees of freedom use a different mathematical formulation. Then, the remaining two (2) lines show the major lens geometric distortion parameters k1 and k2. These two are present in most models and account for most of the fish eye geometric distortion.
From a, b and s, as discussed herein above, we consider a = b = f with s = 0 as expressing camera pixel squareness, and the error on focal distance f. If a pixel is square, the height scale should be equal to the width scale, both should be perfectly at right angle, and a = b = f should represent the image scale.
Switching to the proposed model, the error on f reduces from 10^-3 mm to 10^-10 mm. Initially, focal distance f was wrong by 0.03%. Although it seems small, the model bias shifted the image center (Cx, CY) by close to two (2) pixels, mostly in the Y direction. At the same time, all external parameters have shifted. All the angles are changed, and object distance Tz is wrong by 0.3%: an error on range measurement amplified ten (10) times with respect to the error on f. It is a systematic range measurement error: a 3 mm error at 1 m distance would scale to 30 m at 10 km distance. Error percentages on Tx and TY are even worse, indicating that the leftmost model seeks to preserve distances along lens axis Zc. From a calibrated stereo pair, 3D recovery shows an error equivalent to two (2) pixels at the image scale. Stereo 3D measurement will be discussed further below.
Considering distortion parameters k1 and k2 (the minus sign on k1 means barrel distortion), it is noticed that both are underestimated. There is some residual curvature as one goes away from the image center. It may be smaller than a pixel, but curvature would build up if one tried to stitch images to create a map from multiple pictures.
3.0 MODEL/CALIBRATION BIAS IMPACT
The major model bias impact shows on 3D telemetry from a stereo pair, as used in a 3D scanner application. The same conclusion holds true for a 3D extraction from a moving camera since basically the mathematical triangulation process is the same.
3.1 Recovering 3D from a stereo pair
As mentioned previously, neglecting the proposed correction on the camera model creates a 3D triangulation systematic error. Figure 13 shows a stereo pair typically used for measuring objects in 3D, using two (2) simultaneous camera images. A full discussion on triangulation is given in [5].
O and O' are the optical centers for the two cameras (not shown), and both lens axes project at right angles on the image planes at the image centers, respectively (Cx, CY, f) and (Cx', CY', f') (not shown for clarity), where (Cx, CY) is the origin of the image plane, and f the distance between O and the image plane, as shown in Figure 4. Similarly, (Cx', CY') is the origin of the other image plane, and f' the distance between O' and the image plane.
Both cameras are seeing a common point M on the object (not shown). M projects in both camera images as points m and m'.
To find out where M is in space, two lines are stretched starting from O and O' through their respective camera image points m and m'. M is computed where both lines intersect.
3D accuracy depends on the accurate knowledge of:
1. Optical centers O and O'
2. Focal distances f and f'
3. Image centers (Cx, CY) and (Cx', CY')
4. Lens axis orientation Zc and Zc'
5. Accuracy on image points m and m'
6. Intersection for OM and O'M
The first four (4) requirements for 3D telemetric accuracy are found through camera calibration, the fifth from sub pixel image feature extraction. The last is the triangulation 3D recovery itself.
The first four (4) error dependencies described above, namely the optical centers O and O', the focal distances f and f', the image centers (Cx, CY) and (Cx', CY'), and the lens axis orientations Zc and Zc', are subject to the discovered camera model bias discussed above.
Although the tilted axis assumption creates a very small error on focal distance f, it will generate a large bias on image center (Cx, CY) and focal points O and O'. Since O and O' are out of position, the triangulation to find M gives a systematic 3D error. From the proposed calibration example, the two (2) pixel error on the optical centers O and O' dominates any measurement error on image points m and m' since we were able to retrieve them to 1/4 pixel accuracy.
Feature point extraction (m and m') is subject to the edge orientation bias and corner detection bias that had to be dealt with at calibration.
And finally, for the triangulation itself, a classical Singular Value Decomposition (SVD) approach was resorted to for its stability and speed. Nothing ever guarantees that two lines will intersect in space. Therefore, M is sought as the point in space where both lines are closest.
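For illustration, a minimal closed-form sketch of the "closest point of two lines" idea is given below; the SVD formulation actually used, as noted above, is preferred for stability, so this simplified variant is an assumption rather than the disclosed implementation.

import numpy as np

def triangulate_midpoint(O1, d1, O2, d2):
    # Find the point M closest to both rays O1 + s*d1 and O2 + t*d2 (e.g. the
    # rays from each optical center through its image point m or m').
    # The small 2x2 normal system gives s and t; M is taken as the midpoint
    # of the two closest points, since the rays rarely intersect exactly.
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = O2 - O1
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    s, t = np.linalg.solve(A, np.array([d1 @ b, d2 @ b]))
    P1 = O1 + s * d1
    P2 = O2 + t * d2
    return (P1 + P2) / 2.0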
Over the course of our investigation, several bias sources were measured as affecting accuracy, with the camera model bias being the major contributor. The bias sources include the following:
Camera/lens model (2 pixel error on image center (Cx, CY))
Sub pixel edge orientation bias (1/4 pixel edge shift )
Sub pixel corner detection bias (1/4 pixel corner offset)
Unaccounted chromatic distortion (1/2 pixel edge shift with respect to colour)
- Under compensated geometric distortion (1/2 pixel residual curvature easily undetected)
JPEG image filtering at sub pixel level (variable with JPEG quality parameter )
Aside from the camera model's bias, most bias sources will result in feature point extraction errors. Removing these bias sources leads to a cumulated benefit. Achieving f accurate to 10^-10 mm even from a low resolution Bayer pattern camera using a wide angle micro lens, by simply changing the camera internal model, shows a major improvement, and explains why an accurate zooming lens model was impossible until now.
3.2 Model Bias: Overall and the Zooming Lens
As discussed above, every lens parameter is 'polluted' by the internal camera model bias referred to as the tilted axis assumption. The bias can be removed by changing the tilted axis assumption for an accurate perspective model of the 3D internal camera image plane.
In 3D triangulation, either from stereo or from a moving camera, the impact is that since image scale and image center are wrong, triangulation is shifted.
The example illustrated in Table 1 also shows that lens distortion parameters are under evaluated, with the minus sign on k1 meaning barrel distortion. When stitching multiple images to create a map, it results in curvature buildup from image to image.
Range and aim measurements are also biased and related to the error percentage on focal distance f since a camera gives a scaled measure. It also prevents the accurate modelling of the zooming lens camera. In a zooming lens, focal point O moves along the lens axis Zc. From calibration, O is found by knowing the image center (Cx, CY), f away at right angle with the image plane. The proposed example shows a systematic bias in those parameters. It gets even worse when considering run out in the lens mechanism since it moves the lens axis Zc. Without the proposed modification to the camera model, it then becomes impossible to model a zooming lens.
Modeling of the zooming lens camera requires plotting the displacement of focal point O in space. An ideal zooming lens would have O moving in a straight line on lens axis Zc, with entry plane f = 1 moving along. As soon as mechanical assembly errors occur, the linear displacement relationship for point O breaks up. The only way to evaluate the mechanical quality of the zooming lens therefore depends on the accurate knowledge of image center (Cx, CY) and f.
Mechanical quality behaviour is also the zooming lens tradeoff: zooming in to gain added accuracy when needed, at the cost of losing accuracy for assembly tolerances in the lens mechanism.
3.3 Geometric Distortion Removal Example
Referring now to Figure 14, using the previously calibrated test camera discussed above, and from the proposed algorithm where lens geometric distortion is expressed as equation (16), Figure 14 shows how lens distortion is removed from the image. Chromatic distortion is not visible on a black and white image.
3.4 Chromatic Distortion
Reference is now made to Figure 15 illustrating chromatic distortion from an f = 4 mm Cosmicar® C Mount lens, at a resolution of 1024 x 768. Once the true image center (Cx, CY) is known, chromatic distortion can be modelled. In most images, chromatic distortion is hardly visible, unless the subject is in full black and white and pictured with a colour camera. Reference [2] gives a model where each RGB color channel is modelled independently.
In Figure 15, chromatic distortion target displacement is shown amplified by fifty (50) times. Target positions are shown for the Red Green and Blue (RGB) camera colour channels, and are grouped by clusters of three (3). The 'x' or cross sign marker symbol indicates the target extraction in Blue, the '+' or plus sign marker symbol indicates the target extraction in Red, and the dot or point marker symbol indicates the target extraction in Green. The visible spectrum spread pushes the Red target centres outwards, and the Blue target centers inwards with respect to Green. The graph of Figure 15 shows a mostly radial behaviour. The imaginary lines joining Red Green and Blue centers for any given target location tend to line up and aim towards the image center indicated by the circled plus sign marker symbol close to the (500, 400) pixel coordinate.
The next two (2) graphs, illustrated in Figure 16 and Figure 17, show that both Blue and Red chromatic distortions are zero at the image center, starting at ordinate origin (0,0) as expected. As the lens theoretical behaviour predicts, chromatic distortion should be zero at the image center. Both chromatic Blue and Red distortions have their peak values at different radial distances from the center. From over ±1/2 pixel, chromatic distortion can be modelled and brought down to less than ± 1/8 pixel.
In radial coordinates taken from the image center (Cx, CY), unaccounted chromatic distortion creates a ± 1/2 pixel error on edge location with changing object colour, or changing light source spectrum. It stresses the need to be careful in extracting RGB from a Bayer pattern colour image since edge sensing is biased with colour.
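By way of illustration, and under the assumption that the Red and Blue deviations relative to Green are purely radial from the image center as Figure 15 suggests, a chromatic correction could be sketched as follows in Python; the odd polynomial form and the least squares fit are assumptions, not the disclosed chromatic model.

import numpy as np

def fit_chromatic(radii, deviations, order=2):
    # Fit a radial deviation curve dr(r) = c1*r + c2*r^3 + ... from measured
    # target center deviations of one colour channel (Red or Blue) relative to
    # Green, all radii taken from the image center (Cx, CY).
    radii = np.asarray(radii, dtype=float)
    deviations = np.asarray(deviations, dtype=float)
    A = np.column_stack([radii ** (2 * k + 1) for k in range(order)])
    coeffs, *_ = np.linalg.lstsq(A, deviations, rcond=None)
    return coeffs

def correct_chromatic(x, y, coeffs):
    # Shift a Red or Blue edge point radially back onto the Green geometry.
    r = np.hypot(x, y)
    dr = sum(c * r ** (2 * k + 1) for k, c in enumerate(coeffs))
    s = (r - dr) / r if r > 0 else 1.0
    return x * s, y * s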
3.5 Bayer Pattern Recovery
With reference now to Figure 18, Bayer Pattern colour cameras give a single colour signal for each given pixel, Red, Green, or Blue, as indicated by an R, G, or B prefix in the pixel number of Figure 18. Missing colour information is interpolated using neighbouring pixel information.
The most accurate Bayer pattern interpolation schemes use edge sensing to recover missing RGB information. We cannot interpolate across an edge since accurate Bayer pattern recovery needs to avoid discontinuities.
In a two-step process, we first compute the missing G pixel values on B and R pixels. For example, on red pixel R13, the missing G13 value is computed as:
G13 = (G12 + G14)/2 if the edge is horizontal (R13 > (R3 + R23)/2)    (11)
G13 = (G8 + G18)/2 if the edge is vertical (R13 > (R11 + R15)/2)    (12)
G13 = (G12 + G8 + G14 + G18)/4 otherwise    (13)
In step two, we compute missing B and R values using known G for edge sensing, assuming edges in B and R are geometrically found in the same image plane locations as G edges.
Since the lens introduces chromatic distortion, Bayer pattern recovery has to adapt to compensate for the 'colour shifting' of edge locations as we scan from B to G to R pixels.
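A minimal sketch of the first step, recovering the missing G value at a red pixel with the edge sensing rule of equations (11) to (13), is given below; the indexing convention (G immediately left/right and above/below the red pixel, R two pixels away) follows the Figure 18 layout, and the function name is an illustrative assumption.

```python
def green_at_red(raw, i, j):
    """Recover the missing G value at red pixel raw[i][j], interpolating along
    the detected edge and never across it, per equations (11)-(13)."""
    g_horiz = (raw[i][j - 1] + raw[i][j + 1]) / 2    # G12, G14: left/right of R13
    g_vert  = (raw[i - 1][j] + raw[i + 1][j]) / 2    # G8, G18: above/below R13
    r_vert  = (raw[i - 2][j] + raw[i + 2][j]) / 2    # R3, R23: two rows away
    r_horiz = (raw[i][j - 2] + raw[i][j + 2]) / 2    # R11, R15: two columns away
    if raw[i][j] > r_vert:       # horizontal edge, equation (11)
        return g_horiz
    if raw[i][j] > r_horiz:      # vertical edge, equation (12)
        return g_vert
    return (g_horiz + g_vert) / 2                    # equation (13)
```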
3.6 Optical System Design Tradeoffs
For surveillance, 3D telemetry, and scanners, the need to eliminate the camera calibration bias has been demonstrated. Other key assets of the technology include, but are not limited to:
1. A software approach creates an open integration architecture;
2. Allows the use of wide angle lenses, or a reduction in lens size;
3. Allows modelling of a zooming lens camera;
4. Increases the computation speed of image geometric and chromatic distortion correction algorithms, and adds lossless image compression;
5. Removes the Bayer pattern recovery error caused by chromatic distortion.
It should be noted that software appears to be the only strategy to increase the accuracy beyond the capabilities of the camera hardware. As an enabler, the technology allows:
• The use of wide angle lenses to increase the camera angle of view without loss of accuracy. A 1/3" CCD with an f = 4 mm lens gives a 90 degree angle of view.
• Compensation for cameras' low resolution by adding chromatic distortion modelling and sub pixel edge measurement across the spectrum.
• Miniaturization: We achieved calibration using a micro lens, with focal distance evaluation accurate to 10⁻¹⁰ m, roughly the size of a hydrogen molecule.
• Sensor fusion between SWIR, colour, synthetic image, radar and LIDAR sources: to achieve accurate fusion, each image scale and image center has to be known and image distortion removed. Digital image lag may cause nausea in a human observer, making the proposed 4:1 simplification of geometric distortion removal desirable. A residual 3D error from a calibration bias will also cause human discomfort, such as headaches or cybersickness. Testing of vision amplification for soldier vision concludes that synthetic imaging lagging reality by 1/4 second can make a human observer nauseous. Since the solution is implemented in software, it is platform independent.
On low resolution images, sub pixel edge extraction and plotting helps the human brain interpret the image. Low resolution SWIR images can be fused with higher resolution colour images.
In augmented reality, the computer generated image has ideal perspective and a known focal length. Since a computer generated image is perfectly pinhole, created from a set value of f, it stands to reason to correct the camera image for distortion and fit it to the same scale as the synthetic image.
In earth observation and surveillance from satellite, any lens system will exhibit distortion at some level. The earth's atmosphere also adds distortion, which can only be compensated for when the lens distortion is accurately known. When stitching images, under-compensated geometric distortion builds up curvature, and the biased perspective caused by the tilted axis assumption creates a shape alteration: loss of squareness, loss of verticality, and so on.
Sub pixel edge extraction is by far the most efficient means of image compression. By correcting the image for lens distortion and through a modification of JPEG, an added 30% lossless image compression was also demonstrated.
The proposed approach is desirable for zooming lens telemetry, increases speed and accuracy in wide angle lens applications, and allows system miniaturization in two ways: firstly, by providing added accuracy from smaller lens systems; secondly, because filtering through software allows for simpler optics. It provides the best trade-off between accuracy, speed, cost, bulk, weight, maintenance and upgradeability.
4.0 CONCLUSION
No automated system is more accurate than its instrument. The use of digital cameras as measuring tools in Intelligent Systems (IS) requires the camera to be calibrated.
Added accuracy is achievable only through software, since commercial lenses can have a 10% tolerance on focal distance f, and software is the only way to compensate lens distortion at the sub pixel level.
The tilted axis assumption creates a major bias and has to be replaced by a perspective model of the image plane that retains the camera image plane 3D geometry: horizontal and vertical image scales are equal and at a right angle. The tilted axis assumption introduces a calibration bias that shows in 3D triangulation, since the image center is out of position. In the example discussed above, the two (2) pixel image center bias dominates every other error in the triangulation process, since image features can be extracted to 1/4 pixel accuracy.
Care should be taken in extracting image features for calibration since several sub pixel biases can occur. Sub pixel bias sources include, but are not restricted to:
- Sub pixel edge location-orientation bias;
- Sub pixel corner detection bias;
- Unaccounted chromatic distortion;
- Under compensated geometric distortion;
- JPEG image filtering at sub pixel level.
The perspective model for the internal camera image plane is needed to locate the displacement of the lens focal point in a zooming lens. A software correction approach increases speed and accuracy in wide angle lens applications, and allows system miniaturization in two ways: firstly, by providing added accuracy from smaller lens systems; secondly, because filtering through software allows for simpler optics. Software modelling and calibration is the only technique for improving camera performance beyond hardware limitations.
While illustrated in the block diagrams as groups of discrete components communicating with each other via distinct data signal connections, it will be understood by those skilled in the art that the present embodiments are provided by a combination of hardware and software components, with some components being implemented by a given function or operation of a hardware or software system, and many of the data paths illustrated being implemented by data communication within a computer application or operating system. The structure illustrated is thus provided for efficiency of teaching the present embodiment.
It should be noted that the present invention can be carried out as a method, can be embodied in a system, and/or on a computer readable medium. The embodiments of the invention described above are intended to be exemplary only. The scope of the invention is therefore intended to be limited solely by the scope of the appended claims.
REFERENCES
[1] Frédéric Devernay. A Non-Maxima Suppression Method for Edge Detection with Sub-Pixel Accuracy. INRIA: Institut National de Recherche en Informatique et en Automatique, Report No. 2724, November 1995, 20 pages.
[2] Y. M. Harry Ng, C. P. Kwong. Correcting the Chromatic Aberration in Barrel Distortion of Endoscopic Images. Department of Automation and Computer Aided Engineering, Chinese University of Hong Kong, 6 pages.
[3] Shawn Becker. Semiautomatic Camera Lens Calibration from Partially Known Structure. MIT: Massachusetts Institute of Technology, http://alumni.media.mit.edu/~sbeck/results/Distortion/distortion.html, ©1994, 1995.
[4] Konstantinos G. Derpanis. The Harris Corner Detector. October 2004, 2 pages.
[5] R. I. Hartley, P. Sturm. Triangulation. Proc. of the ARPA Image Understanding Workshop 1994, Monterey, CA, 1994, pp. 957-966.

CLAIMS:
1. A computer-implemented method for modeling an imaging device for use in calibration and image correction, the method comprising:
defining a first 3D orthogonal coordinate system having an origin located at a focal point of the imaging device, a first axis of the first coordinate system extending along a direction of a line of sight of the imaging device;
defining a second 3D orthogonal coordinate system having an origin located at a unitary distance from the focal point, a first axis of the second coordinate system extending along the direction of the line of sight, a second and a third axis of the second coordinate system substantially parallel to a second and a third axis of the first coordinate system respectively, the second and the third axis of the second coordinate system thereby defining a true scale plane square with the line of sight;
defining a third 3D coordinate system having an origin located at a focal distance from the focal point, a first axis of the third coordinate system extending along the direction of the line of sight, a second and a third axis of the third coordinate system respectively tilted by a first and a second angle relative to an orientation of the second and the third axis of the first coordinate system, the second and the third axis of the third coordinate system thereby defining an image plane off-squareness relative to the line of sight; receiving a set of 3D coordinates associated with a point of a real world 3D object captured by the imaging device;
computing a projection of the point onto the true scale plane, thereby obtaining a first set of planar coordinates, and onto the image plane, thereby obtaining a second set of planar coordinates; and
outputting the second set of planar coordinates indicative of a location of an image point corresponding to the point of the 3D object.
2. The method of claim 1, wherein the second coordinate system is defined such that the true scale plane establishes an entry to a lens system of the imaging device and the projection on the true scale plane expresses an output of an external model of the imaging device and the third coordinate system is defined such that the image plane establishes an output to the lens system and the projection on the image plane expresses an output of an internal model of the imaging device.
3. The method of claim 1, wherein the received set of 3D coordinates is [x y z 1]T and the projection of the point of the 3D object onto the true scale plane is computed as:

P1 [x y z 1]T = [x y z]T ≈ [x/z  y/z  1]T

where ≈ is a scale equivalent operator and P1 defines a projection operation onto the true scale plane with respect to the first coordinate system.
4. The method of claim 3, wherein the projection of the point of the 3D object onto the image plane is computed as:

Pf [x y z 1]T ≈ [ f(h11x + h12y + h13z)   f(h22y + h23z)   (h31x + h32y + h33z) ]T
              ≈ [ f(h11x + h12y + h13z)/(h31x + h32y + h33z)   f(h22y + h23z)/(h31x + h32y + h33z)   1 ]T

where Pf defines a projection operation onto the image plane, f is the focal distance, α is the first angle, β is the second angle, R(x, α) is an α rotation matrix with respect to an axis x of the image plane, the axis x defined as substantially parallel to the second axis of the first coordinate system before the α rotation is performed, R(y, β) is a β rotation matrix with respect to an axis y of the image plane, the axis y defined as substantially parallel to the third axis of the first coordinate system before the β rotation is performed, the α rotation computed rightmost such that the β rotation is performed relative to the axis x rotated by the angle α, and where
h11 = cosβ,
h12 = sinβ sinα,
h13 = sinβ cosα,
h22 = cosα,
h23 = -sinα,
h31 = -sinβ,
h32 = cosβ sinα, and
h33 = cosβ cosα.
5. The method of claim 4, further comprising determining a homography H between the true scale plane and the image plane as:

H = [ f h11   f h12   f h13 ]
    [ 0       f h22   f h23 ]
    [ h31     h32     h33   ]

where h31 and h32 are non-zero elements applying a perspective correction to x and y scales in the image plane, and the second set of planar coordinates (x", y") is a homographic transformation of a distorted position (x', y') of an image of the point on the true scale plane, the homographic transformation expressed as:

[x" y" 1]T ≈ [u v w]T = H [x' y' 1]T

where u = f(cosβ x' + sinβ sinα y' + sinβ cosα),
v = f(cosα y' - sinα),
w = -sinβ x' + cosβ sinα y' + cosβ cosα,
x" = u/w + Cx, and
y" = v/w + CY, with (Cx, CY) being a position of the origin of the third coordinate system.
6. The method of claim 5, wherein the homography H is determined as:

H = [ f cosβ   f sinβ sinα   f sinβ cosα ]     [ f    fαβ   fβ  ]
    [ 0        f cosα        -f sinα     ]  ≈  [ 0    f     -fα ]
    [ -sinβ    cosβ sinα     cosβ cosα   ]     [ -β   α     1   ]

where the approximation cosθ ≈ 1 and sinθ ≈ θ is used for small angles α and β.
7. The method of claim 1, further comprising compensating for a distortion of a lens of the imaging device at the true scale plane, the compensating comprising applying a lens distortion model defined by:

r' = r + k1 r³ + k2 r⁵ + ...

where the first set of planar coordinates comprises an undistorted position (x, y) of an image of the point on the true scale plane expressed in radial coordinates (r, θ), with r² = x² + y² and tanθ = y/x, (x', y') represents a distorted position of (x, y) at an output of the lens before projection of the point on the image plane, r' is a distorted radial distance computed on the basis of (x', y'), and k1 and k2 are geometric distortion parameters of the lens.
8. A system for modeling an imaging device for use in calibration and image correction, the system comprising:
a memory;
a processor; and
at least one application stored in the memory and executable by the processor for
defining a first 3D orthogonal coordinate system having an origin located at a focal point of the imaging device, a first axis of the first coordinate system extending along a direction of a line of sight of the imaging device;
defining a second 3D orthogonal coordinate system having an origin located at a unitary distance from the focal point, a first axis of the second coordinate system extending along the direction of the line of sight, a second and a third axis of the second coordinate system substantially parallel to a second and a third axis of the first coordinate system respectively, the second and the third axis of the second coordinate system thereby defining a true scale plane square with the line of sight;
defining a third 3D coordinate system having an origin located at a focal distance from the focal point, a first axis of the third coordinate system extending along the direction of the line of sight, a second and a third axis of the third coordinate system respectively tilted by a first and a second angle relative to an orientation of the second and the third axis of the first coordinate system, the second and the third axis of the third coordinate system thereby defining an image plane off-squareness relative to the line of sight;
receiving a set of 3D coordinates associated with a point of a real world 3D object captured by the imaging device;
computing a projection of the point onto the true scale plane, thereby obtaining a first set of planar coordinates, and onto the image plane, thereby obtaining a second set of planar coordinates; and
outputting the second set of planar coordinates indicative of a location of an image point corresponding to the point of the 3D object.
9. The system of claim 8, wherein the at least one application is executable by the processor for defining the second coordinate system such that the true scale plane establishes an entry to a lens system of the imaging device and the projection on the true scale plane expresses an output of an external model of the imaging device and defining the third coordinate system such that the image plane establishes an output to the lens system and the projection on the image plane expresses an output of an internal model of the imaging device.
10. The system of claim 8, wherein the at least one application is executable by the processor for receiving the set of 3D coordinates as [x y z 1]T and computing the projection of the point of the 3D object onto the true scale plane as:

P1 [x y z 1]T = [x y z]T ≈ [x/z  y/z  1]T

where ≈ is a scale equivalent operator and P1 defines a projection operation onto the true scale plane with respect to the first coordinate system.
11. The system of claim 10, wherein the at least one application is executable by the processor for computing the projection of the point of the 3D object onto the image plane as:

Pf [x y z 1]T ≈ [ f(h11x + h12y + h13z)   f(h22y + h23z)   (h31x + h32y + h33z) ]T
              ≈ [ f(h11x + h12y + h13z)/(h31x + h32y + h33z)   f(h22y + h23z)/(h31x + h32y + h33z)   1 ]T

where Pf defines a projection operation onto the image plane, f is the focal distance, α is the first angle, β is the second angle, R(x, α) is an α rotation matrix with respect to an axis x of the image plane, the axis x defined as substantially parallel to the second axis of the first coordinate system before the α rotation is performed, R(y, β) is a β rotation matrix with respect to an axis y of the image plane, the axis y defined as substantially parallel to the third axis of the first coordinate system before the β rotation is performed, the α rotation computed rightmost such that the β rotation is performed relative to the axis x rotated by the angle α, and where
h11 = cosβ,
h12 = sinβ sinα,
h13 = sinβ cosα,
h22 = cosα,
h23 = -sinα,
h31 = -sinβ,
h32 = cosβ sinα, and
h33 = cosβ cosα.
12. The system of claim 11, wherein the at least one application is executable by the processor for determining a homography H between the true scale plane and the image plane as:

H = [ f h11   f h12   f h13 ]
    [ 0       f h22   f h23 ]
    [ h31     h32     h33   ]

where h31 and h32 are non-zero elements applying a perspective correction to x and y scales in the image plane, and the second set of planar coordinates (x", y") is a homographic transformation of a distorted position (x', y') of an image of the point on the true scale plane, the homographic transformation expressed as:

[x" y" 1]T ≈ [u v w]T = H [x' y' 1]T

where u = f(cosβ x' + sinβ sinα y' + sinβ cosα),
v = f(cosα y' - sinα),
w = -sinβ x' + cosβ sinα y' + cosβ cosα,
x" = u/w + Cx, and
y" = v/w + CY, with (Cx, CY) being a position of the origin of the third coordinate system.
13. The system of claim 12, wherein the at least one application is executable by the processor for determining the homography H as:

H = [ f cosβ   f sinβ sinα   f sinβ cosα ]     [ f    fαβ   fβ  ]
    [ 0        f cosα        -f sinα     ]  ≈  [ 0    f     -fα ]
    [ -sinβ    cosβ sinα     cosβ cosα   ]     [ -β   α     1   ]

where the approximation cosθ ≈ 1 and sinθ ≈ θ is used for small angles α and β.
14. The system of claim 8, wherein the at least one application is executable by the processor for compensating for a distortion of a lens of the imaging device at the true scale plane, the compensating comprising applying a lens distortion model defined by:

r' = r + k1 r³ + k2 r⁵ + ...

where the first set of planar coordinates comprises an undistorted position (x, y) of an image of the point on the true scale plane expressed in radial coordinates (r, θ), with r² = x² + y² and tanθ = y/x, (x', y') represents a distorted position of (x, y) at an output of the lens before projection of the point on the image plane, r' is a distorted radial distance computed on the basis of (x', y'), and k1 and k2 are geometric distortion parameters of the lens.
15. The system of claim 8, wherein the imaging device comprises one of a zooming lens camera, a near-infrared imaging device, a short-wavelength infrared imaging device, a long-wavelength infrared imaging device, a radar device, a light detection and ranging device, a parabolic mirror telescope imager, a surgical endoscopic camera, a Computed tomography scanning device, a satellite imaging device, a sonar device, and a multi spectral sensor fusion system.
16. A computer readable medium having stored thereon program code executable by a processor for modeling an imaging device for use in calibration and image correction, the program code executable for: defining a first 3D orthogonal coordinate system having an origin located at a focal point of the imaging device, a first axis of the first coordinate system extending along a direction of a line of sight of the imaging device; defining a second 3D orthogonal coordinate system having an origin located at a unitary distance from the focal point, a first axis of the second coordinate system extending along the direction of the line of sight, a second and a third axis of the second coordinate system substantially parallel to a second and a third axis of the first coordinate system respectively, the second and the third axis of the second coordinate system thereby defining a true scale plane square with the line of sight;
defining a third 3D coordinate system having an origin located at a focal distance from the focal point, a first axis of the third coordinate system extending along the direction of the line of sight, a second and a third axis of the third coordinate system respectively tilted by a first and a second angle relative to an orientation of the second and the third axis of the first coordinate system, the second and the third axis of the third coordinate system thereby defining an image plane off-squareness relative to the line of sight; receiving a set of 3D coordinates associated with a point of a real world 3D object captured by the imaging device;
computing a projection of the point onto the true scale plane, thereby obtaining a first set of planar coordinates, and onto the image plane, thereby obtaining a second set of planar coordinates; and
outputting the second set of planar coordinates indicative of a location of an image point corresponding to the point of the 3D object.