WO2015000056A1 - System and method for imaging device modelling and calibration - Google Patents
- Publication number
- WO2015000056A1 (PCT/CA2014/000534)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- axis
- coordinate system
- image
- point
- plane
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 44
- 238000003384 imaging method Methods 0.000 title claims description 77
- 239000011159 matrix material Substances 0.000 claims description 16
- 238000012937 correction Methods 0.000 claims description 15
- 230000009466 transformation Effects 0.000 claims description 15
- 238000003702 image correction Methods 0.000 claims description 7
- 238000001514 detection method Methods 0.000 claims description 6
- 230000004927 fusion Effects 0.000 claims description 5
- 238000003331 infrared imaging Methods 0.000 claims description 5
- 238000002591 computed tomography Methods 0.000 claims description 4
- 230000003595 spectral effect Effects 0.000 claims description 3
- 238000003333 near-infrared imaging Methods 0.000 claims description 2
- 238000005259 measurement Methods 0.000 description 12
- 238000000605 extraction Methods 0.000 description 10
- 238000010586 diagram Methods 0.000 description 9
- 238000006073 displacement reaction Methods 0.000 description 8
- 230000006399 behavior Effects 0.000 description 7
- 239000000203 mixture Substances 0.000 description 7
- 230000008569 process Effects 0.000 description 7
- 238000011084 recovery Methods 0.000 description 7
- 230000009897 systematic effect Effects 0.000 description 7
- 230000003287 optical effect Effects 0.000 description 6
- 238000012360 testing method Methods 0.000 description 6
- 238000013459 approach Methods 0.000 description 5
- 238000009472 formulation Methods 0.000 description 5
- 238000004422 calculation algorithm Methods 0.000 description 4
- 238000007906 compression Methods 0.000 description 4
- 230000006835 compression Effects 0.000 description 4
- 238000001914 filtration Methods 0.000 description 4
- 230000006870 function Effects 0.000 description 4
- 239000003550 marker Substances 0.000 description 4
- 238000012986 modification Methods 0.000 description 4
- 230000004048 modification Effects 0.000 description 4
- 230000008901 benefit Effects 0.000 description 3
- 238000004891 communication Methods 0.000 description 3
- 238000013178 mathematical model Methods 0.000 description 3
- 238000005192 partition Methods 0.000 description 3
- 238000001228 spectrum Methods 0.000 description 3
- 238000003860 storage Methods 0.000 description 3
- 239000003086 colorant Substances 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 230000004438 eyesight Effects 0.000 description 2
- 230000006872 improvement Effects 0.000 description 2
- 238000004519 manufacturing process Methods 0.000 description 2
- 230000007246 mechanism Effects 0.000 description 2
- 238000012545 processing Methods 0.000 description 2
- 238000011160 research Methods 0.000 description 2
- 238000010845 search algorithm Methods 0.000 description 2
- 238000012546 transfer Methods 0.000 description 2
- 238000013519 translation Methods 0.000 description 2
- 230000014616 translation Effects 0.000 description 2
- 239000013598 vector Substances 0.000 description 2
- 238000012897 Levenberg–Marquardt algorithm Methods 0.000 description 1
- 230000004075 alteration Effects 0.000 description 1
- 230000003321 amplification Effects 0.000 description 1
- 238000004458 analytical method Methods 0.000 description 1
- 230000003190 augmentative effect Effects 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 230000001413 cellular effect Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 238000004883 computer application Methods 0.000 description 1
- 230000001955 cumulated effect Effects 0.000 description 1
- 238000013500 data storage Methods 0.000 description 1
- 238000000354 decomposition reaction Methods 0.000 description 1
- 238000012217 deletion Methods 0.000 description 1
- 230000037430 deletion Effects 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 238000003708 edge detection Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000011156 evaluation Methods 0.000 description 1
- 239000000284 extract Substances 0.000 description 1
- 230000004313 glare Effects 0.000 description 1
- 238000010191 image analysis Methods 0.000 description 1
- 230000010354 integration Effects 0.000 description 1
- 238000011835 investigation Methods 0.000 description 1
- 238000005304 joining Methods 0.000 description 1
- 238000012423 maintenance Methods 0.000 description 1
- 230000008520 organization Effects 0.000 description 1
- 230000000717 retained effect Effects 0.000 description 1
- 230000001629 suppression Effects 0.000 description 1
- 238000000844 transformation Methods 0.000 description 1
- 238000012795 verification Methods 0.000 description 1
- 238000001429 visible spectrum Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/02—Affine transformations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N17/00—Diagnosis, testing or measuring for television systems or their details
- H04N17/002—Diagnosis, testing or measuring for television systems or their details for television cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
- H04N23/843—Demosaicing, e.g. interpolating colour pixel values
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/60—Noise processing, e.g. detecting, correcting, reducing or removing noise
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/60—Noise processing, e.g. detecting, correcting, reducing or removing noise
- H04N25/61—Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/60—Noise processing, e.g. detecting, correcting, reducing or removing noise
- H04N25/61—Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
- H04N25/611—Correction of chromatic aberration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
Definitions
- the present invention relates to a system and method for imaging device modelling and calibration that compensates for imperfections in line of sight axis squareness with the image plane of the imaging device.
- Calibration of digital cameras and other imaging devices seeks to create a mathematical model of how the image 'prints' through the lens on the imaging device's surface.
- the procedure first uses an image of a calibration target whose geometry is known to an accurate tolerance, and extracts the target elements from the image; a mathematical model then relates the image information to the real three-dimensional (3D) target information.
- the imaging device can then be used to map real world objects using a scale factor, the focal distance f.
- the proposed calibration and modelling technique introduces an accurate perspective correction to account for assembly tolerances in the imaging device or camera/lens system that cause the lens axis to be off square with the image plane.
- Accurate knowledge of camera plane and lens assembly removes a systematic bias in telemetry systems and 3D scanning using a digital camera or a camera stereo pair, yields an accurate focal length (image scale) measurement, locates the true image center position on the camera plane, and increases accuracy in measuring distortion introduced by image curvature (geometric distortion) and rainbow light splitting in the lens optics (chromatic distortion).
- Removing lens distortion increases the image compression ratio without adding any loss.
- a computer-implemented method for modeling an imaging device for use in calibration and image correction comprising defining a first 3D orthogonal coordinate system having an origin located at a focal point of the imaging device, a first axis of the first coordinate system extending along a direction of a line of sight of the imaging device; defining a second 3D orthogonal coordinate system having an origin located at a unitary distance from the focal point, a first axis of the second coordinate system extending along the direction of the line of sight, a second and a third axis of the second coordinate system substantially parallel to a second and a third axis of the first coordinate system respectively, the second and the third axis of the second coordinate system thereby defining a true scale plane square with the line of sight; defining a third 3D coordinate system having an origin located at a focal distance from the focal point, a first axis of the third coordinate system extending along the direction of the line of sight, a second and a third
- the second coordinate system is defined such that the true scale plane establishes an entry to a lens system of the imaging device and the projection on the true scale plane expresses an output of an external model of the imaging device and the third coordinate system is defined such that the image plane establishes an output to the lens system and the projection on the image plane expresses an output of an internal model of the imaging device.
- the received set of 3D coordinates is [x y z 1]ᵀ and the projection of the point of the 3D object onto the true scale plane is computed as:

$$\begin{bmatrix} x/z & y/z & 1 \end{bmatrix}^T \cong P_1 \begin{bmatrix} x & y & z & 1 \end{bmatrix}^T$$

where ≅ is a scale equivalent operator and P₁ defines a projection operation onto the true scale plane with respect to the first coordinate system.
- the projection of the point of the 3D object onto the image plane is computed as:

$$\begin{bmatrix} x'' & y'' & z'' & 1 \end{bmatrix}^T \cong P_f\, R(y,\beta)\, R(x,\alpha) \begin{bmatrix} x & y & 1 & 1 \end{bmatrix}^T$$

- Pf defines a projection operation onto the image plane, f is the focal distance, α is the first angle, β is the second angle, R(x, α) is an α rotation matrix with respect to an axis x of the image plane, the axis x defined as substantially parallel to the second axis of the first coordinate system before the α rotation is performed, R(y, β) is a β rotation matrix with respect to an axis y of the image plane, the axis y defined as substantially parallel to the third axis of the first coordinate system before the β rotation is performed, the α rotation computed rightmost such that the β rotation is performed relative to the axis x rotated by the angle α, and where

$$R(y,\beta)\, R(x,\alpha) = \begin{bmatrix} \cos\beta & \sin\beta\sin\alpha & \sin\beta\cos\alpha \\ 0 & \cos\alpha & -\sin\alpha \\ -\sin\beta & \cos\beta\sin\alpha & \cos\beta\cos\alpha \end{bmatrix}$$

so that, in particular, h₂₃ = −sin α and h₃₂ = cos β sin α.
- the method further comprises determining a homography H between the true scale plane and the image plane as:

$$H = \begin{bmatrix} f\cos\beta & f\sin\beta\sin\alpha & f\sin\beta\cos\alpha \\ 0 & f\cos\alpha & -f\sin\alpha \\ -\sin\beta & \cos\beta\sin\alpha & \cos\beta\cos\alpha \end{bmatrix} \approx \begin{bmatrix} f & f\alpha\beta & f\beta \\ 0 & f & -f\alpha \\ -\beta & \alpha & 1 \end{bmatrix}$$

- the rightmost matrix being the small-angle approximation of H.
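To make this mapping concrete, below is a minimal Python sketch that builds H from f, α and β and maps a true scale plane point to the tilted image plane. The function names are assumptions, and the image center offset (Cx, CY) is applied at the end as the shift of origin described later in the text.

```python
import numpy as np

def homography(f, alpha, beta):
    """H maps true scale plane coordinates (x, y, 1) to the tilted image plane."""
    sa, ca = np.sin(alpha), np.cos(alpha)
    sb, cb = np.sin(beta), np.cos(beta)
    return np.array([[f * cb, f * sb * sa, f * sb * ca],
                     [0.0,    f * ca,     -f * sa],
                     [-sb,    cb * sa,     cb * ca]])

def to_image_plane(x, y, f, alpha, beta, Cx, Cy):
    v = homography(f, alpha, beta) @ np.array([x, y, 1.0])
    return v[0] / v[2] + Cx, v[1] / v[2] + Cy  # rescale, then shift of origin
```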
- a system for modeling an imaging device for use in calibration and image correction comprising a memory; a processor; and at least one application stored in the memory and executable by the processor for defining a first 3D orthogonal coordinate system having an origin located at a focal point of the imaging device, a first axis of the first coordinate system extending along a direction of a line of sight of the imaging device; defining a second 3D orthogonal coordinate system having an origin located at a unitary distance from the focal point, a first axis of the second coordinate system extending along the direction of the line of sight, a second and a third axis of the second coordinate system substantially parallel to a second and a third axis of the first coordinate system respectively, the second and the third axis of the second coordinate system thereby defining a true scale plane square with the line of sight; defining a third 3D coordinate system having an origin located at a focal distance from the focal point, a first axis of the third coordinate
- the at least one application is executable by the processor for defining the second coordinate system such that the true scale plane establishes an entry to a lens system of the imaging device and the projection on the true scale plane expresses an output of an external model of the imaging device and defining the third coordinate system such that the image plane establishes an output to the lens system and the projection on the image plane expresses an output of an internal model of the imaging device.
- the at least one application is executable by the processor for receiving the set of 3D coordinates as [x y z 1]ᵀ and computing the projection of the point of the 3D object onto the true scale plane as:

$$\begin{bmatrix} x/z & y/z & 1 \end{bmatrix}^T \cong P_1 \begin{bmatrix} x & y & z & 1 \end{bmatrix}^T$$

- where ≅ is a scale equivalent operator and P₁ defines a projection operation onto the true scale plane with respect to the first coordinate system.
- the at least one application is executable by the processor for computing the projection of the point of the 3D object onto the image plane as:

$$\begin{bmatrix} x'' & y'' & z'' & 1 \end{bmatrix}^T \cong P_f\, R(y,\beta)\, R(x,\alpha) \begin{bmatrix} x & y & 1 & 1 \end{bmatrix}^T$$

- Pf defines a projection operation onto the image plane, f is the focal distance, α is the first angle, β is the second angle, R(x, α) is an α rotation matrix with respect to an axis x of the image plane, the axis x defined as substantially parallel to the second axis of the first coordinate system before the α rotation is performed, R(y, β) is a β rotation matrix with respect to an axis y of the image plane, the axis y defined as substantially parallel to the third axis of the first coordinate system before the β rotation is performed, the α rotation computed rightmost such that the β rotation is performed relative to the axis x rotated by the angle α.
- the at least one application is executable by the processor for determining a homography H between the true scale plane and the image plane as:

$$H = \begin{bmatrix} f\cos\beta & f\sin\beta\sin\alpha & f\sin\beta\cos\alpha \\ 0 & f\cos\alpha & -f\sin\alpha \\ -\sin\beta & \cos\beta\sin\alpha & \cos\beta\cos\alpha \end{bmatrix} \approx \begin{bmatrix} f & f\alpha\beta & f\beta \\ 0 & f & -f\alpha \\ -\beta & \alpha & 1 \end{bmatrix}$$
- the imaging device comprises one of a zooming lens camera, a near- infrared imaging device, a short-wavelength infrared imaging device, a long-wavelength infrared imaging device, a radar device, a light detection and ranging device, a parabolic mirror telescope imager, a surgical endoscopic camera, a Computed tomography scanning device, a satellite imaging device, a sonar device, and a multi spectral sensor fusion system.
- a computer readable medium having stored thereon program code executable by a processor for modeling an imaging device for use in calibration and image correction, the program code executable for defining a first 3D orthogonal coordinate system having an origin located at a focal point of the imaging device, a first axis of the first coordinate system extending along a direction of a line of sight of the imaging device; defining a second 3D orthogonal coordinate system having an origin located at a unitary distance from the focal point, a first axis of the second coordinate system extending along the direction of the line of sight, a second and a third axis of the second coordinate system substantially parallel to a second and a third axis of the first coordinate system respectively, the second and the third axis of the second coordinate system thereby defining a true scale plane square with the line of sight; defining a third 3D coordinate system having an origin located at a focal distance from the focal point, a first axis of the third coordinate system extending along the direction of
- Figure 1 is a schematic diagram illustrating lens distortion
- Figure 2 are schematic views illustrating barrel and pincushion lens geometric distortion
- Figure 3 is a plan view illustrating edge dithering when two neighbouring pixel colours mix
- Figure 4 is a schematic diagram illustrating the parameters that define the behaviour of a camera/lens combination in an ideal camera model representation assuming the image plane is square with the line of sight;
- Figure 5 is a schematic diagram of the tilted axis assumption of a camera internal model where tilted axis compensation is added to the ideal camera representation of Figure 4;
- Figure 6 is a schematic diagram of a new set of variables for a camera internal model, in accordance with an illustrative embodiment of the present invention.
- Figure 7 is a schematic diagram of a radial distortion mode, in accordance with an illustrative embodiment of the present invention.
- Figure 8a is a flowchart of a method for computing the location of an image point, in accordance with an illustrative embodiment of the present invention.
- Figure 8c is a flowchart of the step of Figure 8a of applying a lens distortion model
- Figure 8d is a flowchart of the step of Figure 8a of projecting on a tilted image plane / using an internal camera model
- Figure 9a is a schematic diagram of a system for computing the location of an image point, in accordance with an illustrative embodiment of the present invention
- Figure 9b is a block diagram showing an exemplary application running on the processor of Figure 9a;
- Figure 10 is a distorted photograph view of a calibration target
- Figure 11 are photographic views of a micro lens test camera with circuit board
- Figure 12 is a combined illustration of target extraction
- Figure 13 is a schematic diagram of a stereo pair used for measuring objects in 3D using two camera images simultaneously;
- Figure 14 are photographs illustrating geometric distortion correction using a test camera;
- Figure 16 is a graph illustrating red chromatic distortion, radial correction vs. distance from image center (pixels);
- Figure 17 is a graph illustrating blue chromatic distortion, radial correction vs. distance from image center (pixels).
- Figure 18 is a schematic illustration of the Bayer Pattern layout for a colour camera.
- Lens distortion introduces the biggest error found in digital imaging. This is illustrated in Figures 1 and 2.
- the fish eye effect is referred to as geometric distortion and curves straight lines.
- Coloured shading at the edges of the image (referred to as « Blue Tinted Edge » and « Red Tinted Edge » in Figure 1 ) is referred to as chromatic distortion and is caused by the splitting of light in the lens of the imaging device (not shown).
- dithering is the intermediate pixel colour encountered when an edge goes through a given pixel and both neighbouring colours mix.
- the pixel colour is a weighted average of the adjacent colour values on either side of the edge, weighted by each colour's respective surface inside the pixel.
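For example, if two neighbouring colours c₁ and c₂ occupy area fractions w and 1 − w of the pixel (notation assumed here for illustration), the observed pixel colour is:

$$c_{\text{pixel}} = w\,c_1 + (1 - w)\,c_2$$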
- edge dithering causes shading at object edges
- using colour images of a black and white target, colour edge shading is caused by chromatic distortion.
- dithering appears in grey shades as does geometric distortion. It is therefore desirable to isolate chromatic lens distortion from edge dithering or geometric distortion using edge colour.
- Modelling a camera requires a mathematical model and a calibration procedure to measure the parameters that define the behaviour of a specific camera/lens combination.
- while a camera is referred to herein, the proposed system and method also apply to other imaging devices.
- devices including, but not restricted to, zooming lens cameras; near-infrared (NIR), short-wavelength infrared (SWIR), and long-wavelength infrared (LWIR) imaging devices; Radar and Light Detection And Ranging (LIDAR) devices; parabolic mirror telescope imagers; surgical endoscopic cameras; computed tomography (CT) scanning devices; satellite imaging devices; sonar devices; and multi spectral sensor fusion systems may also be modelled in this manner.
- the ideal camera model has three components, as shown in Figure 4, namely:
- Focal point O is the location in space where all images collapse to a single point; in front of the focal point O is the camera image plane (not shown).
- Lens axis Z c crosses the image plane at two (2) right angles (i.e. is square therewith), defining the image center location (C x , C Y ).
- the camera external model proves accurate throughout the literature. It defines two coordinate sets: 1- World (Xw Yw Zw) with origin set at (0,0,0); and 2- Camera (Xc Yc Zc) with origin at the focal point O.
- the external camera model expresses the rotations and translations (Tx Ty Tz) needed to align the camera coordinate set (Xc Yc Zc) with the world set of coordinates (Xw Yw Zw), and bring the focal point O to the world origin (0,0,0).
- the external camera model therefore has six (6) degrees of freedom, namely the three rotation angles and the translations (Tx Ty Tz).
- Parameter a is the horizontal image scale, perfectly aligned with the camera pixel grid array horizontal axis
- the vertical scale is set to b, different from a;
- the scale and orientation of the vertical axis of the image plane is tilted by skew parameter s relative to the axis Y c , where s is a scale measure of skew relative to the image scale.
- skew parameter s is a scale measure of skew relative to the image scale.
- the widespread tilted axis assumption however introduces a perspective bias, shifting all the other camera parameters, and should be replaced by a full 3D perspective model of the image plane that retains the camera image plane geometry. It is therefore proposed to introduce a new set of variables for the internal camera model, as shown in Figure 6 in which a model is represented in camera coordinates (starting from the focal point O).
- the image center (Cx, CY) remains the intersection between the lens axis Zc and the camera (i.e. image) plane.
- Two scale independent simultaneous perspectives of an outside 3D world object (a point P thereof being located somewhere in the world at given coordinates relative to the axes (X c Y c Z c )) are considered.
- the first of these is the true scale plane at unit distance from the focal point: it represents the perfect 1:1 true scale projection of the 3D object on a plane having infinite dimensions in x and y.
- point P = [X Y Z 1]ᵀ in 3D world coordinates (X Y Z given with respect to world coordinates (Xw Yw Zw), X' Y' Z' with respect to the camera coordinate system (Xc Yc Zc)) projects on the true scale plane as:

$$\begin{bmatrix} x & y & 1 \end{bmatrix}^T \cong \begin{bmatrix} X'/Z' & Y'/Z' & 1 \end{bmatrix}^T$$
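A minimal sketch of this external model step, assuming the rotation matrix R and translation T have already been composed from the six external degrees of freedom (the names are assumptions):

```python
import numpy as np

def external_model(P_world, R, T):
    """World point -> camera coordinates -> 1:1 projection on the true scale plane."""
    Xp, Yp, Zp = R @ np.asarray(P_world, dtype=float) + T  # (X', Y', Z') in camera coords
    return np.array([Xp / Zp, Yp / Zp, 1.0])               # pinhole projection at unit distance
```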
- the second perspective is the image plane itself, i.e. the output of the lens system.
- the image plane is represented in Figure 6 by two axes intersecting at (Cx, CY). Since the camera plane at the focal distance f is off square with the lens axis Zc, it needs five (5) parameters.
- two rotation angles α and β with respect to both x and y axes are used to account for the tilting of the camera plane.
- the x axis of the image plane is rotated by angle α while the y axis of the image plane is rotated by angle β, such that the image plane is tilted by angles α and β with respect to axes x and y, with the x and y axes of the image plane taken parallel to Xc and Yc at origin O initially, i.e. before any rotation.
- the axes x and y are illustratively taken parallel to X c and Y c at origin O and reproduced on the image plane before any rotation is applied to tilt the image plane.
- tilting of the image plane can be expressed by two (2) 3D rotations in space, namely a rotation about axis Xc by angle α and a second rotation about axis Yc by angle β.
- the x axis of the image plane being arbitrarily selected as aligned with the horizontal camera plane direction, there is therefore no need for a z axis rotation angle.
- in some embodiments, however, a z axis rotation angle may be desirable.
- the three remaining degrees of freedom for the camera internal model are then the focal distance f (or camera image scale) and the coordinates of the image center (Cx, CY).
- the top left 2x2 matrix partition in equation (3) represents the image plane x and y axes with skew parameter s, horizontal scale a, and vertical scale b.
- the image plane x axis is aligned with the horizontal direction of the camera plane pixel array grid (not shown), accounting for the 0 value in position (2,1) of the K matrix.
- the image plane y axis is tilted by s in the x direction as illustrated in Figure 5.
- the last column represents the image center location (C x , C Y ).
- the error in the tilted axis assumption of Figure 5 is visible in the lower left 1x2 matrix partition.
- the two (2) terms of the lower left 1x2 partition should not be zero when the lens axis is off square with the camera plane. When they are non-zero, these terms apply a perspective correction to the x and y scales in the image plane as one moves away from the image center.
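Gathering these statements, equation (3) appears to have the standard intrinsic matrix form reconstructed below (the exact notation is an assumption):

$$K = \begin{bmatrix} a & s & C_x \\ 0 & b & C_y \\ 0 & 0 & 1 \end{bmatrix}$$

The top left 2×2 partition holds the scales a, b and skew s, the last column holds the image center, and the lower left 1×2 partition is fixed at zero, which is exactly where the perspective correction terms are lost.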
- the internal camera model is defined as a perspective transformation with five (5) degrees of freedom that relates the outside camera model projection in true 1:1 scale to the image plane projection at focal distance f on a common line of sight Zc, and where the image plane is tilted by angles α and β with respect to axes x and y on the image plane, the x and y axes taken parallel to Xc and Yc at origin O before any rotation.
- Figure 6 shows (Cx, C Y ) at the line of sight Z c .
- Z c is taken as the origin for all planes intersecting with the line of sight.
- (Cx, CY) = (0, 0)
- a shift of origin is applied to offset the image plane centre from (0, 0) to (C x , C Y ).
- the last operation in equation (4) is the rescaling of the fourth (4th) coordinate to unity.
- Pf defines the projection operation where element (4,3) is 1/f, f is the focal distance, R(y, β) is a β rotation matrix with respect to the y axis, and R(x, α) is an α rotation matrix with respect to the x axis.
- the α rotation in equation (5) is computed rightmost, so the β rotation is performed relative to an image plane y axis rotated by angle α. It should be understood that the β rotation could instead be handled rightmost in equation (5), meaning that the α rotation would be performed relative to a β-rotated x axis. Homogeneous equations read from right to left, and reversing the order of multiplication yields different mathematical formulations; several models are possible.
- equation (8) is again the rescaling of the fourth (4th) coordinate to unity.
- the tilted image plane coordinate (x″, y″) is a homographic transformation of the (x, y) coordinate on the true scale plane.
- the image plane has five (5) degrees of freedom: plane tilting angles α and β, image center (Cx, CY) and focal distance f, giving the internal model.
- lens distortion occurs between the two planes and has to be accounted for in the model.
- calibration is finding a 3D correspondence between pairs of coordinates projected in the two planes, compensating for lens distortion.
- the lens distortion model can be reduced to a purely radial function, both geometric and chromatic.
- many lens geometric distortion models have been published. Some authors claim 1/20 pixel accuracy in removing geometric lens distortion. Overall, the basic criterion is more or less the same: lines that are straight in real life should appear straight in the image once geometric distortion is removed. Very few authors consider chromatic distortion in their lens model.
- $x' = x + x\,(k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1 (r^2 + 2x^2) + 2\,p_2\,x y \qquad (14)$
- (x′, y′) represents the new location of point (x, y), computed with respect to image center (Cx, CY)
- k₁, k₂, and k₃ are three terms of radial distortion
- p₁ and p₂ are two terms of decentering distortion.
- Calibration retrieves numerical values for parameters k₁, k₂, k₃, p₁, and p₂.
- Image analysis gives (x' y').
- the undistorted (x, y) position is found by solving the two equations using a 2D search algorithm.
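The text does not commit to a particular search algorithm. As a minimal sketch, the forward model of equation (14), together with its y′ counterpart (symmetric to (14) and assumed here), can be inverted by fixed-point iteration; all names are assumptions:

```python
import numpy as np

def distort(p, k1, k2, k3, p1, p2):
    """Forward model of equation (14): undistorted (x, y) -> distorted (x', y')."""
    x, y = p
    r2 = x * x + y * y
    radial = k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x + x * radial + p1 * (r2 + 2 * x * x) + 2 * p2 * x * y
    yd = y + y * radial + p2 * (r2 + 2 * y * y) + 2 * p1 * x * y
    return np.array([xd, yd])

def undistort(pd, k1, k2, k3, p1, p2, iters=20):
    """Recover the undistorted point by fixed-point iteration (one possible 2D search)."""
    pd = np.asarray(pd, dtype=float)
    p = pd.copy()                                  # start from the distorted point
    for _ in range(iters):
        displacement = distort(p, k1, k2, k3, p1, p2) - p
        p = pd - displacement                      # at convergence, p + D(p) = pd
    return p
```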
- referring to Figure 8a, there is illustrated a method 100 for computing, using the proposed camera model, the location of an image point, as per the above.
- a 3D point in space goes through three transformations to give an image point located on the image plane.
- once step 106 is performed, the location (x″, y″) of the camera image point corresponding to a 3D point (X, Y, Z) captured by the camera is obtained.
- step 102 illustratively computes the proposed external camera model transformation and comprises receiving at step 108 the coordinates (X, Y, Z) of a 3D point P expressed with respect to the world coordinate system (Xw Yw Zw).
- the external model image point is then output at step 110.
- applying the lens distortion model at step 104 illustratively comprises receiving the external model image point (x, y) at step 112.
- step 114 illustratively comprises computing r, r′, and the distorted image point (x′, y′).
- parameters k₁ and k₂ may be expanded. Indeed, as discussed above, in its simplest form, geometric distortion can be modelled as a fully radial displacement.
- the new distorted distance r′ knowing r is given by: $r' = r + k_1 r^3 + k_2 r^5$
- the distorted image point (x′, y′) can be computed knowing the polar angle θ, or using similar triangle properties: $x' = x\,r'/r$, $y' = y\,r'/r$.
- the distorted image point (x′, y′) is then output at step 116.
- obtaining the internal camera model at step 106 illustratively comprises receiving the distorted image point (x′, y′) at step 118. From the distorted image point (x′, y′) and from the internal camera model's five degrees of freedom, namely α and β (image plane tilt angles), the focal distance f, and the image center coordinates (Cx, CY), the tilted image plane point (x″, y″) is computed at step 120.
- lens distortion can be modeled with respect to the image plane scale.
- an imaginary intermediate plane of projection has to be added to the model, located at f along Zc, with (0, 0) center, and perfectly square with lens axis Zc.
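Putting the three transformations of method 100 together, the pipeline might be sketched as follows (a sketch under the assumptions already noted; all names are hypothetical, and the radial model is truncated to k₁ and k₂):

```python
import numpy as np

def rot_x(a):
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

def rot_y(b):
    return np.array([[ np.cos(b), 0, np.sin(b)],
                     [ 0,         1, 0],
                     [-np.sin(b), 0, np.cos(b)]])

def project(P_world, R, T, f, alpha, beta, Cx, Cy, k1, k2):
    # Step 102: external model -- world point to the true scale plane (z = 1)
    Pc = R @ np.asarray(P_world, dtype=float) + T   # camera coordinates (X', Y', Z')
    x, y = Pc[0] / Pc[2], Pc[1] / Pc[2]

    # Step 104: purely radial lens distortion, r' = r + k1*r^3 + k2*r^5
    r = np.hypot(x, y)
    scale = 1.0 + k1 * r**2 + k2 * r**4             # r'/r, by similar triangles
    xd, yd = x * scale, y * scale

    # Step 106: internal model -- homographic projection onto the tilted image plane
    v = rot_y(beta) @ rot_x(alpha) @ np.array([xd, yd, 1.0])
    return f * v[0] / v[2] + Cx, f * v[1] / v[2] + Cy
```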
- the system 200 comprises one or more server(s) 202 accessible via the network 204.
- a series of servers corresponding to a web server, an application server, and a database server may be used. These servers are all represented by server 202.
- the server 202 may be accessed by a user using one of a plurality of devices 206 adapted to communicate over the network 204.
- the devices 206 may comprise any device, such as a personal computer, a tablet computer, a personal digital assistant, a smart phone, or the like, which is configured to communicate over the network 204, such as the Internet, the Public Switched Telephone Network (PSTN), a cellular network, or others known to those skilled in the art.
- the server 202 may also be integrated with the devices 206, either as a downloaded software application, a firmware application, or a combination thereof. It should also be understood that several devices as in 206 may access the server 202 at once.
- Imaging data may be acquired by an imaging device 207 used for calibration and image correction.
- the device 207 may be separate from (as illustrated) the devices 206 or integral therewith.
- the imaging data may comprise one or more images of a real world 3D object (not shown), such as a calibration target as will be discussed further below.
- the imaging data may then be processed at the server 202 to obtain a model of the imaging device 207 in the manner described above with reference to Figure 8a, Figure 8b, Figure 8c, and Figure 8d.
- the imaging data is illustratively acquired in real-time (e.g. at a rate of 30 images per second) for an object, such as a moving object whose movement in space is being monitored.
- the server 202 may then process the imaging data to determine an image point associated with each point of each acquired image.
- the imaging data may be processed to determine an image point associated with each one of one or more points of interest in the image.
- the server 202 may comprise, amongst other things, a processor 208 coupled to a memory 210 and having a plurality of applications 212a ... 212n running thereon. It should be understood that while the applications 212a ... 212n presented herein are illustrated and described as separate entities, they may be combined or separated in a variety of ways.
- One or more databases 214 may be integrated directly into the memory 210 or may be provided separately therefrom and remotely from the server 202 (as illustrated). In the case of a remote access to the databases 214, access may occur via any type of network 204, as indicated above.
- the various databases 214 described herein may be provided as collections of data or information organized for rapid search and retrieval by a computer.
- the databases 214 may be structured to facilitate storage, retrieval, modification, and deletion of data in conjunction with various data-processing operations.
- the databases 214 may consist of a file or sets of files that can be broken down into records, each of which consists of one or more fields. Database information may be retrieved through queries using keywords and sorting commands, in order to rapidly search, rearrange, group, and select the fields.
- the databases 214 may be any organization of data on a data storage medium, such as one or more servers.
- the databases 214 are secure web servers, Hypertext Transfer Protocol Secure (HTTPS) capable and supporting Transport Layer Security (TLS), which is a protocol used for access to the data.
- Communications to and from the secure web servers may be secured using Secure Sockets Layer (SSL).
- Identity verification of a user may be performed using usernames and passwords for all users.
- Various levels of access rights may be provided to multiple levels of users.
- any known communication protocols that enable devices within a computer network to exchange information may be used, including, but not limited to, the Internet Protocol (IP), User Datagram Protocol (UDP), Transmission Control Protocol (TCP), Dynamic Host Configuration Protocol (DHCP), Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), Telnet Remote Protocol, and Secure Shell Remote Protocol (SSH).
- the memory 210 accessible by the processor 208 may receive and store data.
- the memory 210 may be a main memory, such as a high speed Random Access Memory (RAM), or an auxiliary storage unit, such as a hard disk, flash memory, or a magnetic tape drive.
- the memory 210 may be any other type of memory, such as a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), or optical storage media such as a videodisc and a compact disc.
- the processor 208 may access the memory 210 to retrieve data.
- the processor 208 may be any device that can perform operations on data. Examples are a central processing unit (CPU), a front-end processor, a microprocessor, and a network processor.
- the applications 212a ... 212n are coupled to the processor 208 and configured to perform various tasks as explained below in more detail.
- An output may be transmitted to the devices 206.
- Figure 9b is an exemplary embodiment of an application 212a running on the processor 208.
- the application 212a may comprise a receiving module 302 for receiving the imaging data from the imaging device 207 and obtaining therefrom coordinates of a point of a real 3D world object as captured by the imaging device 207, an external model projection module 304 enabling the method illustrated and described in reference to Figure 8b, a lens distortion compensation module 306 enabling the method illustrated and described in reference to Figure 8c, an internal model projection module 308 enabling the method illustrated and described in reference to Figure 8d, and an output module 310 for outputting coordinates of a camera image point, as computed by the internal model projection module 308.
- the proposed experimental setup is intended to be field usable, even with low resolution short-wavelength infrared (SWIR) imagers.
- a Levenberg-Marquardt search algorithm may be used to compute the model parameters. It should be understood that algorithms other than the Levenberg-Marquardt algorithm may apply; for instance, the steepest descent or Newton algorithms may be used. The accuracy improvements achieved with the proposed technique allowed the use of a least-squares sum-of-errors criterion without bias.
- the error is defined as the target position predicted in the image from the model and the 3D data set, minus the corresponding real image measurement in 2D.
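A hedged sketch of that criterion with a Levenberg-Marquardt solver (scipy's `least_squares` with `method='lm'`); the parameter packing and the helpers `project`, `rot_x`, and `rot_y` from the pipeline sketch above are assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

def rot_z(c):
    return np.array([[np.cos(c), -np.sin(c), 0],
                     [np.sin(c),  np.cos(c), 0],
                     [0,          0,         1]])

def residuals(params, pts3d, pts2d):
    # 13 parameters: 3 rotations, 3 translations, f, alpha, beta, Cx, Cy, k1, k2
    rx, ry, rz, tx, ty, tz, f, alpha, beta, Cx, Cy, k1, k2 = params
    R = rot_z(rz) @ rot_y(ry) @ rot_x(rx)           # external model rotation
    T = np.array([tx, ty, tz])
    err = []
    for P, m in zip(pts3d, pts2d):
        u, w = project(P, R, T, f, alpha, beta, Cx, Cy, k1, k2)
        err.extend([u - m[0], w - m[1]])            # predicted minus measured, in 2D
    return np.asarray(err)

# fit = least_squares(residuals, x0, args=(pts3d, pts2d), method='lm')
```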
- the calibration target uses 1" diameter circles at 2" center-to-center spacing. Using circles ensures that no corner should be detected even with a highly pixelized image; see Figure 12.
- step 1 recovers an initial estimate for the edge points, adding compensation for edge orientation bias.
- in step 2, the initial ellipse fit is used to estimate local curvature and correct the edge location.
- the leftmost camera parameter set is obtained from the most accurate model published, tested on our own experimental data.
- the rightmost set was computed from the proposed model, where the lens model was taken as a purely radial geometric distortion, and where the internal camera model used the proposed implementation.
- the first six (6) lines of the above table are the external camera parameters, the three (3) angles and three (3) positions needed to compute [R3×3 T3×1].
- the next five (5) lines are the internal camera parameters; we modified our parameter representation to fit the generally used model from Figure 5.
- Our degrees of freedom use a different mathematical formulation.
- the remaining two (2) lines show the major lens geometric distortion parameters k₁ and k₂. These two are present in most models and account for most of the fish-eye geometric distortion.
- Figure 13 shows a stereo pair typically used for measuring objects in 3D, using two (2) simultaneous camera images. A full discussion on triangulation is given in [5].
- O and O′ are the optical centers for the two cameras (not shown), and both lens axes project at right angles on the image planes at the image centers, respectively (Cx, CY, f) and (Cx′, CY′, f′) (not shown for clarity), where (Cx, CY) is the origin of the image plane and f the distance between O and the image plane, as shown in Figure 4. Similarly, (Cx′, CY′) is the origin of the second image plane, and f′ the distance between O′ and that image plane.
- Both cameras are seeing a common point M on the object (not shown). M projects in both camera images as points m and m'.
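The triangulation itself is detailed in [5]; one common midpoint formulation, shown here as an assumed sketch given calibrated rays through m and m′, recovers M as the midpoint of the shortest segment between the two rays:

```python
import numpy as np

def triangulate_midpoint(O, d, Op, dp):
    """Midpoint between rays O + s*d and Op + t*dp (d, dp unit directions)."""
    w = O - Op
    a, b, c = d @ d, d @ dp, dp @ dp
    e, g = d @ w, dp @ w
    denom = a * c - b * b            # zero only for parallel rays
    s = (b * g - c * e) / denom      # closest-point parameter on the first ray
    t = (a * g - b * e) / denom      # closest-point parameter on the second ray
    return 0.5 * ((O + s * d) + (Op + t * dp))
```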
- the first four (4) requirements for 3D telemetric accuracy are found through camera calibration, the fifth from sub pixel image feature extraction. The last is the triangulation 3D recovery itself.
- the first four (4) error dependencies described above, namely the optical centers O and O′, the focal distances f and f′, the image centers (Cx, CY) and (Cx′, CY′), and the lens axis orientations Zc and Zc′, are subject to the discovered camera model bias discussed above.
- feature point extraction (m and m′) is subject to the edge orientation bias and corner detection bias that had to be dealt with at calibration.
- bias sources include the following:
- JPEG image filtering at sub pixel level (variable with the JPEG quality parameter)
- every lens parameter is 'polluted' by the internal camera model bias referred to as the tilted axis assumption.
- the bias can be removed by changing the tilted assumption for an accurate perspective model of the 3D internal camera image plane.
- Table 1 also shows that the lens distortion parameters are under-evaluated, with the minus sign on k₁ meaning barrel distortion.
- range and aim measurements are also biased and related to the error percentage on focal distance f, since a camera gives a scaled measure. This also prevents the accurate modelling of the zooming lens camera.
- the focal point O moves along the lens axis Zc. From calibration, O is found by knowing the image center (Cx, CY), a distance f away at a right angle to the image plane.
- the proposed example shows a systematic bias in those parameters. It gets even worse when considering run out in the lens mechanism since it moves the lens axis Z c . Without the proposed modification to the camera model, it then becomes impossible to model a zooming lens.
- Modeling of the zooming lens camera requires plotting the displacement of focal point O in space.
- the only way to evaluate the mechanical quality of the zooming lens therefore depends on the accurate knowledge of the image center (Cx, CY) and f.
- the zooming lens trade-off is zooming in to gain added accuracy when needed, at the cost of losing accuracy to assembly tolerances in the lens mechanism.
- Figure 14 shows how lens distortion is removed from the image. Chromatic distortion is not visible on a black and white image.
- chromatic distortion target displacement is shown amplified by fifty (50) times.
- Target positions are shown for the Red Green and Blue (RGB) camera colour channels, and are grouped by clusters of three (3).
- the 'x' or cross sign marker symbol indicates the target extraction in Blue
- the '+' or plus sign marker symbol indicates the target extraction in Red
- the dot or point marker symbol indicates the target extraction in Green.
- the visible spectrum spread pushes the Red target centres outwards, and the Blue target centers inwards with respect to Green.
- the graph of Figure 15 shows a mostly radial behaviour.
- the imaginary lines joining Red Green and Blue centers for any given target location tend to line up and aim towards the image center indicated by the circled plus sign marker symbol close to the (500, 400) pixel coordinate.
- Bayer Pattern colour cameras give a single colour signal for each given pixel, Red, Green, or Blue, as indicated by an R, G, or B prefix in the pixel number of Figure 18. Missing colour information is interpolated using neighbouring pixel information.
- the missing G13 value is computed as:
- in step two, we compute missing B and R values using the known G for edge sensing, assuming edges in B and R are geometrically found at the same image plane locations as G edges.
- Bayer pattern recovery requires adapting to compensate for 'colour shifting' edge location as we scan from B to G to R pixels.
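As an illustration of edge-sensing interpolation for a missing green value (the indexing and threshold logic are assumptions for a generic Bayer mosaic, not the patent's exact equation):

```python
def interp_green(img, row, col):
    """Edge-sensing estimate of the missing G value at an R or B pixel."""
    up, down = float(img[row - 1, col]), float(img[row + 1, col])
    left, right = float(img[row, col - 1]), float(img[row, col + 1])
    dh, dv = abs(left - right), abs(up - down)
    if dh < dv:                       # smoother horizontally: average left/right
        return (left + right) / 2.0
    if dv < dh:                       # smoother vertically: average up/down
        return (up + down) / 2.0
    return (up + down + left + right) / 4.0
```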
- a software approach creates an open integration architecture
- the computer generated image has ideal perspective and known focal length. Since a computer generated image is perfectly pinhole, created from a set value of f, it stands to reason to correct the camera image for distortion and fit it to the same scale as the synthetic image.
- any lens system will exhibit distortion at some level.
- the earth's atmosphere also adds distortion which can only be compensated for when the lens distortion is accurately known.
- under-compensated geometric distortion will build up curvature, and biased perspective as caused by the tilted axis assumption will create a shape alteration: loss of squareness, loss of verticality...
- the proposed approach is desirable for zooming lens telemetry, increases speed and accuracy in wide angle lens application, and allows system miniaturization in two ways. Firstly by providing added accuracy from smaller lens systems, and secondly, filtering through software allows for simpler optics. It provides the best trade-off for accuracy, speed, cost, bulk, weight, maintenance and upgradeability.
- the tilted axis assumption creates a major bias and has to be replaced by a perspective model of the image plane that retains the camera image plane 3D geometry: horizontal and vertical image scales are equal and at right angle.
- the tilted axis assumption introduces a calibration bias showing on 3D triangulation since the image center is out of position.
- the two (2) pixel image center bias dominates every other error in the triangulation process since image features can be extracted to 1/4 pixel accuracy.
- Sub pixel bias sources include, but are not restricted to:
- the perspective model for the internal camera image plane is needed to locate the displacement of the lens focal point in a zooming lens.
- a software correction approach increases speed and accuracy in wide angle lens application, and allows system miniaturization in two ways. Firstly by providing added accuracy from smaller lens systems, and secondly, filtering through software allows for simpler optics.
- Software model/calibration is the only technique for improving camera performance beyond hardware limitations.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Biomedical Technology (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Image Processing (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Studio Devices (AREA)
- Optical Radar Systems And Details Thereof (AREA)
- Geometry (AREA)
- Measurement Of Optical Distance (AREA)
- Endoscopes (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
- Magnetic Resonance Imaging Apparatus (AREA)
Priority Applications (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
BR112015033020A BR112015033020A2 (en) | 2013-07-02 | 2014-07-02 | SYSTEM AND METHOD FOR MODELING AND CALIBRATION OF IMAGE FORMATION DEVICE |
US14/898,016 US9792684B2 (en) | 2013-07-02 | 2014-07-02 | System and method for imaging device modelling and calibration |
EP14820593.3A EP3017599A4 (en) | 2013-07-02 | 2014-07-02 | System and method for imaging device modelling and calibration |
KR1020167003009A KR20160030228A (en) | 2013-07-02 | 2014-07-02 | System and method for imaging device modelling and calibration |
CN201480038248.5A CN105379264B (en) | 2013-07-02 | 2014-07-02 | The system and method with calibrating are modeled for imaging device |
JP2016522146A JP2016531281A (en) | 2013-07-02 | 2014-07-02 | System and method for modeling and calibration of imaging apparatus |
RU2016103197A RU2677562C2 (en) | 2013-07-02 | 2014-07-02 | System and method for modeling and calibrating imaging device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA2819956 | 2013-07-02 | ||
CA2819956A CA2819956C (en) | 2013-07-02 | 2013-07-02 | High accuracy camera modelling and calibration method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015000056A1 true WO2015000056A1 (en) | 2015-01-08 |
Family
ID=52142954
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CA2014/000534 WO2015000056A1 (en) | 2013-07-02 | 2014-07-02 | System and method for imaging device modelling and calibration |
Country Status (9)
Country | Link |
---|---|
US (1) | US9792684B2 (en) |
EP (1) | EP3017599A4 (en) |
JP (2) | JP2016531281A (en) |
KR (1) | KR20160030228A (en) |
CN (1) | CN105379264B (en) |
BR (1) | BR112015033020A2 (en) |
CA (1) | CA2819956C (en) |
RU (1) | RU2677562C2 (en) |
WO (1) | WO2015000056A1 (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106643669A (en) * | 2016-11-22 | 2017-05-10 | 北京空间机电研究所 | Single-center projection transformation method of multi-lens and multi-detector aerial camera |
EP3228568A1 (en) * | 2016-04-08 | 2017-10-11 | Otis Elevator Company | Method and system for multiple 3d sensor calibration |
DE102016217792A1 (en) | 2016-09-16 | 2018-03-22 | Xion Gmbh | alignment system |
CN108169722A (en) * | 2017-11-30 | 2018-06-15 | 河南大学 | A kind of unknown disturbances influence the system deviation method for registering of lower sensor |
CN109901142A (en) * | 2019-02-28 | 2019-06-18 | 东软睿驰汽车技术(沈阳)有限公司 | A kind of scaling method and device |
Families Citing this family (56)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9196039B2 (en) * | 2014-04-01 | 2015-11-24 | Gopro, Inc. | Image sensor read window adjustment for multi-camera array tolerance |
CN104469167B (en) * | 2014-12-26 | 2017-10-13 | 小米科技有限责任公司 | Atomatic focusing method and device |
CN105678748B (en) * | 2015-12-30 | 2019-01-15 | 清华大学 | Interactive calibration method and device in three-dimension monitoring system based on three-dimensionalreconstruction |
DE102016002186A1 (en) * | 2016-02-24 | 2017-08-24 | Testo SE & Co. KGaA | Method and image processing device for determining a geometric measured variable of an object |
EP3217355A1 (en) | 2016-03-07 | 2017-09-13 | Lateral Reality Kft. | Methods and computer program products for calibrating stereo imaging systems by using a planar mirror |
US10922559B2 (en) * | 2016-03-25 | 2021-02-16 | Bendix Commercial Vehicle Systems Llc | Automatic surround view homography matrix adjustment, and system and method for calibration thereof |
EP3236286B1 (en) * | 2016-04-18 | 2023-01-25 | Otis Elevator Company | Auto commissioning system and method |
JP7092382B2 (en) * | 2017-01-06 | 2022-06-28 | フォトニケア,インコーポレイテッド | Self-oriented imaging device and how to use it |
KR101905403B1 (en) | 2017-02-15 | 2018-10-08 | 동명대학교산학협력단 | multi-scale curvatures based perceptual vector data hashing techinique for vector content authentication |
JP7002007B2 (en) * | 2017-05-01 | 2022-01-20 | パナソニックIpマネジメント株式会社 | Camera parameter set calculation device, camera parameter set calculation method and program |
US10777018B2 (en) * | 2017-05-17 | 2020-09-15 | Bespoke, Inc. | Systems and methods for determining the scale of human anatomy from images |
JP7051845B2 (en) | 2017-06-15 | 2022-04-11 | 富士フイルム株式会社 | How to operate a medical image processing device, an endoscope system, and a medical image processing device |
CN107895347A (en) * | 2017-07-20 | 2018-04-10 | 吉林大学 | A kind of vision self-adapting adjusting display device and method |
SG11201907126SA (en) * | 2017-08-25 | 2019-09-27 | Maker Trading Pte Ltd | A general monocular machine vision system and method for identifying locations of target elements |
CN107632407B (en) * | 2017-11-08 | 2020-02-04 | 凌云光技术集团有限责任公司 | Calibrating device of cylindrical lens imaging system |
CN108038888B (en) * | 2017-12-19 | 2020-11-27 | 清华大学 | Space calibration method and device of hybrid camera system |
KR102066393B1 (en) * | 2018-02-08 | 2020-01-15 | 망고슬래브 주식회사 | System, method and computer readable recording medium for taking a phtography to paper and sharing to server |
US11061132B2 (en) | 2018-05-21 | 2021-07-13 | Johnson Controls Technology Company | Building radar-camera surveillance system |
JP2020008434A (en) * | 2018-07-09 | 2020-01-16 | オムロン株式会社 | Three-dimensional measuring device and method |
CN109167992A (en) * | 2018-08-08 | 2019-01-08 | 珠海格力电器股份有限公司 | Image processing method and device |
CN109143207B (en) | 2018-09-06 | 2020-11-10 | 百度在线网络技术(北京)有限公司 | Laser radar internal reference precision verification method, device, equipment and medium |
CN111047643B (en) * | 2018-10-12 | 2023-06-27 | 深圳富联富桂精密工业有限公司 | Monocular distance measuring device |
CN109612384B (en) * | 2018-11-01 | 2020-11-06 | 南京理工大学 | Tilting aberration correction compensation method based on spectrum sub-pixel translation |
CN109506589B (en) * | 2018-12-25 | 2020-07-28 | 东南大学苏州医疗器械研究院 | Three-dimensional profile measuring method based on structural light field imaging |
CN109949367B (en) * | 2019-03-11 | 2023-01-20 | 中山大学 | Visible light imaging positioning method based on circular projection |
CN111696047B (en) * | 2019-03-14 | 2023-08-22 | 四川中测辐射科技有限公司 | Imaging quality determining method and system of medical imaging equipment |
CN111913169B (en) * | 2019-05-10 | 2023-08-22 | 北京四维图新科技股份有限公司 | Laser radar internal reference and point cloud data correction method, device and storage medium |
CN110322519B (en) * | 2019-07-18 | 2023-03-31 | 天津大学 | Calibration device and calibration method for combined calibration of laser radar and camera |
CN110596720A (en) * | 2019-08-19 | 2019-12-20 | 深圳奥锐达科技有限公司 | Distance measuring system |
KR102715161B1 (en) * | 2019-11-28 | 2024-10-08 | 삼성전자주식회사 | Method and device for restoring image |
CN111462245B (en) * | 2020-01-09 | 2023-05-26 | 华中科技大学 | Zoom camera posture calibration method and system based on rectangular structure |
TWI709780B (en) * | 2020-01-21 | 2020-11-11 | 台灣骨王生技股份有限公司 | Active imaging correction device and method for infrared lens |
CN111355894B (en) * | 2020-04-14 | 2021-09-03 | 长春理工大学 | Novel self-calibration laser scanning projection system |
CN111507902B (en) * | 2020-04-15 | 2023-09-26 | 京东城市(北京)数字科技有限公司 | High-resolution image acquisition method and device |
CN113554710A (en) * | 2020-04-24 | 2021-10-26 | 西门子(深圳)磁共振有限公司 | Calibration method, system and storage medium of 3D camera in medical image system |
CN111627072B (en) * | 2020-04-30 | 2023-10-24 | 贝壳技术有限公司 | Method, device and storage medium for calibrating multiple sensors |
CN111514476B (en) * | 2020-04-30 | 2022-03-15 | 江苏瑞尔医疗科技有限公司 | Calibration method for X-ray image guidance system |
JP2023113980A (en) * | 2020-07-13 | 2023-08-17 | パナソニックIpマネジメント株式会社 | Ellipse detection method, camera calibration method, ellipse detection device, and program |
CN112050752B (en) * | 2020-09-02 | 2022-03-18 | 苏州东方克洛托光电技术有限公司 | Projector calibration method based on secondary projection |
CN111986197A (en) * | 2020-09-09 | 2020-11-24 | 福州大学 | Partial reference sonar image application quality evaluation method based on contour statistical characteristics |
CN112230204A (en) * | 2020-10-27 | 2021-01-15 | 深兰人工智能(深圳)有限公司 | Combined calibration method and device for laser radar and camera |
CN112634152B (en) * | 2020-12-16 | 2024-06-18 | 中科海微(北京)科技有限公司 | Face sample data enhancement method and system based on image depth information |
CN112883000B (en) * | 2021-03-17 | 2022-04-15 | 中国有色金属长沙勘察设计研究院有限公司 | Deformation monitoring radar data file storage method |
CN113177989B (en) * | 2021-05-07 | 2024-07-19 | 深圳云甲科技有限公司 | Intraoral scanner calibration method and device |
CN113284189B (en) * | 2021-05-12 | 2024-07-19 | 深圳市格灵精睿视觉有限公司 | Distortion parameter calibration method, device, equipment and storage medium |
CN113538565A (en) * | 2021-06-28 | 2021-10-22 | 深圳市拓普瑞思科技有限公司 | Accurate visual positioning control method based on an industrial camera |
CN113487594B (en) * | 2021-07-22 | 2023-12-01 | 上海嘉奥信息科技发展有限公司 | Sub-pixel corner detection method, system and medium based on deep learning |
CN113706607B (en) * | 2021-08-18 | 2023-10-20 | 广东江粉高科技产业园有限公司 | Subpixel positioning method, computer equipment and device based on circular array diagram |
TWI789012B (en) * | 2021-09-14 | 2023-01-01 | 明俐科技有限公司 | Method and device of calibrating real-time image through dithering process |
US11927757B1 (en) * | 2021-10-29 | 2024-03-12 | Apple Inc. | Electronic device display having distortion compensation |
KR102701104B1 (en) * | 2021-11-03 | 2024-08-29 | 경북대학교 산학협력단 | Apparatus and method for generating 3D DSM using multi-view and multi-time satellite images |
CN114167663B (en) * | 2021-12-02 | 2023-04-11 | 浙江大学 | Coded aperture optical imaging system containing vignetting removal algorithm |
CN114170314B (en) * | 2021-12-07 | 2023-05-26 | 群滨智造科技(苏州)有限公司 | 3D glasses process trajectory execution method based on intelligent 3D vision processing |
CN115272471B (en) * | 2022-08-30 | 2023-07-28 | 杭州微影软件有限公司 | Method, device and equipment for determining optical center position |
CN116033733B (en) * | 2022-08-30 | 2023-10-20 | 荣耀终端有限公司 | Display device and assembly method thereof |
CN116839499B (en) * | 2022-11-03 | 2024-04-30 | 上海点莘技术有限公司 | Large-visual-field micro-size 2D and 3D measurement calibration method |
Family Cites Families (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000227547A (en) * | 1999-02-05 | 2000-08-15 | Fuji Photo Film Co Ltd | Photographic lens and camera using the same |
US6437823B1 (en) | 1999-04-30 | 2002-08-20 | Microsoft Corporation | Method and system for calibrating digital cameras |
JP4501239B2 (en) | 2000-07-13 | 2010-07-14 | ソニー株式会社 | Camera calibration apparatus and method, and storage medium |
RU2199150C2 (en) * | 2001-02-02 | 2003-02-20 | Курский государственный технический университет | Optoelectronic system calibration device |
KR100386090B1 (en) | 2001-04-02 | 2003-06-02 | 한국과학기술원 | Camera calibration system and method using planar concentric circles |
US6995762B1 (en) | 2001-09-13 | 2006-02-07 | Symbol Technologies, Inc. | Measurement of dimensions of solid objects from two-dimensional image(s) |
JP3624288B2 (en) | 2001-09-17 | 2005-03-02 | 株式会社日立製作所 | Store management system |
US7068303B2 (en) | 2002-06-03 | 2006-06-27 | Microsoft Corporation | System and method for calibrating a camera with one-dimensional objects |
JP4147059B2 (en) | 2002-07-03 | 2008-09-10 | 株式会社トプコン | Calibration data measuring device, measuring method and measuring program, computer-readable recording medium, and image data processing device |
KR100576874B1 (en) * | 2004-10-25 | 2006-05-10 | 삼성전기주식회사 | Optical System Using Diffractive Optical Element |
US8082120B2 (en) | 2005-03-11 | 2011-12-20 | Creaform Inc. | Hand-held self-referenced apparatus for three-dimensional scanning |
CA2600926C (en) | 2005-03-11 | 2009-06-09 | Creaform Inc. | Auto-referenced system and apparatus for three-dimensional scanning |
DE102007045525A1 (en) * | 2007-09-24 | 2009-04-02 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | image sensor |
CN101419705B (en) | 2007-10-24 | 2011-01-05 | 华为终端有限公司 | Video camera calibration method and device |
KR100966592B1 (en) | 2007-12-17 | 2010-06-29 | 한국전자통신연구원 | Method for calibrating a camera with homography of imaged parallelogram |
JP4751939B2 (en) * | 2009-03-31 | 2011-08-17 | アイシン精機株式会社 | Car camera calibration system |
US8223230B2 (en) * | 2009-05-08 | 2012-07-17 | Qualcomm Incorporated | Systems, methods, and apparatus for camera tuning and systems, methods, and apparatus for reference pattern generation |
CN101727671B (en) | 2009-12-01 | 2012-05-23 | 湖南大学 | Single camera calibration method based on three collinear road-surface points and a parallel line thereof |
JP4763847B1 (en) * | 2010-08-30 | 2011-08-31 | 楽天株式会社 | Image conversion apparatus, image processing apparatus, and image processing system |
CN102466857B (en) * | 2010-11-19 | 2014-03-26 | 鸿富锦精密工业(深圳)有限公司 | Imaging lens |
US8711275B2 (en) * | 2011-05-31 | 2014-04-29 | Apple Inc. | Estimating optical characteristics of a camera component using sharpness sweep data |
2013
- 2013-07-02 CA CA2819956A patent/CA2819956C/en active Active

2014
- 2014-07-02 KR KR1020167003009A patent/KR20160030228A/en not_active Application Discontinuation
- 2014-07-02 CN CN201480038248.5A patent/CN105379264B/en not_active Expired - Fee Related
- 2014-07-02 RU RU2016103197A patent/RU2677562C2/en not_active IP Right Cessation
- 2014-07-02 BR BR112015033020A patent/BR112015033020A2/en not_active IP Right Cessation
- 2014-07-02 US US14/898,016 patent/US9792684B2/en active Active
- 2014-07-02 JP JP2016522146A patent/JP2016531281A/en active Pending
- 2014-07-02 WO PCT/CA2014/000534 patent/WO2015000056A1/en active Application Filing
- 2014-07-02 EP EP14820593.3A patent/EP3017599A4/en not_active Withdrawn

2019
- 2019-04-04 JP JP2019072152A patent/JP6722323B2/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080031514A1 (en) * | 2004-11-24 | 2008-02-07 | Aisin Seiki Kabushiki Kaisha | Camera Calibration Method And Camera Calibration Device |
US20120268579A1 (en) * | 2009-03-31 | 2012-10-25 | Intuitive Surgical Operations, Inc. | Targets, fixtures, and workflows for calibrating an endoscopic camera |
US20100283856A1 (en) * | 2009-05-05 | 2010-11-11 | Kapsch Trafficcom Ag | Method For Calibrating The Image Of A Camera |
WO2013015699A1 (en) * | 2011-07-25 | 2013-01-31 | Universidade De Coimbra | Method and apparatus for automatic camera calibration using one or more images of a checkerboard pattern |
Non-Patent Citations (3)
Title |
---|
Ahouandinou et al.: "An Approach to Correcting Image Distortion by Self Calibration Stereoscopic Scene from Multiple Views", 2012 Eighth International Conference on Signal Image Technology and Internet Based Systems, 25 November 2012 (2012-11-25), pages 389-394, XP032348540 *
Melo et al.: "A New Solution for Camera Calibration and Real-Time Image Distortion Correction in Medical Endoscopy - Initial Technical Evaluation", IEEE Transactions on Biomedical Engineering, vol. 59, no. 3, 1 March 2012 (2012-03-01), pages 634-644, XP011489985 *
See also references of EP3017599A4 * |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3228568A1 (en) * | 2016-04-08 | 2017-10-11 | Otis Elevator Company | Method and system for multiple 3d sensor calibration |
US10371512B2 (en) | 2016-04-08 | 2019-08-06 | Otis Elevator Company | Method and system for multiple 3D sensor calibration |
DE102016217792A1 (en) | 2016-09-16 | 2018-03-22 | Xion Gmbh | alignment system |
CN108307178A (en) * | 2016-09-16 | 2018-07-20 | 艾克松有限责任公司 | Calibration system |
US11115643B2 (en) | 2016-09-16 | 2021-09-07 | Xion Gmbh | Alignment system |
CN106643669A (en) * | 2016-11-22 | 2017-05-10 | 北京空间机电研究所 | Single-center projection transformation method of multi-lens and multi-detector aerial camera |
CN108169722A (en) * | 2017-11-30 | 2018-06-15 | 河南大学 | System deviation registration method for a sensor under the influence of unknown disturbances |
CN109901142A (en) * | 2019-02-28 | 2019-06-18 | 东软睿驰汽车技术(沈阳)有限公司 | Calibration method and device |
Also Published As
Publication number | Publication date |
---|---|
RU2016103197A3 (en) | 2018-05-17 |
EP3017599A4 (en) | 2017-11-22 |
RU2677562C2 (en) | 2019-01-17 |
CN105379264A (en) | 2016-03-02 |
JP6722323B2 (en) | 2020-07-15 |
CN105379264B (en) | 2017-12-26 |
US9792684B2 (en) | 2017-10-17 |
JP2016531281A (en) | 2016-10-06 |
JP2019149809A (en) | 2019-09-05 |
CA2819956A1 (en) | 2015-01-02 |
US20160140713A1 (en) | 2016-05-19 |
RU2016103197A (en) | 2017-08-07 |
KR20160030228A (en) | 2016-03-16 |
BR112015033020A2 (en) | 2017-10-03 |
EP3017599A1 (en) | 2016-05-11 |
CA2819956C (en) | 2022-07-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9792684B2 (en) | System and method for imaging device modelling and calibration | |
US10958892B2 (en) | System and methods for calibration of an array camera | |
CN113256730B (en) | System and method for dynamic calibration of an array camera | |
Birklbauer et al. | Panorama light‐field imaging | |
US6847392B1 (en) | Three-dimensional structure estimation apparatus | |
CN106157304A (en) | Panoramic image stitching method and system based on multiple cameras | |
Henrique Brito et al. | Radial distortion self-calibration | |
JP2001346226A (en) | Image processor, stereoscopic photograph print system, image processing method, stereoscopic photograph print method, and medium recorded with processing program | |
KR20180053669A (en) | Light field data representation | |
WO2022100668A1 (en) | Temperature measurement method, apparatus, and system, storage medium, and program product | |
CN112489137A (en) | RGBD camera calibration method and system | |
Nozick | Multiple view image rectification | |
KR100513789B1 (en) | Method of Lens Distortion Correction and Orthoimage Reconstruction In Digital Camera and A Digital Camera Using Thereof | |
DK3189493T3 (en) | Perspective correction of digital photos using depth map |
CN110322514B (en) | Light field camera parameter estimation method based on multi-center projection model | |
CN111292380B (en) | Image processing method and device | |
US6697573B1 (en) | Hybrid stereoscopic motion picture camera with film and digital sensor | |
Vupparaboina et al. | Euclidean auto calibration of camera networks: baseline constraint removes scale ambiguity | |
Müller-Rowold et al. | Hyperspectral panoramic imaging | |
Raghavachari et al. | Efficient use of bandwidth by image compression for vision-based robotic navigation and control | |
CN111080689B (en) | Method and device for determining face depth map | |
JP6103767B2 (en) | Image processing apparatus, method, and program | |
Sturm et al. | On calibration, structure-from-motion and multi-view geometry for panoramic camera models | |
Angst et al. | Radial Distortion Self-Calibration | |
Rova | Affine multi-view modelling for close range object measurement |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 14820593; Country of ref document: EP; Kind code of ref document: A1 |
| WWE | Wipo information: entry into national phase | Ref document number: 14898016; Country of ref document: US |
| ENP | Entry into the national phase | Ref document number: 2016522146; Country of ref document: JP; Kind code of ref document: A |
| NENP | Non-entry into the national phase | Ref country code: DE |
| ENP | Entry into the national phase | Ref document number: 20167003009; Country of ref document: KR; Kind code of ref document: A; Ref document number: 2016103197; Country of ref document: RU; Kind code of ref document: A |
| WWE | Wipo information: entry into national phase | Ref document number: 2014820593; Country of ref document: EP |
| REG | Reference to national code | Ref country code: BR; Ref legal event code: B01A; Ref document number: 112015033020; Country of ref document: BR |
| ENP | Entry into the national phase | Ref document number: 112015033020; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20151230 |
Ref document number: 112015033020 Country of ref document: BR Kind code of ref document: A2 Effective date: 20151230 |