WO2019050417A1 - Stereoscopic system calibration and method - Google Patents
- Publication number
- WO2019050417A1 (PCT/NZ2018/050121)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- calibration
- images
- error
- model
- camera
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/243—Image signal generators using stereoscopic image cameras using three or more 2D image sensors
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C25/00—Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/85—Stereo camera calibration
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/246—Calibration of cameras
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G01C11/04—Interpretation of pictures
- G01C11/06—Interpretation of pictures by comparison of two or more pictures of the same area
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30204—Marker
- G06T2207/30208—Marker matrix
Definitions
- the present invention relates to a calibration system and/or method.
- In particular, it relates to the calibration of imaging systems and provides an improved method of calibrating a system using a calibration template.
- Three-dimensional (3D) computer vision systems have many applications in robotics, shape reconstruction, quality control, and 3D measurements in experimental mechanics.
- the majority of 3D computer vision systems use an image acquisition device such as one or more cameras, or a stereoscopic camera, to capture images from various viewing angles to generate 3D models of objects.
- References throughout this document to "camera" or "cameras" include any device or devices that can acquire an image, i.e. any image acquisition device.
- the simultaneous calibration of multiple cameras is an important task in three-dimensional (3D) computer vision systems.
- the accuracy of stereoscopic systems in performing 3D measurements is often dependent upon the accuracy of camera calibration.
- the camera calibration process is challenging, particularly when using more than two cameras.
- calibration involves identifying the camera's intrinsic parameters and the lens distortion parameters.
- the intrinsic camera parameters relate an object to its image in the camera image plane, and the lens distortion parameters characterise the distortion effects of the lens.
- Multiple camera systems include a further step to identify extrinsic camera parameters that specify the 3D positions of each camera in the world-coordinate system. The extrinsic parameters of cameras are required for estimating the 3D position of the object points from two-dimensional (2D) camera images.
- the most common way of finding the extrinsic parameters of multi-camera systems is using a two-step method.
- the camera's intrinsic parameters and the initial estimates of extrinsic parameters are identified, followed by an optimisation process to minimise a defined objective function to refine the extrinsic parameters.
- a widely used objective function is the summation of the reprojection errors of all the calibration images of all the cameras (the reprojection error of each calibration image is the sum of squared distances between the measured control points and the control points projected using the current estimates of the intrinsic and extrinsic parameters of the cameras). The camera parameters are therefore refined by minimising the reprojection error in a nonlinear least-squares optimisation. This approach is typically slow, or ineffective for some problems. The use of stereo-pairs was attempted but found inadequate to address these limitations. For instance, Zhang describes a two-step process in which the initial values for the unknown parameters of the camera and the lens are found in the first step, and a nonlinear optimisation process refines the parameters in the second step.
- Zhang uses the reprojection error in the objective function of the optimisation process, since it relates the known coordinates of 3D control points to the unknown parameters of the camera and lens distortion coefficients.
- Zhang uses the knowledge of the configuration of the calibration target (the square size, and the number of rows and columns of the checkerboard template).
- Radial and tangential lens distortion coefficients can also be incorporated into the reprojection error to map the distorted pixel locations to the undistorted locations.
- the reprojected 3D points (M) can be a function of the camera intrinsic parameters, the camera 3D pose (R and T), and the lens distortion coefficients.
- the values of the parameters can be optimised by minimising the reprojection error in the optimisation process:

  min Σ_i Σ_j || m_ij - m̂(K, k, R_i, T_i, M_j) ||²

  where m_ij is measured control point j in image i and m̂ is its reprojection using the intrinsic matrix K, the distortion coefficients k, and the pose (R_i, T_i).
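This minimisation can be sketched in Python with NumPy. The function names and the zero-distortion pinhole projection below are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def project(K, R, T, pts3d):
    """Project Nx3 world points to Nx2 pixel coordinates with a
    zero-distortion pinhole model: x_cam = R x + T, then apply K."""
    cam = pts3d @ R.T + T            # world -> camera coordinates
    uv = cam[:, :2] / cam[:, 2:3]    # perspective divide
    return uv @ K[:2, :2].T + K[:2, 2]

def reprojection_error(K, R, T, pts3d, measured):
    """Sum of squared distances between measured control points and
    their reprojections under the current parameter estimates."""
    return float(np.sum((project(K, R, T, pts3d) - measured) ** 2))
```

In a full calibration this error would be summed over all images i, with one pose (R_i, T_i) per image, and minimised over K, the distortion coefficients, and the poses.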
- the method of Zhang has been extended to multiple cameras by adding the reprojection error in the images of each camera to the optimisation process, and selecting a world coordinate system and a common origin for 3D points, R, and T vectors of all the cameras.
- the R and T vectors of each camera locate the 3D position of a camera in the world coordinate system (i.e. relate the camera coordinate system to the world coordinate system).
- the process of finding the R and T vectors for each camera is called extrinsic camera calibration.
- the optimisation process is extended to minimise the summation of the reprojection errors of all cameras to find the refined parameters of each camera.
- the parameters should be optimised for all the cameras:

  min Σ_c Σ_i Σ_j || m_cij - m̂(K_c, k_c, R_ci, T_ci, M_j) ||²

  where c indexes the cameras, i the calibration images, and j the control points.
- Generally, methods that use calibration targets (template-based methods) are the primary means of calibrating multi-camera systems, and checkerboard templates are the most commonly used calibration targets.
- localising the control points of checkerboard templates (i.e. their corner locations) is a key step in template-based calibration.
- while template-based methods are more reliable in controlled environments than self-calibration methods, they have some limitations. For instance, the corners of the checkerboard that are used as control points in traditional methods cannot be localised accurately in many applications. Also, perspective distortions in the calibration images and imperfections of the calibration target decrease the accuracy of localising the control points.
- Datta et al. (A. Datta, J.-S. Kim, and T. Kanade, "Accurate camera calibration using iterative refinement of control points," 2009 IEEE 12th Int. Conf. Comput. Vis. Workshops (ICCV Workshops), pp. 1201-1208, Sep. 2009) used parameters obtained from the traditional camera calibration method to undistort and unproject calibration images to canonical fronto-parallel images (i.e. images that are parallel to the image plane of the camera). The fronto-parallel images were then used to localise control points, which were projected back to recompute the camera parameters, iteratively. However, this increases processing time, and unprojecting the calibration images introduces an ambiguity in the image scale.
- Douxchamps et al. (D. Douxchamps and K. Chihara, "High-accuracy and robust localisation of large control markers for geometric camera calibration," IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 2, pp. 376-383, 2009) used ray tracing to build a synthetic image of the calibration template at the estimated location of the calibration target in calibration images.
- the ray-traced model of the calibration template and the image of the calibration target were matched using an optimisation process to maximise the match between their bright and dark areas, which correspond to high and low intensities, respectively.
- the invention may broadly be said to consist in a calibration method for a system comprising one or more image acquisition devices, the method comprising the steps of:
- an error matrix comprising reprojection error values and/or the discrepancy between acquired images and those reconstructed from a mathematical model of the calibration template; wherein the reprojection error values comprise a reprojection error for the calibration objects in each of the plurality of images and the error matrix comprises the reprojection error values.
- the introduction of an error matrix for the reprojection values allows an improvement in the robustness and efficiency of the system. This is enabled because the error matrix representation can provide separate entries for the reprojection error values of each of a number of calibration objects in each of the plurality of images if required, and makes these values available for each image acquisition device (e.g. camera).
- the system separately addresses each component or parameter (images/calibration objects/cameras) and can easily adjust to different cameras in the system. It can also allow a user to distinguish which parameters have influenced the error.
- the image acquisition device comprises a camera.
- the image acquisition device operates at visual or optical frequencies.
- the error matrix comprises length error values.
- the length error values represent length errors in comparison to an expected length between calibration objects.
- the length error values measure a 3D distance between adjacent calibration objects.
- the error matrix comprises shape error values.
- the shape error values represent a difference between the 3D reconstructed shape and an ideal template shape.
- the shape error values measure a Euclidean distance between calibration objects and model or template calibration objects.
- any one or more of the types of error values are scaled when included in the error matrix. Scaling allows a fair comparison between the different error measurements.
- the method comprises the step of scaling the shape error values and/or the length error values between metres (length) and pixels. This allows the method to effectively combine them into the optimisation process.
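The length and shape error values and the metres-to-pixels scaling might be sketched as follows. The function names, the centroid alignment for the shape comparison, and the single `metres_per_pixel` scale factor are assumptions for illustration, not the patent's exact formulation:

```python
import numpy as np

def length_errors(recon, spacing):
    """Deviation of the 3D distance between adjacent reconstructed
    points from the known template spacing (metres)."""
    dists = np.linalg.norm(np.diff(recon, axis=0), axis=1)
    return dists - spacing

def shape_errors(recon, template):
    """Euclidean distance of each reconstructed point from the ideal
    template point, after aligning the two centroids (metres)."""
    r = recon - recon.mean(axis=0)
    t = template - template.mean(axis=0)
    return np.linalg.norm(r - t, axis=1)

def to_pixels(err_metres, metres_per_pixel):
    """Scale metric error values into pixel units so they can be
    combined with reprojection errors in one error matrix."""
    return err_metres / metres_per_pixel
```

A perfectly reconstructed template yields zero length and shape errors; any residual, once scaled into pixels, can sit alongside the reprojection entries in the error matrix.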
- the calibration objects are calibration points or control points in a calibration target. Control points are specific points in a calibration target used to optimise the camera parameters by minimising their reprojection errors.
- the plurality of images are of the calibration target.
- the calibration points are formed by concentric circles or crossing lines, such as on a checkerboard.
- the calibration objects are target image features or a selection thereof.
- the calibration objects comprise or form feature sizes with a range of spatial frequencies.
- the calibration method comprises the step of:
- a calibration target has a plurality of known parameters.
- the known parameters are used to obtain length and/or shape error values.
- the calibration method comprises the step of minimising the error matrix.
- the matrix can be a matrix of objective function values rather than a single objective function value.
- the step minimising the error matrix optimises the parameters of the cameras.
- the optimisation process attempts to minimise the overall error in the error matrix by adjusting any one or more of the parameters in the system. Typically this is performed by finding the arguments of the minima (argmin), i.e. the point(s) at which the matrix value is minimised.
- the step of optimising the error matrix comprises a trust-region method.
- the method comprises the trust-region-reflective algorithm.
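SciPy's `least_squares` with `method='trf'` implements a trust-region-reflective style algorithm and, notably, accepts a vector of residuals rather than a single scalar, which matches the error-matrix idea. A minimal sketch on a toy residual vector (the line-fit problem and all names are illustrative, not the patent's objective):

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-in for the flattened error matrix: residuals of a line fit.
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + 0.5

def residuals(p):
    a, b = p
    return a * x + b - y  # one entry per observation, not a scalar sum

# Bounds make the algorithm "reflective"; 'trf' is the trust-region solver.
fit = least_squares(residuals, x0=[0.0, 0.0], method="trf",
                    bounds=([-10.0, -10.0], [10.0, 10.0]))
```

In the calibration setting, `residuals` would return the flattened error matrix and `p` would hold the intrinsic, extrinsic, and distortion parameters of all cameras.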
- the initial values comprise previously used values; values calculated by another method; known, measured, or selected values for equipment; and/or randomised values.
- the initial values comprise intrinsic initial values and extrinsic initial values.
- Intrinsic initial values comprise camera or lens specific values.
- Extrinsic values comprise system, or multi-camera arrangement (3D pose) values.
- forming parameter groups; and scaling model parameters with respect to the largest parameter of each group.
- the forming of parameter groups comprises the step of grouping the parameters into groups of the same physical concept.
- the parameters include any one or more of: camera focal length; principal point; lens distortion coefficients; rotation vector and translation vector.
- the calibration method comprises the steps of:
- the step of detecting outliers comprises comparison of an error value to an average error value. In an embodiment the step of detecting outliers comprises comparison of the difference between the error value and an average error value to a threshold level. In an embodiment the threshold was set at a factor of 10.
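One plausible reading of the outlier test above is a comparison of each error value against a multiple of the average; the sketch below assumes that reading, and the helper name is hypothetical:

```python
import numpy as np

def detect_outliers(errors, factor=10.0):
    """Flag error values that exceed `factor` times the average error.
    The default factor of 10 follows the embodiment described above."""
    errors = np.asarray(errors, dtype=float)
    return errors > factor * errors.mean()
```

Flagged images (or cameras) can then be excluded before the estimates are refined.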
- the calibration method comprises extending the error matrix to include further image acquisition devices.
- the invention may broadly be said to consist in a calibration method for a system comprising one or more image acquisition devices, the method comprising the step of:
- inventions of at least the first aspect may be included with the second aspect.
- the error matrix is obtained based on current estimates of the parameters.
- the invention may broadly be said to consist in a calibration system for a multi-camera system, the calibration system comprising:
- an input means adapted to receive images from at least two image acquisition devices (e.g. cameras) in the multi-camera system;
- control means adapted to calibrate the system based on the received images, the control means configured to obtain an error matrix of the system
- an output means to output images from the multi-camera system.
- the calibration system is configured to use the method of the first and/or second aspects.
- the invention may broadly be said to consist in a method for configuring an additional camera to a calibrated multi-camera system; the method comprising the steps of:
- the invention may broadly be said to consist in a calibration method for a system comprising at least one image acquisition device the method comprising the steps of:
- removing the lens distortion from the series of images using a lens distortion model of the at least one image acquisition device, to obtain a measured model of the target; and updating the lens distortion model dependent on a comparison between the measured model of the target and a control model of the target.
- This process takes advantage of having a substantially accurate control model.
- This method can be used with any calibration method, including that described in the above aspects. As part of a calibration method this method allows calculation of more accurate control objects, allowing for instance more accurate estimation of parameters.
- the comparison is a comparison of a fixed property of the target.
- the comparison comprises a comparison of discrepancies between the measured model and the control model.
- the control model is synthetically generated.
- the comparison is a comparison of spatial features.
- the comparison uses an image registration algorithm.
- a sub-pixel image registration algorithm is used.
- the P-SG-GC algorithm is used.
- the method comprises the step of selecting a subimage size based on any one or more of the image characteristics, including: resolution, texture, and noise.
- the comparison comprises a projective transformation between the measured model and the control model.
- the orientation of the control model in the image is estimated by the orientation of the measured model.
- comparison between the measured model of the target and the control model of the target comprises a comparison of the control objects in each model.
- control objects comprise control points such as line cross points, edges, concentric circles or checkerboards.
- control objects comprise calibration target features.
- target features could have a range of spatial frequencies to provide a good comparison at multiple scales.
- control objects of the measured model and the control objects of the control model are mapped to one another.
- the series of images include images of the target throughout the field of view.
- the lens distortion model comprises a radial and tangential distortion model.
- the lens distortion model comprises a Taylor series expansion.
- the lens distortion model comprises Zernike polynomials.
- the lens distortion model has inputs comprising any one or more of: coordinates of control points and/or image discrepancies.
- the inputs are normalised within the unit circle.
- the lens distortion model maps the shifts between distorted and undistorted images.
- the shifts are mapped in the x and y directions.
- Some polynomials including Zernike polynomials are suitable for fitting to symmetric shapes, similar to lens distortions or aberrations. Therefore, using these polynomials to map the distorted locations to undistorted locations (instead of to the target location movement) can provide advantages.
- the invention may broadly be said to consist in a method for characterising lens distortion, the method comprising the steps of:
- the method may use any one or more of the above embodiments.
- the disclosed subject matter also provides a multi-camera system and a method for calibration which may broadly be said to consist in the parts, elements and features referred to or indicated in this specification, individually or collectively, in any or all combinations of two or more of those parts, elements or features. Where specific integers are mentioned in this specification which have known equivalents in the art to which the invention relates, such known equivalents are deemed to be incorporated in the specification.
- Fig. 1 shows a flowchart of embodiments of the calibration method, showing (a) calibration of an imaging system and (b) refinement of a lens distortion model.
- Fig. 2 is a diagram of a multi-camera system which may be calibrated.
- Fig. 3 is a diagram of a checkerboard type calibration target.
- Fig. 4 is a diagram of the checkerboard type calibration target of Fig. 3 where control points have been identified.
- Fig. 5 shows the change in different error measurements with iterations of the method of Fig. 1a and a known method.
- Fig. 6 shows the difference in a fitted line between a distorted and undistorted image.
- Fig. 7 shows a concentric circle type calibration target.
- Fig. 8 shows a calibration target in (a) the measured model and (b) the control model.
- the invention is applicable to a wide variety of different fields in which camera calibration is necessary or desirable. These include, without limitation: Agriculture (fruit and produce inline sorting for example); Surgery; Security; Hi-Tech (for example automotive, sailing, etc.); and Large mechanical structures (deformation of cranes, bridges, wind turbines, etc).
- the proposed calibration method addresses the limitations of the traditional methods of calibrating multi-camera (at least two image acquisition devices) systems by altering the optimisation procedure and introducing a new objective function. This can provide a number of advantages including the ability to: optimise the intrinsic and extrinsic parameters of all the cameras in a single optimisation; estimate the parameters from imprecise initial values; calibrate many cameras simultaneously, and use a large set of calibration images.
- the calibration methods can be used as the final step of any calibration method to refine the camera parameters.
- image acquisition devices include, without limitation, cameras, microscopes, x-ray devices or other imaging means.
- the imaging device may operate at any suitable wavelength(s).
- the calibration method introduces a trust-region-reflective optimisation algorithm with an error matrix. Further to this at least two 3D error functions can be introduced into the error matrix, which then forms the objective function of the optimisation process.
- the objective function is the function that calculates the error matrix.
- the objective function is what an optimisation algorithm tries to minimise. This optimisation process is able to simultaneously calibrate all the cameras of a stereo system using many calibration images, which can improve the accuracy of camera parameter estimations compared to methods that can only calibrate stereo-pairs or can use only a few images.
- Figure 1 a shows a flow chart of the overall method.
- Initial calibration values 10 are found or chosen for the intrinsic and extrinsic parameters.
- a plurality of images of a calibration target are taken 11 using a multi-camera system 42, for example.
- a single image acquisition device may acquire images from a plurality of different locations and/or dispositions.
- the method then identifies a number of calibration objects in each of the plurality of images, so as to be able to appropriately compare the images.
- the images obtained by each of the multiple cameras can now be compared to form an error matrix 18 in an objective function.
- the error matrix 18 is used to relate the reprojection error 15 (the distance between the coordinates of the reprojected 3D points (M) and the measured corresponding points in the camera image (m)).
- the reprojected 3D points (3D points that are reprojected to the camera image plane using the camera matrix) are based on reprojected control points of the model.
- the error matrix is appended with further spatial 16 or 3D shape error 17 values.
- the various intrinsic and extrinsic values can now be optimised by optimisation of this error matrix 19.
- This step preferably uses a trust-region algorithm to find the argmin of the error matrix across the parameters.
- alternative embodiments may use different forms of algorithm, or may minimise to a nonzero value or maximise, without departing from the method.
- the process can then be repeated using the newly calibrated intrinsic and extrinsic parameters.
- Figure 1b shows a second embodiment of the system which calculates the lens distortion component of the reprojection values.
- a model based technique is used to calculate and/or update the lens distortion model or camera parameters of a multi-camera system 42.
- This model takes an initial lens distortion model 30 and uses this to remove lens distortion effects from a plurality of images obtained 32 from a camera or a multi-camera system 42.
- the calibration target in the undistorted image 33 is now compared 34 to the model of the calibration target 31.
- the calibration target model 31 is ideally a perfect replication of the calibration target and may be based on a 3D drawing template, a ray-traced model, or other means.
- the comparison 34 between the achieved image (measured, 33) and the ideal (control, 31) can be used to adjust the lens distortion model 35 and to optimise the camera parameters 36.
- This comparison 34 can be completed in a number of ways, such as finding the discrepancies between the calibration target in the image and the calibration target model. However it is preferable that it operates on a fixed characteristic of the target. Fixed characteristics include spatial characteristics such as length or 3D shape.
- the comparison uses subpixel image registration to find localised shifts between the calibration image and the model of the calibration target. Particular embodiments of sub-pixel image registration are described in NZ720269 (WO2017200395), included herein by reference.
- the multiple camera system 42 comprises at least two cameras 40 arranged above a field of view. Each of the plurality of cameras 40 is then able to image the object (e.g. hand 43), typically in a number of different positions.
- An example set-up is shown in Figure 2 where four cameras 40 are shown at the corners of a field of view, however the method is not limited to this arrangement, or to four cameras.
- a system comprises four monochrome USB 3 cameras (Point Grey FL3-U3-13Y3M-C), equipped with 6 mm focal length lenses (DF6HA-1B from Fujinon). The image size of these cameras was 1280 pixel × 1024 pixel.
- the FOV of this stereoscopic system was approximately 200 mm × 200 mm with an average distance of 200 mm to the cameras.
- the cameras can be focused in the FOV using a focusing pattern.
- 100 calibration images could be taken to cover the whole FOV at various distances and angles to the four cameras of our setup.
- Figure 3 shows a calibration template or target 50 used to calibrate the system of Figure 2.
- the target 50 is not limited to a checkerboard pattern as shown.
- Some examples of the calibration targets 50 are templates that comprise circular control points, two orthogonal 1D objects, or four collinear 1D markers.
- the template or target 50 comprises a plurality of calibration objects, such as square vertices 52.
- the calibration targets 50 may include objects with an array of patterns that have a range of spatial frequencies.
- the checkerboard template 50 was printed and attached to a 3 mm thick acrylic sheet using an adhesive spray, resulting in a flat 2D template.
- the checkerboard square 51 size and number of squares 51 were selected based on the size of FOV, and the average distance of the FOV to the cameras.
- a checkerboard template of size 9 × 12 (i.e. 8 × 11 inner corners) with a square size of 6 mm was chosen as the calibration template.
- Figure 7 shows an alternative calibration object 50 (also referred to as a template or target) consisting of groups of concentric circles 53 positioned on a 3 × 4 grid.
- the characteristics of the template including the calibration template size, the radius of circles, and the distance between the concentric circles are generally selected based on the size of the FOV, and the average distance of the FOV to the cameras.
- an accurate image of the template 50 or an accurate reconstruction of the template is available. For instance an SVG (scalable vector graphics) image, or another image type produced by a 3D drawing program, may be used.
- This calibration target offers a number of calibration objects 53 (such as circle centres).
- Figure 4 shows an identification of a plurality of calibration points 54 on the calibration target 50.
- calibration points 54 are identified by the intersection points of the checkerboard.
- it may be advantageous to choose calibration points or objects 51, 53 with varying spatial frequency between them.
- the pinhole camera model is a commonly used simple mathematical representation of a camera without a lens and with a very small aperture opening.
- the pinhole model is useful to solve the camera equations with geometric optics.
- the relation between the physical 3D position of a point ([x, y, z]) and its corresponding pixel position ([u, v]) in the camera image plane is found using:

  s [u, v, 1]^T = K [R | T] [x, y, z, 1]^T

  where K is the camera intrinsic matrix, R is the rotation matrix, T is the translation vector, and s is a scale factor.
- r_nm are the elements of the rotation matrix and t_n are the elements of the translation vector.
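The homogeneous pinhole relation above can be sketched directly in NumPy (the function name is illustrative; the symbols match the definitions above):

```python
import numpy as np

def pinhole(K, R, T, X):
    """Map a 3D point X = [x, y, z] to pixel [u, v] via
    s [u, v, 1]^T = K [R | T] [x, y, z, 1]^T."""
    P = K @ np.hstack([R, T.reshape(3, 1)])  # 3x4 projection matrix
    uvw = P @ np.append(X, 1.0)              # homogeneous pixel coords
    return uvw[:2] / uvw[2]                  # divide out the scale s
```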
- Lens distortions are mathematically defined as displacements between the observed pixel positions of the image features and their calculated positions [u, v]. Radial and tangential distortions are the two most common mathematical models for lens distortions. Radial distortion is corrected in camera images using:

  x_u = x_d (1 + k_1 r² + k_2 r⁴ + k_3 r⁶)
  y_u = y_d (1 + k_1 r² + k_2 r⁴ + k_3 r⁶)

  where (x_d, y_d) are distorted locations, (x_u, y_u) are undistorted locations, k_i are the distortion coefficients, and r is the distance of the distorted locations from the principal point.
- the radial distortion model is in the form a Taylor series expansion around the principal point (or the image centre), and is symmetric about the centre.
- Tangential distortions are asymmetric about the centre, and the corrected positions of points depend on both of the current distorted x and y positions (x_d and y_d).
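A sketch of the combined radial and tangential correction on normalised coordinates. Conventions differ on the direction of the mapping (distorted to undistorted or the reverse), so this follows the corrected-position form above, with assumed coefficient names k1..k3 (radial) and p1, p2 (tangential):

```python
import numpy as np

def correct_distortion(xd, yd, k1=0.0, k2=0.0, k3=0.0, p1=0.0, p2=0.0):
    """Apply radial + tangential correction to distorted normalised
    coordinates (xd, yd), returning corrected locations (xu, yu)."""
    r2 = xd * xd + yd * yd                       # squared radius
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xu = xd * radial + 2.0 * p1 * xd * yd + p2 * (r2 + 2.0 * xd * xd)
    yu = yd * radial + p1 * (r2 + 2.0 * yd * yd) + 2.0 * p2 * xd * yd
    return xu, yu
```

With all coefficients zero the mapping is the identity; the radial term is symmetric about the principal point while the tangential terms are not.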
- Figure 1a shows flow charts at different stages of the calibration method.
- the multiple camera calibration method is generally a two-step method.
- the initial values of the intrinsic and extrinsic parameters are found in the first step 10, and then used as the initial values in an optimisation process to find the optimised parameters in the second step 12.
- the first step of the method is to initialise calibration values 10.
- the initial camera and lens parameters can be found based on the method of Zhang. However, it may also be suitable to estimate values, use the values calculated by a prior calibration technique, use manufacturer values, or create values by some other method. In embodiments other methods for finding the initial values may be adopted. In one example the initial values for the extrinsic parameters of the cameras 40 (i.e. 3D positions of the cameras with respect to the target or field of view) were found using 3D pose estimation for a set of images taken at various positions of the checkerboard template. Although the described methods for finding the initial values are able to find relatively good initial values, in preferred embodiments the proposed method is able to estimate the parameters even from significantly inaccurate initial values.
- the initial values for the camera parameters and their 3D positions could be chosen at step 10 purely based on the lens and camera specifications and the physical configuration of the cameras.
- a preprocessing step was used on checkerboard images 50 to improve the corner detection for the intrinsic initial values.
- the preprocessing step converted the colour images to greyscale images, and applied a 2D median filter with a neighbourhood size of 3 pixel ⁇ 3 pixel.
- the median filter improves the performance of the algorithm for finding the checkerboard corners or vertices 54 by removing some of the camera sensor noise and sudden changes in the illumination.
- median filters shift the borders of the image (i.e. the corner locations of the checkerboard), so were only used in the first step for finding the initial locations of the corners.
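The preprocessing steps above can be sketched without OpenCV. This NumPy-only 3×3 median filter leaves a one-pixel border unfiltered, which is an assumption of this sketch; the embodiment may handle borders differently:

```python
import numpy as np

def median3x3(img):
    """3x3 median filter for a 2D greyscale image; removes salt-and-pepper
    style sensor noise before checkerboard corner detection."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    # Stack the nine shifted views of the interior, then take the median.
    windows = np.stack([img[i:h - 2 + i, j:w - 2 + j]
                        for i in range(3) for j in range(3)])
    out = img.copy()
    out[1:-1, 1:-1] = np.median(windows, axis=0)
    return out
```

As noted above, the filtered image is used only to find initial corner locations, since median filtering shifts edge positions.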
- the tangential distortion coefficients and the skew coefficient in the camera intrinsic matrix were assumed to be zero at this step to help the OpenCV optimisation process by reducing the number of parameters (the tangential distortion coefficients were later estimated in our proposed global optimisation process for estimating the parameters).
- the images of the same 3D position of the calibration template 50 differ across the cameras 40, since in each camera 40 the images are based on that camera's coordinate system.
- the perspective transformation (homography) between the corner locations 54 in the image and their corresponding metric locations in the ideal calibration template was used to estimate the 3D position of the checkerboard template 50 in each image.
- the coordinate system of one of the cameras 40 of the system was selected as the world coordinate system, and the coordinate systems of the other cameras were transformed to this common coordinate system.
- other coordinate systems are possible. Assuming that the 3D positions of the checkerboard template in the coordinate systems of both cameras 40 are known, and the world coordinate system is selected to be the coordinate system of camera 1, the 3D position of camera 2 in the world coordinate system could be found using: R_r = R_2 (R_1)^T and T_r = T_2 - R_2 (R_1)^T T_1, where:
- (R_1, T_1) and (R_2, T_2) are the vectors indicating the 3D position of the checkerboard template in the coordinate systems of camera 1 and camera 2, respectively, and (R_1)^T denotes the transposed matrix of R_1.
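The relative-pose relation can be sketched numerically. Assuming board poses (R_i, T_i) such that a board point X maps into camera i as X_ci = R_i X + T_i, a hypothetical `relative_pose` helper would be:

```python
import numpy as np

def relative_pose(R1, T1, R2, T2):
    """Relative pose of camera 2 with respect to camera 1 (the world frame),
    given the checkerboard pose (R_i, T_i) observed in each camera, i.e.
    X_ci = R_i @ X_board + T_i."""
    Rr = R2 @ R1.T
    Tr = T2 - Rr @ T1
    return Rr, Tr
```

Points expressed in camera 1's frame then map into camera 2's frame via X_c2 = R_r X_c1 + T_r.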
- the (R_r, T_r) values are estimated for each calibration image and will have variation across a set of images. However, a single set of values should be chosen for the 3D position of each camera 40. The selection of the average value of (R_r, T_r) for the camera 40 will introduce error, since some of the estimated values are outliers.
- the present system and method addresses this problem by proposing an optimisation process including using a matrix error 18.
- the error functions 15, 16, 17 of this method are not simply summed.
- instead, the error is calculated in the form of a matrix. This is, in part, because a summation fails to properly represent the error being detected. For instance, a summation assumes that a single (scalar) value can be used to represent the error. This overlooks the fact that the reprojection errors are not distributed uniformly over the field of view (FOV) and across the cameras: due to the lens distortion effects, the reprojection errors are higher close to the peripheries of the image, and can vary over the FOV and across the cameras of a multi-camera system.
- a form of the matrix 18 (or objective function) is chosen which includes the reprojection error values, RE(a, b), arranged with one row for each checkerboard corner of each calibration image (a = 1 ... L × N) and one column for each camera (b = 1 ... C), where:
- the reprojection error values are measured at each checkerboard corner 54 location, L is the number of checkerboard corners, N is the number of calibration images, and C is the number of cameras.
- This matrix error forms an objective function: by minimising the error measured at each location (through modification of the parameters K, R, T, k and p), the calibration of the multi-camera system can be calculated. This also provides a much clearer understanding of which calibration object, image, or camera a given RE(a, b) value relates to, and therefore which is causing a problem.
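As an illustrative sketch (the `reprojection_error_matrix` helper and the array layout are assumptions, not the patent's code), the (L × N) × C arrangement might be assembled as:

```python
import numpy as np

def reprojection_error_matrix(observed, projected):
    """observed, projected: arrays of shape (C, N, L, 2) holding the detected
    and model-projected corner locations for C cameras, N images, L corners.
    Returns the (L*N, C) matrix of per-corner reprojection distances."""
    err = np.linalg.norm(observed - projected, axis=-1)   # (C, N, L)
    return err.reshape(err.shape[0], -1).T                # (L*N, C)
```

Each entry of the result is the reprojection error at one corner of one image for one camera, preserving the spatial non-uniformity that a scalar sum would hide.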
- a number of methods are capable of measuring the reprojection error; in particular, a model-based method is described herein.
- in constructing the error matrix, preferably no assumption is made about the distribution of the errors.
- comprehensive error information is provided for the optimisation algorithm 19 at each calibration object location of each of a plurality of images across all the cameras 40.
- the error matrix was estimated in an optimisation process after undistorting the images.
- lens distortion parameters were taken into account. This is because the reprojection error, and therefore the error matrix, incorporates lens distortion error.
- Embodiments of the method also use a different algorithm for calculating the optimised values 19 for the matrix error equation.
- any one of the optimisation algorithms that can minimise an error matrix 18 may be used.
- trust-region algorithms have been used. This is because trust-region methods have very reliable convergence properties, particularly for solving problems with a sparse structure. Minimisation of the reprojection error for camera calibration is an example of a problem with a sparse structure.
- alternative optimisation methods such as gradient descent and steepest descent may also be used.
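The patent leaves the choice of solver open. As a hedged stand-in, a minimal Levenberg-Marquardt loop (a damped least-squares relative of trust-region methods) illustrates the accept/reject logic; the function names and the finite-difference Jacobian are illustrative assumptions:

```python
import numpy as np

def numeric_jacobian(f, x, eps=1e-6):
    """Forward-difference Jacobian of a residual function f at x."""
    f0 = f(x)
    J = np.zeros((f0.size, x.size))
    for t in range(x.size):
        dx = np.zeros_like(x); dx[t] = eps
        J[:, t] = (f(x + dx) - f0) / eps
    return J

def levenberg_marquardt(residuals, x0, iters=50, lam=1e-3):
    """Minimise sum(residuals(x)**2) with a damped Gauss-Newton step;
    the damping term lam plays a role similar to a trust-region radius."""
    x = x0.astype(float)
    for _ in range(iters):
        r = residuals(x)
        J = numeric_jacobian(residuals, x)
        step = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
        if np.sum(residuals(x + step) ** 2) < np.sum(r ** 2):
            x, lam = x + step, lam * 0.5   # step accepted: relax damping
        else:
            lam *= 10.0                    # step rejected: increase damping
    return x
```

A production system would more likely use a dedicated sparse trust-region solver, but the accept/reject structure is the same.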
- the error matrix above is improved by the addition of further components which reflect, or reveal other errors present in the system.
- two further error functions or values are introduced. These are based on the 3D information of the reconstructed calibration template. Introducing these to the objective function, in combination with the reprojection error, can help to increase the accuracy and robustness of the optimisation process of calibrating multiple cameras.
- a second error function used a measurement of 3D shape error 17. This measures variations between the actual shape of the calibration target and its image. These variations include rotations or skews or other variations across the calibration target. This may be stated as the difference between the 3D reconstructed template and the ideal template shape (i.e. 3D shape errors). In a particular embodiment based on a checkerboard target these errors are calculated by measuring the Euclidean distances between the triangulated corners and the expected ideal 3D locations of the checkerboard corners (which are estimated knowing the number of rows and columns of the checkerboard template 50 and its square size).
- the expected ideal corners might be uniformly scaled due to an uncertainty in finding the 3D location of the calibration template.
- 3D shape errors only illustrate the geometric variation of the triangulated corners from an ideal checkerboard template, without taking into account the correct square size.
- the correct square size can be accounted for by measuring 3D length errors.
- in embodiments the error matrix is therefore extended to size (L × N) × (C + 2): the C columns of reprojection errors are augmented with one column of 3D length errors and one column of 3D shape errors, where:
- RE is the reprojection error
- LE is the length error
- SE is the 3D shape error (note that LE and SE are 3D measurements, and are only defined once for a multi-camera system).
- it may be useful to add further error measurements to the matrix, such as angular error measurements of the target or errors at different spatial frequencies.
- the above matrix form is not the only matrix form 18 of the system.
- while the above form helps to separate the individual parts of the errors, it is possible to combine, or separate, parts of different errors to form a matrix having the same information in a different arrangement.
- One advantage of the proposed embodiment is that the arrangement has a relatively sparse structure, including the reprojection error values and the 3D length and 3D shape errors, which can improve the optimisation process.
- the 3D length 16 and 3D shape 17 errors are measured in meters, but the reprojection errors are measured in pixel units. Different units are preferably not directly combined in a single optimisation process, since they generally have different scales and physical meaning. To overcome this issue the different error measurement values should be converted to a common unit scale. Any unit scale should be suitable. In the present embodiment units of pixel were chosen. The average pixel size of our FOV was approximated in meters, and was used to convert the units of the 3D length and 3D shape error functions to pixel units. An alternative length measure could also be used. This enables all the error functions of the objective function to have the same unit of pixel, which enabled a consistent objective function. The errors associated with the lens distortion can be taken into account in the objective function by mapping the distorted checkerboard corners to undistorted locations, prior to estimating the error functions at each iteration 13 of the optimisation process.
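Combining the reprojection, length, and shape errors into a single pixel-unit matrix might look like the following sketch; the `objective_matrix` name and the metres-to-pixels conversion via an average pixel size are illustrative assumptions:

```python
import numpy as np

def objective_matrix(RE, LE_m, SE_m, pixel_size_m):
    """Assemble the (L*N) x (C+2) objective matrix: C columns of reprojection
    errors (already in pixels) plus one column each of 3D length and 3D shape
    errors, converted from metres to pixel units via the average pixel size."""
    LE_px = LE_m / pixel_size_m
    SE_px = SE_m / pixel_size_m
    return np.hstack([RE, LE_px[:, None], SE_px[:, None]])
```

After this conversion all columns share the unit of pixels, giving the consistent objective function described above.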
- corner 54 finding is error-prone, particularly when images are noisy, blurred, or have areas with specular reflections. As a result, the error measurements become inaccurate in such images.
- the quality of the optimisation process can be improved by removing the outliers caused by the failure of the corner finding algorithm.
- outliers are detected and removed from the defined error matrix (or objective function) based on a comparison with an average or expected value, such as the average error values in the whole dataset for each camera. In a particular embodiment the average values were calculated for the reprojection errors of each camera by averaging that camera's column of reprojection error values over the whole dataset.
- a threshold was used to test whether the measurement was an outlier.
- reprojection error values that were greater than 10 times the average reprojection error were detected as outliers, and were removed from the error matrix.
- This choice of threshold for detecting the outliers attempts to ensure that only outliers will be removed, not points that may have larger error due to lens distortions, such as the images with checkerboard corners at the peripheries.
- larger or smaller thresholds may be used as required.
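A sketch of the 10×-average outlier test; the `remove_outliers` name and the choice to zero-out flagged entries (rather than delete rows) are assumptions for illustration:

```python
import numpy as np

def remove_outliers(E, factor=10.0):
    """Flag entries of the error matrix that exceed `factor` times the
    per-camera (per-column) average, and zero them so they no longer
    drive the optimiser. Returns the cleaned matrix and the flag mask."""
    avg = E.mean(axis=0)          # one average per camera column
    mask = E > factor * avg
    E = E.copy()
    E[mask] = 0.0
    return E, mask
```

A generous factor keeps points that are legitimately larger due to lens distortion near the image periphery, as the text notes.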
- the input parameters of multiple camera calibrations have various ranges and units. For instance, the ranges of rotation vectors are between 0 to 2π radians, while the translation vectors can vary considerably. Furthermore, objective functions have different levels of sensitivity to the input parameters, which means that variation of input parameters could have different effects on the output error in optimisation processes. In embodiments of the invention input parameters are scaled prior to using them as the inputs of the optimisation process.
- the sensitivity of the objective function to each input parameter is often indicated based on its partial derivatives.
- the scaling of parameters can be performed in two steps. In the first step, the input parameters of each camera are divided into groups with the same units and similar magnitude.
- the parameters of each group are normalised with respect to the largest value of that group.
- the sensitivity of the objective function to the changes of the input parameters is an important factor that affects the convergence rate and robustness of the optimisation process.
- this can be addressed by a second step where the input parameters (in the divided groups) are scaled based on the sensitivity of the objective function to the changes of that group of parameters, estimated using the Jacobian matrix (J) of the objective function.
- an alternative way of assessing the sensitivity of the optimisation process to input parameters could be used.
- the Jacobian matrix is an approximation of the partial derivatives of the objective function (O) with respect to the input parameters at the initial values, i.e. J(e, t) ≈ ∂O_e / ∂x_t, where:
- O is the objective function
- x_t are the camera calibration parameters (input parameters)
- NE is the number of elements of the error matrix (NE is equal to L × N × (C + 2)).
- the average values of the columns of the Jacobian are an approximation of the partial derivative of O for that input parameter (x_t), and were thus used as the measure of the sensitivity of the objective function to that input parameter.
- the groups of input parameters were scaled according to the average value of the Jacobian matrix, so that more sensitive parameters become larger and vice versa. This is intended to result in a well-scaled problem for which any changes in the input parameters will have a similar effect on the error functions (or the objective function).
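One plausible reading of the second scaling step, sketched with a hypothetical `scale_parameter_groups` helper that uses the mean absolute Jacobian column as the sensitivity measure:

```python
import numpy as np

def scale_parameter_groups(J, groups):
    """Per-group scale factors from the Jacobian: `groups` is a list of
    column-index lists (parameters with the same units and magnitude).
    Each group's scale is its mean column sensitivity relative to the
    overall mean, so more sensitive parameters are scaled up."""
    col_sens = np.abs(J).mean(axis=0)      # sensitivity of O per parameter
    scales = np.ones(J.shape[1])
    for idx in groups:
        g = np.asarray(idx)
        scales[g] = col_sens[g].mean() / col_sens.mean()
    return scales
```

The exact normalisation is a design choice; the point is that equally scaled changes in any parameter then perturb the objective by a similar amount.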
- Figure 5 shows RMS reprojection errors for two 58, 59 of four cameras 40, RMS 3D error 60, and RMS length error 61 for a calibration dataset in the method of Figure 1a 56 and a traditional method 57. This demonstrates that the new method converged quickly and has resulted in small errors. The results show both a smaller overall error and a faster convergence rate.
- Figure 6 shows two sample lines fitted to one row of corners (calibration objects) in a distorted image (on the left) and in its undistorted version (on the right). It is clear from this image that the original image was badly distorted by the multi-camera system 42. The distortion has been improved by the optimisation of the calibration values. The multi-camera system can now be used to picture unknown objects as required.
- the system is adapted for a multi-camera system, such as that shown in Figure 2, with a variable number of cameras 40 being used.
- the error matrix is arranged to have a separate column for each camera in the system. This means that, where a further camera 40 is introduced only one column of the error matrix must be updated. This is intended to result in an accurate initial value.
- the system does not attempt to find a single value for what may be different cameras, but enables details about each camera to be processed separately.
- Figure 1 b shows a flowchart of an embodiment of the system.
- the initial values of the parameters of our multi-camera system are estimated or initialised 30.
- This may use a rough calibration system, as in the earlier method, or may use any one or more of the methods described above.
- the use of a checkerboard template 50 for estimating the initial values of the parameters 30 at the first step may be due to its good reliability in the presence of lens and perspective distortions.
- the estimated initial values of the lens distortion coefficients allow us to correct most of the lens distortion effects 33 in the images 32.
- Removing the lens distortion effects paves the way to use concentric circle templates, which are typically more sensitive to lens distortions than checkerboard templates, but can provide higher accuracies for localising the control points in low-distortion images.
- a series of calibration images are taken with the system and used to estimate the initial values of the parameters of our multi-camera system. Other methods of obtaining initial calibration values will be known to the skilled person.
- the initial values of the camera parameters are found using a calibration method.
- the initial parameters were then refined using a designed calibration target (for instance consisting of concentric circles) and a reconstructed model 31 of this calibration target.
- the reconstructed, or control model (a synthetically generated model 31 ) is, for instance reconstructed in the estimated 3D position of the calibration target in the calibration images using a projection estimate.
- This enables simultaneous refinement of the control point locations and estimation of the lens distortion effects.
- the discrepancies 34 between the calibration target and its reconstructed model were measured using an algorithm for subpixel image registration. Zernike polynomials can be used as mapping functions to define a forward lens distortion model. This process acts to improve the localisation of the control points and characterisation of the lens distortion.
- lens distortion is calculated 35.
- the improved accuracy or localisation of the control points means that a following calibration step 36 is more accurate, because the input parameters (e.g. reprojection errors) have been more accurately calculated.
- a model-based technique is employed to refine the camera parameters and estimate the lens distortion model.
- a further set of calibration images are taken 32 by each of the cameras 40, of the multi-camera system 42. This may use a different calibration target 50, such as the concentric circle calibration target of Figure 7. Images are taken in the FOV at various distances and positions with respect to the cameras. The lens distortion effects were removed from the calibration images using the initial values of the estimated parameters.
- the step of locating or obtaining the calibration object locations (control points) involves segmenting the template from the background. This may use a Canny edge detection algorithm to convert images to binary images. The components which are outside of a size range which identifies the calibration objects may then be removed (for instance 100 pixels < calibration object < 1000 pixels). The size range will be dependent on the camera image size and the calibration object. The calibration objects were then found based on the geometric characteristics of the components of the binary image, however other methods are possible. In a particular embodiment using concentric circles an ellipse fitting was used to find the centre of the concentric circles.
- the ratio of the length of the major axis to the minor axis of that ellipse (Ra, ranging from a perfect circle (1) upwards) was calculated (circles become elliptic under perspective distortion in the camera images).
- the components of the binary image where Ra was smaller than, for example, 2 can be selected as the components that had the required geometric characteristics to be the circles 53 of the calibration template 50. Even though the upper threshold for the Ra value is dependent on the amount of perspective distortion in calibration images, the Ra value that we selected can be valid in a wide range of calibration images.
- because circles 53 may become elliptic under perspective distortion, a least-squares method was used to find the best-fit ellipse for the data points of each component of the binary image.
- the centres of the fitted ellipses were used as the centres of the circles of the calibration template 50.
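A simple algebraic least-squares ellipse fit can recover the centre directly; the conic parameterisation below is one illustrative formulation, not necessarily the one used in the patent:

```python
import numpy as np

def fit_ellipse_centre(x, y):
    """Least-squares algebraic conic fit a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1;
    the centre is the point where both partial derivatives of the conic vanish."""
    A = np.column_stack([x * x, x * y, y * y, x, y])
    a, b, c, d, e = np.linalg.lstsq(A, np.ones_like(x), rcond=None)[0]
    # Solve  [2a b; b 2c] [cx, cy]^T = [-d, -e]^T  for the ellipse centre.
    cx, cy = np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])
    return cx, cy
```

More elaborate constrained fits (e.g. Fitzgibbon's) guarantee an ellipse; for well-segmented circle boundaries this plain least-squares version is usually adequate.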
- although the concentric circles 53 have a common centre, because of the errors in identifying the centre of each circle, several closely placed centres were found for the members of each group of concentric circles. Therefore, the k-means clustering method was used to divide the found centre positions into a number of clusters equal to the number of control points, and the median values of the x and y positions of the centres in each cluster were selected as the centre of that group of concentric circles (or the control point of the calibration template).
- the median value of the centres was used to help to reduce the error in finding the centres of the circles in calibration images.
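A compact sketch of the clustering-plus-median step (plain k-means with a deterministic initialisation; assumes no cluster empties during iteration — a real implementation would guard against that):

```python
import numpy as np

def cluster_centres(points, k, iters=20):
    """Cluster the detected circle-centre candidates into k groups with
    plain k-means, then take the per-cluster median of x and y as the
    control-point location (medians resist stray centre estimates)."""
    # Spread the initial means across the candidate list.
    means = points[np.linspace(0, len(points) - 1, k).astype(int)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(points[:, None] - means[None], axis=2), axis=1)
        means = np.array([points[labels == j].mean(axis=0) for j in range(k)])
    return np.array([np.median(points[labels == j], axis=0) for j in range(k)])
```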
- the four markers were identified in calibration images based on a characteristic of the markers. For instance in the object of Figure 7, the number of circles in each group of concentric circles (marker 1 had five, marker 2 had four, and marker 3 and marker 4 had three concentric circles).
- the number of circles 53 was found after clustering the centres using the k-means clustering in the previous step.
- the control points of calibration images were mapped to the control points of the calibration model (the synthetically generated model) using some known information about the geometry of the calibration template 50.
- the system uses a comparison between image(s) obtained from the multi-camera arrangement and the model of the calibration target 31 , (which may be referred to as a control model).
- the control model may be generated or prepared 31 using an image of the calibration target, such as its SVG image.
- the distance between the control points can be converted from mm to pixels by known means.
- while the control model can be obtained in a number of ways (by measurement, ray tracing or otherwise), an advantage of using a 3D modelling program is that the control model is designed with known dimensions, so the exact locations of the control points are known in the model of the calibration template.
- a projective transformation between the control points of the calibration template model and the control points of the control model may be found using a least-squares method.
- the initial values of the control points were estimated in undistorted images, which helped to estimate a more accurate projective transformation between the calibration model and the calibration image.
- This projective transformation is based on the current lens distortion parameters. Therefore it can be used to reconstruct the control model in the estimated location of the measured model. That is to say the projective transformation is used to make the control model appear to match the measured model. Any differences now seen between the projected control model and the measured model are discrepancies, or errors which can be measured and compensated 35.
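The least-squares projective transformation between the two control-point sets can be sketched with the standard direct linear transform (DLT); the helper names are illustrative:

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: least-squares projective mapping taking
    src (N,2) control points to dst (N,2); returns the 3x3 matrix H."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, float)
    # Least-squares null vector of A: smallest right singular vector.
    H = np.linalg.svd(A)[2][-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Apply H to (N,2) points in homogeneous coordinates."""
    p = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return p[:, :2] / p[:, 2:3]
```

Applying the fitted H to the control model reconstructs it in the estimated location of the measured model, after which the residual discrepancies can be measured as described.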
- This is a forward lens distortion model, which maps from the distorted locations to undistorted locations.
- an inverse model may be used.
- the comparison 34 between the measured model and the control model is a comparison of a fixed property of the calibration target.
- a fixed property of the calibration target includes, but is not limited to a spatial dimension (e.g. length errors) or 3D shape feature of the calibration target.
- Figure 8 shows a sample calibration image 80 (Fig. 8a) and the generated model of the calibration template 81 (Fig. 8b). Zoomed views of the concentric circles on the bottom left side of the calibration template in the calibration image 82 and the model 83 are also provided in Fig. 8 for comparison.
- the expected view of the calibration template 83 (i.e. the model of the calibration template) was reconstructed very similarly to the actual calibration image 82.
- the reconstructed model of the calibration template and the calibration image show some minor local discrepancies due to the errors in localising the control points and lens distortion effects.
- the discrepancies between the calibration image and the reconstructed model of the calibration template (D_x and D_y) were measured in subimages of size 128 pixel × 128 pixel using the P-SG-GC algorithm.
- the local discrepancy data (subpixel shifts) were measured in the subimages of all the calibration images of the multiple cameras in the x and y directions.
- the discrepancy data were used to create the lens distortion model.
- the local discrepancies (subpixel shifts or errors) between the reconstructed model of the calibration template (control model) and the calibration images (as taken by the multi-camera system) may now be estimated in the x and y directions.
- This may use any pixel registration method but preferably uses a sub-pixel image registration such as that described in NZ720269.
- the method uses the phase-based Savitzky-Golay gradient-correlation (P-SG-GC) subpixel image registration algorithm as described in NZ720269, however other means will be known.
- P-SG-GC phase-based Savitzky-Golay gradient-correlation
- the local discrepancies, D_x and D_y, were estimated in subimages of size 128 pixel × 128 pixel, which were chosen around the (x, y) coordinates of the control points of the calibration template (64 pixels in each direction) for both the reconstructed model and the calibration images.
- the subimage size of 128 pixel × 128 pixel provided a good trade-off between the locality of measurements and having adequate image features in the subimages.
- Image features are useful for performing subpixel image registration, and the concentric circles of the calibration template provided suitable features for this purpose. However other features are suitable and, in particular, features with a range of spatial frequencies may be particularly useful.
- NZ720269 describes an image registration technique for a plurality of images.
- the image registration technique comprises the steps of: obtaining an image characteristic at a plurality of points in each image; and estimating the gradient of the image characteristic, the gradient estimate comprising a feature extracting function.
- An accurate registration is achieved because the gradient calculation combines multiple neighbouring points of the gradient measurement.
- the process may also involve the application of a feature extracting function, such as a smoothing function. This may be combined with (or include) a further operator for extracting or emphasising some of the image characteristics, such as a differentiator kernel (gradient).
- a feature extracting function such as a smoothing function.
- although the gradient alone appears to be a minimal factor in the calculation, and a more complex or higher order function increases the computational load,
- the addition of a smoothed gradient, i.e. a smoothing filter combined with a differentiator kernel, has a large beneficial impact on the image registration.
- Further aspects of the image registration method include the steps of: Obtaining a frequency domain representation of a smoothing function; Applying the frequency domain representation of the smoothing function to a frequency domain representation of a cross correlation.
- the image registration comprises a two-step process which comprises the steps of obtaining an estimate of the integer pixel-shift between the images; and obtaining an estimate of the sub- pixel shift between the images.
- control points of the reconstructed calibration model at each image were selected as the refined control points of the calibration target in that image, and Dx and D y values were used to characterise the lens distortion effects. This is because the errors associated with localising the control points are likely smaller and more random than lens distortion effects.
- Brown's distortion model can be used to correct radial and tangential distortions.
- Brown's distortion model uses a Taylor series expansion around the principal point.
- Zernike polynomials are used instead.
- Other suitable polynomials include those which can accurately model symmetric shapes and provide a set of basis functions.
- An example of these types of functions is Bessel functions.
- the measured D_x and D_y values provided data about lens distortion behaviour at the location of the control points of the calibration target in all of the calibration images of the camera. To characterise this behaviour at each camera, two independent Zernike polynomials were fitted to the measured D_x and D_y values in all of the cameras.
- the x and y inputs of the Zernike polynomials were the x and y coordinates of the control points in calibration images, and the z input was the measured D_x or D_y value at that location.
- the x and y positions of control points were normalised within the unit circle prior to being used for fitting the Zernike polynomials; this takes advantage of the orthogonality of Zernike polynomials within the unit circle.
- Zernike polynomials have some advantages: they are orthogonal over the continuous unit circle, and they can readily capture and model different aspects of the signal shapes. After the lens distortion effects were characterised, two separate sets of Zernike polynomials were used to estimate the mapping function that maps the distorted x and y locations of the points to their undistorted locations. The forward lens distortion model was used instead of solving an inverse problem, and to increase accuracy.
- By fitting Zernike polynomials to the D_x and D_y values, the Zernike polynomials become a mapping function that takes the x and y coordinates of a point and, based on those, can estimate the amount of shift that is caused by the lens distortion effects at that location. Thus, the undistorted location of points can be measured by subtracting the estimated shift from the distorted locations.
- An advantage of using Zernike polynomials to map the amount of shift between distorted and undistorted images rather than to map the distorted locations to undistorted locations is their suitability for fitting to symmetric shapes that are similar to lens distortions or lens aberrations.
- the Gram-Schmidt orthogonalisation technique allows the expansion of discrete data in terms of the Zernike polynomials while retaining orthogonality; this takes into account that Zernike polynomials are only orthogonal over the continuous unit circle, whereas the D_x and D_y data characterising the lens distortion are discrete.
- Preferably third order Zernike polynomials are used, as this offers a balance between complexity and the overfitting seen in higher order systems. Other polynomials may also be useful.
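A minimal sketch of fitting low-order Zernike terms to one shift component. The Cartesian expressions below are the standard unnormalised low-order Zernike polynomials; the `fit_distortion_map` name is illustrative, and the patent's Gram-Schmidt step for discrete orthogonality is omitted here (plain least squares is used instead):

```python
import numpy as np

def zernike_basis(x, y):
    """Low-order Zernike polynomials (unnormalised, Cartesian form) evaluated
    at points already scaled into the unit circle."""
    r2 = x * x + y * y
    return np.column_stack([
        np.ones_like(x), x, y,                             # piston, tilts
        2 * r2 - 1, x * x - y * y, 2 * x * y,              # defocus, astigmatism
        (3 * r2 - 2) * x, (3 * r2 - 2) * y,                # coma
        x * (x * x - 3 * y * y), y * (3 * x * x - y * y),  # trefoil
    ])

def fit_distortion_map(x, y, d):
    """Least-squares Zernike fit to one shift component (D_x or D_y);
    returns a function estimating that shift at new (x, y) locations."""
    coeffs = np.linalg.lstsq(zernike_basis(x, y), d, rcond=None)[0]
    return lambda xq, yq: zernike_basis(xq, yq) @ coeffs
```

Undistorted positions then follow by subtracting the two fitted shift components from the distorted locations, as described above.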
- the method has now obtained improved calibration estimates 36 of the multi-camera system 42. For instance the refined control points of a concentric calibration target 50 are less prone to error compared to the corners of the checkerboard. In preferred embodiments the calibration process is repeated or iterated.
- a first step is to match the corresponding points in each of the cameras for 3D reconstruction of the surface of the flat object.
- Traditional methods such as block-matching, which typically use cross-correlation to match the corresponding points in cameras, struggle when matching the corresponding points in arbitrarily positioned cameras 40 that cause substantial differences between image views.
- a first camera 40 is chosen as the reference camera and a projective transformation is applied to the images of the remaining cameras to make the views similar to the reference camera. The projective transformation is only used as an initial estimate; thus, it does not need to be accurate.
- Corresponding points between cameras 40 may be found by extracting and matching the image features or by using block-matching methods and subpixel image registration to find the corresponding points from the transformed images.
- Camera One was used as the reference camera.
- a (24 × 39) virtual grid of points with a step size of 10 pixels (i.e. 936 points) was selected on the surface of the flat object in the reference camera (Camera One).
- the P-SG-GC subpixel image registration algorithm with subimages of size 128 pixel × 128 pixel was used to match the corresponding points between the image of the reference camera and the transformed images of the non-reference cameras.
- the matched corresponding points were transformed back to the original coordinate system of the non-reference cameras using the inverse of the projective transformation.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
NZ73529917 | 2017-09-06 | ||
NZ735299 | 2017-09-06 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019050417A1 true WO2019050417A1 (en) | 2019-03-14 |
Family
ID=65635101
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/NZ2018/050121 WO2019050417A1 (en) | 2017-09-06 | 2018-09-06 | Stereoscopic system calibration and method |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2200311A1 (en) * | 2007-10-18 | 2010-06-23 | Sanyo Electric Co., Ltd. | Camera calibration device and method, and vehicle |
EP2523163A1 (en) * | 2011-05-10 | 2012-11-14 | Harman Becker Automotive Systems GmbH | Method and program for calibrating a multicamera system |
US8368762B1 (en) * | 2010-04-12 | 2013-02-05 | Adobe Systems Incorporated | Methods and apparatus for camera calibration based on multiview image geometry |
US20130176392A1 (en) * | 2012-01-09 | 2013-07-11 | Disney Enterprises, Inc. | Method And System For Determining Camera Parameters From A Long Range Gradient Based On Alignment Differences In Non-Point Image Landmarks |
US9734419B1 (en) * | 2008-12-30 | 2017-08-15 | Cognex Corporation | System and method for validating camera calibration in a vision system |
- 2018-09-06: WO application PCT/NZ2018/050121 filed, published as WO2019050417A1 (en); status: active (Application Filing)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2200311A1 (en) * | 2007-10-18 | 2010-06-23 | Sanyo Electric Co., Ltd. | Camera calibration device and method, and vehicle |
US9734419B1 (en) * | 2008-12-30 | 2017-08-15 | Cognex Corporation | System and method for validating camera calibration in a vision system |
US8368762B1 (en) * | 2010-04-12 | 2013-02-05 | Adobe Systems Incorporated | Methods and apparatus for camera calibration based on multiview image geometry |
EP2523163A1 (en) * | 2011-05-10 | 2012-11-14 | Harman Becker Automotive Systems GmbH | Method and program for calibrating a multicamera system |
US20130176392A1 (en) * | 2012-01-09 | 2013-07-11 | Disney Enterprises, Inc. | Method And System For Determining Camera Parameters From A Long Range Gradient Based On Alignment Differences In Non-Point Image Landmarks |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110009610A (en) * | 2019-03-27 | 2019-07-12 | 仲恺农业工程学院 | A kind of reservoir dam slope protection surface damage visible detection method and bionic device |
US11881001B2 (en) | 2019-05-23 | 2024-01-23 | Sony Interactive Entertainment Inc. | Calibration apparatus, chart for calibration, and calibration method |
WO2020235110A1 (en) * | 2019-05-23 | 2020-11-26 | Sony Interactive Entertainment Inc. | Calibration device, chart for calibration, and calibration method
JPWO2020235110A1 (en) * | 2019-05-23 | 2020-11-26 | ||
JP7218435B2 | 2019-05-23 | 2023-02-06 | Sony Interactive Entertainment Inc. | Calibration device, calibration chart and calibration method
CN112184823A (en) * | 2019-07-03 | 2021-01-05 | 上海飞猿信息科技有限公司 | Quick calibration method for panoramic system |
CN111047650A (en) * | 2019-12-02 | 2020-04-21 | 北京深测科技有限公司 | Parameter calibration method for time-of-flight camera |
CN111047650B (en) * | 2019-12-02 | 2023-09-01 | 北京深测科技有限公司 | Parameter calibration method for time-of-flight camera |
US11538193B2 (en) | 2020-01-10 | 2022-12-27 | Aptiv Technologies Limited | Methods and systems for calibrating a camera |
CN110853104B (en) * | 2020-01-15 | 2020-05-05 | 广东博智林机器人有限公司 | Calibration plate, machine vision calibration device and method |
CN110853104A (en) * | 2020-01-15 | 2020-02-28 | 广东博智林机器人有限公司 | Calibration plate, machine vision calibration device and method |
CN111354015A (en) * | 2020-02-26 | 2020-06-30 | 上海市城市建设设计研究总院(集团)有限公司 | Bridge anti-collision laser calibration system and application method thereof |
CN111354015B (en) * | 2020-02-26 | 2022-12-06 | 上海市城市建设设计研究总院(集团)有限公司 | Bridge anti-collision laser calibration system and application method thereof |
CN111598954A (en) * | 2020-04-21 | 2020-08-28 | 哈尔滨拓博科技有限公司 | Rapid high-precision camera parameter calculation method |
CN113554741A (en) * | 2020-04-24 | 2021-10-26 | 北京达佳互联信息技术有限公司 | Method and device for three-dimensional reconstruction of object, electronic equipment and storage medium |
CN113554741B (en) * | 2020-04-24 | 2023-08-08 | 北京达佳互联信息技术有限公司 | Method and device for reconstructing object in three dimensions, electronic equipment and storage medium |
CN112785519A (en) * | 2021-01-11 | 2021-05-11 | 普联国际有限公司 | Positioning error calibration method, device and equipment based on panoramic image and storage medium |
CN113034614B (en) * | 2021-03-30 | 2022-05-10 | 上海久航电子有限公司 | Five-circle center calibration method of five-circle calibration plate |
CN113034614A (en) * | 2021-03-30 | 2021-06-25 | 上海久航电子有限公司 | Five-circle center calibration method of five-circle calibration plate |
CN113487626A (en) * | 2021-07-01 | 2021-10-08 | 杭州三坛医疗科技有限公司 | Mirror image identification method and device, electronic equipment and storage medium |
CN113487626B (en) * | 2021-07-01 | 2024-03-15 | 杭州三坛医疗科技有限公司 | Mirror image identification method and device, electronic equipment and storage medium |
WO2024087927A1 (en) * | 2022-10-28 | 2024-05-02 | Oppo广东移动通信有限公司 | Pose determination method and apparatus, and computer-readable storage medium and electronic device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019050417A1 (en) | Stereoscopic system calibration and method | |
Johannsen et al. | On the calibration of focused plenoptic cameras | |
AU2016335123B2 (en) | Camera calibration using synthetic images | |
Mallon et al. | Which pattern? Biasing aspects of planar calibration patterns and detection methods | |
Wöhler | 3D computer vision: efficient methods and applications | |
JP4245963B2 (en) | Method and system for calibrating multiple cameras using a calibration object | |
JP6067175B2 (en) | Position measuring apparatus and position measuring method | |
US20110293142A1 (en) | Method for recognizing objects in a set of images recorded by one or more cameras | |
CN113920205B (en) | Calibration method of non-coaxial camera | |
CN110959099B (en) | System, method and marker for determining the position of a movable object in space | |
CN111080711A (en) | Method for calibrating microscopic imaging system in approximately parallel state based on magnification | |
Niu et al. | The line scan camera calibration based on space rings group | |
Siddique et al. | 3d object localization using 2d estimates for computer vision applications | |
CN115661226B (en) | Three-dimensional measuring method of mirror surface object, computer readable storage medium | |
Alturki et al. | Camera principal point estimation from vanishing points | |
Jarron et al. | Automatic detection and labelling of photogrammetric control points in a calibration test field | |
Alturki | Principal point determination for camera calibration | |
Claus et al. | A Plumbline Constraint for the Rational Function Lens Distortion Model. | |
Albarelli et al. | High-coverage 3D scanning through online structured light calibration | |
Paudel et al. | Localization of 2D cameras in a known environment using direct 2D-3D registration | |
Kuhl et al. | Monocular 3D scene reconstruction at absolute scales by combination of geometric and real-aperture methods | |
Grochulla et al. | Using spatially distributed patterns for multiple view camera calibration | |
De Villiers et al. | Effects of lens distortion calibration patterns on the accuracy of monocular 3D measurements | |
PirahanSiah et al. | Pattern image significance for camera calibration | |
Genovese | Single-image camera calibration with model-free distortion correction |
Legal Events
Code | Title | Description |
---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18854496; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
WPC | Withdrawal of priority claims after completion of the technical preparations for international publication | Ref document number: 735299; Country of ref document: NZ; Date of ref document: 2020-03-04; Free format text: WITHDRAWN AFTER TECHNICAL PREPARATION FINISHED |
122 | Ep: pct application non-entry in european phase | Ref document number: 18854496; Country of ref document: EP; Kind code of ref document: A1 |