CN112419427A - Method for improving time-of-flight camera accuracy

Method for improving time-of-flight camera accuracy

Info

Publication number
CN112419427A
Authority
CN
China
Prior art keywords
depth
camera
time
pixel
flight
Prior art date
Legal status
Pending
Application number
CN202011431582.0A
Other languages
Chinese (zh)
Inventor
杨守瑞 (Yang Shourui)
于鹏飞 (Yu Pengfei)
陈胜勇 (Chen Shengyong)
Current Assignee
Tianjin University of Technology
Original Assignee
Tianjin University of Technology
Priority date
Filing date
Publication date
Application filed by Tianjin University of Technology filed Critical Tianjin University of Technology
Priority to CN202011431582.0A priority Critical patent/CN112419427A/en
Publication of CN112419427A publication Critical patent/CN112419427A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4053 Super resolution, i.e. output image resolution higher than sensor resolution
    • G06T5/80 Geometric correction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images

Abstract

A method for improving the accuracy of a time-of-flight (ToF) camera. A ToF camera acquires three-dimensional images of a scene in real time: it emits a modulated near-infrared light signal and, by computing the difference between the time (or phase) at which the emitted signal reaches the target object and the time (or phase) at which the returned signal reaches the sensor, simultaneously acquires amplitude images and depth images at a high frame rate, which makes it useful for computer-vision applications such as three-dimensional reconstruction and medical technology. The method improves the accuracy of the ToF camera by applying per-pixel depth compensation and by further correcting the feature-point positions of the ToF image in combination with a high-resolution camera.

Description

Method for improving time-of-flight camera accuracy
Technical Field
The present invention relates generally to time-of-flight (ToF) cameras and, more particularly, to a method for improving the accuracy of ToF cameras; it belongs to the field of camera measurement and calibration.
The method reduces the ToF depth error and corrects the feature-point positions of the ToF image, thereby improving the accuracy of the ToF camera.
Background
With the progress of technology and the development of industry, the demand for three-dimensional information keeps growing in many sectors, and such information is now used in every corner of industrial production and daily life. In industrial production, as automation advances, acquiring three-dimensional information is indispensable in scenarios such as fault detection of parts, automatic sorting of products, and path planning for logistics distribution. In today's booming automotive field, acquiring three-dimensional information about the space around the vehicle is one of the first and most important steps. In daily life, the face-recognition module of a modern smartphone is often equipped with a depth sensor that acquires the three-dimensional information needed for face recognition.
A ToF camera is an active 3D measurement camera based on the time-of-flight principle; it is compact, offers a high frame rate, requires no scanning, and does not depend on external illumination. The camera emits a modulated near-infrared light signal; by calculating the difference between the time (phase) at which the emitted signal reaches the target object and the time (phase) at which the returned signal arrives at the sensor, it simultaneously acquires an amplitude image (grayscale image) and a depth image at a high frame rate, and can therefore be used in computer-vision applications such as three-dimensional reconstruction and medical technology. ToF cameras today have high research value and many application scenarios; the latest 2020 Huawei Mate series phones and Apple iPad Pro series tablets are all equipped with ToF depth sensors for face recognition, virtual reality, and other applications.
However, ToF depth cameras are susceptible to random errors, systematic errors, and the like, and suffer from problems such as missing depth, offset, and distortion. Error sources include temperature, the reflectivity of scene objects, the distance between the camera and the scene, the integration time, and relative motion between the camera and the object being photographed. The most important and significant of these is the distance-related error, also called the wobble (wiggling) error, caused by the mismatch between the ideal sinusoidal signal assumed by the ToF camera and the actually observed waveform. In addition, the image resolution provided by a ToF camera is much lower than that of the ordinary cameras that currently provide two-dimensional information. This yields a blurred representation of the spatial scene on the ToF image and makes it difficult to localize key information, so correcting the positions of key points (feature points) on the ToF image is a problem that needs to be solved.
Disclosure of Invention
To address these problems, the invention provides a method for improving the accuracy of a time-of-flight camera, which comprises: (1) building a calibration platform for the cameras; (2) achieving pixel-by-pixel error compensation of the ToF depth image by fitting a distance-error function model; (3) combining a high-resolution camera and, through the relative pose of the two cameras, searching the high-resolution image for more accurate feature-point positions, which are then back-projected onto the ToF image to correct its feature-point positions. The method thus corrects the ToF image in three directions (depth plus the two pixel coordinates of the feature points), improving the ToF accuracy otherwise limited by camera resolution.
To achieve this, the invention adopts the following technical scheme:
a method for improving the accuracy of a time-of-flight camera, the method consisting essentially of:
building a camera calibration platform comprising a ToF camera, a high-resolution camera, a checkerboard calibration board, and a flat plane;
carrying out stereo calibration of the ToF camera and the high-resolution camera on the platform to obtain the intrinsic and extrinsic parameter matrices of the ToF camera, those of the high-resolution camera, and the relative pose of the high-resolution camera with respect to the ToF camera for the subsequent steps;
using the ToF camera alone to shoot depth images of the flat plane at different distances, capturing multiple images (more than 30) at each position to reduce random noise and obtain an optimized depth map for each position;
fitting the optimized depth map of each position to an ideal plane by least squares, which serves as the ground truth of the plane captured by the ToF camera at that position, and obtaining radial distance-error data for each position;
fitting the depth-error data of each pixel over the set of distances to a quadratic polynomial whose independent variable is the radial distance and whose dependent variable is the radial error, which is then used to compensate the errors of subsequently acquired depth images;
detecting the sub-pixel feature-point positions in the amplitude image acquired by the ToF camera (a detection sketch follows this summary), locating each sub-pixel feature point in the depth image to obtain the four surrounding pixels and their depth values, and transforming these four ToF pixels and their depth values into the high-resolution image using the obtained intrinsic and extrinsic parameters and relative pose;
meanwhile, detecting feature points in the corresponding region of the high-resolution image; since the high-resolution image represents the target scene more clearly, the sub-pixel feature points obtained there are more accurate than those from the amplitude image, and a distance-weighting method then uses the four pixels projected into the high-resolution image to obtain the depth value of the sub-pixel in the high-resolution image;
and transforming each corrected feature point and its depth value obtained in the high-resolution image back into the ToF camera coordinate system and image coordinate system, thereby obtaining a more accurate point cloud and depth image.
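As an illustration of the sub-pixel feature detection step above, the following Python sketch (OpenCV/NumPy; the file name and the 9 x 6 board size are assumptions for illustration, not taken from the patent) detects checkerboard corners in a ToF amplitude image, refines them to sub-pixel accuracy, and collects the four integer pixels surrounding one corner:

    import cv2
    import numpy as np

    # Assumed inputs: an 8-bit ToF amplitude image and a board with 9 x 6 inner corners.
    amp = cv2.imread("tof_amplitude.png", cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(amp, (9, 6))
    if found:
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
        corners = cv2.cornerSubPix(amp, corners, (5, 5), (-1, -1), criteria)
        u, v = corners[0].ravel()                      # one sub-pixel feature point p_tm
        x0, y0 = int(np.floor(u)), int(np.floor(v))
        # The four surrounding integer pixels p_ti, i = 1..4, looked up in the depth image
        p_ti = [(x0, y0), (x0 + 1, y0), (x0, y0 + 1), (x0 + 1, y0 + 1)]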
Further, the specific setup of the calibration platform includes:
the ToF camera and the high-resolution camera are connected by a hardware trigger line (software triggering may also be used; the choice depends on each camera's SDK) to ensure synchronized acquisition; the two cameras are fixed so as to maximize their common field of view, and checkerboards placed at different angles are then photographed to obtain images for stereo calibration.
Further, correcting the intrinsic and extrinsic parameters and the distortion of the ToF camera and the high-resolution camera on the calibration platform specifically comprises:
the optical path model of the ToF camera is the same as that of the high-resolution camera, both following the same pinhole imaging model; the intrinsic and extrinsic parameters of the two cameras are calibrated with a homography-based method, and the pose of the high-resolution camera relative to the ToF camera is then obtained using an SVD-based method.
A point in space A = [X_w, Y_w, Z_w]^T, the color-camera pixel coordinates [x, y], and the ToF pixel coordinates [m, n] satisfy the following coordinate transformations:

s_t · [m, n, 1]^T = K_t [R_t | T_t] [X_w, Y_w, Z_w, 1]^T
s_c · [x, y, 1]^T = K_c [R_c | T_c] [X_w, Y_w, Z_w, 1]^T

where K_t and K_c denote the intrinsic matrices of the ToF camera and the high-resolution camera, respectively, [R_t | T_t] and [R_c | T_c] are their extrinsic matrices, and s_t, s_c are projective scale factors.
The relative pose between the high-resolution camera and the ToF camera can be obtained by a matrix transformation as follows (with the extrinsic matrices taken in homogeneous 4 x 4 form so that the inverse is defined):

[R_t2c | T_t2c] = [R_c | T_c] · [R_t | T_t]^(-1)
Because of lens design and manufacturing errors, the actual camera models (ToF and high-resolution) are not ideal pinhole models, so the influence of lens distortion must be considered. Let [u_p, v_p] be the pixel coordinates of an ideal point and [u_d, v_d] the pixel coordinates of the distorted (actual) point; in normalized image coordinates (x_p, y_p) the standard distortion model gives:

x_d = x_p (1 + k_1 r^2 + k_2 r^4) + 2 p_1 x_p y_p + p_2 (r^2 + 2 x_p^2)
y_d = y_p (1 + k_1 r^2 + k_2 r^4) + p_1 (r^2 + 2 y_p^2) + 2 p_2 x_p y_p

where r^2 = x_p^2 + y_p^2, k_1, k_2 are the radial distortion coefficients, and p_1, p_2 are the tangential distortion coefficients.
The ToF camera and the high-resolution camera are fixed, and the relative position between the target plane and the camera pair is changed multiple times to obtain multiple homographies for stereo calibration, from which the initial values of each camera's intrinsic and extrinsic parameters and the relative pose of the high-resolution camera with respect to the ToF camera are computed. Because of lens distortion, the intrinsic and extrinsic parameters and distortion coefficients of each camera are then jointly optimized.
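A minimal sketch of this calibration step using OpenCV is given below; the corner lists obj_pts, tof_pts, hr_pts and the image sizes are assumed to have been collected beforehand, so this is an illustration of the homography-based procedure rather than the patent's exact implementation:

    import cv2

    # obj_pts: list of (N, 3) float32 board-corner coordinates in the board frame;
    # tof_pts / hr_pts: matching (N, 1, 2) corner pixels detected in the ToF
    # amplitude images and in the high-resolution images (assumed available).
    _, K_t, dist_t, _, _ = cv2.calibrateCamera(obj_pts, tof_pts, tof_size, None, None)
    _, K_c, dist_c, _, _ = cv2.calibrateCamera(obj_pts, hr_pts, hr_size, None, None)

    # Stereo calibration jointly refines the intrinsics and distortion coefficients
    # and returns the relative pose [R_t2c | t_t2c] that maps points from the ToF
    # camera frame into the high-resolution camera frame.
    _, K_t, dist_t, K_c, dist_c, R_t2c, t_t2c, _, _ = cv2.stereoCalibrate(
        obj_pts, tof_pts, hr_pts, K_t, dist_t, K_c, dist_c, tof_size,
        flags=cv2.CALIB_USE_INTRINSIC_GUESS)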
Further, the correction of random errors comprises: shooting a plurality of depth images (≥ 30) at the same position and computing the depth mean μ and standard deviation σ of each pixel; the depth value D(x, y) of each pixel of the optimized depth image is then calculated as

D(x, y) = (1 / N) · Σ_i D_i(x, y),   summing over the N images for which (D_i(x, y) − μ) ∈ [−3σ, 3σ]

where D_i(x, y) is the depth value of the i-th image, D(x, y) is the per-pixel depth after random-error removal, and the condition (D_i(x, y) − μ) ∈ [−3σ, 3σ] removes outliers.
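The per-pixel 3σ rejection and re-averaging can be written compactly in NumPy; the (N, H, W) stack layout is an assumption:

    import numpy as np

    def denoise_depth(stack):
        # stack: (N, H, W) depth images taken at one fixed position
        mu = stack.mean(axis=0)
        sigma = stack.std(axis=0)
        inlier = np.abs(stack - mu) <= 3.0 * sigma          # (D_i - mu) in [-3s, 3s]
        n_ok = inlier.sum(axis=0)
        d_sum = np.where(inlier, stack, 0.0).sum(axis=0)
        # Re-average the inliers; keep the plain mean wherever all samples were rejected
        return np.divide(d_sum, n_ok, out=mu.copy(), where=n_ok > 0)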
Further, fitting the distance-error function at different distances on a pixel-by-pixel basis comprises: the per-pixel distance residuals Δd_k(x, y), k = 1, 2, ..., n, obtained from n depth images at different distances are fitted to a quadratic polynomial, building a distance-error model for each pixel of the depth image:

f(d) = a_2 d^2 + a_1 d + a_0

where d denotes the depth of the pixel along its ray direction, a_i (i = 0, 1, 2) are the fitted polynomial coefficients, and f(d) is the estimated depth error along the ray direction.
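A straightforward per-pixel implementation of this quadratic fit and the subsequent compensation might look as follows (NumPy; the (n, H, W) array layout is assumed):

    import numpy as np

    def fit_pixel_error_models(d_radial, d_error):
        # d_radial, d_error: (n, H, W) radial depths and residuals at n distances
        n, H, W = d_radial.shape
        coeffs = np.empty((H, W, 3))                        # [a2, a1, a0] per pixel
        for y in range(H):
            for x in range(W):
                coeffs[y, x] = np.polyfit(d_radial[:, y, x], d_error[:, y, x], 2)
        return coeffs

    def compensate(depth, coeffs):
        # Subtract the estimated error f(d) = a2*d^2 + a1*d + a0 from every pixel
        a2, a1, a0 = coeffs[..., 0], coeffs[..., 1], coeffs[..., 2]
        return depth - (a2 * depth ** 2 + a1 * depth + a0)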
Further, the method for correcting the ToF image feature points in combination with the high-resolution camera specifically includes:
Step 1: calibrate the ToF and high-resolution camera pair. The two cameras are mounted on the platform and remain stationary. The low-resolution ToF camera and the high-resolution camera simultaneously capture a checkerboard in different relative poses.
Step 2: the pixels are projected from the ToF image onto the high resolution image. As shown in fig. 1 of the accompanying drawings, the ToF camera and the high resolution camera capture an object simultaneously. And a low-resolution ToF image and a high-resolution color image are obtained, respectively. First, the sub-pixel feature point p detected in the ToF image is determinedtmAnd four pixels p around itti{ i ═ 1,2,3,4 }. From the pixels and depth values, we can compute the corresponding spatial point P in the ToF camera coordinatesti{ i ═ 1,2,3,4 }. The four spatial points are then converted into corresponding points P by high resolution camera pose with respect to ToF camera coordinatesci{ i ═ 1,2,3,4}, and corresponding pixel pci{ i ═ 1,2,3,4} was obtained by a pinhole imaging model with a high resolution camera, as follows:
Pci=[Rt2c|tt2c]Pti
s·pci=KcPci
wherein R ist2c|tt2c]Is an external reference matrix, s is a scale factor, KcIs an internal reference matrix of a high resolution camera.
Then, by searching for features similar to p_tm in the region of p_ci (i = 1, 2, 3, 4), the sub-pixel p_cm is obtained; compared with p_tm in the ToF image, this sub-pixel in the high-resolution image is a more accurate estimate of the feature position.
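Following the equations of step 2, the projection of the four ToF pixels into the high-resolution image can be sketched as below (NumPy; the sketch assumes the stored depths are Z-depths, with a comment noting the radial-distance case):

    import numpy as np

    def tof_pixels_to_hr(p_ti, d_ti, K_t, R_t2c, t_t2c, K_c):
        # p_ti: (4, 2) pixels around the sub-pixel feature; d_ti: (4,) their depths
        ones = np.ones((p_ti.shape[0], 1))
        rays = np.linalg.inv(K_t) @ np.hstack([p_ti, ones]).T      # (3, 4)
        # Assumes Z-depths; for radial distances, divide each ray by its norm first.
        P_ti = rays * d_ti                                          # points in the ToF frame
        P_ci = R_t2c @ P_ti + t_t2c.reshape(3, 1)                   # P_ci = [R_t2c | t_t2c] P_ti
        p_ci = K_c @ P_ci                                           # s * p_ci = K_c * P_ci
        return (p_ci[:2] / p_ci[2]).T, P_ci                         # (4, 2) pixels and 3D points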
Step 3: correct the feature-point position. After step 2, from the pixels p_ci and their depth values D(p_ci), an algorithm based on pixel-distance weighting is used to calculate the depth value of the sub-pixel p_cm:

D(p_cm) = Σ_{i=1..4} w_i · D(p_ci),   with   w_i = (1 / ||p_cm − p_ci||) / Σ_{j=1..4} (1 / ||p_cm − p_cj||)

where D(p_ci) (i = 1, 2, 3, 4) is the depth value of pixel p_ci; the weight w_i of D(p_ci) is inversely proportional to the pixel distance ||p_cm − p_ci|| from p_ci to p_cm.
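Read directly, this weighting scheme is a normalized inverse-distance interpolation; a NumPy sketch follows (the small epsilon guarding against division by zero is an implementation choice, not part of the patent):

    import numpy as np

    def weighted_depth(p_cm, p_ci, d_ci, eps=1e-9):
        # p_cm: (2,) refined sub-pixel; p_ci: (4, 2) pixels; d_ci: (4,) their depths
        w = 1.0 / (np.linalg.norm(p_ci - p_cm, axis=1) + eps)   # w_i ~ 1 / ||p_cm - p_ci||
        return float(np.sum(w * d_ci) / np.sum(w))              # normalized weighted average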
A more accurate feature position with a correct depth value is thus obtained in the high-resolution image. Finally, the refined feature point p_cm = [u_cm, v_cm]^T with depth value D(p_cm) is converted into a precise 3D point P̂_tm in ToF camera coordinates and a sub-pixel p̂_tm in the depth image:

[X_cm, Y_cm, Z_cm]^T = Z_cm · K_c^(-1) [u_cm, v_cm, 1]^T
P̂_tm = R_t2c^(-1) ([X_cm, Y_cm, Z_cm]^T − t_t2c)
s · [p̂_tm, 1]^T = K_t · P̂_tm

where s is an arbitrary scale factor, [X_cm, Y_cm, Z_cm]^T is the spatial point in high-resolution camera coordinates with Z_cm = D(p_cm), K_t is the intrinsic matrix of the ToF camera, and R_t2c and t_t2c are the rotation and translation from ToF camera coordinates to high-resolution camera coordinates.
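The back-projection is the inverse of the earlier ToF-to-high-resolution transfer; a sketch under the same notation:

    import numpy as np

    def hr_point_to_tof(p_cm, d_cm, K_c, R_t2c, t_t2c, K_t):
        # Lift the refined pixel: [X, Y, Z]^T = Z * K_c^-1 [u, v, 1]^T with Z = D(p_cm)
        P_cm = d_cm * (np.linalg.inv(K_c) @ np.array([p_cm[0], p_cm[1], 1.0]))
        # Invert the relative pose (R^-1 = R^T for a rotation) to reach ToF coordinates
        P_tm = R_t2c.T @ (P_cm - t_t2c.ravel())
        # Project with the ToF intrinsics and divide out the scale factor s
        uvw = K_t @ P_tm
        return P_tm, uvw[:2] / uvw[2]          # 3D point and sub-pixel in the depth image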
With this technical scheme, the high-error, low-resolution ToF imaging problem is addressed without complex precision measuring instruments as an aid; instead, an ordinary high-resolution camera and a simple, effective algorithm solve the problem.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic general flow chart of a method for improving the accuracy of a time-of-flight camera according to the present invention;
FIG. 2 is a schematic flow chart of the correction of time-of-flight camera errors by pixel-by-pixel depth compensation;
FIG. 3 is a schematic flow chart of further correcting time-of-flight camera image feature points in conjunction with a high resolution camera;
FIG. 4 is a schematic diagram of the feature-point correction method.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art without creative effort shall fall within the protection scope of the present invention.
The embodiment of the invention discloses the general flow of the method for improving the accuracy of a time-of-flight camera, comprising the following steps:
S1: the user shoots a plurality of checkerboard images (comprising an amplitude map and a depth map) at different relative positions with the time-of-flight camera;
S2: the ToF camera is calibrated from the corner points of the amplitude images, and its intrinsic parameter matrix and distortion coefficients are computed (the latter for distortion correction of the ToF images);
S3: using a flat plane, the ToF camera takes multiple sets of depth images at different distances from the plane, and the depth images are corrected with the distortion coefficients obtained in step S2;
S4: the proposed depth-compensation method computes a depth-error function for each pixel, reducing the depth error;
S5: the ToF camera and a high-resolution camera are fixed together so as to maximize the overlap of their fields of view and connected by hardware triggering to ensure synchronized acquisition;
S6: stereo calibration of the ToF camera and the high-resolution camera yields the intrinsic and extrinsic parameters and the relative pose of the two cameras;
S7: on the depth-compensated ToF image, the feature-point positions are corrected to obtain a higher-accuracy ToF image.
Further, the invention discloses the depth-compensation method referenced in FIG. 1, with its flow shown in FIG. 2:
S1: a planar scene is shot with the time-of-flight camera; several images are captured consecutively at each position, and a set of depth images is collected at different distances;
S2: an optimized depth image is obtained at each position by removing Gaussian-distributed random noise per pixel from the consecutively captured depth images;
S3: from the phase value of each depth-image pixel, the distance of the pixel along its ray (radial direction) is computed, and the depth image at each distance is fitted to a spatial plane serving as the ground truth;
S4: a set of depth-error data is obtained by computing, for each pixel, the radial error between the depth image shot at each position and the fitted spatial plane (a sketch of S3 and S4 follows this list);
S5: the depth-error data are fitted with a polynomial function to obtain a depth-error model for each pixel, which is then used for error compensation of every pixel of an actually captured depth image.
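Steps S3 and S4 can be sketched as follows; the least-squares plane z = a·x + b·y + c and the conversion of the Z residual to a radial residual are one plausible reading of the procedure, assuming a noise-reduced Z-depth map as input:

    import numpy as np

    def radial_plane_errors(depth, K_t):
        # depth: (H, W) noise-reduced Z-depth map of the flat plane at one position
        H, W = depth.shape
        u, v = np.meshgrid(np.arange(W), np.arange(H))
        rays = np.linalg.inv(K_t) @ np.stack([u.ravel(), v.ravel(), np.ones(H * W)])
        P = rays * depth.ravel()                                  # (3, H*W) 3D points
        # Least-squares ground-truth plane z = a*x + b*y + c through the point cloud
        A = np.stack([P[0], P[1], np.ones(H * W)], axis=1)
        (a, b, c), *_ = np.linalg.lstsq(A, P[2], rcond=None)
        z_err = P[2] - (a * P[0] + b * P[1] + c)                  # per-pixel Z residual
        # Scale by the ray length to express the residual along the viewing ray
        return (z_err * np.linalg.norm(rays, axis=0)).reshape(H, W)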
Further, the invention discloses the method of correcting the time-of-flight camera's feature points in combination with the high-resolution camera referenced in FIG. 1, with its flow shown in FIG. 3:
S1: the time-of-flight camera and the high-resolution camera are fixed together on a calibration platform and jointly calibrated to obtain the intrinsic matrix of each camera and the relative pose of the high-resolution camera with respect to the time-of-flight (ToF) camera;
S2: sub-pixel feature points are extracted from the amplitude image shot by the time-of-flight camera, the four pixel positions around each sub-pixel feature point are determined, and the corresponding pixel points and their depth values are located in the simultaneously captured depth image;
S3: the pixel points and depth values in the depth image are projected into the high-resolution image using the relative pose of the high-resolution camera with respect to the time-of-flight camera;
S4: more accurate feature points corresponding to the region of the amplitude map are searched for in the high-resolution color image, which has richer texture features;
S5: the depth value of each such point is computed by the pixel-distance weighting method;
S6: the optimized feature points are back-projected onto the depth image using the relative pose relationship, yielding more accurate feature-point positions in the depth image.
Further, the invention discloses the correspondence between the poses of the time-of-flight camera and the high-resolution camera and their pixel points, as shown in FIG. 4:
S1: the coordinate systems [X_t, Y_t, Z_t] and [X_c, Y_c, Z_c] represent the time-of-flight camera and the high-resolution camera, respectively, which photograph the same target object simultaneously;
S2: the sub-pixel corresponding to feature point m of the target object in the captured depth image is p_tm, and the pixel region containing it is p_ti (i = 1, 2, 3, 4);
S3: the relative pose between the high-resolution camera and the time-of-flight camera is computed, and the pixel points of the time-of-flight camera are projected into the region formed by the corresponding high-resolution pixels;
S4: a more accurate feature point p_cm, similar to the corresponding region in the depth image, is found in that pixel region of the high-resolution camera;
S5: the optimized p_cm is back-projected into the depth image of the time-of-flight camera to obtain p̂_tm, i.e., a more accurate feature-point position.

Claims (3)

1. A method for improving the accuracy of a time-of-flight camera, characterized in that a time-of-flight camera acquiring three-dimensional images of a scene in real time is adopted; the camera emits a modulated near-infrared light signal and, by calculating the difference between the time or phase at which the emitted signal reaches the target object and the time or phase at which the returned signal reaches the sensor, simultaneously acquires amplitude images and depth images at a high frame rate, making it applicable to computer-vision scenarios such as three-dimensional reconstruction and medical technology; and the accuracy of the time-of-flight camera is improved by performing depth compensation on it and by further correcting the feature-point positions of the time-of-flight image in combination with a high-resolution camera.
2. The method of claim 1, wherein the depth compensation of the time-of-flight camera comprises: shooting a planar scene with the time-of-flight camera, capturing a plurality of images consecutively at each position and a set of depth images at different distances; obtaining an optimized depth image by removing Gaussian-distributed random noise from each pixel of the depth images captured consecutively at each position; computing the distance value of each pixel along its radial direction from the phase value of the depth-image pixel, and fitting the depth image at each distance to a spatial plane serving as the ground truth; calculating the radial error between the depth image shot at each position and each pixel of the fitted spatial plane to obtain a set of depth-error data; and fitting the depth-error data with a polynomial function to obtain a depth-error model of each pixel, which is then used for error compensation of each pixel of an actually captured depth image.
3. The method of claim 2, wherein the correction of the time-of-flight image feature-point positions in combination with the high-resolution camera comprises: fixing the time-of-flight camera and the high-resolution camera together on a calibration platform; jointly calibrating the two cameras to obtain the intrinsic matrix of each camera and the relative pose of the high-resolution camera with respect to the time-of-flight camera; extracting sub-pixel feature points from the amplitude map captured by the time-of-flight camera, determining the four pixel positions around each sub-pixel feature point, and obtaining the corresponding pixel points and their depth values from the simultaneously captured depth map; projecting the pixel points and depth values in the depth map into the high-resolution image using the relative pose of the high-resolution camera with respect to the time-of-flight camera; searching the high-resolution color image, which has richer texture features, for more accurate feature points corresponding to the region of the amplitude map; calculating the depth values of these points by the pixel-distance weighting method; and back-projecting the optimized feature points onto the depth image using the relative pose relationship to obtain more accurate feature-point positions in the depth map.
CN202011431582.0A 2020-12-07 2020-12-07 Method for improving time-of-flight camera accuracy Pending CN112419427A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011431582.0A CN112419427A (en) 2020-12-07 2020-12-07 Method for improving time-of-flight camera accuracy

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011431582.0A CN112419427A (en) 2020-12-07 2020-12-07 Method for improving time-of-flight camera accuracy

Publications (1)

Publication Number Publication Date
CN112419427A true CN112419427A (en) 2021-02-26

Family

ID=74774967

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011431582.0A Pending CN112419427A (en) 2020-12-07 2020-12-07 Method for improving time-of-flight camera accuracy

Country Status (1)

Country Link
CN (1) CN112419427A (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109272556A (en) * 2018-08-31 2019-01-25 青岛小鸟看看科技有限公司 A kind of scaling method and device of flight time TOF camera
CN111508011A (en) * 2020-04-16 2020-08-07 北京深测科技有限公司 Depth data calibration method of flight time camera

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Pengfei Yu et al., "Accuracy improvement of time-of-flight depth measurement by combination of a high-resolution color camera," Applied Optics. *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113192144A (en) * 2021-04-22 2021-07-30 上海炬佑智能科技有限公司 ToF module parameter correction method, ToF device and electronic equipment
CN114071114A (en) * 2022-01-17 2022-02-18 季华实验室 Event camera, depth event point diagram acquisition method, device, equipment and medium

Similar Documents

Publication Publication Date Title
Ishikawa et al. Lidar and camera calibration using motions estimated by sensor fusion odometry
JP5393318B2 (en) Position and orientation measurement method and apparatus
CN107729893B (en) Visual positioning method and system of die spotting machine and storage medium
US8306323B2 (en) Method and apparatus for correcting depth image
CN105096329B (en) Method for accurately correcting image distortion of ultra-wide-angle camera
US20060215935A1 (en) System and architecture for automatic image registration
CN111815716A (en) Parameter calibration method and related device
KR20150112362A (en) Imaging processing method and apparatus for calibrating depth of depth sensor
CN111524194B (en) Positioning method and terminal for mutually fusing laser radar and binocular vision
CN112184811B (en) Monocular space structured light system structure calibration method and device
JP6282377B2 (en) Three-dimensional shape measurement system and measurement method thereof
US10628968B1 (en) Systems and methods of calibrating a depth-IR image offset
US20220020178A1 (en) Method and system for enhancing images using machine learning
CN112419427A (en) Method for improving time-of-flight camera accuracy
CN114299156A (en) Method for calibrating and unifying coordinates of multiple cameras in non-overlapping area
CN115830103A (en) Monocular color-based transparent object positioning method and device and storage medium
US20200007843A1 (en) Spatiotemporal calibration of rgb-d and displacement sensors
CN113963065A (en) Lens internal reference calibration method and device based on external reference known and electronic equipment
CN112233184B (en) Laser radar and camera calibration parameter correction method and device based on image registration
CN113808217A (en) Real-time laser radar and camera calibration error self-correction method and system
CN114693807B (en) Method and system for reconstructing mapping data of power transmission line image and point cloud
CN115880369A (en) Device, system and method for jointly calibrating line structured light 3D camera and line array camera
CN114663519A (en) Multi-camera calibration method and device and related equipment
CN110232715B (en) Method, device and system for self calibration of multi-depth camera
CN115830131A (en) Method, device and equipment for determining fixed phase deviation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination