CN109816735B - Rapid calibration and correction method and TOF camera thereof - Google Patents


Info

Publication number
CN109816735B
Authority
CN
China
Prior art keywords
value
distance
wiggling
correction
integration time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910068610.8A
Other languages
Chinese (zh)
Other versions
CN109816735A (en)
Inventor
瞿喜锋
王彤
周旭廷
孙小旭
张如意
郭庆洪
张强
于振中
李文兴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HRG International Institute for Research and Innovation
Original Assignee
HRG International Institute for Research and Innovation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HRG International Institute for Research and Innovation filed Critical HRG International Institute for Research and Innovation
Priority to CN201910068610.8A priority Critical patent/CN109816735B/en
Publication of CN109816735A publication Critical patent/CN109816735A/en
Application granted granted Critical
Publication of CN109816735B publication Critical patent/CN109816735B/en

Abstract

The invention relates to a rapid calibration and correction method and a TOF camera using it, comprising the following steps: (1) collecting relevant data of a TOF camera at a fixed frequency, the relevant data comprising depth-value images, integration time values and true distance values; (2) processing the data acquired in step 1 to generate the relevant calibration parameters, including a global deviation value, a wiggling lookup table and an FPPN lookup table; (3) performing distance correction on the TOF camera according to the calibration parameters generated in step 2, the distance correction comprising preprocessing, wiggling correction and FPPN correction. The method is simple in principle and easy to operate, ensures high measurement accuracy of the TOF camera, meets the speed requirement of the correction process, and is well suited to engineering applications.

Description

Rapid calibration and correction method and TOF camera thereof
Technical Field
The present invention relates to a calibration method and a camera, and more particularly to a rapid calibration and correction method and a TOF camera using it.
Background
With the continuous development of computer vision technology, depth cameras based on TOF (Time of Flight) technology are increasingly widely applied. The technology uses a low-cost CMOS pixel array together with an actively modulated light source: modulated near-infrared light emitted by the light source is reflected when it meets an object, and the sensor obtains three-dimensional depth information of the scene by calculating the time difference or phase difference between emission and reflection. While acquiring depth information, a TOF camera can also acquire grey-scale information of the spatial target. TOF technology has the characteristics of high accuracy, high frame rate, independence from ambient light and low cost, and is an important milestone in the field of computer vision.
Due to the characteristics of the TOF imaging principle and differences between components, the acquired data often contain certain errors and must be calibrated. The main factors causing errors in TOF cameras are: 1. Periodic errors caused by odd harmonics (also known as "wiggling"). Owing to hardware limitations the transmitted signal is not a perfectly standard waveform, which produces a continuous offset related to the measured value. 2. Errors caused by different integration times. The integration time, also called exposure time, introduces errors of different magnitude at different settings; in particular, an integration time that is too long or too short leads to "overexposure" or "underexposure" and hence large measurement errors. 3. Errors caused by a fixed phase deviation (also known as FPPN, fixed pattern phase noise). Because of design and manufacturing variations there are slight differences between pixels, and the offset of each pixel needs to be corrected. Many external factors also affect TOF camera measurement errors, such as ambient temperature, mixed reflected light, object shape and reflectivity.
Some calibration and correction methods already exist in the prior art. For example, Chinese patent application CN 2015108533668 discloses a calibration and correction system for a TOF camera and an apparatus and method thereof, the method comprising the following steps: (A) setting an integration time of the TOF camera; (B) measuring and calculating relevant parameters; (C) obtaining a suitable integration time from the relevant parameters; and (D) obtaining a measured value at the suitable integration time and comparing it with the actual value to obtain a calibration error value. The TOF camera correction system comprises a calibration unit and a correction unit; the calibration unit acquires a suitable integration time for the TOF camera to obtain a stable calibration error, and the correction unit corrects the calibration error of the calibration unit.
However, because the errors of a TOF camera arise from many factors that are not completely independent of each other, the calibration steps are extremely cumbersome. Calibrating and correcting a TOF camera while meeting both a given measurement accuracy requirement and a speed requirement for the correction process has therefore become a key problem in TOF camera research and development.
In addition, existing methods mainly adopt univariate correction, in which different influencing factors are corrected separately. Although such methods are accurate, the steps are complicated and the dimensionality of the variables in the TOF camera correction process is too high, which slows down the correction algorithm.
Disclosure of Invention
To overcome the defects of the above methods, the invention discloses a simple and practical rapid calibration and correction method for a TOF camera. The method selects the factors that most strongly influence the TOF camera's measurement error, namely wiggling, integration time and the fixed phase deviation FPPN, and stores the distance error results in the form of lookup tables. With this method the TOF camera achieves high measurement accuracy while the speed requirement of the correction process is met.
The technical scheme is as follows:
A rapid calibration and correction method, characterized in that it comprises the following steps:
Step 1: collecting relevant data of a TOF camera at a fixed frequency, the relevant data comprising depth-value images, integration time values and true distance values;
Step 2: processing the data acquired in step 1 to generate the relevant calibration parameters, including a global deviation value, a wiggling lookup table and an FPPN lookup table;
Step 3: performing distance correction on the TOF camera according to the calibration parameters generated in step 2, the distance correction comprising preprocessing, wiggling correction and FPPN correction.
Preferably, the TOF camera resolution employed in step 1 is 240 × 320;
preferably, the acquiring TOF camera related data in step 1 comprises the steps of:
(1a) Set the minimum integration time of the TOF camera.
(1b) Fix the integration time of the TOF camera; the measured target object is a white plane. Select measuring distances at equal intervals up to the maximum measuring distance at the modulation frequency, and record the measured distance value of the pixel centre point n times at each test distance.
(1c) Change the integration time of the TOF camera and repeat step (1b); the integration time is selected at equal intervals up to the maximum integration time of the TOF camera.
(1d) Extensive experiments show that FPPN has no obvious relation to integration time or distance. Therefore, select a suitable distance and a suitable integration time, and record m depth images.
Preferably, the step 2 of performing a processing operation on the data acquired in the step 1 and generating the relevant calibration parameters includes the following steps:
(2a) Calculate the global deviation value. Owing to the electrical delay caused by the illumination driver circuit and the electro-optical conversion, there is a fixed offset at each modulation frequency. Normalising the measured data by the global deviation makes the data easier to process. Average the measured distance values obtained in step (1b) and step (1c). Select a suitable test distance at a suitable integration time, and subtract the true distance value from the mean of the measured distance values to obtain the global deviation value, calculated as:
P_global_offset = M_rawdistance - C_realdistance (Ⅰ)
where P_global_offset is the global deviation value, M_rawdistance is the mean of the measured distance values, and C_realdistance is the true distance value.
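A minimal sketch of this global-deviation computation, assuming the averaged centre-pixel measurement is already available; the function name and the numeric values are illustrative, not taken from the patent.

```python
def compute_global_offset(mean_measured_mm, true_distance_mm):
    """Global deviation value per formula (I):
    P_global_offset = M_rawdistance - C_realdistance."""
    return mean_measured_mm - true_distance_mm

# Illustrative values only (e.g. a 500 us integration time, 400 mm test distance).
p_global_offset = compute_global_offset(mean_measured_mm=452.3, true_distance_mm=400.0)
```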
(2b) Generate the wiggling lookup table. For every integration time, subtract the global deviation value from step (2a) from the average measured distance values obtained in steps (1b) and (1c), then subtract the true distance value to obtain the distance error of each measuring point at that integration time. For a given integration time, fit the error data with a cubic polynomial to obtain the distribution curve of the distance error with respect to the measured distance at that integration time. Using this curve, within the measuring range of the TOF camera and starting from the minimum measured distance value, obtain the distance error corresponding to the measured distance at every interval of x mm up to the maximum measured distance value, and store the distance error for each measured distance value sequentially by column. Perform this operation for all integration times, storing the data for each integration time as one row. Save all of the resulting data into a file to obtain the wiggling lookup table file.
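The table construction just described could be sketched as follows; the dictionary layout of the collected samples and the grid parameters are assumptions made for illustration, and the fit uses NumPy's polynomial routines rather than any specific implementation from the patent.

```python
import numpy as np

def build_wiggling_lut(samples, p_global_offset, d_min_mm, d_max_mm, step_mm):
    """samples: {integration_time_us: [(measured_mm, true_mm), ...]} from steps (1b)/(1c).
    Returns (times, lut) where lut[i, j] is the distance error at measured distance
    d_min_mm + j * step_mm for integration time times[i] (one row per integration time)."""
    grid = np.arange(d_min_mm, d_max_mm + step_mm, step_mm)
    times = sorted(samples)
    lut = np.empty((len(times), len(grid)))
    for i, t in enumerate(times):
        meas = np.array([m - p_global_offset for m, _ in samples[t]])  # measured distance minus global offset
        err = meas - np.array([r for _, r in samples[t]])              # distance error vs. true distance
        coeffs = np.polyfit(meas, err, 3)                              # cubic polynomial fit
        lut[i] = np.polyval(coeffs, grid)                              # sample the fitted curve on the fixed grid
    return times, lut
```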
(2c) Generate the FPPN lookup table. First average the m depth images acquired in step (1d), then apply global deviation compensation and wiggling correction to the averaged depth image using the global deviation value and wiggling lookup table generated in steps (2a) and (2b). Because the measurement target is a white plane, according to the pinhole camera model the true distance value of every pixel of the depth map can be calculated from the true distance value of the centre pixel, as follows:
Base(r,c) = D(x_center, y_center) * sqrt(f^2 + l_pixel^2 * ((c - x_center)^2 + (r - y_center)^2)) / f (Ⅱ)
where Base(r,c) is the true distance value at pixel coordinate (r,c), (x_center, y_center) is the pixel coordinate of the centre point of the depth image, D(x_center, y_center) is the true distance value of the centre pixel, l_pixel is the pixel size of the photosensitive element of the TOF camera, and f is the focal length of the TOF camera; all of these variables are in mm.
The FPPN offset values for all pixels are calculated as follows:
D_FPPN_offset(r,c) = D_calculated(r,c) - Base(r,c) (Ⅲ)
where D_FPPN_offset(r,c) is the FPPN deviation value at pixel coordinate (r,c), D_calculated(r,c) is the distance value at pixel coordinate (r,c) after global deviation compensation and wiggling correction, and Base(r,c) is the true distance value of the plane at pixel coordinate (r,c). Store the calculated FPPN deviation value of every pixel into a file in pixel order to obtain the FPPN lookup table file.
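A sketch of this FPPN table generation under the pinhole-model assumption of formula (Ⅱ); the centre-pixel convention, the array shapes and the `wiggling_correct` callable are assumptions for illustration.

```python
import numpy as np

def build_fppn_lut(depth_stack_mm, true_center_mm, pixel_size_mm, focal_mm,
                   p_global_offset, wiggling_correct):
    """Per-pixel FPPN deviation D_FPPN_offset(r, c) per formula (III).
    depth_stack_mm: (m, H, W) depth frames of the white plane from step (1d);
    wiggling_correct: callable applying the wiggling lookup table to a depth image."""
    mean_depth = depth_stack_mm.mean(axis=0)                      # average the m frames
    corrected = wiggling_correct(mean_depth - p_global_offset)    # global offset + wiggling correction
    h, w = corrected.shape
    r, c = np.mgrid[0:h, 0:w]
    y_center, x_center = (h - 1) / 2.0, (w - 1) / 2.0             # image centre (assumed pixel indexing)
    radial_mm = pixel_size_mm * np.hypot(r - y_center, c - x_center)
    base = true_center_mm * np.sqrt(focal_mm**2 + radial_mm**2) / focal_mm  # plane via pinhole model, formula (II)
    return corrected - base                                       # D_FPPN_offset(r, c)
```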
Preferably, the step 3 of correcting the TOF camera comprises the following steps:
(3a) Preprocessing. Apply global deviation compensation to the measured distance value of every pixel of the acquired depth image, calculated as:
D_preprocessed(r,c) = D_rawdistance(r,c) - P_global_offset (Ⅳ)
where D_preprocessed(r,c) is the preprocessed distance value at pixel coordinate (r,c), D_rawdistance(r,c) is the measured distance value at pixel coordinate (r,c), and P_global_offset is the global deviation value.
(3b) Wiggling correction. Using the wiggling lookup table file generated in step (2b), first judge whether the integration time set on the TOF camera is present in the wiggling lookup table; if so, judge whether the distance value obtained after the preprocessing of step (3a) is present in the table, and if it is, the distance error corresponding to that preprocessed distance value at that integration time can be read directly from the table. Otherwise, if the set integration time or the preprocessed distance value of step (3a) is not exactly contained in the table, take the neighbouring integration times and distance values in the table and perform linear interpolation to obtain the distance error value corresponding to the preprocessed distance value at that integration time. The formula for the wiggling correction is:
D_wiggling(r,c) = D_preprocessed(r,c) - D_wiggling_offset(r,c) (Ⅴ)
where (r,c) is the pixel coordinate, and D_wiggling(r,c), D_preprocessed(r,c) and D_wiggling_offset(r,c) are respectively the distance value after wiggling correction, the distance value after preprocessing, and the wiggling deviation value obtained at that pixel coordinate.
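A sketch of the table lookup with linear interpolation described above; the table layout follows the earlier construction sketch, and the function and parameter names are illustrative rather than part of the patent.

```python
import numpy as np

def wiggling_offset(d_pre_mm, t_us, times, lut, d_min_mm, step_mm):
    """Wiggling deviation D_wiggling_offset for a preprocessed distance d_pre_mm at
    integration time t_us, linearly interpolating between neighbouring table entries
    when the exact integration time or distance is not in the table (step (3b)).
    times: sorted integration times (rows of lut); lut: error table from step (2b)."""
    grid = d_min_mm + step_mm * np.arange(lut.shape[1])
    times = np.asarray(times, dtype=float)
    i = int(np.clip(np.searchsorted(times, t_us), 1, len(times) - 1))
    t0, t1 = times[i - 1], times[i]
    e0 = np.interp(d_pre_mm, grid, lut[i - 1])    # error at the lower neighbouring integration time
    e1 = np.interp(d_pre_mm, grid, lut[i])        # error at the upper neighbouring integration time
    w = 0.0 if t1 == t0 else (t_us - t0) / (t1 - t0)
    return (1.0 - w) * e0 + w * e1                # linear interpolation over integration time
```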
(3c) FPPN correction. Using the FPPN lookup table generated in step (2c), apply FPPN correction to the distance of every pixel after the wiggling correction of step (3b), calculated as:
D_out(r,c) = D_wiggling(r,c) - D_FPPN_offset(r,c) (Ⅵ)
where (r,c) is the pixel coordinate, and D_out(r,c), D_wiggling(r,c) and D_FPPN_offset(r,c) are respectively the distance value after FPPN correction, the distance value after wiggling correction, and the FPPN deviation value at that pixel coordinate. Finally, the corrected distance value image is obtained.
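Putting the three stages together, a correction pipeline for one depth frame might look as follows; it reuses the `wiggling_offset` sketch above and assumes all inputs are NumPy arrays in millimetres.

```python
def correct_depth(d_raw_mm, t_us, p_global_offset, times, wig_lut,
                  d_min_mm, step_mm, fppn_lut):
    """Apply the three correction stages of step 3 to one depth image (formulas (IV)-(VI)).
    d_raw_mm: (H, W) raw measured distances; fppn_lut: per-pixel FPPN offsets from step (2c)."""
    d_pre = d_raw_mm - p_global_offset                              # (IV) preprocessing
    d_wig = d_pre - wiggling_offset(d_pre, t_us, times, wig_lut,    # (V) wiggling correction,
                                    d_min_mm, step_mm)              #     reusing the lookup sketched above
    return d_wig - fppn_lut                                         # (VI) FPPN correction
```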
Beneficial effects:
The invention provides a simple calibration method for TOF cameras. Compared with other methods, the principle is simple and the operation is straightforward; the method ensures high measurement accuracy of the TOF camera and meets the speed requirement of the correction process. Extensive experimental results show that the method is effective and well suited to engineering applications.
Drawings
FIG. 1 is a cubic polynomial interpolation fit of measured distance versus error value at an integration time of 500 us according to the present invention.
FIG. 2 is a comparison of the reference plane and the wiggling-corrected measurement plane of the present invention.
FIG. 3 is a flow chart of the calibration process of the present invention.
Detailed Description
As shown in FIGS. 1-3, a rapid calibration and correction method is characterized in that it comprises the following steps:
Step 1: collecting relevant data of a TOF camera at a fixed modulation frequency of 40 MHz, the relevant data comprising depth-value images, integration time values and true distance values;
Step 2: processing the data acquired in step 1 to generate the relevant calibration parameters, including a global deviation value, a wiggling lookup table and an FPPN lookup table;
Step 3: performing distance correction on the TOF camera according to the calibration parameters generated in step 2, the distance correction comprising preprocessing, wiggling correction and FPPN correction.
The resolution of the TOF camera used in step 1 is 240 × 320;
the step 1 of collecting TOF camera related data comprises the following steps:
(1a) The TOF camera integration time is set to the minimum integration time of 50 us.
(1b) Fix the integration time of the TOF camera; the measured target object is a white plane. Select measuring distances at equal intervals of 10 cm up to the maximum measuring distance of 3750 mm at 40 MHz. Following this procedure, record the measured distance value of the pixel centre point 100 times at each test distance.
(1c) Change the integration time of the TOF camera and repeat step (1b); the integration time is selected at equal intervals of 50 us up to the maximum integration time of 600 us.
(1d) Extensive experiments show that FPPN has no obvious relation to integration time or distance. Therefore, a suitable actual distance of 40 cm and a suitable integration time of 500 us are selected and 50 depth images are recorded. The resulting acquisition schedule is sketched below.
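The acquisition schedule just described might be organised as in the following sketch; the camera interface is a hypothetical placeholder and the starting test distance is an assumption, since the patent only states the 10 cm spacing and the 3750 mm maximum.

```python
import numpy as np

# Acquisition schedule sketch for steps (1a)-(1d) with the values of this embodiment.
# `camera` and its methods (set_integration_time_us, read_center_distance_mm,
# read_depth_image) are hypothetical placeholders, not an actual TOF camera API;
# the white plane is assumed to be repositioned manually at each test distance.
INTEGRATION_TIMES_US = range(50, 601, 50)     # 50 us steps up to the 600 us maximum
TEST_DISTANCES_MM = range(400, 3751, 100)     # 10 cm steps up to 3750 mm at 40 MHz (start value assumed)
N_REPEATS, M_FPPN_FRAMES = 100, 50

def acquire(camera):
    samples = {}                                                   # data for steps (1b)/(1c)
    for t_us in INTEGRATION_TIMES_US:
        camera.set_integration_time_us(t_us)
        samples[t_us] = {d: [camera.read_center_distance_mm() for _ in range(N_REPEATS)]
                         for d in TEST_DISTANCES_MM}
    camera.set_integration_time_us(500)                            # step (1d): 500 us, plane at 400 mm
    fppn_frames = np.stack([camera.read_depth_image() for _ in range(M_FPPN_FRAMES)])
    return samples, fppn_frames
```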
Preferably, the step 2 of performing a processing operation on the data acquired in the step 1 and generating the relevant calibration parameters includes the following steps:
(2a) Calculate the global deviation value. Owing to the electrical delay caused by the illumination driver circuit and the electro-optical conversion, there is a fixed offset at each modulation frequency. Normalising the measured data by the global deviation makes the data easier to process. Average the measured distance values obtained in step (1b) and step (1c). Select a suitable test distance at a suitable integration time, and subtract the true distance value from the mean of the measured distance values to obtain the global deviation value, calculated as:
P_global_offset = M_rawdistance - C_realdistance (Ⅰ)
where P_global_offset is the global deviation value, M_rawdistance is the mean of the measured distance values, and C_realdistance is the true distance value. Here an integration time of 500 us and a test distance of 400 mm were chosen.
(2b) Generate the wiggling lookup table. For every integration time, subtract the global deviation value from step (2a) from the average measured distance values obtained in steps (1b) and (1c), then subtract the true distance value to obtain the distance error of each measuring point at that integration time. For a given integration time, fit the error data with a cubic polynomial to obtain the distribution curve of the distance error with respect to the measured distance at that integration time. As shown in FIG. 1, at an integration time of 500 us the error data are fitted with a cubic polynomial to obtain part of the distribution curve of the distance error with respect to the measured distance.
According to the distribution curve, within the measuring range of the TOF camera and starting from the minimum measured distance value, obtain the distance error corresponding to the measured distance at intervals of 5 mm up to the maximum measured distance value, and store the distance error for each measured distance value sequentially by column. Perform this operation for all integration times, storing the data for each integration time as one row. Save all of the resulting data into a file to obtain the wiggling lookup table file.
(2c) Generate the FPPN lookup table. First average the 50 depth images acquired in step (1d), then apply global deviation compensation and wiggling correction to the averaged depth image using the global deviation value and wiggling lookup table generated in steps (2a) and (2b). Because the measurement target is a white plane, according to the pinhole camera model the true distance value of every pixel of the depth map can be calculated from the true distance value of the centre pixel, as follows:
Base(r,c) = D(x_center, y_center) * sqrt(f^2 + l_pixel^2 * ((c - x_center)^2 + (r - y_center)^2)) / f (Ⅱ)
where Base(r,c) is the true distance value at pixel coordinate (r,c), (x_center, y_center) is the pixel coordinate of the centre point of the depth image, D(x_center, y_center) is the true distance value of the centre pixel, l_pixel is the pixel size of the photosensitive element of the TOF camera, and f is the focal length of the TOF camera, in mm.
The FPPN offset values for all pixels are calculated as follows:
D_FPPN_offset(r,c) = D_calculated(r,c) - Base(r,c) (Ⅲ)
where D_FPPN_offset(r,c) is the FPPN deviation value at pixel coordinate (r,c), D_calculated(r,c) is the distance value at pixel coordinate (r,c) after global deviation compensation and wiggling correction, and Base(r,c) is the true distance value of the plane at pixel coordinate (r,c). As shown in FIG. 2, at an integration time of 500 us and a true distance of 625 mm, the true distance plane of the white wall target is compared with the distance-value plane after global deviation compensation and wiggling correction. Store the calculated FPPN deviation value of every pixel into a file in pixel order, thereby obtaining the FPPN lookup table file.
The step 3 of correcting the TOF camera comprises the following steps:
(3a) Preprocessing. Apply global deviation compensation to the measured distance value of every pixel of the acquired depth image, calculated as:
D_preprocessed(r,c) = D_rawdistance(r,c) - P_global_offset (Ⅳ)
where D_preprocessed(r,c) is the preprocessed distance value at pixel coordinate (r,c), D_rawdistance(r,c) is the measured distance value at pixel coordinate (r,c), and P_global_offset is the global deviation value.
(3b) Wiggling correction. Using the wiggling lookup table file generated in step (2b), first judge whether the integration time set on the TOF camera is present in the wiggling lookup table; if so, judge whether the distance value obtained after the preprocessing of step (3a) is present in the table, and if it is, the distance error value corresponding to that preprocessed distance value at that integration time can be read directly from the table. Otherwise, if the set integration time or the preprocessed distance value of step (3a) is not exactly contained in the table, take the neighbouring integration times and distance values in the table and perform linear interpolation to obtain the distance error value corresponding to the preprocessed distance value at that integration time. The formula for the wiggling correction is:
D_wiggling(r,c) = D_preprocessed(r,c) - D_wiggling_offset(r,c) (Ⅴ)
where (r,c) is the pixel coordinate, and D_wiggling(r,c), D_preprocessed(r,c) and D_wiggling_offset(r,c) are respectively the distance value after wiggling correction, the distance value after preprocessing, and the wiggling deviation value obtained at that pixel coordinate.
(3c) FPPN correction. Using the FPPN lookup table generated in step (2c), apply FPPN correction to the distance of every pixel after the wiggling correction of step (3b), calculated as:
D_out(r,c) = D_wiggling(r,c) - D_FPPN_offset(r,c) (Ⅵ)
where (r,c) is the pixel coordinate, and D_out(r,c), D_wiggling(r,c) and D_FPPN_offset(r,c) are respectively the distance value after FPPN correction, the distance value after wiggling correction, and the FPPN deviation value at that pixel coordinate. The corrected distance value image is thus obtained. The overall correction flow is shown in FIG. 3.
The foregoing shows and describes the general principles, principal features and advantages of the invention. Those skilled in the art will understand that the invention is not limited to the embodiments described above, which merely illustrate its principles; various changes and modifications may be made without departing from the spirit and scope of the invention, which is defined by the appended claims and their equivalents.

Claims (5)

1. A rapid calibration and correction method, characterized in that it comprises the following steps:
Step 1: collecting relevant data of a TOF camera at a fixed frequency, the relevant data comprising depth-value images, integration time values and true distance values;
Step 2: processing the data acquired in step 1 to generate the relevant calibration parameters, including calculating a global deviation value, generating a wiggling lookup table and generating an FPPN lookup table;
Step 3: performing distance correction on the TOF camera according to the calibration parameters generated in step 2, by applying global deviation compensation preprocessing, wiggling correction and FPPN correction to the measured distance value of every pixel of the acquired depth image;
the step 2 of processing the data acquired in step 1 to generate the relevant calibration parameters comprises the following steps:
(2a) calculating the global deviation value: owing to the electrical delay caused by the illumination driver circuit and the electro-optical conversion there is a fixed offset at each modulation frequency, and normalising the measured data by the global deviation value makes the data easier to process; averaging the measured distance values obtained in step (1b) and step (1c); selecting a suitable test distance at a suitable integration time, and subtracting the true distance value from the mean of the measured distance values to obtain the global deviation value, calculated as:
P_global_offset = M_rawdistance - C_realdistance (Ⅰ)
where P_global_offset is the global deviation value, M_rawdistance is the mean of the measured distance values, and C_realdistance is the true distance value;
(2b) generating the wiggling lookup table: for every integration time, subtracting the global deviation value of step (2a) from the average measured distance values obtained in steps (1b) and (1c), then subtracting the true distance value to obtain the distance error of the measuring point at that integration time; for a given integration time, fitting the error data with a cubic polynomial to obtain the distribution curve of the distance error with respect to the measured distance at that integration time; according to the distribution curve, within the measuring range of the TOF camera and starting from the minimum measured distance value, obtaining the distance error corresponding to the measured distance at intervals of x mm up to the maximum measured distance value, storing the distance error for each measured distance value sequentially by column, and, for all integration times, storing the data for each integration time as one row; saving all of the resulting data into a file to obtain the wiggling lookup table file;
(2c) generating the FPPN lookup table: first averaging the m depth images acquired in step (1d), then applying global deviation compensation and wiggling correction to the averaged depth image using the global deviation value and wiggling lookup table generated in steps (2a) and (2b); because the measurement target is a white plane, according to the pinhole camera model the true distance value of every pixel of the depth map can be calculated from the true distance value of the centre pixel, as follows:
Base(r,c) = D(x_center, y_center) * sqrt(f^2 + l_pixel^2 * ((c - x_center)^2 + (r - y_center)^2)) / f (Ⅱ)
where Base(r,c) is the true distance value at pixel coordinate (r,c), (x_center, y_center) is the pixel coordinate of the centre point of the depth image, D(x_center, y_center) is the true distance value of the centre pixel, l_pixel is the pixel size of the photosensitive element of the TOF camera, f is the focal length of the TOF camera, and all of these variables are in mm;
the FPPN offset values for all pixels are calculated as follows:
D_FPPN_offset(r,c) = D_calculated(r,c) - Base(r,c) (Ⅲ)
where D_FPPN_offset(r,c) is the FPPN deviation value at pixel coordinate (r,c), D_calculated(r,c) is the distance value at pixel coordinate (r,c) after global deviation compensation and wiggling correction, and Base(r,c) is the true distance value of the plane at pixel coordinate (r,c); storing the calculated FPPN deviation value of every pixel into a file in pixel order to obtain the FPPN lookup table file.
2. The method according to claim 1, wherein the TOF camera resolution used in step 1 is 240 × 320.
3. The method for rapid calibration and correction according to claim 1, wherein the step 1 of collecting TOF camera related data comprises the following steps:
(1a) setting the minimum integration time of the TOF camera;
(1b) fixing the integration time of the TOF camera, the measured target object being a white plane; selecting measuring distances at equal intervals up to the maximum measuring distance, and recording the measured distance value of the pixel centre point n times at each measuring distance;
(1c) changing the integration time of the TOF camera and repeating step (1b), the integration time being selected at equal intervals up to the maximum integration time of the TOF camera;
(1d) since FPPN has no obvious relation to integration time or distance, selecting a suitable distance and a suitable integration time and recording m depth images.
4. The method for rapid calibration and correction according to claim 1, wherein the step 3 of correcting the TOF camera comprises the steps of:
(3a) preprocessing: applying global deviation compensation to the measured distance value of every pixel of the acquired depth image, calculated as:
D_preprocessed(r,c) = D_rawdistance(r,c) - P_global_offset (Ⅳ)
where D_preprocessed(r,c) is the preprocessed distance value at pixel coordinate (r,c), D_rawdistance(r,c) is the measured distance value at pixel coordinate (r,c), and P_global_offset is the global deviation value;
(3b) wiggling correction: according to the wiggling lookup table file generated in step (2b), first judging whether the integration time set on the TOF camera is present in the wiggling lookup table; if so, judging whether the distance value obtained after the preprocessing of step (3a) is present in the table, and if it is, reading from the table the distance error value corresponding to that preprocessed distance value at that integration time; otherwise, if the set integration time or the preprocessed distance value of step (3a) is not exactly contained in the table, taking the neighbouring integration times and distance values in the table and performing linear interpolation to obtain the distance error value corresponding to the preprocessed distance value at that integration time; the formula for the wiggling correction being:
D_wiggling(r,c) = D_preprocessed(r,c) - D_wiggling_offset(r,c) (Ⅴ)
where (r,c) is the pixel coordinate, and D_wiggling(r,c), D_preprocessed(r,c) and D_wiggling_offset(r,c) are respectively the distance value after wiggling correction, the distance value after preprocessing, and the wiggling deviation value obtained at that pixel coordinate;
(3c) FPPN correction: according to the FPPN lookup table generated in step (2c), applying FPPN correction to the distance of every pixel after the wiggling correction of step (3b), calculated as:
D_out(r,c) = D_wiggling(r,c) - D_FPPN_offset(r,c) (Ⅵ)
where (r,c) is the pixel coordinate, and D_out(r,c), D_wiggling(r,c) and D_FPPN_offset(r,c) are respectively the distance value after FPPN correction, the distance value after wiggling correction, and the FPPN deviation value at that pixel coordinate; finally, the corrected distance value image is obtained.
5. A TOF camera, characterized in that it performs the rapid calibration and correction method according to any one of claims 1-4.
CN201910068610.8A 2019-01-24 2019-01-24 Rapid calibration and correction method and TOF camera thereof Active CN109816735B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910068610.8A CN109816735B (en) 2019-01-24 2019-01-24 Rapid calibration and correction method and TOF camera thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910068610.8A CN109816735B (en) 2019-01-24 2019-01-24 Rapid calibration and correction method and TOF camera thereof

Publications (2)

Publication Number Publication Date
CN109816735A CN109816735A (en) 2019-05-28
CN109816735B true CN109816735B (en) 2022-10-21

Family

ID=66603193

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910068610.8A Active CN109816735B (en) 2019-01-24 2019-01-24 Rapid calibration and correction method and TOF camera thereof

Country Status (1)

Country Link
CN (1) CN109816735B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110161484B (en) * 2019-06-12 2022-05-27 京东方科技集团股份有限公司 Distance compensation lookup table establishing method and device and distance compensation method and device
CN110335320B (en) * 2019-09-02 2020-04-28 常州天眼星图光电科技有限公司 Ground automatic calibration method for integration time of remote sensing camera
CN110794422B (en) * 2019-10-08 2022-03-29 歌尔光学科技有限公司 Robot data acquisition system and method with TOF imaging module
CN111028294B (en) * 2019-10-20 2024-01-16 奥比中光科技集团股份有限公司 Multi-distance calibration method and system based on depth camera
CN111508011A (en) * 2020-04-16 2020-08-07 北京深测科技有限公司 Depth data calibration method of flight time camera
CN111562562B (en) * 2020-04-28 2023-04-14 重庆市天实精工科技有限公司 3D imaging module calibration method based on TOF
CN111624580B (en) * 2020-05-07 2023-08-18 Oppo广东移动通信有限公司 Correction method, correction device and correction system for flight time module
CN112532970B (en) * 2020-10-26 2022-03-04 奥比中光科技集团股份有限公司 Tap non-uniformity correction method and device of multi-tap pixel sensor and TOF camera
CN112346034B (en) * 2020-11-12 2023-10-27 北京理工大学 Three-dimensional solid-state area array laser radar combined calibration method
CN114636992A (en) * 2020-12-15 2022-06-17 深圳市灵明光子科技有限公司 Camera calibration method, camera and computer-readable storage medium
CN113192144B (en) * 2021-04-22 2023-04-14 上海炬佑智能科技有限公司 ToF module parameter correction method, toF device and electronic equipment
CN113504530A (en) * 2021-05-08 2021-10-15 奥比中光科技集团股份有限公司 Depth camera safety control method and device and ToF depth camera
CN113760539A (en) * 2021-07-29 2021-12-07 珠海视熙科技有限公司 TOF camera depth data processing method, terminal and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106815867A (en) * 2015-11-30 2017-06-09 宁波舜宇光电信息有限公司 TOF camera is demarcated and correction system and its apparatus and method for
CN109035345A (en) * 2018-07-20 2018-12-18 齐鲁工业大学 The TOF camera range correction method returned based on Gaussian process

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060241371A1 (en) * 2005-02-08 2006-10-26 Canesta, Inc. Method and system to correct motion blur in time-of-flight sensor systems

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106815867A (en) * 2015-11-30 2017-06-09 宁波舜宇光电信息有限公司 TOF camera is demarcated and correction system and its apparatus and method for
CN109035345A (en) * 2018-07-20 2018-12-18 齐鲁工业大学 The TOF camera range correction method returned based on Gaussian process

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ToF depth camera measurement error correction model; Wang Le et al.; Journal of System Simulation; 2017-10-08 (No. 10); full text *
Geometric correction of airborne electro-optical platform images based on camera interior and exterior parameters; Li Tiecheng et al.; Piezoelectrics & Acoustooptics; 2016-02-15 (No. 01); full text *

Also Published As

Publication number Publication date
CN109816735A (en) 2019-05-28

Similar Documents

Publication Publication Date Title
CN109816735B (en) Rapid calibration and correction method and TOF camera thereof
US8269833B2 (en) Method and system for measuring vehicle speed based on movement of video camera
JP4111592B2 (en) 3D input device
EP3640892B1 (en) Image calibration method and device applied to three-dimensional camera
US5193120A (en) Machine vision three dimensional profiling system
CN105651203B (en) A kind of high dynamic range 3 D measuring method of adaptive striped brightness
US5280542A (en) XYZ coordinates measuring system
US5909285A (en) Three dimensional inspection system
US8265343B2 (en) Apparatus, method and program for distance measurement
US20150279016A1 (en) Image processing method and apparatus for calibrating depth of depth sensor
CN106815867B (en) TOF camera calibration and correction system, and equipment and method thereof
CN111508011A (en) Depth data calibration method of flight time camera
CN110879398A (en) Time-of-flight camera and method for calibrating a time-of-flight camera
CN108921797B (en) Method for calibrating distorted image
CN107197222B (en) Method and device for generating correction information of projection equipment
JP2009180689A (en) Three-dimensional shape measuring apparatus
KR20200118073A (en) System and method for dynamic three-dimensional calibration
CN108010071B (en) System and method for measuring brightness distribution by using 3D depth measurement
JPH102711A (en) Three-dimensional measuring device
US20090262369A1 (en) Apparatus and method for measuring distances
CN112419427A (en) Method for improving time-of-flight camera accuracy
JPS6221011A (en) Measuring apparatus by light cutting method
JP5776212B2 (en) Image processing apparatus, method, program, and recording medium
CN111624580A (en) Correction method, correction device and correction system of flight time module
CN112346034B (en) Three-dimensional solid-state area array laser radar combined calibration method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant