LU101024B1 - Method for Corrected Depth Measurement with a Time-Of-Flight Camera Using Amplitude-Modulated Continuous Light - Google Patents


Info

Publication number
LU101024B1
Authority
LU
Luxembourg
Prior art keywords
offset
function
pixel
camera
calibration
Prior art date
Application number
LU101024A
Other languages
German (de)
Inventor
Bruno Mirbach
Harald Clos
Thomas Solignac
Martin Boguslawski
Original Assignee
Iee Sa
Priority date
Filing date
Publication date
Application filed by Iee Sa filed Critical Iee Sa
Priority to LU101024A priority Critical patent/LU101024B1/en
Priority to US17/311,225 priority patent/US11263765B2/en
Priority to DE112019006048.1T priority patent/DE112019006048T5/en
Priority to PCT/EP2019/083538 priority patent/WO2020115068A1/en
Priority to CN201980080091.5A priority patent/CN113167899B/en
Application granted granted Critical
Publication of LU101024B1 publication Critical patent/LU101024B1/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/491 Details of non-pulse systems
    • G01S7/4912 Receivers
    • G01S7/4915 Time delay measurement, e.g. operational details for pixel components; Phase measurement
    • G01S7/497 Means for monitoring or calibrating
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/894 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

The invention relates to a method for corrected depth measurement with a time-of-flight camera (1) using amplitude-modulated continuous light. In order to enable an accurate and efficient depth measurement with a time-of-flight camera, the invention provides that the method comprises, for each of a plurality of pixels (3) of a sensor array (2) of the camera (1): acquiring (510) with the camera (1) a raw depth value r_m for the pixel (3); and automatically calculating (520) a ground truth value r_t according to: r_t = g(r_m − c_m) + c_t, wherein c_m is a pixel-dependent first offset, g is a pixel-independent first function and c_t is a pixel-independent second offset.

Description

Method for Corrected Depth Measurement with a Time-Of-Flight Camera Using Amplitude-Modulated Continuous Light

Technical field
[0001] The invention relates to a method for corrected depth measurement with a time-of-flight camera using amplitude-modulated continuous light.
Background of the Invention
[0002] Time-of-flight cameras are used to provide pixelwise depth information in an image of a three-dimensional object or scenery. The camera comprises a (normally two-dimensional) sensor array with a plurality of pixels. Each pixel provides information from which the depth (i.e. the distance from the camera) of a recorded point in space can be derived. Apart from ToF cameras using light pulses, another type of ToF camera uses amplitude-modulated continuous light. In other words, the camera emits a continuous field of amplitude-modulated light, which is reflected from objects in the field of view of the camera. The reflected light is received by the individual pixels. Due to the amplitude modulation, the phase of the received light can be deduced from the sampled amplitudes, and from the relative phase difference the time of flight, and thus the distance to the reflecting object, can be determined. According to a well-known method, lock-in pixels are employed, where the readout of each pixel is synchronised to the modulation frequency of the light. In particular, the readout frequency of each pixel can be four times the modulation frequency. This is also referred to as the 4-tap method.
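The 4-tap readout described above can be sketched as follows. The tap ordering and the arctangent convention vary between sensors, so the function name, its signature and the chosen convention are illustrative assumptions, not the patented implementation.

```python
import math

def four_tap_depth(a0: float, a1: float, a2: float, a3: float,
                   f_mod: float, c: float = 299_792_458.0) -> float:
    """Raw depth r_m from four amplitude samples (taps) per modulation period.

    Assumes the common convention phi = atan2(a0 - a2, a1 - a3); real sensors
    may order their taps differently. f_mod is the modulation frequency in Hz.
    """
    phi = math.atan2(a0 - a2, a1 - a3) % (2.0 * math.pi)
    # Light travels to the target and back, so one full phase cycle spans
    # half a modulation wavelength: r = phi * c / (4 * pi * f_mod).
    return phi * c / (4.0 * math.pi * f_mod)
```

With a modulation frequency of 20 MHz, the unambiguous range c / (2 f_mod) is about 7.5 m.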
[0003] Depth measurement with a ToF camera using amplitude-modulated continuous light (AMCL) and lock-in pixels is well described by the homodyne principle, in which the modulation frequencies of the light and of the pixel clock are equal. The correlation signal of both functions is relevant for the measurement process and is the subject of the investigation. However, both signals, the light modulation and the pixel exposure process, are usually non-harmonic (an intermediate state between a sinusoidal and a square function), so the correlation signal is non-harmonic as well.
[0004] Demodulation of a continuously modulated, non-harmonic signal by sampling with a frequency four times higher than the fundamental leads to aliasing effects. In consequence, higher harmonics contaminate the reconstructed signal. Hence, compared to the original signal, periodic deviations of the reconstruction occur. The "fundamental" frequency of the error is therefore at least four times the initial modulation frequency.
[0005] Numerous approaches have been developed to correct for the mentioned systematic depth error, the so-called wiggling error. However, approaches so far also include camera-, integration-time- or even pixel-wise corrections. This implies that every sample must be individually calibrated in an intricate, highly time-consuming process, which is not appropriate in the production of an industrial time-of-flight camera. In addition, the resulting calibration parameters demand large memory space.
Object of the invention
[0006] It is thus an object of the present invention to enable an accurate and efficient depth measurement with a time-of-flight camera.
[0007] This problem is solved by a method according to claim 1.
General Description of the Invention
[0008] The invention provides a method for corrected depth measurement with a time-of-flight camera using amplitude-modulated continuous light. Depth measurement herein of course refers to measuring the distance from the camera, so that a 3D image can be obtained. The principle of a time-of-flight (ToF) camera using amplitude-modulated continuous light is well known as such and has been explained above. While the term "light" may refer to visible light, it will be understood that infrared light or ultraviolet light could be employed as well.
[0009] The method comprises the following two steps for each of a plurality of pixels of a sensor array of the camera. The pixels may in particular be lock-in pixels. The sensor array comprises a plurality (normally between several hundred and several thousand) of pixels, usually disposed in a two-dimensional pattern, although a one-dimensional arrangement would be conceivable, too.
[0010] In a first step, a raw depth value r_m for the pixel is acquired with the camera. This raw depth value r_m is generally affected by measurement errors and does not represent the actual depth exactly.
[0011] In a second step, a ground truth value r_t is automatically calculated according to the following equation:

r_t = g(r_m − c_m) + c_t, (eq. 1)

wherein c_m is a pixel-dependent first offset, g is a pixel-independent first function and c_t is a pixel-independent second offset. The automatic calculation is preferably performed by the camera itself. For this purpose, the camera may comprise a volatile memory and/or a non-volatile memory and a processing unit. The processing unit may at least partially be software-implemented. Although the ground truth value r_t may still differ (minimally) from the actual depth, it is generally a sufficiently accurate approximation.
[0012] While the sequence of the first and second step is fixed for a specific pixel, there are various possibilities in which order the steps may be performed for different pixels. For example, the first step could be performed sequentially for all pixels in the sensor array, the raw depth values r_m could be stored in a (volatile) memory and the ground truth values r_t could be calculated subsequently for all pixels. Another possibility would be to calculate the ground truth value r_t for each pixel before the raw depth value r_m for the next pixel is acquired. Although this approach would help to minimise the memory space, it could slow down the recording of a 3D image and thus could be unsuitable if rapid changes in the 3D image occur. Another possibility would be that the first and second step are performed in parallel, i.e. the ground truth value r_t for at least one pixel can be calculated while the raw depth value r_m for at least one pixel is being acquired.
[0013] When looking at eq. 1, it becomes clear that calculation of the ground truth value r_t is relatively simple. It requires the following three steps: 1) subtracting the pixel-dependent first offset c_m; 2) applying the pixel-independent first function g; and 3) adding the pixel-independent second offset c_t.
[0014] These calculations can be done in real time, e.g. after acquisition of each depth image frame. The only necessary input is the raw depth value of each pixel of the sensor array. It should be noted that the first and third steps are a simple subtraction and addition, respectively. Also, the third step is an addition of a single value that is the same for all pixels. Likewise, the second step requires application of a first function g that is the same for all pixels. Therefore, the only step that requires memory space proportional to the size of the sensor array is the first step, which requires memory for the first offset c_m of each individual pixel. However, bearing in mind that for each pixel only one or a few bytes are necessary and the number of pixels is typically in the range of several tens of thousands (or even fewer), the total required memory for the first offset c_m is still comparatively small.
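A minimal per-frame correction following the three steps above might look like this. This is a sketch using NumPy with a nearest-neighbour look-up; the function name and the LUT layout are assumptions, not part of the patent.

```python
import numpy as np

def correct_frame(raw: np.ndarray, c_m: np.ndarray,
                  g_lut: np.ndarray, lut_step: float, c_t: float) -> np.ndarray:
    """Apply r_t = g(r_m - c_m) + c_t to a whole depth frame.

    raw, c_m : arrays of raw depth values / first offsets, one entry per pixel.
    g_lut    : sampled first function g, where entry i holds g(i * lut_step).
    """
    shifted = raw - c_m                               # step 1: per-pixel offset
    idx = np.rint(shifted / lut_step).astype(np.int64)
    idx = np.clip(idx, 0, len(g_lut) - 1)             # step 2: pixel-independent g
    return g_lut[idx] + c_t                           # step 3: global offset
```

Only `c_m` scales with the sensor size; `g_lut` and `c_t` are shared by all pixels, which is exactly the memory argument made in the paragraph above.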
[0015] Therefore, the inventive method enables an effective depth correction for a ToF camera without the need for any additional devices, large memory space or extensive processing power.
[0016] Although the invention is not limited to this, acquiring the raw depth value r_m normally comprises determining four amplitude samples at a sampling frequency four times higher than a modulation frequency of the amplitude-modulated continuous light. This may also be referred to as a 4-tap method, wherein the four amplitudes A_0 … A_3, referred to as taps, are the base for the phase retrieval of the modulated light, since the phase φ can be calculated as:

φ = arctan((A_0 − A_2) / (A_1 − A_3)). (eq. 2)
[0017] The correlation function between the light amplitude and the pixel clock is sampled four times per fundamental period with equally delayed sampling points. Thus, the sampling frequency is four times higher than the fundamental modulation frequency. According to the Shannon-Nyquist theorem, aliasing can only occur for harmonics with an order greater than two. Thus, amplitude and phase of the fundamental frequency are distorted by all higher harmonic frequencies, naturally of odd order.
[0018] In some cases, the function values of the first function g could be calculated in real time. In other cases, it may be desirable to save processing effort, or the analytic form of the first function g may even be unknown. For these and other reasons, it is preferred that the first function g is applied by accessing a look-up table representing the first function g. The required memory space for the look-up table can be relatively small. By way of example, the look-up table can be represented by a one-dimensional vector requiring a memory space between several kB and several tens of kB (e.g. with one entry per mm, this would be approx. 15 kB for a modulation frequency of 20 MHz).
[0019] Usually, the analytic form of the first function g is complicated or even unknown. However, for creating the look-up table it is sufficient to know the inverse function. Usually, the look-up table is calculated by applying a second function f, which is the inverse function of the first function g. It is clear that the look-up table represents the second function f as well as the first function g, while a specific entry of the look-up table represents a function value for the first function g and an argument for the second function f, and vice versa. It is understood that eq. 1 can be rewritten as

r_m = c_m + f(r_t − c_t), (eq. 3)

wherein the second function f resembles the wiggling model function with the ground truth value r_t as the input, shifted by the second offset c_t. From the theoretical standpoint, eq. 1 is actually derived from eq. 3. Physically, the first and second offsets c_m, c_t may account for temporal delays of the sensor array exposure or the LED control relative to the internal device clock. As mentioned above, the second offset c_t is pixel-independent but may optionally be temperature-dependent, while the first offset c_m is pixel-dependent but temperature-independent. It can be interpreted as a depth non-uniformity (DNU). The separation of this non-uniformity c_m and of the temperature-dependent second offset c_t results in a wiggling model function f that is neither pixel-dependent nor temperature-dependent. The same of course applies to the first function g, which is the inverse function of the second function f. This has considerable advantages e.g. during a production end test, as will become clearer below.
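Since only f is known in closed form, the look-up table for g can be built numerically by sampling f and swapping the axes. This is a sketch assuming f is monotonically increasing on the tabulated range, which holds when the wiggling amplitudes are small; the function name is an assumption.

```python
import numpy as np

def build_g_lut(f, r_max: float, step: float) -> np.ndarray:
    """Tabulate g = f^{-1} on a regular grid of spacing `step`.

    Samples r_m = f(r) on [0, r_max), then interpolates the swapped pair
    (f(r), r) onto a regular grid in the r_m domain, so lut[i] ~ g(i * step).
    """
    r = np.arange(0.0, r_max, step)
    fr = f(r)                               # forward samples, monotone in r
    grid = np.arange(0.0, fr[-1], step)     # regular grid in the r_m domain
    return np.interp(grid, fr, r)           # inverse obtained by axis swap
```

`np.interp` requires the sample points `fr` to be increasing, hence the monotonicity assumption on f.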
[0020] The specific form of the second function f is not limited within the scope of the invention. It may depend e.g. on theoretical considerations, the desired accuracy and possible limitations regarding processing power or memory space, although the latter two are usually not relevant. For a second function f that does not consider wiggling errors, an optional offset term and a linear term are sufficient. Including wiggling, oscillatory terms can be included in the second function f. If the above-described 4-tap method is applied, it follows from the Shannon-Nyquist theorem that only aliasing of harmonics with an order greater than two can occur. Mostly it is sufficient to consider aliasing by the third and fifth harmonic order, while higher-order contributions (seventh harmonic order etc.) can be neglected. Therefore, the second function f may comprise a linear term, a third-order harmonic term and a fifth-order harmonic term with respect to the modulation frequency. The third-order harmonic term has a frequency four times higher than the modulation frequency and the fifth-order harmonic term has a frequency eight times higher than the modulation frequency. In this embodiment, the second function f can be written as follows (with r_t − c_t = r):

f(r) = a_0 + r + a_1 cos(k_4 r) + a_2 sin(k_4 r) + a_3 cos(k_8 r) + a_4 sin(k_8 r) + …
     = a_0 + r + b_1 cos(k_4 r + θ_1) + b_2 cos(k_8 r + θ_2) + … (eq. 4)

with

k_n = n · 4π ν_m / c, n = 4, 8,

b_1 = √(a_1² + a_2²), θ_1 = −arctan(a_2 / a_1),

and b_2, θ_2 accordingly for the k_8 terms, with ν_m being the modulation frequency and c the speed of light. The coefficients of the second function f can be found e.g. by linear regression with the raw depth values r_m and the ground truth values r_t as input. This can be done during a calibration that is explained below.
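Because eq. 4 is linear in the coefficients a_0 … a_4 once k_4 and k_8 are fixed, the regression mentioned above reduces to an ordinary least-squares problem. A sketch, with assumed function and variable names:

```python
import numpy as np

C_LIGHT = 299_792_458.0

def fit_wiggling_coeffs(r: np.ndarray, rm: np.ndarray, f_mod: float) -> np.ndarray:
    """Least-squares fit of rm = a0 + r + a1 cos(k4 r) + a2 sin(k4 r)
    + a3 cos(k8 r) + a4 sin(k8 r), with r = r_t - c_t.

    Returns the coefficient vector (a0, a1, a2, a3, a4).
    """
    k4 = 4.0 * 4.0 * np.pi * f_mod / C_LIGHT   # k_n = n * 4*pi*nu_m / c
    k8 = 8.0 * 4.0 * np.pi * f_mod / C_LIGHT
    design = np.column_stack([np.ones_like(r),
                              np.cos(k4 * r), np.sin(k4 * r),
                              np.cos(k8 * r), np.sin(k8 * r)])
    coeffs, *_ = np.linalg.lstsq(design, rm - r, rcond=None)
    return coeffs
```

Subtracting the fixed linear part `r` from `rm` leaves a model that is purely linear in the unknowns, so no iterative fitting is needed.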
[0021] Before the corrected depth measurement, a calibration can be performed for the camera, in which at least one of the first offset c_m, the first function g and the second offset c_t is determined. "Before the corrected depth measurement" is not to be understood as limiting in any way the time interval between the calibration and the corrected depth measurement. Normally, the calibration is performed in the course of the production of the camera or immediately afterwards, while the (first) corrected depth measurement may be initiated by an end-user. However, this corrected depth measurement could also be performed in a testing process of the camera after its production. While it is understood that the first offset c_m, the first function g and the second offset c_t need to be determined somehow, the calibration process for a specific camera, i.e. a specific sample, may only comprise determining one or two of these quantities. "Determining" may refer to defining the respective quantity as well as to obtaining the quantity by calculation and/or measurement.
[0022] The first offset c_m and/or the first function g can usually be assumed to be the same for all cameras of a given type or production series. Therefore, they do not need to be determined for each individual camera. Preferably, the calibration comprises a general calibration, in which the first offset c_m and/or the first function g is determined only once for a plurality of cameras. In other words, the first offset c_m and/or the first function g is determined in the general calibration once using a single camera and afterwards, the results of this general calibration can be used for all cameras that are sufficiently similar, e.g. all cameras of the same production series. It is understood that the concept of this general calibration greatly facilitates the calibration of the remaining cameras and reduces the time required.
[0023] Preferably, the calibration comprises determining the second function f by performing the following steps, which are not necessarily performed in the order in which they are described here:
[0024] A plurality of different depth measurements are performed with the camera, each measurement providing a raw depth value r_m(k) for each of a plurality of pixels in an area of interest, wherein k = 1, …, N is the number of the individual depth measurement. In other words, N different depth measurements are performed, which means that the three-dimensional scenery recorded by the camera is different for each depth measurement from the point of view of the camera. One example of a simple setup would be to position the camera facing a plane surface and to change the distance between the camera and the surface for each depth measurement. A raw depth value r_m(k) is acquired for each of a plurality of pixels in an area of interest. The area of interest may comprise the centre of the sensor array and it may have a square shape. However, it could be positioned off-centre and could have a different, even irregular or non-coherent shape. Preferably, the area of interest corresponds to a portion of the sensor array. In particular, it can be considerably smaller than the sensor array, e.g. it may comprise less than 10% of the sensor array or even less than 1% of the sensor array. It is understood that this greatly facilitates the calibration. For each depth measurement and for each pixel, the raw depth value r_m(k) is stored in a memory of the camera or an external memory.
[0025] Furthermore, a ground truth value r_t(k) is determined for each depth measurement and for each pixel in the area of interest. This ground truth value r_t(k) represents the actual, physical distance between the respective pixel and the point from where the light received by this pixel is reflected. This distance could be measured by any sufficiently accurate means known in the art or it could be deduced from the position of the camera with respect to the recorded object(s). This ground truth value r_t(k) is used as an objective reference for the calibration.
[0026] In another step, possibly before acquiring the raw depth values r_m(k) and/or the ground truth values r_t(k), the second offset c_t is defined. In this context, the second offset c_t may be chosen arbitrarily.
[0027] According to yet another step, for each pixel in the area of interest, a pixel-dependent third function f_n with at least one parameter is defined and the at least one parameter is fitted to the following condition:

r_m(k) = c_m + f_n(r_t(k) − c_t). (eq. 5)

Ideally, the number of depth measurements corresponds to the number of parameters, so that eq. 5 can be fulfilled for all k. If the number of depth measurements is greater than the number of parameters, eq. 5 can normally not be fulfilled for all k and fitting methods known in the art can be applied, e.g. in order to minimise the mean square error. Normally, the number of parameters should be less than or equal to the number of depth measurements. It should be noted that the memory space required for the third functions f_n can be comparatively small because the area of interest is normally only a small portion of the sensor array. The third function f_n for each pixel in the area of interest can be stored either explicitly, e.g. in the form of a look-up table, or by storing the at least one parameter acquired by the fitting procedure.
[0028] When the third functions have been determined, the second function f is determined based on the third functions f_n of a plurality of pixels in the area of interest. Normally, it is determined based on all pixels in the area of interest, although it is conceivable that for some reason some of the pixels could be neglected or discarded. Normally, the second function f and the third functions f_n have a similar form and may only differ by certain parameters. For example, if the second function f has the form given by eq. 4, the third functions f_n are normally chosen to have the same form.
[0029] There are various ways how the second function f could be determined based on the third functions f_n. In general, some kind of averaging can be applied. In particular, the second function f can be determined by averaging the at least one parameter of the third functions f_n over a plurality of pixels in the area of interest. Normally, averaging is performed over all pixels in the area of interest. In general, a specific parameter of the respective third function f_n has different values for different pixels. By taking the average of each parameter over a plurality of pixels, a second function f can be determined that is pixel-independent. It should be borne in mind that the pixelwise, distance-independent deviations from the second function f are not lost, but are incorporated exclusively in the first offset c_m. When reference is made here and in the following to "averaging", this normally refers to the arithmetic mean. However, it could also refer to other types of averaging, for example a weighted average.
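The two steps above, fitting a third function f_n per pixel (eq. 5) and averaging the fitted parameters over the area of interest, can be sketched as follows. The helper `basis` is a hypothetical callable that builds the design matrix for the chosen model terms (e.g. those of eq. 4); all names are assumptions.

```python
import numpy as np

def averaged_model_params(rm: np.ndarray, rt: np.ndarray,
                          c_t: float, basis) -> np.ndarray:
    """Fit the parameters of f_n for each pixel, then average over pixels.

    rm, rt : arrays of shape (n_measurements, n_pixels) with raw and
             ground-truth depths from the calibration measurements.
    basis  : maps a vector r to a design matrix for the model terms beyond
             the fixed linear part, i.e. rm - r = basis(r) @ params.
    """
    params = []
    for p in range(rm.shape[1]):
        r = rt[:, p] - c_t
        coeffs, *_ = np.linalg.lstsq(basis(r), rm[:, p] - r, rcond=None)
        params.append(coeffs)
    return np.mean(params, axis=0)          # arithmetic mean over the pixels
```

The per-pixel deviations discarded by the mean are not lost: as the paragraph notes, they end up in the first offset c_m.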
[0030] There are, however, alternatives to defining pixel-dependent third functions f_n and calculating the second function f based thereon. Namely, the raw depth values r_m(k) and the ground truth values r_t(k) of all pixels in the area of interest can be considered collectively. For example, one could consider a condition similar to eq. 5, in which the sum is taken over all pixels and the pixel-dependent third functions f_n are replaced by the pixel-independent second function f:

Σ_n r_m,n(k) = C_m + Σ_n f(r_t,n(k) − c_t), (eq. 5a)

where n runs over the pixels in the area of interest and C_m is a parameter corresponding to the sum of c_m over all pixels. The second function f comprises at least one parameter which is fitted to fulfil eq. 5a. Of course, eq. 5a could be divided by the number of pixels in the area of interest to get an average, thereby taking an arithmetic mean over all pixels. Apart from taking the arithmetic mean, a different kind of averaging could be performed, e.g. a weighted averaging. Yet another option is to simply fit the parameters of the second function f according to the following condition:

r_m(k) = c_m + f(r_t(k) − c_t), (eq. 5b)

wherein the fitting process is performed taking into account the raw depth values r_m(k) and the ground truth values r_t(k) of all pixels from all depth measurements. Irrespective of whether the second function f is determined directly as described here or via the third functions f_n as described above, the first offset c_m can be determined as described below.
[0031] According to a preferred embodiment, the calibration comprises, for each pixel in the area of interest and each depth measurement, calculating an offset estimate c_m(k) for the first offset c_m according to the following equation:

c_m(k) = r_m(k) − f(r_t(k) − c_t). (eq. 6)

The offset estimate c_m(k) is related to the first offset c_m and may in some cases even be identical. However, the offset estimate c_m(k) is generally different for each depth measurement. In other words, while the first offset c_m only depends on the pixel, the offset estimate c_m(k) also depends on the individual depth measurement.
[0032] While the offset estimates c_m(k) for the individual depth measurements form the basis for determining the first offset c_m, there are various conceivable ways how the first offset c_m could be determined. For instance, one of the offset estimates c_m(k) could be chosen by some criteria to be the first offset c_m, while the other offset estimates are discarded. Preferably, the first offset c_m is determined by averaging the offset estimate c_m(k) over a plurality of depth measurements. Normally, the average is taken over all depth measurements. As mentioned above, this averaging normally refers to taking the arithmetic mean, but it could be a different average.
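The offset estimates of eq. 6 and their averaging over the depth measurements can be written compactly, vectorised over all pixels. A sketch with assumed names:

```python
import numpy as np

def estimate_first_offset(rm: np.ndarray, rt: np.ndarray,
                          c_t: float, f) -> np.ndarray:
    """Per-pixel first offset c_m: eq. 6 evaluated for every measurement k,
    then the arithmetic mean taken along the measurement axis.

    rm, rt : shape (n_measurements, n_pixels); f : the second function.
    """
    estimates = rm - f(rt - c_t)     # c_m(k), one row per depth measurement
    return estimates.mean(axis=0)    # average over all depth measurements
```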
[0033] As mentioned above, the first function g and the first offset c_m can be determined in a general calibration that is valid for a plurality of cameras, e.g. for all cameras of a given type or production series. The second offset c_t, though, is usually specific for a given camera. In one embodiment of the invention, the calibration comprises using the first function g and the first offset c_m determined in a general calibration with one camera and performing an individual calibration for a different camera. In other words, the first function g and the first offset c_m have been determined using one camera and this calibration only has to be performed once, whereafter the first function g and the first offset c_m can be used for all cameras that are sufficiently similar, e.g. all cameras of the same type. These cameras are of course different from the camera used for the general calibration. In the individual calibration, a depth measurement is performed for at least one pixel to obtain a raw depth value r_m. Further, a ground truth value r_t for the at least one pixel is determined. The ground truth value r_t is not determined by the camera, but by some other means, i.e. independently of the camera. Finally, the second offset c_t is calculated according to:

c_t = r_t − g(r_m − c_m). (eq. 7)

It is understood that if the raw depth value r_m and the ground truth value r_t are determined for a plurality of pixels, the above equation generally yields a different second offset c_t for each pixel. In order to obtain a single value as required, averaging may be performed over the second offsets c_t of all pixels.
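The individual calibration of eq. 7, including the pixel average suggested at the end of the paragraph, might be sketched as follows (names are assumptions):

```python
import numpy as np

def calibrate_second_offset(rm: np.ndarray, rt: np.ndarray,
                            c_m: np.ndarray, g) -> float:
    """Second offset c_t = r_t - g(r_m - c_m), averaged over the pixels used,
    so that a single camera-specific value results."""
    return float(np.mean(rt - g(rm - c_m)))
```

Only this single scalar needs to be determined per camera sample; g and c_m come from the general calibration.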
[0034] Depending on the application, it may be sufficient to consider the second offset c_t to be a constant that only depends on the individual camera sample. However, it is sometimes more realistic to assume that the second offset c_t is temperature-dependent. While the actual temperature dependency may be complicated, it may be approximated, at least for a realistic temperature range, e.g. by a linear relation like c_t(T) = c_t(T_0) + b(T − T_0). If necessary, quadratic or higher-order terms could be included. Alternatively, the temperature dependency could be represented by a look-up table.
[0035] If the second offset c_t is considered to be temperature-dependent, the calibration may comprise determining the second offset c_t for a first temperature and, for each of at least one second temperature, performing the following steps (which correspond to the determining of the second offset in the individual calibration). In a first step, a depth measurement is performed for at least one pixel to obtain a raw depth value r_m. In another step, a ground truth value r_t is determined for the at least one pixel. With the raw depth value r_m and the ground truth value r_t determined, the second offset c_t for the respective temperature is calculated according to c_t = r_t − g(r_m − c_m). When these steps have been performed, at least one parameter related to the temperature dependency of the second offset c_t is determined. Each temperature normally yields a different value for the second offset c_t. Each of these values could simply be stored in a look-up table and values for intermediate temperatures could be interpolated. Alternatively, the temperature dependency could be modelled as a function that is e.g. linear, quadratic or of higher order with respect to the temperature. The parameters for the constant, linear, quadratic and other terms in this function can then be fitted according to the respective values determined at the different temperatures.
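For the linear model c_t(T) = c_t(T_0) + b(T − T_0), fitting the parameters to the offsets obtained at several temperatures is a one-line polynomial fit. A sketch; the reference temperature default is an assumption:

```python
import numpy as np

def fit_temperature_model(temps, c_t_values, T0=None):
    """Fit c_t(T) = c_t(T0) + b * (T - T0) to second offsets measured at
    several temperatures. Returns (c_t_at_T0, b); T0 defaults to the first
    calibration temperature."""
    temps = np.asarray(temps, dtype=float)
    if T0 is None:
        T0 = temps[0]
    b, c0 = np.polyfit(temps - T0, np.asarray(c_t_values, dtype=float), 1)
    return c0, b
```

With more than two calibration temperatures the fit also averages out measurement noise; quadratic terms would simply raise the polynomial degree.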
[0036] It is highly preferred that the abovementioned steps are only performed in the general calibration, i.e. only once for a plurality of cameras. In this case, determining the second offset c_t for the first temperature may correspond to defining the second offset as described above. If the second offset c_t is temperature-dependent, the first temperature, for which the second offset c_t is defined, needs to be determined and has to be stored for further reference. If performed as part of the general calibration, the above-described process may also be referred to as a general temperature calibration.
[0037] It is a reasonable approximation to assume that the temperature dependences of all cameras of a certain production series or type differ only by a constant offset, e.g. c_t(T_0). Therefore, if the second offset c_t is temperature-dependent, it is sufficient that the temperature is maintained constant in the individual calibration, and the second offset of the individual calibration is used as c_t(T_0) (or is used to determine c_t(T_0), if the temperature during the individual calibration differs from T_0). Either way, even with a temperature-dependent second offset c_t, a single depth measurement in the individual calibration is sufficient to determine the second offset c_t and thereby the first function g. One simple option would be to set T_0 as room temperature, so that the individual calibration can be performed at room temperature in order to obtain c_t(T_0) in the most simple way.
Brief Description of the Drawings
[0038] Further details and advantages of the present invention will be apparent from the following detailed description of non-limiting embodiments with reference to the attached drawings, wherein:
Fig. 1 is a schematic view of a ToF camera that can be used for the inventive method; and
Fig. 2 is a flowchart illustrating an embodiment of the inventive method.
Description of Preferred Embodiments
[0039] Fig. 1 schematically shows a ToF camera 1 that is adapted for depth measurement using amplitude-modulated continuous light. It comprises a rectangular sensor array 2 with a plurality (e.g. several thousand or several tens of thousands) of pixels 3. Furthermore, it comprises a memory 5 and a processing unit 6. The camera 1 is configured to emit amplitude-modulated continuous light using one or several light emitters, which are not shown here for the sake of simplicity. The light is reflected by a 3D object or scenery in a field of view of the camera 1 and the reflected light is received by the pixels 3 of the sensor array 2. The amplitude of the received light is sampled at a frequency four times higher than the modulation frequency of the light. In other words, four amplitudes A_0, …, A_3, also referred to as taps, are used to retrieve the phase φ of the modulated light, since

φ = atan((A_3 − A_1) / (A_0 − A_2))    (eq. 2)
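As an illustration, the four-tap phase retrieval of eq. 2 and the conversion of a phase to a raw distance can be sketched as follows. This is a non-authoritative sketch: the function names and the 20 MHz example modulation frequency (which gives the roughly 7.5 m unambiguity range implied by the k = 0, …, 7500 look-up table grid mentioned below) are assumptions, not details taken from the patent.

```python
import math

def phase_from_taps(a0, a1, a2, a3):
    """Retrieve the phase of the modulated light from four amplitude
    samples (taps) taken at 4x the modulation frequency (eq. 2)."""
    # atan2 keeps the correct quadrant over the full 0..2*pi range
    return math.atan2(a3 - a1, a0 - a2) % (2 * math.pi)

def raw_depth(phi, f_mod=20e6, c=299792458.0):
    """Convert a phase to a raw radial distance for an assumed
    modulation frequency f_mod; the unambiguity range is c / (2 * f_mod)."""
    return phi / (2 * math.pi) * c / (2 * f_mod)
```

A phase of π, for example, corresponds to half the unambiguity range, i.e. about 3.75 m at 20 MHz.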
[0040] Since the sampling frequency is four times higher than the fundamental modulation frequency, according to the Shannon-Nyquist theorem, aliasing can occur for harmonics with an order greater than two. When a certain raw depth value r_m is measured for one of the pixels 3, this aliasing, along with other effects, generally leads to a deviation from a ground truth value r_t. This deviation is corrected according to an inventive method which will now be described with reference to the flowchart in Fig. 2.
[0041] According to the inventive method, the relation between the raw depth value r_m and the ground truth value r_t is given by

r_m = c_m + f(r_t − c_t)    (eq. 3)

In this case, the second function f is modeled as

f(r) = a_0 + r + b_3 cos(k_3 r + θ_3) + b_5 cos(k_5 r + θ_5) + …    (eq. 4)

(eq. 3) can be rewritten as follows:
r_t = g(r_m − c_m) + c_t    (eq. 1)

so that the ground truth value r_t can be calculated from the raw depth value r_m. However, before this correction can be applied in a corrected depth measurement 500, the first function g as well as the first and second offsets c_m, c_t need to be determined in a calibration 100. The calibration 100 comprises a general calibration 200 that needs to be carried out for only one camera 1 of a given production series, which may be referred to as a "golden sample". This general calibration 200 yields a first function g and a first offset c_m that can be used for all cameras 1 of this production series.
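The per-pixel correction of eq. 1 can be sketched as follows, assuming the first function g is represented by an index-valued look-up table of the kind described in paragraph [0047] (the function and parameter names are illustrative, not from the patent):

```python
import numpy as np

def correct_depth(r_m, c_m, g_lut, u, c_t):
    """Apply eq. 1, r_t = g(r_m - c_m) + c_t, per pixel.
    r_m   : array of raw depth values (shape of the sensor array)
    c_m   : per-pixel first offset (same shape as r_m)
    g_lut : integer look-up table for g, sampled at resolution u
            over the unambiguity range (r_t = LUT(l) * u)
    c_t   : pixel-independent second offset"""
    # quantize the offset-corrected raw value to a table index
    idx = np.rint((r_m - c_m) / u).astype(int) % len(g_lut)
    return g_lut[idx] * u + c_t
```

With an identity table this reduces to r_t = (r_m − c_m) + c_t, i.e. a pure offset correction.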
[0042] In a first step, at 210, an area of interest 4 comprising a plurality of pixels 3 is defined on the sensor array 2. In this example, the area of interest 4 is rectangular and centred with respect to the sensor array 2, but it could have a different shape and/or location. The area of interest 4 represents only a small portion of the sensor array 2 and comprises e.g. between 10 and 20 pixels. Also, the second offset c_t is defined, i.e. it is chosen arbitrarily.
[0043] In another step, at 220, a plurality of different depth measurements are performed. For example, the camera 1 could be positioned opposite a flat surface and the distance could be increased between consecutive depth measurements. If the second offset c_t is considered to be temperature-dependent, the temperature has to be maintained constant until the depth measurements have been performed. Also, the temperature has to be determined, e.g. measured. In each depth measurement, a raw depth value r_m(k) and a ground truth value r_t(k) are determined for each pixel 3 in the area of interest 4. The raw depth value r_m(k) is determined by the camera 1, while the ground truth value r_t(k) is measured and/or calculated independently of the camera 1. All quantities mentioned here and in the following can be stored in the memory 5 of the camera 1, while all necessary calculations can be performed by the processing unit 6.
[0044] In a next step, at 230, several pixel-dependent third functions f_n are defined, which have a similar form to the second function f and comprise corresponding parameters. Using the previously determined raw depth values r_m(k) and ground truth values r_t(k) for the respective pixel 3, the parameters of the individual third function f_n are fitted to fulfill the condition

r_m(k) = c_m + f_n(r_t(k) − c_t)    (eq. 5)

for all depth measurements. This yields a plurality of third functions f_n, namely one for each pixel 3 in the area of interest 4, which are in general pairwise different.
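The per-pixel fit of eq. 5 can be sketched with a standard least-squares routine. This is a sketch under stated assumptions: the 7.5 m unambiguity range, the fixed wave numbers for the third and fifth harmonic terms (cf. claim 5), the starting values, and the absorption of the constant c_m into the a0 parameter are all illustrative choices, not prescriptions from the patent.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 7.5  # assumed unambiguity range in metres (e.g. 20 MHz modulation)
K3, K5 = 3 * 2 * np.pi / R, 5 * 2 * np.pi / R  # harmonic wave numbers

def model(r, a0, b3, th3, b5, th5):
    # f(r) = a0 + r + b3*cos(k3*r + th3) + b5*cos(k5*r + th5), cf. eq. 4
    return a0 + r + b3 * np.cos(K3 * r + th3) + b5 * np.cos(K5 * r + th5)

def fit_third_function(r_t, r_m, c_t=0.0):
    """Fit the parameters of one pixel-dependent third function f_n to
    the condition r_m(k) = c_m + f_n(r_t(k) - c_t) (eq. 5); the constant
    offset c_m is absorbed into the a0 parameter during the fit."""
    params, _ = curve_fit(model, r_t - c_t, r_m, p0=[0, 0.01, 0, 0.01, 0])
    return params
```

Running this once per pixel in the area of interest yields the family of third functions f_n whose parameters are averaged in step 240.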
[0045] At 240, the parameters of the individual third functions f_n are averaged over all pixels 3 in the area of interest 4 in order to obtain the parameters for the second function f, which is now pixel-independent. As an alternative to defining and determining individual third functions f_n as in step 230, the second function f could be directly determined based on the raw depth values r_m(k) and ground truth values r_t(k) of all pixels 3. In this case, step 240 would be obsolete.
[0046] In another step, at 250, an offset estimate c_m(k) is calculated for every pixel 3 of the sensor array 2 and for each depth measurement by

c_m(k) = r_m(k) − f(r_t(k) − c_t)    (eq. 6)

and at 260, the average of all offset estimates is taken over the depth measurements to obtain the first offset c_m for the individual pixel 3.
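Steps 250 and 260 amount to a simple averaging, which can be sketched as follows (function name illustrative; f is assumed to be a vectorized callable for the second function):

```python
import numpy as np

def first_offset(r_m, r_t, f, c_t):
    """Offset estimates per eq. 6 for one pixel over N depth
    measurements, averaged (steps 250/260) to give the first
    offset c_m of that pixel.
    r_m, r_t : arrays of raw and ground-truth values, length N."""
    estimates = r_m - f(r_t - c_t)
    return estimates.mean()
```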
[0047] With the second function f known, its inverse function, namely the first function g, can be determined. This is done at 270 by calculating a look-up table for the first function g. To this end, a set of theoretical values r'_t(k) = k·u, k = 0, …, N − 1 for the ground truth depth is defined with a chosen resolution u in the unambiguity range of the camera 1, e.g. u = 1 mm and k = 0, …, 7500. If the function values of r'_t(k) under the second function f are also expressed in terms of the resolution u, i.e. r'_m = f(r'_t) = f(k·u) = l·u, the inverse function r'_t = g(r'_m) can be expressed as r'_t = k·u = g(r'_m) = g(l·u) = LUT(l)·u. The look-up table is determined by a simple iterative algorithm.
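One possible form of such an iterative algorithm is sketched below. It is a sketch, not the patent's algorithm: it assumes f is monotonically increasing over the unambiguity range (which holds for a near-linear f with small harmonic ripples) and sweeps a single index through the grid, so each table entry LUT(l) stores the index k whose f(k·u) is closest to l·u.

```python
import numpy as np

def build_lut(f, u=0.001, range_m=7.5):
    """Build an index-valued look-up table for g = f^-1 on a grid of
    resolution u over an assumed unambiguity range range_m."""
    n = int(round(range_m / u)) + 1
    f_vals = f(np.arange(n) * u)          # r'_m = f(k*u) on the grid
    lut = np.empty(n, dtype=int)
    k = 0
    for l in range(n):                    # target raw depth l*u
        # advance k while the next grid point is at least as close
        while k + 1 < n and abs(f_vals[k + 1] - l * u) <= abs(f_vals[k] - l * u):
            k += 1
        lut[l] = k
    return lut
```

Because both indices only ever move forward, the whole table is built in a single linear pass.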
[0048] If the second offset c_t is not temperature-dependent, the general calibration 200 ends with this step. If the second offset c_t is temperature-dependent, which is checked at 280, the general calibration 200 continues with at least one second temperature that is different from the temperature of the previous measurements. The temperature dependency of the second offset c_t can be represented by a function having several parameters. For example, the temperature dependency can be assumed to be linear, so that the respective function has two parameters. In order to determine the parameters, one needs at least the same number of values for the second offset c_t, one of which has already been defined previously. At 290, the temperature is changed. Another depth measurement is performed to obtain a raw depth value r_m for at least one pixel 3, preferably a plurality of pixels 3, at 300. For instance, this could be all pixels 3 in the area of interest 4. Likewise, at 310, a ground truth value r_t is determined for each pixel 3. At 320, the second offset c_t is calculated for every pixel 3. If more than one pixel 3 has been taken into account, the average over all pixels 3 is taken at 330 to determine the final value for the second offset c_t. If it is decided at 340 that the second offset c_t has to be determined for another second temperature, the temperature is changed again at 290 and the following steps are repeated. If the measurement has been performed for all second temperatures, at least one parameter defining the temperature dependency of the second offset c_t is determined at 350, and the general calibration 200 ends.
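For the linear case mentioned above, step 350 reduces to fitting a straight line c_t(T) = p0 + p1·T through the offset values measured at the calibration temperatures. A minimal sketch (the function names and the example numbers are assumptions):

```python
import numpy as np

def fit_temperature_model(temps, c_t_values):
    """Fit a linear model c_t(T) = p0 + p1*T to the second-offset
    values determined at the calibration temperatures (step 350)."""
    p1, p0 = np.polyfit(temps, c_t_values, 1)  # highest degree first
    return p0, p1

def c_t_at(T, p0, p1):
    """Evaluate the temperature model at temperature T."""
    return p0 + p1 * T
```

A quadratic or higher-order dependency would simply use a higher polynomial degree and correspondingly more calibration temperatures.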
[0049] As mentioned above, the results of the general calibration 200 can be used not only for a single camera 1, but for all cameras 1 of the same production series. If a different camera 1 (i.e. different from the camera 1 used in the general calibration 200) needs to be calibrated, which is checked at 360, the method continues with an individual calibration 400. In this individual calibration 400, another depth measurement is performed to obtain a raw depth value r_m for at least one pixel 3, preferably a plurality of pixels 3, at 410. For instance, this could be all pixels 3 in the area of interest 4. Likewise, at 420, a ground truth value r_t is determined for each pixel 3. At 430, the second offset c_t is calculated for every pixel 3. If more than one pixel 3 has been taken into account, the average over all pixels 3 is taken at 440 to determine the final value for the second offset c_t. If it is decided at 450 that the second offset c_t is considered not to be temperature-dependent, the individual calibration 400 ends, as does the calibration 100.
[0050] If, however, the second offset c_t is considered to be temperature-dependent, the temperature-dependent function for the second offset c_t has to be adapted. However, it can be assumed that the temperature dependency for this camera differs from that of the "golden sample" only by a constant offset. Therefore, it is sufficient to compare the value for the second offset c_t determined at the temperature of the individual calibration 400 with the value that would have been valid for the "golden sample" and to shift the entire function by the difference (if present). In other words, only a constant offset of the temperature-dependent function has to be determined or "updated" at 460, while the rest of the function, e.g. linear or quadratic terms, can be left unchanged. Therefore, even if the second offset c_t is temperature-dependent, the individual calibration 400 can be carried out with a single depth measurement. It is understood that the temperature has to be maintained constant during the individual calibration 400 if the second offset c_t is temperature-dependent.
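Assuming the linear temperature model sketched earlier, the update at 460 is a one-line shift of the constant term; the linear term is inherited from the "golden sample". Function and parameter names are illustrative:

```python
def update_temperature_offset(p0_golden, p1_golden, t_cal, c_t_measured):
    """Individual calibration (step 460): shift the golden sample's
    model c_t(T) = p0 + p1*T by the difference between the offset
    measured at t_cal and the golden-sample prediction at t_cal;
    the slope p1 is left unchanged."""
    delta = c_t_measured - (p0_golden + p1_golden * t_cal)
    return p0_golden + delta, p1_golden
```

If the measured offset matches the golden-sample prediction, delta is zero and the model is returned unchanged.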
[0051] When the calibration 100 has been completed, the actual corrected depth measurement 500 can begin. At 510, raw depth values r_m are acquired for all pixels 3 of the sensor array 2 and at 520, the corresponding ground truth values r_t are calculated according to eq. 1. This can be repeated if a new corrected depth measurement is required at 530. If not, the method ends.
List of Reference Symbols
1 ToF camera
2 Sensor array
3 Pixel
4 Area of interest
5 Memory
6 Processing unit

Claims (15)

1. A method for corrected depth measurement with a time-of-flight camera (1) using amplitude-modulated continuous light, the method comprising, for each of a plurality of pixels (3) of a sensor array (2) of the camera (1):
- acquiring (510) with the camera (1) a raw depth value r_m for the pixel (3); and
- automatically calculating (520) a ground truth value r_t according to:
r_t = g(r_m − c_m) + c_t,
wherein c_m is a pixel-dependent first offset, g is a pixel-independent first function and c_t is a pixel-independent second offset.
2. A method according to claim 1, characterised in that acquiring (510) the raw depth value r_m comprises determining four amplitude samples at a sampling frequency four times higher than a modulation frequency of the amplitude-modulated continuous light.
3. A method according to any of the preceding claims, characterised in that the first function g is applied by accessing a look-up table representing the first function g.
4. A method according to any of the preceding claims, characterised in that the look-up table is calculated (270) by applying a second function f, which is the inverse function of the first function g.
5. A method according to claim 4, characterised in that the second function f comprises a linear term, a third order harmonic term and a fifth order harmonic term with respect to the modulation frequency.
6. A method according to any of the preceding claims, characterised in that before the corrected depth measurement, a calibration (100) is performed for the camera (1), in which at least one of the first offset c_m, the first function g and the second offset c_t is determined.
7. A method according to any of the preceding claims, characterised in that the calibration (100) comprises a general calibration (200), in which at least one of the first offset c_m and the first function g is determined only once for a plurality of cameras (1).
8. A method according to claim 6 or 7, characterised in that the calibration (100) comprises determining the second function f by:
- performing (220) with the camera (1) a plurality of different depth measurements, each depth measurement providing a raw depth value r_m(k) for each of a plurality of pixels (3) in an area of interest (4), wherein k = 1, …, N is the number of the depth measurement;
- for each depth measurement and for each pixel (3) in the area of interest (4), determining a ground truth value r_t(k);
- defining (210) the second offset c_t;
- for each pixel (3) in the area of interest (4), defining (230) a pixel-dependent third function f_n with at least one parameter and fitting the at least one parameter to the condition r_m(k) = c_m + f_n(r_t(k) − c_t); and
- determining (240) the second function f based on the third functions f_n of a plurality of pixels (3) in the area of interest (4).
9. A method according to claim 8, characterised in that the area of interest (4) corresponds to a portion of the sensor array (2).
10. A method according to claim 9, characterised in that the second function f is determined (240) by averaging the at least one parameter of the third functions f_n over a plurality of pixels (3) in the area of interest (4).
11. A method according to claim 9 or 10, characterised in that the calibration (100) comprises, for each pixel (3) in the area of interest (4) and each depth measurement, calculating (250) an offset estimate c_m(k) for the first offset c_m according to c_m(k) = r_m(k) − f(r_t(k) − c_t).
12. A method according to any of the preceding claims, characterised in that the first offset c_m is determined by averaging (260) the offset estimate c_m(k) over a plurality of depth measurements.
13. A method according to any of the preceding claims, characterised in that the calibration (100) comprises using the first function g and the first offset c_m determined in a general calibration (200) with one camera (1) and performing an individual calibration (400) for a different camera (1) by:
- performing (410) a depth measurement for at least one pixel (3) to obtain a raw depth value r_m;
- determining (420) a ground truth value r_t for the at least one pixel (3); and
- calculating (430) the second offset c_t according to c_t = r_t − g(r_m − c_m).
14. A method according to any of the preceding claims, characterised in that the second offset c_t is temperature-dependent.
15. A method according to claim 14, characterised in that the calibration (100) comprises determining the second offset c_t for a first temperature and, for each of at least one second temperature:
- performing (300) a depth measurement for at least one pixel (3) to obtain a raw depth value r_m;
- determining (310) a ground truth value r_t for the at least one pixel (3); and
- calculating (320) the second offset c_t for the respective second temperature according to c_t = r_t − g(r_m − c_m);
and determining (350) at least one parameter related to a temperature dependency of the second offset c_t.
LU101024A 2018-12-04 2018-12-04 Method for Corrected Depth Measurement with a Time-Of-Flight Camera Using Amplitude-Modulated Continuous Light LU101024B1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
LU101024A LU101024B1 (en) 2018-12-04 2018-12-04 Method for Corrected Depth Measurement with a Time-Of-Flight Camera Using Amplitude-Modulated Continuous Light
US17/311,225 US11263765B2 (en) 2018-12-04 2019-12-03 Method for corrected depth measurement with a time-of-flight camera using amplitude-modulated continuous light
DE112019006048.1T DE112019006048T5 (en) 2018-12-04 2019-12-03 Method for corrected depth measurement with a TOF camera using amplitude-modulated continuous light
PCT/EP2019/083538 WO2020115068A1 (en) 2018-12-04 2019-12-03 Method for corrected depth measurement with a time-of-flight camera using amplitude-modulated continuous light
CN201980080091.5A CN113167899B (en) 2018-12-04 2019-12-03 Method for depth measurement with time-of-flight camera using amplitude modulated continuous light for correction


Publications (1)

Publication Number Publication Date
LU101024B1 true LU101024B1 (en) 2020-06-04

Family

ID=64607236

Family Applications (1)

Application Number Title Priority Date Filing Date
LU101024A LU101024B1 (en) 2018-12-04 2018-12-04 Method for Corrected Depth Measurement with a Time-Of-Flight Camera Using Amplitude-Modulated Continuous Light

Country Status (1)

Country Link
LU (1) LU101024B1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015156684A2 (en) * 2014-04-08 2015-10-15 University Of Waikato Signal harmonic error cancellation method and apparatus
US20180106891A1 (en) * 2016-10-19 2018-04-19 Infineon Technologies Ag 3di sensor depth calibration concept using difference frequency approach


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ANDREW D PAYNE ET AL: "Improved measurement linearity and precision for AMCW time-of-flight range imaging cameras", APPLIED OPTICS, OPTICAL SOCIETY OF AMERICA, WASHINGTON, DC; US, vol. 49, no. 23, 1 August 2010 (2010-08-01), pages 4392 - 4403, XP001556431, ISSN: 0003-6935, DOI: 10.1364/AO.49.004392 *

Similar Documents

Publication Publication Date Title
US11263765B2 (en) Method for corrected depth measurement with a time-of-flight camera using amplitude-modulated continuous light
JP5757950B2 (en) Non-contact object measurement
US10416296B2 (en) 3DI sensor depth calibration concept using difference frequency approach
Huang et al. Camera calibration with active phase target: improvement on feature detection and optimization
Kohm Modulation transfer function measurement method and results for the Orbview-3 high resolution imaging satellite
US7299145B2 (en) Method for the automatic simultaneous synchronization, calibration and qualification of a non-contact probe
US10157474B2 (en) 3D recording device, method for producing a 3D image, and method for setting up a 3D recording device
Eiríksson et al. Precision and accuracy parameters in structured light 3-D scanning
EP1361414B1 (en) Method for the calibration and qualification simultaneously of a non-contact probe
Coggrave et al. High-speed surface profilometer based on a spatial light modulator and pipeline image processor
US20040130730A1 (en) Fast 3D height measurement method and system
JP2020536691A (en) Determining subject profile using camera
US6944564B2 (en) Method for the automatic calibration-only, or calibration and qualification simultaneously of a non-contact probe
US7787696B2 (en) Systems and methods for adaptive sampling and estimating a systematic relationship between a plurality of points
CN114897959A (en) Phase unwrapping method based on light field multi-view constraint and related components
Frangez et al. Assessment and improvement of distance measurement accuracy for time-of-flight cameras
CN108286946B (en) Method and system for sensor position calibration and data splicing
LU101024B1 (en) Method for Corrected Depth Measurement with a Time-Of-Flight Camera Using Amplitude-Modulated Continuous Light
LU101120B1 (en) Method for Corrected Depth Measurement with a Time-Of-Flight Camera Using Amplitude-Modulated Coninuous Light
JP3410779B2 (en) Calibration method of moving stage in image input device
US11105689B2 (en) Temperature and heat map system
Pfeifer et al. 3D cameras: Errors, calibration and orientation
US7158914B2 (en) Precision surface measurement
JP7378310B2 (en) Shape measurement system, shape measurement method and program
RU2553339C9 (en) Method of producing and processing images for determining optical transfer functions and measuring distances (versions) and apparatus therefor (versions), as well as method of determining errors and correcting measurement results

Legal Events

Date Code Title Description
FG Patent granted

Effective date: 20200604