CN110187355B - Distance measurement method and depth camera - Google Patents

Distance measurement method and depth camera

Info

Publication number
CN110187355B
Authority
CN
China
Prior art keywords
signal
max
receiving module
time
distance
Prior art date
Legal status
Active
Application number
CN201910425547.9A
Other languages
Chinese (zh)
Other versions
CN110187355A (en)
Inventor
胡小龙
Current Assignee
Orbbec Inc
Original Assignee
Orbbec Inc
Priority date
Filing date
Publication date
Application filed by Orbbec Inc filed Critical Orbbec Inc
Priority to CN201910425547.9A priority Critical patent/CN110187355B/en
Publication of CN110187355A publication Critical patent/CN110187355A/en
Application granted granted Critical
Publication of CN110187355B publication Critical patent/CN110187355B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • G01S17/46Indirect determination of position data
    • G01S17/48Active triangulation systems, i.e. using the transmission and reflection of electromagnetic waves other than radio waves
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/30Assessment of water resources

Landscapes

  • Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

The invention belongs to the field of optical technology and provides a distance measurement method and a depth camera. The distance measurement method comprises: transmitting an optical signal to an object to be measured; receiving the optical signal reflected by the object through a receiving module, wherein the receiving module comprises a plurality of signal acquisition units with different timing sequences, and each signal acquisition unit has at least two groups of acquisition signals of different frequencies within one exposure time; and calculating the flight time of the optical signal. By arranging multiple taps in the depth camera, controlling the acquisition timing of the acquisition units through the control module, and giving each tap at least two groups of acquisition signals of different frequencies within one exposure time, the scheme removes the constraint of existing test schemes in which the pulse width is proportional to both the test distance and the power consumption while being inversely related to the test accuracy. The extension of the test distance is therefore no longer limited by the pulse width, and a longer test distance can be achieved while keeping test power consumption low and test accuracy high.

Description

Distance measurement method and depth camera
Technical Field
The invention belongs to the technical field of optics, and particularly relates to a distance measurement method and a depth camera.
Background
ToF (Time-of-Flight) is a technique that achieves accurate distance measurement by measuring the time of flight of light. Common existing ToF depth cameras adopt the I-TOF (indirect-TOF) technique: a laser emitting device emits a temporally periodically modulated laser beam toward the surface of an object, the reflected light is time-delayed relative to the incident light, and the I-TOF technique measures the light's flight time from the test data of different taps, thereby realizing distance measurement.
According to the modulation and demodulation scheme, time-of-flight techniques can be classified into Continuous Wave (CW) schemes and Pulse Modulation (PM) schemes. Time-of-flight technology adopting the PM scheme still has some limitations at present: for long-distance testing the modulation/demodulation pulse width must be extended, which raises power consumption and lowers accuracy, and therefore cannot meet market demand.
Disclosure of Invention
In view of the above, the embodiments of the invention provide a distance measurement method to solve the technical problems of increased power consumption and decreased accuracy caused by the need to extend the modulation/demodulation pulse width when the existing I-TOF technology performs long-distance testing.
A first aspect of an embodiment of the present invention provides a distance measurement method, including:
transmitting an optical signal to a body to be measured;
receiving optical signals reflected by the body to be detected through a receiving module, wherein the receiving module comprises a plurality of signal acquisition units, the time sequences of the signal acquisition units are different, and each signal acquisition unit has at least two groups of acquisition signals with different frequencies in one exposure time;
and calculating the flight time of the optical signal.
A second aspect of an embodiment of the present invention provides a depth camera, including:
the transmitting module is used for transmitting optical signals to the to-be-detected body;
the receiving module is provided with a plurality of signal acquisition units and is used for receiving the optical signals reflected by the to-be-detected body;
a control module, which is used for controlling the transmitting module to transmit optical signals, for controlling the signal acquisition units in the receiving module to have different timing sequences, with at least two groups of acquisition signals of different frequencies sent to each signal acquisition unit within one exposure time, and for acquiring the flight time of the optical signal according to the signals acquired by the receiving module.
A third aspect of the embodiments of the present invention provides a terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the distance measurement method described above when executing the computer program.
A fourth aspect of the embodiments of the present invention provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the distance measurement method described above.
Compared with the prior art, the embodiment of the invention has the beneficial effects that:
the multiple taps are arranged in the depth camera, the acquisition time sequence of the multiple acquisition units is controlled through the control module, and each tap has at least two groups of acquisition signals with different frequencies in one exposure time, so that the contradiction that the pulse width is proportional to the test distance and the power consumption and is inversely related to the test precision in the existing test scheme is eliminated, the expansion of the test distance is not limited by the pulse width any more, and the effects of keeping lower test power consumption and higher test precision under the condition of realizing a longer test distance are achieved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the description of the embodiments or of the prior art will be briefly introduced below. It is obvious that the drawings described below show only some embodiments of the present invention, and for a person skilled in the art, other drawings may be obtained from these drawings without inventive effort.
Fig. 1 is a schematic diagram of a depth camera according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a depth camera according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a depth camera light emission signal and an acquisition signal according to an embodiment of the present invention;
FIG. 4 is a flowchart showing a distance measurement method according to an embodiment of the present invention;
FIG. 5 is a flow chart of an implementation of the calculation of time of flight of an optical signal of FIG. 4;
FIG. 6 is a second flowchart of an implementation of the distance measurement method according to the embodiment of the present invention;
FIG. 7 is a second schematic diagram of a depth camera light emission signal and an acquisition signal according to an embodiment of the present invention;
FIG. 8 is a third schematic diagram of a depth camera light emission signal and acquisition signal provided by an embodiment of the present invention;
fig. 9 is a schematic diagram of a second structure of the depth camera according to the embodiment of the present invention;
FIG. 10 is a schematic diagram of a depth camera light emission signal and acquisition signal according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of a depth camera light emission signal and an acquisition signal provided by an embodiment of the present invention;
fig. 12 is a schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
In order to illustrate the technical scheme of the invention, the following description is made by specific examples.
Fig. 1 is a schematic diagram of a depth camera 10 according to an embodiment of the present invention. The depth camera is a TOF depth camera and comprises a transmitting module 11, a receiving module 12 and a control module 13, wherein the control module 13 is connected with the transmitting module 11 and the receiving module 12. Wherein the emitting module 11 is configured to emit an optical signal (e.g., a pulse laser beam) to the object 20 to be measured; the receiving module 12 is provided with a plurality of acquisition units for acquiring optical signals reflected by the body 20 to be detected; the control module 13 controls the transmitting module 11 and the receiving module 12, and is used for acquiring the distance between the object 20 to be measured and the depth camera 10 according to the signals acquired by different acquisition units of the receiving module 12. It should be appreciated that the depth camera 10 may also include a circuit module, a power module, a housing, and other components, which are not fully shown herein. The depth camera 10 may be a stand alone device or may be integrated into an electronic device such as a cell phone, tablet computer, etc., without limitation. The number of the acquisition units may be set as desired, and is not limited herein.
In one embodiment, the distance 101 between the transmitting module 11 and the receiving module 12 is only about a few millimeters and is much smaller than the distance between the object 20 and the depth camera 10, so the control module 13 controls the transmitting module 11 to transmit the light beam 102 to the object 20, the light beam 103 after being reflected by the object 20 returns to the receiving module 12, and the control module 13 calculates the time difference (phase difference) between the transmitting light beam 102 and the reflected light beam 103, and based on the time difference, the distance 104 between the object 20 and the depth camera 10 can be obtained.
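As a concrete illustration of the time-difference principle above, the following sketch converts a measured round-trip delay into the distance 104; the function name and sample delay are illustrative, not from the patent.

```python
# Minimal sketch: round-trip time difference -> distance, per the principle
# described above (the light travels out and back, so divide by two).
C = 299_792_458.0  # speed of light in m/s


def distance_from_time_difference(delta_t_s: float) -> float:
    """Return the one-way distance for a measured round-trip delay."""
    return C * delta_t_s / 2.0


# A round-trip delay of 10 ns corresponds to roughly 1.5 m.
d = distance_from_time_difference(10e-9)
```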
Referring to fig. 2, in one embodiment, the emitting module 11 includes a laser 111, a laser driver 112, and an optical modulator 113, where the laser driver 112 is connected to the laser 111 and is used for driving the laser 111 to emit light, and a light beam emitted by the laser 111 is modulated by the optical modulator 113 and then emitted to the to-be-detected body 20.
In one embodiment, the laser 111 may be a near-infrared VCSEL (Vertical Cavity Surface Emitting Laser). In the solar spectrum the proportion of the near-infrared band is much lower than that of visible light, while the detection efficiency of a silicon-based detector in this band still meets the detection requirement, so interference from sunlight can be reduced to the greatest extent. The wavelength of the laser 111 selected in this embodiment is therefore 850 nm to 940 nm, for example 850 nm or 940 nm. The laser driver 112 internally contains a laser driving circuit connected to the laser 111 for driving the laser 111 to emit a high-frequency modulated light beam. The light modulator 113 is connected to the laser 111 and is configured to modulate the emission field of the laser 111, spatially shaping the laser into the desired surface-illumination mode so that the illuminated area overlaps the field of view of the imaging system of the receiving module 12 as much as possible, thereby maximizing laser utilization and improving detection accuracy.
In one embodiment, the light modulator 113 may include a diffractive optical element for diffracting the light beam emitted by the laser 111 to form a spot light beam, such as a regularly arranged spot light beam. The time-of-flight signal-to-noise ratio calculated for the spot beam is higher compared to flood illumination. The light modulator 113 may further include a lens for adjusting the light beam emitted from the laser 111 to perform focusing, collimation, and the like.
Referring to fig. 2, in one embodiment, the receiving module 12 includes a lens 121, a filter 122, and an image sensor 123 disposed along the optical path. The image sensor 123 contains a plurality of pixels, and each pixel includes a plurality of acquisition units (e.g., three, four, etc.), which may be taps for reading signal data. The optical signal reflected by the object 20 passes through the lens 121 and the filter 122 and is received by the image sensor 123; the taps then read and demodulate the signal data to obtain the time difference, and thereby the distance of the object 20.
In one embodiment, the lens 121 includes one or more optical lenses for collecting the reflected light beam from the object to be measured and imaging it on the image sensor 123. The filter 122 is a narrow-band filter matched to the wavelength of the laser 111, used to suppress ambient-light noise in the remaining bands. The image sensor 123 is an image sensor dedicated to time-of-flight (TOF) measurement, for example a CMOS (complementary metal oxide semiconductor), APD (avalanche photodiode) or SPAD (single photon avalanche diode) sensor, and its pixels may be arranged as a single point, a linear array, or an area array.
In one embodiment, the control module 13 is connected to both the transmitting module 11 and the receiving module 12, and is configured to send control signals to each module to perform corresponding control operations, and perform correlation calculation and processing on the received image, and so on. The control functions of the control module 13 include at least: providing a periodic modulation signal required when the laser 111 emits an optical signal, providing a collection signal for taps in the image sensor 123, etc., and may also provide auxiliary monitoring signals including temperature sensing, over-current, over-voltage protection, drop protection, etc. The control module 13 further includes a registering module and a processing module, which store and process the raw data collected by each tap in the image sensor 123, so as to obtain a specific position of the object 20 to be measured.
Referring to fig. 2 and 3, consider the case where three taps are provided for each pixel in the image sensor 123. For convenience of description, the three taps are denoted as a first tap 1201, a second tap 1202, and a third tap 1203; the corresponding acquisition signals are denoted as a first acquisition signal 1301, a second acquisition signal 1302, and a third acquisition signal 1303; and the optical signal emitted by the transmitting module 11 is denoted as modulation signal 1300. The period of the optical signal emitted by the emitting module 11 is denoted T_mod and its pulse width T_p. Corresponding to the modulation signal 1300, the first acquisition signal 1301, the second acquisition signal 1302 and the third acquisition signal 1303 are also periodic pulse signals whose pulse width is the same as the pulse width T_p of the modulation signal 1300, and is therefore also denoted T_p. It should be appreciated that the reflected beam 103 has the same frequency and pulse width as the emitted beam 102, with a time lag.
The control module 13 controls the transmitting module 11 to transmit the modulation signal 1300 while applying the first acquisition signal 1301, the second acquisition signal 1302 and the third acquisition signal 1303 to the first tap 1201, the second tap 1202 and the third tap 1203 respectively. The control module 13 further instructs the first tap 1201 to the third tap 1203 to open in sequence, with only one tap open at any time, so that when the first tap 1201 is on a rising edge (at a high level), the second tap 1202 is on a falling edge (at a low level); when the first tap 1201 is on the falling edge, the second tap 1202 is on the rising edge. The first acquisition unit to receive the acquisition signal (here the first tap 1201) is turned on in synchronization with the laser 111 in the transmitter module 11. A tap can receive the reflected beam from the object 20 to be measured only while it is at a high level, so only one tap can receive the reflected beam 103 at any one time. Taking the influence of ambient light into consideration, the third tap 1203 is used in this embodiment to obtain the electrical signal Q_0 corresponding to the ambient light. The first tap 1201 and the second tap 1202 are used in sequence to collect the reflected optical signals, and the corresponding electrical signals obtained are recorded as Q_A and Q_B respectively.
It should be understood that the optical signal and the acquisition signals are transmitted frame by frame, and each frame contains a plurality of pulse signals, so that a plurality of acquisition results can be obtained. To improve the accuracy of the detection result, multiple sets of data are averaged: for example, in the first tap 1201 the electrical signal acquired from each pulse is Q_1 and its mean value is used; the second tap 1202 is treated correspondingly, and so on.
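The per-tap averaging described above can be sketched as follows; the sample readings are invented for illustration.

```python
# Each tap accumulates one reading per emitted pulse; the frame result is the
# mean over all pulses, which suppresses shot noise and random ambient light.
def tap_mean(per_pulse_readings):
    return sum(per_pulse_readings) / len(per_pulse_readings)


q1_pulses = [100.0, 102.0, 98.0, 101.0, 99.0]  # hypothetical tap-1 readings
q1_mean = tap_mean(q1_pulses)
```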
Consider the case where the pulse period of the taps' acquisition signals equals the period of the modulation signal 1300. The acquisition signals of the first tap 1201 and the second tap 1202 then have time delays of 0 and T_p respectively relative to the transmitted optical signal, while the delay of the third tap 1203's acquisition signal relative to the transmitted optical signal is not less than 2T_p. During one exposure, when a tap is in its high-level period, electrons generated in the pixel are transferred into that tap; when the tap is at a low level, electrons still accumulate in the pixel. To avoid affecting the acquisition at the next exposure, the receiving module 12 further includes a clear tap for clearing the electrons generated in the pixel during this period. That is, when a tap is at a high level, the electrons generated by the pixel are transferred in turn into the corresponding tap; when the taps are low, the electrons generated by the pixel are removed through the clear tap.
In addition, to obtain a higher signal-to-noise ratio, a plurality of pulse signals are emitted within one or several exposures, and the electrons accumulated in each tap over the exposure are finally output through a readout circuit and an analog-to-digital conversion circuit as Q_A, Q_B and Q_0.
At this time, the time of flight Δt of the optical signal can be calculated by the following formula (the original equation image is not reproduced; the standard two-tap pulse-demodulation form consistent with the quantities above is):

Δt = T_p · (Q_B − Q_0) / ((Q_A − Q_0) + (Q_B − Q_0))
Obviously, the above scheme has a limited measurement range. When the time delay of the third acquisition signal 1303 of the third tap 1203 relative to the transmitted optical signal is greater than 2T_p, the maximum distance the depth camera can detect is about D_max = T_p·c/2; when the delay of the third acquisition signal 1303 relative to the emitted optical signal equals 2T_p, the maximum detectable distance is about D_max = 2T_p·c/2 = T_p·c. It should be noted that, other test conditions being unchanged, increasing the pulse width T_p raises the light-source power consumption of the transmitting module 11 during measurement, and the calculation error of the flight time Δt of the optical signal also grows as the pulse width T_p increases.
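A sketch of the three-tap computation under the standard two-tap pulse-demodulation relation (the patent's exact formula is given only as an image, so this form is an assumption): the third tap supplies the ambient level Q_0, and the delay fraction within one pulse width is (Q_B − Q_0)/((Q_A − Q_0) + (Q_B − Q_0)).

```python
def time_of_flight(q_a: float, q_b: float, q_0: float, t_p: float) -> float:
    """Flight time from the two signal taps and the ambient tap.

    Assumes the standard pulse-demodulation relation; q_a and q_b are the
    charges of taps 1201/1202, q_0 the ambient charge from tap 1203.
    """
    signal = (q_a - q_0) + (q_b - q_0)  # total reflected-signal charge
    return t_p * (q_b - q_0) / signal


# When the reflection splits evenly between the two taps, the delay is half
# a pulse width: 15 ns for T_p = 30 ns.
dt = time_of_flight(q_a=80.0, q_b=80.0, q_0=20.0, t_p=30e-9)
```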
Fig. 4 shows a flowchart of an implementation of a distance measurement method according to an embodiment of the present invention, which may be implemented by software and/or hardware. As shown in fig. 4, the distance measuring method includes the following steps.
Step S11: emit an optical signal to the object to be measured.
The optical signal may be a periodic pulse signal. When emitting, the control module 13 sends a periodic modulation signal to the laser driver 112 of the emission module 11, and the laser driver 112 drives the laser 111 to emit a pulsed laser signal according to it; the pulsed laser signal is modulated by the optical modulator 113 and then emitted into space, so that a light beam is projected onto the object 20 to be measured. In this embodiment, the period of the optical signal emitted by the emitting module 11 is denoted T_Mod_i and the pulse width of the optical signal T_p_i, where i refers to the i-th period of the optical signal.
Step S12: the receiving module receives the optical signals reflected by the body to be detected, the receiving module comprises a plurality of signal acquisition units, the signal acquisition units are different in time sequence, and each signal acquisition unit is provided with at least two groups of acquisition signals with different frequencies in one exposure time.
In this embodiment, the exposure time T is longer than one period T_Mod_i of the emitted optical signal. The signal acquisition units may be taps, and the period and pulse width of the acquisition signals corresponding to the taps are T_Demod_i and T_p_i (i = 1, 2, … m) respectively, where m is the number of frequency groups of the acquisition signal. Optionally, T_Mod_i = k·n_i·T_p_i (n_i = 1, 2, …), where the value of n_i determines the duty cycle of the optical signal; and T_Demod_i = k·T_p_i (k = 3, 4, …), where k is the number of taps.
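The timing relations above can be checked numerically; the pulse width and counts below are illustrative, not from the patent.

```python
# T_Demod_i = k * T_p_i and T_Mod_i = k * n_i * T_p_i, where k is the number
# of taps and n_i sets the duty cycle of the optical signal.
def periods(t_p: float, k: int, n: int):
    t_demod = k * t_p       # acquisition-signal period
    t_mod = k * n * t_p     # optical-signal period
    return t_mod, t_demod


t_mod, t_demod = periods(t_p=30e-9, k=3, n=2)  # three taps, n_i = 2
```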
Referring to fig. 7 and 8, when receiving the optical signal reflected by the object to be measured, the different taps of the receiving module in this embodiment are opened one after another according to the acquisition signal of the first frequency, with only one tap open at a time; then, within the same exposure time, the taps are opened one after another according to the acquisition signal of the second frequency, again with only one tap open at a time. For example, the length of one exposure T in this embodiment is two optical-signal periods (2T_Mod_i): in the first optical-signal period, the first tap 1201, the second tap 1202 and the third tap 1203 are opened in sequence according to the acquisition signal of the first frequency; in the second optical-signal period, they are opened in sequence according to the acquisition signal of the second frequency, so that within one exposure time the reflected light is acquired according to two groups of acquisition signals of different frequencies. Of course, in other embodiments the number of frequency groups of the acquisition signals may be three or more, which is not limited here.
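The tap-opening order over one exposure of two optical-signal periods can be sketched as a simple schedule; the tap labels follow the reference numerals above and the schedule itself is illustrative.

```python
# One exposure = two optical-signal periods: the three taps open in sequence
# under the first-frequency acquisition signal, then again under the second.
taps = ("tap1201", "tap1202", "tap1203")
schedule = [(group, tap) for group in (1, 2) for tap in taps]
# Six slots in total; at any moment only the scheduled tap is open.
```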
Of course, in other embodiments, the number of taps in the receiving module 12 may be four or more, which is not limited herein.
Step S13: the time of flight of the optical signal is calculated. Referring to fig. 5, one conceivable approach is as follows:
step S131: and acquiring signals acquired by different signal acquisition units of the receiving module, and calculating the corresponding electric signal intensity.
The intensity of the electrical signal corresponding to the optical signal collected by the first tap is recorded as Q_Ai, that collected by the second tap as Q_Bi, and that collected by the third tap as Q_Ci.
Step S132: determine, from the electrical signal intensities of the different signal acquisition units, the electrical signal intensity generated by ambient light, together with the first electrical signal intensity and the second electrical signal intensity.
First calculate (Q_Ai + Q_Bi), (Q_Bi + Q_Ci) and (Q_Ci + Q_Ai). The two terms of the maximum of these sums are recorded in order as the first electrical signal intensity Q_1i and the second electrical signal intensity Q_2i, and the remaining term is recorded as Q_0i. It will be appreciated that, because ambient light is randomly distributed, the ambient light collected by different pulse signals within each tap is a random value; therefore, after averaging within each tap, the ambient light measured by each tap can be regarded as a constant value.
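The pairwise-sum selection in step S132 can be sketched as follows, assuming (as the text implies) that the maximum pairwise sum identifies the two taps carrying reflected signal and the leftover tap carries only ambient light. The charge values are invented for illustration.

```python
def classify_taps(q_a: float, q_b: float, q_c: float):
    """Return (Q1, Q2, Q0): the two signal charges, in pair order, and the
    ambient charge, chosen via the maximum of the three pairwise sums."""
    values = {"A": q_a, "B": q_b, "C": q_c}
    pairs = [("A", "B"), ("B", "C"), ("C", "A")]
    first, second = max(pairs, key=lambda p: values[p[0]] + values[p[1]])
    ambient = ({"A", "B", "C"} - {first, second}).pop()
    return values[first], values[second], values[ambient]


# Hypothetical charges: taps A and B caught the reflection, C saw ambient only.
q1, q2, q0 = classify_taps(q_a=90.0, q_b=70.0, q_c=20.0)
```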
Step S133: calculate the time of flight of the optical signal. The time of flight Δt_i of the optical signal is calculated as follows:
Δt_i = (N_i·k + M_i + X_i) · T_p_i   (1)

(The original equation image is not reproduced; the form above is inferred from the definitions below, with N_i the number of whole acquisition-signal periods elapsed and k the number of taps.)
wherein Q_0i is the intensity of the electrical signal generated by ambient light;
Q_1i and Q_2i are the first and second electrical signal intensities respectively;
T_p_i is the pulse width in the i-th period of the optical signal;
M_i is determined by the value of Q_1i: when Q_1i is Q_Ai, Q_Bi or Q_Ci, M_i takes the value 0, 1 or 2 respectively;
X_i is determined from Q_0i, Q_1i and Q_2i.
When the number of frequency groups of the acquisition signal is 1 (m = 1), the value of N_i cannot be solved, and the maximum test distance is D_max = T_Demod_i·c/2.
When the number of frequency groups of the acquisition signals is not less than 2, the value of N_i can be determined according to the remainder theorem, or by traversing all combinations within the maximum test distance and taking as N_i the combination for which the variance of Δt_i is minimal.
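The brute-force alternative to the remainder theorem can be sketched as below: try every combination of wrap counts N_i within the maximum test distance and keep the one minimizing the variance of the resulting Δt_i. The periods and fractional delays are invented for illustration.

```python
from itertools import product


def resolve_wraps(fractions, periods, t_max):
    """fractions[i]: fractional delay (0..1) within acquisition period i;
    periods[i]: T_Demod_i; t_max: largest flight time to search.
    Returns the wrap counts N_i whose flight times agree best (min variance)."""
    limits = [int(t_max // t) + 1 for t in periods]
    best, best_var = None, float("inf")
    for combo in product(*(range(l) for l in limits)):
        dts = [(n + f) * t for n, f, t in zip(combo, fractions, periods)]
        mean = sum(dts) / len(dts)
        var = sum((x - mean) ** 2 for x in dts) / len(dts)
        if var < best_var:
            best, best_var = combo, var
    return best


# True flight time 5.0 (arbitrary units) with periods 2.0 and 3.0:
# fractions are 0.5 (5 = 2*2 + 0.5*2) and 2/3 (5 = 1*3 + (2/3)*3).
wraps = resolve_wraps([0.5, 2.0 / 3.0], [2.0, 3.0], 6.0)
```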
The maximum test distance D_max is:

D_max = min( LCM(D_max_1, D_max_2, … D_max_i, … D_max_m), min(L_max_1, L_max_2, … L_max_i, … L_max_m) )

wherein LCM(D_max_1, D_max_2, … D_max_m) is the least common multiple of D_max_1, D_max_2, … D_max_m;
D_max_i is the maximum test distance in the i-th period, D_max_i = T_Demod_i·c/2, where T_Demod_i is the i-th period of the acquisition signal;
L_max_i = T_Mod_i·c/2, where T_Mod_i is the i-th period of the optical signal.
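The D_max expression above can be evaluated with integer arithmetic; the per-frequency ranges below (in millimetres) are illustrative, not from the patent.

```python
from functools import reduce
from math import gcd


def lcm(a: int, b: int) -> int:
    return a * b // gcd(a, b)


def max_test_distance(d_max_mm, l_max_mm):
    """min(LCM of per-frequency ranges D_max_i, min of optical-period limits
    L_max_i), all in integer millimetres so the LCM is exact."""
    return min(reduce(lcm, d_max_mm), min(l_max_mm))


# Ranges of 2 m and 3 m combine into an unambiguous 6 m, capped by 10 m limits.
d_max = max_test_distance([2000, 3000], [10000, 10000])
```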
After the flight time Δt_i is obtained, the distance of the object to be measured can be calculated.
Referring to fig. 6, step S14: calculate the distance of the object to be measured according to the flight time of the optical signal. The distance of the object to be measured is calculated as:

D_i = c · Δt_i / 2   (2)

wherein D_i is the distance of the object to be measured, c is the speed of light, and Δt_i is the time of flight of the beam.
Since the distance 101 between the transmitting module 11 and the receiving module 12 is typically only a few millimeters, far smaller than the distance between the object 20 and the depth camera 10, it is neglected and the above formula is used directly.
In one embodiment, as shown in fig. 7 and 8, the number of frequency groups of the acquisition signal is m = 2, with n_1 = 2, n_2 = 3 and T_Mod_1 = T_Mod_2. In the case shown in fig. 7, the test flight distance is 6 times that of the existing method; in the case shown in fig. 8, it is 9 times that of the existing method. It should be understood that the distance measurement method provided in this embodiment involves no increase in the pulse width of the acquisition signal, so the test distance is extended while power consumption is reduced and test accuracy improved.
It should be noted that in some embodiments, four or more taps may be further provided, so that the pulse width may be further reduced while the remote measurement is completed, and the influence of ambient light may be reduced by multi-tap allocation, and multiple measurements of the ambient light signal may be implemented, so as to achieve the effects of further reducing power consumption and improving detection accuracy.
For example, consider the case where the number of taps is four. Similarly to the three-tap principle, the distance value of the object to be measured can be obtained from the dual-frequency acquisition signals of fig. 10 and 11. In the embodiments shown in fig. 10 and 11, the parameters of the acquisition signal are m=2, n_1=2, n_2=3, and T_Mod_1 = T_Mod_2.
Referring to fig. 9, the intensity of the electrical signal corresponding to the optical signal collected by the first tap 1201 is denoted Q_Ai, by the second tap 1202 Q_Bi, by the third tap 1203 Q_Ci, and by the fourth tap 1204 Q_Di. After the signals acquired by the different taps are obtained, the corresponding electrical signal intensities are processed as follows. First the adjacent pair sums (Q_Ai+Q_Bi), (Q_Bi+Q_Ci), (Q_Ci+Q_Di) and (Q_Di+Q_Ai) are calculated; the two taps forming the largest of these sums are denoted, in sequence, Q_1i and Q_2i, and the average of the remaining two taps is denoted Q_0i. Then the flight time Δt_i of the optical signal is calculated according to formula (1), and the distance of the object to be measured according to formula (2). It should be understood that with four taps at least two taps collect the ambient light signal, and averaging the ambient light signals collected by multiple taps effectively improves the detection accuracy of the ambient light and hence of the measured distance.
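Under the reading above, the adjacent pair of taps with the largest combined intensity straddles the reflected pulse, while the remaining two taps see only ambient light. A sketch of that selection step (an illustration under this assumption, not the patent's implementation):

```python
def select_taps(q):
    """q = [Q_A, Q_B, Q_C, Q_D]: intensities of the four taps in timing order.

    Returns (Q_1, Q_2, Q_0): the two taps of the adjacent pair with the
    largest combined intensity (the pair straddling the reflected pulse),
    and the average of the remaining two taps (ambient light only).
    """
    pairs = [(i, (i + 1) % 4) for i in range(4)]        # (A,B), (B,C), (C,D), (D,A)
    i, j = max(pairs, key=lambda p: q[p[0]] + q[p[1]])  # adjacent pair with max sum
    rest = [q[m] for m in range(4) if m not in (i, j)]
    return q[i], q[j], sum(rest) / 2

# Example: the pulse falls across taps B and C; A and D see only ambient light.
print(select_taps([10, 80, 60, 12]))  # → (80, 60, 11.0)
```

Note that the pairs wrap around (D, A), so a pulse arriving near the end of the cycle is still captured by an adjacent pair.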
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and shall not limit the implementation of the embodiments of the present invention.
The distance measuring method provided by the embodiments of the invention has the following beneficial effects: multiple taps are arranged in the depth camera, the control module controls the acquisition timing of the multiple acquisition units, and each tap receives at least two groups of acquisition signals of different frequencies within one exposure time. The maximum measurement distance of the depth camera is thereby effectively increased without changing the pulse width, eliminating the trade-off in existing test schemes where pulse width is proportional to test distance and power consumption but inversely related to test accuracy. The extension of the measurement distance is no longer limited by the pulse width, so a longer test distance is achieved while keeping power consumption low and test accuracy high.
Fig. 12 is a schematic diagram of a terminal device according to an embodiment of the present invention. As shown in fig. 12, the terminal device 6 of this embodiment includes: a processor 60, a memory 61 and a computer program 62 stored in the memory 61 and executable on the processor 60, such as a program of a distance measuring method. The processor 60, when executing the computer program 62, implements the steps of the various distance measurement method embodiments described above, such as steps S11 to S13 shown in fig. 4. Alternatively, the processor 60, when executing the computer program 62, performs the functions of the modules/units of the apparatus embodiments described above.
Illustratively, the computer program 62 may be partitioned into one or more modules/units that are stored in the memory 61 and executed by the processor 60 to complete the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions for describing the execution of the computer program 62 in the terminal device 6.
The terminal device 6 may be a computing device such as a computer, a notebook, a palm computer, a cloud server, etc. The terminal device may include, but is not limited to, a processor 60 and a memory 61. It will be appreciated by those skilled in the art that fig. 12 is merely an example of the terminal device 6 and does not constitute a limitation of it; the terminal device may include more or fewer components than illustrated, combine certain components, or have different components, e.g., it may further include an input-output device, a network access device, a bus, etc.
The processor 60 may be a central processing unit (Central Processing Unit, CPU), but may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 61 may be an internal storage unit of the terminal device 6, such as a hard disk or a memory of the terminal device 6. The memory 61 may be an external storage device of the terminal device 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the terminal device 6. Further, the memory 61 may also include both an internal storage unit and an external storage device of the terminal device 6. The memory 61 is used for storing the computer program as well as other programs and data required by the terminal device. The memory 61 may also be used for temporarily storing data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, each embodiment is described with its own emphasis; for parts not described or detailed in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on this understanding, the present invention may implement all or part of the flow of the methods of the above embodiments by instructing the relevant hardware through a computer program; the computer program may be stored in a computer readable storage medium and, when executed by a processor, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so on. It should be noted that the content of the computer readable medium may be adjusted as required by legislation and patent practice in each jurisdiction; for example, in certain jurisdictions, in accordance with legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunication signals.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be included in the scope of the present invention.

Claims (8)

1. A distance measurement method, comprising:
transmitting an optical signal to a body to be measured;
receiving optical signals reflected by the body to be detected through a receiving module, wherein the receiving module comprises a plurality of signal acquisition units, the time sequences of the signal acquisition units are different, and each signal acquisition unit has at least two groups of acquisition signals with different frequencies in one exposure time;
calculating the time of flight of the optical signal;
the calculating the time of flight of the optical signal comprises:
receiving signals acquired by different signal acquisition units of the receiving module, and calculating the corresponding electric signal intensity;
determining the electric signal intensity generated by the ambient light, and the first electric signal intensity and the second electric signal intensity according to the electric signal intensities of the different signal acquisition units;
and calculating the flight time of the optical signal by the following formula:
[formula (1), shown as an image in the original, expressing the time of flight Δt_i in terms of the quantities defined below]
wherein Q_0i is the intensity of the electrical signal generated by ambient light;
Q_1i and Q_2i are the first electrical signal intensity and the second electrical signal intensity, respectively;
T_p_i is the pulse width in the i-th period of the optical signal;
the value of M_i is 0, 1 or 2;
k is the number of the signal acquisition units;
the value of N_i is determined according to the remainder theorem;
alternatively, N_i is obtained by traversing combinations within the maximum test distance, taking as N_i the value for which the variance of Δt_i is minimal;
X_i is determined from Q_0i, Q_1i and Q_2i.
2. The distance measurement method according to claim 1, wherein the receiving, by the receiving module, the optical signal reflected back by the object to be measured includes:
in one exposure time, different signal acquisition units of the receiving module are continuously started successively according to acquisition signals of a first frequency, and only one signal acquisition unit is started at the same time;
and in the one exposure time, different signal acquisition units of the receiving module are continuously started in sequence according to the acquisition signals of the second frequency, and only one signal acquisition unit is started at the same time.
3. The distance measuring method according to claim 1, wherein the maximum test distance D_max is:
D_max = min(LCM(D_max_1, D_max_2, …, D_max_m), min(L_max_1, L_max_2, …, L_max_m))
wherein LCM(D_max_1, D_max_2, …, D_max_m) is the least common multiple of D_max_1, D_max_2, …, D_max_m;
D_max_i is the maximum test distance in the i-th period, D_max_i = T_Demod_i·c/2, where T_Demod_i is the i-th period of the acquisition signal;
L_max_i = T_Mod_i·c/2, where T_Mod_i is the i-th period of the optical signal.
4. The distance measuring method according to any one of claims 1 to 3, wherein, after the step of calculating the time of flight of the optical signal, the method further comprises:
calculating the distance of the object to be measured according to the flight time of the optical signal, as:
D_i = c·Δt_i/2
wherein D_i is the distance of the object to be measured;
c is the speed of light;
Δt_i is the time of flight of the beam.
5. A depth camera, comprising:
the transmitting module is used for transmitting optical signals to the to-be-detected body;
the receiving module is provided with a plurality of signal acquisition units and is used for receiving the optical signals reflected by the to-be-detected body;
a control module for controlling the transmitting module to emit optical signals, controlling the signal acquisition units in the receiving module to have different timings, and sending at least two groups of acquisition signals of different frequencies to each signal acquisition unit within one exposure time;
wherein the flight time of the optical signal is obtained according to the signals collected by the receiving module;
the obtaining the flight time of the optical signal according to the signal collected by the receiving module comprises:
receiving signals acquired by different signal acquisition units of the receiving module, and calculating the corresponding electric signal intensity;
determining the electric signal intensity generated by the ambient light, and the first electric signal intensity and the second electric signal intensity according to the electric signal intensities of the different signal acquisition units;
and calculating the flight time of the optical signal by the following formula:
[formula (1), shown as an image in the original, expressing the time of flight Δt_i in terms of the quantities defined below]
wherein Q_0i is the intensity of the electrical signal generated by ambient light;
Q_1i and Q_2i are the first electrical signal intensity and the second electrical signal intensity, respectively;
T_p_i is the pulse width in the i-th period of the optical signal;
the value of M_i is 0, 1 or 2;
k is the number of the signal acquisition units;
the value of N_i is determined according to the remainder theorem;
alternatively, N_i is obtained by traversing combinations within the maximum test distance, taking as N_i the value for which the variance of Δt_i is minimal;
X_i is determined from Q_0i, Q_1i and Q_2i.
6. The depth camera of claim 5, wherein the receiving module comprises a lens, a filter, and an image sensor disposed along an optical path;
the light beam passes through the lens, which adjusts its transmission path, is then filtered by the optical filter, and finally reaches the image sensor;
the image sensor includes a plurality of pixels, each pixel including a plurality of the signal acquisition units.
7. Terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the distance measuring method according to any of claims 1 to 4 when the computer program is executed.
8. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the distance measuring method according to any one of claims 1 to 4.
CN201910425547.9A 2019-05-21 2019-05-21 Distance measurement method and depth camera Active CN110187355B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910425547.9A CN110187355B (en) 2019-05-21 2019-05-21 Distance measurement method and depth camera


Publications (2)

Publication Number Publication Date
CN110187355A CN110187355A (en) 2019-08-30
CN110187355B true CN110187355B (en) 2023-07-04

Family

ID=67717110

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910425547.9A Active CN110187355B (en) 2019-05-21 2019-05-21 Distance measurement method and depth camera

Country Status (1)

Country Link
CN (1) CN110187355B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110673152A (en) * 2019-10-29 2020-01-10 炬佑智能科技(苏州)有限公司 Time-of-flight sensor and distance measuring method thereof
CN110673153A (en) * 2019-10-29 2020-01-10 炬佑智能科技(苏州)有限公司 Time-of-flight sensor and distance measuring method thereof
CN111025315B (en) * 2019-11-28 2021-11-19 奥比中光科技集团股份有限公司 Depth measurement system and method
CN112987020A (en) * 2019-11-29 2021-06-18 Oppo广东移动通信有限公司 Photographing method, photographing apparatus, electronic device, and storage medium
CN111142088B (en) * 2019-12-26 2022-09-13 奥比中光科技集团股份有限公司 Light emitting unit, depth measuring device and method
CN111366916B (en) * 2020-02-17 2021-04-06 山东睿思奥图智能科技有限公司 Method and device for determining distance between interaction target and robot and electronic equipment
CN113514851B (en) * 2020-03-25 2024-06-04 深圳市光鉴科技有限公司 Depth camera
CN111580119B (en) * 2020-05-29 2022-09-02 Oppo广东移动通信有限公司 Depth camera, electronic device and control method
CN115079187A (en) * 2021-03-10 2022-09-20 奥比中光科技集团股份有限公司 Flight time measuring method and device and time flight depth camera

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1318939A (en) * 2000-04-14 2001-10-24 宜霖科技股份有限公司 Electronic camera and its exposure method
CN101278549A (en) * 2005-10-04 2008-10-01 卢森特技术有限公司 Multiple exposure optical imaging apparatus
CN107773259A (en) * 2016-08-31 2018-03-09 上海奕瑞光电子科技股份有限公司 A kind of flat panel detector, x-ray imaging system and automatic exposure detection method

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9439040B2 (en) * 2014-08-15 2016-09-06 Wensheng Hua System and method of time of flight detection
JP6280002B2 (en) * 2014-08-22 2018-02-14 浜松ホトニクス株式会社 Ranging method and ranging device
CN104656095A (en) * 2015-02-11 2015-05-27 深圳市志奋领科技有限公司 Wireless 4G (4th-generation) distance measurement sensor based on TOF (time of flight) technology and realization method for wireless 4G distance measurement sensor
WO2017013857A1 (en) * 2015-07-22 2017-01-26 パナソニックIpマネジメント株式会社 Distance measurement device
US10481263B2 (en) * 2016-03-10 2019-11-19 Ricoh Company, Ltd. Range finding apparatus, moveable apparatus, robot, three dimensional measurement apparatus, method of measuring three dimensional information, and storage medium
JP6673084B2 (en) * 2016-08-01 2020-03-25 株式会社デンソー Light flight type distance measuring device
CN107576890A (en) * 2017-08-18 2018-01-12 北京睿信丰科技有限公司 A kind of time domain distance-finding method and device
CN108594254B (en) * 2018-03-08 2021-07-09 北京理工大学 Method for improving ranging precision of TOF laser imaging radar
CN109343070A (en) * 2018-11-21 2019-02-15 深圳奥比中光科技有限公司 Time flight depth camera
CN109613517B (en) * 2018-12-12 2021-01-15 北醒(北京)光子科技有限公司 TOF Lidar multi-machine anti-interference working method


Also Published As

Publication number Publication date
CN110187355A (en) 2019-08-30

Similar Documents

Publication Publication Date Title
CN110187355B (en) Distance measurement method and depth camera
WO2021008209A1 (en) Depth measurement apparatus and distance measurement method
US11604255B2 (en) Personal LADAR sensor
CN110221274B (en) Time flight depth camera and multi-frequency modulation and demodulation distance measuring method
CN110221272B (en) Time flight depth camera and anti-interference distance measurement method
WO2021051478A1 (en) Time-of-flight-based distance measurement system and method for dual-shared tdc circuit
CN109991584A (en) A kind of jamproof distance measurement method and depth camera
US9316735B2 (en) Proximity detection apparatus and associated methods having single photon avalanche diodes for determining a quality metric based upon the number of events
CN110320528B (en) Time depth camera and noise reduction distance measurement method for multi-frequency modulation and demodulation
US8804101B2 (en) Personal LADAR sensor
CN110221273B (en) Time flight depth camera and distance measuring method of single-frequency modulation and demodulation
CN109991583A (en) A kind of jamproof distance measurement method and depth camera
WO2021051481A1 (en) Dynamic histogram drawing time-of-flight distance measurement method and measurement system
WO2021051480A1 (en) Dynamic histogram drawing-based time of flight distance measurement method and measurement system
CN110441786B (en) TOF ranging method and device
CN109870704A (en) TOF camera and its measurement method
US20220043129A1 (en) Time flight depth camera and multi-frequency modulation and demodulation distance measuring method
CN109917412A (en) A kind of distance measurement method and depth camera
WO2022241942A1 (en) Depth camera and depth calculation method
CN113504542B (en) Distance measuring system and method, device and equipment for calculating reflectivity of measured object
CN113030999B (en) Time-of-flight sensing system and image sensor for use therein
WO2023279621A1 (en) Itof distance measurement system and method for calculating reflectivity of measured object
WO2023279619A1 (en) Itof ranging system, and method for shielding fuzzy distance value
WO2023279620A1 (en) Itof ranging system, and method, apparatus, and device for determining relative accuracy thereof
WO2023279755A1 (en) Method and apparatus for masking ambiguity distance values of ranging system, and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 11-13 / F, joint headquarters building, high tech Zone, 63 Xuefu Road, Yuehai street, Nanshan District, Shenzhen, Guangdong 518000

Applicant after: Obi Zhongguang Technology Group Co.,Ltd.

Address before: 12 / F, joint headquarters building, high tech Zone, 63 Xuefu Road, Yuehai street, Nanshan District, Shenzhen, Guangdong 518000

Applicant before: SHENZHEN ORBBEC Co.,Ltd.

GR01 Patent grant
GR01 Patent grant