WO2022166723A1 - Depth measurement method, chip, and electronic device - Google Patents

Depth measurement method, chip, and electronic device

Info

Publication number
WO2022166723A1
Authority
WO
WIPO (PCT)
Prior art keywords
phase
image
extended
exposure
duration
Prior art date
Application number
PCT/CN2022/074100
Other languages
French (fr)
Chinese (zh)
Inventor
李明采
Original Assignee
深圳市汇顶科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市汇顶科技股份有限公司 filed Critical 深圳市汇顶科技股份有限公司
Publication of WO2022166723A1 publication Critical patent/WO2022166723A1/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70: Circuitry for compensating brightness variation in the scene
    • H04N23/73: Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02: Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06: Systems determining position data of a target
    • G01S17/08: Systems determining position data of a target for measuring distance only

Definitions

  • the embodiments of the present application relate to the field of ranging, and in particular, to a depth measurement method, a chip, and an electronic device.
  • Time-of-flight (TOF) technology is a way to obtain depth information.
  • Its principle is to calculate the distance from the measuring device (such as a TOF camera) to the target object from the time of flight of light in the air.
  • A complementary metal-oxide-semiconductor (CMOS) pixel array is used as the light sensor at the receiving end, and modulated light is used as the light source for the measurement: for example, a modulated single pulse or a continuously modulated optical signal is emitted by the transmitting end, the receiving end receives the light reflected back by the target object, and the current distance of the object is calculated from the phase difference between the emitted light and the received light.
  • In the related art, the method used is to collect two sets of exposure data for fusion.
  • The two sets of exposure data are the exposure data at low exposure and the exposure data at high exposure: the exposure data at low exposure includes 4 subframe images of different phases collected at low exposure, and the exposure data at high exposure includes 4 subframe images of different phases collected at high exposure.
  • The purpose of the embodiments of the present application is to provide a depth measurement method, a chip and an electronic device, so that power consumption can be reduced while the dynamic range of ranging is improved.
  • An embodiment of the present application provides a depth measurement method, including: acquiring N subframe images of a target scene according to preset N exposure durations corresponding to N phases respectively, where N is a natural number greater than or equal to 2 and the N phases are N phases whose phase differences from the phase of the emitted light are different; and determining the depth information of the target scene according to the N subframe images. The N phases include a reference phase and an extended phase, and the N exposure durations include a reference exposure duration corresponding to the reference phase and an extended exposure duration corresponding to the extended phase; the reference exposure duration is less than or equal to the maximum duration for which the image is not overexposed at the reference phase and a preset reference distance, and the extended exposure duration is between the reference exposure duration and the maximum duration for which the image is not overexposed at the extended phase and the reference distance.
  • Embodiments of the present application further provide a depth measurement method, including: emitting emission light for depth measurement at a reference phase; collecting a first subframe image of a target scene according to a first phase and a reference exposure duration, where the phase difference between the first phase and the reference phase is 0 degrees; collecting a second subframe image of the target scene according to a second phase, where the phase difference between the second phase and the reference phase is 180 degrees; collecting a third subframe image of the target scene according to a third phase, where the phase difference between the third phase and the reference phase is 90 degrees; and collecting a fourth subframe image of the target scene according to a fourth phase, where the phase difference between the fourth phase and the reference phase is 270 degrees. The exposure durations used to collect the second, third and fourth subframe images are each greater than the reference exposure duration, and the first, second, third and fourth subframe images are used to determine one frame of depth image.
  • Embodiments of the present application further provide a chip, which is arranged in an electronic device and connected to a memory in the electronic device; the memory stores instructions executable by the chip, and when the instructions are executed by the chip, the chip can perform the above depth measurement method.
  • Embodiments of the present application also provide an electronic device, including the above-mentioned chip and a memory connected to the chip.
  • In the embodiments of the present application, N subframe images of the target scene are acquired according to the preset N exposure durations corresponding to the N phases, where N is a natural number greater than or equal to 2 and the N phases have different phase differences from the phase of the emitted light, and the depth information of the target scene is determined from the N subframe images. The N phases include the reference phase and the extended phase, and the N exposure durations include the reference exposure duration corresponding to the reference phase and the extended exposure duration corresponding to the extended phase. The reference exposure duration is less than or equal to the maximum duration for which the image is not overexposed at the reference phase and the preset reference distance, ensuring that the subframe image acquired with the reference exposure duration is not overexposed; the extended exposure duration is between the reference exposure duration and the maximum duration for which the image is not overexposed at the extended phase and the reference distance, ensuring that the subframe image acquired with the extended exposure duration is not overexposed.
  • Because the extended exposure duration differs from the reference exposure duration, depth information at different distances in the target scene can be determined under different exposure durations: of the extended exposure duration and the reference exposure duration, the longer one is conducive to accurately determining the depth information of far points in the target scene, and the shorter one is conducive to accurately determining the depth information of near points. This improves the dynamic range of ranging without overexposure. Moreover, because one exposure duration is used in each phase to obtain one subframe image, it is not necessary to use 2 different exposure durations per phase to obtain 2 subframe images, and hence not necessary to fuse subframe images obtained at 2 different exposure durations for each phase, so the dynamic range of ranging can be improved while the power consumption is reduced.
  • In an example, the maximum duration for which the image is not overexposed at the extended phase and the reference distance is determined as follows: within the same exposure duration, obtain the photon count of the subframe image at the reference phase and the photon count of the subframe image at the extended phase respectively; calculate the first ratio of the photon count of the subframe image at the reference phase to the photon count of the subframe image at the extended phase; based on the modulation frequency of the optical signal in the depth measurement, the extended phase and the reference distance, calculate the farthest measurement distance at which the photon count of the subframe image at the extended phase reaches the limit value; calculate the square of the second ratio of the farthest measurement distance to the reference distance; and select the smaller of the first ratio and the square of the second ratio, multiplying it by the maximum duration for which the image is not overexposed at the reference phase and the preset reference distance to obtain the maximum duration for which the image is not overexposed at the extended phase and the reference distance.
  • This determination considers two dimensions: one ensures that the image is not overexposed when measuring at close range, and the other ensures that the image is not overexposed when measuring at long range. The maximum duration for which the image is not overexposed at the extended phase and the reference distance can thus be determined more reasonably, so that the final value meets both the close-range and the long-range non-overexposure requirements.
  • In an example, the extended phases include two extended phases of a first type and one extended phase of a second type, and the phase difference between the two first-type extended phases is 180°. That is, in the embodiment of the present application, four subframe images of the target scene are acquired according to the preset four exposure durations corresponding to the four phases, and the depth information of the target scene is determined using the four subframe images. With this four-phase sampling method, ranging accuracy, power consumption and speed can all be taken into account; that is, the power consumption and speed are not greatly affected while the ranging accuracy is improved.
  • In an example, the depth information of the target scene includes depth information of multiple pixels in the target scene, and after the depth information of the target scene is determined according to the N subframe images, the method further includes: determining, according to the N subframe images, multiple confidence levels corresponding to the depth information of the multiple pixels; and, if the extended exposure duration has not reached the maximum duration for which the image is not overexposed at the extended phase and the reference distance, increasing the extended exposure duration. It is understandable that a low depth measurement accuracy for several pixels in the target scene indicates that the extended exposure duration setting is likely unreasonable, and an extended exposure duration that has not reached the maximum non-overexposure duration at the extended phase and the reference distance indicates that there is still room to increase it.
  • Increasing the extended exposure duration only when the number of confidence levels below a preset confidence threshold exceeds a preset number threshold, and the extended exposure duration has not reached the maximum non-overexposure duration at the extended phase and the reference distance, means the duration is increased at a reasonable time: the extended exposure duration does not have to be set long at the beginning, which is beneficial for finding the minimum extended exposure duration that makes the confidence greater than the confidence threshold, further reducing power consumption while improving the dynamic range and ranging accuracy.
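The adjustment rule above can be sketched as a small helper. The function name, the fixed increment `step`, and the argument layout are illustrative assumptions; the embodiment only specifies the condition under which the extended exposure duration is increased, not how much it grows per step.

```python
def adjust_extended_exposure(confidences, conf_threshold, count_threshold,
                             t_ext, t_ext_max, step):
    """One adjustment step for the extended exposure duration.

    confidences: per-pixel confidence levels of the current depth frame.
    conf_threshold: preset confidence threshold.
    count_threshold: preset number threshold of low-confidence pixels.
    t_ext: current extended exposure duration.
    t_ext_max: maximum non-overexposing duration at the extended phase
               and the reference distance.
    step: assumed fixed increment per adjustment (not specified in the text).
    Returns the (possibly increased) extended exposure duration.
    """
    low_conf = sum(1 for a in confidences if a < conf_threshold)
    if low_conf > count_threshold and t_ext < t_ext_max:
        # grow toward the maximum non-overexposing duration, never past it
        t_ext = min(t_ext + step, t_ext_max)
    return t_ext
```

Calling this once per depth frame lets the duration converge upward only while too many pixels stay below the confidence threshold, which matches the "increase at a reasonable time" behaviour described above.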
  • In the embodiments of the present application, the emission light used for depth measurement is emitted at the reference phase; the first subframe image of the target scene is collected according to the first phase and the reference exposure duration, where the phase difference between the first phase and the reference phase is 0 degrees; the second subframe image is collected according to the second phase, where the phase difference between the second phase and the reference phase is 180 degrees; the third subframe image is collected according to the third phase, where the phase difference between the third phase and the reference phase is 90 degrees; and the fourth subframe image is collected according to the fourth phase, where the phase difference between the fourth phase and the reference phase is 270 degrees. The exposure durations used for collecting the second, third and fourth subframe images are each greater than the reference exposure duration, and the first, second, third and fourth subframe images are used to determine one frame of depth image.
  • One frame of depth image can thus be determined from four subframe images. Because the exposure durations used for collecting the second, third and fourth subframe images are each longer than the reference exposure duration, those subframe images can determine the depth information of far points in the target scene, while the first subframe image can determine the depth information of near points. Therefore, in this embodiment, the depth information of far points and of near points in the target scene can be obtained at the same time from the four subframe images, which greatly improves the frame rate and reduces power consumption while improving the dynamic range of ranging.
  • Determining the depth image from four subframe images also improves the measurement accuracy while ensuring that the time required for the measurement is not long; that is, it increases the speed of determining one frame of depth image.
  • FIG. 1 is a block diagram of the electronic device for depth measurement mentioned in the embodiments of the present application;
  • FIG. 2 is a waveform diagram of an optical signal transmitted, an optical signal received, and an optical signal sampled based on 4 phases in the related art mentioned in the embodiments of the present application;
  • FIG. 4 is a schematic diagram of the different exposure durations corresponding to the 4 phases when 0° is the reference phase and the reference distance is 0, as mentioned in the embodiments of the present application;
  • FIG. 5 is a schematic diagram of the different exposure durations corresponding to the 4 phases when 0° is the reference phase and the reference distance is greater than 0, as mentioned in the embodiments of the present application;
  • FIG. 6 is a flowchart of a method for determining the maximum duration that the image is not exposed under the extended phase and the reference distance mentioned in the embodiment of the present application;
  • FIG. 9 is a schematic structural diagram of the electronic device mentioned in the embodiment of the present application.
  • the embodiment of the present application relates to a depth measurement method, which is applied to an electronic device.
  • the electronic device is used to measure depth information, and the depth information can be understood as the distance between the object to be measured and the electronic device.
  • the electronic device may be a TOF ranging device, and may be a TOF camera in a specific implementation.
  • the depth information measured by the TOF camera can be understood as the distance between the object to be measured and the TOF camera.
  • The depth measurement technology involved in this embodiment periodically modulates the emitted optical signal, measures the phase delay of the reflected optical signal relative to the emitted optical signal, and then calculates the depth from the phase delay and the speed of light. This measurement technology can be called indirect time-of-flight (indirect-TOF, iTOF) technology.
  • Refer to FIG. 1 for the module diagram of the electronic device for depth measurement, which includes: a transmitting module 101, a receiving module 102, and a processing module 103.
  • The transmitting module 101 is used to transmit the optical signal modulated at the modulation frequency f, and may use a laser transmitter as the light source.
  • the receiving module 102 is used to receive the light reflected by the target object (reflected light for short), and obtain sub-frame images corresponding to different phases by sampling the reflected light at different phases.
  • the receiving module mainly uses a CMOS image sensor to detect the reflected light signal.
  • An image sensor may also be referred to as a photodetector, and generally includes a plurality of pixels distributed in an array. For example, in the commonly used 4-phase sampling method, detection by the pixel array of the image sensor yields 4 subframes of different phases, i.e. 4 subframe images with different phases. Each pixel in the 4 subframe images corresponds to a respective photon count, denoted Q1, Q2, Q3 and Q4.
  • Refer to FIG. 2 for waveform diagrams of the transmitted optical signal, the received optical signal, and the optical signal sampled at the 4 phases; in the related art, the 4 phases adopt the same exposure duration.
  • the above phase can be understood as the phase offset of the starting position of the integration window relative to the starting position of the emission window, wherein the phase corresponding to the starting position of the emission window can be regarded as 0° as a reference, and the integration window can also be called the receive window.
  • the four phases in Figure 2 are 0°, 90°, 180°, and 270°, respectively.
  • The phase offset of the starting position of the 0° integration window relative to the starting position of the emission window is 0°, i.e. the 0° integration window completely coincides with the emission window; the phase offset of the starting position of the 90° integration window is 90°; that of the 180° integration window is 180°; and that of the 270° integration window is 270°.
  • The integration window can be understood as the interval during which the photosensitive control switch of a pixel in the photodetector is held at a high level in its working timing diagram.
  • Integration windows are divided into valid and invalid integration windows: a valid integration window is the part of an integration window that actually receives reflected light, i.e. the shaded part of each integration window in FIG. 2, while an invalid integration window is the part that does not actually receive reflected light, i.e. the unshaded part of each integration window in FIG. 2.
  • When the photosensitive control switch is switched to a high level, the pixel is allowed to start receiving reflected light; after the specified exposure duration ends, the sampling of one phase subframe is complete.
  • a sub-frame image can also be understood as an image formed by each pixel after all integration windows are integrated within a specified exposure duration.
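The windowed sampling described above can be illustrated with a small sketch that computes, for a given distance, the ideal fraction of "full photons" falling into each of the four integration windows. The model is an assumption for illustration: a 50%-duty square-wave emission, no ambient light, and triangular overlap between the half-period reflected pulse and each half-period integration window; the function names are hypothetical.

```python
import math

C = 299_792_458.0  # speed of light, m/s


def phase_delay(distance_m, mod_freq_hz):
    """Round-trip phase delay of the reflected signal, wrapped to [0, 2*pi)."""
    return (4 * math.pi * mod_freq_hz * distance_m / C) % (2 * math.pi)


def subframe_fractions(distance_m, mod_freq_hz):
    """Ideal fraction of 'full photons' collected in the 0/90/180/270-degree
    integration windows. Keys of the returned dict are window phases in
    degrees; a value of 1.0 means the window is entirely valid."""
    phi = phase_delay(distance_m, mod_freq_hz)
    fractions = {}
    for deg in (0, 90, 180, 270):
        theta = math.radians(deg)
        # angular distance between window start and reflected-pulse start
        delta = abs(phi - theta) % (2 * math.pi)
        delta = min(delta, 2 * math.pi - delta)
        # two half-period arcs on the circle overlap by (pi - delta) radians
        fractions[deg] = (math.pi - delta) / math.pi
    return fractions
```

At distance 0 this gives a fully valid 0° window, a fully invalid 180° window, and half-valid 90° and 270° windows, consistent with the FIG. 4 situation described below.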
  • The processing module 103 is configured to send an image acquisition command to the receiving module 102, and the receiving module 102 forwards the image acquisition command to the transmitting module 101, so that the transmitting module 101 emits the optical signal.
  • The processing module 103 is further configured to receive the phase data sent by the receiving module 102. The phase data may include the image data of the 4 subframe images obtained by sampling at the above 4 phases, and the image data may be embodied as the photon counts of the 4 subframe images, denoted Q1, Q2, Q3 and Q4 in turn. The processing module 103 can then calculate the depth d by the standard 4-phase relations: φ = arctan((Q3 − Q4)/(Q1 − Q2)), d = c·φ/(4π·f), and A = √((Q1 − Q2)² + (Q3 − Q4)²), where f is the modulation frequency of the optical signal, φ is the phase delay of the reflected optical signal relative to the emitted optical signal, c is the speed of light, and A is the confidence of the measured depth d.
  • Here, q1 is the number of photons received by the receiving module 102 in each integration window of the 0° phase subframe, n1 is the number of integration windows within the exposure duration corresponding to the 0° phase subframe, and Q1 is the sum of the photons received by the receiving module 102 in all integration windows of the 0° phase subframe.
  • Likewise, q2, n2 and Q2 are the per-window photon count, the number of integration windows and the total photon count for the 180° phase subframe; q3, n3 and Q3 are those for the 90° phase subframe; and q4, n4 and Q4 are those for the 270° phase subframe.
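The depth calculation can be sketched as follows, using the standard four-phase iTOF relations consistent with the variable definitions above (Q1, Q2, Q3, Q4 are the photon sums of the 0°, 180°, 90° and 270° subframes); the function name is hypothetical.

```python
import math

C = 299_792_458.0  # speed of light, m/s


def depth_from_subframes(Q1, Q2, Q3, Q4, mod_freq_hz):
    """Depth and confidence from the four photon sums, where Q1, Q2, Q3, Q4
    belong to the 0, 180, 90 and 270-degree subframes respectively."""
    # phase delay of the reflected signal, wrapped to [0, 2*pi)
    phi = math.atan2(Q3 - Q4, Q1 - Q2) % (2 * math.pi)
    depth = C * phi / (4 * math.pi * mod_freq_hz)
    # amplitude of the correlation signal, used as the confidence A
    confidence = math.sqrt((Q1 - Q2) ** 2 + (Q3 - Q4) ** 2)
    return depth, confidence
```

Using `atan2` instead of a plain arctangent keeps the recovered phase in the correct quadrant even when Q1 − Q2 is zero or negative.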
  • Each pixel on the photodetector in the receiving module 102 can independently obtain a photon count, and the processing module 103 can calculate the depth information of one pixel in the target scene from one pixel on the photodetector; from multiple pixels on the photodetector, the depth information of multiple pixels in the target scene can be calculated.
  • The photon count acquired by a pixel can be understood as follows: the reflected light signal is imaged on the pixel and accumulates photocharge during the exposure duration, i.e. it is the number of photons generated in the pixel by the accumulated reflected light during the exposure duration; it can also be understood as the sum of the photons received in all integration windows within the exposure duration.
  • In the related art, two sets of exposure data (4 subframe images at low exposure plus 4 subframe images at high exposure) are fused to expand the dynamic measurement range; the 4 phases at low exposure use the same exposure duration, and the 4 phases at high exposure use the same exposure duration.
  • Each phase therefore needs to be sampled with 2 different exposure durations to obtain 2 subframe images, and the 8 subframe images obtained from the 4-phase sampling are fused to output one frame of depth information, which significantly reduces the frame rate and increases the power consumption.
  • The fusion of the 8 subframe images can be understood as follows: one frame of low-exposure depth information is obtained from the 4 low-exposure subframe images, one frame of high-exposure depth information is obtained from the 4 high-exposure subframe images, and the low-exposure depth information and the high-exposure depth information are then fused to obtain the final measured depth information.
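For contrast, the related-art fusion of a low-exposure and a high-exposure depth frame can be sketched per pixel as below. The selection policy shown (prefer the high-exposure depth when it is at least as confident, but fall back to the low-exposure depth where the high-exposure capture saturated) is one plausible rule chosen for illustration; the related art does not specify the exact fusion rule, and all names are hypothetical.

```python
def fuse_depth(depth_low, conf_low, depth_high, conf_high, sat_high):
    """Per-pixel fusion of a low-exposure and a high-exposure depth frame.

    Each argument is a flat list with one entry per pixel; sat_high flags
    pixels that overexposed (saturated) in the high-exposure capture.
    """
    fused = []
    for dl, cl, dh, ch, sat in zip(depth_low, conf_low,
                                   depth_high, conf_high, sat_high):
        # high exposure has better SNR for far points, but is unusable
        # wherever the pixel saturated at close range
        fused.append(dl if sat else (dh if ch >= cl else dl))
    return fused
```

The cost this sketch makes visible is that every output frame needs both input frames, i.e. 8 subframe captures, which is exactly the frame-rate and power penalty the embodiments avoid.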
  • the following depth measurement method is provided. Refer to FIG. 3 for a flowchart of the depth measurement method in this embodiment, including:
  • Step 301: Acquire N subframe images of the target scene according to the preset N exposure durations corresponding to the N phases respectively.
  • Step 302: Determine the depth information of the target scene according to the N subframe images.
  • N is a natural number greater than or equal to 2
  • the N phases are N phases with different phase differences from the phase of the emitted light.
  • one exposure duration is used to obtain one sub-frame image, and it is not necessary to adopt two different exposure durations for each phase to obtain two sub-frame images.
  • each phase is sampled based on one exposure duration to obtain one subframe image, and one frame of depth information is output based on N subframe images collected under N different phases.
  • In an example, N is a natural number greater than or equal to 2 and less than 8; that is, one frame of depth information is output from at most 7 subframe images obtained by sampling at 7 different phases. Compared with fusing 8 subframe images to output one frame of depth information, this increases the frame rate and reduces power consumption.
  • For example, N may be 4, to balance the dynamic range, frame rate, measurement accuracy, power consumption and the time taken by the measurement: to a certain extent the frame rate and measurement accuracy can be improved while the dynamic range is improved, and it can also be ensured that the power consumption is not large and the measurement time is not long.
  • The N different phases include a reference phase and extended phases, and the N exposure durations include a reference exposure duration corresponding to the reference phase and extended exposure durations corresponding to the extended phases. The reference exposure duration is less than or equal to the maximum duration for which the image is not overexposed at the reference phase and the preset reference distance, ensuring that the subframe image of the target scene acquired with the reference exposure duration corresponding to the reference phase is not overexposed.
  • Each extended exposure duration is between the reference exposure duration and the maximum duration for which the image is not overexposed at the corresponding extended phase and the reference distance, ensuring that the subframe images of the target scene acquired with the extended exposure durations corresponding to the extended phases are not overexposed.
  • the reference phase can be selected according to actual needs, and the reference phase corresponds to the reference distance.
  • For example, when the reference phase is 0°, the reference distance corresponding to 0° is the shortest distance range0 of the applied ranging.
  • range0 can be understood as the shortest distance that needs to be measured in the ranging scenario to which the TOF ranging device is applied. It can also be understood that, if the closest distance of the applied ranging is denoted range0, then, referring to FIG. 4, when the integration window is completely aligned with the emission window, the phase is recorded as the reference phase 0°; that is, the reference phase is the phase whose phase difference with the phase of the emitted light is 0°.
  • For example, when range0 is 0 meters, the number of photons received in the integration windows of the 0° phase subframe is the full photon count, i.e. the integration windows of the 0° phase subframe are all valid integration windows, while the number of photons received in the integration windows of the 180° phase subframe is 0, i.e. the integration windows of the 180° phase subframe are all invalid integration windows.
  • Here, "full photons" can be understood as the total number of photons emitted within one emission window.
  • In other scenarios, the closest distance range0 of the applied ranging is greater than 0, for example 0.3 meters.
  • When range0 is greater than 0, refer to FIG. 5: the number of photons received in the integration windows of the 0° phase subframe is less than the full photon count, i.e. the integration windows of the 0° phase subframe are not all valid and also include invalid integration windows; correspondingly, the number of photons received in the integration windows of the 180° phase subframe is greater than 0, i.e. they are not all invalid and also include a small portion of valid integration windows. The shaded parts in FIG. 5 are all valid integration windows.
  • Denote the maximum duration for which the image is not overexposed at the reference phase and the preset reference distance as t0, and denote the reference exposure duration as t1; the reference exposure duration t1 is less than or equal to t0.
  • the TOF ranging device includes a photosensitive chip.
  • The number of photons that the photosensitive chip can receive is limited; when the limit value is reached, the photosensitive chip is considered saturated, and increasing the exposure duration further no longer increases the number of photons obtained. Therefore, t0 can be understood as the critical exposure duration beyond which the photon count no longer increases with the exposure duration.
  • When the reference phase is 0° and the reference distance is range0, t0 can be determined as follows: adjust the starting point of the integration window of the reference phase so that its separation from the starting point of the emitted optical signal is 0°, i.e. align the start of the 0° integration window with the start of the emitted optical signal, so that the overlap of the 0° integration window with the modulated wave emitted by the transmitting module is largest. Then increase the exposure duration from small to large, for example gradually from 1 ms to 10 ms, until the number of photons received by the photosensitive chip no longer increases; that critical exposure duration is t0.
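The sweep just described can be sketched as a simple search for the saturation point. Here `measure_photons` is a hypothetical stand-in for a capture at a given exposure duration with the 0° window aligned to the emitter; the function name and the uniform step are assumptions.

```python
def find_critical_exposure(measure_photons, t_start, t_stop, t_step):
    """Sweep the exposure duration upward and return the last duration that
    still increased the measured photon count (an estimate of t0).

    measure_photons: callable taking an exposure duration and returning the
    photon count captured at that duration.
    """
    prev = None
    t = t_start
    while t <= t_stop:
        count = measure_photons(t)
        if prev is not None and count <= prev:
            # saturation reached between the previous step and this one
            return t - t_step
        prev = count
        t += t_step
    return t_stop  # never saturated within the sweep range
```

A finer `t_step` narrows the bracket around the true critical duration at the cost of more captures.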
  • the method of determining the maximum duration of the image without exposure under the extended phase and the reference distance can refer to FIG. 6 , including:
  • Step 601 within the same exposure duration, obtain the photon number of the sub-frame image under the reference phase and the photon number of the sub-frame image under the extended phase, respectively.
  • Step 602 Calculate a first ratio of the photon number of the subframe image in the reference phase to the photon number of the subframe image in the extended phase.
• Step 603 Calculate, based on the modulation frequency of the optical signal in the depth measurement, the extended phase, and the reference distance, the farthest measurement distance at which the number of photons of the subframe image under the extended phase reaches a limit value.
  • Step 604 Calculate the square of the second ratio of the farthest measured distance to the reference distance.
• Step 605 Select the smaller of the first ratio and the square of the second ratio, and take the product of that value and the maximum duration for which the image is not overexposed under the reference phase and the preset reference distance as the maximum duration for which the image is not overexposed under the extended phase and the reference distance.
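Steps 601 to 605 reduce to taking the smaller of the two ratios and scaling the reference-phase value t0. A minimal sketch follows; the photon counts and the farthest measurement distance are illustrative inputs (the patent derives the latter from the modulation frequency, the extended phase, and the reference distance):

```python
def max_unexposed_duration(t0, photons_ref, photons_ext, farthest_dist, ref_dist):
    """Maximum exposure duration without overexposure for an extended phase."""
    first_ratio = photons_ref / photons_ext            # steps 601-602
    second_ratio_sq = (farthest_dist / ref_dist) ** 2  # steps 603-604
    return min(first_ratio, second_ratio_sq) * t0      # step 605

# Example from the text: the 90-deg subframe collects at most half the photons
# of the 0-deg subframe (first ratio = 2), so t3 = 2 * t1.
t3 = max_unexposed_duration(t0=1.0, photons_ref=1000, photons_ext=500,
                            farthest_dist=10.0, ref_dist=0.3)  # -> 2.0
```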
• The reference phase is taken as 0°, and the extended phases 90° and 180° are used below as examples to illustrate how to determine the maximum duration for which the image is not overexposed under an extended phase and the reference distance:
• The maximum duration for which the image is not overexposed under the extended phase of 90° and the reference distance is abbreviated t3, and t3 is determined as follows:
• The number of photons in the 90° subframe image is at most half the number of photons in the 0° subframe image, that is, the first ratio mentioned above is 2.
  • the reference distance is 0.3m
• the number of photons at 90° reaches the limit value.
• The maximum duration for which the image is not overexposed under the extended phase of 180° and the reference distance is abbreviated t2, and t2 is determined as follows:
  • the reference distance is 0.3m
• the number of photons at 180° reaches the limit value.
• The second ratio is 1/0.3, approximately 3, so the square of the second ratio is approximately equal to 9.
  • the expansion phases are selected to be 90° and 180° respectively.
• For other extended phases, the maximum duration for which the image is not overexposed can also be calculated with the above method. For example, if 270° is selected, the maximum duration for which the image is not overexposed under the extended phase of 270° and the reference distance is abbreviated t4, and it can be calculated that t1 < t4 < 2t1. If 45° is selected, the maximum duration for which the image is not overexposed under the extended phase of 45° and the reference distance is abbreviated t5, and it can be calculated that t1 < t5 < (4/3)t1.
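The bounds 2t1 (for 90° and 270°) and (4/3)t1 (for 45°) are consistent with a simple window-overlap model: with 50% duty-cycle modulation at the reference distance, a subframe whose integration window is offset by an effective phase θ in [0°, 180°] collects a fraction 1 − θ/180° of the 0° subframe's photons. This model is an assumption used here only to reproduce the stated ratios:

```python
import math

def first_ratio(phase_deg):
    """Ratio of 0-deg subframe photons to the photons of a subframe offset by
    phase_deg, for 50% duty-cycle modulation at the reference distance."""
    theta = math.radians(phase_deg) % (2 * math.pi)
    if theta > math.pi:                  # 270 deg behaves like 90 deg by symmetry
        theta = 2 * math.pi - theta
    overlap = 1.0 - theta / math.pi      # fraction of the pulse still collected
    return 1.0 / overlap                 # undefined at 180 deg (overlap -> 0)

t1 = 1.0
assert abs(first_ratio(90) * t1 - 2.0) < 1e-9          # t3 bound: 2 * t1
assert abs(first_ratio(270) * t1 - 2.0) < 1e-9         # t4 bound: 2 * t1
assert abs(first_ratio(45) * t1 - 4.0 / 3.0) < 1e-9    # t5 bound: (4/3) * t1
```

At 180° the overlap vanishes at the reference distance, which is why the second-type calculation based on the farthest measurement distance is used for that phase instead.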
• The receiving window (also called the integration window) of the 0° phase subframe receives the largest number of photons, while the windows at 90° and 270° receive at most 1/2 of that number.
  • the determined t3 and t4 are beneficial to ensure that the measurement at close range is not overexposed.
  • the number of photons received in the receiving window of the 180° phase subframe will gradually increase.
• Illuminance decays with the square of the distance, that is, the number of photons is inversely proportional to the square of the distance. Therefore, by calculating the square of the second ratio above, the maximum duration t2 determined for the extended phase of 180° and the reference distance helps ensure that the image is not overexposed when measuring long distances.
• One dimension ensures that the image is not overexposed when measuring close range, and the other ensures that it is not overexposed when measuring long range. Determining the maximum non-overexposure duration from both dimensions allows the final maximum duration for the extended phase and reference distance to satisfy both requirements: no overexposure at close range and no overexposure at long range.
• The extended phase includes a first type of extended phase and/or a second type of extended phase.
• The following describes how to determine the maximum duration for which the image is not overexposed under the first type of extended phase and the reference distance, and under the second type of extended phase and the reference distance:
  • the phase difference between the first type of extended phase and the reference phase is greater than 0 and less than or equal to ⁇ /2.
• The maximum duration for which the image is not overexposed under the first type of extended phase and the reference distance is determined as follows: within the same exposure duration, the TOF ranging device obtains the photon number of the subframe image under the reference phase and the photon number of the subframe image under the first type of extended phase, calculates the first ratio of the former to the latter, and takes the product of the first ratio and the maximum duration for which the image is not overexposed under the reference phase and the preset reference distance as the maximum duration for which the image is not overexposed under the extended phase and the reference distance. The manner of determining the first ratio is similar to steps 601 to 602 and is not repeated here.
  • the first type of extended phase can be selected in two ranges of [270°, 0°) and (0°, 90°].
  • the phase difference between the second type of extended phase and the reference phase is greater than ⁇ /2 and less than or equal to ⁇ .
• The maximum duration for which the image is not overexposed under the second type of extended phase and the reference distance is determined as follows: based on the modulation frequency of the optical signal in the depth measurement, the second type of extended phase, and the reference distance, calculate the farthest measurement distance at which the photon number of the subframe image under the second type of extended phase reaches the limit value; calculate the square of the second ratio of the farthest measurement distance to the reference distance; and take the product of the square of the second ratio and the maximum duration for which the image is not overexposed under the reference phase and the preset reference distance as the maximum duration for which the image is not overexposed under the second type of extended phase and the reference distance. The manner of determining the square of the second ratio is similar to steps 603 to 604 and is not repeated here.
• the second type of extended phase can be selected within the interval (90°, 180°].
• Different types of extended phases use different methods to calculate the maximum non-overexposure duration, so the ratio computed can match the characteristics of each type, allowing the maximum duration for which the image is not overexposed to be calculated more quickly and reasonably.
• N is 4, and the 4 different phases include a reference phase and three extended phases; the extended phases include two first-type extended phases and one second-type extended phase, and the phase difference between the two first-type extended phases is π.
  • the two first-type extended phases can be selected in the range of [270°, 0°), (0°, 90°], for example, 90° and 270° are selected.
  • the second type of extended phase is selected within the range of (90°, 180°], for example, 180° is selected.
• The reference exposure duration corresponding to 0° is t1, and t1 can be equal to the maximum duration for which the image is not overexposed at 0° and range0.
• Setting the 4-phase exposure durations with the above method ensures that close-range measurement is not overexposed, while the 90°, 270°, and 180° phases use exposure durations longer than t1, so the maximum measurable distance is increased and the dynamic range of ranging is improved.
• The extended exposure duration is greater than the reference exposure duration and less than or equal to the maximum duration for which the image is not overexposed under the extended phase and the reference distance.
  • the reference phase is 0°
  • the extension phase is 90°
  • the reference exposure duration is t1
  • the extension exposure duration is t3
• t3 lying between the reference exposure duration t1 and the maximum duration 2t1 for which the image is not overexposed under the extension phase and reference distance means: t1 < t3 ≤ 2t1.
• Alternatively, the extended exposure duration may be greater than or equal to the maximum duration for which the image is not overexposed under the extended phase and the reference distance, and less than the reference exposure duration. For example, if the reference phase is 90°, the extended phase is 0°, the reference exposure duration is t3, and the extended exposure duration is t1, then t1 lying between the maximum duration t3/2 and the reference exposure duration t3 means: t3/2 ≤ t1 < t3.
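Both orderings above can be expressed as a single validity check on the extended exposure duration (a sketch; `t_max_ext` denotes the maximum non-overexposure duration for the extended phase and reference distance):

```python
def extended_exposure_valid(t_ref, t_ext, t_max_ext):
    """True if t_ext lies in (t_ref, t_max_ext] when the extension is longer,
    or in [t_max_ext, t_ref) when the reference is longer."""
    return (t_ref < t_ext <= t_max_ext) or (t_max_ext <= t_ext < t_ref)

t1 = 1.0
assert extended_exposure_valid(t_ref=t1, t_ext=1.5, t_max_ext=2 * t1)   # t1 < t3 <= 2t1
assert extended_exposure_valid(t_ref=2.0, t_ext=1.2, t_max_ext=1.0)     # t3/2 <= t1 < t3
assert not extended_exposure_valid(t_ref=t1, t_ext=2.5, t_max_ext=2 * t1)
```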
• FIG. 4 only takes 4-phase sampling as an example. A specific implementation is not limited to 4-phase sampling; 2-phase sampling (such as 0° and 90°), 8-phase sampling (such as 0°, 45°, 90°, 135°, 180°, 225°, 270°, 315°), etc. are also possible. 4-phase sampling balances ranging accuracy, power consumption, and speed, that is, it improves ranging accuracy without much impact on power consumption and speed.
  • the TOF ranging device acquires 2 sub-frame images of the target scene according to the preset 2 exposure durations corresponding to the 2 phases respectively.
• The TOF ranging device determines the depth information of the target scene according to the two subframe images. For example, the two subframe images are called the 0° phase subframe and the 90° phase subframe respectively; the image data of the 0° phase subframe is the photon number Q1, and the image data of the 90° phase subframe is the photon number Q3.
  • the following formula calculates the depth information in the target scene:
  • d is the calculated depth
  • f is the modulation frequency of the optical signal
• φ is the phase delay of the reflected optical signal relative to the emitted optical signal
  • c is the speed of light.
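The formula itself did not survive extraction. A common form for two-phase continuous-wave TOF, consistent with the variables listed above, recovers the phase delay from the two photon counts and converts it to distance; the exact form used by the patent may differ, so this is an assumed sketch:

```python
import math

C = 299_792_458.0  # c: speed of light, m/s

def depth_two_phase(q1, q3, f_mod):
    """d = c * phi / (4 * pi * f), with the phase delay phi recovered from the
    0-deg and 90-deg photon counts Q1 and Q3 (assumed standard CW-TOF form)."""
    phi = math.atan2(q3, q1)                 # phase delay, reflected vs emitted
    return C * phi / (4.0 * math.pi * f_mod)

# Equal counts Q1 = Q3 give phi = pi/4, i.e. one eighth of the unambiguous
# range c / (2 * f); at f = 20 MHz that is about 0.937 m.
d = depth_two_phase(1000, 1000, 20e6)
```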
  • the TOF ranging device acquires 4 sub-frame images of the target scene according to the preset 4 exposure durations corresponding to the 4 phases respectively.
  • the depth information of the target scene is determined according to the four subframe images.
  • the 4 subframe images may be respectively referred to as a 0° phase subframe, a 180° phase subframe, a 90° phase subframe, and a 270° phase subframe.
  • the image data of the four phase subframes are: Q1, Q2, Q3, Q4, and the depth information in the target scene can be calculated by the following formula:
  • d is the calculated depth
  • f is the modulation frequency of the optical signal
• φ is the phase delay of the reflected optical signal relative to the emitted optical signal
  • c is the speed of light.
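As with the two-phase case, the equation did not survive extraction. The standard four-phase demodulation formula, consistent with Q1 (0°), Q2 (180°), Q3 (90°), and Q4 (270°) above, is given here as an assumed sketch:

```python
import math

C = 299_792_458.0  # c: speed of light, m/s

def depth_four_phase(q1, q2, q3, q4, f_mod):
    """phi = atan2(Q3 - Q4, Q1 - Q2) wrapped to [0, 2*pi), then
    d = c * phi / (4 * pi * f) (assumed standard four-phase CW-TOF form)."""
    phi = math.atan2(q3 - q4, q1 - q2) % (2.0 * math.pi)
    return C * phi / (4.0 * math.pi * f_mod)

# Q1 = Q2 with Q3 > Q4 gives phi = pi/2, a quarter of the unambiguous
# range c / (2 * f); at f = 20 MHz that is about 1.874 m.
d = depth_four_phase(800, 800, 900, 300, f_mod=20e6)
```

Using the count differences Q1 − Q2 and Q3 − Q4 also cancels ambient-light offsets common to opposite phases, which is one reason four-phase sampling improves accuracy over two-phase sampling.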
  • the flowchart of the depth measurement method can refer to FIG. 7, including:
  • Step 701 Emit the emission light for depth measurement at the reference phase.
  • the reference phase may be 0° mentioned in the above example, but is not limited thereto.
  • the TOF ranging device can emit emitted light for depth measurement in a reference phase.
  • Step 702 Collect a first sub-frame image of the target scene according to the first phase and the reference exposure duration.
  • the phase difference between the first phase and the reference phase is 0 degrees.
  • the first phase is 0°
  • the first subframe image is the above-mentioned 0° phase subframe. Since the difference between the reference phase and the first phase is 0 degrees, the first phase can also be understood as the reference phase.
• The reference exposure duration is less than or equal to the maximum duration for which the image is not overexposed under the reference phase and the preset reference distance.
  • the preset reference distance can be understood as the closest distance to be measured in the ranging scenario applied by the above TOF ranging device, and can also be understood as the minimum detection distance of the depth measurement method.
  • Step 703 Collect a second subframe image of the target scene according to the second phase.
  • the phase difference between the second phase and the reference phase is 180 degrees.
  • the second phase is 180°
  • the second subframe image is the aforementioned 180° phase subframe.
  • Step 704 Collect a third subframe image of the target scene according to the third phase.
  • the phase difference between the third phase and the reference phase is 90 degrees.
  • the third phase is 90°
  • the third subframe image is the above-mentioned 90° phase subframe.
  • Step 705 Collect a fourth subframe image of the target scene according to the fourth phase.
  • the phase difference between the fourth phase and the reference phase is 270 degrees.
  • the fourth phase is 270°
  • the fourth subframe image is the above-mentioned 270° phase subframe.
• The exposure duration used for collecting the second subframe image (e.g., t2 in FIG. 4), the exposure duration used for collecting the third subframe image (e.g., t3 in FIG. 4), and the exposure duration used for collecting the fourth subframe image (e.g., t4 in FIG. 4) are each greater than the reference exposure duration.
• The first subframe image, the second subframe image, the third subframe image, and the fourth subframe image are used to determine one frame of depth image; that is, one frame of depth image may be output according to the first, second, third, and fourth subframe images.
• The 4 subframe images collected in steps 702 to 705 can be understood as the 4 subframe images of the target scene acquired in step 301 (with N being 4) according to the 4 preset exposure durations corresponding to the 4 phases respectively.
  • the first sub-frame image, the second sub-frame image, the third sub-frame image, and the fourth sub-frame image are collected in sequence.
  • the first phase mentioned in the above steps 702 to 705 may be understood as the reference phase, and the second phase, the third phase, and the fourth phase may be understood as three extended phases.
• The exposure duration used for collecting the third subframe image (e.g., t3 in FIG. 4) and the exposure duration used for collecting the fourth subframe image (e.g., t4 in FIG. 4) are both shorter than the exposure duration used for collecting the second subframe image (e.g., t2 in FIG. 4). Referring to FIG. 4, that is: t1 < t3 < t2, t1 < t4 < t2, and t1 < t2. In other words, among the 4 exposure durations, the exposure duration t2 used to collect the second subframe image is the largest. When the phase difference between the second phase and the reference phase is 180 degrees, the depth that can be measured based on the second subframe image is the largest, so making t2 the largest increases the maximum measurable depth and further improves the dynamic range of the measurement.
  • the exposure duration (eg, t3 in FIG. 4 ) used to acquire the third sub-frame image is shorter than the exposure duration (eg, t4 in FIG. 4 ) used to acquire the fourth sub-frame image, refer to FIG. 4 , that is, t1 ⁇ t3 ⁇ t4.
• t1, t2, t3, and t4 are all different. Since one subframe image collected with one exposure duration can accurately capture the depth information within one depth range of the target scene, the 4 subframe images collected with 4 exposure durations can accurately capture the depth information in 4 depth ranges (for example, nearest, near, far, and farthest points in the target scene), which helps improve the measurement dynamic range while improving the measurement accuracy in different depth ranges.
• The reference exposure duration is less than or equal to the maximum duration for which the image is not overexposed under the reference phase and the preset reference distance, ensuring that the subframe image of the target scene acquired with the reference exposure duration corresponding to the reference phase is not overexposed.
• The extended exposure duration lies between the reference exposure duration and the maximum duration for which the image is not overexposed under the extended phase and the reference distance, ensuring that the subframe images of the target scene acquired with the extended exposure duration corresponding to the extended phase are not overexposed.
• Since the extended exposure duration differs from the reference exposure duration, the different exposure durations help determine depth information at different distances in the target scene.
• A longer exposure duration helps accurately determine the depth information of far points in the target scene, and a shorter exposure duration helps accurately determine the depth information of near points, which improves the dynamic range of ranging without overexposure. Moreover, because each phase uses one exposure duration to obtain one subframe image, it is unnecessary to use 2 different exposure durations for the same phase to obtain 2 subframe images, and thus unnecessary to fuse subframe images obtained with 2 different exposure durations for each phase. Therefore, the dynamic range of ranging can be improved while power consumption is reduced.
  • the embodiments of the present application relate to a depth measurement method.
  • the implementation details of the depth measurement method of this embodiment are described below in detail. The following contents are only provided for the convenience of understanding, and are not necessary for implementing this solution.
  • the flowchart of the depth measurement method in this embodiment may refer to FIG. 8 , including:
  • Step 801 Acquire N sub-frame images of the target scene according to the preset N exposure durations corresponding to the N phases respectively.
  • Step 802 Determine the depth information of the target scene according to the N subframe images.
• Steps 801 to 802 are substantially the same as steps 301 to 302 in the above embodiment and are not repeated here to avoid repetition.
  • Step 803 Determine a plurality of confidence levels corresponding to the depth information of the plurality of pixels according to the N sub-frame images.
  • the TOF ranging device may determine the depth information of multiple pixels in the target scene based on multiple pixels on the photodetector and N sub-frame images.
  • multiple pixel points in the target scene may include near points and far points, that is, the TOF ranging device can measure and obtain the depth information of the near point and the far point in the target scene.
  • the confidence level corresponding to the depth information of each pixel point can represent the credibility of the depth information of the pixel point, that is, whether the measured depth information of the pixel point is accurate. For the manner of determining the confidence level, reference may be made to the relevant description in the first embodiment, which will not be repeated in this embodiment to avoid repetition.
• Step 804 If the number of confidence levels below the preset confidence threshold among the multiple confidence levels exceeds the preset number threshold, and the extended exposure duration has not reached the maximum duration for which the image is not overexposed under the extended phase and the reference distance, increase the extended exposure duration.
• The preset confidence threshold and the preset number threshold can be set according to actual needs. For example, if high ranging accuracy is desired, the confidence threshold can be set high and the number threshold set small.
  • the reference phase is 0°
  • the extended phase includes: 180°, 90°, and 270°.
• If the reference exposure duration equals the maximum duration for which the image is not overexposed at 0° and range0, that is, the reference exposure duration is t1, then theoretically the value range of the extended exposure duration t2 corresponding to 180° can be t1 < t2 ≤ 9t1, and the ranges of t3 and t4 corresponding to 90° and 270° can be t1 < t3 ≤ 2t1 and t1 < t4 ≤ 2t1.
• When setting t2, t3, and t4, they can be increased gradually within their respective value ranges to find the minimum exposure duration that makes the confidence greater than the confidence threshold. It is not necessary to set t2, t3, and t4 relatively large from the start, for example to the maximum non-overexposure duration (i.e., setting t2 directly to 9t1 and t3 and t4 to 2t1). This improves the dynamic range and accuracy of ranging while further reducing power consumption.
• Low depth-measurement confidence for several pixels in the target scene indicates that the extended exposure duration is likely set unreasonably, and an extended exposure duration that has not reached the maximum non-overexposure duration for the extended phase and reference distance still has room to increase. Therefore, in this embodiment, the extended exposure duration is increased when the number of confidence levels below the preset confidence threshold exceeds the preset number threshold and the extended exposure duration has not reached that maximum duration. Increasing the extended exposure duration only when appropriate, rather than setting it long from the start, helps obtain the minimum extended exposure duration that makes the confidence exceed the confidence threshold, further reducing power consumption while improving the dynamic range and ranging accuracy.
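The adjustment logic of step 804 can be sketched as a loop. The confidence-measurement function is a hypothetical placeholder; only the two stopping conditions (few enough low-confidence pixels, or the non-overexposure ceiling reached) come from the text:

```python
def tune_extended_exposure(measure_confidences, t_start, t_max,
                           conf_threshold, count_threshold, step):
    """Increase the extended exposure duration until at most count_threshold
    pixels fall below conf_threshold, or until the ceiling t_max is reached."""
    t = t_start
    while t < t_max:
        low = sum(1 for c in measure_confidences(t) if c < conf_threshold)
        if low <= count_threshold:
            break                        # accuracy acceptable: keep minimal t
        t = min(t + step, t_max)         # still below the ceiling: increase
    return t

# Toy model (assumption): every pixel's confidence grows with exposure.
model = lambda t: [min(1.0, 0.6 * t), min(1.0, 0.7 * t)]
t_best = tune_extended_exposure(model, t_start=1.0, t_max=2.0,
                                conf_threshold=0.8, count_threshold=0, step=0.25)
# -> 1.5: the minimum duration at which no pixel falls below the threshold
```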
  • the embodiment of the present application relates to a chip.
  • the chip 901 is connected to a memory 902 in an electronic device.
• The memory 902 stores instructions executable by the chip 901, and the instructions are executed by the chip 901 so that the chip 901 can perform the depth measurement method in the above embodiments.
  • the memory 902 and the chip 901 are connected by a bus, and the bus may include any number of interconnected buses and bridges, and the bus connects one or more chips 901 and various circuits of the memory 902 together.
  • the bus may also connect together various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore will not be described further herein.
  • the bus interface provides the interface between the bus and the transceiver.
  • a transceiver may be a single element or multiple elements, such as multiple receivers and transmitters, providing a means for communicating with various other devices over a transmission medium.
  • the data processed by the chip 901 is transmitted on the wireless medium through the antenna, and further, the antenna also receives the data and transmits the data to the chip 901 .
  • Chip 901 is responsible for managing the bus and general processing, and may also provide various functions including timing, peripheral interface, voltage regulation, power management, and other control functions.
  • the memory 902 can be used to store data used by the chip 901 when performing operations.
  • the embodiment of the present application relates to an electronic device, as shown in FIG. 9 , including the chip 901 described in the foregoing embodiment and a memory connected to the chip 901 .
  • the embodiments of the present application relate to a computer-readable storage medium storing a computer program.
  • the above method embodiments are implemented when the computer program is executed by the processor.
• The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Abstract

Embodiments of the present application relate to the field of ranging, and disclosed therein are a depth measurement method, a chip, and an electronic device. The depth measurement method comprises: according to preset N exposure durations respectively corresponding to N phases, acquiring N subframe images of a target scene, N being a natural number greater than or equal to two; and determining depth information of the target scene according to the N subframe images. The N phases comprise a reference phase and an extended phase, and the N exposure durations comprise a reference exposure duration corresponding to the reference phase and an extended exposure duration corresponding to the extended phase; the reference exposure duration is less than or equal to the maximum duration for which an image in the reference phase and a preset reference distance is not overexposed; the extended exposure duration is between the reference exposure duration and the maximum duration for which an image in the extended phase and the reference distance is not overexposed; and the extended exposure duration is different from the reference exposure duration, so that power consumption can be reduced while improving the dynamic range of ranging.

Description

Depth measurement method, chip and electronic device

CROSS-REFERENCE TO RELATED APPLICATIONS

This application is based on the Chinese patent application with application number 202110184297.1, filed on February 8, 2021, and claims priority to that Chinese patent application, the entire content of which is hereby incorporated into this application by reference.

TECHNICAL FIELD

The embodiments of the present application relate to the field of ranging, and in particular to a depth measurement method, a chip, and an electronic device.
BACKGROUND

The acquisition of depth information is currently widely used in many fields. Time-of-flight (TOF) technology is one way to obtain depth information; its principle is to calculate the distance from a measuring device (such as a TOF camera) to a target object from the flight time of light in the air. Most TOF cameras use a Complementary Metal Oxide Semiconductor (CMOS) pixel array as the light sensor at the receiving end and use modulated light as the measurement light source: the transmitting end emits a modulated single-pulse or continuously modulated optical signal, the receiving end receives the light reflected back by the target object, and the current distance of the object is calculated from the phase difference between the emitted light and the received light.

At present, to expand the dynamic range of TOF camera ranging, the approach used is to collect two sets of exposure data and fuse them. The two sets are the exposure data at low exposure and the exposure data at high exposure. Taking four-phase time-of-flight ranging as an example, the low-exposure data includes 4 subframe images of different phases collected at low exposure, and the high-exposure data includes 4 subframe images of different phases collected at high exposure.

However, the inventor found at least the following problem in the related art: to expand the dynamic range, two sets of exposure data must be collected and fused to obtain one frame of depth data, which results in relatively large power consumption.
发明内容SUMMARY OF THE INVENTION
本申请实施例的目的在于提供一种深度测量方法、芯片和电子设备,使得可以在提高测距的动态范围的同时减小功耗。The purpose of the embodiments of the present application is to provide a depth measurement method, a chip and an electronic device, so that the power consumption can be reduced while improving the dynamic range of ranging.
为解决上述技术问题,本申请的实施例提供了一种深度测量方法,包括:根据预设的与N个相位分别对应的N个曝光时长,获取目标场景的N个子帧图像;其中,所述N为大于或等于2的自然数,所述N个相位是与发射光相位的相位差各不相同的N个相位;根据所述N个子帧图像,确定所述目标场景的深度信息;其中,所述N个相位包括基准相位和扩展相位,所述N个曝光时长包括与基准相位对应的基准曝光时长和与所述扩展相位对应的扩展曝光时长;所述基准曝光时长小于或等于在所述基准相位及预设的基准距离下图像不过曝的最大时长,所述扩展曝光时长介于所述基准曝光时长和在所述扩展相位及所述基准距离下图像不过曝的最大时长之间。In order to solve the above technical problem, an embodiment of the present application provides a depth measurement method, including: acquiring N sub-frame images of a target scene according to preset N exposure durations corresponding to N phases respectively; wherein, the N is a natural number greater than or equal to 2, and the N phases are N phases with different phase differences from the phase of the emitted light; according to the N sub-frame images, determine the depth information of the target scene; wherein, the The N phases include a reference phase and an extended phase, and the N exposure durations include a reference exposure duration corresponding to the reference phase and an extended exposure duration corresponding to the extended phase; the reference exposure duration is less than or equal to the reference exposure duration in the reference phase. The maximum duration for which the image is not exposed under the phase and the preset reference distance, and the extended exposure duration is between the reference exposure duration and the maximum duration for which the image is not exposed under the extended phase and the reference distance.
本申请的实施例还提供了一种深度测量方法,包括:以基准相位发射用于深度测量的发射光;根据第一相位和基准曝光时长采集目标场景的第一子帧图像;其中,所述第一相位与所述基准相位的相位差为0度;根据第二相位采 集目标场景的第二子帧图像;其中,所述第二相位与所述基准相位的相位差为180度;根据第三相位采集目标场景的第三子帧图像;其中,所述第三相位与所述基准相位的相位差为90度;根据第四相位采集目标场景的第四子帧图像;其中,所述第四相位与所述基准相位的相位差为270度;其中,采集所述第二子帧图像所使用的曝光时长、采集所述第三子帧图像所使用的曝光时长和采集所述第四子帧图像所使用的曝光时长分别大于所述基准曝光时长,所述第一子帧图像、所述第二子帧图像、所述第三子帧图像和所述第四子帧图像用于确定一帧深度图像。Embodiments of the present application further provide a depth measurement method, including: emitting emission light for depth measurement at a reference phase; collecting a first subframe image of a target scene according to a first phase and a reference exposure duration; wherein the The phase difference between the first phase and the reference phase is 0 degrees; the second subframe image of the target scene is collected according to the second phase; wherein, the phase difference between the second phase and the reference phase is 180 degrees; The third subframe image of the target scene is collected in three phases; wherein, the phase difference between the third phase and the reference phase is 90 degrees; the fourth subframe image of the target scene is collected according to the fourth phase; The phase difference between the four-phase and the reference phase is 270 degrees; wherein, the exposure duration used to collect the second subframe image, the exposure duration used to collect the third subframe image, and the exposure duration used to collect the fourth subframe image The exposure duration used by a frame image is respectively greater than the reference exposure duration, and the first subframe image, the second subframe image, the third subframe image and the fourth subframe image are used to determine a Frame depth image.
An embodiment of the present application further provides a chip arranged in an electronic device. The chip is connected to a memory in the electronic device, and the memory stores instructions executable by the chip; the instructions are executed by the chip so that the chip can perform the above depth measurement method.
An embodiment of the present application further provides an electronic device, including the above chip and a memory connected to the chip.
In the embodiments of the present application, N sub-frame images of a target scene are acquired according to N preset exposure durations corresponding respectively to N phases, where N is a natural number greater than or equal to 2 and the N phases have mutually different phase differences from the phase of the emitted light; the depth information of the target scene is then determined according to the N sub-frame images. The N phases include a reference phase and an extended phase, and the N exposure durations include a reference exposure duration corresponding to the reference phase and an extended exposure duration corresponding to the extended phase. The reference exposure duration is less than or equal to the maximum duration for which the image is not overexposed at the reference phase and a preset reference distance, ensuring that the sub-frame image acquired with the reference exposure duration is not overexposed. The extended exposure duration lies between the reference exposure duration and the maximum duration for which the image is not overexposed at the extended phase and the reference distance, ensuring that the sub-frame image acquired with the extended exposure duration is not overexposed. In addition, because the extended exposure duration differs from the reference exposure duration, the different exposure durations facilitate determining depth information at different distances in the target scene: the longer of the two exposure durations helps determine accurately the depth information of distant points in the target scene, while the shorter helps determine accurately the depth information of near points. This improves the dynamic range of distance measurement while avoiding overexposure. Moreover, because one exposure duration is used at each phase to acquire one sub-frame image, there is no need to acquire 2 sub-frame images at each phase with 2 different exposure durations, and thus no need to fuse sub-frame images acquired with 2 different exposure durations per phase; the dynamic range of distance measurement can therefore be improved while power consumption is reduced.
In addition, the maximum duration for which the image is not overexposed at the extended phase and the reference distance is determined as follows: within the same exposure duration, acquire the photon count of the sub-frame image at the reference phase and the photon count of the sub-frame image at the extended phase; compute a first ratio of the photon count at the reference phase to the photon count at the extended phase; based on the modulation frequency of the optical signal used in depth measurement, the extended phase, and the reference distance, compute the farthest measurement distance at which the photon count of the sub-frame image at the extended phase reaches its limit value; compute the square of a second ratio of the farthest measurement distance to the reference distance; select the smaller of the first ratio and the square of the second ratio, and take the product of that minimum and the maximum non-overexposure duration at the reference phase and the preset reference distance as the maximum non-overexposure duration at the extended phase and the reference distance. Using this product takes into account ratio relationships along two dimensions, one ensuring no overexposure when measuring at close range and the other ensuring no overexposure when measuring at long range, so that the maximum duration finally determined for the extended phase and the reference distance satisfies both conditions simultaneously.
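The determination procedure above can be sketched as follows. This is an illustrative helper, not part of the patent disclosure: the function and parameter names are our own, and it assumes the reference distance is greater than zero so that the second ratio is defined.

```python
def max_unexposed_ext(q_ref, q_ext, d_far, d_ref, t_ref_max):
    # q_ref, q_ext: photon counts of the reference-phase and extended-phase
    #   sub-frames acquired under the same exposure duration.
    # d_far: farthest measurement distance at which the extended-phase photon
    #   count reaches its limit value (derived from the modulation frequency,
    #   the extended phase, and the reference distance).
    # d_ref: the preset reference distance (assumed > 0 here).
    # t_ref_max: maximum non-overexposure duration at the reference phase
    #   and the reference distance.
    r1 = q_ref / q_ext            # first ratio: close-range headroom
    r2_sq = (d_far / d_ref) ** 2  # squared second ratio: long-range headroom
    # The smaller of the two headrooms scales the reference-phase maximum.
    return min(r1, r2_sq) * t_ref_max
```

Taking the minimum of the two ratios means the result never violates either the close-range or the long-range non-overexposure constraint.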
In addition, the extended phases include two extended phases of a first type and one extended phase of a second type, and the phase difference between the two extended phases of the first type is π. That is, in the embodiments of the present application, four sub-frame images of the target scene are acquired according to four preset exposure durations corresponding respectively to four phases, and the depth information of the target scene is determined from the four sub-frame images. This four-phase sampling scheme balances ranging accuracy, power consumption, and speed: ranging accuracy is improved without significantly affecting power consumption or speed.
In addition, the depth information of the target scene includes depth information of multiple pixel points in the target scene, and after the depth information of the target scene is determined according to the N sub-frame images, the method further includes: determining, according to the N sub-frame images, multiple confidence values corresponding respectively to the depth information of the multiple pixel points; and, if the number of confidence values below a preset confidence threshold exceeds a preset count threshold and the extended exposure duration has not reached the maximum non-overexposure duration at the extended phase and the reference distance, increasing the extended exposure duration. It can be understood that low depth accuracy at many pixel points of the target scene indicates that the extended exposure duration is likely to have been set unreasonably, while an extended exposure duration below the maximum non-overexposure duration indicates that there is still room to increase it. Therefore, in this embodiment the extended exposure duration is increased only at a reasonable time, rather than being set large from the start; this helps find the minimum extended exposure duration that makes the confidence values exceed the confidence threshold, and further reduces power consumption while improving the dynamic range and accuracy of distance measurement.
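The adjustment rule above can be sketched as follows. This is a minimal sketch with hypothetical names; in particular, the fixed increment `step` is our assumption, since the embodiment does not specify by how much the extended exposure duration should be increased.

```python
def adjust_extended_exposure(confidences, conf_thresh, count_thresh,
                             t_ext, t_ext_max, step):
    # confidences: per-pixel confidence values computed from the N sub-frames.
    # Increase the extended exposure duration only when many pixels are
    # low-confidence AND the no-overexposure ceiling has not been reached.
    n_low = sum(1 for a in confidences if a < conf_thresh)
    if n_low > count_thresh and t_ext < t_ext_max:
        t_ext = min(t_ext + step, t_ext_max)  # never exceed the ceiling
    return t_ext
```

Clamping against `t_ext_max` keeps the extended-phase sub-frame within its non-overexposure limit even after repeated adjustments.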
In addition, emitted light used for depth measurement is emitted at a reference phase; a first sub-frame image of a target scene is acquired according to a first phase and a reference exposure duration, wherein the phase difference between the first phase and the reference phase is 0 degrees; a second sub-frame image is acquired according to a second phase whose phase difference from the reference phase is 180 degrees; a third sub-frame image is acquired according to a third phase whose phase difference from the reference phase is 90 degrees; and a fourth sub-frame image is acquired according to a fourth phase whose phase difference from the reference phase is 270 degrees. The exposure durations used to acquire the second, third, and fourth sub-frame images are each greater than the reference exposure duration, and the four sub-frame images are used to determine one frame of depth image. In this embodiment, one frame of depth image can be determined from four sub-frame images. Because the exposure durations of the second, third, and fourth sub-frame images are each greater than the reference exposure duration, the depth information of distant points in the target scene can be determined from the second, third, and fourth sub-frame images, while the depth information of near points can be determined from the first sub-frame image. Therefore, the depth information of both distant and near points in the target scene can be obtained at the same time from the four sub-frame images, which greatly improves the frame rate and reduces power consumption while improving the dynamic range of distance measurement. Determining the depth image from four sub-frame images also improves measurement accuracy while ensuring that the measurement does not take long, that is, it increases the speed of determining one frame of depth image.
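The four-sub-frame acquisition schedule described above can be sketched as a simple data structure. The class and function names are illustrative, and the specific durations passed in are assumptions; the only property taken from the embodiment is that the 0° sub-frame uses the reference exposure duration while the 180°, 90°, and 270° sub-frames each use a longer one.

```python
from dataclasses import dataclass

@dataclass
class SubframePlan:
    phase_deg: int     # phase difference from the reference phase
    exposure_s: float  # exposure duration used for this sub-frame

def four_subframe_schedule(t_ref, t_180, t_90, t_270):
    # One depth frame is built from four sub-frames, acquired in the order
    # described in the embodiment: 0°, then 180°, then 90°, then 270°.
    assert t_180 > t_ref and t_90 > t_ref and t_270 > t_ref
    return [SubframePlan(0, t_ref), SubframePlan(180, t_180),
            SubframePlan(90, t_90), SubframePlan(270, t_270)]
```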
Description of the Drawings
One or more embodiments are illustrated by way of example with reference to the corresponding figures in the accompanying drawings, and these exemplary illustrations do not constitute a limitation on the embodiments. Elements with the same reference numerals in the drawings denote similar elements, and unless otherwise stated, the figures in the drawings are not drawn to scale.
FIG. 1 is a block diagram of the electronic device for depth measurement mentioned in an embodiment of the present application;
FIG. 2 is a waveform diagram of the emitted optical signal, the received optical signal, and the optical signals sampled at 4 phases in the related art mentioned in an embodiment of the present application;
FIG. 3 is a flowchart of the depth measurement method mentioned in an embodiment of the present application;
FIG. 4 is a schematic diagram of the different exposure durations corresponding to the 4 phases when 0° is the reference phase and the reference distance is 0, as mentioned in an embodiment of the present application;
FIG. 5 is a schematic diagram of the different exposure durations corresponding to the 4 phases when 0° is the reference phase and the reference distance is greater than 0, as mentioned in an embodiment of the present application;
FIG. 6 is a flowchart of the manner of determining the maximum non-overexposure duration at the extended phase and the reference distance mentioned in an embodiment of the present application;
FIG. 7 is a flowchart of the depth measurement method in an example mentioned in an embodiment of the present application;
FIG. 8 is a flowchart of a depth measurement method mentioned in an embodiment of the present application;
FIG. 9 is a schematic structural diagram of the electronic device mentioned in an embodiment of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present application clearer, the embodiments of the present application are described in detail below with reference to the accompanying drawings. However, those of ordinary skill in the art can understand that many technical details are set forth in each embodiment so that the reader may better understand the present application; the technical solutions claimed in the present application can nevertheless be implemented even without these technical details and with various changes and modifications based on the following embodiments. The division into the following embodiments is for convenience of description and should not constitute any limitation on the specific implementation of the present application, and the embodiments may be combined with and refer to one another provided they do not contradict each other.
An embodiment of the present application relates to a depth measurement method applied to an electronic device used to measure depth information, where depth information can be understood as the distance between an object to be measured and the electronic device. The electronic device may be a TOF ranging device, and may be embodied as a TOF camera in a specific implementation; the depth information measured by a TOF camera can be understood as the distance between the object to be measured and the TOF camera. The implementation details of the depth measurement method of this embodiment are described below; the following content is provided only for ease of understanding and is not necessary for implementing this solution.
To facilitate understanding of this embodiment, the depth measurement principle involved is described below:
The depth measurement technique involved in this embodiment periodically modulates the emitted optical signal, measures the phase delay of the reflected optical signal relative to the emitted optical signal, and then computes the depth from the phase delay and the speed of light; this measurement technique may be called the indirect time-of-flight (indirect-TOF, iTOF) technique. A block diagram of the electronic device for depth measurement is shown in FIG. 1 and includes: a transmitting module 101, a receiving module 102, and a processing module 103.
The transmitting module 101 is configured to emit an optical signal modulated at a modulation frequency f, and may use, for example, a laser emitter as the light source.
The receiving module 102 is configured to receive the light reflected by the target object (reflected light for short) and to obtain sub-frame images corresponding to different phases by sampling the reflected light at different phases; the receiving module mainly uses a CMOS image sensor to detect the reflected optical signal. The image sensor, which may also be called a photodetector, generally contains multiple pixels distributed in an array. For example, in the commonly used 4-phase sampling method, detection by the pixel array of the image sensor yields 4 different phase sub-frames, that is, 4 sub-frame images at different phases; for each pixel in the 4 sub-frame images there are corresponding photon counts, denoted Q1, Q2, Q3, and Q4 respectively. In the related art, waveform diagrams of the emitted optical signal, the received optical signal, and the optical signals sampled at 4 phases are shown in FIG. 2, where the 4 phases use the same exposure duration. The phase mentioned above can be understood as the phase offset of the start of the integration window relative to the start of the emission window, where the phase corresponding to the start of the emission window serves as the 0° reference; the integration window may also be called the receiving window. The 4 phases in FIG. 2 are 0°, 90°, 180°, and 270°. As FIG. 2 shows, the 0° integration window has a 0° phase offset relative to the start of the emission window, that is, it coincides exactly with the emission window; the 90° integration window is offset by 90° relative to the start of the emission window; the 180° integration window is offset by 180°; and the 270° integration window is offset by 270°.
The integration window can be understood as the window occupied by the high level in the operating timing diagram of a pixel's photosensitive control switch in the photodetector. Integration windows include valid integration windows and invalid integration windows: a valid integration window is the part of an integration window that actually receives reflected light, that is, the shaded part of each integration window in FIG. 2; an invalid integration window is the part that does not actually receive reflected light, that is, the unshaded part of each integration window in FIG. 2. In the waveform diagram of each phase sub-frame in FIG. 2, the photosensitive control switch being on corresponds to the high level, which indicates that reception of reflected light may begin, and the switch being off corresponds to the low level, which indicates that reception of reflected light stops; after the specified exposure duration ends, sampling of one phase sub-frame is complete. In a specific implementation there are usually multiple integration windows within the specified exposure duration. For example, there are 3 integration windows within the exposure duration corresponding to the 270° sub-frame in FIG. 2, each receiving q4 photons; after all 3 integration windows have finished receiving photons, sampling of the 270° phase sub-frame can be considered complete. A phase sub-frame may also be called a sub-frame image, which can further be understood as the image formed after every pixel has finished integrating over all integration windows within the specified exposure duration.
The processing module 103 is configured to send an image-capture command to the receiving module 102, and the receiving module 102 forwards the command to the transmitting module 101 so that the transmitting module 101 emits the optical signal. The processing module 103 is further configured to receive the phase data sent by the receiving module 102; this phase data may include the image data of the 4 sub-frame images obtained by sampling at the above 4 phases, which may be embodied as the photon counts of the 4 sub-frame images, denoted Q1, Q2, Q3, and Q4 in turn. The processing module 103 can then compute the depth d with the following formulas:
Q = sinφ = Q3 − Q4; Q3 = n3·q3, Q4 = n4·q4
I = cosφ = Q1 − Q2; Q1 = n1·q1, Q2 = n2·q2
tanφ = sinφ/cosφ = (Q3 − Q4)/(Q1 − Q2)
A = (1/2)·(Q² + I²)^(1/2)
d = (c/2f) × (φ/2π)
Here f is the modulation frequency of the optical signal, φ is the phase delay of the reflected optical signal relative to the emitted optical signal, c is the speed of light, and A is the confidence of the measured depth d. Referring to FIG. 2, q1 is the number of photons received by the receiving module 102 in each integration window of the 0° phase sub-frame, n1 is the number of integration windows within the exposure duration corresponding to the 0° phase sub-frame, and Q1 is the total number of photons received by the receiving module 102 over all integration windows of that sub-frame. Likewise, q2, n2, and Q2 are the per-window photon count, number of integration windows, and total photon count of the 180° phase sub-frame; q3, n3, and Q3 those of the 90° phase sub-frame; and q4, n4, and Q4 those of the 270° phase sub-frame. In FIG. 2, n1 = n2 = n3 = n4 = 3.
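The depth and confidence formulas above can be sketched in Python as follows. This is an illustrative sketch, not part of the patent disclosure; the function name is our own, and the use of `atan2` to recover φ over the full [0, 2π) range is an assumption about how the phase-delay formula is evaluated in practice.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def depth_from_4phase(Q1, Q2, Q3, Q4, f_mod):
    # Q1, Q2, Q3, Q4: total photon counts of the 0°, 180°, 90°, and 270°
    # phase sub-frames for one pixel; f_mod: modulation frequency in Hz.
    I = Q1 - Q2                              # cos component
    Q = Q3 - Q4                              # sin component
    phi = math.atan2(Q, I) % (2 * math.pi)   # phase delay φ in [0, 2π)
    d = (C / (2 * f_mod)) * (phi / (2 * math.pi))  # d = (c/2f)·(φ/2π)
    A = 0.5 * math.hypot(Q, I)               # confidence A = ½·√(Q² + I²)
    return d, A
```

Note that c/2f is the unambiguous range of the measurement; e.g., at f = 10 MHz it is about 15 m.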
In a specific implementation, each pixel of the photodetector in the receiving module 102 can acquire its photon count independently; based on one pixel of the photodetector, the processing module 103 can compute the depth information of one pixel point in the target scene, and based on multiple pixels it can compute the depth information of multiple pixel points in the target scene. The photon count acquired by a pixel can be understood as follows: during the exposure duration, the reflected optical signal is imaged onto the pixel to accumulate photocharge, that is, the number of photons generated in the pixel by the reflected light is accumulated over the exposure duration; it can also be understood as the total number of photons received over all integration windows within the exposure duration.
In one implementation, data from two exposures (4 sub-frame images at low exposure plus 4 sub-frame images at high exposure) are fused to expand the dynamic measurement range; the 4 phases use the same exposure duration at low exposure, and the 4 phases use the same exposure duration at high exposure. High exposure differs from low exposure in that the exposure duration used at high exposure is longer than that used at low exposure. That is, within one exposure acquisition the exposure durations for the different phases are kept the same: if the durations corresponding to 0°, 180°, 90°, and 270° are t1, t2, t3, and t4 respectively, then t1 = t2 = t3 = t4. A total of 8 sub-frame images must be collected, with 2 different exposure durations (the low-exposure duration and the high-exposure duration) used at each phase to obtain 2 sub-frame images, so the 4 phases yield 8 sub-frame images. To improve the dynamic range, each phase must be sampled with 2 different exposure durations to obtain 2 sub-frame images, and the 8 sub-frame images obtained from the 4 phases are fused to output one frame of depth information; this, however, clearly reduces the frame rate and increases power consumption. Fusing the 8 sub-frame images can be understood as: obtaining one frame of low-exposure depth information from the 4 low-exposure sub-frame images, obtaining one frame of high-exposure depth information from the 4 high-exposure sub-frame images, and then fusing the low-exposure and high-exposure depth information to obtain the final measured depth information.
To reduce power consumption while expanding the dynamic measurement range, this embodiment provides the following depth measurement method, whose flowchart is shown in FIG. 3:
Step 301: Acquire N sub-frame images of the target scene according to N preset exposure durations corresponding respectively to N phases.
Step 302: Determine the depth information of the target scene according to the N sub-frame images.
Here N is a natural number greater than or equal to 2, and the N phases have mutually different phase differences from the phase of the emitted light. One exposure duration is used at each phase to acquire one sub-frame image; it is not necessary to use 2 different exposure durations per phase to acquire 2 sub-frame images. In this embodiment, to improve the dynamic range, each phase is sampled with 1 exposure duration to obtain 1 sub-frame image, and one frame of depth information is output from the N sub-frame images collected at the N different phases.
In one example, N is a natural number greater than or equal to 2 and less than 8, that is, one frame of depth information can be output from at most 7 sub-frame images sampled at 7 different phases; compared with fusing 8 sub-frame images to output one frame of depth information, this increases the frame rate and reduces power consumption.
In one example, N = 4, and this embodiment can output one frame of depth information from 4 sub-frame images sampled at 4 different phases, that is, the depth information of the target scene is determined from 4 sequentially acquired sub-frame images. More phases give higher measurement accuracy, but power consumption and measurement time also grow and the frame rate drops accordingly; taking N = 4 balances dynamic range, frame rate, measurement accuracy, power consumption, and measurement time, so that to a certain extent the frame rate and measurement accuracy are improved along with the dynamic range while power consumption remains moderate and the measurement does not take long.
N个不同的相位包括基准相位和扩展相位,N个曝光时长包括与基准相位对应的基准曝光时长和与扩展相位对应的扩展曝光时长;基准曝光时长小于或等于在基准相位及预设的基准距离下图像不过曝的最大时长,确保根据基准相位对应的基准曝光时长,获取的目标场景的子帧图像不会过曝。扩展曝光时长介于基准曝光时长和在扩展相位及基准距离下图像不过曝的最大时长之间,确保根据扩展相位对应的扩展曝光时长,获取的目标场景的子帧图像不会过曝。The N different phases include the reference phase and the extended phase, and the N exposure durations include the reference exposure duration corresponding to the reference phase and the extended exposure duration corresponding to the extended phase; the reference exposure duration is less than or equal to the reference phase and the preset reference distance. The maximum duration for which the lower image will not be exposed, to ensure that the sub-frame images of the acquired target scene will not be overexposed according to the benchmark exposure duration corresponding to the benchmark phase. The extended exposure duration is between the reference exposure duration and the maximum duration that the image will not be exposed under the extended phase and the reference distance, ensuring that the sub-frame images of the acquired target scene will not be overexposed according to the extended exposure duration corresponding to the extended phase.
In a specific implementation, the reference phase can be selected according to actual needs, and the reference phase corresponds to the reference distance. For example, when the reference phase is 0°, the reference distance corresponding to 0° is the closest applicable ranging distance range0, which can be understood as the closest distance that needs to be measured in the ranging scenario to which the TOF ranging device is applied. It can also be understood as follows: denoting the closest applicable ranging distance as range0 and referring to FIG. 4, when the integration window is completely aligned with the emission window, that phase is recorded as the reference phase 0°; that is, the reference phase is the phase whose phase difference from the phase of the emitted light is 0°. As can be seen from FIG. 4, range0 is 0 meters, and the number of photons received within the integration window of the 0° phase sub-frame is the full photon count; that is, the integration windows of the 0° phase sub-frame are all valid integration windows. The number of photons received within the integration window of the 180° phase sub-frame is 0; that is, the integration windows of the 180° phase sub-frame are all invalid integration windows. Here, the full photon count can be understood as the total number of photons emitted within one emission window. In FIG. 4, Q1=3q1, Q2=0, Q3=6q3, Q4=5q4.
In an example, the closest applicable ranging distance range0 is greater than 0, for example 0.3 meters. When range0 is greater than 0, referring to FIG. 5, the number of photons received within the integration window of the 0° phase sub-frame is less than the full photon count; that is, the integration windows of the 0° phase sub-frame are not all valid integration windows but also include invalid integration windows. The number of photons received within the integration window of the 180° phase sub-frame is greater than 0; that is, the integration windows of the 180° phase sub-frame are not all invalid integration windows but also include a small proportion of valid integration windows. The shaded parts in FIG. 5 are all valid integration windows; it should be noted that because the valid integration window of the 180° phase sub-frame is very small, q2 is not labeled in the shaded area in FIG. 5. In FIG. 5, Q1=3q1, Q2=6q2, Q3=6q3, Q4=5q4.
For convenience of description, the maximum duration for which the image is not overexposed at the reference phase and the preset reference distance is abbreviated as t0, and the reference exposure duration is abbreviated as t1; then t1 is less than or equal to t0. In an example, t0 can be determined as follows:
At the reference distance, adjust the starting point of the integration window of the reference phase so that it is separated from the starting point of the emitted optical signal by the reference phase, gradually increase a preset initial exposure duration, and after each increase acquire the photon count at the reference phase, until the acquired photon count is the same as the previously acquired photon count, i.e. the photon count no longer increases, indicating that the photon count has reached its limit value; then take the exposure duration on which the previously acquired photon count was based as t0. It can be understood that the TOF ranging device includes a photosensitive chip, and the number of photons the photosensitive chip can receive is limited; once that limit is reached, the photosensitive chip is considered saturated, and further increasing the exposure duration will not increase the acquired photon count. Therefore, t0 can be understood as the critical exposure duration beyond which the photon count no longer increases as the exposure duration increases.
For example, when the reference phase is 0° and the reference distance is range0, t0 can be determined as follows:
At range0, adjust the starting point of the integration window of the reference phase so that it is separated from the starting point of the emitted optical signal by 0°, i.e. the starting point of the integration window of the reference phase is aligned with the starting point of the emitted optical signal. For example, referring to FIG. 4, align the 0° frame with the modulated wave emitted by the emission module so that the integration window of the 0° frame is at its maximum, and then increase the exposure duration from small to large. For instance, if the photon count received by the photosensitive chip no longer increases after the exposure duration is gradually increased from 1 ms to 10 ms, the maximum duration for which the image is not overexposed at the 0° phase and range0 can be taken to be 10 ms, i.e. t0=10 ms, and the exposure duration t1 corresponding to 0° can be set to 10 ms, i.e. t1=t0.
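For illustration only (no such code appears in the patent), the saturation search for t0 described above can be sketched as follows. `measure_photons` is a hypothetical stand-in for reading the photon count from the photosensitive chip at a given exposure duration, and the step size and limits are assumed values:

```python
def find_t0(measure_photons, start_ms=1.0, step_ms=1.0, max_ms=100.0):
    """Increase the exposure duration until the photon count stops growing,
    and return the last duration before saturation as t0 (in ms).
    measure_photons(duration_ms) -> photon count (hypothetical sensor read)."""
    prev_count = measure_photons(start_ms)
    t = start_ms
    while t + step_ms <= max_ms:
        count = measure_photons(t + step_ms)
        if count == prev_count:   # sensor saturated: photon count no longer increases
            return t              # critical exposure duration t0
        prev_count = count
        t += step_ms
    return t

# Toy stand-in for the photosensitive chip: counts grow linearly, then clip at 10 ms,
# reproducing the 1 ms -> 10 ms example in the text.
assert find_t0(lambda ms: min(ms, 10.0) * 1000) == 10.0
```

The toy sensor model clips at 10 ms, so the search returns t0=10 ms as in the example above.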
In an example, the maximum duration for which the image is not overexposed at an extended phase and the reference distance can be determined with reference to FIG. 6, including:
Step 601: Within the same exposure duration, acquire the photon count of the sub-frame image at the reference phase and the photon count of the sub-frame image at the extended phase, respectively.
Step 602: Calculate a first ratio of the photon count of the sub-frame image at the reference phase to the photon count of the sub-frame image at the extended phase.
Step 603: Based on the modulation frequency of the optical signal in the depth measurement, the extended phase, and the reference distance, calculate the farthest measurement distance at which the photon count of the sub-frame image at the extended phase reaches the limit value.
Step 604: Calculate the square of a second ratio of the farthest measurement distance to the reference distance.
Step 605: Select the smaller value between the first ratio and the square of the second ratio, and take the product of that value and the maximum duration for which the image is not overexposed at the reference phase and the preset reference distance as the maximum duration for which the image is not overexposed at the extended phase and the reference distance.
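Steps 601 to 605 can be sketched as follows; this is a minimal illustration, the function and parameter names are not from the patent, and the farthest-distance formula follows the worked examples in the text:

```python
import math

C = 3.0e8  # speed of light, m/s

def max_unexposed_duration(t0, n_ref, n_ext, f, ext_phase_deg, range0):
    """Sketch of steps 601-605.
    t0:            max non-overexposed duration at the reference phase and range0
    n_ref, n_ext:  photon counts at the reference/extended phase over the
                   same exposure duration (step 601)
    f:             modulation frequency of the optical signal, Hz
    ext_phase_deg: extended phase in degrees
    range0:        preset reference distance, m"""
    # Step 602: first ratio (treated as infinite when no photons are received,
    # as in the 180-degree example in the text).
    first_ratio = n_ref / n_ext if n_ext > 0 else math.inf
    # Step 603: farthest distance at which the extended-phase photon count
    # reaches the limit value; c/(2f) is the maximum measurement distance.
    farthest = range0 + (C / (2 * f)) * (ext_phase_deg / 360.0)
    # Step 604: square of the second ratio.
    second_ratio_sq = (farthest / range0) ** 2
    # Step 605: the smaller value, scaled by t0.
    return min(first_ratio, second_ratio_sq) * t0

# 90-degree example from the text: first ratio 2, second ratio 2.25 (squared ~5),
# so the result is 2*t0.
assert max_unexposed_duration(1.0, 6, 3, 100e6, 90, 0.3) == 2.0
```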
For ease of understanding, take 0° as the reference phase; the following uses extended phases of 90° and 180° as examples to illustrate how the maximum duration for which the image is not overexposed at an extended phase and the reference distance is determined:
The maximum duration for which the image is not overexposed at the 90° extended phase and the reference distance is abbreviated as t3, and t3 is determined as follows:
First, referring to FIG. 4, it can be seen that within the same exposure duration (for example T), the photon count of the sub-frame image at 90° is at most half the photon count of the sub-frame image at 0°; that is, the above first ratio is 2.
Second, assume the modulation frequency of the optical signal in the depth measurement is f=100 MHz, so that the maximum measurement distance is d=c/2f=1.5 m, and assume the reference distance is 0.3 m. The farthest measurement distance at which the photon count at 90° reaches the limit value is then 0.3+1.5×(90°/360°)=0.3+1.5/4=0.675 m. The second ratio is thus 0.675/0.3=2.25, and the square of the second ratio is approximately equal to 5.
Finally, select the smaller value, namely 2, between the first ratio (2) and the square of the second ratio (5); then t3=2t0, and when the reference exposure duration t1=t0, t3=2t1. That is, t3 lies between t1 and 2t1: t1<t3≤2t1.
The maximum duration for which the image is not overexposed at the 180° extended phase and the reference distance is abbreviated as t2, and t2 is determined as follows:
First, referring to FIG. 4, it can be seen that within the same exposure duration (for example T), the photon count of the sub-frame image at 180° is 0, i.e. no photons are received, so the above first ratio is theoretically infinite.
Second, again assume the modulation frequency of the optical signal in the depth measurement is f=100 MHz, so that the maximum measurement distance is d=c/2f=1.5 m, and assume the reference distance is 0.3 m. The farthest measurement distance at which the photon count at 180° reaches the limit value is then 0.3+1.5×(180°/360°)=0.3+1.5/2=1.05 m≈1 m. The second ratio is thus approximately 1/0.3≈3, and the square of the second ratio is approximately equal to 9.
Finally, select the smaller value, namely 9, between the first ratio (infinite) and the square of the second ratio (9); then t2=9t0, and when the reference exposure duration t1=t0, t2=9t1. That is, t2 lies between t1 and 9t1: t1<t2≤9t1.
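The two worked examples above can be checked numerically. The snippet below reproduces the text's rounding for the 180° case (1.05 m ≈ 1 m and 1/0.3 ≈ 3 before squaring, giving 9); exact arithmetic would instead give (1.05/0.3)²=12.25:

```python
import math

# Numeric check of the two worked examples. f, range0, and the rounding
# follow the text; the variable names are illustrative.
c, f, range0 = 3.0e8, 100e6, 0.3
d_max = c / (2 * f)                              # maximum measurement distance: 1.5 m

# 90-degree extended phase: first ratio 2, farthest distance 0.675 m, squared ratio ~5.
farthest_90 = range0 + d_max * (90 / 360)
scale_90 = min(2, (farthest_90 / range0) ** 2)   # min(2, 5.0625) -> 2, so t3 = 2*t0

# 180-degree extended phase: first ratio infinite, farthest distance 1.05 m.
farthest_180 = range0 + d_max * (180 / 360)
# The text rounds 1.05 m to ~1 m and 1/0.3 to ~3 before squaring, giving 9.
ratio_180 = round(round(farthest_180) / range0)  # ~3
scale_180 = min(math.inf, ratio_180 ** 2)        # 9, so t2 = 9*t0
```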
It should be noted that the above examples select extended phases of 90° and 180° merely for ease of understanding. In a specific implementation, when other phases are selected, the maximum duration for which the image is not overexposed can also be calculated in the above manner. For example, if 270° is selected, the maximum duration for which the image is not overexposed at the 270° extended phase and the reference distance, abbreviated as t4, can be calculated to satisfy t1<t4≤2t1. If 45° is selected, the maximum duration for which the image is not overexposed at the 45° extended phase and the reference distance, abbreviated as t5, can be calculated to satisfy t1<t5≤4/3t1.
It can be understood that, according to the principle of modulated light, at range0 the number of photons received by the receiving window (also called the integration window) of the 0° phase sub-frame is the maximum, and at 90° or 270° at most 1/2 of the photons are received, i.e. the above first ratio is 2, so at most t3=t4=2t1 can be set. At range0, determining t3 and t4 by calculating the first ratio helps ensure that close-range measurements are not overexposed.
In addition, starting from the reference distance, as the distance increases, the number of photons received within the receiving window of the 180° phase sub-frame gradually increases. Considering that the illuminance produced by a point light source at any distance from the source decays with the square of that distance, the attenuation of the photon count is inversely proportional to the square of the distance. Therefore, determining t2, the maximum duration for which the image is not overexposed at the 180° extended phase and the reference distance, by calculating the square of the above second ratio helps ensure that long-range measurements are not overexposed.
Taking the product of the smaller of the first ratio and the square of the second ratio with the maximum duration for which the image is not overexposed at the reference phase and the preset reference distance as the maximum duration for which the image is not overexposed at the extended phase and the reference distance accounts for the ratio relationships in two different dimensions: one dimension ensures that close-range measurements are not overexposed, and the other ensures that long-range measurements are not overexposed. The maximum non-overexposure duration at the extended phase and the reference distance can thus be determined more reasonably, so that the finally determined duration simultaneously avoids overexposure when measuring at close range and when measuring at long range.
In an example, the extended phases include a first-type extended phase and/or a second-type extended phase. The following describes how the maximum duration for which the image is not overexposed at a first-type extended phase and the reference distance is determined, and how the maximum duration for which the image is not overexposed at a second-type extended phase and the reference distance is determined:
The phase difference between a first-type extended phase and the reference phase is greater than 0 and less than or equal to π/2. The maximum duration for which the image is not overexposed at a first-type extended phase and the reference distance is determined as follows: within the same exposure duration, the TOF ranging device acquires the photon count of the sub-frame image at the reference phase and the photon count of the sub-frame image at the first-type extended phase, respectively, calculates the first ratio of the photon count of the sub-frame image at the reference phase to that at the first-type extended phase, and takes the product of the first ratio and the maximum duration for which the image is not overexposed at the reference phase and the preset reference distance as the maximum duration for which the image is not overexposed at the extended phase and the reference distance. It can be understood that the first ratio is determined in a manner similar to steps 601 to 602, and details are not repeated here.
For example, if the reference phase is 0°, the first-type extended phase can be selected from the two intervals [270°, 0°) and (0°, 90°].
The phase difference between a second-type extended phase and the reference phase is greater than π/2 and less than or equal to π. The maximum duration for which the image is not overexposed at a second-type extended phase and the reference distance is determined as follows: based on the modulation frequency of the optical signal in the depth measurement, the second-type extended phase, and the reference distance, calculate the farthest measurement distance at which the photon count of the sub-frame image at the second-type extended phase reaches the limit value; calculate the square of the second ratio of the farthest measurement distance to the reference distance, and take the product of the square of the second ratio and the maximum duration for which the image is not overexposed at the reference phase and the preset reference distance as the maximum duration for which the image is not overexposed at the second-type extended phase and the reference distance. It can be understood that the square of the second ratio is determined in a manner similar to steps 603 to 604, and details are not repeated here.
For example, if the reference phase is 0°, the second-type extended phase can be selected from the interval (90°, 180°].
By distinguishing the types of extended phases, namely first-type and second-type extended phases, and calculating the maximum non-overexposure duration in a different manner for each type, a single ratio can be computed directly according to the characteristics of each type of extended phase, so that the maximum duration for which the image is not overexposed can be calculated more quickly and reasonably.
In an example, N is 4, and the 4 different phases include one reference phase and three extended phases; the extended phases include two first-type extended phases and one second-type extended phase, and the phase difference between the two first-type extended phases is π. For example, if the reference phase is 0°, the two first-type extended phases can be selected from the two intervals [270°, 0°) and (0°, 90°], for example 90° and 270°, and the second-type extended phase can be selected from the interval (90°, 180°], for example 180°. Referring to FIG. 4, the reference exposure duration corresponding to 0° is t1, and t1 can be equal to the maximum duration for which the image is not overexposed at 0° and range0. The exposure durations corresponding to 180°, 90°, and 270° are set to t2, t3, and t4, respectively, where t1<t3=t4≤2t1 and t1<t2≤9t1. Setting the 4-phase exposure durations in the above manner ensures that close-range measurements are not overexposed, while at long range, since the exposure durations at 90°, 270°, and 180° are increased relative to t1, the dynamic range of the ranging is improved, by up to a factor of two.
Moreover, only 4 sub-frame images need to be collected; compared with a comparable high-dynamic-range scheme (collecting 8 sub-frame images and fusing 4 high-exposure sub-frame images with 4 low-exposure sub-frame images), this greatly increases the frame rate (only 4 sub-frames are collected, with no two-pass collection and fusion) and greatly reduces power consumption.
In an example, when the extended phase is greater than the reference phase, the statement that the extended exposure duration lies between the reference exposure duration and the maximum duration for which the image is not overexposed at the extended phase and the reference distance means: the extended exposure duration is greater than the reference exposure duration and less than or equal to the maximum duration for which the image is not overexposed at the extended phase and the reference distance. For example, when the reference phase is 0°, the extended phase is 90°, the reference exposure duration is t1, and the extended exposure duration is t3, then t3 lying between the reference exposure duration t1 and the maximum non-overexposure duration 2t1 at the extended phase and the reference distance means: t1<t3≤2t1.
In another example, when the extended phase is less than the reference phase, the statement that the extended exposure duration lies between the reference exposure duration and the maximum duration for which the image is not overexposed at the extended phase and the reference distance means: the extended exposure duration is greater than or equal to the maximum duration for which the image is not overexposed at the extended phase and the reference distance and less than the reference exposure duration. For example, when the reference phase is 90°, the extended phase is 0°, the reference exposure duration is t3, and the extended exposure duration is t1, then t1 lying between the reference exposure duration t3 and the maximum non-overexposure duration t3/2 at the extended phase and the reference distance means: t3/2≤t1<t3.
It should be noted that FIG. 4 merely takes 4-phase sampling as an example. In a specific implementation, sampling is not limited to 4 phases; 2-phase sampling (for example 0° and 90°) or 8-phase sampling (for example 0°, 45°, 90°, 135°, 180°, 225°, 270°, 315°) is also possible. 4-phase sampling can balance ranging accuracy, power consumption, and speed; that is, it improves ranging accuracy without significantly affecting power consumption and speed.
In an example, for 2-phase sampling, in step 301 the TOF ranging device acquires 2 sub-frame images of the target scene according to 2 preset exposure durations corresponding to the 2 phases, respectively. In step 302, the TOF ranging device determines the depth information of the target scene from the 2 sub-frame images. For example, the 2 sub-frame images are respectively called the 0° phase sub-frame and the 90° phase sub-frame; the image data of the 0° phase sub-frame is the photon count Q1, and the image data of the 90° phase sub-frame is the photon count Q3. The depth information in the target scene can then be calculated by the following formulas:
tanφ=sinφ/cosφ=Q3/Q1
d=(c/2f)×(φ/2π)
where d is the calculated depth, f is the modulation frequency of the optical signal, φ is the phase delay of the reflected optical signal relative to the emitted optical signal, and c is the speed of light.
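The 2-phase depth formulas above can be sketched in code; this is a simplified illustration that ignores noise and ambient light, and `atan2` is used so that the recovered phase delay lands in the correct quadrant:

```python
import math

C = 3.0e8  # speed of light, m/s

def depth_2phase(q1, q3, f):
    """2-phase depth: tan(phi) = Q3/Q1, d = (c/2f) * (phi/2pi).
    q1, q3: photon counts of the 0- and 90-degree phase sub-frames.
    f:      modulation frequency of the optical signal, Hz."""
    phi = math.atan2(q3, q1)       # phase delay of the reflected signal
    if phi < 0:
        phi += 2 * math.pi         # fold into [0, 2pi)
    return (C / (2 * f)) * (phi / (2 * math.pi))

# With Q1 == Q3 the phase delay is pi/4, so d = 1.5 m * (1/8) = 0.1875 m at f = 100 MHz.
assert abs(depth_2phase(1000, 1000, 100e6) - 0.1875) < 1e-9
```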
In another example, for 4-phase sampling, in step 301 the TOF ranging device acquires 4 sub-frame images of the target scene according to 4 preset exposure durations corresponding to the 4 phases, respectively. In step 302, the depth information of the target scene is determined from the 4 sub-frame images. For example, the 4 sub-frame images can be respectively called the 0° phase sub-frame, the 180° phase sub-frame, the 90° phase sub-frame, and the 270° phase sub-frame. Referring to FIG. 4, the image data of the 4 phase sub-frames are, in order: Q1, Q2, Q3, Q4. The depth information in the target scene can be calculated by the following formulas:
tanφ=sinφ/cosφ=(Q3-Q4)/(Q1-Q2)
d=(c/2f)×(φ/2π)
where d is the calculated depth, f is the modulation frequency of the optical signal, φ is the phase delay of the reflected optical signal relative to the emitted optical signal, and c is the speed of light.
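The 4-phase depth formulas can be sketched in the same way; again a simplified illustration that ignores noise and ambient light:

```python
import math

C = 3.0e8  # speed of light, m/s

def depth_4phase(q1, q2, q3, q4, f):
    """4-phase depth: tan(phi) = (Q3-Q4)/(Q1-Q2), d = (c/2f) * (phi/2pi).
    q1..q4: photon counts of the 0/180/90/270-degree phase sub-frames.
    f:      modulation frequency of the optical signal, Hz."""
    phi = math.atan2(q3 - q4, q1 - q2)   # phase delay of the reflected signal
    if phi < 0:
        phi += 2 * math.pi               # fold into [0, 2pi)
    return (C / (2 * f)) * (phi / (2 * math.pi))

# Q1 == Q2 and Q3 > Q4 give phi = pi/2, so d = 1.5 m * (1/4) = 0.375 m at f = 100 MHz.
assert abs(depth_4phase(500, 500, 900, 100, 100e6) - 0.375) < 1e-9
```

Using the two differences Q3-Q4 and Q1-Q2 cancels any constant offset common to all four sub-frames, which is one reason 4-phase sampling is more robust than 2-phase sampling.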
In an example, the flow of the depth measurement method can be seen in FIG. 7, including:
Step 701: Emit, at the reference phase, the emitted light used for depth measurement.
Here, the reference phase can be the 0° mentioned in the above examples, but is not limited thereto. The TOF ranging device can emit the light used for depth measurement at the reference phase.
Step 702: Collect a first sub-frame image of the target scene according to a first phase and the reference exposure duration.
Here, the phase difference between the first phase and the reference phase is 0 degrees. In an example, when the reference phase is 0°, the first phase is 0°, and the first sub-frame image is the above-mentioned 0° phase sub-frame. Since the reference phase and the first phase differ by 0 degrees, the first phase can also be understood as the reference phase.
In a specific implementation, the reference exposure duration is less than or equal to the maximum duration for which the image is not overexposed at the reference phase and the preset reference distance. The preset reference distance can be understood as the closest distance that needs to be measured in the ranging scenario to which the above TOF ranging device is applied, and can also be understood as the minimum detection distance of the depth measurement method.
Step 703: Collect a second sub-frame image of the target scene according to a second phase.
Here, the phase difference between the second phase and the reference phase is 180 degrees. In an example, when the reference phase is 0°, the second phase is 180°, and the second sub-frame image is the above-mentioned 180° phase sub-frame.
Step 704: Collect a third sub-frame image of the target scene according to a third phase.
Here, the phase difference between the third phase and the reference phase is 90 degrees. In an example, when the reference phase is 0°, the third phase is 90°, and the third sub-frame image is the above-mentioned 90° phase sub-frame.
Step 705: Collect a fourth sub-frame image of the target scene according to a fourth phase.
Here, the phase difference between the fourth phase and the reference phase is 270 degrees. In an example, when the reference phase is 0°, the fourth phase is 270°, and the fourth sub-frame image is the above-mentioned 270° phase sub-frame.
In a specific implementation, the exposure duration used to collect the second sub-frame image (for example t2 in FIG. 4), the exposure duration used to collect the third sub-frame image (for example t3 in FIG. 4), and the exposure duration used to collect the fourth sub-frame image (for example t4 in FIG. 4) are each greater than the reference exposure duration, and the first sub-frame image, the second sub-frame image, the third sub-frame image, and the fourth sub-frame image are used to determine one frame of depth image. That is, one frame of depth image can be output from the first, second, third, and fourth sub-frame images.
The 4 sub-frame images collected in the above steps 702 to 705 can be understood as the 4 sub-frame images of the target scene acquired in step 301, when N is 4, according to the 4 preset exposure durations corresponding to the 4 phases. The first, second, third, and fourth sub-frame images are collected in sequence. The first phase mentioned in steps 702 to 705 can be understood as the reference phase, and the second, third, and fourth phases can be understood as the 3 extended phases.
In one example, the exposure duration used to capture the third sub-frame image (e.g., t3 in FIG. 4) and the exposure duration used to capture the fourth sub-frame image (e.g., t4 in FIG. 4) are both shorter than the exposure duration used to capture the second sub-frame image (e.g., t2 in FIG. 4). Referring to FIG. 4, t1 < t3 < t2, t1 < t4 < t2, and t1 < t2. In other words, among the four exposure durations, the duration t2 used to capture the second sub-frame image is the longest. When the phase difference between the second phase and the reference phase is 180 degrees, the second sub-frame image captured at the second phase can measure the greatest depth; making t2 the longest therefore increases the maximum measurable depth and further enlarges the dynamic range of the measurement.
In one example, the exposure duration used to capture the third sub-frame image (e.g., t3 in FIG. 4) is shorter than the exposure duration used to capture the fourth sub-frame image (e.g., t4 in FIG. 4); referring to FIG. 4, t1 < t3 < t4.
In one example, t1, t2, t3, and t4 are all different. Since one sub-frame image captured with one exposure duration can accurately yield depth information within one depth range of the target scene, four sub-frame images captured with four exposure durations can accurately yield depth information within four depth ranges of the target scene (for example, nearest, near, far, and farthest regions). This helps improve measurement accuracy across different depth ranges while enlarging the dynamic range of the measurement.
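The mapping from exposure durations to depth ranges is not formalized here. Purely as an illustration of one common way multiple exposures can cover different ranges, the sketch below keeps, per pixel, the longest exposure whose sample did not saturate; the 12-bit saturation value and the sample tuples are hypothetical, and this is not presented as the patent's method:

```python
SATURATION = 4095  # hypothetical 12-bit sensor full-scale value

def pick_exposure(samples_by_duration):
    """Pick, for one pixel, the longest exposure whose sample is not saturated.

    samples_by_duration: list of (duration, raw_value) pairs for one pixel.
    Returns (duration, value normalized per unit time), or None if every
    sample saturated. Illustrative only.
    """
    usable = [(t, v) for t, v in samples_by_duration if v < SATURATION]
    if not usable:
        return None
    t, v = max(usable, key=lambda tv: tv[0])
    # Normalizing by duration makes readings from different exposures comparable.
    return t, v / t

# Example: samples at t1..t4, where the last sample saturated.
pixel = [(1.0, 400), (9.0, 3500), (2.0, 790), (2.0, 4095)]
print(pick_exposure(pixel))  # picks the longest non-saturated exposure (9.0)
```

A long exposure dominates for dim, distant pixels, while near, bright pixels fall back to a short exposure before saturating, which is one way the four durations can each serve a different depth range.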
In this embodiment, the reference exposure duration is less than or equal to the maximum duration for which the image is not overexposed at the reference phase and the preset reference distance, ensuring that the sub-frame image of the target scene acquired with the reference exposure duration corresponding to the reference phase is not overexposed. The extended exposure duration lies between the reference exposure duration and the maximum duration for which the image is not overexposed at the extended phase and the reference distance, ensuring that the sub-frame image acquired with the extended exposure duration corresponding to the extended phase is not overexposed. Moreover, since the extended exposure duration differs from the reference exposure duration, the different exposure durations help determine depth information at different distances in the target scene: the longer of the two durations favors accurately determining the depth of distant points in the target scene, while the shorter favors accurately determining the depth of near points. This improves the dynamic range of ranging while avoiding overexposure. In addition, because one exposure duration is used per phase to acquire one sub-frame image, there is no need to acquire two sub-frame images at two different exposure durations for each phase, and hence no need to fuse sub-frame images acquired at two different exposure durations per phase; power consumption can therefore be reduced while the dynamic range of ranging is improved.
An embodiment of the present application relates to a depth measurement method. The implementation details of the depth measurement method of this embodiment are described below; they are provided only to aid understanding and are not required to implement this solution.
The flow of the depth measurement method of this embodiment may refer to FIG. 8 and includes:
Step 801: Acquire N sub-frame images of the target scene according to N preset exposure durations corresponding to N phases, respectively.
Step 802: Determine the depth information of the target scene according to the N sub-frame images.
Steps 801 and 802 are substantially the same as steps 301 and 302 in the foregoing embodiment and, to avoid repetition, are not described again here.
Step 803: Determine, according to the N sub-frame images, a plurality of confidence levels corresponding to the depth information of a plurality of pixels, respectively.
The TOF ranging device may determine the depth information of multiple pixels in the target scene based on the multiple pixels of the photodetector and the N sub-frame images. In a specific implementation, the multiple pixels in the target scene may include near points and far points; that is, the TOF ranging device can measure the depth information of both near and far points in the target scene. The confidence level corresponding to the depth information of each pixel characterizes how trustworthy that depth information is, i.e., whether the measured depth of that pixel is accurate. For the manner of determining the confidence level, refer to the related description in the first embodiment; to avoid repetition, it is not described again in this embodiment.
Step 804: If the number of confidence levels below a preset confidence threshold exceeds a preset count threshold and the extended exposure duration has not reached the maximum duration for which the image is not overexposed at the extended phase and the reference distance, increase the extended exposure duration.
When the confidence level corresponding to the depth information of a pixel in the target scene is below the preset confidence threshold, the depth measured for that pixel is of low accuracy. If the number of confidence levels below the preset threshold exceeds the preset count threshold, the depths measured for a number of pixels in the target scene are all of low accuracy; provided the image is not overexposed, increasing the exposure duration can raise the confidence. The no-overexposure condition means determining that the current extended exposure duration has not yet reached the maximum duration for which the image is not overexposed at the extended phase and the reference distance. In a specific implementation, the preset confidence threshold and the preset count threshold can be set according to actual needs: for example, if high ranging accuracy is desired, the confidence threshold can be set higher and the count threshold smaller.
In one example, referring to FIG. 4, the reference phase is 0° and the extended phases are 180°, 90°, and 270°. The reference exposure duration equals the maximum duration for which the image is not overexposed at 0° and range0, i.e., the reference exposure duration is t1. Theoretically, the extended exposure duration t2 corresponding to 180° may then take values in t1 < t2 ≤ 9t1, and the extended exposure durations t3 and t4 corresponding to 90° and 270° may take values in t1 < t3 = t4 ≤ 2t1. When setting t2, t3, and t4, they can be increased gradually within their respective ranges to find the minimum exposure durations that bring the confidence above the confidence threshold. There is no need to set t2, t3, and t4 large from the outset, e.g., to the maximum no-overexposure durations (directly setting t2 to 9t1 and t3 and t4 to 2t1); this further reduces power consumption while improving ranging dynamic range and accuracy.
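The gradual search described above can be sketched as a simple control loop. The bounds t1 < t2 ≤ 9·t1 and t1 < t3 = t4 ≤ 2·t1 come from this example, while the step factor, the confidence-measurement callback, and the toy confidence model below are assumptions for illustration only:

```python
def tune_extended_exposure(t1, measure_low_conf_count,
                           count_threshold, max_factor, step=1.25):
    """Increase an extended exposure duration until confidence is acceptable.

    t1: reference exposure duration (exclusive lower bound for the search).
    measure_low_conf_count(t): hypothetical callback that captures a frame at
        extended exposure t and returns how many pixels have a depth
        confidence below the confidence threshold.
    count_threshold: maximum tolerated number of low-confidence pixels.
    max_factor: upper bound as a multiple of t1 (9 for the 180-degree phase,
        2 for the 90/270-degree phases, per the example in the text).
    """
    t = t1 * step          # start just above t1 rather than at the maximum
    t_max = t1 * max_factor
    while measure_low_conf_count(t) > count_threshold and t < t_max:
        t = min(t * step, t_max)  # step 804: enlarge, but never past t_max
    return t

# Toy model: the low-confidence pixel count falls as exposure grows.
def counts(t):
    return max(0, int(100 - 30 * t))

t2 = tune_extended_exposure(t1=1.0, measure_low_conf_count=counts,
                            count_threshold=10, max_factor=9)
print(t2 <= 9.0)  # the bound t2 <= 9*t1 is respected
```

The loop stops at the first duration whose low-confidence count is acceptable, so the result is close to the minimum sufficient exposure rather than the maximum, matching the power-saving rationale above.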
It can be understood that low depth accuracy at a number of pixels in the target scene suggests that the extended exposure duration is likely set unreasonably, and the fact that the extended exposure duration has not reached the maximum no-overexposure duration at the extended phase and the reference distance indicates that there is still room to increase it. Therefore, in this embodiment, increasing the extended exposure duration when the number of confidence levels below the preset confidence threshold exceeds the preset count threshold and the maximum no-overexposure duration has not been reached means the increase happens at a reasonable time. There is no need to set the extended exposure duration large from the outset, which helps obtain the minimum extended exposure duration that brings the confidence above the confidence threshold, further reducing power consumption while improving ranging dynamic range and accuracy.
The division of the above methods into steps is only for clarity of description. In implementation, steps may be combined into one step, or a step may be split into multiple steps; as long as the same logical relationship is preserved, such variants fall within the protection scope of this patent. Adding insignificant modifications to an algorithm or flow, or introducing insignificant designs, without changing its core design also falls within the protection scope of this patent.
An embodiment of the present application relates to a chip. As shown in FIG. 9, the chip 901 is connected to a memory 902 in an electronic device. The memory 902 stores instructions executable by the chip 901, and the instructions are executed by the chip 901 to enable the chip 901 to perform the depth measurement method of the foregoing embodiments.
The memory 902 and the chip 901 are connected by a bus. The bus may include any number of interconnected buses and bridges linking one or more chips 901 and the various circuits of the memory 902 together. The bus may also link various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and are therefore not described further herein. A bus interface provides an interface between the bus and a transceiver. The transceiver may be one element or multiple elements, such as multiple receivers and transmitters, providing a unit for communicating with various other apparatuses over a transmission medium. Data processed by the chip 901 is transmitted over a wireless medium through an antenna; further, the antenna also receives data and passes the data to the chip 901.
The chip 901 is responsible for managing the bus and general processing, and may also provide various functions, including timing, peripheral interfacing, voltage regulation, power management, and other control functions. The memory 902 may be used to store data used by the chip 901 when performing operations.
An embodiment of the present application relates to an electronic device, as shown in FIG. 9, including the chip 901 of the foregoing embodiment and a memory connected to the chip 901.
An embodiment of the present application relates to a computer-readable storage medium storing a computer program. When executed by a processor, the computer program implements the above method embodiments.
That is, those skilled in the art can understand that all or part of the steps of the methods of the above embodiments can be completed by a program instructing related hardware. The program is stored in a storage medium and includes several instructions to cause a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Those of ordinary skill in the art can understand that the above embodiments are specific embodiments for implementing the present application, and that in practical applications various changes may be made to them in form and detail without departing from the spirit and scope of the present application.

Claims (15)

  1. A depth measurement method, comprising:
    acquiring N sub-frame images of a target scene according to N preset exposure durations corresponding to N phases, respectively, wherein N is a natural number greater than or equal to 2, and the N phases are N phases whose phase differences from the phase of the emitted light are all different;
    determining depth information of the target scene according to the N sub-frame images;
    wherein the N phases comprise a reference phase and an extended phase, and the N exposure durations comprise a reference exposure duration corresponding to the reference phase and an extended exposure duration corresponding to the extended phase; the reference exposure duration is less than or equal to a maximum duration for which an image is not overexposed at the reference phase and a preset reference distance; the extended exposure duration lies between the reference exposure duration and a maximum duration for which an image is not overexposed at the extended phase and the reference distance; and the extended exposure duration differs from the reference exposure duration.
  2. The depth measurement method according to claim 1, wherein the extended phase comprises a first-type extended phase and/or a second-type extended phase;
    the phase difference between the first-type extended phase and the reference phase is greater than 0 and less than or equal to π/2, and the maximum duration for which an image is not overexposed at the first-type extended phase and the reference distance is determined as follows:
    within the same exposure duration, separately acquiring a photon count of a sub-frame image at the reference phase and a photon count of a sub-frame image at the first-type extended phase;
    calculating a first ratio of the photon count of the sub-frame image at the reference phase to the photon count of the sub-frame image at the first-type extended phase, and taking the product of the first ratio and the maximum duration for which an image is not overexposed at the reference phase and the preset reference distance as the maximum duration for which an image is not overexposed at the first-type extended phase and the reference distance;
    the phase difference between the second-type extended phase and the reference phase is greater than π/2 and less than or equal to π, and the maximum duration for which an image is not overexposed at the second-type extended phase and the reference distance is determined as follows:
    based on a modulation frequency of an optical signal in the depth measurement, the second-type extended phase, and the reference distance, calculating a farthest measurement distance at which the photon count of a sub-frame image at the second-type extended phase reaches a limit value;
    calculating the square of a second ratio of the farthest measurement distance to the reference distance, and taking the product of the square of the second ratio and the maximum duration for which an image is not overexposed at the reference phase and the preset reference distance as the maximum duration for which an image is not overexposed at the second-type extended phase and the reference distance.
  3. The depth measurement method according to claim 2, wherein the extended phase comprises two first-type extended phases and one second-type extended phase, and the phase difference between the two first-type extended phases is π.
  4. The depth measurement method according to claim 3, wherein the two first-type extended phases are 90° and 270°, respectively, and the one second-type extended phase is 180°.
  5. The depth measurement method according to any one of claims 1 to 4, wherein the reference phase is 0°.
  6. The depth measurement method according to any one of claims 1 to 5, wherein the reference exposure duration equals the maximum duration for which an image is not overexposed at the reference phase and the preset reference distance.
  7. The depth measurement method according to claim 1, wherein the maximum duration for which an image is not overexposed at the extended phase and the reference distance is determined as follows:
    within the same exposure duration, separately acquiring a photon count of a sub-frame image at the reference phase and a photon count of a sub-frame image at the extended phase;
    calculating a first ratio of the photon count of the sub-frame image at the reference phase to the photon count of the sub-frame image at the extended phase;
    based on a modulation frequency of an optical signal in the depth measurement, the extended phase, and the reference distance, calculating a farthest measurement distance at which the photon count of a sub-frame image at the extended phase reaches a limit value;
    calculating the square of a second ratio of the farthest measurement distance to the reference distance;
    selecting the smaller of the first ratio and the square of the second ratio, and taking the product of that minimum value and the maximum duration for which an image is not overexposed at the reference phase and the preset reference distance as the maximum duration for which an image is not overexposed at the extended phase and the reference distance.
  8. The depth measurement method according to any one of claims 1 to 7, wherein the depth information of the target scene comprises depth information of a plurality of pixels in the target scene, and after the determining of the depth information of the target scene according to the N sub-frame images, the method further comprises:
    determining, according to the N sub-frame images, a plurality of confidence levels corresponding to the depth information of the plurality of pixels, respectively;
    if the number of confidence levels among the plurality of confidence levels that are below a preset confidence threshold exceeds a preset count threshold and the extended exposure duration has not reached the maximum duration for which the image is not overexposed at the extended phase and the reference distance, increasing the extended exposure duration.
  9. A depth measurement method, comprising:
    emitting, at a reference phase, light for depth measurement;
    capturing a first sub-frame image of a target scene according to a first phase and a reference exposure duration, wherein the phase difference between the first phase and the reference phase is 0 degrees;
    capturing a second sub-frame image of the target scene according to a second phase, wherein the phase difference between the second phase and the reference phase is 180 degrees;
    capturing a third sub-frame image of the target scene according to a third phase, wherein the phase difference between the third phase and the reference phase is 90 degrees;
    capturing a fourth sub-frame image of the target scene according to a fourth phase, wherein the phase difference between the fourth phase and the reference phase is 270 degrees;
    wherein the exposure duration used to capture the second sub-frame image, the exposure duration used to capture the third sub-frame image, and the exposure duration used to capture the fourth sub-frame image are each greater than the reference exposure duration, and the first sub-frame image, the second sub-frame image, the third sub-frame image, and the fourth sub-frame image are used to determine one frame of depth image.
  10. The depth measurement method according to claim 9, wherein the exposure duration used to capture the third sub-frame image and the exposure duration used to capture the fourth sub-frame image are both shorter than the exposure duration used to capture the second sub-frame image.
  11. The depth measurement method according to claim 10, wherein the exposure duration used to capture the third sub-frame image is shorter than the exposure duration used to capture the fourth sub-frame image.
  12. The depth measurement method according to any one of claims 9 to 11, wherein the first sub-frame image, the second sub-frame image, the third sub-frame image, and the fourth sub-frame image are captured in sequence.
  13. The depth measurement method according to any one of claims 9 to 12, wherein the reference exposure duration equals the maximum duration for which the image is not overexposed at the reference phase and the minimum detection distance of the depth measurement method.
  14. A chip, arranged in an electronic device and connected to a memory in the electronic device, wherein the memory stores instructions executable by the chip, and the instructions are executed by the chip to enable the chip to perform the depth measurement method according to any one of claims 1 to 13.
  15. An electronic device, comprising the chip according to claim 14 and a memory connected to the chip.
PCT/CN2022/074100 2021-02-08 2022-01-26 Depth measurement method, chip, and electronic device WO2022166723A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110184297.1 2021-02-08
CN202110184297.1A CN112954230B (en) 2021-02-08 2021-02-08 Depth measurement method, chip and electronic device

Publications (1)

Publication Number Publication Date
WO2022166723A1 true WO2022166723A1 (en) 2022-08-11

Family

ID=76245486

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/074100 WO2022166723A1 (en) 2021-02-08 2022-01-26 Depth measurement method, chip, and electronic device

Country Status (2)

Country Link
CN (1) CN112954230B (en)
WO (1) WO2022166723A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112954230B (en) * 2021-02-08 2022-09-09 深圳市汇顶科技股份有限公司 Depth measurement method, chip and electronic device
CN113538551B (en) * 2021-07-12 2023-08-15 Oppo广东移动通信有限公司 Depth map generation method and device and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101893563A (en) * 2010-04-19 2010-11-24 清华大学 Phase measurement method of variable exposure time imaging phase shift
US20120098964A1 (en) * 2010-10-22 2012-04-26 Mesa Imaging Ag System and Method for Multi TOF Camera Operation Using Phase Hopping
CN107229056A (en) * 2016-03-23 2017-10-03 松下知识产权经营株式会社 Image processing apparatus, image processing method and recording medium
CN107894215A (en) * 2017-12-26 2018-04-10 东南大学 HDR optical grating projection method for three-dimensional measurement based on fully automatic exposure
CN111580067A (en) * 2019-02-19 2020-08-25 光宝电子(广州)有限公司 Operation device, sensing device and processing method based on time-of-flight ranging
CN112954230A (en) * 2021-02-08 2021-06-11 深圳市汇顶科技股份有限公司 Depth measurement method, chip and electronic device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2594959B1 (en) * 2011-11-17 2017-01-04 Heptagon Micro Optics Pte. Ltd. System and method for multi TOF camera operation using phase hopping
EP3550330B1 (en) * 2016-11-29 2022-05-04 Nuvoton Technology Corporation Japan Distance measuring device
US10852402B2 (en) * 2017-12-07 2020-12-01 Texas Instruments Incorporated Phase anti-aliasing using spread-spectrum techniques in an optical distance measurement system


Also Published As

Publication number Publication date
CN112954230A (en) 2021-06-11
CN112954230B (en) 2022-09-09

Similar Documents

Publication Publication Date Title
WO2022166723A1 (en) Depth measurement method, chip, and electronic device
CN110596722B (en) System and method for measuring flight time distance with adjustable histogram
CN110596721B (en) Flight time distance measuring system and method of double-shared TDC circuit
CN110596725B (en) Time-of-flight measurement method and system based on interpolation
US10545239B2 (en) Distance-measuring imaging device and solid-state imaging device
US20220082698A1 (en) Depth camera and multi-frequency modulation and demodulation-based noise-reduction distance measurement method
US9207065B2 (en) Depth sensing apparatus and method
WO2021051481A1 (en) Dynamic histogram drawing time-of-flight distance measurement method and measurement system
WO2021051480A1 (en) Dynamic histogram drawing-based time of flight distance measurement method and measurement system
JP7043218B2 (en) Optical sensors, distance measuring devices, and electronic devices
US20220043129A1 (en) Time flight depth camera and multi-frequency modulation and demodulation distance measuring method
EP3308193A1 (en) Time-of-flight (tof) system calibration
CN110221274A (en) Time flight depth camera and the distance measurement method of multifrequency modulation /demodulation
CN111045029A (en) Fused depth measuring device and measuring method
CN110221273A (en) Time flight depth camera and the distance measurement method of single-frequency modulation /demodulation
CN110361751A (en) The distance measurement method of time flight depth camera and the reduction noise of single-frequency modulation /demodulation
WO2023000756A1 (en) Ranging method and apparatus, terminal, and non-volatile computer-readable storage medium
KR20210031710A (en) Electronic device and method
WO2022241942A1 (en) Depth camera and depth calculation method
US11561291B2 (en) High pulse repetition frequency lidar
CN114814881A (en) Laser ranging method and laser ranging chip
CN115616608B (en) Single photon three-dimensional imaging distance super-resolution method and system
WO2022242348A1 (en) Dtof depth image acquisition method and apparatus, electronic device, and medium
CN211086592U (en) Pixel circuit and time-of-flight sensor
WO2024050895A1 (en) Itof depth measurement system and depth measurement method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22749002

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22749002

Country of ref document: EP

Kind code of ref document: A1