WO2021077358A1 - Ranging method, ranging device, and computer-readable storage medium - Google Patents

Ranging method, ranging device, and computer-readable storage medium

Info

Publication number
WO2021077358A1
Authority
WO
WIPO (PCT)
Prior art keywords
frequency
low
exposure
pixel
frequency exposure
Prior art date
Application number
PCT/CN2019/113038
Other languages
English (en)
Chinese (zh)
Inventor
叶天翔
罗鹏飞
刘维辉
唐样洋
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to CN201980101481.6A (CN114556048B)
Priority to PCT/CN2019/113038 (WO2021077358A1)
Publication of WO2021077358A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 3/00 Measuring distances in line of sight; Optical rangefinders
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/70 Circuitry for compensating brightness variation in the scene

Definitions

  • This application relates to the technical field of time-of-flight ranging, in particular to a ranging method, a ranging device, and a computer-readable storage medium.
  • the basic principle of TOF is to continuously emit light pulses to the target object, and then use the sensor to receive the light signal returned from the target object, and obtain the target object distance by detecting the flight time of the light pulse.
  • the ToF camera 100 includes an active light source emitter 101 and a ToF sensor 102.
  • the active light source emitter 101 may be a light emitting diode (LED), a vertical cavity surface emitting laser (VCSEL), or the like.
  • The active light source emitter 101 emits a continuous sine-wave laser signal toward the target object (curve 1 in FIG. 1). After the laser signal reaches the target object and is reflected by it (curve 2 in FIG. 1), it is received by the ToF sensor 102. By comparing the phase difference Δφ between the emitted and received laser signals, the distance of the target object from the ToF camera and the intensity of the received laser signal can be calculated.
  • Multi-frequency technology adds one or more additional modulation frequencies and mixes their measurements. Each modulation frequency has a different ambiguity distance, and the true distance is the value consistent with the measurements made at all of the modulation frequencies. The frequency corresponding to this combined measurement is the greatest common divisor of the individual frequencies, called the hitting (beat) frequency; the lower the hitting frequency, the longer the distance that can be measured.
  • Dual-frequency ranging refers to the use of two modulated signals of different frequencies for ranging, and exposure of the modulated signals of each frequency for different times.
  • the high-frequency modulation signal can be exposed separately (referred to as high-frequency exposure) to obtain a high-frequency exposure image
  • The low-frequency modulation signal can be exposed separately (referred to as low-frequency exposure) to obtain a low-frequency exposure image.
  • High frequency and low frequency are relative terms: of the two modulation frequencies, the higher one is called the high frequency and the lower one is called the low frequency.
  • the power consumption of the TOF camera is proportional to the exposure (working) time.
  • more exposure time is allocated to the low-frequency modulation signal to ensure the signal-to-noise ratio of the low-frequency exposure image, thereby ensuring that the correct phase unwrapping frequency coefficient is obtained without affecting the ranging accuracy.
  • the phase unwrapping coefficient is used to restore the true phase delay of each frequency, which can be obtained by the unwrapping algorithm.
  • As a result, the power consumption of the system is relatively high.
  • the embodiments of the present application provide a ranging method and a ranging device, which can reduce power consumption while the ranging accuracy is unchanged.
  • An embodiment of the present application discloses a ranging method, including: determining a high-frequency exposure time and a low-frequency exposure time, where low-frequency pixels are combined (binned) when determining the low-frequency exposure time and the low-frequency exposure time is determined according to the number of pixels in each combination; performing high-frequency exposure according to the high-frequency exposure time to obtain a high-frequency exposure image, and performing low-frequency exposure according to the low-frequency exposure time to obtain a low-frequency exposure image; and calculating the distance between the distance measuring device and the target object according to the high-frequency exposure image and the low-frequency exposure image.
  • High frequency and low frequency refer to the frequencies of the modulation signals emitted by the ranging device and are relative terms: the higher frequency is called the high frequency and the lower frequency is called the low frequency.
  • Since the low-frequency pixels are combined when determining the low-frequency exposure time, the signal-to-noise ratio of the low-frequency exposure image can be improved. Therefore, even if less time is allocated to the low-frequency modulation signal, the correct phase unwrapping coefficient can still be obtained, thereby ensuring the ranging accuracy. In this way, system power consumption is reduced while accuracy is maintained.
  • Determining the high-frequency exposure time and the low-frequency exposure time includes: performing a pre-exposure for a preset time to obtain a pre-exposure image, and determining the high-frequency exposure time and the low-frequency exposure time respectively according to the strength of the reference signal received by the pre-exposure image within the preset time.
  • Determining the high-frequency exposure time and the low-frequency exposure time respectively according to the signal strength received by the pre-exposure image within the preset time includes: obtaining the strength of the reference signal received by the pre-exposure image within the preset time; determining the high-frequency target signal strength and the low-frequency target signal strength respectively according to the reference signal strength; calculating the high-frequency exposure time according to the reference signal strength, the high-frequency target signal strength, and the preset time; and calculating the low-frequency exposure time according to the reference signal strength, the low-frequency target signal strength, the number of pixels in each combination, and the preset time, where the low-frequency exposure time is inversely proportional to the number of pixels in each combination. In this way, the low-frequency exposure time can be quickly calculated from the number of pixels in each combination, which improves the efficiency of calculating the high-frequency and low-frequency exposure times.
  • The high-frequency target signal strength and the low-frequency target signal strength may be determined from the relationship curve between the variance (or standard deviation) of the measured distance noise at different frequencies and the received signal strength.
  • The signal strength corresponding to an appropriate distance noise variance or standard deviation should be selected as the target signal strength.
  • An appropriate distance noise variance or standard deviation means that the target signal strength corresponding to that variance meets the ranging accuracy requirements and allows the correct phase unwrapping coefficient to be obtained.
  • A specific formula for calculating the low-frequency exposure time is given in terms of the following quantities:
  • A_2L represents the low-frequency target signal strength;
  • A_1 represents the reference signal strength;
  • b is the number of pixels in each combination; and
  • T_1 is the preset time.
  • The reference signal intensity is the average signal intensity of all pixels of the pre-exposure image; or, the reference signal intensity is the average signal intensity of the characteristic pixels of the pre-exposure image, where the characteristic pixels are the pixels corresponding to the characteristic regions of the pre-exposure image. In this way, the calculation accuracy of the high-frequency exposure time and the low-frequency exposure time can be improved.
  • Calculating the distance between the distance measuring device and the target object based on the high-frequency exposure image and the low-frequency exposure image includes: calculating the high-frequency phase delay information of each pixel in the high-frequency exposure image, and calculating the low-frequency phase delay information of each combined pixel in the low-frequency exposure image; up-sampling the low-frequency phase delay information of the combined pixels to obtain low-frequency phase delay information at the same resolution as the high-frequency exposure image; calculating the high-frequency unwrapping coefficient of each pixel according to the high-frequency phase delay information and the up-sampled low-frequency phase delay information of that pixel; and calculating, according to the high-frequency unwrapping coefficient and the high-frequency phase delay information of each pixel, the distance between each pixel and the subject imaged by that pixel.
  • In this way, the images after the combination (binning) processing can be restored, which improves the accuracy of subsequent ranging.
  • a continuous wave modulation mode or a pulse wave modulation mode is used, and different modulation modes can be used according to specific requirements, which improves the applicability of the ranging method.
  • a continuous wave modulation mode and chopping technique are used when performing high-frequency exposure. In this way, the mismatch caused by the capacitance, the set voltage, and the background light at the charge collection site during high-frequency modulation can be eliminated, and the accuracy of the depth information can be improved.
  • the continuous wave modulation mode is used for high frequency exposure, and the continuous wave modulation mode or pulse wave modulation mode is used for low frequency exposure. In this way, the power consumption of the system can be further reduced.
  • adjacent pixels are exposed in different phases to further reduce system power consumption.
  • an embodiment of the present application discloses a distance measuring device, which includes a determination module, an exposure module, and a calculation module.
  • the determining module is used to determine the high-frequency exposure time and the low-frequency exposure time, perform combination processing on low-frequency pixels when determining the low-frequency exposure time, and determine the low-frequency exposure time according to the number of combinations.
  • the exposure module is configured to perform high-frequency exposure according to the high-frequency exposure time to obtain a high-frequency exposure image, and perform low-frequency exposure according to the low-frequency exposure time to obtain a low-frequency exposure image.
  • the calculation module is used to calculate the distance between the distance measuring device and the target object according to the high-frequency exposure image and the low-frequency exposure image.
  • High frequency and low frequency refer to the frequencies of the modulation signals emitted by the ranging device and are relative terms: the higher frequency is called the high frequency and the lower frequency is called the low frequency.
  • Since the determining module performs combination processing on the low-frequency pixels when determining the low-frequency exposure time, the signal-to-noise ratio of the low-frequency exposure image can be improved. Therefore, even when less time is allocated to the low-frequency modulation signal, the correct phase unwrapping coefficient can still be obtained, thereby ensuring the ranging accuracy. In this way, system power consumption is reduced while accuracy is maintained.
  • the determining module is specifically configured to perform pre-exposure according to a preset time to obtain a pre-exposed image, and according to the reference signal received by each pixel of the pre-exposed image within the preset time The intensity determines the high-frequency exposure time and the low-frequency exposure time respectively.
  • the determining module includes an acquiring unit and a determining unit.
  • the acquiring unit is configured to acquire the reference signal intensity received by the pre-exposure image at the preset time.
  • the determining unit is used to determine the high-frequency target signal strength and the low-frequency target signal strength respectively according to the reference signal strength.
  • The determining unit is further configured to calculate the high-frequency exposure time according to the reference signal strength, the high-frequency target signal strength, and the preset time; and to calculate the low-frequency exposure time according to the reference signal strength, the low-frequency target signal strength, the determined number of pixels in each combination, and the preset time, wherein the low-frequency exposure time is inversely proportional to the number of pixels in each combination.
  • The determining unit is specifically configured to determine the high-frequency target signal strength and the low-frequency target signal strength from the relationship curve between the measured distance noise variance (or standard deviation) at different frequencies and the received signal strength. Specifically, when the distance noise variance corresponding to the reference signal strength is determined from this relationship curve, the signal strength corresponding to an appropriate distance noise variance or standard deviation should be selected as the target signal strength.
  • the appropriate distance noise variance or standard deviation means that the target signal strength corresponding to the distance noise variance meets the requirements of ranging accuracy and the correct phase unwrapping coefficient can be obtained.
  • A specific formula used by the determining unit to calculate the low-frequency exposure time is given in terms of the following quantities:
  • A_2L represents the low-frequency target signal strength;
  • A_1 represents the reference signal strength;
  • b is the number of pixels in each combination; and
  • T_1 is the preset time.
  • The reference signal intensity is the average signal intensity of all pixels of the pre-exposure image; or, the reference signal intensity is the average signal intensity of the characteristic pixels of the pre-exposure image, where the characteristic pixels are the pixels corresponding to the characteristic regions of the pre-exposure image.
  • the calculation module includes a calculation unit and a sampling unit.
  • the calculation unit is configured to calculate the high-frequency phase delay information of each pixel in the high-frequency exposure image, and calculate the low-frequency phase delay information of each pixel combined in the low-frequency exposure image.
  • the sampling unit is used for up-sampling the low-frequency phase delay information of each pixel combined in the low-frequency exposure image to obtain a low-frequency exposure image with the same resolution as the high-frequency exposure image.
  • The calculation unit is also used to calculate the high-frequency unwrapping coefficient of each pixel according to the high-frequency phase delay information of each pixel and the low-frequency phase delay information of each pixel, and to calculate, according to the high-frequency unwrapping coefficient and the high-frequency phase delay information of each pixel, the distance between each pixel and the subject imaged when the pixel is exposed.
  • a continuous wave modulation mode or a pulse wave modulation mode is used, and different modulation modes can be used according to specific requirements, which improves the applicability of the ranging method.
  • a continuous wave modulation mode and chopping technique are used when performing high-frequency exposure. In this way, the mismatch caused by the capacitance, the set voltage, and the background light at the charge collection site during high-frequency modulation can be eliminated, and the accuracy of the depth information can be improved.
  • the continuous wave modulation mode is used for high frequency exposure, and the continuous wave modulation mode or pulse wave modulation mode is used for low frequency exposure. In this way, the power consumption of the system can be further reduced.
  • adjacent pixels are exposed in different phases to further reduce system power consumption.
  • the present application provides a distance measuring device, including a transmitter, a receiving sensor, and a processor.
  • the processor is respectively coupled with the transmitter and the receiver.
  • the processor is configured to execute the method described in the first aspect and any possible implementation manner in the first aspect.
  • The present application provides a computer-readable storage medium that stores a computer program. The computer program includes at least one piece of code, and the at least one piece of code can be executed by a computer to control the computer to perform the method described in the first aspect and any possible implementation of the first aspect.
  • The present application provides a computer program product containing instructions. When the computer program product runs on an electronic device, the electronic device is caused to perform the method described in the first aspect and any possible implementation of the first aspect.
  • Fig. 1 is a schematic diagram of the ranging principle of a TOF camera in the background art.
  • FIG. 2 is a schematic structural diagram of a distance measuring device in an embodiment of the application.
  • FIG. 3 is a schematic structural diagram of a distance measuring device in another embodiment of the application.
  • FIG. 4 is a flowchart of a distance measurement method in an embodiment of this application.
  • Fig. 5 is a detailed flowchart of step S11.
  • Fig. 6 is a detailed flowchart of step S13.
  • Fig. 7 is a functional block diagram of a distance measuring device in an embodiment of the application.
  • Figure 8 is a diagram of the sub-function modules of the determining module.
  • Figure 9 is a diagram of the sub-function modules of the computing module.
  • the embodiments of the present application provide a ranging device and a ranging method applied to the ranging device.
  • the ranging method can reduce power consumption while the ranging accuracy is unchanged.
  • The distance measurement method achieves the above functions by adjusting the proportion of time spent on high-frequency and low-frequency exposure, for example by combining (binning) low-frequency pixels to reduce the low-frequency exposure time, thereby avoiding the phase unwrapping failures that an insufficient low-frequency signal-to-noise ratio would otherwise cause.
  • the distance measuring device 100 includes a transmitter 10, a receiver 20 and a processor 30.
  • the transmitter 10 is used to transmit optical signals.
  • the transmitter 10 may be a light emitting diode (Light Emitting Diode, LED) or a laser diode (Laser Diode, LD).
  • the laser has good collimation and high energy. Compared with the same number of LED lights, the laser emitter has a larger detection range and is more suitable for long-distance detection.
  • the receiver 20 is used to receive the light signal reflected by the target object 200.
  • the receiver 20 is composed of a plurality of pixels (not shown) arranged in two dimensions.
  • the receiver 20 is used to perform a receiving operation of reflected light at each pixel, and to generate an electric charge corresponding to the light amount of the reflected light (received light amount) obtained by the light receiving operation.
  • The receiver 20 may include a photosensitive element, and the photosensitive element includes at least one of the following: a photodiode, an avalanche photodiode, and a charge-coupled device.
  • The processor 30 is configured to determine the distance between the target object 200 and the distance measuring device 100 according to the optical signal emitted by the transmitter 10 and the optical signal reflected by the target object 200 and received by the receiver 20. Specifically, the processor 30 is configured to determine the phase difference between the emitted optical signal and the received reflected optical signal, and to determine the distance between the target object 200 and the distance measuring device 100 according to that phase difference.
  • the processor 30 may be a central processing unit (Central Processing Unit, CPU), other general-purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), Field-Programmable Gate Array (FPGA) or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components, etc.
  • the general-purpose processor can be a microprocessor or the processor can also be any conventional processor, etc.
  • The processor is the control center of the distance measuring device 100 and connects the various parts of the entire distance measuring device 100 through various interfaces and lines.
  • the process in which the processor 30 controls the transmitter 10 to emit light signals to the target object 200, and the light signal reflected by the target object 200 is received by the receiver 20 and forms an image is called exposure.
  • The processor 30 controls the transmitter 10 to emit an optical signal of a certain frequency f_L for distance detection, and causes the receiver 20 to perform four-phase-delay exposure (0°, 90°, 180°, 270°) at the same frequency for a period of time t_L.
  • The exposure values of the receiver 20 (DCS0_L, DCS1_L, DCS2_L, DCS3_L) are transmitted to the processor 30.
  • The processor 30 then controls the transmitter 10 to emit another optical signal of frequency f_H for distance detection, and causes the receiver 20 to perform four-phase-delay exposure (0°, 90°, 180°, 270°) at that frequency for a period of time.
  • The exposure values of the receiver 20 (DCS0_H, DCS1_H, DCS2_H, DCS3_H) are also transmitted to the processor 30.
  • the processor 30 outputs a measured distance for each pixel after receiving the data of multiple exposures.
  • FIG. 3 is a schematic structural diagram of a distance measuring device 100 in another embodiment of the application.
  • the distance measuring device 100 further includes a driving unit 40, a lens 50 and an A/D conversion unit 60.
  • The driving unit 40 is connected between the processor 30 and the transmitter 10 and is used for driving the transmitter 10 to emit light signals.
  • the lens 50 is used to converge the light signal reflected by the target object 200.
  • the A/D conversion unit 60 is connected between the receiver 20 and the processor 30, and is configured to perform A/D conversion on the pixel signal from the receiver 20 and output the converted pixel signal To processor 30.
  • the distance measurement device 100 in the embodiment of the present application uses a dual-frequency distance measurement technology to perform distance measurement.
  • the principle of dual-frequency ranging technology is described in detail below.
  • the dual-frequency ranging in the embodiments of the present application refers to using a low-frequency modulation signal and a high-frequency modulation signal to respectively perform exposure to detect the distance.
  • high frequency and low frequency are relative terms.
  • Of the two modulation frequencies used in dual-frequency ranging, the higher one is called the high frequency and the lower one is called the low frequency.
  • The high frequency and the low frequency may differ by roughly a factor of two or more; for example, the low frequency may be 100 MHz and the high frequency 250 MHz, but this is not limiting, and the frequencies can be set according to actual ranging requirements.
  • The ambiguity distance of the high-frequency signal is relatively small; for example, the ambiguity distance corresponding to 250 MHz is 0.6 m, so an object farther than 0.6 m cannot be ranged accurately using the 250 MHz waveform alone.
  • The ambiguity distance corresponding to 100 MHz is 1.5 m, so an object farther than 1.5 m cannot be ranged accurately using the 100 MHz waveform alone. However, when the 100 MHz and 250 MHz signals are used together, the unambiguous working distance becomes 3 m (corresponding to their 50 MHz beat frequency). Therefore, for any pixel measured at a given frequency within 3 m, the measured distances d_L and d_H are determined by two kinds of variables: the unwrapping coefficients n_L, n_H and the phase delays φ_H, φ_L.
  • the specific formula is as follows:
  • where c is the speed of light (3×10^8 m/s); the two unwrapping coefficients are used to restore the true phase delay at each frequency and can be obtained by the unwrapping algorithm; the phase delays are calculated from the exposure values (DCS0, DCS1, DCS2, DCS3) of the receiver 20; and D is the ambiguity distance.
  • the dual-frequency ranging error formula is as follows:
  • A_pix is the pixel area; too large a pixel reduces the resolution and increases size and cost;
  • QE is the quantum efficiency, which directly affects system performance;
  • FF is the fill factor;
  • Φ_active is the active light source power received by the pixel; an efficient light source benefits system power consumption;
  • Φ_ambient is the ambient light power received by the pixel, which is determined by the user's environment, light source, and band-pass filter;
  • t_int is the exposure time, which should not be too long;
  • N_system is the system noise, which is determined by the readout circuit;
  • F_mod is the modulation frequency; increasing it improves accuracy but reduces the measurable distance;
  • C_mod is the modulation contrast (MC), which is related to the internal electric field design of the pixel; and q is a constant.
  • For a distance measuring device 100 using dual-frequency ranging technology, if the two frequency values are similar (for example, within about 10% of each other) and the exposure times of the two frequencies are roughly equal (for example, within about 10%), the final output distance can be obtained as the average of the two measured distances. If the two frequencies differ significantly (for example, by about 50%), the test distance can be obtained in two ways: one is averaging; the other is to use the high-frequency measurement result as the final output, while the low frequency, although it also participates in the exposure, is mainly used to solve the unwrapping coefficient of the high-frequency component and does not participate in other calculations. Therefore, as long as the unwrapping coefficient is calculated correctly, the distance measuring device 100 can increase the high-frequency exposure time and reduce the low-frequency exposure time, and the ranging accuracy can thereby be improved.
  • FIG. 4 is a flowchart of a ranging method in an embodiment of this application.
  • The distance measurement method is applied to the distance measurement device 100 in FIG. 2 or FIG. 3.
  • the ranging method includes the following steps.
  • Step S11: Determine a high-frequency exposure time t_H and a low-frequency exposure time t_L; when determining the low-frequency exposure time, perform combination processing on the low-frequency pixels and determine the low-frequency exposure time t_L according to the number of pixels in each combination.
  • performing combination processing on low-frequency pixels refers to performing binning processing on low-frequency pixels.
  • Binning is an image readout mode in which the charges induced in adjacent pixels are added together and read out as a single pixel.
  • Binning is divided into horizontal binning and vertical binning: horizontal binning adds and reads out the charges of adjacent rows, while vertical binning adds and reads out the charges of adjacent columns.
  • The advantage is that several pixels can be combined into one pixel, which increases sensitivity and output speed at the cost of reduced resolution.
  • The aspect ratio of the image does not change; for example, when 2*2 binning is used, the number of pixels in the image is reduced by 75%.
  • a combination of 1*1, 1*2, 2*1, 1*3, 3*1, or 2*2 can be performed according to actual needs, which is not limited here.
  • Pre-exposure may be performed for a preset time T_1 to obtain a pre-exposed image, and the high-frequency exposure time and the low-frequency exposure time are then determined respectively according to the strength of the reference signal received by each pixel of the pre-exposed image within the preset time T_1.
  • the pre-exposure can be performed using a modulation signal with a higher frequency, or a modulation signal with a lower frequency, which is not limited here.
  • Step S12 performing high-frequency exposure according to the high-frequency exposure time to obtain a high-frequency exposure image, and performing low-frequency exposure according to the low-frequency exposure time to obtain a low-frequency exposure image.
  • the sequence of high-frequency exposure and low-frequency exposure is not limited.
  • high-frequency exposure may be performed first, or low-frequency exposure may be performed first.
  • Step S13 Calculate the distance between the distance measuring device and the target object according to the high-frequency exposure image and the low-frequency exposure image.
  • Since the low-frequency pixels are combined when determining the low-frequency exposure time, the signal-to-noise ratio of the low-frequency exposure image can be improved. Therefore, even if less time is allocated to the low-frequency modulation signal, the correct phase unwrapping coefficient can still be obtained, thereby ensuring the ranging accuracy. In this way, system power consumption is reduced while accuracy is maintained.
  • Determining the high-frequency exposure time t_H and the low-frequency exposure time t_L respectively according to the signal strength received by the pre-exposure image within the preset time T_1 specifically includes the following steps.
  • Step S111: Acquire the reference signal strength A_1 received by the pre-exposure image within the preset time T_1.
  • The reference signal strength A_1 is the average signal intensity of all pixels of the pre-exposure image; or, the reference signal strength is the average signal intensity of the characteristic pixels of the pre-exposure image, where the characteristic pixels are the pixels corresponding to the characteristic regions of the pre-exposure image.
  • For example, when the exposed subject is a human face, the characteristic areas may be the areas corresponding to the nose, eyes, and mouth; the specific characteristic areas can be determined according to the actual subject being exposed.
  • Step S112: Determine the high-frequency target signal strength A_2H and the low-frequency target signal strength A_2L respectively according to the reference signal strength A_1.
  • Specifically, the relationship curve between the variance or standard deviation of the measured distance noise at different frequencies and the received signal strength can be used to determine the high-frequency target signal strength A_2H and the low-frequency target signal strength A_2L.
  • The signal strength corresponding to an appropriate distance noise variance or standard deviation should be selected as the target signal strength.
  • An appropriate distance noise variance or standard deviation means that the target signal strength corresponding to that variance meets the ranging accuracy requirements and allows the correct phase unwrapping coefficient to be obtained.
  • Step S113: Calculate the high-frequency exposure time t_H according to the reference signal strength A_1, the high-frequency target signal strength A_2H, and the preset time T_1; and calculate the low-frequency exposure time t_L according to the reference signal strength A_1, the low-frequency target signal strength A_2L, the number of pixels b in each combination, and the preset time T_1, where the low-frequency exposure time t_L is inversely proportional to the number of pixels b in each combination.
  • A formula for specifically calculating the high-frequency exposure time t_H and the low-frequency exposure time t_L is as follows:
  • step S13 specifically includes the following steps:
  • Step S131 Calculate the high-frequency phase delay information of each pixel in the high-frequency exposure image, and calculate the low-frequency phase delay information of each pixel in the low-frequency exposure image.
  • Step S132 Up-sampling the low-frequency phase delay information of each pixel combined in the low-frequency exposure image to obtain a low-frequency exposure image with the same resolution as the high-frequency exposure image.
  • For example, if the low-frequency exposure image obtained after combination has a resolution of 160*120, the low-frequency phase delay information of each combined pixel is up-sampled to obtain phase delay information for each pixel at a resolution of 320*240, the same resolution as the high-frequency exposure image.
  • step S133 the high-frequency unwrapping coefficient of each pixel is calculated according to the high-frequency phase delay information of each pixel and the low-frequency phase delay information of each pixel.
  • Step S134 Calculate the distance between each pixel and the object photographed when the pixel is exposed according to the high-frequency unwrapping coefficient of each pixel and the high-frequency phase delay information of each pixel.
  • In this way, the images after the combination (binning) processing can be restored, which improves the accuracy of subsequent ranging.
  • For both high-frequency exposure and low-frequency exposure, a continuous wave modulation mode or a pulse wave modulation mode may be used; that is, the high-frequency exposure can be switched between the continuous wave modulation mode and the pulse wave modulation mode, and the low-frequency exposure can also be switched between the continuous wave modulation mode and the pulse wave modulation mode.
  • the continuous wave modulation mode can be used by default, or it can be switched according to the use environment of the distance measuring device 100.
  • For example, the continuous wave modulation mode is used when the distance measuring device 100 is applied to an indoor environment, and the pulse wave modulation mode is used in an outdoor environment; different modulation modes can be adopted according to specific needs, which improves the applicability of the ranging method.
  • In dual-frequency ranging, the high-frequency and low-frequency exposure modulation can each be divided into 1, 2, 3, 4, or more exposures.
  • In the embodiments of the present application, one exposure refers to a group of continuous waveforms, rather than a single period of the waveform.
  • the number of high and low frequency exposures will be described in detail below with a specific embodiment.
  • Example 1 Continuous wave exposure modulation is used for high frequency exposure 4 times, and continuous wave exposure modulation is used for low frequency exposure 2 times. Refer to the table below for details.
  • high-frequency exposure modulation is performed first, and then low-frequency exposure modulation is performed.
  • the sequence of high-frequency exposure modulation and low-frequency exposure modulation can be exchanged.
  • f_H represents high-frequency exposure modulation, and f_L represents low-frequency exposure modulation.
  • A_0 represents an exposure with a phase of 0°, A_90 an exposure with a phase of 90°, A_180 an exposure with a phase of 180°, and A_270 an exposure with a phase of 270°.
  • A_0A_180 indicates that phase window A and phase window B perform 0° and 180° exposures respectively; A_180A_0 indicates that phase window A and phase window B perform 180° and 0° exposures respectively.
  • The high-frequency continuous waves A_0A_180, A_90A_270, A_180A_0, A_270A_90 and the low-frequency continuous waves A_0A_180, A_90A_270 are exposed in sequence; in actual operation, the order can be exchanged.
  • Chopping technology is used for the high-frequency exposure, that is, both the A_0A_180 and the A_180A_0 exposures are performed.
  • the chopping technology is also called the mismatch elimination technology of stored charge components.
  • Embodiment 2 High-frequency exposure uses continuous wave exposure modulation 4 times, and low-frequency exposure uses pulse wave exposure modulation once. See the table below for details.
  • The difference between Embodiment 2 and Embodiment 1 is that the low frequency uses A_0A_180 pulse wave exposure, performed once. Since the distance information obtained by the low-frequency modulation is mainly used for phase unwrapping, pulse modulation with 0° and 180° phases is adopted, and the depth information needed for phase unwrapping can be obtained with only one exposure, which further reduces system power consumption. It should be noted that in actual operation, the pulse modulation phases of phase window A and phase window B can be exchanged.
  • Embodiment 3 High-frequency exposure uses continuous wave exposure modulation twice, and low-frequency exposure uses continuous wave exposure modulation twice. See the table below for details.
  • In Embodiment 3, the high-frequency continuous waves A_0A_180, A_90A_270 and the low-frequency continuous waves A_0A_180, A_90A_270 are exposed in sequence; the order of the high-frequency and low-frequency exposures can be exchanged.
  • Since the high-frequency modulation does not use chopping technology to eliminate the offset caused by process, device, and environmental factors, the accuracy of the depth information is somewhat reduced, but the power consumption is reduced accordingly.
  • Embodiment 4 High-frequency exposure uses continuous wave exposure modulation twice, and low-frequency exposure uses pulse wave exposure modulation once. See the table below for details.
  • In Embodiment 4, the high-frequency continuous waves A_0A_180, A_90A_270 and the low-frequency pulse wave A_0A_180 are exposed in sequence; the order of the high-frequency and low-frequency exposures can be exchanged.
  • Since the high-frequency modulation does not use chopping technology to eliminate the mismatch caused by process, device, and environmental factors, the accuracy of the depth information is somewhat reduced, but the power consumption is reduced accordingly.
  • The low-frequency pulse exposure modulation can obtain the depth information needed for phase unwrapping with only one exposure, further reducing overall power consumption. It should be noted that in actual operation, the pulse modulation phases of phase window A and phase window B can be exchanged.
  • the above exposure combinations are only examples, and other exposure modulation times combinations with similar principles are also included in the protection scope of the present application.
  • In other embodiments, the pixels may be spatially separated during exposure, that is, adjacent pixels are exposed at different phases, which can further reduce power consumption; implementations based on the same principle are also included in the protection scope of the present application.
  • the processor 30 is configured to execute the ranging method in any of the foregoing implementation manners.
  • the distance measuring device 100 includes a determination module 110, an exposure module 120, and a calculation module 130.
  • the determination module 110 may be used to implement the method shown in step S11 in the foregoing method embodiment
  • the exposure module 120 may be used to implement the method shown in step S12 in the foregoing method embodiment
  • The calculation module 130 may be used to implement the method shown in step S13 in the foregoing method embodiment.
  • the determining module 110 includes an acquiring unit 111 and a determining unit 112.
  • the acquiring unit 111 may be used to implement the method shown in step S111 in the foregoing method embodiment; the determining unit may be respectively used to implement the method shown in step S112 and step S113 in the foregoing method embodiment.
  • the calculation module 130 includes a calculation unit 131 and a sampling unit 132.
  • the calculation unit 131 may be used to implement the method shown in step S131 in the foregoing method embodiment.
  • the sampling unit 132 may be used to implement the method shown in step S132 in the foregoing method embodiment.
  • the calculation unit 131 may also be used to implement the methods shown in step S133 and step S134 in the foregoing method embodiment.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired or wireless manner.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or a data center integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, and a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk).

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

A ranging device and a ranging method applied to the ranging device are provided. The ranging device comprises a transmitter and a receiver. The ranging method comprises: determining a high-frequency exposure time and a low-frequency exposure time; when determining the low-frequency exposure time, combining low-frequency pixels and determining the low-frequency exposure time according to the number of pixels in each combination (S11); performing high-frequency exposure according to the high-frequency exposure time to obtain a high-frequency exposure image, and performing low-frequency exposure according to the low-frequency exposure time to obtain a low-frequency exposure image (S12); and calculating the distance between the ranging device and a target object according to the high-frequency exposure image and the low-frequency exposure image (S13). The described method can reduce system power consumption while ensuring accuracy.
PCT/CN2019/113038 2019-10-24 2019-10-24 Procédé de télémétrie, dispositif de télémétrie et support de stockage lisible par ordinateur WO2021077358A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201980101481.6A CN114556048B (zh) 2019-10-24 2019-10-24 测距方法、测距装置及计算机可读存储介质
PCT/CN2019/113038 WO2021077358A1 (fr) 2019-10-24 2019-10-24 Procédé de télémétrie, dispositif de télémétrie et support de stockage lisible par ordinateur

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/113038 WO2021077358A1 (fr) 2019-10-24 2019-10-24 Procédé de télémétrie, dispositif de télémétrie et support de stockage lisible par ordinateur

Publications (1)

Publication Number Publication Date
WO2021077358A1 true WO2021077358A1 (fr) 2021-04-29

Family

ID=75620323

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/113038 WO2021077358A1 (fr) 2019-10-24 2019-10-24 Procédé de télémétrie, dispositif de télémétrie et support de stockage lisible par ordinateur

Country Status (2)

Country Link
CN (1) CN114556048B (fr)
WO (1) WO2021077358A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024116047A1 (fr) * 2022-11-30 2024-06-06 Cilag Gmbh International Compartimentage de pixels trame par trame

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100310246A1 (en) * 2009-06-04 2010-12-09 Digital Imaging Systems Gmbh Method for using a variable aperture to tune image quality parameters in a camera system
CN104853113A (zh) * 2015-05-15 2015-08-19 零度智控(北京)智能科技有限公司 自适应调节相机曝光时间的装置及方法
CN105651245A (zh) * 2014-11-12 2016-06-08 原相科技股份有限公司 光学测距系统及方法
CN105872392A (zh) * 2015-01-23 2016-08-17 原相科技股份有限公司 具有动态曝光时间的光学测距系统
CN106289158A (zh) * 2015-06-24 2017-01-04 三星电机株式会社 距离检测装置以及包括该距离检测装置的相机模块
US20180302545A1 (en) * 2015-01-13 2018-10-18 Pixart Imaging Inc. Optical measurement system with dynamic exposure time and operating method therefor

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5171158B2 (ja) * 2007-08-22 2013-03-27 浜松ホトニクス株式会社 固体撮像装置及び距離画像測定装置
US8629976B2 (en) * 2007-10-02 2014-01-14 Microsoft Corporation Methods and systems for hierarchical de-aliasing time-of-flight (TOF) systems
CN104833979B (zh) * 2015-04-27 2017-03-15 北京航天控制仪器研究所 一种激光测距及激光测距数据的信号处理的方法
US10191154B2 (en) * 2016-02-11 2019-01-29 Massachusetts Institute Of Technology Methods and apparatus for time-of-flight imaging
CN106686319B (zh) * 2016-12-27 2019-07-16 浙江大华技术股份有限公司 一种图像曝光的控制方法及其装置
CN108401457A (zh) * 2017-08-25 2018-08-14 深圳市大疆创新科技有限公司 一种曝光的控制方法、装置以及无人机
CN108132458B (zh) * 2017-12-22 2020-07-17 北京锐安科技有限公司 室内测距方法、装置、设备及存储介质
CN109597091B (zh) * 2018-12-28 2022-11-11 豪威科技(武汉)有限公司 Tof测距的相位解包裹的方法及tof测距系统
CN109903241B (zh) * 2019-01-31 2021-06-15 武汉市聚芯微电子有限责任公司 一种tof相机系统的深度图像校准方法及系统

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100310246A1 (en) * 2009-06-04 2010-12-09 Digital Imaging Systems Gmbh Method for using a variable aperture to tune image quality parameters in a camera system
CN105651245A (zh) * 2014-11-12 2016-06-08 原相科技股份有限公司 光学测距系统及方法
US20180302545A1 (en) * 2015-01-13 2018-10-18 Pixart Imaging Inc. Optical measurement system with dynamic exposure time and operating method therefor
CN105872392A (zh) * 2015-01-23 2016-08-17 原相科技股份有限公司 具有动态曝光时间的光学测距系统
CN104853113A (zh) * 2015-05-15 2015-08-19 零度智控(北京)智能科技有限公司 自适应调节相机曝光时间的装置及方法
CN106289158A (zh) * 2015-06-24 2017-01-04 三星电机株式会社 距离检测装置以及包括该距离检测装置的相机模块

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024116047A1 (fr) * 2022-11-30 2024-06-06 Cilag Gmbh International Compartimentage de pixels trame par trame

Also Published As

Publication number Publication date
CN114556048A (zh) 2022-05-27
CN114556048B (zh) 2023-09-26

Similar Documents

Publication Publication Date Title
WO2021008209A1 (fr) Appareil de mesure de profondeur et procédé de mesure de distance
US20210181317A1 (en) Time-of-flight-based distance measurement system and method
WO2020009011A1 (fr) Appareil de mesure de distance optique
CN111045029B (zh) 一种融合的深度测量装置及测量方法
WO2021023285A1 (fr) Procédé et appareil de traitement d'écho pour un lidar imageur, procédé et appareil de télémétrie et système de lidar imageur
WO2022160611A1 (fr) Procédé, système et dispositif de mesure de distance à base de fusion temporelle
WO2021051481A1 (fr) Procédé de mesure de distance par temps de vol par traçage d'un histogramme dynamique et système de mesure associé
US20220120872A1 (en) Methods for dynamically adjusting threshold of sipm receiver and laser radar, and laser radar
TWI722519B (zh) 飛時測距感測器以及飛時測距方法
WO2020006924A1 (fr) Procédé et dispositif de mesure d'informations de profondeur basées sur un module à temps de vol (tof)
WO2021051480A1 (fr) Procédé de mesure de distance de temps de vol basé sur un dessin d'histogramme dynamique et système de mesure
US11885669B2 (en) Systems and methods for imaging and sensing vibrations
TWI723413B (zh) 測量一成像感測器與一物體間之一距離的系統及方法
WO2022241942A1 (fr) Caméra de profondeur et procédé de calcul de profondeur
WO2023004628A1 (fr) Procédé et appareil de correction de réflectivité, support de stockage lisible par ordinateur et dispositif terminal
WO2021077358A1 (fr) Procédé de télémétrie, dispositif de télémétrie et support de stockage lisible par ordinateur
CN110986816A (zh) 一种深度测量系统及其测量方法
WO2022000147A1 (fr) Procédé de traitement d'images de profondeur et dispositif
US20220284543A1 (en) Signal processing apparatus and signal processing method
TWI693421B (zh) 飛行時間測距裝置以及飛行時間測距方法
WO2022160622A1 (fr) Procédé, dispositif et système de mesure de distance
WO2023279621A1 (fr) Système de mesure de distance itof et procédé de calcul de la réflectivité d'un objet mesuré
WO2022204895A1 (fr) Procédé et appareil pour obtenir une carte de profondeur, dispositif informatique, et support de stockage lisible
WO2023279619A1 (fr) Système de télémétrie de temps de vol indirect, et procédé de protection d'une valeur de distance floue
WO2023279755A1 (fr) Procédé et appareil pour masquer des valeurs de distance d'ambiguïté d'un système de télémétrie, et dispositif

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19949813

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19949813

Country of ref document: EP

Kind code of ref document: A1