CN112987022A - Distance measurement method and device, computer readable medium and electronic equipment - Google Patents


Info

Publication number
CN112987022A
CN112987022A
Authority
CN
China
Prior art keywords
dot matrix
optical signal
matrix optical
target
photosensitive pixel
Prior art date
Legal status
Pending
Application number
CN202110267995.8A
Other languages
Chinese (zh)
Inventor
黄毅鑫
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110267995.8A priority Critical patent/CN112987022A/en
Publication of CN112987022A publication Critical patent/CN112987022A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88 Lidar systems specially adapted for specific applications
    • G01S17/89 Lidar systems specially adapted for mapping or imaging
    • G01S17/894 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 Systems determining position data of a target
    • G01S17/08 Systems determining position data of a target for measuring distance only
    • G01S17/10 Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 Details of systems according to group G01S17/00
    • G01S7/4802 Details using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/483 Details of pulse systems
    • G01S7/486 Receivers
    • G01S7/4865 Time delay measurement, e.g. time-of-flight measurement, time of arrival measurement or determining the exact position of a peak
    • G01S7/491 Details of non-pulse systems
    • G01S7/4912 Receivers
    • G01S7/4915 Time delay measurement, e.g. operational details for pixel components; Phase measurement

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

The disclosure provides a distance measurement method and apparatus, a computer-readable medium, and an electronic device, and relates to the technical field of laser distance measurement. The method comprises the following steps: in a pixel screening stage, emitting a dot matrix optical signal to a target area so that the dot matrix optical signal is reflected at the target area and a reflected dot matrix optical signal is generated; receiving the reflected dot matrix optical signal, and screening target photosensitive pixels in a photosensitive pixel area through the reflected dot matrix optical signal; and in the distance measurement stage, receiving the reflected dot matrix optical signal only with the target photosensitive pixels in a working state, and determining a distance measurement result according to the phase difference between the dot matrix optical signal and the reflected dot matrix optical signal. The present disclosure can effectively reduce the power consumption of an iToF ranging system using a dot matrix light source, reduce the computation load of the photosensitive sensor, improve the ranging efficiency, and increase the output frame rate of the ranging result.

Description

Distance measurement method and device, computer readable medium and electronic equipment
Technical Field
The present disclosure relates to the field of laser ranging technologies, and in particular, to a ranging method, a ranging apparatus, a computer-readable medium, and an electronic device.
Background
Optical ranging imaging technology can obtain complete three-dimensional structural information of a scene and help machines achieve high-precision recognition, positioning, scene reconstruction, and the like, and has become one of the essential basic technologies for Augmented Reality (AR) applications. Widely used optical ranging imaging technologies include iToF (indirect Time of Flight) and dToF (direct Time of Flight); compared with dToF, the iToF technology is lower in cost and easier to manufacture.
Currently, in a related iToF ranging scheme, the surface light source is replaced by a dot matrix light source. For the receiving-end photosensitive sensor, only the areas irradiated by the dot matrix light source generate a valid signal; in the other areas, which are not irradiated by the dot matrix light source, the corresponding photosensitive pixels are also operating but collect only noise signals, which causes unnecessary power consumption. The received noise signals also increase the computation load, thereby reducing the ranging efficiency and the frame rate of the ranging result.
Disclosure of Invention
The disclosure is directed to a ranging method, a ranging apparatus, a computer-readable medium, and an electronic device, so as to avoid, at least to some extent, the unnecessary power consumption, increased computation load, and reduced ranging efficiency and frame rate of the ranging result caused by photosensitive pixels receiving noise signals in an iToF scheme using a dot matrix light source.
According to a first aspect of the present disclosure, there is provided a ranging method, including:
in a pixel screening stage, emitting a dot matrix optical signal to a target area so that the dot matrix optical signal is reflected at the target area and a reflected dot matrix optical signal is generated;
receiving the reflection dot matrix optical signal, and screening a target photosensitive pixel in a photosensitive pixel area through the reflection dot matrix optical signal;
and in the distance measurement stage, the target photosensitive pixel in a working state receives the reflection dot matrix optical signal, and a distance measurement result is determined according to the phase difference between the dot matrix optical signal and the reflection dot matrix optical signal.
According to a second aspect of the present disclosure, there is provided a ranging apparatus comprising:
the device comprises an emitting module, a pixel screening module and a ranging module, wherein the emitting module is used for emitting a dot matrix optical signal to a target area in a pixel screening stage and a ranging stage so that the dot matrix optical signal is reflected at the target area and a reflected dot matrix optical signal is generated;
the receiving module is electrically connected with the transmitting module and used for receiving the reflection dot matrix optical signals in a pixel screening stage and screening target photosensitive pixels in a photosensitive pixel area through the reflection dot matrix optical signals; or in the distance measurement stage, the target photosensitive pixel in the working state receives the reflection dot matrix optical signal, and the distance measurement result is determined according to the phase difference between the dot matrix optical signal and the reflection dot matrix optical signal.
According to a third aspect of the present disclosure, a computer-readable medium is provided, on which a computer program is stored which, when executed by a processor, implements the above method.
According to a fourth aspect of the present disclosure, there is provided an electronic apparatus, comprising:
a processor; and
a memory for storing one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the above-described method.
In the distance measurement method provided by one embodiment of the disclosure, in a pixel screening stage, a dot matrix optical signal is emitted to a target area so as to be reflected at the target area and generate a reflected dot matrix optical signal, the reflected dot matrix optical signal is received, and a target photosensitive pixel is obtained by screening the reflected dot matrix optical signal in a photosensitive pixel area; and in the distance measurement stage, the reflected dot matrix optical signal is received only by the target photosensitive pixel in the working state, and the distance measurement result is determined according to the phase difference between the dot matrix optical signal and the reflected dot matrix optical signal. On one hand, before each frame of measurement, the target photosensitive pixel receiving the effective signal is screened in the pixel screening stage, so that the ranging result with higher precision and accuracy can be obtained only by starting the target photosensitive pixel in the ranging stage, unnecessary power consumption caused by starting the photosensitive pixel which cannot receive the effective signal is avoided, and the power consumption of the ranging system is reduced; on the other hand, the distance measuring system only needs to calculate the reflected dot matrix optical signals detected by the target photosensitive pixels, so that the calculation amount is effectively reduced, the distance measuring efficiency is improved, and the frame rate of the distance measuring result is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty. In the drawings:
FIG. 1 shows a schematic diagram of an electronic device to which embodiments of the present disclosure may be applied;
FIG. 2 schematically illustrates a flow chart of a ranging method in an exemplary embodiment of the present disclosure;
FIG. 3 schematically illustrates a flow chart for determining an addressing area corresponding to each light spot in an exemplary embodiment of the disclosure;
FIG. 4 schematically illustrates a flow chart for screening target photosensitive pixels in an exemplary embodiment of the present disclosure;
FIG. 5 schematically illustrates a flow chart for screening target photosensitive pixels according to light intensity information in an exemplary embodiment of the present disclosure;
FIG. 6 schematically illustrates another flow chart for screening target photosensitive pixels in exemplary embodiments of the present disclosure;
FIG. 7 schematically illustrates a flow chart for screening target photosensitive pixels according to confidence scores in an exemplary embodiment of the present disclosure;
fig. 8 schematically shows a composition diagram of a ranging apparatus in an exemplary embodiment of the present disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
Fig. 1 shows a schematic view of an electronic device to which embodiments of the present disclosure may be applied.
The following takes the mobile terminal 100 in fig. 1 as an example to illustrate the configuration of the electronic device. It will be appreciated by those skilled in the art that the configuration of fig. 1 can also be applied to fixed devices, except for components specifically used for mobile purposes. In other embodiments, the mobile terminal 100 may include more or fewer components than shown, some components may be combined or split, or the components may be arranged differently. The illustrated components may be implemented in hardware, software, or a combination of the two. The interfacing relationship between the components is only schematically illustrated and does not constitute a structural limitation of the mobile terminal 100. In other embodiments, the mobile terminal 100 may also adopt an interfacing arrangement different from that shown in fig. 1, or a combination of multiple interfacing arrangements.
As shown in fig. 1, the mobile terminal 100 may specifically include: a processor 110, an internal memory 121, an external memory interface 122, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 171, a receiver 172, a microphone 173, an earphone interface 174, a sensor module 180, a display 190, a camera module 191, an indicator 192, a motor 193, a button 194, and a Subscriber Identity Module (SIM) card interface 195, and the like. Wherein the sensor module 180 may include a depth sensor 1801, a pressure sensor 1802, a gyroscope sensor 1803, and the like.
Processor 110 may include one or more processing units, such as: the Processor 110 may include an Application Processor (AP), a modem Processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband Processor, and/or a Neural-Network Processing Unit (NPU), and the like. The different processing units may be separate devices or may be integrated into one or more processors.
The NPU is a Neural-Network (NN) computing processor, which processes input information quickly by using a biological Neural Network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. The NPU may implement applications such as intelligent recognition of the mobile terminal 100, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
A memory is provided in the processor 110. The memory may store instructions for implementing six modular functions: detection instructions, connection instructions, information management instructions, analysis instructions, data transmission instructions, and notification instructions, and are controlled to be executed by the processor 110.
The charging management module 140 is configured to receive charging input from a charger. The power management module 141 is used for connecting the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives the input of the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the display screen 190, the camera module 191, the wireless communication module 160, and the like.
The wireless communication function of the mobile terminal 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like. Wherein, the antenna 1 and the antenna 2 are used for transmitting and receiving electromagnetic wave signals; the mobile communication module 150 may provide a solution including wireless communication of 2G/3G/4G/5G, etc. applied to the mobile terminal 100; the modem processor may include a modulator and a demodulator; the Wireless communication module 160 may provide a solution for Wireless communication including Wireless Local Area Network (WLAN) (e.g., Wireless Fidelity (Wi-Fi) network), Bluetooth (BT), and the like, applied to the mobile terminal 100. In some embodiments, the antenna 1 of the mobile terminal 100 is coupled to the mobile communication module 150 and the antenna 2 is coupled to the wireless communication module 160 so that the mobile terminal 100 can communicate with networks and other devices through wireless communication techniques.
The mobile terminal 100 implements a display function through the GPU, the display screen 190, the application processor, and the like. The GPU is a microprocessor for image processing, and is connected to a display screen 190 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The mobile terminal 100 may implement a photographing function through the ISP, the camera module 191, the video codec, the GPU, the display screen 190, the application processor, and the like. The ISP is used for processing data fed back by the camera module 191; the camera module 191 is used for capturing still images or videos; the digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals; the video codec is used to compress or decompress digital video, and the mobile terminal 100 may also support one or more video codecs.
The external memory interface 122 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the mobile terminal 100. The external memory card communicates with the processor 110 through the external memory interface 122 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (e.g., audio data, a phonebook, etc.) created during use of the mobile terminal 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk Storage device, a Flash memory device, a Universal Flash Storage (UFS), and the like. The processor 110 executes various functional applications of the mobile terminal 100 and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The mobile terminal 100 may implement an audio function through the audio module 170, the speaker 171, the receiver 172, the microphone 173, the earphone interface 174, and the application processor. Such as music playing, recording, etc.
The depth sensor 1801 is used to acquire depth information of a scene. In some embodiments, the depth sensor may be disposed in the camera module 191. The depth sensor 1801 may be a dToF lens capable of measuring a real distance according to the flight time of pulsed laser light; specifically, the dToF lens may include a SPAD pixel array for receiving the photocurrent generated by the reflected light and timing it through a TDC (Time-to-Digital Converter) circuit, although this exemplary embodiment is not limited thereto.
The pressure sensor 1802 is used to sense a pressure signal, which can be converted into an electrical signal. In some embodiments, the pressure sensor 1802 may be disposed on the display screen 190. The pressure sensors 1802 can be of a wide variety, such as resistive pressure sensors, inductive pressure sensors, capacitive pressure sensors, and the like.
The gyro sensor 1803 may be used to determine a motion gesture of the mobile terminal 100. In some embodiments, the angular velocity of the mobile terminal 100 about three axes (i.e., x, y, and z axes) may be determined by the gyro sensors 1803. The gyro sensor 1803 may be used to photograph anti-shake, navigation, body-feel game scenes, and the like.
In addition, sensors with other functions, such as an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, etc., may be provided in the sensor module 180 according to actual needs.
Other devices for providing auxiliary functions may also be included in the mobile terminal 100. For example, the keys 194 include a power-on key, a volume key, etc., through which a user can generate key signal inputs related to user settings and function control of the mobile terminal 100. As another example, indicator 192, motor 193, SIM card interface 195, etc.
iToF calculates distance by measuring the phase difference between the transmitted wave and the echo, from which the time of flight of light between the sensor and the measured object is derived. In a related iToF scheme, the emitting end generally emits a surface light source that irradiates an object or target area, and the receiving end receives the corresponding light reflected by the object or target area, so that the photosensitive sensor can calculate depth information for the whole field of view covered by the surface light source. However, the light of a surface light source is not concentrated enough and its power consumption is high, so the scheme cannot measure for long periods; moreover, the distance measurable with a surface light source is short, so the method cannot be applied to long-distance scenes.
In another related iToF scheme, in order to increase the measurement distance and accuracy, or to reduce the power consumption of the ranging system while keeping the same measurement precision, a dot matrix light source is used instead of a surface light source. At the same light-source power consumption, the energy of each point of the dot matrix light source is more concentrated and the corresponding signal-to-noise ratio is higher, so the measurement precision and distance can be improved, or the power consumption can be reduced. However, in this scheme, for the photosensitive sensor at the receiving end, only the areas illuminated by the dot matrix light source carry a valid signal, and only the corresponding photosensitive pixels can measure valid depth information. In the other areas, the photosensitive pixels not irradiated by light reflected from the dot matrix light source are still working, but the optical signals they collect are only noise; pixels that detect no valid optical signal do not actually need to be switched on at all, so keeping them on causes unnecessary power consumption. Moreover, the noise signals collected by these pixels are also transmitted to the processing module for processing, which increases the computation load and reduces the processing efficiency of the ranging system.
Based on one or more problems in the related art, the present exemplary embodiment first provides a ranging method, which may be applied to a terminal device, for example, the terminal device may be a smart phone, a ranging device, an AR device, and the like.
Referring to fig. 2, fig. 2 shows a flow of a ranging method in the present exemplary embodiment, which may include the following steps S210 to S230:
in step S210, in a pixel screening phase, a dot matrix light signal is emitted to a target area, so that the dot matrix light signal is reflected at the target area and a reflected dot matrix light signal is generated.
In an exemplary embodiment, the pixel screening stage refers to the time period, before ranging starts for each frame, in which the photosensitive pixels receiving a valid signal are selected; the ranging stage refers to the time period in which only the photosensitive pixels selected in the pixel screening stage are gated for ranging. Together, the pixel screening stage and the ranging stage measure the distance or depth information of one complete frame. The time consumed by the pixel screening stage is generally less than that of the ranging stage; for example, the pixel screening stage may be set to 10 microseconds and the ranging stage to 200 microseconds or more.
The target area may refer to an area that needs to be measured far and near within a certain distance, for example, the target area may be a three-dimensional human face, may also be an object surface in an indoor environment, and of course, may also be other areas that need to be measured for distance, which is not particularly limited in this example embodiment.
The lattice optical signal refers to an optical signal generated by a lattice light source array, where the lattice light source may be a light source array composed of Vertical-Cavity Surface-Emitting lasers (VCSELs), the optical signal may be a modulated pulse Laser signal with a specific frequency emitted by the Vertical-Cavity Surface-Emitting lasers, and the modulated pulse Laser signal may be reflected at a target region to obtain a reflected lattice optical signal.
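As a toy illustration (not taken from the patent), the on/off modulation envelope of such a pulsed laser signal can be sketched as a square wave; the 20 MHz modulation frequency used here is an assumed example value, and the function name is hypothetical.

```python
import math

def modulated_signal(t, freq_hz):
    """Illustrative square-wave modulation envelope of the dot matrix light
    signal at the given frequency (1 = laser on, 0 = laser off)."""
    return 1 if math.sin(2 * math.pi * freq_hz * t) >= 0 else 0

f = 20e6                       # assumed 20 MHz modulation frequency
period = 1 / f
print(modulated_signal(0.0, f))            # 1 (first half of the period: on)
print(modulated_signal(0.6 * period, f))   # 0 (second half of the period: off)
```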
In step S220, the reflected dot matrix optical signal is received, and a target photosensitive pixel is screened in the photosensitive pixel region by the reflected dot matrix optical signal.
In an exemplary embodiment, the photosensitive pixel area refers to an area where photosensitive pixels in the photosensitive sensor are distributed, and the photosensitive pixel area receives a reflected dot matrix optical signal generated by reflection at a target area, so that the optical signal can be converted into an electrical signal by a Photodiode (PD) corresponding to the photosensitive pixel, and then the electrical signal is processed to obtain distance data.
The target photosensitive pixels are the photosensitive pixels detected in the pixel screening stage as effectively receiving the reflected dot matrix optical signal; that is, a target photosensitive pixel can receive a valid signal, while the signals received by the other photosensitive pixels in the photosensitive pixel area are noise signals. For example, the target photosensitive pixels may be screened according to the intensity of the optical signal received by each photosensitive pixel, or according to the output result of each photosensitive pixel; of course, other manners capable of screening the photosensitive pixels that receive valid signals may also be used.
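The intensity-based screening mentioned above can be sketched as a simple threshold test over the photosensitive pixel area. This is a minimal illustration, not the patent's implementation; the function name, the threshold value, and the toy intensity map are all assumptions.

```python
import numpy as np

def screen_target_pixels(intensity, threshold):
    """Return the (row, col) coordinates of photosensitive pixels whose
    received light intensity exceeds the threshold, i.e. the pixels
    presumed to be hit by a reflected dot matrix spot."""
    mask = intensity > threshold
    return np.argwhere(mask)

# Toy 4x4 sensor: two pixels receive reflected spots, the rest see noise.
intensity = np.array([
    [0.02, 0.01, 0.03, 0.02],
    [0.01, 0.90, 0.02, 0.01],
    [0.03, 0.02, 0.01, 0.85],
    [0.02, 0.01, 0.02, 0.03],
])
coords = screen_target_pixels(intensity, threshold=0.5)
print(coords.tolist())  # [[1, 1], [2, 3]]
```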
In step S230, in the ranging phase, the target photosensitive pixel in the working state receives the reflected dot matrix optical signal, and a ranging result is determined according to a phase difference between the dot matrix optical signal and the reflected dot matrix optical signal.
In an exemplary embodiment, a target photosensitive pixel in the working state refers to a gated target photosensitive pixel. For example, a photosensitive pixel gating control circuit may be designed in advance, and the "reset" (RST) signal of each photosensitive pixel may then be controlled by this circuit: when the RST signal is set to a high level, the FD (Floating Diffusion) node is tied directly to the high level of the circuit, the PD signal cannot be effectively transferred to the FD, and the photosensitive pixel does not operate; when a photosensitive pixel is required to be in the working state, its RST signal is set to a low level. Of course, the specific gating manner depends on the pre-designed photosensitive pixel gating control circuit, which is not particularly limited in this example.
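The RST-based gating described above amounts to building a per-pixel control map: RST high switches a pixel off, RST low keeps it working. The sketch below is purely illustrative; the function name and the example coordinates are assumptions, not part of the patent.

```python
import numpy as np

def build_rst_levels(shape, target_coords):
    """RST high (1) turns a pixel off; RST low (0) keeps it working.
    Only the screened target pixels are gated into the working state."""
    rst = np.ones(shape, dtype=int)   # default: every pixel off (RST high)
    for r, c in target_coords:
        rst[r, c] = 0                 # target pixel working (RST low)
    return rst

rst = build_rst_levels((4, 4), [(1, 1), (2, 3)])
print(int(rst.sum()))  # 14 pixels held off, 2 gated into the working state
```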
The ranging result is obtained by processing the received reflected dot matrix optical signal. For example, the ranging result may be the real distance to the target area, or a depth map corresponding to the target area. Specifically, the real distance between the terminal device and the target area may be calculated according to relation (1):

$$d = \frac{c \, \Delta\varphi}{4 \pi f} \qquad (1)$$

where $d$ represents the actual distance between the terminal device and the target area, $c$ represents the speed of light, i.e. 299792458 m/s, $\Delta\varphi$ represents the phase difference between the dot matrix optical signal and the reflected dot matrix optical signal, and $f$ represents the frequency of the modulated dot matrix optical signal.
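As an illustration of relation (1), the phase-to-distance conversion can be sketched as follows (the function name and the sample values are illustrative only, not from the patent):

```python
import math

C = 299_792_458.0  # speed of light in m/s, as given in the text

def distance_from_phase(phase_diff_rad: float, mod_freq_hz: float) -> float:
    """Relation (1): d = c * delta_phi / (4 * pi * f).

    The extra factor of 2 in the denominator accounts for the round trip
    of the modulated dot matrix optical signal to the target and back.
    """
    return C * phase_diff_rad / (4.0 * math.pi * mod_freq_hz)

# A phase shift of pi radians at a 100 MHz modulation frequency maps to
# c / (4 * f), i.e. roughly 0.75 m.
d = distance_from_phase(math.pi, 100e6)
```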
Next, step S210 to step S230 in the present exemplary embodiment will be further explained.
In an exemplary embodiment, while the target photosensitive pixel in the working state receives the reflected dot matrix optical signal, the photosensitive pixels in the photosensitive pixel area other than the target photosensitive pixel may also be set to the off state.
The off state refers to the state of a photosensitive pixel that is not gated. For example, the RST signal of each photosensitive pixel may be controlled by the photosensitive pixel gating control circuit: when the RST signal is set to a high level, the FD node is connected directly to the high level of the circuit, the PD signal cannot be effectively transferred to the FD node, and the photosensitive pixel does not operate. In other words, the pixels in the photosensitive pixel area other than the target photosensitive pixel are set to the off state.
When ranging is performed, all photosensitive pixels are initialized in the pixel screening stage of each frame; that is, all photosensitive pixels in the photosensitive pixel area are gated into the working state. The target photosensitive pixels are then screened, their position coordinates in the photosensitive pixel area are recorded, and the pixel screening stage ends. In the ranging stage, all photosensitive pixels are again initialized into the working state; the target photosensitive pixels are then kept in the working state according to their recorded position coordinates, and the remaining photosensitive pixels in the photosensitive pixel area are set to the off state.
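The two per-frame initializations described above can be pictured as two gating masks over the pixel area; a schematic sketch (the function name and the boolean-mask representation are illustrative, not the patent's gating circuit):

```python
def gate_masks(shape, target_coords):
    """Build the per-stage gating masks described above.

    Pixel screening stage: every pixel is gated into the working state.
    Ranging stage: only the recorded target coordinates stay on; all
    other pixels are set to the off state.
    """
    rows, cols = shape
    screening_mask = [[True] * cols for _ in range(rows)]
    ranging_mask = [[(r, c) in target_coords for c in range(cols)]
                    for r in range(rows)]
    return screening_mask, ranging_mask

screening, ranging = gate_masks((2, 3), {(0, 1), (1, 2)})
```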
In an exemplary embodiment, since the dot matrix optical signal is reflected at the target area, the light spot reflected onto the photosensitive pixel area may drift due to the parallax effect; therefore, in actual measurement, the target photosensitive pixel receiving the valid signal may differ from frame to frame. The positional change of a light spot projected onto the photosensitive pixel area caused by parallax occurs mainly along the baseline direction, which is the direction of the line connecting the centers of the VCSEL (vertical-cavity surface-emitting laser) and the photosensitive pixel area; the change of the same light spot on the photosensitive pixel area is therefore confined to a certain range.
Based on the above analysis, in order to screen the target photosensitive pixels in the photosensitive pixel area more quickly, the variation range of the projected light spot corresponding to each dot matrix optical signal may be framed within the photosensitive pixel area; that is, the photosensitive pixel area is grouped before screening. As shown in fig. 3, the photosensitive pixel area may be grouped by the following steps:
step S310, acquiring light source attribute data corresponding to the dot matrix optical signal;
step S320, dividing the photosensitive pixel area according to the light source attribute data to obtain an addressing area corresponding to each light spot in the reflection dot matrix optical signal.
The light source attribute data refers to the relevant parameters relating the dot matrix optical signal to the photosensitive pixel area that receives the reflected dot matrix optical signal. For example, the light source attribute data may be the focal length corresponding to a point light source in the point light source array, the baseline length between the point light source array and the photosensitive pixel area, or the size of the photosensitive pixel area; of course, other relevant parameters relating the dot matrix optical signal to the receiving photosensitive pixel area may also be used, which is not limited in this example embodiment.
The addressing area refers to the region of the photosensitive pixel area onto which a light spot of the dot matrix optical signal may be projected. For example, the photosensitive pixel area may be divided according to the diameter of the light spot; the photosensitive pixels within 2 to 10 times the spot diameter may be used as the addressing area of one light spot. Of course, the specific division of the photosensitive pixel area depends on the light source attribute data of the ranging system, such as its optical and hardware parameters, which is not particularly limited in this example.
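A sketch of one possible grouping along the baseline direction, using the 2x-10x spot-diameter rule of thumb mentioned above (the function name, parameters, and 1-D sensor model are illustrative assumptions):

```python
def addressing_windows(spot_centers, spot_diameter_px, scale, sensor_width):
    """Assign each expected spot a 1-D addressing window along the baseline.

    scale bounds the parallax-induced drift as a multiple of the spot
    diameter (the text suggests somewhere between 2x and 10x).
    """
    half = scale * spot_diameter_px // 2
    windows = []
    for cx in spot_centers:
        lo = max(0, cx - half)                 # clamp at the sensor edge
        hi = min(sensor_width - 1, cx + half)  # clamp at the sensor edge
        windows.append((lo, hi))
    return windows

windows = addressing_windows([50, 150], spot_diameter_px=4,
                             scale=5, sensor_width=200)
```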
In an exemplary embodiment, after the photosensitive pixel area is divided into the addressing areas corresponding to the light spots, the target photosensitive pixels may be screened through the steps in fig. 4, which specifically include:
step S410, obtaining light intensity information generated when each photosensitive pixel in the addressing area receives the reflection dot matrix optical signal;
step S420, screening target photosensitive pixels in the addressed area according to the light intensity information.
In actual measurement, due to the influence of ambient light, not all of the light received by the photosensitive pixels in an addressing area comes from the reflected dot matrix optical signal. However, because the power of the dot matrix optical signal is relatively strong, the measured light intensity of the reflected dot matrix optical signal is generally higher than that of the ambient light or noise; therefore, the target photosensitive pixels receiving valid information can be screened by their light intensity information.
Specifically, in the pixel screening stage, the photosensitive pixel receiving the highest light intensity in the addressing area may be used directly as the center of the light spot, that is, as the target photosensitive pixel. In actual measurement, however, when the measurement distance is long or the background noise and ambient light are strong, the photosensitive pixel with the highest light intensity in an addressing area may not correspond to the real spot position. To ensure the accuracy of the screened target photosensitive pixels and avoid filtering out a photosensitive pixel that actually receives the valid signal, every photosensitive pixel whose received light intensity in the addressing area is greater than or equal to a light intensity threshold may be used as a target photosensitive pixel.
The light intensity threshold is a preset threshold for screening the target photosensitive pixels; for example, the light intensity threshold may be 1 candela or 2 candela. The specific light intensity threshold may be set as needed according to the measurement system and measurement environment used in actual measurement, which is not particularly limited in this example embodiment.
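The threshold rule of step S420 reduces to a simple filter; a minimal sketch (the intensity list and threshold value are made-up sample data):

```python
def screen_by_threshold(intensities, light_intensity_threshold):
    """Return the indices of every pixel in the addressing window whose
    measured light intensity meets or exceeds the preset threshold."""
    return [i for i, v in enumerate(intensities)
            if v >= light_intensity_threshold]

targets = screen_by_threshold([0.1, 2.5, 0.3, 3.0], 2.0)  # pixels 1 and 3
```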
Fig. 5 schematically illustrates a flow chart for screening target photosensitive pixels according to light intensity information in an exemplary embodiment of the present disclosure.
Referring to fig. 5, in step S501, when each frame starts to be measured, all photosensitive pixels in the photosensitive pixel region are initialized in the pixel screening stage;
step S502, the point light source array VCSEL emits the modulated dot matrix optical signal;
step S503, all the photosensitive pixels are in the working state and output light intensity information;
step S504, in the addressing area corresponding to each light spot, one or more target photosensitive pixels are screened according to the light intensity information;
step S505, in the ranging stage, initializing all photosensitive pixels in the photosensitive pixel area;
step S506, the point light source array VCSEL emits the modulated dot matrix optical signal;
step S507, giving a gating signal of the target photosensitive pixel determined in the pixel screening stage, and starting ranging;
step S508, outputting a ranging result by the target photosensitive pixel in the working state, and performing data processing;
step S509, after data processing, each light spot corresponds to one position and one depth value; the ranging result is output, and the measurement of one frame ends.
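Steps S501-S509 can be condensed into a per-frame sketch: screen a peak-intensity pixel in each addressing window, then compute depth only at that pixel. All names, the 1-D sensor model, and the sample numbers are illustrative assumptions, not the patent's implementation:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def measure_frame(windows, intensities, phases, freq_hz, threshold):
    """One frame on a 1-D sensor: pick the peak-intensity pixel in each
    addressing window, reject windows below the intensity threshold, and
    convert the phase at the surviving pixel to a depth via relation (1)."""
    results = []
    for lo, hi in windows:
        region = intensities[lo:hi + 1]
        peak = lo + region.index(max(region))
        if intensities[peak] < threshold:
            results.append(None)  # no valid target pixel in this window
            continue
        depth = C * phases[peak] / (4.0 * math.pi * freq_hz)
        results.append((peak, depth))
    return results
```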
Further, due to the various interference factors present in actual measurement, in order to further ensure the accuracy of the screened target photosensitive pixels and the robustness of the screening result, the target photosensitive pixels may also be screened through the steps in fig. 6, which specifically include:
step S610, calculating, for each addressing area, a confidence score from the maximum light intensity information and all the light intensity information in the addressing area;
step S620, determining a target addressing area with the confidence score greater than or equal to a confidence threshold, and taking a photosensitive pixel corresponding to the maximum light intensity information in the target addressing area as a target photosensitive pixel.
The maximum light intensity information is the largest of all the light intensity values measured in the addressing area, and the confidence score is a score measuring the reliability of the screened target photosensitive pixel; the confidence score may be the ratio of the maximum light intensity in the addressing area to the sum of all the light intensities in the addressing area.
The confidence threshold is a value used to determine whether the confidence score of the screened target photosensitive pixel meets the standard. For example, the confidence threshold may be 0.5; if the ratio of the maximum light intensity in the addressing area to the sum of all the light intensities, that is, the confidence score, is 0.6, a correct target photosensitive pixel can be considered to exist in the addressing area, so the addressing area is used as a target addressing area and the photosensitive pixel corresponding to the maximum light intensity is used as the target photosensitive pixel. If the confidence score is 0.4 and thus below the confidence threshold, the return light spot of the addressing area can be considered too weak to continue ranging, or the addressing area contains no target photosensitive pixel, and the addressing area cannot output a ranging result or depth value. Of course, the confidence threshold may also be 0.4, and may be set as needed according to the ranging system parameters and the ranging environment, which is not particularly limited in this example embodiment.
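The confidence rule above can be sketched as follows (the function signature and the return convention for a rejected window are illustrative):

```python
def screen_by_confidence(window_intensities, confidence_threshold=0.5):
    """Confidence score = max intensity / sum of intensities in the window.

    Returns the index of the target photosensitive pixel, or None when the
    score falls below the threshold (spot too weak, no depth output)."""
    total = sum(window_intensities)
    if total == 0:
        return None
    peak = max(window_intensities)
    if peak / total < confidence_threshold:
        return None
    return window_intensities.index(peak)

ok = screen_by_confidence([1.0, 6.0, 1.0, 2.0])    # score 0.6 -> pixel 1
weak = screen_by_confidence([2.0, 4.0, 2.0, 2.0])  # score 0.4 -> rejected
```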
Fig. 7 schematically illustrates a flowchart for screening target photosensitive pixels according to a confidence score in an exemplary embodiment of the present disclosure.
Referring to fig. 7, in step S701, when each frame starts to be measured, all photosensitive pixels in the photosensitive pixel region are initialized in the pixel screening stage;
step S702, the point light source array VCSEL emits the modulated dot matrix optical signal;
step S703, all the photosensitive pixels are in working state, outputting light intensity information;
step S704, the addressing area corresponding to each light spot is judged independently: it is determined whether the confidence score (the ratio of the maximum light intensity information in the addressing area to the sum of all the light intensity information in the addressing area, i.e., the signal-to-noise ratio) is greater than or equal to the confidence threshold; if so, step S705 is executed, otherwise the process ends for that addressing area;
step S705, taking the photosensitive pixel corresponding to the maximum light intensity information in the addressing area as the target photosensitive pixel;
step S706, in the ranging stage, initializing all photosensitive pixels in the photosensitive pixel area;
step S707, the point light source array VCSEL emits the modulated dot matrix optical signal;
step S708, giving the gating signals of the target photosensitive pixels determined in the pixel screening stage, and starting ranging;
step S709, outputting the ranging result by the target photosensitive pixels in the working state, and performing data processing;
step S710, after data processing, each light spot corresponds to one position and one depth value; the ranging result is output, and the measurement of one frame ends.
In summary, in the pixel screening stage, the dot matrix optical signal is emitted to the target area so that it is reflected there to generate the reflected dot matrix optical signal; the reflected dot matrix optical signal is received, and the target photosensitive pixels are screened in the photosensitive pixel area through the reflected dot matrix optical signal. In the ranging stage, the reflected dot matrix optical signal is received only by the target photosensitive pixels in the working state, and the ranging result is determined according to the phase difference between the dot matrix optical signal and the reflected dot matrix optical signal. On one hand, before each frame of measurement, the target photosensitive pixels receiving valid signals are screened in the pixel screening stage, so that a ranging result with higher precision and accuracy can be obtained by enabling only the target photosensitive pixels in the ranging stage; this avoids the unnecessary power consumption of enabling photosensitive pixels that cannot receive valid signals, and thus reduces the power consumption of the ranging system. On the other hand, the ranging system only needs to process the reflected dot matrix optical signals detected by the target photosensitive pixels, which effectively reduces the amount of calculation, improves the ranging efficiency, and increases the frame rate of the ranging result.
It is noted that the above-mentioned figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Further, referring to fig. 8, a distance measuring apparatus 800 is further provided in the present exemplary embodiment, and includes a transmitting module 810 and a receiving module 820. Wherein:
the emitting module 810 may be configured to emit a dot matrix optical signal to a target area during a pixel screening phase and a ranging phase, so that the dot matrix optical signal is reflected at the target area and a reflected dot matrix optical signal is generated;
the receiving module 820 is electrically connected to the transmitting module 810, and may be configured to receive the reflection dot matrix optical signal in a pixel screening stage, and screen a target photosensitive pixel in a photosensitive pixel area through the reflection dot matrix optical signal; or in the distance measurement stage, the target photosensitive pixel in the working state receives the reflection dot matrix optical signal, and the distance measurement result is determined according to the phase difference between the dot matrix optical signal and the reflection dot matrix optical signal.
In an exemplary embodiment, the emitting module 810 may further include a point light source array and a diffractive optical element (DOE), where a DOE is an optical element capable of precisely controlling the light intensity distribution while maintaining high diffraction efficiency. Specifically:
the optical diffraction element can be arranged on an optical path between the point light source array and the target area and is used for copying a dot matrix optical signal emitted by the point light source array to obtain a dot matrix optical signal of which the dot light source density is greater than or equal to a density threshold value.
The density threshold is a preset threshold for determining whether the number of light spots in the dot matrix optical signal meets the requirement. For example, the density threshold may be 3000; if the number of point light sources in the point light source array is 1000, the replication multiple of the optical diffraction element must be at least 3, that is, a light beam generated by one point light source produces three light beams after passing through an optical diffraction element with a replication multiple of 3. Of course, the density threshold may also be another value, set as needed according to the depth map resolution required in actual measurement, which is not particularly limited in this example embodiment.
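The arithmetic in the example above (1000 point sources and a density threshold of 3000 require at least 3x replication) generalizes to a one-line helper (a hypothetical function, not part of the patent):

```python
import math

def doe_replication_factor(num_point_sources, density_threshold):
    """Minimum integer copy multiple of the diffractive optical element so
    that the replicated dot matrix meets the density threshold."""
    return math.ceil(density_threshold / num_point_sources)

factor = doe_replication_factor(1000, 3000)
```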
The dot matrix optical signals generated by the dot light source array are copied through the optical diffraction element, so that the low power can be ensured, the resolution of a depth map obtained by ranging is effectively increased, and the accuracy of a ranging result is improved.
In an exemplary embodiment, the receiving module 820 may further be configured to:
and setting photosensitive pixels except the target photosensitive pixel in the photosensitive pixel area to be in an off state.
In an exemplary embodiment, the receiving module 820 may further be configured to:
acquiring light source attribute data corresponding to the dot matrix optical signal;
and dividing the photosensitive pixel area according to the light source attribute data to obtain an addressing area corresponding to each light spot in the reflection dot matrix optical signal.
In an exemplary embodiment, the receiving module 820 may further be configured to:
acquiring light intensity information generated when each photosensitive pixel in the addressing area receives the reflection dot matrix optical signal;
and screening target photosensitive pixels in the addressing area according to the light intensity information.
In an exemplary embodiment, the receiving module 820 may further be configured to:
and taking the photosensitive pixel with the light intensity information received in the addressing area larger than or equal to a light intensity threshold value as a target photosensitive pixel.
In an exemplary embodiment, the receiving module 820 may further be configured to:
calculating confidence scores of the maximum light intensity information and all the light intensity information in each addressing area;
and determining a target addressing area with the confidence score larger than or equal to a confidence threshold value, and taking a photosensitive pixel corresponding to the maximum light intensity information in the target addressing area as a target photosensitive pixel.
The specific details of each module in the above apparatus have been described in detail in the method section, and details that are not disclosed may refer to the method section, and thus are not described again.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method, or program product. Accordingly, various aspects of the present disclosure may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," "module," or "system."
Exemplary embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, various aspects of the disclosure may also be implemented in the form of a program product including program code for causing a terminal device to perform the steps according to various exemplary embodiments of the disclosure described in the above-mentioned "exemplary methods" section of this specification, when the program product is run on the terminal device, for example, any one or more of the steps in fig. 3 to 7 may be performed.
It should be noted that the computer readable media shown in the present disclosure may be computer readable signal media or computer readable storage media or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Furthermore, program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++, or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the terms of the appended claims.

Claims (10)

1. A method of ranging, comprising:
in a pixel screening stage, emitting a dot matrix optical signal to a target area so that the dot matrix optical signal is reflected at the target area and a reflected dot matrix optical signal is generated;
receiving the reflection dot matrix optical signal, and screening a target photosensitive pixel in a photosensitive pixel area through the reflection dot matrix optical signal;
and in the distance measurement stage, the target photosensitive pixel in a working state receives the reflection dot matrix optical signal, and a distance measurement result is determined according to the phase difference between the dot matrix optical signal and the reflection dot matrix optical signal.
2. The method of claim 1, wherein the receiving the reflected dot matrix optical signal by the target photosensitive pixel in a working state further comprises:
and setting photosensitive pixels except the target photosensitive pixel in the photosensitive pixel area to be in an off state.
3. The method of claim 1, further comprising:
acquiring light source attribute data corresponding to the dot matrix optical signal;
and dividing the photosensitive pixel area according to the light source attribute data to obtain an addressing area corresponding to each light spot in the reflection dot matrix optical signal.
4. The method of claim 3, wherein the screening of the target photosensitive pixel in the photosensitive pixel area by the reflected dot matrix light signal comprises:
acquiring light intensity information generated when each photosensitive pixel in the addressing area receives the reflection dot matrix optical signal;
and screening target photosensitive pixels in the addressing area according to the light intensity information.
5. The method of claim 4, wherein the screening target photosensitive pixels in the addressed area according to the light intensity information comprises:
and taking the photosensitive pixel with the light intensity information received in the addressing area larger than or equal to a light intensity threshold value as a target photosensitive pixel.
6. The method of claim 4, wherein the screening target photosensitive pixels in the addressed area according to the light intensity information comprises:
calculating confidence scores of the maximum light intensity information and all the light intensity information in each addressing area;
and determining a target addressing area with the confidence score larger than or equal to a confidence threshold value, and taking a photosensitive pixel corresponding to the maximum light intensity information in the target addressing area as a target photosensitive pixel.
7. A ranging apparatus, comprising:
the device comprises an emitting module, a pixel screening module and a ranging module, wherein the emitting module is used for emitting a dot matrix optical signal to a target area in a pixel screening stage and a ranging stage so that the dot matrix optical signal is reflected at the target area and a reflected dot matrix optical signal is generated;
the receiving module is electrically connected with the transmitting module and used for receiving the reflection dot matrix optical signals in a pixel screening stage and screening target photosensitive pixels in a photosensitive pixel area through the reflection dot matrix optical signals; or in the distance measurement stage, the target photosensitive pixel in the working state receives the reflection dot matrix optical signal, and the distance measurement result is determined according to the phase difference between the dot matrix optical signal and the reflection dot matrix optical signal.
8. The apparatus of claim 7, wherein the emission module comprises an array of point light sources and an optical diffraction element;
the optical diffraction element is arranged on a light path between the point light source array and the target area and is used for copying the dot matrix optical signals emitted by the point light source array to obtain the dot matrix optical signals of which the dot light source density is greater than or equal to a density threshold value.
9. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 6.
10. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of claims 1 to 6 via execution of the executable instructions.
CN202110267995.8A 2021-03-11 2021-03-11 Distance measurement method and device, computer readable medium and electronic equipment Pending CN112987022A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110267995.8A CN112987022A (en) 2021-03-11 2021-03-11 Distance measurement method and device, computer readable medium and electronic equipment


Publications (1)

Publication Number Publication Date
CN112987022A true CN112987022A (en) 2021-06-18

Family

ID=76334593

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110267995.8A Pending CN112987022A (en) 2021-03-11 2021-03-11 Distance measurement method and device, computer readable medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN112987022A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111751833A (en) * 2020-06-24 2020-10-09 深圳市汇顶科技股份有限公司 Method and device for obtaining polishing and reflected light data

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109031332A (en) * 2018-08-07 2018-12-18 上海炬佑智能科技有限公司 Flight time distance measuring sensor and its control method
CN110378945A (en) * 2019-07-11 2019-10-25 Oppo广东移动通信有限公司 Depth map processing method, device and electronic equipment
CN110400338A (en) * 2019-07-11 2019-11-01 Oppo广东移动通信有限公司 Depth map processing method, device and electronic equipment
CN110609293A (en) * 2019-09-19 2019-12-24 深圳奥锐达科技有限公司 Distance detection system and method based on flight time
CN111045030A (en) * 2019-12-18 2020-04-21 深圳奥比中光科技有限公司 Depth measuring device and method
US20200158876A1 (en) * 2018-11-21 2020-05-21 Zoox, Inc. Intensity and Depth Measurements in Time-of-Flight Sensors

Similar Documents

Publication Publication Date Title
EP3700190A1 (en) Electronic device for providing shooting mode based on virtual character and operation method thereof
US9646410B2 (en) Mixed three dimensional scene reconstruction from plural surface models
US20220222789A1 (en) Electronic device applying bokeh effect to image and controlling method thereof
CN112505713A (en) Distance measuring device and method, computer readable medium, and electronic apparatus
KR102472156B1 (en) Electronic Device and the Method for Generating Depth Information thereof
CN112596069A (en) Distance measuring method and system, computer readable medium and electronic device
KR102552923B1 (en) Electronic device for acquiring depth information using at least one of cameras or depth sensor
US10140722B2 (en) Distance measurement apparatus, distance measurement method, and non-transitory computer-readable storage medium
KR102524982B1 (en) Apparatus and method for applying noise pattern to image processed bokeh
US20220128659A1 (en) Electronic device including sensor and method of operation therefor
CN113344839B (en) Depth image acquisition device, fusion method and terminal equipment
CN112987022A (en) Distance measurement method and device, computer readable medium and electronic equipment
CN112433382B (en) Speckle projection device and method, electronic equipment and distance measurement system
CN111316059A (en) Method and apparatus for determining size of object using proximity device
CN115702443A (en) Applying stored digital makeup enhancements to recognized faces in digital images
US20200363902A1 (en) Electronic device and method for acquiring biometric information using light of display
KR20190035358A (en) An electronic device controlling a camera based on an external light and control method
US20220268935A1 (en) Electronic device comprising camera and method thereof
US10735665B2 (en) Method and system for head mounted display infrared emitter brightness optimization based on image saturation
KR20200117460A (en) Electronic device and method for controlling heat generation thereof
US11283970B2 (en) Image processing method, image processing apparatus, electronic device, and computer readable storage medium
KR20220151932A (en) Electronic device and operation method thereof
EP3951426A1 (en) Electronic device and method for compensating for depth error according to modulation frequency
CN105323460A (en) Image processing device and control method thereof
KR20200069096A (en) Electronic device and method for acquiring depth information of object by using the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination