WO2022083301A1 - 3D image sensor ranging system and method for ranging using the same - Google Patents

3D image sensor ranging system and method for ranging using the same

Info

Publication number
WO2022083301A1
Authority
WO
WIPO (PCT)
Prior art keywords
light
time
photosensitive
preset
image sensor
Prior art date
Application number
PCT/CN2021/115878
Other languages
English (en)
French (fr)
Inventor
陈如新
杜德涛
Original Assignee
睿镞科技(北京)有限责任公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 睿镞科技(北京)有限责任公司
Publication of WO2022083301A1 (zh)
Priority to US18/304,845, published as US20230273321A1 (en)

Classifications

    • G01S17/894: 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G01S17/89: Lidar systems specially adapted for mapping or imaging
    • G01S17/10: Systems determining position data of a target, for measuring distance only, using transmission of interrupted, pulse-modulated waves
    • G01S17/14: Systems as in G01S17/10 wherein a voltage or current pulse is initiated and terminated in accordance with the pulse transmission and echo reception respectively, e.g. using counters
    • G01S7/4815: Constructional features, e.g. arrangements of optical elements, of transmitters alone, using multiple transmitters
    • G01S7/4816: Constructional features, e.g. arrangements of optical elements, of receivers alone
    • G01S7/4817: Constructional features, e.g. arrangements of optical elements, relating to scanning
    • G01S7/484: Details of pulse systems: transmitters
    • G01S7/4865: Time delay measurement, e.g. time-of-flight measurement, time of arrival measurement or determining the exact position of a peak
    • H04N13/254: Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
    • H04N25/77: Pixel circuitry, e.g. memories, A/D converters, pixel amplifiers, shared circuits or shared components

Definitions

  • the embodiments of the present application relate to the field of lidar ranging, and in particular, to a 3D image sensor ranging system and a method for using the system for ranging.
  • LiDAR systems are playing an increasingly important role in environmental recognition.
  • the laser beam can be used in particular for scanning the surroundings and enabling distance measurements to be made of objects in the surroundings.
  • Lidar systems typically include at least one light source for emitting light towards objects in the surrounding environment and a receiver for receiving light reflected by the object.
  • the lidar system can determine the distance of an object from the lidar system based on the time difference between the light emitted by the light source and the light received by the receiver (i.e., the time of flight of the light).
  • With the wider application of lidar systems, there is a growing expectation for lidar systems with smaller size, longer range and higher efficiency.
  • Improving efficiency, reducing volume, and effectively avoiding mutual interference between the emitted light and the reflected light are among the difficult problems that urgently need to be solved.
  • a 3D image sensor ranging system may include at least one array of light-emitting cells, at least one array of light-sensing cells, and at least one computing component.
  • Each light-emitting unit array may include at least one light-emitting unit for emitting light to at least one target scene.
  • Each photosensitive unit array may include at least one photosensitive unit for receiving at least a portion of the light emitted by the light-emitting unit array and reflected by the target scene, and generating a sensing vector according to the received light.
  • the calculation component calculates at least one of the distance between the light-emitting unit array and the target scene and the light intensity of the reflected light according to the sensing vector generated by the photosensitive unit array.
  • the divergence angle of the light emitted by the light-emitting unit fluctuates with time, wherein the maximum value of the divergence angle is greater than the first spatial resolution threshold.
  • the first spatial resolution threshold may be greater than twice the spatial resolution of the 3D image sensor ranging system.
  • the 3D image sensor ranging system may further include a scanning unit for controlling the light-emitting unit array to perform irradiation scanning in a spatial angle range corresponding to at least part of the target scene.
  • at least a part of the light-emitting unit array includes a light-emitting scanning control component for controlling the illumination scanning performed by the light-emitting unit array in a spatial angle range corresponding to at least part of the target scene.
  • for at least a first preset proportion of scans, the random error between the actual scanning space angle of the light emitted by the light-emitting unit array and the preset scanning space angle is greater than the first spatial resolution threshold.
  • the sensing vector may include at least one of a distance between the light emitting unit and the target scene, an intensity of the reflected light, a phase of the reflected light, and a spectrum of the reflected light.
  • the calculation component is configured to: obtain the emission time t0 of the light; obtain the arrival time t1 at which the single photon or single light pulse in the sensing vector reaches the photosensitive unit; determine the distance between the light-emitting unit and the target scene based on the obtained t0 and t1; and determine the number of photosensitive electrons in the sensing vector, or the voltage reading of the collection capacitor in the photosensitive unit, as the light intensity of the reflected light.
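  • As a rough illustration only (the patent text does not prescribe an implementation, and all names below are invented for the sketch), the direct time-of-flight arithmetic just described could look as follows:

```python
# Hedged sketch of the direct time-of-flight calculation described above.
C = 299_792_458.0  # speed of light, m/s

def tof_distance(t0: float, t1: float) -> float:
    """Distance from emission time t0 and arrival time t1 (both in seconds).
    The factor 1/2 accounts for the round trip to the target and back."""
    return (t1 - t0) * C / 2.0

def reflected_intensity(photosensitive_electrons: int) -> float:
    """Per the text, the electron count (or a collection-capacitor voltage
    reading) is taken directly as the relative reflected-light intensity."""
    return float(photosensitive_electrons)

# Example: an echo arriving 400 ns after emission corresponds to ~60 m.
print(tof_distance(0.0, 400e-9))  # ~59.96
```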
  • each photosensitive unit includes a first capacitor C1 and a second capacitor C2, and the computing component is configured to: obtain the emission time t0 of the light; obtain the voltage reading of the first capacitor C1 and the voltage reading of the second capacitor C2; determine the arrival time t1 of the light reaching the photosensitive unit according to the voltage readings; calculate the distance between the light-emitting unit and the target scene based on the obtained t0 and t1; and determine the sum of the voltage readings of the first capacitor C1 and the second capacitor C2 as the light intensity of the reflected light.
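  • The text does not spell out how t1 is recovered from the two voltage readings. A common two-tap demodulation scheme, offered here only as one plausible reading and not as the patent's method, splits the returning pulse between C1 and C2 and infers the arrival time from the split ratio:

```python
# Hedged two-tap sketch: the demodulation scheme below is an assumption.
C = 299_792_458.0  # speed of light, m/s

def two_tap_ranging(t0: float, T0: float, v_c1: float, v_c2: float):
    """Assume C1 integrates the part of the echo inside the emission window
    [t0, t0 + T0] and C2 the part after it, so the fraction landing in C2
    encodes the pulse delay within the window."""
    total = v_c1 + v_c2
    if total <= 0.0:
        return None, 0.0                  # no usable echo
    t1 = t0 + (v_c2 / total) * T0         # arrival time from the split ratio
    distance = (t1 - t0) * C / 2.0
    intensity = total                     # per the text: sum of both readings
    return distance, intensity

# Example: a 100 ns pulse whose charge splits 1:3 between C1 and C2.
print(two_tap_ranging(0.0, 100e-9, 0.25, 0.75))  # (~11.24, 1.0)
```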
  • the computing component is configured to: obtain the light emission time t0 and the preset emitted light pulse width T0; obtain the earliest time t_1 at which an electron group of 2 electrons arrives at the same photosensitive unit of the photosensitive unit array within the preset first time interval threshold T_1, the second electron of the group arriving at/appearing in the same photosensitive unit at t_1+Δt1; and, at the same time, obtain the electron groups of 2 electrons that satisfy the same interval condition for the other preset interval thresholds and reach the same photosensitive unit.
  • The system then calculates, according to a predetermined rule and the above {n_1,...,n_m} and {T_1,...,T_m}, the maximum electron group number n_best corresponding to the emitted light pulse and the time corresponding to n_best.
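  • One plausible reading of this grouping rule, sketched below with invented names and offered only as an illustration: scan the photon-arrival timestamps of one photosensitive unit, count how many 2-electron groups fall within each interval threshold T_k, and keep the threshold whose group count is largest.

```python
from bisect import bisect_right

def count_electron_groups(arrivals, interval):
    """Count pairs of photosensitive electrons whose spacing is within
    `interval` (one reading of the 'same interval condition' in the text)."""
    arrivals = sorted(arrivals)
    groups = 0
    for i, t in enumerate(arrivals):
        # electrons arriving in (t, t + interval] pair with this one
        groups += bisect_right(arrivals, t + interval) - (i + 1)
    return groups

def best_group_count(arrivals, thresholds):
    """Return (n_best, T_best): the largest group count over the preset
    interval thresholds {T_1, ..., T_m}."""
    counts = {T: count_electron_groups(arrivals, T) for T in thresholds}
    T_best = max(counts, key=counts.get)
    return counts[T_best], T_best

# Example: a burst of closely spaced electrons near 500 ns plus stray hits.
ts = [120e-9, 500e-9, 501e-9, 502e-9, 900e-9]
print(best_group_count(ts, [1e-9, 2e-9, 5e-9]))  # e.g. (3, 2e-09)
```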
  • the computing component is configured to: during execution of the scan according to the predetermined rule, determine whether the current scan point should emit probe light based on the past sensing vectors obtained before the current scan point, wherein within the second preset time range there are at least a second preset non-emission proportion of times at which the detection light is not emitted.
  • when the computing component determines that at least two light-emitting units scan the target scene successively with strong light and weak light, if the weak-light scan has already obtained the distance between the light-emitting unit array and the target scene by measurement, it is determined that the detection light is not emitted for the former scanning point; or, when the computing component determines that the distance of the current light intensity detection is smaller or larger than the predetermined value, it is determined that the detection light is not emitted for the current scanning point; or, when the computing component determines that the currently scanned target area is an unimportant, unattended area, it is determined that the current light emission should be skipped according to the second preset non-emission ratio; or, when the computing component determines that the divergence angle of an earlier scan within the second preset time range has already covered most of the current pixels, it is determined that the detection light is not emitted for the current scanning point.
  • the second preset non-emission ratio may be, for example, 1%, 5%, 20%, 30% or 80%.
  • the computing component is configured to: determine, for the current scan point, a sensing vector resulting from at least one past, temporally recent measurement; determine at least one other past, spatially recent measurement; and, according to the determined sensing vector and the determined measurement, decide whether the probe light should be emitted for the current scanning point. The computing component is configured to perform the following processing to obtain the sensing vector: 1) obtain the first sensing vector of the past period closest in time to the current scanning point; 2) obtain the second sensing vector of the current period closest in distance to the current scanning point; 3) according to the first sensing vector and the second sensing vector, predict the scanning characteristics of the current scanning point, which may include at least one of the emission intensity, emission frequency, emission area, pulse distinguishability characteristic, attention degree, and scanning area of the current scanning point; and 4) according to the determined scanning characteristics, determine whether the light-emitting unit should be allowed to perform the operation of emitting probe light.
  • each photosensitive unit is configured to: determine whether the number or amplitude of photosensitive electrons in the received light pulse is less than a predetermined electron number threshold and signal amplitude threshold, respectively, and if so, discard the information included in the light, wherein the electron number threshold and the signal amplitude threshold, starting from their preset values at the moment of light emission, gradually decrease over time according to a preset law.
  • the light beams that can be simultaneously emitted by at least two light-emitting units in the light-emitting unit array at least partially overlap in spatial angle, and the wavelength ranges included in the light beams are at least partially different.
  • the light emitted by the light-emitting unit may also include at least two scanning beams with different divergence angles.
  • the computing part is further configured to obtain at least one sub-region of interest in the target scene using the sensing vectors measured in the second preset time range in the past; and to issue an instruction so that, in the third preset time range, the scanning density of the sub-regions of interest exceeds that of other regions by more than a first multiple threshold, and/or the scanning frequency is greater or less than a second multiple threshold, and/or the average light energy per unit time is greater or less than a third multiple threshold relative to the other regions.
  • at least one of the sub-regions of interest may be determined through embedded calculation and/or a preset rule in the photosensitive unit, wherein the photosensitive unit outputs sensing vectors for a number of sub-pixels smaller than a second preset ratio of the image sensor's sub-pixels.
  • Another aspect of the present application also provides a method for measuring distance using a 3D image sensor ranging system, including: emitting light to at least one target scene through a light-emitting unit included in at least one light-emitting unit array; receiving, by a photosensitive unit, at least a part of the light emitted by the light-emitting unit and reflected by the target scene, and generating a sensing vector according to the received light; and calculating, according to the generated sensing vector, at least one of the distance between the light-emitting unit array and the target scene and the intensity of the reflected light.
  • Another aspect of the present application also provides an apparatus for optical ranging, comprising: at least one 3D image sensor ranging system according to any one of the above embodiments; and a semiconductor chip in which the at least one 3D image sensor ranging system is integrated.
  • Another aspect of the present application also provides a method for forming an optical ranging device, comprising: forming at least one 3D image sensor ranging system as described in any of the above embodiments, and integrating the at least one 3D image sensor ranging system in the same semiconductor chip.
  • FIG. 1 is an exemplary system architecture diagram of a 3D image sensor ranging system according to an embodiment of the present application
  • FIG. 2 is an exemplary system architecture diagram of a 3D image sensor ranging system according to another embodiment of the present application
  • FIG. 3 is a schematic diagram of overlapping light beams emitted by a light emitting unit according to another embodiment of the present application.
  • FIG. 4 is a flowchart for obtaining a sensing vector according to another embodiment of the present application.
  • FIG. 5 is a schematic diagram of a circuit structure of a photosensitive unit according to another embodiment of the present application.
  • FIG. 6 is a flowchart of a method for ranging using a 3D image sensor ranging system according to another embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a computer system of an electronic device suitable for implementing the 3D imaging method of the embodiment of the present application.
  • the thickness, size and shape of components may be slightly exaggerated for convenience of explanation.
  • the spherical or aspherical shapes shown in the figures are shown by way of example. That is, the shape of the spherical or aspherical surface is not limited to the shape of the spherical or aspherical surface shown in the drawings.
  • the drawings are examples only and are not drawn strictly to scale.
  • spatially relative terms such as "above," "upper," "below," and "lower" may be used herein to describe the relationship of one element to another element as shown in the figures. Such spatially relative terms are intended to encompass different orientations of the device in use or operation besides the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as "above" or "upper" relative to another element would then be oriented "below" or "lower" relative to that other element. Thus, depending on the spatial orientation of the device, the wording "above" includes both "above" and "below" orientations. The device may also be otherwise oriented (e.g., rotated 90 degrees or at other orientations), and the spatially relative phraseology used herein should be interpreted accordingly.
  • FIG. 1 shows a 3D image sensor ranging system 100 according to one embodiment of the present application.
  • the 3D image sensor ranging system 100 may include at least one light-emitting unit array 10 , at least one photosensitive unit array 20 and at least one computing component 30 .
  • At least one light-emitting unit array 10 may include at least one light-emitting unit for emitting light to at least one target scene.
  • Each photosensitive unit array 20 includes at least one photosensitive unit for receiving at least a portion of the light emitted by the light-emitting unit array and reflected by the target scene, and generating a sensing vector according to the received light.
  • Each calculation component 30 calculates at least one of the following according to the sensing vector generated by the photosensitive unit array 20: 1) the distance between the light-emitting unit and the target scene; and 2) the light intensity of the reflected light.
  • the light-emitting unit array 10 includes at least one light-emitting unit.
  • the light-emitting unit is configured to emit light pulses to the target scene according to a predetermined rule, so as to illuminate the target scene.
  • light pulses can be emitted to the target scene according to a preset rule.
  • the light emitting unit array 10 may emit light pulses with wavelengths in the range of, for example, 300nm-750nm, 700nm-1000nm, 900nm-1600nm, 1um-5um, or 3um-15um.
  • the pulse width may be, for example, 0.1ps-5ns, 1ns-100ns, 100ns-10us or 10us-10ms.
  • the parameters of the wavelength and pulse width of the light pulses emitted by the light-emitting unit array 10 are only examples; the present application is not limited thereto, and other wavelengths and pulse widths that do not deviate from the teachings of the present application are also allowed.
  • each light emitting unit may be a semiconductor laser, fiber laser, solid state laser, or the like.
  • the light pulses emitted by each light-emitting unit may be modulated linearly polarized light, circularly polarized light, elliptically polarized light, or unpolarized light.
  • the pulse repetition frequency of the optical pulses can be selected from the range of 1 Hz-100 Hz, 100 Hz-10 kHz, 10 kHz-1 MHz or 1 MHz-100 MHz.
  • the coherence length of the light pulse may be less than 100m, 10m, 1m, 1mm.
  • the light pulses emitted by each light-emitting unit are directed towards the target scene.
  • the target scene may include, for example, the subject 50 .
  • the maximum value of the fluctuation amplitude over time of the divergence angle of the light-emitting unit is greater than the first spatial resolution threshold.
  • the divergence angle of the light emitted by each light emitting unit toward the target scene 50 is greater than a first spatial resolution threshold, wherein the first spatial resolution threshold includes a horizontal first spatial resolution threshold and a vertical first spatial resolution threshold.
  • the horizontal first spatial resolution threshold may be 0.1°, 1°, 2°, 5°, 10°, or 0.01*system horizontal field of view (FOV), or 0.02*system horizontal FOV, or 0.1*system horizontal FOV.
  • the vertical first spatial resolution threshold may be 0.1°, 1°, 2°, 5°, 10°, or 0.01*system vertical FOV, or 0.02*system vertical FOV, or 0.1*system vertical FOV.
  • the 3D image sensor ranging system 100 may further include a light emitting scan control part 101 , and the light emitting scan control part 101 may be integrated with at least a part of the light emitting units of the light emitting unit array 10 .
  • the light-emitting scanning control component 101 is shown with a dotted line, indicating that the component 101 can be integrated into the light-emitting unit array 10 .
  • the scanning control component 101 can control the scanning to a spatial angle range corresponding to at least part of the target scene, that is, to control all the outgoing light rays of the light-emitting unit array 10 .
  • In a typical single-line beam scan, the spot is directed, according to a simple rule, to the center of each of the (1-1000)×(1-200) grid cells in turn.
  • The divergence angle of the lidar's emitted beam is conventionally optimized to be as small as possible, smaller than a single cell of the 200×1000 grid.
  • In such a scanning design the scanning lines are fixed, without taking into account whether the diverging light spot covers several grid cells.
  • In the present system, n×m grid cells can be effectively detected at the same time (for example, 3×3 grid cells).
  • The scan therefore does not need to advance to the next horizontal position by +1 cell each step, nor move vertically by +1 each line.
  • The angle trajectory of the single-beam scan is controlled to move randomly/fuzzily within a certain range.
  • The divergence angle of the beam is likewise varied randomly/fuzzily within a certain range, so as to ensure that the light spot completely covers all grid cells within a certain period of time.
  • The larger the divergence angle of the emitted light, the fewer scan positions are needed to cover the target scene; but the larger the divergence angle, the smaller the maximum distance that the detector (i.e., the photosensitive unit array 20) can detect.
  • One of the goals of this application is to use a light emission and scanning system of the lowest possible mass and cost, while achieving configurable optimal system resolution, distance range, and output point cloud rate.
  • the light-emitting scanning control component 101 is configured so that the light emitted by the light-emitting unit satisfies: within a first preset time range, for at least a first preset proportion of scans, the random error between the actual scanning space angle and the preset scanning space angle is greater than the first spatial resolution threshold, where the first spatial resolution threshold is greater than twice the spatial resolution of the system.
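  • To make the requirement concrete, here is a hedged sketch (all numbers are illustrative, not from the patent) of a scan-angle generator in which, for a preset proportion of scan points, the random deviation from the nominal grid angle exceeds the first spatial resolution threshold:

```python
import random

SYSTEM_RESOLUTION_DEG = 0.1
THRESHOLD_DEG = 2 * SYSTEM_RESOLUTION_DEG + 0.05  # exceeds 2x resolution
JITTER_RATIO = 0.5  # illustrative stand-in for the first preset ratio

def next_scan_angle(nominal_deg: float) -> float:
    """Actual emission angle for a nominal grid angle: with probability
    JITTER_RATIO the random error deliberately exceeds THRESHOLD_DEG,
    relaxing the precision demanded of the scanner, while the fluctuating
    divergence angle guarantees full coverage over time."""
    if random.random() < JITTER_RATIO:
        error = random.uniform(THRESHOLD_DEG, 4 * THRESHOLD_DEG)
    else:
        error = random.uniform(0.0, THRESHOLD_DEG)
    return nominal_deg + random.choice((-1.0, 1.0)) * error

print([round(next_scan_angle(10.0), 3) for _ in range(5)])
```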
  • Ordinary lidars have a designed spatial resolution, for example a horizontal resolution of 0.1 degrees. An ordinary mechanical scanning lidar scanning horizontally will then emit a laser pulse every 0.1 degrees in order to obtain a horizontal spatial resolution of 0.1 degrees. Vertical scanning is performed similarly to obtain the desired vertical resolution.
  • the previous scanning lidars generally operate according to this principle.
  • A Flash lidar is used much like an ordinary camera, except that it emits a laser flash that illuminates the entire field. Like an ordinary camera, it has a special image sensor with m×n pixels. For example, if a traditional camera has 1024×768 pixels and the viewing angle of the camera (determined by the camera's optical lens) is 100 degrees horizontally and 76 degrees vertically, then the horizontal resolution of this camera or Flash lidar is 100/1024 ≈ 0.1 degree, and the vertical resolution is 76/768 ≈ 0.1 degree.
  • the random error is beneficial to ensure full coverage of the scene by the emitted light, and at the same time greatly reduce the system cost.
  • Conventionally, the system aims to emit as fine and as consistently parallel a laser beam as possible, so as to obtain the best angular resolution and signal-to-noise ratio; but very precise control is difficult to achieve, especially with semiconductor-controlled (i.e., non-mechanically scanned) emission.
  • In the conventional view, the smaller the divergence angle the better, and the divergence angle is kept fixed/constant.
  • In the present application, the fixed control of the divergence angle is relaxed, and the divergence angle of the emitted light is allowed to fluctuate within a range, which is beneficial to manufacturing cost.
  • As an example of the fluctuation of the divergence angle: in a 0.1-degree-resolution system, the divergence angle can be 1 degree, 2 degrees or 3 degrees.
  • the optimization of the system setup is a balance between the power of the emitted light, the maximum distance measured, the photoelectric efficiency of the detection sensor, and the cost of manufacture.
  • the light emitted by the light emitting unit 10 may include at least two scanning beams with different divergence angles, as shown in FIG. 3 .
  • a laser beam with a larger cross-section can be emitted, so that the divergence angle of that sub-beam is small and the measurable distance is longer.
  • the photosensitive unit array 20 includes at least one photosensitive unit.
  • the photosensitive unit array is configured to receive at least part of the light reflected by the target scene, and to provide to the computing unit 30 sensing vectors containing at least a portion of the information of the reflected light, wherein each sensing vector may include at least one of the distance between the photosensitive unit and the target object, the intensity of the reflected light, the phase of the reflected light, and the spectrum of the reflected light.
  • the photosensitive unit may include a photosensor and a filter (not shown).
  • the photoelectric sensor generates photosensitive electrons in response to the reflected light received through the photoelectric effect.
  • the corresponding light intensity can be obtained by counting the photosensitive electrons, and the corresponding distance between the light-emitting unit and the target scene can be determined by multiplying the time interval between the generation of the photosensitive electrons and the emission of the light by the speed of light (halved to account for the round trip).
  • the filter can be set in front of the photoelectric sensor to obtain the light intensity of a specific band, and obtain the spectrum of a specific band by modulating and demodulating the light of the specific band.
  • the phase of the light modulated by the low frequency can be obtained by modulation and demodulation with the electrical signal of the same frequency.
  • the phase of the beam itself can be obtained from the times and spatial positions at which the photosensitive electrons are generated.
  • At least one photosensitive unit collects, with at least 2 capacitors, the electrons associated with the photosensitive electrons during exposure, and at the end of exposure calculates the sensing vector of the corresponding pixel from the measured values of the at least 2 capacitors.
  • the photoelectric converter in the photosensitive unit can convert the optical signal into an electrical signal. In this way, by processing the electrical signal, the image information of the point in the target scene can be restored.
  • the resulting reflected light can enter the photoelectric converter.
  • the photoelectric converter converts the optical signal into a corresponding electrical signal by photoelectrically converting the light reflected by the target scene during exposure.
  • the signal value of the electrical signal can be characterized by, for example, the number of photosensitive electrons (ie, the number of charges) obtained after photoelectric conversion of the optical signal.
  • the functional relationship between the optical signal and the electrical signal before and after photoelectric conversion is known.
  • the sensing vector of each pixel point corresponding to the target scene can be calculated, and then the image information of the point in the target scene can be restored.
  • the sensing vector of each pixel point may be, for example, a set of data including information such as the distance, light intensity, phase, and spectrum of the pixel point.
  • the signal value of the electrical signal obtained after photoelectric conversion can be characterized by the amount of electric charge obtained by photoelectric conversion.
  • at least 2 capacitors are used to collect the photosensitive electrons during exposure (i.e., the charges obtained after photoelectric conversion), wherein the at least 2 capacitors have different charge-discharge characteristics.
  • FIG. 5 exemplarily shows a schematic diagram of a circuit structure of a photosensitive unit according to an embodiment of the present application.
  • As shown in FIG. 5, the photosensitive unit may include two capacitors C1 and C2, a variable shunt, and an avalanche diode (APD) (or a single photon avalanche diode (SPAD), or a photodiode (PD)).
  • Each component is conventional, with known properties and functions, so they are not described individually; for the sake of clarity, labels such as the reset signal reset, the gating selection select, the control input Vcontrol of the variable shunt, and the output are retained as reference designations.
  • The measured values of the above two capacitors C1 and C2 are amplified and read out to calculate the sensing vector of the corresponding pixel (such as distance, light intensity, phase, spectrum, etc.).
  • The specific processing for obtaining the distance and light intensity from the measured values of the capacitors C1 and C2 is described further below with reference to the calculation unit 30; the amplification and readout themselves can be implemented with existing technology.
  • The period during which the light-emitting unit array 10 emits light once is called the exposure time.
  • During exposure, the photosensor in the photosensitive unit receives at least part of the light reflected by the target scene and converts it into photosensitive-electron information.
  • If the number of electrons or the signal amplitude in the photosensitive-electron information is smaller than the corresponding preset threshold in the photosensitive unit, no subsequent processing of that information is performed; the electron-number threshold and the signal-amplitude threshold, starting from their preset values at the moment of light emission, gradually decrease over time according to a preset law.
  • Each photosensitive unit in the photosensitive unit array 20 may also be configured to: determine whether the number or amplitude of photosensitive electrons in the received light is less than a predetermined electron number threshold and signal amplitude threshold, respectively, and if so, discard the information included in the light, wherein the electron number threshold and the signal amplitude threshold, starting from their preset values at the moment of light emission, gradually decrease over time according to a preset law.
  • The electron number threshold and signal amplitude threshold are made to decrease with time because later-arriving signals are weaker, while earlier-arriving signals, including stray light, are stronger; the gradually decreasing thresholds improve the anti-interference capability of the system, avoid unnecessary detection time, and better prepare the system for weak long-distance signals.
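  • As a hedged sketch of this gate (the patent only says the thresholds decrease "according to a preset law"; the exponential decay and all constants below are assumptions):

```python
import math

def thresholds_at(t_since_emit: float,
                  n0: float = 50.0,     # assumed preset electron-number threshold
                  a0: float = 1.0,      # assumed preset amplitude threshold
                  tau: float = 200e-9): # assumed decay constant, seconds
    """Thresholds decaying from their preset values after light emission."""
    decay = math.exp(-t_since_emit / tau)
    return n0 * decay, a0 * decay

def accept_pulse(t_since_emit: float, electrons: int, amplitude: float) -> bool:
    """Discard a received pulse whose electron count or amplitude falls
    below the current, time-decayed threshold, as described above."""
    n_thr, a_thr = thresholds_at(t_since_emit)
    return electrons >= n_thr and amplitude >= a_thr

# Early strong stray light is rejected; a late weak echo is accepted.
print(accept_pulse(20e-9, 30, 0.5))   # False: thresholds still high
print(accept_pulse(800e-9, 30, 0.5))  # True: thresholds have decayed
```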
  • the photosensitive unit includes at least one of an avalanche photodiode (APD), a photodiode (PD), or a single photon avalanche diode (SPAD) (silicon-based SiPM, or compound materials formed from group III-V elements such as InGaAs, etc.).
  • the calculation unit 30 calculates the distance between the target scene corresponding to each photosensitive unit (photosensitive pixel) and the light-emitting unit and the relative light intensity of the reflected light according to the above-mentioned sensing vector measured by the photosensitive unit array 20 .
  • the sensing vector may include at least one of the distance between the light-emitting unit and the target scene, the intensity of the reflected light, the phase of the reflected light, and the spectrum of the reflected light.
  • At least one photosensitive unit/pixel in the photosensitive unit array 20 uses at least 2 capacitors to collect the electrons related to the photosensitive electrons during exposure, and at the end of exposure uses the measured values of the at least 2 capacitors to calculate the sensing vector of the corresponding pixel.
  • For example: light intensity = (value of C1) + (value of C2).
  • the method used by the calculation unit 30 to calculate the distance and reflected light intensity of the corresponding target scene may include: determining the maximum electron group number n_max as the light intensity of the reflected light.
  • the calculation part 30 is further configured to: in the process of scanning the emitted light according to the predetermined rule, determine, based on the past sensing vectors, whether the current scanning point emits the detection light, and send the corresponding execution instruction to the scanning control part 101 (i.e., controlling whether a scanning beam is emitted to the target object), wherein within the second preset time range there are at least a second preset non-emission proportion of times at which no detection light is emitted.
  • The second preset non-emission ratio is 1%, 5%, 20%, 30% or 80%.
  • When the calculation component 30 determines that the at least two light-emitting units scan the target scene successively with strong light and weak light, if the weak-light scan has already obtained the distance by measurement, it determines that the former scanning point does not emit detection light, and sends the corresponding command to the scan control unit 101.
  • When the calculation unit 30 determines that the distance corresponding to the currently detected light intensity is smaller or larger than the predetermined value, it determines that the current scanning point does not emit detection light, and sends a corresponding instruction to the scanning control unit 101.
  • The scan control unit 101 controls the light-emitting unit array 10 not to emit detection light according to the instruction.
  • the computing component 30 is configured to perform the above-described operations of determining whether the current scan point emits probe light before each said scan.
  • computing component 30 may be configured to: determine a sensing vector from at least one past, temporally recent measurement; determine at least one other past, spatially recent measurement; and, based on the determined sensing The vector, along with the determined measurement, determines whether the current scan point emits probe light.
  • a corresponding instruction is sent to the scan control part 101 , so that the light emitting unit 10 does not need to send probe light to the target object under the control of the scan control part 101 .
  • In the prior art, AI needs to identify objects first, and only then decide whether to reduce the light intensity for scanning.
  • the system of this embodiment can at least partially solve the deficiencies in the prior art.
  • step S101 the calculating part 30 acquires the first sensing vector of the past period closest to the current scan point time.
  • the information is pre-stored in any suitable storage section in chronological order.
  • step S102 the calculating part 30 obtains the second sensing vector of the current period with the closest distance to the current scanning point. Since the scan angle and scan time (current frame, or previous frames) are known when each scan process is performed, the second sensing vector can be obtained from this information.
  • the current period/past period may be the current/past one frame, or the current/past horizontal completion of a line scan.
  • In step S103, the calculation component 30 determines the scanning characteristics of the current scanning point, such as the emission intensity, emission frequency, emission area, pulse distinguishability characteristic, and scanning area, according to the acquired first and second sensing vectors.
  • In step S104, the calculation part 30 determines whether the light-emitting unit array 10 should be allowed to perform a light-emitting operation in the current period.
  • For example, the computing component 30 may determine that the light-emitting unit array 10 should be prevented from performing a light-emitting operation in the current period, and send a corresponding control instruction to the scan control unit 101 to prevent the light-emitting unit array 10 from emitting, specifically as described above.
  • If the result of the judgment in step S104 is "Yes", then in step S105 the sensing vectors of the photosensitive units corresponding to the current scanning angle and the maximum possible coverage of the current divergence angle are obtained, and the flow returns to step S101; otherwise, the flow jumps directly back to step S101.
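  • A compact sketch of the S101-S105 loop (the data structures and the skip rule used here are invented for illustration; the patent lists several alternative skip rules):

```python
import random

def predict_features(v1, v2):
    """S103: predict the current point's scanning characteristics from the
    two retrieved sensing vectors (placeholder logic for the sketch)."""
    return {"distance": (v1["distance"] + v2["distance"]) / 2.0}

def should_emit(feats, d_min=1.0, d_max=200.0):
    """S104: skip emission when the predicted distance is out of the useful
    range (one of the skip rules described in the text)."""
    return d_min <= feats["distance"] <= d_max

def scan_decision_loop(points, history):
    emitted = []
    for p in points:
        v1 = history[p]["past"]     # S101: closest past-period vector
        v2 = history[p]["current"]  # S102: closest current-period vector
        feats = predict_features(v1, v2)  # S103
        if should_emit(feats):            # S104
            emitted.append(p)             # S105: emit and record coverage
    return emitted

history = {p: {"past": {"distance": random.uniform(0.5, 250.0)},
               "current": {"distance": random.uniform(0.5, 250.0)}}
           for p in range(10)}
print(scan_decision_loop(range(10), history))
```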
  • the computing unit 30 is further configured to obtain at least one sub-region of interest (attention) in the target scene using the sensing vector measured in the second preset time range in the past.
  • For example, the point cloud data of the past 5 frames, converted from 1000x1000-resolution real-world 3D image sensors or from virtual-world 3D images, can be concatenated into a tensor at the input of a deep learning neural network (including RNN, CNN, ResNet, LSTM, GRU, sequence models, etc.); the deep learning neural network has been pre-labeled offline (for example, by manual labeling, or by labeling with simple computer primitives and object information, though other automatic labeling methods are also allowed).
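  • To make the input construction concrete (shapes only; no particular network or framework is implied by the patent):

```python
import numpy as np

FRAMES, H, W = 5, 1000, 1000  # past 5 frames at 1000x1000 resolution

def frames_to_tensor(depth_frames):
    """Stack the past 5 depth/point-cloud frames into a single input tensor
    of shape (FRAMES, H, W); the exact layout expected by the network
    (RNN, CNN, ResNet, LSTM, GRU, ...) is an assumption of this sketch."""
    assert len(depth_frames) == FRAMES
    return np.stack(depth_frames).astype(np.float32)

tensor = frames_to_tensor([np.zeros((H, W)) for _ in range(FRAMES)])
print(tensor.shape)  # (5, 1000, 1000)
```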
  • The computing unit 30 sends an instruction to the scanning control unit 101 so that, in the third preset time range, the scanning density of the obtained sub-region of interest exceeds that of other regions by more than the first multiple threshold, and/or its scanning frequency is greater or less than the second multiple threshold, and/or its average light energy per unit time is greater or less than the third multiple threshold.
  • The second preset time range and the third preset time range may be, for example, 0.001 seconds, 0.01 seconds, 0.1 seconds, 1 second, or 10 seconds. In this way, regions of interest can be detected better and faster. For example, if an oncoming object is approaching quickly ahead, detection results must be provided faster; or if children are playing by the roadside in the distance, denser scanning is needed before their intentions and/or future actions can be judged.
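  • A minimal sketch of how such an instruction to the scanning control part might be parameterized (the field names and the urgency heuristic are invented; only the multiple-threshold structure comes from the text):

```python
from dataclasses import dataclass

@dataclass
class ScanOverride:
    density_multiplier: float    # must exceed the first multiple threshold
    frequency_multiplier: float  # vs. the second multiple threshold
    energy_multiplier: float     # vs. the third multiple threshold
    duration_s: float            # the third preset time range

def roi_instruction(approach_speed_mps: float) -> ScanOverride:
    """Scale the region-of-interest overrides with how urgently the region
    must be resolved, e.g. a fast oncoming object; purely illustrative."""
    urgency = min(max(approach_speed_mps / 30.0, 1.0), 4.0)
    return ScanOverride(density_multiplier=2.0 * urgency,
                        frequency_multiplier=1.5 * urgency,
                        energy_multiplier=1.2 * urgency,
                        duration_s=0.1)

print(roi_instruction(45.0))
```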
  • FIG. 2 illustrates a 3D image sensor ranging system 100' according to one embodiment of the present application.
  • the 3D image sensor ranging system 100 ′ not only includes at least one light-emitting unit array 10 , at least one photosensitive unit array 20 and at least one computing unit 30 , but also includes at least one independent light scanning unit 40 for controlling Scan to a range of spatial angles corresponding to at least part of the target scene. Since the light-emitting unit array 10 , the photosensitive unit array 20 and the computing component 30 have been described correspondingly above, they will not be repeated here.
  • the light scanning part 40 can perform the same function as the scanning control part 101 and has a similar configuration, and thus a detailed description thereof is also omitted here.
  • Through step 1), at least one 3D image sensor ranging system as described in any of the above embodiments is formed, and through step 2) the at least one 3D image sensor ranging system is integrated in the same semiconductor chip, forming the device for optical ranging. In other words, the device for optical ranging formed according to this embodiment may include at least one 3D image sensor ranging system as described in any of the above embodiments, and a semiconductor chip in which the at least one 3D image sensor ranging system is integrated.
  • FIG. 6 shows a method 200 for ranging using a 3D image sensor ranging system according to an embodiment of the present application.
  • The method 200 includes: step S201, emitting light to at least one target scene through the light-emitting units included in at least one light-emitting unit array; step S202, receiving, by the photosensitive unit, at least a part of the light emitted by the light-emitting unit and reflected by the target scene, and generating a sensing vector according to the received light; and step S203, calculating, according to the generated sensing vector, at least one of the distance between the light-emitting unit array and the target scene and the intensity of the reflected light.
  • the maximum value of the temporal fluctuation amplitude of the divergence angles of the light-emitting units is greater than the first spatial resolution threshold.
  • For at least the first preset proportion of scans, the random error between the actual scanning space angle of the light-emitting unit and the preset scanning space angle is greater than the first spatial resolution threshold.
  • the first spatial resolution threshold includes a horizontal first spatial resolution threshold and a vertical first spatial resolution threshold.
  • the horizontal first spatial resolution threshold may be 0.1°, 1°, 2°, 5°, 10°, or 0.01*system horizontal field of view (FOV), or 0.02*system horizontal FOV, or 0.1*system horizontal FOV.
  • the vertical first spatial resolution threshold may be 0.1°, 1°, 2°, 5°, 10°, or 0.01*system vertical FOV, or 0.02*system vertical FOV, or 0.1*system vertical FOV.
  • the sensing vector may include at least one of a distance between the light-emitting unit and the target scene, a light intensity of the reflected light, a phase of the reflected light, and a spectrum of the reflected light.
  • The calculating step S203 may include: obtaining the emission time t0 of the light; obtaining the arrival time t1 at which a single photon or single light pulse in the sensing vector reaches the photosensitive unit; determining the distance between the light-emitting unit and the target scene based on the obtained emission time t0 and arrival time t1; and determining the number of photosensitive electrons in the sensing vector, or the voltage reading of the collection capacitor, as the intensity of the reflected light.
  • the photosensitive unit may include a circuit structure as shown in FIG. 5 . That is, the photosensitive unit may include two capacitors C1 and C2, a variable shunt, and an avalanche diode APD. The photosensitive electrons are delivered to the two capacitors C1 and C2 through the control of the variable shunt.
  • The calculating step S203 may include: obtaining the light emission time t0; obtaining the voltage reading of the first capacitor C1 and the voltage reading of the second capacitor C2; determining the arrival time t1 of the light at the photosensitive unit according to the voltage readings; calculating the distance between the light-emitting unit and the target scene based on the obtained emission time t0 and arrival time t1, i.e., (t1-t0) × speed of light / 2; and then determining the sum of the voltage readings of the first capacitor C1 and the second capacitor C2 as the intensity of the reflected light.
  • The calculating step S203 may further include: obtaining the light emission time t0 and the preset emitted light pulse width T0; obtaining the earliest time t_1 at which an electron group of 2 electrons reaches the same photosensitive unit of the photosensitive unit array within the preset first time interval threshold T_1, the second electron of the group arriving at/appearing in the same photosensitive unit at t_1+Δt1; and, at the same time, obtaining the electron groups of 2 electrons that satisfy the same interval condition and reach the same photosensitive unit.
  • Whether the current scanning point emits detection light may be determined based on past sensing vectors, wherein within the second preset time range there are at least a second preset non-emission proportion of times at which no detection light is emitted. For example, when it is determined that at least two light-emitting units scan the target scene with strong light and weak light respectively, if the weak-light scan has already obtained the distance by measurement, it is determined that the former scanning point does not emit probe light. Alternatively, when the distance corresponding to the currently detected light intensity is determined to be smaller or larger than the predetermined value, it is determined that the current scanning point does not emit the detection light.
  • the current emission will be skipped according to the second preset non-emission ratio.
  • When the divergence angle of an earlier scan within the second preset time range has already covered most of the current pixels, it is determined that the current scanning point does not emit detection light.
  • The second preset non-emission ratio may be 1%, 5%, 20%, 30% or 80%, and the step of determining whether the current scanning point emits probe light is performed before each scan.
  • Step S203 of calculating may further include: determining a sensing vector resulting from at least one past, temporally recent measurement; determining at least one other past, spatially recent measurement; and, according to the determined sensing vector and the determined measurement, determining whether the current scanning point emits probe light; wherein the sensing vector can be obtained through the steps of the flowchart shown in FIG. 4.
  • the light emitting unit may emit at least two scanning beams with different divergence angles to the target scene.
  • The calculating step S203 may further include: obtaining at least one sub-region of interest in the target scene by using the sensing vectors measured in the second preset time range in the past; and issuing an instruction such that, in the third preset time range, the scanning density of the sub-region of interest exceeds that of other regions by more than the first multiple threshold, and/or the scanning frequency is greater or less than the second multiple threshold, and/or the average light energy per unit time is greater or less than the third multiple threshold.
  • At least one sub-region of interest can also be determined through embedded calculation and/or preset rules in the photosensitive unit, wherein, in step S201, the photosensitive unit outputs sensing vectors for a number of sub-pixels smaller than the second preset ratio of the image sensor's sub-pixels.
  • the second preset ratio is, for example, 1%, 5%, 20%, 30%, and 80%.
  • FIG. 7 shows a schematic structural diagram of a computer system 700 of an electronic device suitable for implementing the 3D imaging method of the embodiment of the present application.
  • the electronic device shown in FIG. 7 is only an example, and should not impose any limitations on the functions and scope of use of the embodiments of the present application.
  • The computer system 700 includes one or more processors 701 (e.g., CPUs) that can execute various appropriate actions and processes according to a program stored in read-only memory (ROM) 702, or according to a program loaded from the storage portion 706 into random access memory (RAM) 703.
  • In the RAM 703, various programs and data required for the operation of the system 700 are also stored.
  • the processor 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704.
  • An input/output (I/O) interface 705 is also connected to bus 704 .
  • the following components are connected to the I/O interface 705: a storage section 706 including a hard disk and the like; and a communication section 707 including a network interface card such as a LAN card, a modem, and the like.
  • the communication section 707 performs communication processing via a network such as the Internet.
  • A drive 708 is also connected to the I/O interface 705 as needed.
  • a removable medium 709 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, etc., is mounted on the drive 708 as needed so that a computer program read therefrom is installed into the storage section 706 as needed.
  • embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from the network via the communication portion 707 and/or installed from the removable medium 709 .
  • When the computer program is executed by the central processing unit (CPU) 701, the above-described functions defined in the method of the present application are performed.
  • the computer-readable medium described in this application may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • the computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or a combination of any of the above. More specific examples of computer readable storage media may include, but are not limited to, electrical connections with one or more wires, portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), erasable Programmable read only memory (EPROM or flash memory), fiber optics, portable compact disk read only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
  • a computer-readable storage medium can be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any suitable medium including, but not limited to, wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for performing the operations of the present application may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
  • each block in the flowcharts or block diagrams may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by dedicated hardware-based systems that perform the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments of the present application may be implemented in a software manner, and may also be implemented in a hardware manner.
  • the described units may also be provided in a processor, which may, for example, be described as: a processor including an acquisition unit and a 3D image generation unit, where the names of these units do not, in certain circumstances, limit the units themselves.
  • the acquisition unit may, for example, also be described as "a unit that acquires depth information of points in the scene to be captured corresponding to at least one pixel".
  • the present application also provides a computer-readable medium, which may be included in the apparatus described in the above-mentioned embodiments, or may exist independently without being assembled into the apparatus.
  • the above-mentioned computer-readable medium carries one or more programs, which, when executed by the apparatus, cause the apparatus to execute the above-described ranging method.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Electromagnetism (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Measurement Of Optical Distance (AREA)

Abstract

A 3D image sensor ranging system and a method for ranging using the system. The system includes: at least one light-emitting unit array (10), at least one photosensitive unit array (20), and at least one computing component (30). Each light-emitting unit array (10) may include at least one light-emitting unit for emitting light toward a target scene. Each photosensitive unit array (20) may include at least one photosensitive unit for receiving at least part of the light emitted by the light-emitting unit array and reflected by the target scene and generating a sensing vector from the received light. The computing component (30) calculates, from the sensing vectors generated by the photosensitive unit array (20), at least one of the distance between the light-emitting unit array (10) and the target scene and the intensity of the reflected light.

Description

3D image sensor ranging system and method for ranging using the same
Cross-Reference
This application claims priority to Chinese invention patent application No. 202011149482.9, entitled "3D Image Sensor Ranging System and Method for Ranging Using the Same", filed with the Chinese Patent Office on October 23, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present application relate to the field of lidar ranging, and in particular to a 3D image sensor ranging system and a method for ranging using the system.
Background
Lidar systems are of ever-increasing importance in environment recognition. Laser beams can, in particular, be used to scan the surroundings and measure the distance of objects in the surroundings. A lidar system typically includes at least one light source for emitting light toward objects in the surroundings and a receiver for receiving the light reflected by those objects. The lidar system can determine the distance of an object from the system based on the time difference between the light source emitting light and the receiver receiving it (i.e., the time of flight of the light).
As lidar systems find ever wider application, there is growing demand for systems that are smaller, range farther, and operate more efficiently. In the course of integrating lidar systems, however, how to improve efficiency, reduce size, and effectively avoid mutual interference between emitted and reflected light is one of the pressing problems to be solved.
Summary
In one aspect of the present application, a 3D image sensor ranging system is disclosed. The system may include at least one light-emitting unit array, at least one photosensitive unit array, and at least one computing component. Each light-emitting unit array may include at least one light-emitting unit for emitting light toward a target scene. Each photosensitive unit array may include at least one photosensitive unit for receiving at least part of the light emitted by the light-emitting unit array and reflected by the target scene, and generating a sensing vector from the received light. The computing component calculates, from the sensing vectors generated by the photosensitive unit array, at least one of the distance between the light-emitting unit array and the target scene and the intensity of the reflected light.
In one embodiment, the divergence angle of the light emitted by the light-emitting unit fluctuates over time, the maximum of the divergence angle being greater than a first spatial resolution threshold. The first spatial resolution threshold may be greater than twice the spatial resolution of the 3D image sensor ranging system.
The 3D image sensor ranging system according to one embodiment of the present application may further include a scanning section for controlling the light-emitting unit array to perform an illumination scan over a spatial angle range corresponding to at least part of the target scene. Alternatively, at least part of the light-emitting unit array includes a light-emission scan control component for controlling the illumination scan performed by the light-emitting unit array over a spatial angle range corresponding to at least part of the target scene.
In one example, within a first preset time range, the random error between the actual scanned spatial angles of at least a first preset angular proportion of the light emitted by the light-emitting unit array and the preset scanned spatial angles is greater than the first spatial resolution threshold. Deliberately designing in a large scanning error reduces the system cost, and the randomness of the error helps guarantee full coverage of the scene by the emitted light while greatly lowering system cost.
In one example, the sensing vector may include at least one of: the distance between the light-emitting unit and the target scene, the intensity of the reflected light, the phase of the reflected light, and the spectrum of the reflected light. The computing component is configured to: obtain the emission time t0 of the light; obtain the arrival time t1 at which a single photon or a single light pulse in the sensing vector reaches the photosensitive unit; determine the distance between the light-emitting unit and the target scene based on the obtained t0 and t1; and determine the number of photosensitive electrons in the sensing vector, or the voltage reading of a collection capacitor in the photosensitive unit, as the intensity of the reflected light.
Alternatively, each photosensitive unit includes a first capacitor C1 and a second capacitor C2, and the computing component is configured to: obtain the emission time t0 of the light; obtain the voltage reading of the first capacitor C1 and the voltage reading of the second capacitor C2; determine, from the voltage readings, the arrival time t1 at which the light reaches the photosensitive unit; calculate the distance between the light-emitting unit and the target scene based on the obtained t0 and t1; and determine the sum of the voltage readings of C1 and C2 as the light intensity.
As another alternative, the computing component is configured to: obtain the emission time t0 of the light and a preset emitted-pulse width T0; obtain the time t_1 of the earliest group of 2 electrons arriving at the same photosensitive unit in the photosensitive unit array within a preset first time-interval threshold T_1, the second electron of the group arriving at/appearing in the same photosensitive unit at time t_1 + Δt1, and at the same time obtain the number n_1 of 2-electron groups arriving at that same photosensitive unit under the same interval condition, where Δt1 < T_1; then successively obtain the time t_m of the earliest group of m+1 electrons arriving at the same photosensitive unit within a preset m-th time-interval threshold T_m, together with the number n_m of (m+1)-electron groups satisfying the same condition, where m is greater than or equal to 2; using the corresponding group counts n_1, …, n_m, obtain the group arrival time t_max ∈ {t_1, …, t_m} corresponding to the maximum group count n_max = max{n_1, …, n_m}; determine the distance between the light-emitting unit array and the target scene by the rule [distance = (t_max − t0) × c / 2, c being the speed of light]; and determine the maximum group count n_max as the intensity of the reflected light.
In another embodiment, the system calculates, according to a predetermined rule and from the above {n_1, …, n_m} and {T_1, …, T_m}, the best group count n_best corresponding to the emitted light pulse and the corresponding group arrival time t_best, then determines the distance between the light-emitting unit array and the target scene by the rule [distance = (t_best − t0) × c / 2], and determines n_best as the intensity of the reflected light.
As a further exemplary alternative, the computing component may also be configured to: obtain the emission time t0 of the light; obtain the time t_1 of the earliest 2 electron groups arriving simultaneously, within a preset first time-interval threshold, at different but adjacent photosensitive units of the photosensitive unit array, and at the same time obtain the number n_1 of 2-electron groups arriving at adjacent photosensitive units under the same interval condition; then successively obtain the time t_m of the earliest m+1 electron groups arriving within a preset m-th time-interval threshold, together with the number n_m of (m+1)-electron groups arriving at the adjacent photosensitive units under the same interval condition, where m ≥ 2, and, from the corresponding group counts, obtain the group arrival time t_max corresponding to the maximum group count n_max; determine the distance between the light-emitting unit array and the target scene by the rule [distance = (t_max − t0) × c / 2]; and determine the maximum group count n_max as the intensity of the reflected light.
In one exemplary embodiment, the computing component is configured to: during a scan performed according to a predetermined rule, decide, based on past sensing vectors acquired before the current scan point, whether probe light should be emitted for the current scan point, where within a second preset time range there are at least a second preset non-emission proportion of instances in which no probe light is emitted. For example, when the computing component determines that at least two light-emitting units scan the target scene successively with strong light and weak light, if the weak-light scan has already obtained the distance between the light-emitting unit array and the target scene by measurement, it determines that no probe light is emitted for the current scan point. Or, when the computing component determines that the distance obtained by the current intensity measurement is smaller or larger than a predetermined value, it determines that no probe light is emitted for the current scan point. Or, when the computing component determines that the target region currently being scanned is an unimportant region not of interest, it determines that the current light emission should be skipped at the second preset non-emission proportion. Or, when the computing component determines that the divergence angle of some scan within the second preset time range has already probed most of the current pixels, it determines that no probe light is emitted for the current scan point. Herein, the second preset non-emission proportion may be 1%, 5%, 20%, 30%, or 80%. Furthermore, the computing component is configured to decide, before each scan, whether probe light should be emitted for the current scan point.
In one embodiment, the computing component is configured to: for the current scan point, determine at least one past sensing vector from the measurement closest in time; determine at least one other past measurement closest in spatial angle; and decide, from the determined sensing vector and the determined measurement, whether probe light should be emitted for the current scan point. To obtain the sensing vectors, the computing component is configured to perform the following: 1) acquire a first sensing vector of the past period closest in time to the current scan point; 2) acquire a second sensing vector of the current period closest in distance to the current scan point; 3) predict, from the first and second sensing vectors, the scan characteristics of the current scan point, which may include at least one of the emission intensity, emission frequency, emission region, pulse distinguishability, degree of attention, and scan region of the current scan point; and 4) determine, from the determined scan characteristics, whether the light-emitting unit should currently be allowed to emit the probe light;
if so, obtain the sensing vectors of the photosensitive units maximally coverable by the corresponding current scan angle and current divergence angle; otherwise, jump back to step 1) and re-execute steps 1) through 4).
In one embodiment, each photosensitive unit is configured to: determine whether the number or the amplitude of photosensitive electrons in a received light pulse is smaller than a predetermined electron-count threshold or a predetermined signal-amplitude threshold, respectively, and if so, discard the information contained in the light, where the electron-count threshold and the signal-amplitude threshold decrease gradually over time from preset values according to a preset rule starting from the light emission. Light beams emitted simultaneously by at least two light-emitting units in the light-emitting unit array may at least partially overlap in spatial angle, and the wavelength ranges contained in the respective beams may be at least partially different. The light emitted by a light-emitting unit may also include at least two scanning beams with different divergence angles.
In one embodiment of the present application, the computing component is further configured to use the sensing vectors measured within a past second preset time range to obtain at least one sub-region of interest in the target scene, and to issue instructions such that, within a third preset time range, the sub-region of interest is scanned with a density exceeding that of other regions by more than a first multiple threshold, and/or with a scan frequency greater or smaller than a second multiple threshold, and/or with a per-unit-time average light energy greater or smaller than a third multiple threshold. As an example, at least one sub-region of interest may be determined by embedded computation within the photosensitive units and/or by a preset rule, where the photosensitive units output sensing vectors for fewer than a second preset proportion of the sub-pixel count of the image sensor.
Another aspect of the present application further provides a method for ranging using a 3D image sensor ranging system, including: emitting light toward a target scene through light-emitting units included in at least one light-emitting unit array; receiving, through photosensitive units, at least part of the light emitted by the light-emitting units and reflected by the target scene, and generating sensing vectors from the received light; and calculating, from the generated sensing vectors, at least one of the distance between the light-emitting unit array and the target scene and the intensity of the reflected light.
Another aspect of the present application further provides an apparatus for optical ranging, including: at least one 3D image sensor ranging system as described in any of the above embodiments; and a semiconductor chip in which the at least one 3D image sensor ranging system is integrated.
Another aspect of the present application further provides a method of forming an apparatus for optical ranging, including: forming at least one 3D image sensor ranging system as described in any of the above embodiments, and integrating the at least one 3D image sensor ranging system in a single semiconductor chip.
Brief Description of the Drawings
Other features, objects, and advantages of the present application will become more apparent upon reading the following detailed description of non-limiting embodiments, made with reference to the accompanying drawings:
FIG. 1 is an exemplary system architecture diagram of a 3D image sensor ranging system according to one embodiment of the present application;
FIG. 2 is an exemplary system architecture diagram of a 3D image sensor ranging system according to another embodiment of the present application;
FIG. 3 is a schematic diagram of overlapping beams emitted by light-emitting units according to another embodiment of the present application;
FIG. 4 is a flowchart for obtaining sensing vectors according to another embodiment of the present application;
FIG. 5 is a schematic circuit diagram of a photosensitive unit according to another embodiment of the present application;
FIG. 6 is a flowchart of a method for ranging using a 3D image sensor ranging system according to another embodiment of the present application; and
FIG. 7 is a schematic structural diagram of a computer system of an electronic device suitable for implementing the 3D imaging method of embodiments of the present application.
Detailed Description
For a better understanding of the present application, various aspects of the present application will be described in more detail with reference to the accompanying drawings. It should be understood that these detailed descriptions merely describe exemplary embodiments of the present application and do not limit the scope of the present application in any way. Throughout the specification, the same reference numerals refer to the same elements. The expression "and/or" includes any one of, and any combination of any two or more of, the associated listed items. It should be understood that the specific embodiments described herein are merely used to explain the related invention and do not limit that invention. It should also be noted that, for ease of description, only the parts related to the relevant invention are shown in the drawings.
The features described in the present application may be embodied in different forms and should not be construed as limited to the examples described herein. Rather, the examples described herein are provided merely to illustrate some of the many possible ways of implementing the methods, apparatuses, and/or systems described herein, which will be apparent after an understanding of the disclosure of the present application.
The use of the word "may" with respect to an example or embodiment (e.g., as to what an example or embodiment may include or implement) means that at least one example or embodiment exists that includes or implements such a feature, while all examples or embodiments are not limited thereto.
It should be noted that, in this specification, the expressions "first", "second", and the like are used only to distinguish one feature from another and do not imply any limitation on the features.
In the drawings, the thickness, size, and shape of components may have been slightly exaggerated for ease of illustration. In particular, the spherical or aspherical shapes shown in the drawings are shown by way of example; spherical or aspherical shapes are not limited to those shown. The drawings are examples only and are not drawn strictly to scale.
Throughout the specification, when an element is described as being "on", "connected to", or "coupled to" another element, it may be directly on, directly connected to, or directly coupled to the other element, or one or more other elements may be present between the element and the other element. In contrast, when an element is described as being "directly on", "directly connected to", or "directly coupled to" another element, no other element may be present between the element and the other element.
For ease of description, spatially relative terms such as "above", "upper", "below", and "lower" may be used herein to describe the relationship of one element to another as shown in the drawings. In addition to the orientation depicted in the drawings, such spatially relative terms are intended to encompass different orientations of the device in use or operation. For example, if the device in the drawings is turned over, an element described as "above" or "upper" relative to another element would then be "below" or "lower" relative to that other element. Thus, depending on the spatial orientation of the device, the term "above" encompasses both the "above" and "below" orientations. The device may also be oriented in other ways (e.g., rotated 90 degrees or at other orientations), and the spatially relative terms used herein should be interpreted accordingly.
It should also be understood that the terms "comprises", "comprising", "has", "contains", and/or "containing", when used in this specification, indicate the presence of the stated features, elements, and/or components, but do not preclude the presence of one or more other features, elements, components, and/or combinations thereof. Furthermore, when an expression such as "at least one of" appears after a list of features, it modifies the entire list of features rather than just individual elements of the list.
As used herein, the words "substantially", "approximately", and similar words are used as words of approximation rather than of degree, and are intended to account for the inherent deviations in measured or calculated values that would be recognized by those of ordinary skill in the art.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meanings as commonly understood by those of ordinary skill in the art to which the present application belongs. It should also be understood that terms (such as those defined in commonly used dictionaries) should be construed to have meanings consistent with their meanings in the context of the relevant art, and are not to be construed in an idealized or overly formal sense unless expressly so defined herein.
It should be noted that, where no conflict arises, the embodiments of the present application and the features in the embodiments may be combined with one another. In addition, unless explicitly limited or contradicted by context, the specific steps included in the methods described herein are not necessarily limited to the order described, and may be performed in any order or in parallel.
FIG. 1 shows a 3D image sensor ranging system 100 according to one embodiment of the present application. As shown, the 3D image sensor ranging system 100 may include at least one light-emitting unit array 10, at least one photosensitive unit array 20, and at least one computing component 30. The at least one light-emitting unit array 10 may include at least one light-emitting unit for emitting light toward a target scene. Each photosensitive unit array 20 includes at least one photosensitive unit for receiving at least part of the light emitted by the light-emitting unit array and reflected by the target scene, and generating a sensing vector from the received light. Each computing component 30 calculates, from the sensing vectors generated by the photosensitive unit array 20, at least one of: 1) the distance between the light-emitting unit and the target scene; and 2) the intensity of the reflected light.
Light-emitting unit array 10
The light-emitting unit array 10 contains at least one light-emitting unit. The light-emitting unit is configured to emit light pulses toward the target scene according to a predetermined rule so as to illuminate the target scene. The light-emitting unit array 10 may emit light pulses with wavelengths in the range of, for example, 300 nm-750 nm, 700 nm-1000 nm, 900 nm-1600 nm, 1 µm-5 µm, or 3 µm-15 µm. The pulse width may be, for example, 0.1 ps-5 ns, 1 ns-100 ns, 100 ns-10 µs, or 10 µs-10 ms. The wavelength and pulse-width parameters of the light pulses emitted by the light-emitting unit array 10 are given here only by way of example; the present application is not limited thereto, and other wavelength and pulse-width parameters that do not depart from the teaching of the present application are also permissible.
In some embodiments, each light-emitting unit may be a semiconductor laser, a fiber laser, a solid-state laser, or the like. In some embodiments, the light pulse emitted by each light-emitting unit may be modulated linearly polarized light, circularly polarized light, elliptically polarized light, or unpolarized light. The pulse repetition frequency of the light pulses may be selected from the ranges 1 Hz-100 Hz, 100 Hz-10 kHz, 10 kHz-1 MHz, or 1 MHz-100 MHz. The coherence length of the light pulses may be less than 100 m, 10 m, 1 m, or 1 mm. The light pulse emitted by each light-emitting unit is directed at the target scene, which may include, for example, a subject 50.
The maximum of the fluctuation over time of the divergence angle of the light-emitting unit is greater than a first spatial resolution threshold. The divergence angle of the light emitted by each light-emitting unit toward the target scene 50 is greater than the first spatial resolution threshold, where the first spatial resolution threshold includes a horizontal first spatial resolution threshold and a vertical first spatial resolution threshold. The horizontal first spatial resolution threshold may be 0.1°, 1°, 2°, 5°, 10°, or 0.01 × the system horizontal field of view (FOV), 0.02 × the horizontal FOV, or 0.1 × the horizontal FOV. The vertical first spatial resolution threshold may be 0.1°, 1°, 2°, 5°, 10°, or 0.01 × the system vertical FOV, 0.02 × the vertical FOV, or 0.1 × the vertical FOV.
The 3D image sensor ranging system 100 may further include a light-emission scan control component 101, which may be formed integrally with at least some of the light-emitting units of the light-emitting unit array 10. In FIG. 1, the light-emission scan control component 101 is shown in dashed lines, indicating that the component 101 can be integrated into the light-emitting unit array 10. The scan control component 101 can control scanning over the spatial angle range corresponding to at least part of the target scene, i.e., it controls all the outgoing rays of the light-emitting unit array 10. For example, suppose the target scene is described by a horizontal angle index x (say, ranging over 1-1000) and a vertical angle index y (say, ranging over 1-200). An ordinary single-beam scan follows some simple rule to land the light spot on the centers of all (1-1000) × (1-200) = 200,000 grid cells, for example: 1) with vertical = 1, scan horizontally from 1 to 1000 in steps of 1; then 2) with vertical = 2, scan horizontally from 1 to 1000 in steps of 1 again. In a typical lidar, the divergence angle of the emitted beam is optimized to be as small as possible, e.g., smaller than the size of one of the 200 × 1000 cells. When the divergence angle is large, however, the spot of a single beam may illuminate many cells at once. In existing lidars the scan path is fixed and takes no account of whether the diverging spot covers several cells. But when n × m spot cells can be detected simultaneously and effectively (say, 3 × 3 cells), the scan no longer needs to advance the horizontal index by 1 per step, nor the vertical index by 1 per line. When the angular trajectory of the single-beam scan is allowed to move randomly/fuzzily within a certain range, the divergence angle of the beam must also vary randomly/fuzzily within a certain range; this guarantees that, within a certain time, the spots completely cover all the cells defined by the spatial angular resolution of the target scene. The sketch following this paragraph illustrates the idea.
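The randomized stepping idea can be illustrated with a short sketch (Python; the grid size, spot half-widths, and jitter range are illustrative assumptions, not parameters taken from the present application):

    import random

    H_CELLS, V_CELLS = 1000, 200            # horizontal x vertical angular grid cells
    covered = [[False] * H_CELLS for _ in range(V_CELLS)]

    def fire(x, y, half_width):
        # Mark every grid cell inside a square light spot as illuminated.
        for dy in range(-half_width, half_width + 1):
            for dx in range(-half_width, half_width + 1):
                cx, cy = x + dx, y + dy
                if 0 <= cx < H_CELLS and 0 <= cy < V_CELLS:
                    covered[cy][cx] = True

    # Coarse raster: a multi-cell spot lets the beam advance several cells per shot,
    # while the pointing jitter and the divergence both wander randomly.
    for y in range(1, V_CELLS, 3):
        x = 1
        while x < H_CELLS:
            half_width = random.choice([1, 2])                     # fluctuating divergence
            jx, jy = random.randint(-1, 1), random.randint(-1, 1)  # pointing error
            fire(x + jx, y + jy, half_width)
            x += 2 * half_width + 1                                # step by the spot width

    coverage = sum(map(sum, covered)) / (H_CELLS * V_CELLS)
    print(f"single-pass coverage: {coverage:.1%}")

Run over several passes, the random jitter fills in the cells any single pass misses, which is why the deliberate randomness remains compatible with full scene coverage.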
Simply put, the larger the divergence angle of the emitted light, the less optical scanning of the target scene is required. But the larger the divergence angle, the smaller the maximum distance that the detector (i.e., the photosensitive unit array 20) can detect. One of the goals of the present application is to use a light emission and scanning system of the lowest possible quality and cost while achieving configurable optimal system resolution, range, and output point-cloud rate.
In one embodiment, the light-emission scan control component 101 is configured so that the light emitted by the above light-emitting unit satisfies: within a first preset time range, the random error between the actual scanned spatial angles of at least a first preset angular proportion of the light and the preset scanned spatial angles is greater than the first spatial resolution threshold, where the first spatial resolution threshold is greater than twice the spatial resolution of the present system. It should be understood that an ordinary lidar has a designed system spatial resolution, e.g., a horizontal resolution of 0.1°. When an ordinary mechanically scanned lidar scans horizontally, it fires a laser every 0.1° in order to obtain a horizontal spatial resolution of 0.1°; vertical scanning works analogously, and prior scanning lidars generally operate on this principle. A flash lidar, by contrast, is used much like an ordinary camera, except that it emits a laser flash that illuminates the whole field. Like an ordinary camera, it has a special m × n-pixel image sensor; for example, with a conventional 1024 × 768-pixel sensor and a field of view (determined by the optics) of 100° horizontally and 76° vertically, the horizontal resolution of the camera or flash lidar is 100/1024 ≈ 0.1° and the vertical resolution is 76/768 ≈ 0.1°. In the present application, deliberately designing in a large scanning error reduces the system cost, and the randomness of the error helps guarantee full coverage of the scene by the emitted light while greatly lowering system cost.
Furthermore, a conventional scanning lidar system intends to emit an always-parallel laser beam that is as narrow as possible, so that the system obtains the best angular resolution and signal-to-noise ratio; but such precise control is hard to achieve, especially when the emission is controlled by semiconductor (or non-mechanical) scanning devices. In ordinary ranging schemes, the divergence angle is always "the smaller the better" and is fixed/constant. In this embodiment of the present application, the fixed control of the divergence angle is relaxed, and the divergence angle of the emitted light is allowed to fluctuate within a range that benefits manufacturing cost. In a system with 0.1° resolution, this fluctuating divergence angle may be 1°, 2°, or 3°. The system is tuned to balance emitted optical power, maximum measured distance, the photoelectric efficiency of the detection sensor, and manufacturing cost.
Furthermore, light beams emitted simultaneously by at least two light-emitting units in the light-emitting unit array 10 at least partially overlap in spatial angle, and the wavelength ranges contained in these beams are at least partially different. The light emitted by the light-emitting unit 10 may include at least two scanning beams with different divergence angles, as shown in FIG. 3. With this configuration, a laser beam of larger cross-section can be emitted, which has a smaller divergence angle and ranges farther; at the same time, objects corresponding to more closely spaced pixels can be detected simultaneously, improving sub-spatial resolution.
Photosensitive unit array 20
The photosensitive unit array 20 includes at least one photosensitive unit. The photosensitive unit array is used to receive at least part of the light reflected by the target scene and to provide the computing component 30 with sensing vectors comprising at least part of the information carried in the reflected light, where each sensing vector may include at least one of: the distance between the photosensitive unit and the target object, the intensity of the light, the phase of the light, and the spectrum of the light.
In one example, the photosensitive unit may include a photoelectric sensor and a filter (not shown). The photoelectric sensor generates photosensitive electrons through the photoelectric effect in response to the received reflected light. The corresponding light intensity can be obtained by counting the photosensitive electrons, and the corresponding distance between the light-emitting unit and the target scene can be determined by multiplying the time interval between the generation of the photosensitive electrons and the emission of the light by the speed of light. The filter may be placed in front of the photoelectric sensor to obtain the light intensity of a specific band, and the spectrum of a specific band can be obtained by modulating and demodulating the light of that band. The phase of low-frequency-modulated light can be obtained by demodulation against an electrical signal of the same frequency. In addition, the phase of the beam itself can be obtained from the time and position at which the photosensitive electrons are generated.
For example, in one embodiment, at least one photosensitive unit (also referred to herein as a "pixel") uses at least 2 capacitors to collect the electrons associated with the photosensitive electrons during an exposure, and at the end of the exposure the measured values of the at least 2 capacitors are used to calculate the sensing vector of the corresponding pixel. The photoelectric converter in the photosensitive unit can convert the optical signal into an electrical signal, so that, by processing the electrical signal, the image information of points in the target scene can be recovered.
Specifically, during the exposure, the light emitted by the light-emitting unit is reflected by points in the target scene, and the resulting reflected rays can enter the photoelectric converter, which converts the optical signal into a corresponding electrical signal by photoelectrically converting the light reflected by the target scene during the exposure. Here, the signal value of the electrical signal can be characterized, for example, by the number of photosensitive electrons (i.e., the amount of charge) obtained after photoelectric conversion of the optical signal. For a given photoelectric converter, the functional relationship between the optical signal and the electrical signal is known; thus, by detecting the signal value of the electrical signal, the sensing vector of each pixel corresponding to the target scene can be calculated, and the image information of points in the target scene recovered. In this embodiment, the sensing vector of each pixel may be, for example, a set of data containing the distance, light intensity, phase, spectrum, and other information of that pixel.
As described above, the signal value of the electrical signal obtained after photoelectric conversion can be characterized by the amount of charge obtained. In one embodiment, each photosensitive unit in the photosensitive array uses at least 2 capacitors with different charge/discharge characteristics to collect the photosensitive electrons (i.e., the charge obtained by photoelectric conversion) during the exposure. FIG. 5 schematically shows the circuit structure of a photosensitive unit according to one embodiment of the present application. As shown in FIG. 5, the photosensitive unit may include two capacitors C1 and C2, a variable shunt, and an avalanche photodiode APD (or a single-photon avalanche diode (SPAD), or a photodiode PD). When the optical signal incident at some time t1 has been converted into an electrical signal in the photosensitive region, under the control of the time-varying shunt a portion q1 of the photosensitive electrons is delivered to capacitor C1 and another portion q2 to capacitor C2, where q1 + q2 is the total number of electrons generated at time t1. When the optical signal incident at another time t2 has been converted into an electrical signal in the photosensitive region, under the control of the variable shunt a portion q1' of the photosensitive electrons is delivered to C1 and another portion q2' to C2, where q1' + q2' is the total number of electrons generated at time t2. Because the split ratio of the shunt varies with time, q1/q2 and q1'/q2' differ, and each ratio value corresponds to a definite time. FIG. 5 uses generic electrical components to represent schematically the parts used in a concrete implementation; each component is a conventional part with its own properties and functions and is therefore not described individually, but, for clarity, the symbols reset, select (gate control), Vcontrol (control input of the variable shunt), and output are retained as reference labels in FIG. 5.
At the end of the exposure, the measured values of the two capacitors C1 and C2 (i.e., the charge they have collected) are amplified and read out, and used to calculate the sensing vector of the corresponding pixel (e.g., distance, light intensity, phase, and spectrum). The specific processing for obtaining distance and intensity from the measured values of C1 and C2 is further described below with reference to the computing component 30; obtaining phase, spectrum, and the like from the measured values of C1 and C2 can be done with existing techniques.
Furthermore, herein, the duration of each light emission by the light-emitting unit array is referred to as the exposure time; the photoelectric sensing element in the photosensitive unit receives at least part of the light reflected by the target scene and converts it into photosensitive-electron information. If the number of electrons or the signal amplitude in the photosensitive-electron information is smaller than a preset threshold in the photosensitive unit, that photosensitive-electron information is not processed further; moreover, the electron-count threshold and the signal-amplitude threshold decrease gradually over time from preset values according to a preset rule starting from the light emission. Specifically, each photosensitive unit in the photosensitive unit array 20 may also be configured to determine whether the number or amplitude of photosensitive electrons in the received light is smaller than the predetermined electron-count threshold or signal-amplitude threshold, respectively, and if so to discard the information contained in that light. The thresholds are made to decrease with time because signals arriving later are weaker, while signals arriving earlier are accompanied by stronger stray light; a gradually decreasing threshold improves the system's immunity to interference, avoids unnecessary detection time, and better prepares the system for weak long-range signals. A sketch of such a gate follows.
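A rough sketch of this decaying double threshold (Python; the starting values and the exponential decay law are illustrative assumptions, since the application only requires that both thresholds fall from preset values according to a preset rule):

    import math

    def thresholds(t_ns, n0=50.0, a0=1.0, tau_ns=200.0):
        # Electron-count and amplitude thresholds that decay after light emission.
        decay = math.exp(-t_ns / tau_ns)
        return n0 * decay, a0 * decay

    def accept(n_electrons, amplitude, t_ns):
        # Keep a return only if it clears both time-dependent thresholds.
        n_th, a_th = thresholds(t_ns)
        return n_electrons >= n_th and amplitude >= a_th

    # Early strong stray light is rejected; a weak late echo from a distant target passes.
    print(accept(n_electrons=30, amplitude=0.6, t_ns=10))   # False: thresholds still high
    print(accept(n_electrons=30, amplitude=0.6, t_ns=800))  # True: thresholds have decayed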
The photosensitive unit includes at least one of an APD, a photodiode (PD), or a single-photon avalanche diode (SPAD) (silicon-based SiPM, or compound materials formed from group III-V elements such as InGaAs).
Computing component 30
The computing component 30 calculates, from the above sensing vectors measured by the photosensitive unit array 20, the distance between the target scene and the light-emitting unit corresponding to each photosensitive unit (photosensitive pixel), and the relative intensity of the reflected light.
In one example, the sensing vector may include at least one of: the distance between the light-emitting unit and the target scene, the intensity of the reflected light, the phase of the reflected light, and the spectrum of the reflected light.
In one embodiment, the method by which the computing component 30 calculates the distance of the corresponding target scene and the reflected light intensity may include: 1) obtaining the light emission time t0; 2) obtaining the time t1 of the single photon or single light pulse (multi-photon) in the sensing vector; 3) distance = (t1 − t0) × c / 2, where c is the speed of light; and 4) taking the photosensitive-electron count in the sensing vector, or the voltage reading of the collection capacitor, as the light intensity. A minimal sketch of this rule follows.
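As a minimal sketch of steps 1) through 4) (Python; the variable names and units are assumptions added for illustration):

    C = 299_792_458.0  # speed of light, m/s

    def range_and_intensity(t0_s, t1_s, n_electrons):
        # Distance from the round-trip time of flight; intensity from the electron count.
        distance_m = (t1_s - t0_s) * C / 2.0
        return distance_m, n_electrons

    d, i = range_and_intensity(t0_s=0.0, t1_s=400e-9, n_electrons=87)
    print(f"{d:.2f} m, intensity {i}")  # a 400 ns round trip gives roughly 60 m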
In another embodiment, at least one photosensitive unit/pixel in the photosensitive unit array 20 uses at least 2 capacitors to collect the electrons associated with the photosensitive electrons during the exposure, and at the end of the exposure calculates the sensing vector of the corresponding pixel from the measured values of the at least 2 capacitors. Accordingly, the method by which the computing component 30 calculates the distance of the corresponding target scene and the reflected light intensity may include: 1) obtaining the light emission time t0; 2) obtaining the reading of capacitor C1 and the reading of capacitor C2; 3) determining, from the voltage readings, the arrival time t1 at which the light reaches the photosensitive unit; 4) calculating the distance between the emitted light and the target scene 50 by the formula distance = (t1 − t0) × c / 2; and 5) computing the light intensity as the value of C1 plus the value of C2. A sketch of this decoding, under an assumed shunt law, follows.
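A sketch of the two-capacitor decoding (Python). It assumes, purely for illustration, a linear shunt ramp across the gate window, so a photon arriving at the start of the gate sends all its charge to C1 and one arriving at the end sends it all to C2; the actual shunt law of the pixel need only map the split ratio to a definite time, as described above:

    C = 299_792_458.0  # speed of light, m/s

    def decode_two_cap_pixel(v1, v2, t0_s, gate_s):
        # Recover arrival time and intensity from the two collection capacitors.
        intensity = v1 + v2
        if intensity == 0:
            return None, 0.0                     # no return detected
        t1_s = t0_s + (v2 / intensity) * gate_s  # invert the assumed linear ramp
        distance_m = (t1_s - t0_s) * C / 2.0
        return distance_m, intensity

    d, i = decode_two_cap_pixel(v1=0.3, v2=0.7, t0_s=0.0, gate_s=1e-6)
    print(f"{d:.1f} m, intensity {i}")           # 0.7 us into the gate: ~104.9 m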
In yet another embodiment, the method by which the computing component 30 calculates the distance of the corresponding target scene and the reflected light intensity may include:
obtaining the emission time t0 of the light and the preset emitted-pulse width T0;
obtaining the time t_1 of the earliest group of 2 electrons arriving at the same photosensitive unit in the photosensitive unit array within a preset first time-interval threshold T_1, the second electron of the group arriving at/appearing in the same photosensitive unit at time t_1 + Δt1, and at the same time obtaining the number n_1 of 2-electron groups arriving at that same photosensitive unit under the same interval condition, where Δt1 < T_1;
then successively obtaining the time t_m of the earliest group of m+1 electrons arriving at the same photosensitive unit within a preset m-th time-interval threshold T_m, together with the number n_m of (m+1)-electron groups satisfying the same condition, where m is greater than or equal to 2;
using the corresponding group counts n_1, …, n_m, obtaining the group arrival time t_max ∈ {t_1, …, t_m} corresponding to the maximum group count n_max = max{n_1, …, n_m};
determining the distance between the light-emitting unit array and the target scene by the rule [distance = (t_max − t0) × c / 2, c being the speed of light]; and
determining the maximum group count n_max as the light intensity. A sketch of this coincidence grouping follows.
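A sketch of the coincidence grouping (Python; the hit list, window values, and nanosecond units are illustrative). Dark counts arrive spread out in time, while the true echo produces tight bunches, so the group size with the most coincidences marks the echo:

    def group_stats(arrivals, window, size):
        # Earliest start time and count of groups of `size` electrons whose
        # spread fits inside `window`, over sorted arrival times at one pixel.
        arrivals = sorted(arrivals)
        first_t, count = None, 0
        for i in range(len(arrivals) - size + 1):
            if arrivals[i + size - 1] - arrivals[i] < window:
                count += 1
                if first_t is None:
                    first_t = arrivals[i]
        return first_t, count

    def coincidence_range(arrivals_ns, t0_ns, windows_ns):
        # T_1 groups 2 electrons, ..., T_m groups m+1 electrons; keep the best.
        c_m_per_ns = 0.299792458
        best = None                                   # (n_m, t_m)
        for m, window in enumerate(windows_ns, start=1):
            t_m, n_m = group_stats(arrivals_ns, window, size=m + 1)
            if t_m is not None and (best is None or n_m > best[0]):
                best = (n_m, t_m)
        if best is None:
            return None, 0
        n_max, t_max = best
        return (t_max - t0_ns) * c_m_per_ns / 2.0, n_max

    # Isolated dark counts plus a bunched echo near 400 ns:
    hits = [12.0, 150.0, 400.1, 400.3, 400.6, 401.0, 730.0]
    print(coincidence_range(hits, t0_ns=0.0, windows_ns=[0.5, 1.0, 2.0]))  # (~60 m, 3)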
In yet another embodiment, the method by which the computing component 30 calculates the distance of the corresponding target scene and the reflected light intensity may include:
obtaining the emission time t0 of the light;
obtaining the time t_1 of the earliest 2 electron groups arriving simultaneously, within a preset first time-interval threshold, at different but adjacent photosensitive units of the photosensitive unit array, and at the same time obtaining the number n_1 of 2-electron groups arriving at adjacent photosensitive units under the same interval condition;
then successively obtaining the time t_m of the earliest m+1 electron groups arriving within a preset m-th time-interval threshold, together with the number n_m of (m+1)-electron groups arriving at the adjacent photosensitive units under the same interval condition, where m ≥ 2, and, from the corresponding group counts n_m, obtaining the group arrival time t_max corresponding to the maximum group count n_max;
determining the distance between the light-emitting unit array and the target scene by the rule [distance = (t_max − t0) × c / 2]; and
determining the maximum group count n_max as the intensity of the reflected light.
The computing component 30 is also configured to: during scanning with light emitted according to a predetermined rule, decide, based on past sensing vectors, whether probe light is emitted for the current scan point, and send the corresponding execution instruction to the scan control section 101 (to control, according to this decision, whether a scanning beam is fired at the target object), wherein within a second preset time range there are at least a second preset non-emission proportion of instances in which no probe light is emitted. For example, the second preset non-emission proportion may be 1%, 5%, 20%, 30%, or 80%. As an example, when the computing component 30 determines that at least two light-emitting units scan the target scene successively with strong light and weak light, if the weak-light scan has already obtained the distance by measurement, it determines that no probe light is emitted for the current scan point and sends the corresponding instruction to the scan control section 101. Furthermore, when the computing component 30 determines that the distance obtained by the current intensity measurement is smaller or larger than a predetermined value, it decides that the current scan point emits no probe light and sends the corresponding instruction to the scan control section 101. Optionally, when the computing component 30 determines that the target region currently being scanned is an unimportant region not of interest, the current emission is skipped at the second preset non-emission proportion and the corresponding instruction is sent to the scan control section 101. In addition, when the computing component 30 determines that the divergence angle of some scan within the second preset time range has already probed most of the current pixels, it decides that the current scan point emits no probe light and sends the corresponding instruction to the scan control section 101, which then controls the light-emitting unit 10 not to emit probe light. In one example, the computing component 30 is configured to perform this decision before each scan. For example, the computing component 30 may be configured to: determine at least one past sensing vector from the measurement closest in time; determine at least one other past measurement closest in spatial angle; and decide, from the determined sensing vector and the determined measurement, whether the current scan point emits probe light. When it decides that no probe light need be sent, it sends the corresponding instruction to the scan control section 101, so that under the control of the scan control section 101 the light-emitting unit 10 need not send probe light to the target object. As an example, using the distance and intensity values of a pixel from the previous scan of the current period (this frame), as well as the distance and intensity values of the same scan point in the previous frame: if the intensity is too high, or the distance is (for example) less than 15 m and greater than 5 m, no light is emitted currently. A sketch of such an emission gate follows.
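A sketch of the emission gate (Python; the field names, distance limits, and skip ratio are illustrative assumptions about how the past sensing vectors might be stored, and rule 2 follows the "smaller or larger than a predetermined value" reading above):

    import random

    def should_emit(prev_frame, prev_point, cfg):
        # Decide whether to fire probe light at the current scan point,
        # using only past sensing vectors.
        # 1) A weak-light pre-scan already measured this direction.
        if prev_point.get("weak_scan_distance") is not None:
            return False
        # 2) The last return was closer/farther than the configured limits.
        d = prev_frame.get("distance")
        if d is not None and not (cfg["d_min"] <= d <= cfg["d_max"]):
            return False
        # 3) Region currently marked unimportant: skip a preset fraction of shots.
        if prev_frame.get("attention", 1) == 0 and random.random() < cfg["skip_ratio"]:
            return False
        # 4) A recent wide-divergence shot already covered most of these pixels.
        if prev_frame.get("covered_by_wide_shot", False):
            return False
        return True

    cfg = {"d_min": 5.0, "d_max": 15.0, "skip_ratio": 0.2}
    print(should_emit({"distance": 40.0}, {}, cfg))                 # False: out of range
    print(should_emit({"distance": 10.0, "attention": 1}, {}, cfg)) # True: fire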
Moreover, some prior-art approaches must first recognize objects with AI before deciding whether to scan with reduced intensity. Others must divide the target scene into a limited number of regions and decide, per region, whether to scan with reduced intensity or altered scan density. Still others use only a single preset distance threshold, or only a single preset intensity threshold, to decide whether the current scan should reduce/increase intensity or change density. The system of this embodiment can at least partially remedy these shortcomings of the prior art.
The specific steps by which the computing component 30 obtains the above sensing vectors, according to one embodiment of the present application, are described below with reference to FIG. 4. As shown, in step S101, the computing component 30 acquires the first sensing vector of the past period closest in time to the current scan point; this information is saved in advance, in chronological order, in any suitable storage. In step S102, the computing component 30 acquires the second sensing vector of the current period closest in distance to the current scan point. Since the scan angle and scan time (current frame, or several previous frames) are known whenever a scan is executed, the second sensing vector can be obtained from this information. Herein, the current/past period may be the current/past frame, or the current/past completion of one horizontal scan line.
In step S103, the computing component 30 decides, from the sensing vectors of the current period and the acquired first and second sensing vectors, the emission intensity, emission frequency, emission region, pulse distinguishability, and current scan region for the current period.
In step S104, the computing component 30 determines whether the light-emitting unit array 10 should currently be allowed to emit. In particular, the computing component 30 may decide that the light-emitting unit array 10 should not emit in the current period, and send the scan control section 101 the corresponding control instruction causing the light-emitting unit array 10 not to emit, as described above.
If the result of the determination in step S104 is "yes", then in step S105 the sensing vectors of the photosensitive units maximally coverable by the corresponding current scan angle and current divergence angle are obtained, and the flow returns to step S101; otherwise the flow jumps directly back to step S101. A runnable miniature of this loop follows.
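A runnable miniature of the S101-S105 loop (Python; the history store, the trait check, and the stand-in measurement are all illustrative scaffolding, not the claimed implementation):

    class ScanHistory:
        # Minimal store keyed by (period, point) for past sensing vectors.
        def __init__(self):
            self.data = {}
        def latest_in_time(self, point):       # S101: same point, previous frame
            return self.data.get(("prev_frame", point))
        def nearest_in_angle(self, point):     # S102: neighbouring point, this frame
            return self.data.get(("this_frame", point - 1))
        def store(self, point, vector):
            self.data[("this_frame", point)] = vector

    def allow_emission(v_time, v_angle):
        # S103/S104 collapsed to one crude trait check: skip if either
        # neighbouring measurement already saw a strong, near return.
        for v in (v_time, v_angle):
            if v and v["intensity"] > 0.9 and v["distance"] < 15.0:
                return False
        return True

    history = ScanHistory()
    for point in range(5):                     # one scan line
        v1 = history.latest_in_time(point)     # S101
        v2 = history.nearest_in_angle(point)   # S102
        if allow_emission(v1, v2):             # S103 + S104
            vector = {"distance": 10.0, "intensity": 0.95}  # stand-in: strong, near return
            history.store(point, vector)       # S105: keep the covered pixels' vectors
            print(f"point {point}: fired, {vector}")
        else:
            print(f"point {point}: skipped")   # points alternate: neighbours gate them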
In one embodiment, the computing component 30 is further configured to use the sensing vectors measured within a past second preset time range to obtain at least one sub-region of interest (attention sub-region) in the target scene. For example, the point-cloud data converted from the past 5 frames of 1000 × 1000-resolution real-world 3D image sensor data or virtual-world 3D images can be concatenated into the input tensor of a deep neural network over two-dimensional arrays (including RNN, CNN, ResNet, LSTM, GRU, sequence models, etc.); the network has been trained offline on a large amount of pre-annotated data (e.g., annotated manually, or using simple computer primitives and object information, though other automatic annotation methods are also permissible) to output the sub-regions of interest corresponding to the 1000 × 1000-resolution scene. Using this deep neural network, the sub-regions of interest are output in real time, where 1 denotes of interest and 0 denotes not of interest, and 1s at different spatial locations across multiple frames denote multiple sub-regions of interest at different times. Various values are given here by way of example, but the present application is not limited thereto; those skilled in the art may, for example, use other numbers of frames, other resolutions, and other numbers of sub-regions of interest.
Having obtained at least one sub-region of interest, the computing component 30 issues instructions to the scan control unit 101 such that, within a third preset time range, the obtained sub-region of interest is scanned with a density exceeding that of other regions by more than a first multiple threshold, and/or with a scan frequency greater or smaller than a second multiple threshold, and/or with a per-unit-time average light energy greater or smaller than a third multiple threshold. The second and third preset time ranges may be, for example, 0.001 s, 0.01 s, 0.1 s, 1 s, or 10 s. Accordingly, regions of interest can be resolved better and detected faster. For instance, a vehicle approaching rapidly head-on requires detection results to be delivered faster, while children playing by the roadside in the distance require denser scanning before their intentions/future actions can be judged. A sketch of turning an attention mask into such a scan plan follows.
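A sketch of turning a binary attention mask into a scan plan (Python with NumPy; the gain factors and base rates stand in for the first/second/third multiple thresholds, which the application leaves configurable):

    import numpy as np

    def scan_plan(attention, base_density=1.0, base_rate_hz=10.0,
                  density_gain=4.0, rate_gain=2.0):
        # Per-cell scan density and revisit rate from a binary attention mask
        # (1 = sub-region of interest, 0 = background).
        attention = np.asarray(attention, dtype=float)
        density = base_density * np.where(attention > 0, density_gain, 1.0)
        revisit_hz = base_rate_hz * np.where(attention > 0, rate_gain, 1.0)
        return density, revisit_hz

    mask = np.zeros((4, 6), dtype=int)
    mask[1:3, 2:5] = 1                  # e.g. a pedestrian flagged by the network
    density, rate = scan_plan(mask)
    print(density)                      # interest cells get 4x the scan density
    print(rate)                         # and are revisited at 20 Hz instead of 10 Hz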
FIG. 2 shows a 3D image sensor ranging system 100' according to one embodiment of the present application. As shown, in addition to at least one light-emitting unit array 10, at least one photosensitive unit array 20, and at least one computing component 30, the 3D image sensor ranging system 100' includes at least one independent light-scanning component 40 for controlling scanning over the spatial angle range corresponding to at least part of the target scene. The light-emitting unit array 10, the photosensitive unit array 20, and the computing component 30 have already been described above and are not repeated here. The light-scanning component 40 can perform the same functions as the scan control section 101 and has a similar configuration, so its detailed description is also omitted here.
According to one embodiment of the present application, an apparatus for optical ranging can be formed by: step 1) forming at least one 3D image sensor ranging system as described in any of the above embodiments, and step 2) integrating the at least one 3D image sensor ranging system in a single semiconductor chip. In other words, the apparatus for optical ranging formed according to this embodiment may include at least one 3D image sensor ranging system as described in any of the above embodiments, and a semiconductor chip in which the at least one 3D image sensor ranging system is integrated. The specific constitution of the 3D image sensor ranging system is clear to those skilled in the art from the above, so the 3D image sensor ranging system can be formed, and the steps of integrating it into a semiconductor performed, using techniques well known in the art based on the teaching of the present application.
FIG. 6 shows a method 200 for ranging using a 3D image sensor ranging system according to one embodiment of the present application. As shown, the method 200 includes: step S201, emitting light toward a target scene through light-emitting units included in at least one light-emitting unit array; step S202, receiving, through photosensitive units, at least part of the light emitted by the light-emitting units and reflected by the target scene, and generating sensing vectors from the received light; and step S203, calculating, from the generated sensing vectors, at least one of the distance between the light-emitting unit array and the target scene and the intensity of the reflected light.
In step S201 of emitting light toward the target scene through the light-emitting units included in the at least one light-emitting unit array, the maximum of the fluctuation over time of the divergence angle of the light-emitting unit is greater than the first spatial resolution threshold. In this step, within the first preset time range, the random error between the actual scanned spatial angles of a first preset angular proportion of the light-emitting unit and the preset scanned spatial angles is greater than the first spatial resolution threshold. As described above, the first spatial resolution threshold includes a horizontal first spatial resolution threshold and a vertical first spatial resolution threshold. The horizontal first spatial resolution threshold may be 0.1°, 1°, 2°, 5°, 10°, or 0.01 × the system horizontal field of view (FOV), 0.02 × the horizontal FOV, or 0.1 × the horizontal FOV. The vertical first spatial resolution threshold may be 0.1°, 1°, 2°, 5°, 10°, or 0.01 × the system vertical FOV, 0.02 × the vertical FOV, or 0.1 × the vertical FOV.
The sensing vector may include at least one of: the distance between the light-emitting unit and the target scene, the intensity of the reflected light, the phase of the reflected light, and the spectrum of the reflected light. In this case, the calculating step S203 may include: obtaining the emission time t0 of the light; obtaining the time t1 of the single photon or single light pulse in the sensing vector; determining the distance between the light-emitting unit and the target scene based on the obtained emission time t0 and the single-photon or single-pulse time t1; and determining the photosensitive-electron count in the sensing vector, or the voltage reading of the collection capacitor, as the intensity of the reflected light.
According to one embodiment of the present application, the photosensitive unit may include the circuit structure shown in FIG. 5, i.e., two capacitors C1 and C2, a variable shunt, and an avalanche photodiode APD, the photosensitive electrons being delivered to the two capacitors C1 and C2 under the control of the variable shunt. In this case, the calculating step S203 may include: obtaining the emission time t0 of the light; obtaining the voltage reading of the first capacitor C1 and the voltage reading of the second capacitor C2; determining, from the voltage readings, the arrival time t1 at which the light reaches the photosensitive unit; calculating the distance between the light-emitting unit and the target scene based on the obtained emission time t0 and arrival time t1, i.e., (t1 − t0) × c / 2; and then determining the sum of the voltage readings of the first capacitor C1 and the second capacitor C2 as the light intensity.
As an alternative, the calculating step S203 may also include: obtaining the emission time t0 of the light and the preset emitted-pulse width T0; obtaining the time t_1 of the earliest group of 2 electrons arriving at the same photosensitive unit in the photosensitive unit array within a preset first time-interval threshold T_1 (the second electron of the group arriving at/appearing in the same photosensitive unit at time t_1 + Δt1, where Δt1 < T_1), and at the same time obtaining the number n_1 of 2-electron groups arriving at that same photosensitive unit under the same interval condition; then successively obtaining the time t_m of the earliest group of m+1 electrons arriving at the same photosensitive unit within a preset m-th time-interval threshold T_m, together with the number n_m of (m+1)-electron groups satisfying the same condition, where m is greater than or equal to 2; using the corresponding group counts n_1, …, n_m, obtaining the group arrival time t_max ∈ {t_1, …, t_m} corresponding to the maximum group count n_max = max{n_1, …, n_m}; determining the distance between the light-emitting unit array and the target scene by the rule [distance = (t_max − t0) × c / 2]; and determining the maximum group count n_max as the intensity of the reflected light.
In another example, the calculating step S203 may include: obtaining the emission time t0 of the light; obtaining the time t_1 of the earliest 2 electron groups arriving simultaneously, within a preset first time-interval threshold, at different but adjacent photosensitive units of the photosensitive unit array, and at the same time obtaining the number n_1 of 2-electron groups arriving at adjacent photosensitive units under the same interval condition; then successively obtaining the time t_m of the earliest m+1 electron groups arriving within a preset m-th time-interval threshold, together with the number n_m of (m+1)-electron groups arriving at the adjacent photosensitive units under the same interval condition, where m ≥ 2, and, from the corresponding group counts n_m, obtaining the group arrival time t_max corresponding to the maximum group count n_max; determining the distance between the light-emitting unit array and the target scene by the rule [distance = (t_max − t0) × c / 2]; and determining the maximum group count n_max as the intensity of the reflected light.
In one embodiment, during emission of the light for scanning according to the predetermined rule, whether the current scan point emits probe light can be decided based on past sensing vectors, wherein within the second preset time range there are at least a second preset non-emission proportion of instances in which no probe light is emitted. For example, when it is determined that at least two light-emitting units scan the target scene successively with strong light and weak light, if the weak-light scan has already obtained the distance by measurement, it is determined that the current scan point emits no probe light. Or, when it is determined that the distance obtained by the current intensity measurement is smaller or larger than a predetermined value, it is decided that the current scan point emits no probe light. Or, when it is determined that the target region currently being scanned is an unimportant region not of interest, the current emission is skipped at the second preset non-emission proportion. Or, when it is determined that the divergence angle of some scan within the second preset time range has already probed most of the current pixels, it is decided that the current scan point emits no probe light. The second preset non-emission proportion may be 1%, 5%, 20%, 30%, or 80%, and whether the current scan point emits probe light is decided before each scan.
In one example, the calculating step S203 may also include: determining at least one past sensing vector from the measurement closest in time; determining at least one other past measurement closest in spatial angle; and deciding, from the determined sensing vector and the determined measurement, whether the current scan point emits probe light, where the sensing vectors may be obtained through the steps of the flowchart shown in FIG. 4.
In one example, it may also be determined whether the number or amplitude of photosensitive electrons in the received light is smaller than the predetermined electron-count threshold or signal-amplitude threshold, respectively, and if so the information contained in that light is discarded, where the electron-count threshold and signal-amplitude threshold decrease with time. Furthermore, in the emitting step S201, the beams emitted simultaneously by the light-emitting units at least partially overlap in spatial angle, and the wavelength ranges contained in the respective beams are at least partially different. The light-emitting unit may emit toward the target scene scanning beams including at least two different divergence angles.
Furthermore, the calculating step S203 may also include: using the sensing vectors measured within the past second preset time range to obtain at least one sub-region of interest in the target scene, and issuing instructions such that, within the third preset time range, the sub-region of interest is scanned with a density exceeding that of other regions by more than the first multiple threshold, and/or with a scan frequency greater or smaller than the second multiple threshold, and/or with a per-unit-time average light energy greater or smaller than the third multiple threshold.
In addition, at least one sub-region of interest may also be determined by embedded computation within the photosensitive unit and/or by a preset rule, where, in step S201, the photosensitive unit outputs sensing vectors for fewer than a second preset proportion of the sub-pixel count of the image sensor. The second preset proportion is, for example, 1%, 5%, 20%, 30%, or 80%.
Referring now to FIG. 7, a schematic structural diagram of a computer system 700 of an electronic device suitable for implementing the 3D imaging method of the embodiments of the present application is shown. The electronic device shown in FIG. 7 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in FIG. 7, the computer system 700 includes one or more processors 701 (e.g., CPUs) that can perform various appropriate actions and processes according to a program stored in read-only memory (ROM) 702 or a program loaded from the storage portion 706 into random access memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the system 700 are also stored. The processor 701, the ROM 702, and the RAM 703 are connected to one another through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
The following components are connected to the I/O interface 705: a storage portion 706 including a hard disk and the like; and a communication portion 707 including a network interface card such as a LAN card or a modem. The communication portion 707 performs communication processing via a network such as the Internet. A drive 708 is also connected to the I/O interface 705 as needed. A removable medium 709, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 708 as needed, so that a computer program read therefrom is installed into the storage portion 706 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart. In such embodiments, the computer program may be downloaded and installed from a network via the communication portion 707 and/or installed from the removable medium 709. When the computer program is executed by the central processing unit (CPU) 701, the above-described functions defined in the method of the present application are performed. It should be noted that the computer-readable medium described in the present application may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, the computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. Program code contained on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical cable, RF, or any suitable combination of the foregoing.
Computer program code for performing the operations of the present application may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the figures. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present application may be implemented in software or in hardware. The described units may also be provided in a processor, which may, for example, be described as: a processor including an acquisition unit and a 3D image generation unit, where the names of these units do not, in certain circumstances, limit the units themselves; for example, the acquisition unit may also be described as "a unit that acquires depth information of points in the scene to be captured corresponding to at least one pixel".
As another aspect, the present application also provides a computer-readable medium, which may be included in the apparatus described in the above embodiments, or may exist independently without being assembled into the apparatus. The above computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to perform the ranging method described above.
The above description is merely a description of preferred embodiments of the present application and of the technical principles employed. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, but also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above inventive concept, for example, technical solutions formed by substituting the above features with technical features having similar functions disclosed in (but not limited to) the present application.

Claims (50)

  1. A 3D image sensor ranging system, comprising:
    at least one light-emitting unit array, each said light-emitting unit array including at least one light-emitting unit for emitting light toward a target scene;
    at least one photosensitive unit array, each said photosensitive unit array including at least one photosensitive unit for receiving at least part of the light emitted by the light-emitting unit and reflected by the target scene, and generating a sensing vector from the received light; and
    at least one computing component that calculates, from the sensing vector generated by the photosensitive unit, at least one of the distance between the light-emitting unit array and the target scene and the intensity of the reflected light.
  2. The 3D image sensor ranging system according to claim 1, wherein the divergence angle of the light emitted by the light-emitting unit fluctuates over time, the maximum of the divergence angle being greater than a first spatial resolution threshold.
  3. The 3D image sensor ranging system according to claim 2, further comprising:
    a scanning section for controlling the light-emitting unit array to perform an illumination scan over a spatial angle range corresponding to at least part of the target scene.
  4. The 3D image sensor ranging system according to claim 2, wherein at least part of the light-emitting unit array includes a light-emission scan control component for controlling the illumination scan performed by the light-emitting unit array over a spatial angle range corresponding to the target scene.
  5. The 3D image sensor ranging system according to claim 3 or 4, wherein, within a first preset time range, the random error between the actual scanned spatial angles of a first preset angular proportion of the light emitted by the light-emitting unit array and the preset scanned spatial angles is greater than the first spatial resolution threshold.
  6. The 3D image sensor ranging system according to claim 2 or 5, wherein the first spatial resolution threshold is greater than twice the spatial resolution of the 3D image sensor ranging system.
  7. The 3D image sensor ranging system according to claim 5, wherein the sensing vector includes at least one of: the distance between the light-emitting unit and the target scene, the intensity of the light, the phase of the light, and the spectrum of the light.
  8. The 3D image sensor ranging system according to claim 7, wherein the photosensitive unit may include a photoelectric sensor that generates photosensitive electrons through the photoelectric effect in response to the received reflected light, and
    wherein the computing component is configured to:
    obtain the time t0 at which the light is emitted,
    obtain the time t1 at which a single photon or a single light pulse in the sensing vector reaches the photosensitive unit,
    determine the distance between the light-emitting unit and the target scene based on the obtained t0 and t1, and
    determine the number of photosensitive electrons in the sensing vector, or the voltage reading of a collection capacitor in the photosensitive unit, as the light intensity.
  9. The 3D image sensor ranging system according to claim 7, wherein each said photosensitive unit includes a first capacitor C1 and a second capacitor C2, and the computing component is configured to:
    obtain the emission time t0 of the light;
    obtain the voltage reading of the first capacitor C1 and the voltage reading of the second capacitor C2,
    determine, from the voltage readings, the arrival time t1 at which the light reaches the photosensitive unit,
    calculate the distance between the light-emitting unit and the target scene based on the obtained t0 and t1; and
    calculate the light intensity from the voltage readings of the first capacitor C1 and the second capacitor C2.
  10. The 3D image sensor ranging system according to claim 7, wherein the computing component is configured to:
    obtain the emission time t0 of the light,
    obtain the time t_1 of the earliest group of 2 electrons arriving at the same photosensitive unit in the photosensitive unit array within a preset first time-interval threshold T_1, the second electron of the group arriving at/appearing in the same photosensitive unit at time t_1 + Δt1, and at the same time obtain the number n_1 of 2-electron groups arriving at the same photosensitive unit under the same interval condition, where Δt1 < T_1;
    then successively obtain the time t_m of the earliest group of m+1 electrons arriving at the same photosensitive unit within a preset m-th time-interval threshold T_m, together with the number n_m of (m+1)-electron groups satisfying the same condition, where m is greater than or equal to 2;
    using the corresponding group counts n_1, …, n_m, obtain the group arrival time t_max ∈ {t_1, …, t_m} corresponding to the maximum group count n_max = max{n_1, …, n_m};
    determine the distance by the rule [distance = (t_max − t0) × c / 2, c being the speed of light]; and
    determine the maximum group count n_max as the light intensity.
  11. The 3D image sensor ranging system according to claim 7, wherein the computing component is configured to:
    obtain the emission time t0 of the light;
    obtain the time t_1 of the earliest 2 electron groups arriving simultaneously, within a preset first time-interval threshold, at different but adjacent photosensitive units of the photosensitive unit array, and at the same time obtain the number n_1 of 2-electron groups arriving at the adjacent photosensitive units under the same interval condition;
    then successively obtain the time t_m of the earliest m+1 electron groups arriving within a preset m-th time-interval threshold, together with the number n_m of (m+1)-electron groups arriving at the adjacent photosensitive units under the same interval condition, where m ≥ 2, and, from the corresponding group counts n_m, obtain the group arrival time t_max corresponding to the maximum group count n_max;
    determine the distance by the rule [distance = (t_max − t0) × c / 2]; and
    determine the maximum group count n_max as the light intensity.
  12. The 3D image sensor ranging system according to claim 3 or 4, wherein the computing component is configured to: during a scan performed according to a predetermined rule, decide, based on past sensing vectors acquired before the current scan point, whether probe light should be emitted for the current scan point, wherein, within a second preset time range, there are at least a second preset non-emission proportion of instances in which no probe light is emitted.
  13. The 3D image sensor ranging system according to claim 12, wherein, when the computing component determines that at least two of the light-emitting units scan the target scene successively with strong light and weak light, if the weak-light scan has already obtained the distance by measurement, it is determined that no probe light is emitted for the current scan point.
  14. The 3D image sensor ranging system according to claim 12, wherein, when the computing component determines that the distance obtained by the current intensity measurement is smaller than a predetermined value or greater than a predetermined value, it is determined that no probe light is emitted for the current scan point.
  15. The 3D image sensor ranging system according to claim 12, wherein, when the computing component determines that the target region currently being scanned is an unimportant region not of interest, it is determined that the current light emission should be skipped at the second preset non-emission proportion.
  16. The 3D image sensor ranging system according to claim 12, wherein, when the computing component determines that the divergence angle of some scan within the second preset time range has already probed most of the current pixels, it is determined that no probe light is emitted for the current scan point.
  17. The 3D image sensor ranging system according to claim 12, wherein the second preset non-emission proportion is 1%, 5%, 20%, 30%, or 80%.
  18. The 3D image sensor ranging system according to claim 12, wherein the computing component is configured to decide, before each said scan, whether probe light should be emitted for the current scan point.
  19. The 3D image sensor ranging system according to claim 18, wherein the computing component is configured to:
    for the current scan point, determine at least one past sensing vector from the measurement closest in time;
    determine at least one other past measurement closest in spatial angle; and
    decide, from the determined sensing vector and the determined measurement, whether probe light should be emitted for the current scan point.
  20. The 3D image sensor ranging system according to claim 19, wherein the computing component is configured to obtain the sensing vectors by performing the following:
    1) acquiring a first sensing vector of the past period closest in time to the current scan point;
    2) acquiring a second sensing vector of the current period closest in distance to the current scan point;
    3) predicting, from the first sensing vector and the second sensing vector, the scan characteristics of the current scan point, the scan characteristics including at least one of the emission intensity, emission frequency, emission region, pulse distinguishability, degree of attention, and scan region of the current scan point; and
    4) determining, from the determined scan characteristics, whether the light-emitting unit should currently be allowed to emit the probe light;
    if so, obtaining the sensing vectors of the photosensitive units maximally coverable by the corresponding current scan angle and current divergence angle; otherwise, jumping back to step 1) and re-executing steps 1) through 4).
  21. The 3D image sensor ranging system according to claim 1, wherein each said photosensitive unit is configured to: determine whether the number or the amplitude of photosensitive electrons in a received light pulse is smaller than a predetermined electron-count threshold or a predetermined signal-amplitude threshold, respectively, and if so, discard the information contained in the light pulse, wherein the electron-count threshold and the signal-amplitude threshold decrease gradually over time from preset values according to a preset rule starting from the light emission.
  22. The 3D image sensor ranging system according to claim 1, wherein light beams emitted simultaneously by at least two light-emitting units in the light-emitting unit array at least partially overlap in spatial angle, and the wavelength ranges contained in the respective beams are at least partially different.
  23. The 3D image sensor ranging system according to any one of claims 1-22, wherein the light emitted by the light-emitting unit includes at least two scanning beams with different divergence angles.
  24. The 3D image sensor ranging system according to any one of claims 1-22, wherein the computing component is further configured to use the sensing vectors measured within a past second preset time range to obtain at least one sub-region of interest in the target scene, and to issue instructions such that:
    within a third preset time range, the sub-region of interest is scanned with a density exceeding that of other regions by more than a first multiple threshold, and/or with a scan frequency greater or smaller than a second multiple threshold, and/or with a per-unit-time average light energy greater or smaller than a third multiple threshold.
  25. The 3D image sensor ranging system according to claim 24, wherein at least one said sub-region of interest is determined by embedded computation within the photosensitive unit and/or by a preset rule, wherein the photosensitive unit outputs the sensing vectors for fewer than a second preset proportion of the sub-pixel count of the image sensor.
  26. A method for ranging using a 3D image sensor ranging system, comprising:
    emitting light toward a target scene through a light-emitting unit included in at least one light-emitting unit array;
    receiving, through a photosensitive unit, at least part of the light emitted by the light-emitting unit and reflected by the target scene, and generating a sensing vector from the received light; and
    calculating, from the generated sensing vector, at least one of the distance between the light-emitting unit array and the target scene and the intensity of the reflected light.
  27. The method according to claim 26, wherein, in the step of emitting light toward the target scene through the light-emitting unit, the divergence angle of the light emitted by the light-emitting unit fluctuates over time, the maximum of the divergence angle being greater than a first spatial resolution threshold.
  28. The method according to claim 27, wherein, within a first preset time range, the random error between the actual scanned spatial angles of a first preset angular proportion of the light-emitting unit and the preset scanned spatial angles is greater than the first spatial resolution threshold.
  29. The method according to claim 28, wherein the sensing vector includes at least one of: the distance between the light-emitting unit and the target scene, the intensity of the emitted light, the phase of the reflected light, and the spectrum of the emitted light.
  30. The method according to claim 29, wherein the photosensitive unit may include a photoelectric sensor that generates photosensitive electrons through the photoelectric effect in response to the received reflected light, and
    wherein the computing component is configured to:
    obtain the time t0 at which the light is emitted,
    obtain the time t1 at which a single photon or a single light pulse in the sensing vector reaches the photosensitive unit,
    determine the distance between the light-emitting unit and the target scene based on the obtained t0 and t1; and
    determine the number of photosensitive electrons in the sensing vector, or the voltage reading of a collection capacitor, as the light intensity.
  31. The method according to claim 29, wherein the photosensitive unit includes a first capacitor C1 and a second capacitor C2, and the calculating step includes:
    obtaining the emission time t0 of the light,
    obtaining the voltage reading of the first capacitor C1 and the voltage reading of the second capacitor C2,
    determining, from the voltage readings, the arrival time t1 at which the light reaches the photosensitive unit,
    calculating the distance between the light-emitting unit and the target scene based on the obtained t0 and t1; and
    determining the sum of the voltage readings of the first capacitor C1 and the second capacitor C2 as the light intensity.
  32. The method according to claim 29, wherein the calculating step includes:
    obtaining the emission time t0 of the light,
    obtaining the time t_1 of the earliest group of 2 electrons arriving at the same photosensitive unit in the photosensitive unit array within a preset first time-interval threshold T_1, the second electron of the group arriving at/appearing in the same photosensitive unit at time t_1 + Δt1, and at the same time obtaining the number n_1 of 2-electron groups arriving at the same photosensitive unit under the same interval condition, where Δt1 < T_1;
    then successively obtaining the time t_m of the earliest group of m+1 electrons arriving at the same photosensitive unit within a preset m-th time-interval threshold T_m, together with the number n_m of (m+1)-electron groups satisfying the same condition, where m is greater than or equal to 2;
    using the corresponding group counts n_1, …, n_m, obtaining the group arrival time t_max ∈ {t_1, …, t_m} corresponding to the maximum group count n_max = max{n_1, …, n_m};
    determining the distance by the rule [distance = (t_max − t0) × c / 2]; and
    determining the maximum group count n_max as the light intensity.
  33. The method according to claim 29, wherein the calculating step includes:
    obtaining the emission time t0 of the light;
    obtaining the time t_1 of the earliest 2 electron groups arriving simultaneously, within a preset first time-interval threshold, at different but adjacent photosensitive units of the photosensitive unit array, and at the same time obtaining the number n_1 of 2-electron groups arriving at the adjacent photosensitive units under the same interval condition;
    then successively obtaining the time t_m of the earliest m+1 electron groups arriving within a preset m-th time-interval threshold, together with the number n_m of (m+1)-electron groups arriving at the adjacent photosensitive units under the same interval condition, where m ≥ 2, and, from the corresponding group counts n_m, obtaining the group arrival time t_max corresponding to the maximum group count n_max;
    determining the distance by the rule [distance = (t_max − t0) × c / 2]; and
    determining the maximum group count n_max as the light intensity.
  34. The method according to any one of claims 26-33, further comprising:
    during emission of the light for scanning according to a predetermined rule, deciding, based on past sensing vectors, whether probe light is emitted for the current scan point, wherein, within a second preset time range, there are at least a second preset non-emission proportion of instances in which no probe light is emitted.
  35. The method according to claim 34, wherein, when it is determined that at least two of the light-emitting units scan the target scene successively with strong light and weak light, if the weak-light scan has already obtained the distance by measurement, it is determined that no probe light is emitted for the current scan point.
  36. The method according to claim 34, wherein, when it is determined that the distance obtained by the current intensity measurement is smaller than a predetermined value or greater than a predetermined value, it is decided that no probe light is emitted for the current scan point.
  37. The method according to claim 34, wherein, when it is determined that the target region currently being scanned is an unimportant region not of interest, the current emission is skipped at the second preset non-emission proportion.
  38. The method according to claim 34, wherein, when it is determined that the divergence angle of some scan within the second preset time range has already probed most of the current pixels, it is decided that no probe light is emitted for the current scan point.
  39. The method according to claim 34, wherein whether probe light is emitted for the current scan point is decided before each said scan.
  40. The method according to claim 39, wherein the calculating step includes:
    determining at least one past sensing vector from the measurement closest in time;
    determining at least one other past measurement closest in spatial angle; and
    deciding, from the determined sensing vector and the determined measurement, whether probe light is emitted for the current scan point.
  41. The method according to claim 40, wherein the sensing vectors are obtained through the following steps:
    1) acquiring a first sensing vector of the past period closest in time to the current scan point;
    2) acquiring a second sensing vector of the current period closest in distance to the current scan point;
    3) predicting, from the first sensing vector and the second sensing vector, the scan characteristics of the current scan point, the scan characteristics including at least one of the emission intensity, emission frequency, emission region, pulse distinguishability, degree of attention, and scan region of the current scan point; and
    4) determining, from the determined scan characteristics, whether the light-emitting unit should currently be allowed to emit the probe light;
    if so, obtaining the sensing vectors of the photosensitive units maximally coverable by the corresponding current scan angle and current divergence angle; otherwise, jumping back to step 1) and re-executing steps 1) through 4).
  42. The method according to claim 26, further comprising:
    determining whether the number or the amplitude of photosensitive electrons in a received light pulse is smaller than a predetermined electron-count threshold or a predetermined signal-amplitude threshold, respectively, and if so, discarding the information contained in the light, wherein the electron-count threshold and the signal-amplitude threshold decrease gradually over time from preset values according to a preset rule starting from the light emission.
  43. The method according to claim 26, wherein light beams emitted simultaneously by the light-emitting units at least partially overlap in spatial angle, and the wavelength ranges contained in the respective beams are at least partially different.
  44. The method according to claim 26, wherein the emitting step includes:
    emitting, by the light-emitting unit toward the target scene, scanning beams including at least two different divergence angles.
  45. The method according to claim 26, wherein the calculating step further includes: using the sensing vectors measured within a past second preset time range to obtain at least one sub-region of interest in the target scene; and issuing instructions such that:
    within a third preset time range, the sub-region of interest is scanned with a density exceeding that of other regions by more than a first multiple threshold, and/or with a scan frequency greater or smaller than a second multiple threshold, and/or with a per-unit-time average light energy greater or smaller than a third multiple threshold.
  46. The method according to claim 45, wherein at least one said sub-region of interest is determined by embedded computation within the photosensitive unit and/or by a preset rule, wherein the photosensitive unit outputs the sensing vectors for fewer than a second preset proportion of the sub-pixel count of the image sensor.
  47. An electronic device, comprising:
    one or more processors; and
    a storage device for storing one or more programs,
    which, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 26-46.
  48. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method according to any one of claims 26-46.
  49. An apparatus for optical ranging, comprising:
    at least one 3D image sensor ranging system according to any one of claims 1-25; and
    a semiconductor chip in which the at least one 3D image sensor ranging system is integrated.
  50. A method of forming an apparatus for optical ranging, comprising: forming at least one 3D image sensor ranging system according to any one of claims 1-25; and
    integrating the at least one 3D image sensor ranging system in a single semiconductor chip.
PCT/CN2021/115878 2020-10-23 2021-09-01 3D image sensor ranging system and method for ranging using the same WO2022083301A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/304,845 US20230273321A1 (en) 2020-10-23 2023-04-21 3D Image Sensor Ranging System, and Ranging Method Using Same

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011149482.9 2020-10-23
CN202011149482.9A 2020-10-23 2020-10-23 3D image sensor ranging system and method for ranging using the same

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/304,845 Continuation US20230273321A1 (en) 2020-10-23 2023-04-21 3D Image Sensor Ranging System, and Ranging Method Using Same

Publications (1)

Publication Number Publication Date
WO2022083301A1 true WO2022083301A1 (zh) 2022-04-28

Family

ID=81291603

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/115878 WO2022083301A1 (zh) 3D image sensor ranging system and method for ranging using the same

Country Status (3)

Country Link
US (1) US20230273321A1 (zh)
CN (1) CN114488176A (zh)
WO (1) WO2022083301A1 (zh)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2306825B (en) * A laser ranger based on time correlated single photon counting
CN109997057A (zh) * Lidar systems and methods
CN110178045A (zh) * Detector for optically detecting at least one object
CN111273256A (zh) * Brightness-enhanced optical imaging transmitter
CN108897003A (zh) * Dual-mode-controlled phased-array lidar system and method
US10345447B1 (en) * Dynamic vision sensor to direct lidar scanning
CN109375191A (zh) * Method and apparatus for acquiring super-spatial-resolution information with a common-illumination-source 3D lidar and a 2D detector
WO2020148567A2 (en) * Lidar systems and methods
CN111307303A (zh) * Single-photon three-dimensional imaging system and imaging method thereof
CN111580122A (zh) * Spatial measurement apparatus, method, device, and computer-readable storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115484525A (zh) * Intelligent analysis system for PU earplug usage scenarios
CN115484525B (zh) * Intelligent analysis system for PU earplug usage scenarios

Also Published As

Publication number Publication date
CN114488176A (zh) 2022-05-13
US20230273321A1 (en) 2023-08-31

Similar Documents

Publication Publication Date Title
JP6899005B2 (ja) Light detection and ranging sensor
US20210181317A1 (en) Time-of-flight-based distance measurement system and method
US10921454B2 (en) System and method for determining a distance to an object
CN110596721B (zh) Time-of-flight distance measurement system and measurement method with doubly shared TDC circuits
KR102494430B1 (ko) System and method for determining distance to an object
US20170176579A1 (en) Light detection and ranging sensor
CN109791205A (zh) Method for subtracting background light from exposure values of pixel units in an imaging array, and pixel unit for use in the method
US10852400B2 (en) System for determining a distance to an object
CN114616489A (zh) Lidar image processing
WO2022017366A1 (zh) Depth imaging method and depth imaging system
KR101145132B1 (ko) Three-dimensional imaging pulsed laser radar system and automatic focusing method for the system
CN110285788B (zh) ToF camera and design method for diffractive optical element
CN111025321B (zh) Variable-focus depth measurement apparatus and measurement method
CN110780312B (zh) Adjustable distance measurement system and method
WO2021026709A1 (zh) Lidar system
US20230273321A1 (en) 3D Image Sensor Ranging System, and Ranging Method Using Same
CN111025319B (zh) Depth measurement apparatus and measurement method
WO2022241942A1 (zh) Depth camera and depth computation method
US20200300978A1 (en) Dynamic range improvements in lidar applications
US20220011437A1 (en) Distance measuring device, distance measuring system, distance measuring method, and non-transitory storage medium
US20230078063A1 (en) Distance measurement device and distance measurement system
WO2022168500A1 (ja) Distance measuring device, control method therefor, and distance measuring system
WO2022181097A1 (ja) Distance measuring device, control method therefor, and distance measuring system
CN116699621A (zh) Ranging method, photodetection module, chip, electronic device, and medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21881720

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21881720

Country of ref document: EP

Kind code of ref document: A1