WO2023056848A9 - Control detection method, control device, laser radar, and terminal device - Google Patents

Control detection method, control device, laser radar, and terminal device

Info

Publication number
WO2023056848A9
WO2023056848A9 (PCT/CN2022/121114)
Authority
WO
WIPO (PCT)
Prior art keywords
light source
pixel
light
signal
pixels
Prior art date
Application number
PCT/CN2022/121114
Other languages
English (en)
French (fr)
Other versions
WO2023056848A1 (zh)
Inventor
王超 (Wang Chao)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2023056848A1
Publication of WO2023056848A9

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88: Lidar systems specially adapted for specific applications
    • G01S17/93: Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S17/931: Lidar systems specially adapted for anti-collision purposes of land vehicles
    • G01S17/933: Lidar systems specially adapted for anti-collision purposes of aircraft or spacecraft
    • G01S7/41: Details of systems according to group G01S13/00 using analysis of echo signal for target characterisation; target signature; target cross-section
    • G01S7/481: Constructional features, e.g. arrangements of optical elements
    • H04N13/239: Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance

Definitions

  • the present application relates to the field of detection technology, and in particular to a control detection method, a control device, a laser radar and a terminal device.
  • Detection systems play an increasingly important role on smart terminals: they can perceive the surrounding environment, identify and track moving targets based on the perceived environmental information, recognize static scenes such as lane lines and traffic signs, and, combined with navigation and map data, perform path planning.
  • Corner-reflector (retroreflective) targets include, for example, traffic signs, warning signs, road markers, safety posts and guardrails on the roadside, convex mirrors at corners, vehicle license plates, and high-reflectivity coating stickers on vehicle bodies.
  • These high-reflectivity or corner-reflector targets generate strong scattered light, which may cause optical crosstalk and thereby reduce the detection accuracy of the detection system for targets in the detection area.
  • The present application provides a control detection method, a control device, a laser radar, and a terminal device, which are used to reduce optical crosstalk in a detection system as much as possible.
  • The present application provides a control detection method. The method includes: controlling the light sources in a first light source area to emit first signal light at a first power; controlling the light sources in a second light source area to emit second signal light at a second power; and controlling the pixels in a first pixel area to receive a first echo signal. The first pixel area corresponds to the spatial position of a first target, the first light source area corresponds to the first pixel area, and the second light source area corresponds to a second pixel area. The first echo signal includes reflected light obtained after the first signal light is reflected by the first target, and the second power is greater than the first power.
  • The method further includes controlling the pixels in the second pixel area to receive a second echo signal obtained after the second signal light is reflected by a second target.
  • The method is applied to a detection system that includes a light source array and a pixel array. The light source array includes m × n light sources, the pixel array includes m × n pixels, the light sources of the light source array correspond to the pixels of the pixel array, and both m and n are integers greater than 1.
  • the scanning of the detection area can be realized without the need of a scanning structure.
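  • The per-region power control described above can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation: the `emit()` helper, the region layout, and the power values are all assumptions made for the example.

```python
# Illustrative sketch of per-region power control over an m x n
# addressable light source array. emit() is a hypothetical stand-in
# driver that records the peak power applied to each light source.

def emit(rows, cols, power_w):
    """Stand-in driver: record the peak power applied to each light source."""
    return {(r, c): power_w for r in rows for c in cols}

m, n = 8, 8
# First light source area: faces the highly reflective first target.
first_area = (range(2, 4), range(2, 4))
# Second light source area: faces ordinary targets elsewhere in the field.
second_area = (range(0, m), range(4, n))

first_power, second_power = 0.5, 2.0   # second power > first power
frame = {}
frame.update(emit(*first_area, first_power))
frame.update(emit(*second_area, second_power))
```

Driving the light sources facing the strong reflector at the lower first power, while the rest of the array keeps the higher second power, is what limits the energy of the echo that could otherwise leak into neighbouring pixels.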
  • The method may further include controlling the light sources of the light source array to emit third signal light at a third power and controlling the pixels of the pixel array to receive a third echo signal, that is, gating the pixels in the pixel array.
  • The third echo signal includes reflected light of the third signal light reflected by the first target and/or the second target; that is, it may be reflected by the first target only, by the second target only, or by both. The intensity of the third echo signal corresponding to a pixel in the first pixel area is greater than or equal to a first preset value, and/or the intensity of the third echo signal corresponding to a pixel in the second pixel area is less than the first preset value.
  • the light source array includes the first light source area and the second light source area
  • the pixel array includes the first pixel area and the second pixel area.
  • Both the first light source area and the second light source area belong to the light source array, and both the first pixel area and the second pixel area belong to the pixel array.
  • By controlling the light sources in the light source array to emit the third signal light at the same third power, and based on the relationship between the intensity of the third echo signal and the first preset value, it is possible to identify which pixels belong to the first pixel area and/or which pixels belong to the second pixel area.
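  • The identification step above amounts to thresholding a per-pixel intensity map. A minimal sketch, assuming a uniform probe at the third power has already been fired; the intensity values and the preset threshold below are made-up illustrative numbers, not values from the application:

```python
# Split the pixel array into the first pixel area (strong reflector)
# and the second pixel area by comparing each pixel's third echo
# intensity against a first preset value.

intensities = {            # pixel (row, col) -> third echo intensity
    (0, 0): 12.0, (0, 1): 95.0,
    (1, 0): 10.5, (1, 1): 88.0,
}
FIRST_PRESET = 50.0

first_pixel_area = {p for p, i in intensities.items() if i >= FIRST_PRESET}
second_pixel_area = {p for p, i in intensities.items() if i < FIRST_PRESET}
```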
  • the method further includes controlling the light sources of the light source array to emit a third signal light at a third power, and controlling the pixels of the pixel array to receive the third echo signal.
  • The third echo signal includes reflected light of the third signal light reflected by the first target and/or the second target. The difference between the intensity of the third echo signal corresponding to a pixel in the first pixel area and the intensity of the third echo signal corresponding to a pixel in the second pixel area is greater than or equal to a second preset value, and the first distance corresponding to the pixel in the first pixel area is the same as the first distance corresponding to the pixel in the second pixel area.
  • the light source array includes the first light source area and the second light source area
  • the pixel array includes the first pixel area and the second pixel area.
  • Both the first light source area and the second light source area belong to the light source array, and both the first pixel area and the second pixel area belong to the pixel array.
  • For pixels with the same first distance, the intensities are compared pairwise; when a difference is greater than or equal to the second preset value, the pixel with the larger intensity belongs to the first pixel area.
  • By controlling the light sources in the light source array to emit the third signal light at the same third power, and based on the intensity of the third echo signal and the first distance determined from the third echo signal, it is possible to identify which pixels belong to the first pixel area and/or which pixels belong to the second pixel area.
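  • The pairwise comparison rule above can be sketched as follows. This is an illustrative sketch only; the distances, intensities, and the second preset value are assumptions made for the example:

```python
# Among pixels reporting the same first distance, compare intensities
# pairwise; when the gap is at least the second preset value, assign
# the brighter pixel to the first pixel area.

from itertools import combinations

pixels = {  # (row, col) -> (first distance in m, third echo intensity)
    (0, 0): (20.0, 11.0),
    (0, 1): (20.0, 90.0),   # strong reflector at the same distance
    (1, 0): (35.0, 9.0),
}
SECOND_PRESET = 40.0

first_pixel_area = set()
for (pa, (da, ia)), (pb, (db, ib)) in combinations(pixels.items(), 2):
    if da == db and abs(ia - ib) >= SECOND_PRESET:
        first_pixel_area.add(pa if ia > ib else pb)
```

Comparing only pixels at the same first distance is what makes the rule robust: two ordinary targets at different ranges can legitimately differ in intensity, but at equal range a large gap points to a high-reflectivity target.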
  • The method may further include: controlling the light sources of the light source array to emit third signal light at a third power, and determining a third pixel area based on the received third echo signal; controlling the light sources in a third light source area to emit fourth signal light at a fourth power, and controlling the light sources in a fourth light source area to emit fifth signal light at a fifth power, where the fifth power is greater than the fourth power; and controlling the pixel array to receive the fourth echo signal and the fifth echo signal, and determining the first pixel area and the second pixel area according to the fourth echo signal and the fifth echo signal.
  • the fourth echo signal includes the reflected light of the fourth signal light reflected by the first target
  • the fifth echo signal includes the reflected light of the fifth signal light reflected by the second target
  • the third echo signal includes reflected light of the third signal light
  • the third light source area corresponds to the third pixel area
  • the intensity of the third echo signal corresponding to the third pixel area is greater than or equal to the fourth preset value
  • The third pixel area includes the first pixel area and the pixels crosstalked by the third echo signal reflected by the first target.
  • the light source array includes the first light source area and the second light source area
  • the pixel array includes the first pixel area and the second pixel area. In other words, both the first light source area and the second light source area belong to the light source array, and both the first pixel area and the second pixel area belong to the pixel array.
  • The third pixel area can be determined based on the third echo signal; the third pixel area may include pixels affected by crosstalk from light reflected by the first target. The first pixel area corresponding to the spatial position of the first target can then be accurately determined from the third pixel area, which helps to obtain complete and accurate information of the detection area over the whole field of view (such as information associated with the first target and the second target).
  • Take as an example the case where the light source array gates light sources column by column and the pixel array also gates pixels column by column.
  • The third pixel area includes rows (a_i ~ a_j) and columns (b_i ~ b_j) of the pixel array, where a_i and b_i are integers greater than 1, a_j is an integer greater than a_i, and b_j is an integer greater than b_i.
  • The method may further include controlling the light sources in column b_{i-1} of the light source array to emit the fifth signal light at the fifth power, and gating the pixels in columns (b_{i-1} ~ b_i) of the pixel array; the emission field of view of the light sources in column b_{i-1} corresponds to the reception field of view of the pixels in column b_{i-1}.
  • The pixels in column b_{i-1} are pixels in the first edge area of the third pixel area. Here, the light sources in column b_{i-1} emit the fifth signal light at the fifth power, while the pixels in columns b_{i-1} and b_i are gated to receive the fifth echo signal together, so that subsequent pixel gating is misaligned (dislocation gating). This reduces the crosstalk of the echo signal reflected by the first target onto the echo signals of other targets (such as the second target) in the detection area.
  • The method may further include gating the pixels in rows (a_i ~ a_j) of columns (b_{i+1} ~ b_j) of the pixel array, and controlling the light sources in rows (a_i ~ a_j) of columns (b_i ~ b_{j-1}) of the light source array to emit the fourth signal light at the fourth power.
  • In this way, the crosstalk of the echo signal reflected by the first target onto the echo signals reflected by other targets (such as the second target) in the detection area can be reduced, the crosstalk phenomenon can be improved, and effective detection over the full field of view of the detection system can be realized.
  • The method may further include gating the pixels in the pixel array other than the pixels in rows (a_i ~ a_j) of columns (b_{i+1} ~ b_j), and controlling the light sources in the light source array other than the light sources in rows (a_i ~ a_j) of columns (b_i ~ b_{j-1}) to emit the fifth signal light at the fifth power.
  • The method may further include stopping the gating of the pixels in column b_{j+1} of the pixel array, and controlling the light sources in column b_j of the light source array to emit the sixth signal light at the sixth power.
  • By controlling the light sources in column b_j to emit the sixth signal light at the sixth power while no pixels are gated at that moment, when the pixels after the third pixel area (such as the pixels in column b_{j+1}) are subsequently gated, the gated pixel column and the light source column are no longer misaligned; that is, the gated pixel column can be realigned with its corresponding light source column.
  • The first pixel area includes rows (A_i ~ A_j) and columns (B_i ~ B_j) of the pixel array, where A_i and B_i are integers greater than 1, A_j is an integer greater than A_i, and B_j is an integer greater than B_i.
  • The method may further include controlling the light sources in column B_{i-1} of the light source array to emit the second signal light at the second power, and gating the pixels in columns (B_{i-1} ~ B_i) of the pixel array; the emission field of view of the light sources in column B_{i-1} corresponds to the reception field of view of the pixels in column B_{i-1}.
  • The pixels in column B_{i-1} are pixels in the first edge area of the first pixel area. Here, the light sources in column B_{i-1} emit the second signal light at the second power, while the pixels in columns B_{i-1} and B_i are gated to receive the second echo signal together, so that subsequent pixel gating is misaligned. This reduces the crosstalk of the echo signal reflected by the first target onto the echo signals of other targets (such as the second target) in the detection area.
  • The method may further include gating the pixels in rows (A_i ~ A_j) of columns (B_{i+1} ~ B_j) of the pixel array, and controlling the light sources in rows (A_i ~ A_j) of columns (B_i ~ B_{j-1}) of the light source array to emit the first signal light at the first power.
  • In this way, the crosstalk of the echo signal reflected by the first target onto the echo signals reflected by other targets (such as the second target) in the detection area can be reduced, the crosstalk phenomenon can be improved, and effective detection over the full field of view of the detection system can be realized.
  • The method may further include gating the pixels in the pixel array other than the pixels in rows (A_i ~ A_j) of columns (B_{i+1} ~ B_j), and controlling the light sources in the light source array other than the light sources in rows (A_i ~ A_j) of columns (B_i ~ B_{j-1}) to emit the second signal light at the second power.
  • The method may further include stopping the gating of the pixels in column B_{j+1} of the pixel array, and controlling the light sources in column B_j of the light source array to emit the sixth signal light at the sixth power.
  • By controlling the light sources in column B_j to emit the sixth signal light at the sixth power while no pixels are gated at that moment, when the pixels after the first pixel area (such as the pixels in column B_{j+1}) are subsequently gated, the gated pixel column and the light source column are no longer misaligned; that is, the gated pixel column can be realigned with its corresponding light source column.
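  • The column-by-column misaligned (dislocation) gating sequence described above can be sketched as a schedule. This is an illustrative sketch under assumed indices, not the patent's implementation: from column b_i - 1 through b_j - 1 the gated pixel column runs one ahead of the emitting light source column, and a dummy emission on column b_j with no pixels gated restores alignment.

```python
# Build a per-step gating schedule for one scan across the columns.
def schedule(n_cols, bi, bj):
    """Return a list of (light source column, gated pixel columns)."""
    steps = []
    for c in range(n_cols):
        if c == bi - 1:
            steps.append((c, (c, c + 1)))   # edge entry: gate two columns
        elif bi <= c <= bj - 1:
            steps.append((c, (c + 1,)))     # misaligned: pixel col = c + 1
        elif c == bj:
            steps.append((c, ()))           # dummy emission, nothing gated
        else:
            steps.append((c, (c,)))         # aligned gating elsewhere
    return steps

steps = schedule(8, bi=3, bj=5)
# e.g. source column 2 gates pixel columns (2, 3); column 5 gates none
```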
  • Take as an example the case where the light source array gates light sources row by row and the pixel array also gates pixels row by row.
  • The third pixel area includes rows (a_i ~ a_j) and columns (b_i ~ b_j) of the pixel array, where a_i and b_i are integers greater than 1, a_j is an integer greater than a_i, and b_j is an integer greater than b_i.
  • The method may further include controlling the light sources in row a_{i-1} of the light source array to emit the fifth signal light at the fifth power, and gating the pixels in rows (a_{i-1} ~ a_i) of the pixel array; the emission field of view of the light sources in row a_{i-1} corresponds to the reception field of view of the pixels in row a_{i-1}.
  • The pixels in row a_{i-1} are pixels in the first edge area of the third pixel area. Here, the light sources in row a_{i-1} emit the fifth signal light at the fifth power, while the pixels in rows a_{i-1} and a_i are gated to receive the fifth echo signal together, so that subsequent pixel gating is misaligned. This reduces the crosstalk of the echo signal reflected by the first target onto the echo signals of other targets (such as the second target) in the detection area.
  • The method may further include gating the pixels in columns (b_i ~ b_j) of rows (a_{i+1} ~ a_j) of the pixel array, and controlling the light sources in columns (b_i ~ b_j) of rows (a_i ~ a_{j-1}) of the light source array to emit the fourth signal light at the fourth power.
  • In this way, the crosstalk of the echo signal reflected by the first target onto the echo signals reflected by other targets (such as the second target) in the detection area can be reduced, the crosstalk phenomenon can be improved, and effective detection over the full field of view of the detection system can be realized.
  • The method may further include gating the pixels in the pixel array other than the pixels in columns (b_i ~ b_j) of rows (a_{i+1} ~ a_j), and controlling the light sources in the light source array other than the light sources in columns (b_i ~ b_j) of rows (a_i ~ a_{j-1}) to emit the fifth signal light at the fifth power.
  • The method may further include stopping the gating of the pixels in row a_{j+1} of the pixel array, and controlling the light sources in row a_j of the light source array to emit the sixth signal light at the sixth power. In this way, when the pixels after the third pixel area are subsequently gated, the gated pixel row and the light source row are no longer misaligned; that is, the gated pixel row can be realigned with its corresponding light source row.
  • The first pixel area includes rows (A_i ~ A_j) and columns (B_i ~ B_j) of the pixel array, where A_i and B_i are integers greater than 1, A_j is an integer greater than A_i, and B_j is an integer greater than B_i.
  • The method may further include controlling the light sources in row A_{i-1} of the light source array to emit the second signal light at the second power, and gating the pixels in rows (A_{i-1} ~ A_i) of the pixel array; the emission field of view of the light sources in row A_{i-1} corresponds to the reception field of view of the pixels in row A_{i-1}.
  • The pixels in row A_{i-1} are pixels in the first edge area of the first pixel area. Here, the light sources in row A_{i-1} emit the second signal light at the second power, while the pixels in rows A_{i-1} and A_i are gated to receive the second echo signal together, so that subsequent pixel gating is misaligned. This reduces the crosstalk of the echo signal reflected by the first target onto the echo signals of other targets (such as the second target) in the detection area.
  • The method may also include gating the pixels in columns (B_i ~ B_j) of rows (A_{i+1} ~ A_j) of the pixel array, and controlling the light sources in columns (B_i ~ B_j) of rows (A_i ~ A_{j-1}) of the light source array to emit the first signal light at the first power.
  • In this way, the crosstalk of the echo signal reflected by the first target onto the echo signals reflected by other targets (such as the second target) in the detection area can be reduced, the crosstalk phenomenon can be improved, and effective detection over the full field of view of the detection system can be realized.
  • The method may further include gating the pixels in the pixel array other than the pixels in columns (B_i ~ B_j) of rows (A_{i+1} ~ A_j), and controlling the light sources in the light source array other than the light sources in columns (B_i ~ B_j) of rows (A_i ~ A_{j-1}) to emit the second signal light at the second power.
  • The method may further include stopping the gating of the pixels in row A_{j+1} of the pixel array, and controlling the light sources in row A_j of the light source array to emit the sixth signal light at the sixth power.
  • By controlling the light sources in row A_j to emit the sixth signal light at the sixth power while no pixels are gated at that moment, when the pixels after the first pixel area (such as the pixels in row A_{j+1}) are subsequently gated, the gated pixel row and the light source row are no longer misaligned; that is, the gated pixel row can be realigned with its corresponding light source row.
  • The present application provides a control device configured to implement the method in the above first aspect or any implementation of the first aspect, including corresponding functional modules respectively used to implement the steps in the above method.
  • the functions may be implemented by hardware, or may be implemented by executing corresponding software through hardware.
  • Hardware or software includes one or more modules corresponding to the above-mentioned functions.
  • control device may be an independent control device, or a module used in the control device, such as a chip or a chip system or a circuit.
  • the control device may include: an interface circuit and at least one processor.
  • The processor may be configured to support the control device in executing the method in the above first aspect or any implementation of the first aspect, and the interface circuit is used to support communication between the control device and other devices.
  • the interface circuit may be an independent receiver, an independent transmitter, an input and output port integrating transceiver functions, and the like.
  • the control device may further include a memory, which may be coupled with the processor, and store necessary program instructions and data of the control device.
  • the present application provides a control device, which is used to implement the first aspect or any one of the methods in the first aspect, and includes corresponding functional modules, respectively used to implement the steps in the above method.
  • the functions may be implemented by hardware, or may be implemented by executing corresponding software through hardware.
  • Hardware or software includes one or more modules corresponding to the above-mentioned functions.
  • control device may include a processing module and a transceiver module, and these modules may execute the above-mentioned first aspect or any one of the methods in the first aspect.
  • The present application provides a chip including at least one processor and an interface circuit. Optionally, the chip may further include a memory; the processor executes computer programs or instructions stored in the memory, so that the chip performs the method in the above first aspect or any possible implementation of the first aspect.
  • the present application provides a terminal device, where the terminal device includes a control device configured to execute the method in the foregoing first aspect or any possible implementation manner of the first aspect.
  • The present application provides a laser radar, which includes a transmitting module, a receiving module, and a control device for performing the method in the above first aspect or any possible implementation of the first aspect. The transmitting module is used to transmit the first signal light at the first power and the second signal light at the second power; the receiving module is used to receive the first echo signal from the detection area, where the first echo signal includes reflected light of the first signal light reflected by the first target.
  • the present application provides a terminal device, where the terminal device includes a lidar for performing the sixth aspect or any possible implementation manner of the sixth aspect.
  • The present application provides a computer-readable storage medium in which computer programs or instructions are stored; when the computer programs or instructions are executed by the control device, the control device performs the method in the above first aspect or any possible implementation of the first aspect.
  • The present application provides a computer program product, which includes a computer program or instructions; when the computer program or instructions are executed by the control device, the control device performs the method in the above first aspect or any possible implementation of the first aspect.
  • Figure 1a is a schematic diagram of the reflection principle of a Lambertian body provided by the present application.
  • Figure 1b is a schematic diagram of peak power within a single pulse time provided by the present application.
  • Figure 1c is a schematic diagram of a FSI principle provided by the present application.
  • Figure 1d is a schematic diagram of a BSI principle provided by the present application.
  • Fig. 2a is a schematic diagram of the ranging principle of a d-TOF technology provided by the present application.
  • Figure 2b is a schematic structural diagram of a detection module based on d-TOF technology provided by the present application.
  • FIG. 3 is a schematic structural diagram of a detection system provided by the present application.
  • Fig. 4a is a schematic diagram of a gating method of light sources in a light source array provided by the present application.
  • Fig. 4b is a schematic diagram of a gating method of light sources in another light source array provided by the present application.
  • Fig. 4c is a schematic diagram of a gating method of light sources in another light source array provided by the present application.
  • Fig. 4d is a schematic diagram of a gating method of light sources in another light source array provided by the present application.
  • Fig. 4e is a schematic diagram of a gating method of light sources in another light source array provided by the present application.
  • Fig. 5a is a schematic diagram of energy distribution of a signal light spot in angular space provided by the present application.
  • Fig. 5b is a schematic diagram of energy distribution of another signal light spot in angular space provided by the present application.
  • FIG. 6 is a schematic structural diagram of a pixel provided by the present application.
  • Fig. 7a is a schematic diagram of a gating method of pixels in a pixel array provided by the present application.
  • Fig. 7b is a schematic diagram of another gating method of pixels in a pixel array provided by the present application.
  • Fig. 7c is a schematic diagram of another gating method of pixels in a pixel array provided by the present application.
  • Fig. 7d is a schematic diagram of another gating method of pixels in a pixel array provided by the present application.
  • Fig. 7e is a schematic diagram of another gating method of pixels in a pixel array provided by the present application.
  • FIG. 8 is a schematic structural view of an optical lens provided by the present application.
  • FIG. 9 is a schematic structural view of another optical lens provided by the present application.
  • Figure 10a is a possible application scenario provided by this application.
  • Figure 10b is another possible application scenario provided by this application.
  • FIG. 11 is a schematic flow chart of a control detection method provided by the present application.
  • FIG. 12 is a schematic flowchart of a method for determining a first pixel area provided by the present application.
  • FIG. 13 is a schematic flowchart of another method for determining the first pixel area provided by the present application.
  • FIG. 14 is a schematic flowchart of another method for determining the first pixel area provided by the present application.
  • FIG. 15 is a schematic flowchart of a method for determining a first pixel area based on a third pixel area provided by the present application.
  • FIG. 16 is a schematic flowchart of another method for determining the first pixel area based on the third pixel area provided by the present application.
  • FIG. 17 is a schematic flow chart of a method for acquiring associated information in a detection area provided by the present application.
  • Fig. 18 is a schematic structural diagram of a control device provided by the present application.
  • Fig. 19 is a schematic structural diagram of a control device provided by the present application.
  • FIG. 20 is a schematic diagram of the architecture of a laser radar provided in the present application.
  • A Lambertian body is an object that reflects incident light uniformly in all directions. Referring to Fig. 1a, light incident on a Lambertian body is reflected isotropically into the whole surrounding space, centered on the point of incidence. It can also be understood that a Lambertian body reflects the received signal light uniformly in all directions, that is, the echo signal is distributed uniformly over all directions.
  • Optical crosstalk means that stray light interferes with useful signals (such as echo signals), and the light that interferes with normal signals can be collectively referred to as stray light.
  • Optical crosstalk is a relatively common phenomenon in the detection field.
  • For example, optical crosstalk arises when a target with high reflectivity or a retroreflective (corner-reflector-like) target (collectively referred to as the first target) reflects the received signal light, producing an echo signal with relatively high energy.
  • The high-energy echo signal may enter both pixel area A and pixel area B. For pixel area B, this part of the echo signal is stray light, and it causes optical crosstalk to the echo signal that pixel area B should receive.
  • Peak power: when the signal light emitted by the light source is a pulsed wave, the maximum output power within a single pulse duration is called the peak power, as shown in Fig. 1b.
  • Spot usually refers to the spatial energy distribution formed by the beam on the cross section.
  • For example, the light spot formed on the cross-section of a target in the detection area by the signal light directed at the detection area; as another example, the light spot formed on the photosensitive surface by the echo signal directed at the detector.
  • The spatial energy distribution of the light spot can be low at both ends and high in the middle; for example, it can be a normal distribution or approximately a normal distribution.
  • The shape of the light spot can be a rectangle, an ellipse, a circle, or another possible regular or irregular shape. It should be noted that, as those skilled in the art know, the energy of a light spot is distributed with varying intensity: the energy density in the core area is relatively high and the shape of the spot there is relatively distinct, while the edge gradually extends outward with relatively low energy density and an unclear shape, and as the energy intensity gradually weakens, the spot near the edge becomes hard to recognize. Therefore, a light spot with a certain shape mentioned in this application can be understood as a light spot whose easily identifiable boundary is formed by the strong-energy, high-energy-density part, not as the whole of the light spot in the strict technical sense.
  • The boundary of the light spot is usually defined as the contour where the energy density falls to 1/e² of its maximum.
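As an illustrative aside (not from the patent text), the 1/e² boundary convention can be checked numerically for an idealized Gaussian spot profile; the profile function and the beam-radius value below are assumptions chosen for illustration:

```python
import math

def gaussian_intensity(r, i0, w):
    """Radial intensity of an idealized Gaussian spot with beam radius w."""
    return i0 * math.exp(-2.0 * r**2 / w**2)

# At r = w the intensity has fallen to 1/e^2 of the peak, which is the
# conventional spot-boundary definition mentioned above.
i0, w = 1.0, 2.5          # peak intensity (arbitrary units), beam radius (mm)
ratio = gaussian_intensity(w, i0, w) / i0
print(round(ratio, 4))    # 0.1353, i.e. 1/e^2
```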
  • Angular resolution, which can also be called scanning resolution, refers to the minimum angle between adjacent beams directed at the detection area.
  • the angular resolution includes vertical angular resolution and horizontal angular resolution.
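To make the definition concrete, here is a minimal sketch (an illustration, not part of the patent): if adjacent beams are assumed to be evenly spaced across a field of view, the angular resolution follows from the span and the beam count. The even-spacing assumption and the example numbers are hypothetical:

```python
def angular_resolution_deg(full_fov_deg, n_beams):
    """Minimum angle between adjacent beams, assuming n_beams evenly
    spaced beams spanning full_fov_deg (illustrative assumption)."""
    return full_fov_deg / (n_beams - 1)

# 7 evenly spaced beams across a hypothetical 12-degree field of view.
print(angular_resolution_deg(12.0, 7))  # 2.0
```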
  • BSI means that light enters the pixel array from the back side, see Figure 1c.
  • the light is focused on the color filter layer by a microlens with an anti-reflection coating, is divided into three primary color components by the color filter layer, and is introduced into the pixel array.
  • the back side corresponds to the front end of line (FEOL) process of the semiconductor manufacturing process.
  • FSI means that light enters the pixel array from the front, see Figure 1d.
  • the light is focused on the color filter layer by a microlens with an anti-reflection coating, is divided into three primary color components by the color filter layer, and passes through the metal wiring layer, so that parallel light is introduced into the pixel array.
  • the front corresponds to the back end of line (BEOL) process of the semiconductor manufacturing process.
  • the row address can be the abscissa, and the column address can be the ordinate.
  • the rows of the pixel array correspond to the horizontal direction and the columns of the pixel array correspond to the vertical direction as an example.
  • the row-column strobe signal can be used to read the data at a specified location in the memory, and the pixel corresponding to the read specified location is the gated pixel.
  • the pixels in the pixel array can store the detected signals in corresponding memories.
  • the pixels can be enabled to be in an active state by a bias voltage, so that they can respond to echo signals incident on their surfaces.
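The gating and readout scheme described in the bullets above can be sketched as follows; this is a hypothetical illustration (the class and method names are invented, not from the patent): pixels enabled by a bias voltage respond to incident signals, detections are stored in a memory, and a row-column strobe reads back a specified location.

```python
class PixelArrayReadout:
    """Illustrative model: bias-enabled pixels, per-pixel memory, strobe read."""

    def __init__(self, rows, cols):
        self.memory = [[0] * cols for _ in range(rows)]
        self.enabled = set()  # pixels biased into the active state

    def enable(self, row, col):
        self.enabled.add((row, col))

    def detect(self, row, col, value):
        if (row, col) in self.enabled:  # only active pixels respond
            self.memory[row][col] = value

    def strobe(self, row, col):
        # Row-column strobe reads the data at the specified location;
        # the pixel at that location is the gated pixel.
        return self.memory[row][col]

arr = PixelArrayReadout(7, 7)
arr.enable(2, 3)
arr.detect(2, 3, 42)   # gated pixel stores its signal in memory
arr.detect(4, 4, 99)   # non-gated pixel does not respond
print(arr.strobe(2, 3), arr.strobe(4, 4))  # 42 0
```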
  • Gating a light source refers to turning on (or switching on) the light source and controlling it to emit signal light at the corresponding power.
  • Region of interest (ROI): the area of required pixels outlined in the form of a box, circle, ellipse, or irregular polygon is called a region of interest.
  • the energy (or intensity) of the first echo signal obtained by reflecting the received signal light from the first target is relatively large.
  • Factors that affect the energy of the echo signal include, but are not limited to, the distance between the target and the detection system, the distribution of the echo signal reflected by the target (for example, whether the target reflects the received signal light concentrated in a certain direction or uniformly in all directions), the reflectivity of the target, and so on.
  • The first target may be a target relatively close to the detection system; or a target with relatively high reflectivity; or a target whose reflected echo signal is concentrated in the direction of the detection system; or a target satisfying any combination of these conditions, for example a target that is close to the detection system and highly reflective, or a target that is close to the detection system, highly reflective, and whose echo signal reflected toward the detection system is concentrated.
  • Targets with high reflectivity include, but are not limited to, signs, warning signs, road signs, safety posts on the roadside, guardrails, convex mirrors at corners, vehicle license plates, high-reflectivity coatings or stickers on a vehicle body, and the like.
  • A frame of image: after the light source array completes one scan and the corresponding pixel array reads out all the data, the image formed from all the read data is one frame of image.
  • FIG. 2a is a schematic diagram of the ranging principle of the direct time-of-flight (d-ToF) technique provided by this application.
  • The signal light is usually a pulsed laser. Owing to laser-safety limits and the power-consumption limits of the detection system, the energy of the emitted signal light is limited, yet it needs to cover the complete detection area; therefore, by the time the signal light is reflected by the target and the echo signal returns to the receiver, the energy loss is severe. At the same time, ambient light acts as noise and interferes with the detector's detection and recovery of the echo signal. Therefore, d-ToF technology usually requires a detector with high sensitivity to detect the echo signal.
  • A suitable detector for the d-ToF technique is, for example, a single-photon avalanche diode (SPAD) or a silicon photomultiplier (SiPM).
  • SPAD has the sensitivity to detect a single photon
  • In the working state, a SPAD is a diode biased at a high reverse voltage.
  • the reverse bias creates a strong electric field inside the device.
  • When a photon is absorbed by the SPAD, it is converted into a free electron.
  • The free electron is accelerated by the internal electric field, and when it has gained enough energy, it strikes other atoms and generates new free electron-hole pairs.
  • The newly generated carriers continue to be accelerated by the electric field, and their collisions produce still more carriers.
  • This geometrically multiplying avalanche effect gives the SPAD an almost infinite gain, so it outputs a large current pulse and thereby detects a single photon.
  • the detection module may include a SPAD array and a time-to-digital converter (TDC) array.
  • the SPAD array is a 5 ⁇ 5 array and the TDC array is also a 5 ⁇ 5 array as an example, and one TDC corresponds to at least one SPAD.
  • The TDC is time-synchronized with the transmitting end: when a TDC detects the moment the transmitting end starts to emit signal light, it starts timing; after any one of the at least one SPAD corresponding to that TDC receives a photon of the echo signal, the TDC stops timing.
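The TDC start/stop scheme above yields a round-trip time of flight, from which the distance follows as d = c·Δt/2. A minimal sketch (the function name and the example timing values are illustrative, not from the patent):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def dtof_distance_m(t_start_ns, t_stop_ns):
    """One-way distance from a TDC's round-trip time of flight: d = c*t/2."""
    tof_s = (t_stop_ns - t_start_ns) * 1e-9
    return C * tof_s / 2.0

# TDC starts timing at emission (0 ns) and stops when a SPAD fires at 66.7 ns.
print(round(dtof_distance_m(0.0, 66.7), 2))  # ~10.0 m
```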
  • the detection module may further include a memory and/or a control circuit.
  • the control circuit may store the time-of-flight of the signal light detected by the SPAD/TDC in the memory.
  • The echo signal reflected by the first target may cause optical crosstalk to the pixels surrounding the pixel in the detection system's pixel array that should receive the echo signal, which reduces the detection accuracy of the detection system.
  • the present application provides a control detection method, which can reduce the optical crosstalk in the detection system as much as possible, thereby improving the detection accuracy of the detection system.
  • the detection system may include an array of light sources and an array of pixels.
  • the light source array may include m ⁇ n light sources, the pixel array may include m ⁇ n pixels, m ⁇ n light sources correspond to m ⁇ n pixels, and both m and n are integers greater than 1.
  • the m ⁇ n light sources may be all or part of the light source array, and/or the m ⁇ n pixels may also be all or part of the pixel array.
  • the light source array can form a regular pattern, or can also form an irregular pattern, which is not limited in the present application.
  • the pixel array can also form a regular pattern, or can also form an irregular pattern, which is not limited in this application.
  • the detection system may also include a transmitting optical system and a receiving optical system.
  • the light sources selected in the light source array are used to emit signal light.
  • The transmitting optical system is used to propagate the signal light from the light source array to the detection area; specifically, the transmitting optical system can collimate and/or homogenize and/or shape the signal light from the light source array, and/or modulate its energy distribution in angular space, etc.
  • the receiving optical system is used to propagate the echo signal from the detection area to the pixel array, and the echo signal is the reflected light obtained by reflecting the signal light by the target in the detection area.
  • the pixels gated in the pixel array photoelectrically convert the received echo signals to obtain electrical signals used to determine the associated information of the target.
  • the associated information of the target includes but not limited to the distance information of the target, the orientation of the target, the speed of the target, And/or the grayscale information of the target, etc.
  • FIG. 3 is a schematic structural diagram of an applicable detection system of the present application.
  • the light source array includes 7 ⁇ 7 light sources as an example, and the pixel array includes 7 ⁇ 7 pixels as an example.
  • 7 ⁇ 7 light sources correspond to 7 ⁇ 7 pixels.
  • the light source 11 corresponds to the pixel 11
  • the light source 12 corresponds to the pixel 12
  • the light source 66 corresponds to the pixel 66 .
  • The signal light emitted by the light source 11, after being reflected by a target in the detection area, produces an echo signal that can be received by the pixel 11.
  • Likewise, the echo signal obtained when a target in the detection area reflects the signal light emitted by the light source 12 can be received by the pixel 12, and so on; the echo signal of the signal light emitted by the light source 66 can be received by the pixel 66.
  • the light source in the first column corresponds to the pixel in the first column
  • the light source in the second column corresponds to the pixel in the second column
  • the light source in the seventh column corresponds to the pixel in the seventh column
  • the light source in the first row corresponds to the pixel in the first row
  • the light source in the second row corresponds to the pixel in the second row
  • the light source in the seventh row corresponds to the pixel in the seventh row.
  • the signal light emitted by one light source can be projected to form one light spot in the detection area, therefore, based on the light source array shown in FIG. 3 , corresponding 7 ⁇ 7 light spots can be formed in the detection area.
  • the emission field of view of each light source and the energy distribution of the light spot of the signal light in the angular space may be pre-designed.
  • the light source in the light source array can be, for example, a vertical cavity surface emitting laser (vertical cavity surface emitting laser, VCSEL), an edge emitting laser (edge emitting laser, EEL), an all-solid-state semiconductor laser (diode pumped solid state laser, DPSS) or fiber laser.
  • the light source array can realize independent addressing.
  • Independent addressing means that the light sources in the light source array can be independently gated (also described as switched on, turned on, or energized), and the gated light sources can be used to emit signal light.
  • addressing can be implemented by means of electrical scanning. Specifically, a driving current may be input to a light source that needs to be strobed.
  • the addressing manner of the light source array includes, but is not limited to, strobing the light sources point by point, or strobing the light sources by column, or strobing the light sources by row, or strobing the light sources by regions of interest, and the like.
  • The addressing mode of the light source array is related to the physical connection of the light sources. For example, if all the light sources in the light source array are connected in parallel, the light sources can be gated point by point (see Fig. 4a), column by column (see Fig. 4b), row by row (see Fig. 4c), obliquely (for example along a diagonal direction, see Fig. 4d), or by region of interest (see Fig. 4e), where gating by region of interest can mean gating light sources in a specific pattern or in a specific order, and so on.
  • the light sources in the same column in the light source array are connected in series and different columns are connected in parallel
  • the light sources can be selected column by column, as shown in FIG. 4b.
  • the light sources in the same row of the light source array are connected in series and different rows are connected in parallel
  • the light sources can be selected row by row, as shown in FIG. 4c.
  • the light sources on each diagonal line in the light source array are connected in series, and the light sources on different diagonal lines are connected in parallel, then the light sources can be gated according to the diagonal lines, as shown in FIG. 4d.
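The series-per-diagonal, parallel-between-diagonals wiring just described means that one gating operation fires every source on a diagonal together. A small sketch of how those diagonal groups can be enumerated (the function and the row+column indexing convention are illustrative, not from the patent):

```python
def diagonal_groups(m, n):
    """Group (row, col) light-source coordinates by the diagonal index
    row + col; one gating operation fires one whole group."""
    groups = {}
    for r in range(m):
        for c in range(n):
            groups.setdefault(r + c, []).append((r, c))
    return groups

# In a 3x3 array, gating diagonal index 2 fires three sources at once.
print(diagonal_groups(3, 3)[2])  # [(0, 2), (1, 1), (2, 0)]
```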
  • The time interval between gating adjacent light sources may be relatively small, so optical crosstalk can also arise when light sources are gated point by point. To reduce the optical crosstalk as much as possible, the time between gating adjacent light sources can be set longer when the light sources are gated point by point.
  • the point-by-point strobe light source array can realize point-by-point scanning of the detection area
  • the column-by-column strobe light source array can realize column-by-column scanning of the detection area
  • the row-by-row strobe light source array can realize row-by-row scanning of the detection area.
  • the area-gated light source enables scanning a specific field of view of the detection area. When all the light sources in the light source array are gated, the full field of view scanning of the detection area can be realized. It can also be understood that the splicing of the emission field of view of each light source in the light source array can obtain the full field of view of the detection system.
  • the emission field of view of the light source can be pre-designed according to the application scene of the detection system.
  • If the detection system is mainly used in long-range detection scenarios, the emission field of view of the light source can be greater than 0.2 degrees; if it is mainly used in medium-range detection scenarios, the emission field of view of the light source can be 0.1 to 0.25 degrees; and in short-range detection scenarios, the emission field of view of the light source can be less than 0.15 degrees.
  • the emission field of view of the light source can also be designed according to the angular resolution required by the application scene of the detection system, for example, it can be designed to be 0.01°-2°.
  • The energy distribution in angular space of the light spot of the signal light directed at the detection area (that is, the energy distribution of the signal light emitted by the light source on the surface of any target in space) usually cannot be completely concentrated within a specific angular range without "leakage".
  • The specific form of the energy distribution in angular space of the signal light spot directed at the detection area can be designed according to actual requirements or through energy-link simulation. In a possible implementation, the energy distribution of the signal light spot in angular space can also be determined by the characteristics of the light source itself.
  • the energy distribution of the light spot of the signal light in the angular space may also be controlled by a transmitting optical system.
  • For example, if the energy distribution in angular space of the signal light emitted by the light source is Gaussian or approximately Gaussian but the divergence angle is relatively large, further spatial modulation can be performed by the transmitting optical system to shape the distribution of the spot's energy in angular space.
  • That is, the transmitting optical system can adjust the divergence angle to meet the requirement.
  • FIG. 5a is a schematic diagram of the energy distribution in angular space of a spot of signal light provided by the present application.
  • The energy distribution of the spot is approximately a Gaussian line shape, and most of the energy of the Gaussian-line-shaped spot is concentrated within the divergence angle (that is, within the emission field of view).
  • The divergence angle refers to the horizontal angular resolution or vertical angular resolution of the signal light emitted by the light source, and can also be called the emission field of view of a single beam of signal light.
  • the divergence angle may range, for example, from 0.01° to 2°.
  • the energy attenuation of the Gaussian line-shaped light spot can extend to infinity, and the energy of the light spot becomes weaker and weaker as it extends toward infinity. After extending to a certain angle, the energy of the light spot can even be neglected.
  • FIG. 5b is a schematic diagram of the energy distribution in angular space of another signal light spot provided by the present application.
  • Most of the energy of the Gaussian-line-shaped spot is concentrated within the divergence angle, and a small part of the energy is deliberately placed outside the divergence angle.
  • The energy distribution of the signal light spot in angular space can be modulated so that it is high at the center (that is, most of the energy is concentrated within the designed divergence angle) while a local maximum peak exists outside the divergence angle. It should be understood that such a spot energy distribution may have some impact on the farthest detection distance of the detection system; to ensure that the performance of the detection system is not affected, the total energy of the signal light emitted by the light source needs to be increased.
  • the energy concentration of the light spot can be characterized by energy isolation, and the unit is decibel (dB).
  • Energy isolation refers to the ratio of the peak energy within the divergence angle to the local maximum peak energy outside the divergence angle, or the ratio of the peak energy within the divergence angle to the average energy outside the divergence angle; the greater the energy isolation, the weaker the energy outside the divergence angle.
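Since energy isolation is a ratio expressed in decibels, it can be computed directly; a minimal sketch (the 10·log10 convention for an energy ratio is a standard assumption, and the example numbers are hypothetical):

```python
import math

def energy_isolation_db(peak_inside, peak_outside):
    """Isolation between the peak energy inside the divergence angle and
    the local maximum (or average) energy outside it, in decibels."""
    return 10.0 * math.log10(peak_inside / peak_outside)

# A 1000:1 energy ratio corresponds to 30 dB of isolation; a larger value
# means weaker energy outside the divergence angle.
print(energy_isolation_db(1000.0, 1.0))  # 30.0
```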
  • the center of the Gaussian or quasi-Gaussian line shape of the energy distribution of the signal light in the angular space is within the emission field of view of the light source.
  • the energy distribution of the light spot of the signal light in the angular space can be designed as the form shown in FIG. 5a above.
  • the energy distribution of the light spot in the angular space can be designed as shown in the above-mentioned figure 5b.
  • The signal-to-noise ratio of the detection system can be improved under given environmental-noise conditions by designing a reasonable rising-edge rate. It should be understood that the steeper the rising edge (that is, the greater the rising-edge rate), the higher the signal-to-noise ratio of the detection system.
  • the detection capability of the detection system (such as the detection distance) is related to the peak power, the greater the peak power, the farther the detection system can detect.
  • the pixels in the pixel array may include one or more photosensitive units (cells), for example, the photosensitive cells may be SPADs or SiPMs.
  • The photosensitive unit is the smallest unit in the pixel array. Referring to FIG. 6, it exemplarily shows one pixel including 3×3 SPADs. It can also be understood that 3×3 SPADs are binned to form one pixel, that is, the signals output by the 3×3 SPADs are superimposed and read out in the form of one pixel. It should be noted that a pixel may also bin photosensitive units along the row or column direction.
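The binning just described superimposes the outputs of the cells in a group and reads them out as one pixel value; a minimal sketch (the function name and the example counts are illustrative, not from the patent):

```python
def bin_pixel(spad_counts):
    """Sum the photon counts of the photosensitive cells (e.g. a 3x3 SPAD
    group, as in Fig. 6) that are binned together and read as one pixel."""
    return sum(sum(row) for row in spad_counts)

# One 3x3 SPAD group; the pixel value is the superposed signal.
cell_counts = [[1, 0, 2],
               [0, 3, 1],
               [2, 1, 0]]
print(bin_pixel(cell_counts))  # 10
```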
  • Ways in which the pixel array gates pixels include, but are not limited to, point-by-point gating (see FIG. 7a), column-by-column gating (see FIG. 7b), row-by-row gating (see FIG. 7c), gating along a diagonal (see FIG. 7d), or gating by region of interest (see FIG. 7e), where the region of interest may mean gating pixels in a specific pattern or in a specific order, etc.
  • the way the pixel array selects the pixels needs to be consistent with the way the light source array selects the light sources.
  • the light source array selects the light source row by row, and the pixel array also selects the pixels row by row, that is, the light source array adopts the above-mentioned gating method in FIG. 4c, and the pixel array adopts the above-mentioned gating method in FIG. 7c.
  • The rows may be selected in order from the first row to the last row, or from the last row to the first row, or from a middle row outward to the edge rows, and so on; this application does not limit the order of row selection.
  • Alternatively, the light source array selects the light sources column by column and the pixel array selects the pixels column by column, that is, the light source array adopts the gating method of FIG. 4b and the pixel array adopts the gating method of FIG. 7b. Further, the columns may be selected in order from the first column to the last column, or from the last column to the first column, or from a middle column outward to the edge columns, and so on; this application does not limit the order of column selection. In addition, it should be noted that the light source array and the pixel array described above are gated so as to work synchronously.
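The synchronized column-by-column gating described above can be sketched as a loop that strobes a light-source column and reads the matching pixel column in the same step; `fire` and `read` are hypothetical stand-ins for the real driver and readout calls, which the patent does not name:

```python
def scan_columns(n_cols, fire, read, reverse=False):
    """Gate light-source column `col` and read pixel column `col` together,
    in forward or reverse column order (the order is not limited)."""
    order = range(n_cols - 1, -1, -1) if reverse else range(n_cols)
    frame = []
    for col in order:
        fire(col)                 # strobe the light-source column
        frame.append(read(col))   # read out the matching pixel column
    return frame

lit = []
result = scan_columns(3, fire=lit.append, read=lambda c: f"col{c}")
print(lit, result)  # [0, 1, 2] ['col0', 'col1', 'col2']
```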
  • the emitting field of view of each light source in the light source array and the receiving field of view of each pixel in the pixel array are in one-to-one spatial correspondence. That is, one pixel corresponds to one receiving field of view, one light source corresponds to one emitting field of view, and the receiving field of view and the emitting field of view are aligned one by one in space.
  • the receiving field of view is usually designed to be slightly larger than the transmitting field of view.
  • each light source in the light source array is located on the object plane of the optical imaging system, and the photosensitive surface of each pixel in the pixel array is located on the image plane of the optical imaging system.
  • the optical imaging system may include a transmitting optical system and a receiving optical system.
  • the light source in the light source array is located on the object-side focal plane of the transmitting optical system, and the photosensitive surface of the pixel in the pixel array is located on the image-side focal plane of the receiving optical system.
  • the signal light emitted by the light source in the light source array propagates to the detection area through the emission optical system, and the echo signal obtained by reflecting the signal light from the target in the detection area can be imaged on the image focal plane through the receiving optical system.
  • the transmitting optical system and the receiving optical system are relatively simple and can be modularized, so that the detection system can achieve small volume and high integration. Based on this, the transmitting optical system and the receiving optical system generally use the same optical lens.
  • FIG. 8 it is a schematic structural diagram of an optical lens provided by the present application.
  • the optical lens includes at least one optical element, which may be, for example, a lens.
  • the optical lens includes 4 lenses as an example.
  • the optical axis of the optical lens refers to the straight line passing through the spherical center of each lens shown in FIG. 8 .
  • the optical lens may be rotationally symmetrical about the optical axis.
  • the lens in the optical lens can be a single spherical lens, or a combination of multiple spherical lenses (such as a combination of concave lenses, a combination of convex lenses, or a combination of convex and concave lenses, etc.).
  • the combination of multiple spherical lenses helps to improve the imaging quality of the detection system and reduce the aberration of the optical imaging system.
  • convex lenses include biconvex lenses, plano-convex lenses, and meniscus lenses
  • concave lenses include biconcave lenses, plano-concave lenses, and meniscus lenses. In this way, it helps to improve the reuse rate of the optical devices of the detection system and facilitates the installation and adjustment of the detection system.
  • the lens in the optical lens may also be a single aspheric lens or a combination of multiple aspheric lenses, which is not limited in this application.
  • the material of the lens in the optical lens may be an optical material such as glass, resin, or crystal.
  • If the lens material is resin, it helps to reduce the mass of the detection system.
  • If the lens material is glass, it helps to further improve the imaging quality of the detection system.
  • the optical lens includes at least one lens made of glass material.
  • the structure of the transmitting optical system can also be other structures that can realize collimation and/or beam expansion of the signal light emitted by the light source and/or modulation of energy distribution in angular space, such as a microlens array (see FIG. 9) Or the micro-optic system pasted on the surface of the light source, which will not be described here one by one.
  • the microlens array may be one column or multiple columns, which is not limited in this application.
  • the transmitting optical system and the receiving optical system may also have different structures, which is not limited in this application.
  • the detection system may also include a control module.
  • The control module can be a central processing unit (CPU), or another general-purpose processor (such as a microprocessor or any conventional processor), a field-programmable gate array (FPGA), a digital signal processing (DSP) circuit, an application-specific integrated circuit (ASIC), a transistor logic device, another programmable logic device, or any combination thereof.
  • When the detection system is applied to a vehicle, the control module can be used to plan the driving path according to the determined associated information of the detection area, for example to avoid obstacles on the driving path.
  • the architecture of the detection system given above is only an example, and the present application does not limit the architecture of the detection system.
  • the light source array in the detection system can also be one row or one column.
  • In this case, the detection system can also include a scanner. When the scanner is at a given scanning angle, this row or column of light sources emits signal light at a corresponding power. For example, when the scanner is at scanning angle A, this row or column of light sources emits signal light A at power A; when the scanner is at scanning angle B, this row or column of light sources emits signal light B at power B.
  • the detection system may be a laser radar.
  • the lidar can be installed on a vehicle (such as an unmanned vehicle, a smart car, an electric vehicle, or a digital car, etc.) as a vehicle lidar, please refer to FIG. 10a.
  • Lidar can be deployed in any direction or any number of directions in front, rear, left and right of the vehicle to capture information about the surrounding environment of the vehicle.
  • Figure 10a is an example where the lidar is deployed in front of the vehicle.
  • the area that the lidar can perceive can be called the detection area of the lidar, and the corresponding field of view can be called the full field of view.
  • the laser radar can acquire, in real time or periodically, the longitude and latitude, speed, and orientation of the vehicle, or the associated information of targets within a certain range (such as other vehicles around), for example the distance of a target, the moving speed of a target, the attitude of a target, the position of a target, or the grayscale of a target.
  • the lidar or the vehicle can determine the vehicle's position and/or plan its path based on this associated information. For example, the latitude and longitude are used to determine the position of the vehicle, the speed and orientation are used to determine the future driving direction and destination of the vehicle, and the distances of surrounding objects are used to determine the number and density of obstacles around the vehicle.
  • an advanced driving assistance system (ADAS) can also be combined to realize assisted driving or automatic driving of the vehicle.
  • the principle by which the laser radar detects the associated information of a target is as follows: the laser radar emits signal light in a certain direction; if there is a target in the detection area of the laser radar, the target can reflect the received signal light back to the laser radar (the reflected signal light may be called the echo signal), and the laser radar determines the associated information of the target according to the echo signal.
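The echo-signal principle above is direct time-of-flight ranging: the one-way distance is half the round-trip travel time multiplied by the speed of light. A minimal sketch (function name is illustrative):

```python
# Direct time-of-flight ranging: one-way distance is half the round-trip
# travel time of the echo signal multiplied by the speed of light.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def distance_from_tof(round_trip_seconds: float) -> float:
    """Convert a round-trip time of flight into a one-way target distance (m)."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0
```

For example, a 500 ns round trip corresponds to roughly 75 m.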
  • the detection system may be a camera.
  • the camera can also be installed on a vehicle (such as an unmanned car, a smart car, an electric car, or a digital car) as a vehicle-mounted camera; refer to FIG. 10b.
  • the camera can obtain measurement information such as the distance and speed of the target in the detection area in real time or periodically, so as to provide necessary information for lane correction, vehicle distance keeping, reversing and other operations.
  • Vehicle-mounted cameras can realize: a) target recognition and classification, such as lane line recognition, traffic light recognition, and traffic sign recognition; b) segmentation, mainly segmenting vehicles, ordinary road edges, curb edges, boundaries without visible obstacles, unknown boundaries, and the like; c) detection of laterally moving targets, such as detection and tracking of pedestrians and vehicles crossing intersections; d) positioning and map creation, such as positioning and map creation based on visual simultaneous localization and mapping (SLAM) technology; and so on.
  • lidar can also be mounted on drones as airborne radar.
  • lidar can also be installed in a roadside unit (RSU), as a roadside traffic lidar, which can realize intelligent vehicle-road collaborative communication.
  • lidar can be installed on an automated guided vehicle (AGV).
  • an AGV is a transporter equipped with an automatic navigation device, such as an electromagnetic or optical device, that can drive along a prescribed navigation path and has safety protection and various transfer functions. Other scenarios are not listed here one by one.
  • the application scenarios can be applied to areas such as unmanned driving, automatic driving, assisted driving, intelligent driving, connected vehicles, security monitoring, remote interaction, surveying and mapping, or artificial intelligence.
  • the method of the present application can be applied to a scene in which the target and the detection system are relatively static, or a scene in which the frame rate of images collected by the detection system is relatively low compared with the relative speed of the target and the detection system.
  • FIG. 11 is a schematic flow chart of a control detection method provided in the present application.
  • the control detection method can be executed by a control device, which may belong to the detection system (such as the above-mentioned control module) or may be a control device independent of the detection system, such as a chip or a chip system.
  • the control device may be a domain processor in the vehicle, or may also be an electronic control unit (electronic control unit, ECU) in the vehicle, etc.
  • Step 1101 the control device controls the light sources in the first light source region to emit first signal light with a first power, and controls the light sources in the second light source region to emit second signal light with a second power.
  • the first light source area corresponds to the first pixel area
  • the spatial position of the first object corresponds to the first pixel area
  • the second light source area corresponds to the second pixel area. It can also be understood that the first signal light emitted by the light sources in the first light source area, after being reflected by the first target in the detection area, forms a first echo signal that can be received by the pixels in the first pixel area; if the second signal light emitted by the light sources in the second light source area is reflected by the second target in the detection area, the resulting second echo signal can be received by the pixels in the second pixel area.
  • the pixels in the first pixel area are used to receive the first echo signal obtained by the first target reflecting the first signal light; the pixels in the second pixel area are used to receive the second echo signal obtained by the second target reflecting the second signal light.
  • the first pixel area and the second pixel area are two different areas in the pixel array, and the first light source area and the second light source area are two different areas in the light source array.
  • the first pixel area can be, for example, the area formed by pixels 43, 44, and 45, that is, the pixels in the first pixel area include pixel 43, pixel 44, and pixel 45; correspondingly, the light sources in the first light source area include light source 43, light source 44, and light source 45.
  • the pixel area may be represented by row and column numbers of pixels; for example, the first pixel area may be represented as (4,3)-(4,5).
  • the light source area may also be represented by row and column numbers of light sources; for example, the first light source area may be represented as (4,3)-(4,5).
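The row-and-column range notation above, such as (4,3)-(4,5), can be expanded into the individual pixel (or light source) identifiers it covers; a minimal sketch, with a hypothetical helper name:

```python
def region_pixels(top_left, bottom_right):
    """Enumerate (row, column) identifiers of a rectangular region, inclusive
    of both corners, e.g. (4,3)-(4,5) -> pixels 43, 44, 45."""
    (r0, c0), (r1, c1) = top_left, bottom_right
    return [(r, c) for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)]
```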
  • the shape of the first pixel area may be a rectangle, a square, or another shape. It should be understood that each pixel in the pixel array has a corresponding identifier.
  • the second pixel area may be the area formed by all pixels in the pixel array except those in the first pixel area (with reference to the above-mentioned figure, the area formed by pixels other than those in the first pixel area), or it may be the area formed by some of the pixels outside the first pixel area.
  • the energy (or intensity) of the first echo signal obtained by reflecting the first signal light incident on the first target is relatively large.
  • factors affecting the energy of the echo signal include but are not limited to the distance between the target and the detection system, the distribution of the echo signal reflected by the target (for example, a Lambertian target reflects the received signal light uniformly in all directions, that is, the echo signal is evenly distributed in all directions), the reflectivity of the target, and the like.
  • the first target may be a target that is relatively close to the detection system; or a target with high reflectivity (such as a specular reflector, a metal reflector, a corner reflector, or a mixed reflector whose diffuse reflection component is weak); or a target whose reflected echo signal is relatively concentrated along the direction toward the detection system; or a target that is both close to the detection system and highly reflective; or a target that is close to the detection system and whose reflected echo signal is concentrated along the direction toward the detection system; or a target that is highly reflective and whose reflected echo signal is concentrated along the direction toward the detection system; or a target that is close to the detection system, highly reflective, and whose reflected echo signal is concentrated along the direction toward the detection system.
  • targets with high reflectivity include but are not limited to signs, warning signs, road signs, safety posts on the roadside, guardrails, convex mirrors at corners, license plates on vehicles, and highly reflective coating stickers on vehicle bodies.
  • the reflectivity of the first target is greater than the reflectivity of the second target.
  • the first target is closer to the detection system than the second target.
  • the first echo signals of the first target are concentrated along the direction toward the detection system (that is, the distribution of the first echo signal along the direction toward the detection system is relatively concentrated) and little of the first echo signal is distributed in other directions, while little of the second echo signal of the second target is distributed along the direction toward the detection system, or the second echo signal of the second target is evenly distributed in all directions. These cases are not listed here one by one.
  • the second power is greater than the first power.
  • the first power and the second power can be preset working parameters of the detection system; for example, they can be pre-stored in the detection system (such as in a configuration table of the detection system), and the control device can obtain the first power and the second power by means of a table lookup or the like.
  • the amount by which the first power is reduced compared with the second power can be obtained by the control device through self-feedback or the like; specifically, the control device can determine, based on previously collected data, the amount by which the first power is reduced compared with the second power.
  • the second power may be the peak power of the light source.
  • a possible implementation of this step 1101 may be: the control device sends a first control signal to the light sources in the first light source area, and sends a second control signal to the light sources in the second light source area, wherein the first control signal is used to control the light sources in the first light source area to emit the first signal light at the first power, and the second control signal is used to control the light sources in the second light source area to emit the second signal light at the second power.
  • the light sources in the first light source region can emit the first signal light with the first power based on the first control signal.
  • the light sources in the second light source area may emit the second signal light at the second power based on the second control signal.
  • the first light source area may be gated in a first gating manner, and the light sources in the gated first light source area may emit a first signal light into the detection area at a first power.
  • the light sources in the second light source area may be gated according to the first gating manner, and the gated light sources in the second light source area may emit the second signal light to the detection area at the second power.
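The two-region emission described above, with the first light source area at a reduced first power and the second area at a higher second power (the application states the second power is greater than the first), can be sketched as a mapping from light-source coordinates to power commands; the region contents, power values, and function name below are hypothetical.

```python
def region_power_commands(first_region, second_region, first_power, second_power):
    """Map each light source to the power it should emit: the first light
    source area gets the reduced first power, the second area the higher
    second power."""
    assert second_power > first_power
    commands = {source: second_power for source in second_region}
    commands.update({source: first_power for source in first_region})
    return commands

# hypothetical regions and normalized powers
cmds = region_power_commands(
    first_region={(4, 3), (4, 4), (4, 5)},
    second_region={(1, 1), (1, 2)},
    first_power=0.3,
    second_power=1.0,
)
```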
  • the first gating manner can be, for example, point by point, row by row, column by column, region by region (ROI), or in a specific order; the gating can be at equal intervals or at unequal intervals; alternatively, all the light sources in the first light source area can be gated at one time, or all the light sources in the second light source area can be gated at one time; and so on.
  • the first gating mode is related to the physical connection relationship of the light sources in the light source array, for details, please refer to the foregoing related descriptions, which will not be repeated here.
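The gating orders named above can be enumerated programmatically; the sketch below is illustrative and covers only two of the listed modes, under the assumption that light sources are addressed by (row, column) coordinates.

```python
def gating_order(rows, cols, mode):
    """Return light-source coordinates in the order implied by the gating mode.
    Only two of the modes named in the text are sketched here."""
    if mode == "row_by_row":
        return [(r, c) for r in range(rows) for c in range(cols)]
    if mode == "column_by_column":
        return [(r, c) for c in range(cols) for r in range(rows)]
    raise ValueError(f"unsupported gating mode: {mode}")
```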
  • the specific gating manner adopted by the light source array may be indicated by indication information carried in the first control signal (and the second control signal); for example, the indication information may be the addressing timing of the light sources in the first light source area (and the addressing timing of the light sources in the second light source area). That is, the first control signal can also be used to control the addressing timing of the first light source area, and the second control signal can also be used to control the addressing timing of the second light source area.
  • the specific gating mode adopted by the light source array may also be pre-set or pre-agreed, which is not limited in this application.
  • Step 1102, the control device controls the pixels in the first pixel area to receive the first echo signal, which includes the signal obtained after the first signal light is reflected by the first target.
  • the control device can control the pixels in the first pixel area to be gated, that is, control the reading of the data collected by the pixels in the first pixel area based on the first echo signal; the gated pixels in the first pixel area can be used to receive the first echo signal. Referring to FIG. 3 above, the control device can gate pixels 43, 44, and 45 in the first pixel area.
  • the control device may send a seventh control signal to the first pixel area in the pixel array, and the seventh control signal is used to control the gating of the first pixel area in the pixel array.
  • the seventh control signal may be a timing signal for gating pixels in the first pixel region.
  • the manner in which the first pixel region selects pixels is consistent with the manner in which the first light source region selects light sources.
  • step 1101 and step 1102 do not imply an execution order, and they may be executed synchronously.
  • the control device may also send a first synchronization signal (that is, the same clock signal) to the first light source area and the first pixel area respectively, to instruct the first light source area and the first pixel area to perform gating synchronously.
  • Step 1103, the control device may further control the pixels in the second pixel area to receive the second echo signal, which includes the signal obtained after the second signal light is reflected by the second target.
  • This step 1103 is an optional step.
  • the control device may send an eighth control signal to the second pixel area in the pixel array, and the eighth control signal is used to control the gating of the second pixel area in the pixel array.
  • the manner in which the second pixel region selects pixels is consistent with the manner in which the second light source region selects light sources.
  • reducing the power of the first signal light can reduce the intensity (or energy) of the first echo signal, thereby helping to reduce the amount of the first echo signal entering pixels in areas other than the first pixel area. In this way, crosstalk of the first echo signal to pixels outside the first pixel area, such as pixels in the second pixel area, is reduced.
  • the pixels in the first pixel region may perform photoelectric conversion on the received first echo signal to obtain the first electrical signal.
  • the pixels in the second pixel area can perform photoelectric conversion on the received second echo signal to obtain a second electrical signal.
  • the control device can receive the first electrical signal from the first pixel area and the second electrical signal from the second pixel area, and determine the related information of the detection area according to the first electrical signal and the second electrical signal.
  • the associated information of the detection area includes but is not limited to the distance information of the first target, the orientation of the first target, the speed of the first target, the grayscale information of the first target, the distance information of the second target, the orientation of the second target, the speed of the second target, the grayscale information of the second target, and so on.
  • the following example shows four possible manners of determining the first pixel area.
  • Way 1: determining the first pixel area based on acquired intensity information.
  • FIG. 12 is a schematic flowchart of a method for determining the first pixel area provided by the present application. The method includes the following steps:
  • Step 1201 the control device controls the light sources of the light source array to emit a third signal light with a third power.
  • the third power may be equal to the second power, for example, the third power may also be peak power.
  • control device may send a third control signal to the light source array, and the third control signal is used to control the light sources in the light source array to emit the third signal light with a third power.
  • the light source array may gate the light sources in a second gating manner, and emit a third signal light to the detection area at the third power.
  • the second gating manner may be the same as the first gating manner, or may also be different, which is not limited in this application.
  • the second gating manner may be indicated by instruction information carried in the third control signal; for example, the instruction information may be the addressing sequence of the light sources in the light source array, that is, the third control signal may also be used to control the addressing sequence of the light source array. Alternatively, the second gating manner may be pre-set or pre-agreed, which is not limited in this application.
  • the control device sends a third control signal to the light source array, and the third control signal is used to control the light source array to gate the light sources in the second gating manner and emit the third signal light at the third power.
  • the light source array can generate a third driving signal based on the third control signal, and the light source array can emit the third signal light according to the second gating mode and the third power under the drive of the driving signal (such as current) .
  • the driving signal is consistent with the addressing timing of the light sources in the light source array.
  • the light source array includes 7×7 light sources, and the 7×7 light sources can emit the third signal light to the detection area according to the second gating manner, with the gated light sources emitting at the third power.
  • 7×7 beams of third signal light can be emitted to the detection area (forming 7×7 light spots in the detection area); the 7×7 beams of third signal light may be reflected by the first target and/or the second target in the detection area, yielding 7×7 third echo signals.
  • Step 1202 the control device controls the pixels of the pixel array to receive the third echo signal.
  • the third echo signal includes reflected light of the third signal light reflected by the first target and/or the second target.
  • the third echo signal may be the reflected light of the third signal light reflected by the first target, or may be the reflected light of the third signal light reflected by the second target, or may include both the reflected light of the third signal light reflected by the first target. Reflected light also includes reflected light reflected by the second object.
  • the control device may send a fourth control signal to the pixel array, where the fourth control signal is used to control the pixel array to gate pixels in the second gate manner.
  • the fourth control signal may be used to control timing of selecting pixels in the pixel array.
  • the fourth control signal may be a timing signal for selecting pixels in the pixel array.
  • the second gating manner used for gating pixels in the pixel array in step 1202 is consistent with the second gating manner used for gating light sources in the light source array in step 1201; for the manner of gating pixels, refer to the related description above.
  • for example, if in step 1201 the first column of light sources in the light source array is gated, then in step 1202 the first column of pixels corresponding to that column of light sources is gated.
  • likewise, if in step 1201 the first row of light sources in the light source array is gated, then in step 1202 the first row of pixels corresponding to that row of light sources is gated.
  • the control device also needs to send a second synchronization signal (such as a clock signal) to the light source array and the pixel array respectively, so as to instruct the light source array and the pixel array to perform gating synchronously.
  • Step 1203 the pixels of the pixel array perform photoelectric conversion on the received third echo signal to obtain a third electrical signal.
  • each pixel in the pixel array can output a third electrical signal.
  • the pixel array can output 7×7 third electrical signals. In other words, one pixel corresponds to one third electrical signal.
  • Step 1204 the pixel array sends a third electrical signal to the control device.
  • Step 1205 the control device can determine the first intensity according to the third electrical signal.
  • the third electrical signal carries intensity information of the third echo signal, which is referred to as the first intensity.
  • a third electrical signal corresponds to a first intensity.
  • the control device can determine 7×7 first intensities according to the 7×7 third electrical signals.
  • the control device processes the collected third electrical signal (the original signal) to obtain an effective data format and a processable signal form; the processing circuit and algorithm module then calculate the effective data obtained by the signal acquisition circuit to obtain the associated information of the target, such as the intensity of the echo signal, which is used to characterize the reflectivity of the target.
  • the ordinate of the statistical histogram can record the intensity. It should be understood that the counting of the time-to-digital converter (TDC) has an upper limit, and the third echo signal reflected by the first target may cause the TDC to exceed the counting upper limit, that is, the counting saturates.
  • control device may determine that pixels corresponding to intensities greater than or equal to a first preset value among the first intensities are pixels in the first pixel area.
  • the control device may also determine that pixels whose first intensity is smaller than the first preset value are pixels in the second pixel area.
  • the first intensity corresponding to the pixels in the first pixel area is greater than or equal to the first preset value, and/or the first intensity corresponding to the pixels in the second pixel area is smaller than the first preset value .
  • the above-mentioned first preset value can also be replaced by the first preset range, and it is judged whether the pixel is in the first pixel area or the second pixel area according to whether the first intensity belongs to the first preset range.
  • for example, the control device may determine that pixels corresponding to intensities that do not belong to the first preset range are pixels in the first pixel area; further, optionally, the control device may also determine that pixels corresponding to intensities belonging to the first preset range are pixels in the second pixel area. For another example, the control device may determine that pixels corresponding to intensities belonging to the first preset range are pixels in the first pixel area; further, optionally, the control device may also determine that pixels corresponding to intensities belonging to a second preset range are pixels in the second pixel area. Depending on the specific implementation, there may be more pixel areas, such as a third pixel area, to distinguish and process different signal intensity ranges, which is not specifically limited in the present application.
  • the control device determines that the first intensities corresponding to the third electrical signals output by pixel 43, pixel 44, and pixel 45 are greater than or equal to the first preset value, so it can determine that the pixels in the first pixel area include pixel 43, pixel 44, and pixel 45.
  • the pixel array may output the corresponding relationship between the pixel number and the third electrical signal to the control device.
  • the pixel 43 may send the third electrical signal and the pixel number 43 to the control device.
  • the first preset value may be a value close to saturation, or the saturation value, on the ordinate of the statistical histogram.
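Way 1 above amounts to thresholding each pixel's first intensity against the first preset value; a minimal sketch, where the pixel numbers, intensity values, and threshold are hypothetical (1023 stands in for a saturated histogram count).

```python
def split_pixel_areas(intensity_by_pixel, first_preset_value):
    """Way-1 style split: pixels whose first intensity is at or above the
    first preset value form the first pixel area; the rest form the second."""
    first = {p for p, i in intensity_by_pixel.items() if i >= first_preset_value}
    second = set(intensity_by_pixel) - first
    return first, second

# hypothetical intensities; 1023 would be a saturated histogram count
intensities = {43: 1023, 44: 1010, 45: 1020, 11: 140, 12: 155}
first, second = split_pixel_areas(intensities, first_preset_value=1000)
```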
  • Way 2: determining the first pixel area based on acquired distance information and intensity information.
  • FIG. 13 is a schematic flowchart of another method for determining the first pixel area provided by the present application. The method includes the following steps:
  • Step 1301 the control device controls the light sources of the light source array to emit a third signal light with a third power.
  • for step 1301, reference may be made to the introduction of step 1201 above, which will not be repeated here.
  • Step 1302 the control device controls the pixels of the pixel array to receive the third echo signal.
  • step 1302 reference may be made to the introduction of the above step 1202, which will not be repeated here.
  • Step 1303 the pixels of the pixel array perform photoelectric conversion on the received third echo signal to obtain a third electrical signal.
  • step 1303 reference may be made to the introduction of the above step 1203, which will not be repeated here.
  • Step 1304 the pixel array sends a third electrical signal to the control device.
  • Step 1305 the control device determines the first distance and the first intensity according to the third electrical signal.
  • a third electrical signal corresponds to one first distance and one first intensity. It can also be understood that there is a one-to-one correspondence among the four: pixel, third electrical signal, first distance, and first intensity.
  • the control device processes the collected third electrical signal (the original signal) to obtain an effective data format and a processable signal form; the processing circuit and algorithm module then calculate the effective data obtained by the signal acquisition circuit to obtain the associated information of the target, such as the intensity of the echo signal used to characterize the reflectivity of the target and the time of flight of the echo signal; further, the first distance can be determined based on the time of flight.
  • the time-of-flight and intensity can be expressed in the form of a statistical histogram.
  • the vertical axis of the statistical histogram can record the intensity, and the time of flight can be collected and recorded by TDC.
  • the maximum number of bits of TDC determines the maximum amount of data that can be recorded. It should be understood that the counting of the TDC has an upper limit, and the third echo signal reflected by the first target may cause the TDC to exceed the upper limit of the counting, that is, the counting is saturated.
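The TDC counting behavior described above (a finite number of bits, hence an upper limit that a strong echo can saturate) can be modeled as clamping histogram bins at the counter's maximum; the 10-bit limit below is an assumed example, not a value from the application.

```python
# Sketch of TDC count saturation: each time-of-flight histogram bin is
# clamped at the counter's upper limit. A 10-bit counter is an assumed example.
TDC_MAX_COUNT = 2**10 - 1

def accumulate(histogram, tdc_bin):
    """Record one photon event in the time-of-flight histogram, saturating at
    the TDC counting upper limit."""
    histogram[tdc_bin] = min(histogram.get(tdc_bin, 0) + 1, TDC_MAX_COUNT)
    return histogram

h = {}
for _ in range(2000):  # a strong echo from a highly reflective first target...
    accumulate(h, 7)   # ...saturates the count in its time bin
```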
  • Step 1306, the control device determines the first intensities corresponding to pixels having the same first distance, subtracts these first intensities pairwise, and, for a pair whose difference is greater than or equal to a second preset value, determines the pixel corresponding to the larger intensity as a pixel in the first pixel area.
  • pixels corresponding to intensities whose difference values are smaller than a second preset value may be determined as pixels in the second pixel area.
  • the pixel corresponding to the smaller intensity of a pair whose difference is greater than or equal to the second preset value may also be determined as a pixel in the second pixel area.
  • the difference between the first intensity corresponding to a pixel in the first pixel area and the first intensity corresponding to a pixel in the second pixel area is greater than or equal to the second preset value, and the first distance corresponding to the pixel in the first pixel area is the same as the first distance corresponding to the pixel in the second pixel area.
  • the control device can first determine which of the 7×7 first distances are the same, then subtract the corresponding first intensities pairwise to determine the differences of the first intensities corresponding to pixels with the same first distance, and, for a pair whose difference is greater than or equal to the second preset value, determine the pixel corresponding to the larger of the two first intensities as a pixel in the first pixel area.
  • the second preset value may be smaller than the first preset value.
  • the intensity difference between echo signals reflected by targets at the same distance is small or even zero; a large intensity difference indicates that a first target may exist. It can also be understood that, when a first target exists in the detection area, the first intensity obtained by the pixels in the pixel array collecting the third signal light reflected by the first target is often much greater than the intensity of the third signal light reflected by a second target at the same distance, and may even be greater than the first intensity obtained from a closer second target, possibly even saturating the counting.
  • the indicator of intensity difference can be defined artificially and is related to actual working conditions such as ambient light intensity; for example, a difference within ±10% may be considered small, and a difference exceeding ±10% may be considered large.
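Step 1306 above, comparing first intensities among pixels that report the same first distance, can be sketched as follows; the pixel numbers, distances, intensity values, and the second preset value are all hypothetical.

```python
def first_area_by_distance(distance_by_pixel, intensity_by_pixel, second_preset_value):
    """Way-2 style rule: among pixels reporting the same first distance, a
    pixel whose first intensity exceeds another pixel's by at least the
    second preset value is assigned to the first pixel area."""
    first = set()
    pixels = list(distance_by_pixel)
    for a in pixels:
        for b in pixels:
            same_distance = a != b and distance_by_pixel[a] == distance_by_pixel[b]
            if same_distance and intensity_by_pixel[a] - intensity_by_pixel[b] >= second_preset_value:
                first.add(a)
    return first

# hypothetical pixels at the same 10 m distance; pixel 11 sees a weak echo
d = {43: 10.0, 44: 10.0, 11: 10.0}
i = {43: 900, 44: 880, 11: 120}
```

With these values, pixels 43 and 44 are assigned to the first pixel area because each exceeds pixel 11's intensity by far more than the preset difference, while their mutual difference stays small.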
  • Way 3: determining the first pixel area based on a point cloud image. When a first target exists, the point cloud image acquired by the control device will be affected, because the first target reflects a strong third echo signal (including the echo signal formed by reflecting the signal light and the echo signal formed by reflecting background noise). The third echo signal will not only trigger the response of the pixels in the first pixel area corresponding to the spatial position of the first target, causing them to output (possibly saturated) third electrical signals, but will also cause optical crosstalk affecting other pixels around the first pixel area. Therefore, the size of the first target in the output point cloud image and the definition and sharpness of its contour edge deteriorate, a large number of stray points appear, and the overall contour is stretched or trails; that is, the point cloud distribution corresponding to the spatial position of the first target is abnormal (in the 3D point cloud image, the point cloud distribution may extend or be pulled in the front, back, left, right, up, and down directions).
  • The control device may determine the first intensities of the stray points in the area where the stray points are distributed, and determine the points whose first intensity is greater than the third preset value as pixels in the first pixel area.
  • The third preset value may be preset; for example, the third preset value may be equal to the saturation intensity multiplied by a coefficient less than 1.
  • Alternatively, the third preset value may be an intensity contour line in the point cloud image, and the points falling within that intensity contour line in the area where the stray points are distributed are determined as pixels in the first pixel area.
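The stray-point threshold test described above can be sketched as follows. The saturation count of 255 and the coefficient 0.8 are invented example values (the text only requires a coefficient less than 1), and the dictionary layout of the stray-point data is likewise an assumption for illustration.

```python
# Hypothetical sketch: keep only stray points whose first intensity
# exceeds the third preset value. SATURATION and COEFF are assumed
# example values, not taken from the patent text.
SATURATION = 255
COEFF = 0.8
THIRD_PRESET = SATURATION * COEFF   # third preset value = 204.0

def stray_points_in_first_area(stray_intensities):
    """stray_intensities: {point_id: first_intensity} for the stray region.

    Returns the sorted ids of points assigned to the first pixel area.
    """
    return sorted(p for p, inten in stray_intensities.items()
                  if inten > THIRD_PRESET)
```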
  • Way 4: Determine the first pixel area based on at least two frames of images.
  • Referring to FIG. 14, it is a schematic flowchart of another method for determining the first pixel area provided by the present application. The method includes the following steps:
  • Step 1401 the control device controls the light sources of the light source array to emit a third signal light with a third power.
  • For step 1401, reference may be made to the introduction of step 1201 above, which will not be repeated here.
  • Step 1402 the control device controls the pixels of the pixel array to receive the third echo signal.
  • For step 1402, reference may be made to the introduction of step 1202 above, which will not be repeated here.
  • Step 1403 the pixels of the pixel array perform photoelectric conversion on the received third echo signal to obtain a third electrical signal.
  • For step 1403, reference may be made to the introduction of step 1203 above, which will not be repeated here.
  • Step 1404 the control device determines a third pixel area according to the third echo signal.
  • the first intensity corresponding to the pixels in the third pixel area is greater than or equal to the fourth preset value.
  • the fourth preset value may be equal to the first preset value, for example, may also be a value close to saturation or a saturated value on the ordinate of the statistical histogram.
  • For a possible implementation of step 1404, refer to step 1205 in the foregoing Way 1, or refer to steps 1305 and 1306 in the foregoing Way 2.
  • the third pixel area can be determined based on the above steps 1401 to 1404 . It can also be understood that, based on the above steps 1401 to 1404, a frame of image may be obtained, which may be referred to as a first image.
  • the third pixel area includes the first pixel area, and may further include pixels crosstalked by echo signals reflected by the first object. In other words, the third pixel area may already include pixels affected by the optical crosstalk of the echo signal reflected by the first object.
  • the pixels in the third pixel region include pixel 33 , pixel 34 , pixel 35 , pixel 43 , pixel 44 , pixel 45 , pixel 53 , pixel 54 and pixel 55 .
  • These pixels may already include pixels crosstalked by echo signals reflected by the first target.
  • the following steps 1405 to 1407 may also be performed.
  • If the gating method of the pixel array is to select pixels column by column, the adjacent pixels in the column direction may suffer optical crosstalk; if the gating method of the pixel array is to select pixels row by row, the adjacent pixels in the row direction may suffer optical crosstalk; if the pixel array is gated diagonally, the adjacent pixels on the diagonal may suffer optical crosstalk.
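The dependence of the crosstalk direction on the gating mode can be captured in a small helper. The (row, column) offset convention below is an assumption made for illustration; it matches the later example in which a column-by-column scan spills crosstalk into the neighbouring columns.

```python
# Hypothetical helper: which neighbours of pixel (r, c) are most at risk
# of optical crosstalk, given the gating mode described above. Offsets
# follow an assumed (row, column) coordinate convention.
def crosstalk_neighbors(r, c, gating):
    if gating == "column":    # column-by-column gating: adjacent columns
        return [(r, c - 1), (r, c + 1)]
    if gating == "row":       # row-by-row gating: adjacent rows
        return [(r - 1, c), (r + 1, c)]
    if gating == "diagonal":  # diagonal gating: neighbours on the diagonal
        return [(r - 1, c - 1), (r + 1, c + 1)]
    raise ValueError(f"unknown gating mode: {gating}")
```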
  • Step 1405 the control device controls the light sources in the third light source region to emit fourth signal light with fourth power, and controls the light sources in the fourth light source region to emit fifth signal light with fifth power.
  • the fifth power is greater than the fourth power.
  • the fifth power may be equal to the foregoing second power, or may also be equal to the foregoing third power, and the fourth power may be equal to the foregoing first power.
  • the fifth power may be peak power.
  • the third light source area corresponds to the third pixel area. It can also be understood that the fourth echo signal obtained by reflecting the fourth signal light emitted by the light source in the third light source area from the first target in the detection area can be received by the pixels in the third pixel area.
  • The control device may send a fifth control signal to the light sources in the third light source area and a sixth control signal to the light sources in the fourth light source area; the fifth control signal is used to control the light sources in the third light source area to emit the fourth signal light at the fourth power, and the sixth control signal is used to control the light sources in the fourth light source area to emit the fifth signal light at the fifth power.
  • the third light source area may be gated in a third gating manner, and the light sources in the gated third light source area may emit fourth signal light into the detection area at a fourth power.
  • the light sources in the fourth light source area may be gated in a third gating manner, and the gated light sources in the fourth light source area may emit fifth signal light to the detection area at fifth power.
  • The third gating manner may be indicated by indication information carried in the fifth control signal; for example, the indication information may be the addressing timing of the light sources in the light source array. That is, the fifth control signal can also be used to control the addressing timing of the light sources in the third light source area, and the sixth control signal can also be used to control the addressing timing of the light sources in the fourth light source area.
  • the specific gating mode adopted by the light source array may also be pre-set or pre-agreed, which is not limited in this application.
  • the third gating method may be the same as or different from the first gating method, and the third gating method may be the same as or different from the second gating method, which is not limited in this application.
  • The control device sends a fifth control signal to the third light source area, and the fifth control signal is used to control the third light source area to gate the light sources in the third gating manner and emit the fourth signal light at the fourth power.
  • The control device sends a sixth control signal to the fourth light source area, and the sixth control signal is used to control the fourth light source area to gate the light sources in the third gating manner and emit the fifth signal light at the fifth power.
  • Step 1406 the control device controls the pixel array to receive the fourth echo signal and the fifth echo signal.
  • the fourth echo signal includes reflected light of the fourth signal light reflected by the first target
  • the fifth echo signal includes reflected light of the fifth signal light reflected by the second target.
  • For a possible implementation of step 1406, refer to the introduction in FIG. 15 or FIG. 16 below, which will not be repeated here.
  • Step 1407 the control device determines the first pixel area and the second pixel area according to the fourth echo signal and the fifth echo signal.
  • The control device may perform photoelectric conversion on the fourth echo signal and the fifth echo signal to obtain the fourth electrical signal and the fifth electrical signal.
  • the fourth electrical signal carries the intensity of the fourth echo signal (which may be called the second intensity)
  • the fifth electrical signal carries the intensity of the fifth echo signal (which may be called the third intensity).
  • a fourth electrical signal corresponds to a second intensity
  • a fifth electrical signal corresponds to a third intensity.
  • The control device can determine 7×7 intensities (including the second intensity and the third intensity) according to the 7×7 electrical signals (including the fourth electrical signal and the fifth electrical signal).
  • The control device can determine the pixels whose second or third intensity is greater than or equal to the fifth preset value as pixels in the first pixel area, and determine the pixels whose second or third intensity is less than the fifth preset value as pixels in the second pixel area. The fifth preset value may be equal to the first preset value.
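The partition in this step can be sketched directly. The 2×2 shape in the test and the threshold value are assumptions; the patent's example uses a 7×7 array and leaves the fifth preset value unspecified beyond its possible equality with the first preset value.

```python
import numpy as np

# Hypothetical sketch: split pixels into the first and second pixel
# areas by comparing each intensity (second or third intensity) against
# the fifth preset value, as described above.
def split_pixel_areas(intensities, fifth_preset):
    first = intensities >= fifth_preset   # first pixel area mask
    second = ~first                       # remaining pixels: second area
    return first, second
```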
  • The control device may perform photoelectric conversion on the fourth echo signal and the fifth echo signal to obtain the fourth electrical signal and the fifth electrical signal.
  • the intensity of the fourth echo signal carried in the fourth electrical signal may be referred to as the second intensity
  • the intensity of the fifth echo signal carried in the fifth electrical signal may be referred to as the third intensity
  • The control device can determine, according to the 7×7 electrical signals (including the fourth electrical signal and the fifth electrical signal), 7×7 intensities (including the second intensity and the third intensity) and 7×7 distances (including the second distance and the first distance).
  • The control device determines the intensities corresponding to pixels whose second and first distances are equal, subtracts the corresponding equal-distance intensities pairwise, determines the pixel with the greater intensity whose difference is greater than or equal to the second preset value as a pixel in the first pixel area, and determines the pixels whose difference is less than the second preset value as pixels in the second pixel area.
  • The control device determines that pixel 43, pixel 44, and pixel 45 are pixels in the first pixel area, and determines the pixels in the pixel array other than pixel 43, pixel 44, and pixel 45 as the second pixel area.
  • Determining the pixels in the first pixel area based on the third pixel area includes, but is not limited to, the possible ways given above. For example, the first pixel area can also be determined from the third pixel area through a centering algorithm; specifically, the middle area of the third pixel area can be determined as the first pixel area. Alternatively, the pixels in the central area of the third pixel area can be determined as a quasi-first pixel area, and the pixels with the higher intensity among those in the quasi-first pixel area whose intensity differences are relatively large are then determined as pixels in the first pixel area.
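The centering algorithm mentioned here could be as simple as trimming the third pixel area along the axis affected by crosstalk. The one-pixel trim and the axis choice below are assumptions chosen to match the running example, in which a third pixel area of rows 3-5, columns 3-5 reduces to a first pixel area of row 4, columns 3-5.

```python
# Hypothetical centering sketch: shrink the third pixel area by one
# pixel on each side along the crosstalk axis, keeping the middle band
# as the (quasi-)first pixel area. Ranges are inclusive (start, end).
def center_region(rows, cols, crosstalk_axis="row"):
    r0, r1 = rows
    c0, c1 = cols
    if crosstalk_axis == "row":     # crosstalk spreads across rows
        return (r0 + 1, r1 - 1), (c0, c1)
    return (r0, r1), (c0 + 1, c1 - 1)   # crosstalk spreads across columns
```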
  • In this solution, the third pixel area can be determined first. Since the third pixel area may include pixels crosstalked by the reflected light of the first target, further adaptively adjusting the power of the light sources in the corresponding third light source area makes it possible to accurately determine, from the third pixel area, the first pixel area corresponding to the spatial position of the first target, thereby helping to obtain complete and accurate associated information of the detection area over the entire field of view (such as associated information of the first target and the second target).
  • The process of determining the first pixel area based on the third pixel area in steps 1405 to 1407 above can be understood as acquiring a second frame of image (which may be called the second image). The specific process can be divided into five stages: a first stage gated to the area before the first edge area of the third pixel area, a second stage gated to the first edge area of the third pixel area, a third stage gated to the third pixel area, a fourth stage gated to the second edge area of the third pixel area, and a fifth stage gated to the area after the second edge area of the third pixel area. That is, the pixels gated in the first stage are those before the first edge area of the third pixel area (such as the pixels in the row or column preceding the first edge area), the pixels gated in the second stage are those in the first edge area of the third pixel area, the pixels gated in the third stage are those in the third pixel area, the pixels gated in the fourth stage are those in the second edge area of the third pixel area, and the pixels gated in the fifth stage are those after the second edge area of the third pixel area (such as the pixels in the row or column following the second edge area).
  • Referring to FIG. 15, it is a schematic flowchart of a method for determining the first pixel area based on the third pixel area provided in the present application.
  • the pixel array and the light source array are gated column by column, starting from the first column, as an example.
  • The third pixel area is taken as including the (b_i~b_j)th columns in the (a_i~a_j)th rows of the pixel array as an example, where a_i and b_i are both integers greater than 1, a_j is an integer greater than a_i, and b_j is an integer greater than b_i. It should be understood that the pixels included in the (b_i~b_j)th columns of the (a_i~a_j)th rows of the pixel array are the pixels of the pixel array whose rows are rows a_i to a_j and whose columns are columns b_i to b_j; the pixels in the (b_i~b_j)th columns of the (a_i~a_j)th rows are the same as the pixels in the (a_i~a_j)th rows of the (b_i~b_j)th columns.
  • the third pixel area includes the (3-5)th column in the (3-5)th row in the pixel array, that is, the third pixel area includes pixel 33, pixel 34, pixel 35, pixel 43, Pixel 44, Pixel 45, Pixel 53, Pixel 54, and Pixel 55.
  • The control device controls the light source array to gate the light sources column by column, and the gated light source column emits the fifth signal light at the fifth power; correspondingly, the control device controls the pixel array to gate the corresponding pixel columns column by column, and the pixels in the gated pixel column receive the fifth echo signal from the detection area.
  • control device controls the light source in the first column to emit the fifth signal light at the fifth power; correspondingly, the control device controls the pixels in the first column to be selected.
  • control device may execute the following step 1501 .
  • Step 1501, the control device controls the light sources in the b_(i-1)th column of the light source array to emit the fifth signal light at the fifth power, and controls the gating of the pixels in the (b_(i-1)~b_i)th columns of the pixel array. The emitting field of view of the light sources in the b_(i-1)th column corresponds to the receiving field of view of the pixels in the b_(i-1)th column; it should be understood that the pixels in the b_(i-1)th column are the pixels in the first edge area of the third pixel area.
  • For example, the control device controls the light sources in the second column of the light source array to emit the fifth signal light at the fifth power; correspondingly, the control device controls the gating of the pixels in the second and third columns of the pixel array. The pixels in the second column and the pixels in the third column can jointly receive the fourth echo signal and the fifth echo signal from the detection area.
  • Because the light source in column b_(i-1) emits the fifth signal light at the fifth power, the fifth signal light is stronger and the corresponding fifth echo signal is also stronger. Since the emitting field of view of the light source in column b_(i-1) corresponds to the receiving field of view of the pixels in column b_(i-1), most of the energy of the fifth echo signal falls on the pixels in column b_(i-1), while some of the fifth echo signal enters the pixels in column b_i. For this reason, the subsequent pixels are gated in a shifted manner, which can reduce the crosstalk of the echo signal reflected by the first target onto the echo signals of other targets (such as the second target) in the detection area.
  • control device may execute the following steps 1502 and 1503.
  • Step 1502, the control device sequentially controls the gating of the pixels in the (a_i~a_j)th rows of the (b_(i+1)~b_j)th columns of the pixel array, and sequentially controls the light sources in the (a_i~a_j)th rows of the (b_i~b_(j-1))th columns to emit the fourth signal light at the fourth power. Sequentially controlling these light sources can be understood as follows: at the i-th moment, the light sources in the (a_i~a_j)th rows of the b_i-th column are controlled to emit the fourth signal light at the fourth power, and correspondingly the pixels in the (a_i~a_j)th rows of the (b_i+1)th column are gated; at the (i+1)th moment, the light sources in the (a_i~a_j)th rows of the (b_i+1)th column are controlled to emit the fourth signal light at the fourth power, and correspondingly the pixels in the (a_i~a_j)th rows of the (b_i+2)th column are gated; and so on, until at the (j-1)th moment the light sources in the (a_i~a_j)th rows of the b_(j-1)th column are controlled to emit the fourth signal light at the fourth power, and the pixels in the (a_i~a_j)th rows of the b_j-th column are gated.
  • For example, at one moment the control device controls the light sources in the (3-5)th rows of the third column of the light source array (i.e., light source 33, light source 43, and light source 53) to emit the fourth signal light at the fourth power, and correspondingly gates the pixels in the (3-5)th rows of the fourth column of the pixel array (i.e., pixel 34, pixel 44, and pixel 54). At the next moment, the control device controls the light sources in the (3-5)th rows of the fourth column of the light source array (i.e., light source 34, light source 44, and light source 54) to emit the fourth signal light at the fourth power, and correspondingly gates the pixels in the (3-5)th rows of the fifth column of the pixel array (i.e., pixel 35, pixel 45, and pixel 55).
  • The column where the gated pixel is located is shifted relative to the column where the gated light source is located; specifically, the gated pixel column lags the gated light source column by one column.
  • the fourth power is smaller than the fifth power.
  • the fifth power may be peak power.
  • Step 1503, the control device controls the light sources in the light source array other than those in the (a_i~a_j)th rows of the (b_i~b_(j-1))th columns to emit the fifth signal light at the fifth power, and controls the gating of the pixels in the pixel array other than those in the (a_i~a_j)th rows of the (b_(i+1)~b_j)th columns. For example, the control device controls the light sources in the light source array other than the light sources in the (3-5)th rows of the (3-4)th columns (i.e., light source 33, light source 43, light source 53, light source 34, light source 44, and light source 54) to emit the fifth signal light at the fifth power, and gates the pixels in the pixel array other than those in the (3-5)th rows of the (4-5)th columns (i.e., pixel 34, pixel 44, pixel 54, pixel 35, pixel 45, and pixel 55).
  • Optionally, step 1502 may also be that the control device sequentially controls the gating of the pixels in the (b_(i+1)~b_j)th columns of the pixel array, and controls the light sources in the (b_i~b_(j-1))th columns of the light source array to emit the fourth signal light at the fourth power. Correspondingly, the control device may gate the pixels in the pixel array other than those in the (b_(i+1)~b_j)th columns, and control the light sources in the light source array other than those in the (b_i~b_(j-1))th columns to emit the fifth signal light at the fifth power.
  • the above-mentioned step 1502 may also be that the control device controls the light source in the third column of the light source array to emit the fourth signal light at the fourth power, and correspondingly controls the gate of the pixel in the fourth column of the pixel array.
  • the control device controls the light source in the fourth column of the light source array to emit the fourth signal light at the fourth power, and correspondingly controls the gate of the pixel in the fifth column of the pixel array.
  • Correspondingly, the above step 1503 may also be to control the light sources in the light source array other than those in the third and fourth columns to emit the fifth signal light at the fifth power, and to gate the pixels other than those in the fourth and fifth columns.
  • control device may execute step 1504 .
  • Step 1504, the control device controls the light sources in the b_j-th column of the light source array to emit the sixth signal light at the sixth power, and stops gating the pixels in the (b_j+1)th column of the pixel array.
  • the sixth power may be any power greater than 0.
  • By controlling the light sources in the b_j-th column to emit the sixth signal light at the sixth power while gating no pixels at this time, the gated pixel columns and light source columns are no longer shifted but aligned when the pixels after the third pixel area (such as the pixels in the (b_j+1)th column) are gated. For example, the light sources in the fifth column of the light source array are controlled to emit the sixth signal light at the sixth power, and correspondingly no pixel in the pixel array is gated. On this basis, when the pixels after the third pixel area are gated, the light sources in the sixth column of the light source array are controlled to emit the fifth signal light at the fifth power; correspondingly, the pixels in the sixth column of the pixel array are gated.
  • That is, the control device controls the light sources in the sixth column to emit the fifth signal light at the fifth power and gates the pixels in the sixth column of the pixel array; the gated pixels in the sixth column can receive the fifth echo signal, and so on, until the last column of the light source array has been scanned.
  • Each stage in this example is illustrated with one column as an example. If a stage contains multiple columns, the process given for one column in the corresponding stage is simply repeated, which will not be described again in this application.
  • In this way, the energy of the echo signal reflected by the first target can be reduced, thereby reducing the crosstalk of that echo signal to surrounding pixels. Moreover, shifting the gated pixels in the pixel array can further reduce the crosstalk of the echo signal reflected by the first target.
  • It should be noted that if the third pixel area starts from the first column of the pixel array, there is no need to shift the gated pixels; the pixel columns can be gated sequentially from the first column while the light sources in the corresponding columns of the light source array emit the fourth signal light at the fourth power. If the last column of the third pixel area is the last column of the pixel array, the fourth and fifth stages need not be executed.
  • Referring to FIG. 16, it is a schematic flowchart of another method for determining the first pixel area based on the third pixel area provided by the present application.
  • the pixel array and the light source array are gated row by row, starting from the first row, as an example.
  • The third pixel area includes the (b_i~b_j)th columns in the (a_i~a_j)th rows of the pixel array as an example; for details, refer to the foregoing related introduction.
  • The control device controls the light source array to gate the light sources row by row, and the gated light sources emit the fifth signal light at the fifth power; correspondingly, the control device controls the pixel array to gate the pixels row by row, and the gated pixels receive the fifth echo signal from the detection area.
  • For the specific process, refer to steps 1201 to 1203 of the foregoing Way 1 to obtain part of the fifth electrical signals.
  • control device controls the light source in the first row to emit the fifth signal light at the fifth power; correspondingly, the control device controls the pixels in the first row to be selected.
  • control device may execute the following step 1601 .
  • Step 1601, the control device controls the light sources in the a_(i-1)th row of the light source array to emit the fifth signal light at the fifth power, and controls the gating of the pixels in the (a_(i-1)~a_i)th rows of the pixel array. The emitting field of view of the light sources in the a_(i-1)th row corresponds to the receiving field of view of the pixels in the a_(i-1)th row; it should be understood that the pixels in the a_(i-1)th row are the pixels in the first edge area of the third pixel area.
  • the control device controls the second row of light sources in the light source array to emit the fifth signal light at the fifth power; correspondingly, the control device controls the gate of the pixels in the second row and the third row in the pixel array.
  • the pixels in the second row and the third row can be jointly used to receive the fourth echo signal and the fifth echo signal from the detection area.
  • control device may execute the following steps 1602 and 1603.
  • Step 1602, the control device controls the light sources in the (b_i~b_j)th columns of the (a_i~a_(j-1))th rows of the light source array to emit the fourth signal light at the fourth power, and controls the gating of the pixels in the (b_i~b_j)th columns of the (a_(i+1)~a_j)th rows of the pixel array.
  • For example, at one moment the control device controls the light sources in the (3-5)th columns of the third row of the light source array (i.e., light source 33, light source 34, and light source 35) to emit the fourth signal light at the fourth power, and correspondingly gates the pixels in the (3-5)th columns of the fourth row of the pixel array (i.e., pixel 43, pixel 44, and pixel 45). At the next moment, the control device controls the light sources in the (3-5)th columns of the fourth row of the light source array (i.e., light source 43, light source 44, and light source 45) to emit the fourth signal light at the fourth power, and correspondingly gates the pixels in the (3-5)th columns of the fifth row of the pixel array (i.e., pixel 53, pixel 54, and pixel 55).
  • The row where the gated pixel is located is shifted relative to the row where the gated light source is located; specifically, the gated pixel row lags the gated light source row by one row.
  • Step 1603, the control device controls the gating of the pixels in the pixel array other than those in the (b_i~b_j)th columns of the (a_(i+1)~a_j)th rows, and controls the light sources in the light source array other than those in the (b_i~b_j)th columns of the (a_i~a_(j-1))th rows to emit the fifth signal light at the fifth power.
  • For step 1603, refer to the aforementioned step 1503; specifically, "row" in step 1503 can be replaced with "column", and "column" can be replaced with "row".
  • control device may execute step 1604 .
  • Step 1604, the control device controls the light sources in the a_j-th row of the light source array to emit the sixth signal light at the sixth power, and stops gating the pixels in the (a_j+1)th row of the pixel array.
  • step 1604 refer to the aforementioned step 1504. Specifically, the "row” in the above step 1504 can be replaced with “column”, and the “column” can be replaced with "row”.
  • That is, the control device controls the light sources in the sixth row to emit the fifth signal light at the fifth power and gates the pixels in the sixth row of the pixel array; the gated pixels in the sixth row can receive the fifth echo signal, and so on, until the last row of the light source array has been scanned.
  • Each stage in this example is described with one row as an example. If a stage contains multiple rows, the process given for one row in the corresponding stage is simply repeated, which will not be described again in this application.
  • It should be noted that if the third pixel area starts from the first row of the pixel array, there is no need to shift the gated pixels; the pixel rows can be gated sequentially from the first row while the light sources in the corresponding rows of the light source array emit the fourth signal light at the fourth power. If the last row of the third pixel area is the last row of the pixel array, the fourth and fifth stages need not be executed.
  • After the first pixel area corresponding to the spatial position of the first target is determined, relatively accurate associated information of the detection area can be obtained by adjusting the power of the light sources in the first light source area corresponding to the first pixel area and gating the pixels of the first pixel area in a shifted manner. Specifically, the process can be divided into the following five stages: stage A gated to the area before the first edge area of the first pixel area, stage B gated to the first edge area of the first pixel area, stage C gated to the first pixel area, stage D gated to the second edge area of the first pixel area, and stage E gated to the area after the second edge area of the first pixel area.
  • That is, the pixels gated in stage A are those before the first edge area of the first pixel area (such as the pixels in the row or column preceding the first edge area), the pixels gated in stage B are those in the first edge area of the first pixel area, the pixels gated in stage C are those in the first pixel area, the pixels gated in stage D are those in the second edge area of the first pixel area, and the pixels gated in stage E are those after the second edge area of the first pixel area (such as the pixels in the row or column following the second edge area).
  • Referring to FIG. 17, it is a schematic flowchart of a method for acquiring associated information of a detection area provided by the present application.
  • the pixel array and the light source array are gated column by column, both starting from the first column, as an example.
  • The first pixel area includes the (B_i~B_j)th columns in the (A_i~A_j)th rows of the pixel array as an example, where A_i and B_i are both integers greater than 1, A_j is an integer greater than A_i, and B_j is an integer greater than B_i. It should be understood that the pixels included in the (B_i~B_j)th columns of the (A_i~A_j)th rows of the pixel array are the pixels of the pixel array whose rows are rows A_i to A_j and whose columns are columns B_i to B_j; the pixels in the (B_i~B_j)th columns of the (A_i~A_j)th rows are the same as the pixels in the (A_i~A_j)th rows of the (B_i~B_j)th columns.
  • the first pixel area includes the 4th row and the (3-5)th column in the pixel array, that is, the first pixel area includes pixel 43 , pixel 44 and pixel 45 .
  • the control device controls the light source array to select the light sources column by column, and the selected light source columns emit the first signal light according to the first power.
  • the control device controls the gate of corresponding pixel columns in the pixel array column by column.
  • The control device controls the light sources in the first column to emit the first signal light at the first power; correspondingly, it controls the gating of the pixels in the first column, and the gated pixels in the first column can receive the first echo signal from the detection area.
  • in stage B, the control device controls the pixels in the first edge region of the first pixel region to be gated.
  • Step 1701: the control device controls the light sources in column B_{i-1} of the light source array to emit the second signal light at the second power, and gates the pixels in columns (B_{i-1}~B_i) of the pixel array.
  • the gated pixels in columns (B_{i-1}~B_i) are used to receive the second echo signal from the detection area.
  • the emission field of view of the light sources in column B_{i-1} corresponds to the reception field of view of the pixels in column B_{i-1}. It should be understood that the pixels in column B_{i-1} are the pixels in the first edge area of the first pixel area.
  • the control device controls the light sources in the second column of the light source array to emit the second signal light at the second power; correspondingly, it gates the pixels in the second column and the third column of the pixel array.
  • the pixels in the second column and the pixels in the third column can be jointly used to receive the second echo signal from the detection area.
  • the control device controls the pixels in the first pixel region to be gated.
  • Step 1702: the control device sequentially controls the light sources in rows (A_i~A_j) of columns (B_i~B_{j-1}) to emit the first signal light at the first power, and sequentially gates the pixels in rows (A_i~A_j) of columns (B_{i+1}~B_j). Sequentially controlling these light sources can be understood as follows: at the i-th moment, the light sources in rows (A_i~A_j) of column B_i emit the first signal light at the first power and, correspondingly, the pixels in rows (A_i~A_j) of column B_{i+1} are gated; at the (i+1)-th moment, the light sources in rows (A_i~A_j) of column B_{i+1} emit the first signal light at the first power and, correspondingly, the pixels in rows (A_i~A_j) of column B_{i+2} are gated; and so on, until at the (j-1)-th moment the light sources in rows (A_i~A_j) of column B_{j-1} emit the first signal light at the first power and, correspondingly, the pixels in rows (A_i~A_j) of column B_j are gated.
  • the control device controls the light source in the 4th row of the 3rd column of the light source array (that is, light source 43) to emit the first signal light at the first power; correspondingly, it gates the pixel in the 4th row of the 4th column of the pixel array (that is, pixel 44).
  • the control device controls the light source in the 4th row of the 4th column of the light source array (that is, light source 44) to emit the first signal light at the first power, and correspondingly gates the pixel in the 4th row of the 5th column of the pixel array (that is, pixel 45). It should be noted that the column of the gated pixels is offset from the column of the gated light sources.
  • the column where the selected pixel is located is one column behind the column where the selected light source is located.
  • the crosstalk of the echo signal reflected by the first target onto the echo signals reflected by other targets (such as the second target) in the detection area can be reduced, improving the crosstalk phenomenon, so that effective detection over the full field of view of the detection system can be realized.
  • Step 1703: the control device controls the light sources in the light source array other than those in rows (A_i~A_j) of columns (B_i~B_{j-1}) to emit the second signal light at the second power, and gates the pixels in the pixel array other than those in rows (A_i~A_j) of columns (B_{i+1}~B_j).
  • the control device controls the light sources in the light source array other than the light source in the 4th row of the 3rd column to emit the second signal light at the second power; correspondingly, it gates the pixels other than the pixel in the 4th row of the 4th column.
  • step 1702 may also be that the control device sequentially gates the pixels in columns (B_{i+1}~B_j) of the pixel array, and controls the light sources in columns (B_i~B_{j-1}) of the light source array to emit the first signal light at the first power. Correspondingly, the control device may gate the pixels in the pixel array other than those in columns (B_{i+1}~B_j), and control the light sources in the light source array other than those in columns (B_i~B_{j-1}) to emit the second signal light at the second power.
  • control device controls the pixels in the second edge area of the first pixel area to be gated.
  • Step 1704: the control device controls the light sources in column B_j of the light source array to emit the sixth signal light at the sixth power, and stops gating the pixels in column B_{j+1} of the pixel array.
  • by controlling the light sources in column B_j to emit the sixth signal light at the sixth power while not gating any pixels at this moment, when the pixels after the first pixel area (such as the pixels in column B_{j+1}) are subsequently gated, the gated pixel columns and light source columns are no longer offset, but aligned.
  • the light sources in the fifth column of the light source array are controlled to emit the sixth signal light at the sixth power, and correspondingly, no pixel in the pixel array is gated. Based on this, after the pixels in the first pixel area are gated, the light sources in the sixth column of the light source array are controlled to emit the second signal light at the second power; correspondingly, the pixels in the sixth column of the pixel array are gated.
  • Phase E can repeat the process of the above-mentioned phase A.
  • the control device controls the light source in the sixth column to emit the second signal light at the second power
  • the control device gates the pixels in the sixth column of the pixel array
  • the gated pixels in the sixth column can receive the second echo signal, and so on until the last column of the light source array is scanned.
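As an illustrative aid only, the stage A to stage E column schedule described above can be sketched as follows. The function name, the 0-based indexing, and the power labels ("second", "first", "sixth") are assumptions made for this sketch, not part of the application.

```python
def staggered_schedule(n_cols, b_lo, b_hi):
    """Yield (lit_column, gated_pixel_columns, power_label) per step for a
    scan whose first pixel area spans columns b_lo..b_hi (inclusive)."""
    sched = []
    for col in range(n_cols):
        if col == b_lo - 1:
            # edge column before the first pixel area: gate two pixel columns
            sched.append((col, [col, col + 1], "second"))
        elif b_lo <= col < b_hi:
            # inside the area: reduced power, pixel column offset by one
            sched.append((col, [col + 1], "first"))
        elif col == b_hi:
            # last area column: emit but gate nothing, realigning the scan
            sched.append((col, [], "sixth"))
        else:
            # ordinary column: aligned gating at the higher power
            sched.append((col, [col], "second"))
    return sched

# 8-column array with the first pixel area in columns 3..5
plan = staggered_schedule(8, 3, 5)
```

After the empty "sixth power" step, the schedule returns to aligned column gating, matching the stage E description.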
  • the first pixel area consists of the pixels corresponding to the spatial position of the first target; that is, the first pixel area does not include pixels crosstalked by the first echo signal. Therefore, accurate and complete associated information of the detection area can be obtained.
  • FIG. 17 is illustrated by taking the same scanning mode as that given in FIG. 15 above as an example. It should be understood that, after the first light source region and the first pixel region are determined, the associated information of the detection region may also be obtained in other possible manners, which is not limited in the present application. It should be noted that the scanning method involves the hardware level of the detection system, such as driver design, data readout circuit design, thermal evaluation, energy utilization, and the impact on the performance of the detection system, where the performance of the detection system includes but is not limited to detection accuracy, detection distance, and so on.
  • the control device controls some light sources in the light source array to emit signal light at a certain power.
  • the control device may send corresponding control signals to the light source array.
  • the control device controlling certain pixels in the pixel array may be that the control device sends a corresponding control signal to the pixel array. It should be understood that the sending of the control signal to the light source array by the control device and/or the sending of the control signal to the pixel array are both examples, and the present application does not limit how to control specifically.
  • Fig. 18 and Fig. 19 are schematic structural diagrams of a possible control device provided in the present application. These control devices can be used to implement the functions of the control devices in the above method embodiments, and therefore can also achieve the beneficial effects of the above method embodiments.
  • the control device 1800 includes a processing module 1801 and a transceiver module 1802 .
  • the control device 1800 is used to implement the functions of the control device in the method embodiments shown in FIG. 11 , FIG. 12 , FIG. 13 , FIG. 14 , FIG. 15 , FIG. 16 or FIG. 17 .
  • the control device controls the light sources in the second light source area to emit the second signal light at the second power, and controls the pixels in the first pixel area to receive the first echo signal obtained after the first signal light is reflected by the first target, where the first pixel area corresponds to the spatial position of the first target
  • the first light source area corresponds to the first pixel area
  • the second light source area corresponds to the second pixel area
  • the second power is greater than the first power
  • processing module 1801 and the transceiver module 1802 can be directly obtained by referring to the related descriptions in the method embodiment shown in FIG. 11 , and will not be repeated here.
  • for the light source areas and the pixel areas, refer to the descriptions of the light source array and the pixel array above, which are not repeated here.
  • processing module 1801 in the embodiment of the present application may be implemented by a processor or a processor-related circuit module
  • transceiver module 1802 may be implemented by an interface circuit or an interface circuit-related circuit module.
  • the present application further provides a control device 1900 .
  • the control device 1900 may include at least one processor 1901 and an interface circuit 1902 .
  • the processor 1901 and the interface circuit 1902 are coupled to each other.
  • the interface circuit 1902 may be an input and output interface.
  • the control device 1900 may further include a memory 1903 for storing instructions executed by the processor 1901 or storing input data required by the processor 1901 to execute the instructions or storing data generated by the processor 1901 after executing the instructions.
  • the processor 1901 is used to execute the functions of the above-mentioned processing module 1801
  • the interface circuit 1902 is used to execute the functions of the above-mentioned transceiver module 1802 .
  • FIG. 20 is a schematic diagram of a possible lidar architecture provided by the present application.
  • the lidar 2000 may include a transmitting module 2001, a receiving module 2002, and a control device 2003 for executing any of the above method embodiments.
  • the transmitting module 2001 is used to transmit the first signal light according to the first power, and transmit the second signal light according to the second power;
  • the receiving module 2002 is used to receive the first echo signal from the detection area; the first echo signal includes the reflected light of the first signal light reflected by the first target;
  • the function of the control device 2003 can refer to the related description above, and will not be repeated here.
  • for the transmitting module 2001, refer to the introduction of the transmitting module above; for the receiving module 2002, refer to the introduction of the receiving module above, which are not repeated here.
  • the terminal device may include a control device for executing any of the foregoing method embodiments. Further, optionally, the terminal device may further include a memory, and the memory is used to store programs or instructions. Certainly, the terminal device may also include other components, such as a wireless control device and the like. Wherein, for the control device, reference may be made to the description of the above control device, which will not be repeated here.
  • the terminal device may further include the above-mentioned transmitting module 2001 and receiving module 2002 . That is to say, the terminal device may include the aforementioned lidar 2000 .
  • the terminal device can be, for example, a vehicle (such as an unmanned car, a smart car, an electric car, or a digital car, etc.), a robot, a surveying and mapping device, a drone, a smart home device (such as a TV, a sweeping robot, a smart desk lamp, etc.) , audio system, intelligent lighting system, electrical control system, home background music, home theater system, intercom system, or video surveillance, etc.), intelligent manufacturing equipment (such as industrial equipment), intelligent transportation equipment (such as AGV, unmanned transport vehicle , or trucks, etc.), or smart terminals (mobile phones, computers, tablets, handheld computers, desktops, headphones, audio, wearable devices, vehicle-mounted devices, virtual reality devices, augmented reality devices, etc.), etc.
  • control device includes hardware structures and/or software modules corresponding to each function.
  • the present application can be implemented in the form of hardware or a combination of hardware and computer software in combination with the modules and method steps described in the embodiments disclosed herein. Whether a certain function is executed by hardware, or by computer software driving hardware, depends on the specific application scenario and design constraints of the technical solution.
  • the method steps in the embodiments of the present application may be implemented by means of hardware, or may be implemented by means of a processor executing software instructions.
  • Software instructions may be composed of corresponding software modules, and software modules may be stored in random access memory (RAM), flash memory, read-only memory (ROM), programmable read-only memory (programmable ROM, PROM), erasable programmable read-only memory (erasable PROM, EPROM), electrically erasable programmable read-only memory (electrically EPROM, EEPROM), a register, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium known in the art.
  • An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium.
  • the storage medium may also be a component of the processor.
  • the processor and storage medium can be located in the ASIC.
  • the ASIC can be located in the control device.
  • the processor and the storage medium can also exist in the control device
  • all or part of them may be implemented by software, hardware, firmware or any combination thereof.
  • When implemented using software, it may be implemented in whole or in part in the form of a computer program product.
  • a computer program product consists of one or more computer programs or instructions. When the computer programs or instructions are loaded and executed on the computer, the processes or functions of the embodiments of the present application are executed in whole or in part.
  • the computer can be a general purpose computer, special purpose computer, a network of computers, or other programmable devices.
  • Computer programs or instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, computer programs or instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired or wireless means.
  • a computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrating one or more available media.
  • Available media may be magnetic media, such as floppy disks, hard disks, and magnetic tapes; optical media, such as digital video discs (DVDs); or semiconductor media, such as solid state drives (SSDs).
  • the word "exemplarily" is used to mean an example, instance, or illustration. Any embodiment or design described herein as an "example" is not to be construed as preferred or advantageous over other embodiments or designs. Rather, the use of the word "example" is intended to present a concept in a specific manner and does not constitute a limitation on the application.


Abstract

A control detection method, a control apparatus, a lidar, a terminal device, and a computer-readable storage medium. The method includes: controlling light sources in a first light source area to emit first signal light at a first power, and controlling light sources in a second light source area to emit second signal light at a second power; and controlling pixels in a first pixel area corresponding to the first light source area to receive a first echo signal, where the first pixel area corresponds to the spatial position of a first target, the first echo signal includes reflected light obtained after the first signal light is reflected by the first target, and the second power is greater than the first power. The control apparatus includes at least one processor and an interface circuit, the processor being configured to execute the method. The lidar includes a transmitting module, a receiving module, and a control apparatus for executing the method. The terminal device includes a control apparatus for executing the method. The computer-readable storage medium stores a computer program or instructions that, when executed by a control apparatus, cause the control apparatus to execute the method. Since the spatial position of the first target corresponds to the first pixel area, reducing the power at which the light sources in the first light source area corresponding to the first pixel area emit the first signal light reduces the intensity of the first echo signal, thereby suppressing the first echo signal from entering pixels outside the first pixel area and helping to reduce optical crosstalk.

Description

Control detection method, control apparatus, lidar, and terminal device
Cross-reference to related applications
This application claims priority to Chinese Patent Application No. 202111169740.4, filed with the China National Intellectual Property Administration on October 8, 2021 and entitled "Control detection method, control apparatus, lidar, and terminal device", which is incorporated herein by reference in its entirety.
Technical field
This application relates to the field of detection technologies, and in particular to a control detection method, a control apparatus, a lidar, and a terminal device.
Background
With the development of science and technology, intelligent terminals such as intelligent transport equipment, smart home devices, robots, and vehicles are gradually entering people's daily lives. Detection systems play an increasingly important role in intelligent terminals: a detection system can perceive the surrounding environment, identify and track moving targets based on the perceived environment information, recognize static scenes such as lane lines and signboards, and perform path planning in combination with navigators and map data.
In practical application scenarios, a detection system perceiving the surrounding environment inevitably encounters targets with high reflectivity or special reflectivity (referred to as retroreflective targets), for example, direction signs, warning signs, and road signs on highways, roadside safety bollards, guardrails, convex mirrors at corners, vehicle license plates, and highly reflective coating stickers on vehicle bodies. Such high-reflectivity or retroreflective targets produce strong scattered light, which may cause optical crosstalk and thus reduce the accuracy with which the detection system detects targets in the detection area.
In summary, how to reduce the optical crosstalk caused by high-reflectivity or retroreflective targets is a technical problem that urgently needs to be solved.
Summary
This application provides a control detection method, a control apparatus, a lidar, and a terminal device, to reduce optical crosstalk in a detection system as much as possible.
In a first aspect, this application provides a control detection method. The method includes: controlling light sources in a first light source area to emit first signal light at a first power; controlling light sources in a second light source area to emit second signal light at a second power; and controlling pixels in a first pixel area to receive a first echo signal, where the first pixel area corresponds to the spatial position of a first target, the first light source area corresponds to the first pixel area, the second light source area corresponds to a second pixel area, the first echo signal includes reflected light obtained after the first signal light is reflected by the first target, and the second power is greater than the first power.
Based on this solution, reducing the first power of the light sources in the first light source area, which corresponds to the first pixel area corresponding to the spatial position of the first target, helps reduce the intensity (or energy) of the first signal light and thus the intensity (or energy) of the first echo signal, which in turn helps prevent the first echo signal from entering pixels outside the first pixel area. This helps reduce the crosstalk of the first echo signal on pixels outside the first pixel area (such as pixels in the second pixel area).
Further, optionally, the method also includes controlling pixels in the second pixel area to receive a second echo signal that includes reflected light obtained after the second signal light is reflected by a second target.
By controlling the pixels in the second pixel area to receive the second echo signal and combining it with the first echo signal received by the pixels in the first pixel area, complete detection of the full field of view of the detection area can be achieved.
In a possible implementation, the method is applied to a detection system that includes a light source array and a pixel array, the light source array includes m×n light sources, the pixel array includes m×n pixels, the light sources of the light source array correspond to the pixels of the pixel array, and m and n are both integers greater than 1.
Based on this detection system, the detection area can be scanned without a scanning structure.
In a possible implementation, the method may further include controlling the light sources of the light source array to emit third signal light at a third power, and controlling the pixels of the pixel array to receive a third echo signal, that is, gating the pixels in the pixel array. The third echo signal includes reflected light of the third signal light reflected by the first target and/or the second target; that is, the third echo signal may be reflected light of the third signal light reflected by the first target, or reflected light of the third signal light reflected by the second target, or may include both. The intensity of the third echo signal corresponding to the pixels in the first pixel area is greater than or equal to a first preset value, and/or the intensity of the third echo signal corresponding to the pixels in the second pixel area is less than the first preset value. The light source array includes the first light source area and the second light source area, and the pixel array includes the first pixel area and the second pixel area; in other words, the first and second light source areas both belong to the light source array, and the first and second pixel areas both belong to the pixel array.
By controlling the light sources in the light source array to emit the third signal light at the same third power, which pixels the first pixel area includes and/or which pixels the second pixel area includes can be identified based on the relationship between the intensity of the third echo signal and the first preset value.
In another possible implementation, the method also includes controlling the light sources of the light source array to emit third signal light at a third power, and controlling the pixels of the pixel array to receive a third echo signal. The third echo signal includes reflected light of the third signal light reflected by the first target and/or the second target; the difference between the intensity of the third echo signal corresponding to the pixels in the first pixel area and the intensity of the third echo signal corresponding to the pixels in the second pixel area is greater than or equal to a second preset value, and the first distance corresponding to the pixels in the first pixel area is the same as the first distance corresponding to the pixels in the second pixel area. The light source array includes the first light source area and the second light source area, and the pixel array includes the first pixel area and the second pixel area.
This can also be understood as follows: for pixels with the same first distance, the intensities are compared pairwise, and where the difference is greater than or equal to the second preset value, the pixels with the larger intensity are the pixels in the first pixel area.
By controlling the light sources in the light source array to emit the third signal light at the same third power, which pixels the first pixel area includes and/or which pixels the second pixel area includes can be identified based on the intensity of the third echo signal and the first distance determined based on the third echo signal.
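As a minimal illustration of the threshold-based identification described above (the function name, the 3×3 array shape, and the threshold value are assumptions made for this sketch, not values from the application):

```python
def split_pixel_areas(intensity, first_preset):
    """Classify pixels by third-echo intensity: pixels at or above the preset
    threshold form the first pixel area, the rest form the second pixel area.
    Returns (first_area, second_area) as lists of (row, col) indices."""
    first_area, second_area = [], []
    for r, row in enumerate(intensity):
        for c, val in enumerate(row):
            if val >= first_preset:
                first_area.append((r, c))   # strong echo: first pixel area
            else:
                second_area.append((r, c))  # normal echo: second pixel area
    return first_area, second_area

# 3x3 pixel array with one bright 2x1 patch from a high-reflectivity target
intensity = [
    [0.1, 0.9, 0.2],
    [0.1, 0.8, 0.1],
    [0.2, 0.1, 0.1],
]
first, second = split_pixel_areas(intensity, first_preset=0.5)
```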
In yet another possible implementation, the method also includes: controlling the light sources of the light source array to emit third signal light at a third power, and determining a third pixel area based on the received third echo signal; controlling light sources in a third light source area to emit fourth signal light at a fourth power and controlling light sources in a fourth light source area to emit fifth signal light at a fifth power, the fifth power being greater than the fourth power; and controlling the pixel array to receive a fourth echo signal and a fifth echo signal, and determining the first pixel area and the second pixel area according to the fourth echo signal and the fifth echo signal. The fourth echo signal includes reflected light of the fourth signal light reflected by the first target, the fifth echo signal includes reflected light of the fifth signal light reflected by the second target, and the third echo signal includes reflected light of the third signal light reflected by the first target and/or the second target. The third light source area corresponds to the third pixel area, the intensity of the third echo signal corresponding to the third pixel area is greater than or equal to a fourth preset value, and the third pixel area includes the first pixel area and the pixels crosstalked by the third echo signal reflected by the first target. The light source array includes the first light source area and the second light source area, and the pixel array includes the first pixel area and the second pixel area.
By controlling the light sources in the light source array to emit the third signal light at the same third power, the third pixel area can first be determined based on the third echo signal; the third pixel area may include pixels that have been crosstalked by the reflected light from the first target. By further adaptively adjusting the power of the light sources in the third light source area corresponding to the third pixel area, the first pixel area corresponding to the spatial position of the first target can be accurately determined from the third pixel area, which helps obtain complete and accurate associated information of the detection area over the full field of view (such as associated information of the first target and the second target).
The following takes as an example the case where the light source array gates light sources column by column and the pixel array also gates pixels column by column.
In a possible implementation, the third pixel area includes rows (a_i~a_j) and columns (b_i~b_j) of the pixel array, where a_i and b_i are both integers greater than 1, a_j is an integer greater than a_i, and b_j is an integer greater than b_i.
Based on this third pixel area, the method may also include controlling the light sources in column b_{i-1} of the light source array to emit the fifth signal light at the fifth power, and gating the pixels in columns (b_{i-1}~b_i) of the pixel array, where the emission field of view of the light sources in column b_{i-1} corresponds to the reception field of view of the pixels in column b_{i-1}.
The pixels in column b_{i-1} are the pixels of the first edge area of the third pixel area. Here, the light sources in column b_{i-1} corresponding to the pixels in column b_{i-1} emit the fifth signal light at the fifth power, and the pixels in columns b_{i-1} and b_i are gated to jointly receive the fifth echo signal, so that subsequent pixel gating proceeds in an offset manner, which reduces the crosstalk of the echo signal reflected by the first target on the echo signals of other targets (such as the second target) in the detection area.
Further, the method may also include gating the pixels in rows (a_i~a_j) of columns (b_{i+1}~b_j) of the pixel array, and controlling the light sources in rows (a_i~a_j) of columns (b_i~b_{j-1}) of the light source array to emit the fourth signal light at the fourth power.
By gating pixel columns offset by one column and exploiting the edge energy of the spot of the echo signal, the crosstalk of the echo signal reflected by the first target on the echo signals reflected by other targets (such as the second target) in the detection area can be reduced, improving the crosstalk phenomenon and enabling effective detection over the full field of view of the detection system.
Further, the method may also include gating the pixels of the pixel array other than those in rows (a_i~a_j) of columns (b_{i+1}~b_j), and controlling the light sources of the light source array other than those in rows (a_i~a_j) of columns (b_i~b_{j-1}) to emit the fifth signal light at the fifth power.
Further, the method may also include stopping gating the pixels in column b_{j+1} of the pixel array and controlling the light sources in column b_j of the light source array to emit sixth signal light at a sixth power.
By controlling the light sources in column b_j to emit the sixth signal light at the sixth power while not gating pixels at this moment, when the pixels after the third pixel area (such as the pixels in column b_{j+1}) are gated, the gated pixel column and light source column are no longer offset; that is, the gated pixel column can be aligned with the corresponding light source column.
In a possible implementation, the first pixel area includes rows (A_i~A_j) and columns (B_i~B_j) of the pixel array, where A_i and B_i are both integers greater than 1, A_j is an integer greater than A_i, and B_j is an integer greater than B_i.
Based on this first pixel area, the method may also include controlling the light sources in column B_{i-1} of the light source array to emit the second signal light at the second power, and gating the pixels in columns (B_{i-1}~B_i) of the pixel array, where the emission field of view of the light sources in column B_{i-1} corresponds to the reception field of view of the pixels in column B_{i-1}.
The pixels in column B_{i-1} are the pixels of the first edge area of the first pixel area. Here, the light sources in column B_{i-1} corresponding to the pixels in column B_{i-1} emit the second signal light at the second power, and the pixels in columns B_{i-1} and B_i are gated to jointly receive the second echo signal, so that subsequent pixel gating proceeds in an offset manner, which reduces the crosstalk of the echo signal reflected by the first target on the echo signals of other targets (such as the second target) in the detection area.
Further, the method may also include gating the pixels in rows (A_i~A_j) of columns (B_{i+1}~B_j) of the pixel array, and controlling the light sources in rows (A_i~A_j) of columns (B_i~B_{j-1}) of the light source array to emit the first signal light at the first power.
By gating pixel columns offset by one column and exploiting the edge energy of the spot of the echo signal, the crosstalk of the echo signal reflected by the first target on the echo signals reflected by other targets (such as the second target) in the detection area can be reduced, improving the crosstalk phenomenon and enabling effective detection over the full field of view of the detection system.
Further, the method may also include gating the pixels of the pixel array other than those in rows (A_i~A_j) of columns (B_{i+1}~B_j), and controlling the light sources of the light source array other than those in rows (A_i~A_j) of columns (B_i~B_{j-1}) to emit the second signal light at the second power.
Further, the method may also include stopping gating the pixels in column B_{j+1} of the pixel array and controlling the light sources in column B_j of the light source array to emit sixth signal light at a sixth power.
By controlling the light sources in column B_j to emit the sixth signal light at the sixth power while not gating pixels at this moment, when the pixels after the first pixel area (such as the pixels in column B_{j+1}) are gated, the gated pixel column and light source column are no longer offset; that is, the gated pixel column can be aligned with the corresponding light source column.
The following takes as an example the case where the light source array gates light sources row by row and the pixel array also gates pixels row by row.
In a possible implementation, the third pixel area includes rows (a_i~a_j) and columns (b_i~b_j) of the pixel array, where a_i and b_i are both integers greater than 1, a_j is an integer greater than a_i, and b_j is an integer greater than b_i.
Based on this third pixel area, the method may also include controlling the light sources in row a_{i-1} of the light source array to emit the fifth signal light at the fifth power, and gating the pixels in rows (a_{i-1}~a_i) of the pixel array, where the emission field of view of the light sources in row a_{i-1} corresponds to the reception field of view of the pixels in row a_{i-1}.
The pixels in row a_{i-1} are the pixels of the first edge area of the third pixel area. Here, the light sources in row a_{i-1} corresponding to the pixels in row a_{i-1} emit the fifth signal light at the fifth power, and the pixels in rows a_{i-1} and a_i are gated to jointly receive the fifth echo signal, so that subsequent pixel gating proceeds in an offset manner, which reduces the crosstalk of the echo signal reflected by the first target on the echo signals of other targets (such as the second target) in the detection area.
Further, the method may also include gating the pixels in columns (b_i~b_j) of rows (a_{i+1}~a_j) of the pixel array, and controlling the light sources in columns (b_i~b_j) of rows (a_i~a_{j-1}) of the light source array to emit the fourth signal light at the fourth power.
By gating pixel rows offset by one row and exploiting the edge energy of the spot of the echo signal, the crosstalk of the echo signal reflected by the first target on the echo signals reflected by other targets (such as the second target) in the detection area can be reduced, improving the crosstalk phenomenon and enabling effective detection over the full field of view of the detection system.
Further, the method may also include gating the pixels of the pixel array other than those in columns (b_i~b_j) of rows (a_{i+1}~a_j), and controlling the light sources of the light source array other than those in columns (b_i~b_j) of rows (a_i~a_{j-1}) to emit the fifth signal light at the fifth power.
Further, the method may also include stopping gating the pixels in row a_{j+1} of the pixel array and controlling the light sources in row a_j of the light source array to emit the sixth signal light at the sixth power.
By controlling the light sources in row a_j to emit the sixth signal light at the sixth power while not gating pixels at this moment, when the pixels after the third pixel area (such as the pixels in row a_{j+1}) are gated, the gated pixel row and light source row are no longer offset; that is, the gated pixel row can be aligned with the corresponding light source row.
In a possible implementation, the first pixel area includes rows (A_i~A_j) and columns (B_i~B_j) of the pixel array, where A_i and B_i are both integers greater than 1, A_j is an integer greater than A_i, and B_j is an integer greater than B_i.
Based on this first pixel area, the method may also include controlling the light sources in row A_{i-1} of the light source array to emit the second signal light at the second power, and gating the pixels in rows (A_{i-1}~A_i) of the pixel array, where the emission field of view of the light sources in row A_{i-1} corresponds to the reception field of view of the pixels in row A_{i-1}.
The pixels in row A_{i-1} are the pixels of the first edge area of the first pixel area. Here, the light sources in row A_{i-1} corresponding to the pixels in row A_{i-1} emit the second signal light at the second power, and the pixels in rows A_{i-1} and A_i are gated to jointly receive the second echo signal, so that subsequent pixel gating proceeds in an offset manner, which reduces the crosstalk of the echo signal reflected by the first target on the echo signals of other targets (such as the second target) in the detection area.
Further, the method may also include gating the pixels in columns (B_i~B_j) of rows (A_{i+1}~A_j) of the pixel array, and controlling the light sources in columns (B_i~B_j) of rows (A_i~A_{j-1}) of the light source array to emit the first signal light at the first power.
By gating pixel rows offset by one row and exploiting the edge energy of the spot of the echo signal, the crosstalk of the echo signal reflected by the first target on the echo signals reflected by other targets (such as the second target) in the detection area can be reduced, improving the crosstalk phenomenon and enabling effective detection over the full field of view of the detection system.
Further, the method may also include gating the pixels of the pixel array other than those in columns (B_i~B_j) of rows (A_{i+1}~A_j), and controlling the light sources of the light source array other than those in columns (B_i~B_j) of rows (A_i~A_{j-1}) to emit the second signal light at the second power.
Further, the method may also include stopping gating the pixels in row A_{j+1} of the pixel array and controlling the light sources in row A_j of the light source array to emit the sixth signal light at the sixth power.
By controlling the light sources in row A_j to emit the sixth signal light at the sixth power while not gating pixels at this moment, when the pixels after the first pixel area (such as the pixels in row A_{j+1}) are gated, the gated pixel row and light source row are no longer offset; that is, the gated pixel row can be aligned with the corresponding light source row.
In a second aspect, this application provides a control apparatus configured to implement the first aspect or any one of the methods of the first aspect, that is, to implement the steps of the above methods. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above functions.
In a possible implementation, the control apparatus may be an independent control apparatus, or a module used in a control apparatus, such as a chip, a chip system, or a circuit. For beneficial effects, refer to the description of the first aspect above; details are not repeated here. The control apparatus may include an interface circuit and at least one processor. The processor may be configured to support the control apparatus in executing the first aspect or any one of the methods of the first aspect, and the interface circuit is configured to support communication between the control apparatus and other apparatuses. The interface circuit may be an independent receiver, an independent transmitter, an input/output port integrating transceiver functions, or the like. Optionally, the control apparatus may further include a memory, which may be coupled to the processor and stores the program instructions and data necessary for the control apparatus.
In a third aspect, this application provides a control apparatus configured to implement the first aspect or any one of the methods of the first aspect, including corresponding functional modules for implementing the steps of the above methods. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above functions.
In a possible implementation, the control apparatus may include a processing module and a transceiver module, and these modules may execute the first aspect or any one of the methods of the first aspect; for details, see the detailed description in the method examples, which is not repeated here.
In a fourth aspect, this application provides a chip including at least one processor and an interface circuit; further, optionally, the chip may also include a memory, and the processor is configured to execute a computer program or instructions stored in the memory, so that the chip executes the method in the first aspect or any possible implementation of the first aspect.
In a fifth aspect, this application provides a terminal device including a control apparatus for executing the method in the first aspect or any possible implementation of the first aspect.
In a sixth aspect, this application provides a lidar including a transmitting module, a receiving module, and a control apparatus for executing the method in the first aspect or any possible implementation of the first aspect, where the transmitting module is configured to emit the first signal light at the first power and emit the second signal light at the second power, and the receiving module is configured to receive the first echo signal from the detection area, the first echo signal including reflected light of the first signal light reflected by the first target.
In a seventh aspect, this application provides a terminal device including the lidar of the sixth aspect or any possible implementation of the sixth aspect.
In an eighth aspect, this application provides a computer-readable storage medium storing a computer program or instructions that, when executed by a control apparatus, cause the control apparatus to execute the method in the first aspect or any possible implementation of the first aspect.
In a ninth aspect, this application provides a computer program product including a computer program or instructions that, when executed by a control apparatus, cause the control apparatus to execute the method in the first aspect or any possible implementation of the first aspect.
For the technical effects that can be achieved by any one of the second to ninth aspects, refer to the description of the beneficial effects in the first aspect; details are not repeated here.
Brief description of drawings
FIG. 1a is a schematic diagram of the reflection principle of a Lambertian body provided by the present application;
FIG. 1b is a schematic diagram of the peak power within a single pulse provided by the present application;
FIG. 1c is a schematic diagram of the FSI principle provided by the present application;
FIG. 1d is a schematic diagram of the BSI principle provided by the present application;
FIG. 2a is a schematic diagram of the ranging principle of a d-TOF technique provided by the present application;
FIG. 2b is a schematic structural diagram of a detection module based on the d-TOF technique provided by the present application;
FIG. 3 is a schematic architectural diagram of a detection system provided by the present application;
FIG. 4a is a schematic diagram of a gating mode of light sources in a light source array provided by the present application;
FIG. 4b is a schematic diagram of another gating mode of light sources in a light source array provided by the present application;
FIG. 4c is a schematic diagram of another gating mode of light sources in a light source array provided by the present application;
FIG. 4d is a schematic diagram of another gating mode of light sources in a light source array provided by the present application;
FIG. 4e is a schematic diagram of another gating mode of light sources in a light source array provided by the present application;
FIG. 5a is a schematic diagram of the energy distribution of a signal light spot in angular space provided by the present application;
FIG. 5b is a schematic diagram of another energy distribution of a signal light spot in angular space provided by the present application;
FIG. 6 is a schematic structural diagram of a pixel provided by the present application;
FIG. 7a is a schematic diagram of a gating mode of pixels in a pixel array provided by the present application;
FIG. 7b is a schematic diagram of another gating mode of pixels in a pixel array provided by the present application;
FIG. 7c is a schematic diagram of another gating mode of pixels in a pixel array provided by the present application;
FIG. 7d is a schematic diagram of another gating mode of pixels in a pixel array provided by the present application;
FIG. 7e is a schematic diagram of another gating mode of pixels in a pixel array provided by the present application;
FIG. 8 is a schematic structural diagram of an optical lens provided by the present application;
FIG. 9 is a schematic structural diagram of another optical lens provided by the present application;
FIG. 10a is a possible application scenario provided by the present application;
FIG. 10b is another possible application scenario provided by the present application;
FIG. 11 is a schematic flowchart of a control detection method provided by the present application;
FIG. 12 is a schematic flowchart of a method for determining a first pixel area provided by the present application;
FIG. 13 is a schematic flowchart of another method for determining the first pixel area provided by the present application;
FIG. 14 is a schematic flowchart of another method for determining the first pixel area provided by the present application;
FIG. 15 is a schematic flowchart of a method for determining the first pixel area based on a third pixel area provided by the present application;
FIG. 16 is a schematic flowchart of another method for determining the first pixel area based on the third pixel area provided by the present application;
FIG. 17 is a schematic flowchart of a method for acquiring associated information of a detection area provided by the present application;
FIG. 18 is a schematic structural diagram of a control apparatus provided by the present application;
FIG. 19 is a schematic structural diagram of a control apparatus provided by the present application;
FIG. 20 is a schematic architectural diagram of a lidar provided by the present application.
Detailed description
The embodiments of this application are described in detail below with reference to the accompanying drawings.
First, some terms used in this application are explained. It should be noted that these explanations are intended to help those skilled in the art understand, and do not limit the protection scope claimed by this application.
1. Lambertian body
A Lambertian body is an object that reflects incident light uniformly in all directions. Referring to FIG. 1a, incident light striking a Lambertian body is reflected isotropically in all directions throughout the space, centered on the point of incidence. It can also be understood that a Lambertian body reflects the received signal light uniformly in all directions, that is, the echo signal is distributed uniformly in all directions.
2. Optical crosstalk
Optical crosstalk means that stray light interferes with a useful signal (such as an echo signal), where light that interferes with a normal signal may be collectively referred to as stray light. Optical crosstalk is a fairly common phenomenon in the detection field. In this application, optical crosstalk refers to the following: a target with high reflectivity or a retroreflective target (collectively, a first target) reflects the received signal light into an echo signal with relatively high energy; this echo signal should enter pixel area A, but because its energy is high, it may enter both pixel area A and pixel area B. For pixel area B, this part of the echo signal is stray light and causes optical crosstalk to the echo signal that pixel area B should receive.
3. Peak power
When the signal light emitted by a light source is a pulsed wave, the maximum output power within a single pulse is called the peak power; see FIG. 1b.
4. Light spot
A light spot generally refers to the spatial energy distribution formed by a light beam on a cross-section, for example, the spot formed on the cross-section of a target in the detection area by signal light directed at the detection area, or the spot formed on the photosensitive surface by an echo signal directed at the detector. The spatial energy distribution of a spot may be low at both ends and high in the middle; for example, it may take the form of a normal distribution or a shape similar to a normal distribution.
The shape of the spot may be rectangular, elliptical, circular, or another possible regular or irregular shape. It should be noted that, as those skilled in the art will appreciate, a spot as a whole in fact exhibits an energy distribution of varying intensity: the energy density in the core area is high and the spot shape is distinct, while the edge part gradually extends outward with lower energy density and an unclear shape, and as the energy intensity gradually weakens, the part of the spot near the edge is relatively hard to identify. Therefore, the spot with a certain shape referred to in this application can be understood as a spot with an easily identifiable boundary formed by the part with stronger energy and higher energy density, rather than the spot as a whole in the technical sense.
It should be understood that the boundary of a spot is usually defined at 1/e² of the maximum energy density.
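As a small numerical illustration of the 1/e² boundary convention mentioned above, assuming an ideal Gaussian profile I(r) = I0·exp(-2r²/w²) (the function names and step size are assumptions made for this sketch):

```python
import math

def gaussian_intensity(r, i0=1.0, w=1.0):
    """Ideal Gaussian spot profile I(r) = I0 * exp(-2 r^2 / w^2)."""
    return i0 * math.exp(-2.0 * r**2 / w**2)

def spot_boundary_radius(i0=1.0, w=1.0, step=1e-4):
    """Numerically find the radius where intensity drops to I0 / e^2.
    Analytically this is exactly r = w for the profile above."""
    threshold = i0 / math.e**2
    r = 0.0
    while gaussian_intensity(r, i0, w) > threshold:
        r += step
    return r

# For w = 1.0 the numerical 1/e^2 boundary matches the beam radius w.
radius = spot_boundary_radius(i0=1.0, w=1.0)
```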
5. Angular resolution
Angular resolution, also called scanning resolution, refers to the minimum angle between adjacent beams directed at the detection area. The smaller the angular resolution, the more spots are projected into the detection area, that is, the more points on targets in the detection area can be detected and the higher the detection clarity. Angular resolution includes vertical angular resolution and horizontal angular resolution.
6. Back side illumination (BSI)
BSI means that light enters the pixel array from the back side; see FIG. 1c. The light is focused by a microlens with an anti-reflection coating onto the color filter layer, split by the color filter layer into the three primary color components, and guided into the pixel array. The back side corresponds to the front end of line (FEOL) process of semiconductor manufacturing.
7. Front side illumination (FSI)
FSI means that light enters the pixel array from the front side; see FIG. 1d. The light is focused by a microlens with an anti-reflection coating onto the color filter layer, split by the color filter layer into the three primary color components, and guided through the metal wiring layer so that the collimated light enters the pixel array. The front side corresponds to the back end of line (BEOL) process of semiconductor manufacturing.
8. Gated pixels and gated light sources
In a pixel array, the row address may be the horizontal coordinate and the column address the vertical coordinate. In this application, the rows of the pixel array correspond to the horizontal direction and the columns to the vertical direction as an example. Row/column gating signals can be used to read data at a specified location in memory, and the pixel corresponding to the read location is the gated pixel. It should be understood that the pixels in the pixel array can store the detected signals in the corresponding memory. For example, a pixel can be enabled into an active state by a bias voltage so that it can respond to an echo signal incident on its surface.
In a light source array, the row address may be the horizontal coordinate and the column address the vertical coordinate. In this application, the rows of the light source array correspond to the horizontal direction and the columns to the vertical direction as an example. Gating a light source means lighting (or turning on) the light source and controlling it to emit signal light at the corresponding power.
9. Region of interest (ROI)
A region of required pixels outlined from the pixel array or light source array with a box, circle, ellipse, irregular polygon, or the like is called a region of interest.
10. First target
The energy (or intensity) of the first echo signal obtained when the first target reflects the received signal light is relatively large. Factors affecting the energy of an echo signal include, but are not limited to, the distance between the target and the detection system, the distribution of the echo signal reflected by the target (for example, whether the target reflects the received signal light concentrated in a certain direction, or the echo signal is distributed uniformly in all directions), and the reflectivity of the target.
For example, the first target may be a target close to the detection system; or a target with high reflectivity; or a target whose reflected echo signal is concentrated in the direction of the detection system; or a target that is both close to the detection system and highly reflective; or a target that is close to the detection system and whose reflected echo signal is concentrated in the direction of the detection system; or a target that is highly reflective and whose reflected echo signal is concentrated in the direction of the detection system; or a target that is close to the detection system, highly reflective, and whose reflected echo signal is concentrated in the direction of the detection system. Targets with high reflectivity include, but are not limited to, direction signs, warning signs, and road signs on highways, roadside safety bollards, guardrails, convex mirrors at corners, vehicle license plates, and highly reflective coating stickers on vehicle bodies.
11. One frame of image
In this application, one frame of image means that the light source array completes one scan and the corresponding pixel array reads out all the data; the image formed based on all the read data is one frame of image.
The foregoing introduced some terms involved in this application; the following introduces the technical features involved. It should be noted that these explanations are intended to help those skilled in the art understand, and do not limit the protection scope claimed by this application.
FIG. 2a is a schematic diagram of the ranging principle of a direct time of flight (d-TOF) technique provided by this application. The d-TOF technique directly measures the difference between the emission time t1 of the emitted signal light and the reception time t2 of the received echo signal (that is, t2 - t1), where the echo signal is the reflected light obtained when a target in the detection area reflects the signal light; the distance information of the target is then calculated as d = C × (t2 - t1) / 2, where d denotes the distance to the target and C denotes the speed of light.
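The ranging equation above can be written as a one-line sketch; the timestamps below are made-up values for illustration:

```python
C = 299_792_458.0  # speed of light in m/s

def dtof_distance(t_emit, t_receive):
    """Distance from the round-trip time of flight: d = C * (t2 - t1) / 2."""
    return C * (t_receive - t_emit) / 2.0

# Echo received 200 ns after emission -> target at about 30 m.
d = dtof_distance(t_emit=0.0, t_receive=200e-9)
```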
It should be noted that the signal light is usually pulsed laser light. Owing to laser safety limits and the power consumption limits of the detection system, the energy of the emitted signal light is limited, yet it must cover the complete detection area; therefore, when the echo signal obtained by a target reflecting the signal light returns to the receiver, the energy loss is severe. At the same time, ambient light, as noise, interferes with the detector's detection and recovery of the echo signal. Hence, d-ToF techniques usually require a detector with relatively high sensitivity to detect the echo signal. Detectors suitable for d-ToF are, for example, single-photon avalanche diodes (SPAD) or digital silicon photomultipliers (SiPM). Taking the SPAD as an example, a SPAD is sensitive enough to detect a single photon; in the working state it is a diode biased with a high reverse voltage. The reverse bias forms a strong electric field inside the device. When a photon is absorbed by the SPAD and converted into a free electron, this free electron is accelerated by the internal electric field and, once it gains enough energy to strike other atoms, generates free electron-hole pairs. The newly generated carriers continue to be accelerated by the electric field and generate more carriers through impact. This geometrically amplified avalanche effect gives the SPAD an almost infinite gain, so it outputs a large current pulse and achieves detection of a single photon.
FIG. 2b is a schematic structural diagram of a detection module based on the d-TOF technique provided by this application. The detection module may include a SPAD array and a time-to-digital converter (TDC) array. In this example, the SPAD array is a 5×5 array, the TDC array is also a 5×5 array, and one TDC corresponds to at least one SPAD. The TDC is time-synchronized with the transmit end: when a TDC detects the moment the transmit end starts emitting signal light, it starts timing, and when one SPAD among the at least one SPAD corresponding to that TDC receives a photon of the echo signal, the TDC stops timing. After N emissions and receptions, the TDC can record n (n ≤ N) flight times of light, producing a histogram of the flight-time distribution; the flight-time value with the highest occurrence frequency is the target flight time t, and based on d = C × t / 2 the distance information of the target can be determined.
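The histogramming step can be sketched as follows; the sample flight times and the function name are illustrative assumptions, not values from the application:

```python
from collections import Counter

C = 299_792_458.0  # speed of light in m/s

def range_from_tof_histogram(flight_times_s):
    """Pick the modal (most frequent) flight time from the recorded samples
    and convert it to a distance in meters via d = C * t / 2."""
    t, _count = Counter(flight_times_s).most_common(1)[0]
    return C * t / 2.0

# Mostly 100 ns round trips with a few noise hits -> target at about 15 m.
samples = [100e-9] * 7 + [40e-9, 180e-9, 100e-9]
distance = range_from_tof_histogram(samples)
```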
Further, optionally, the detection module may also include a memory and/or a control circuit. The control circuit may store the flight time of the signal light detected via the SPAD/TDC in the memory.
As described in the background, when a first target exists in the detection area, the echo signal reflected by the first target may cause optical crosstalk to the pixels surrounding the pixel in the pixel array of the detection system that should receive the echo signal, which reduces the detection accuracy of the detection system.
In view of the above problems, this application provides a control detection method that can reduce optical crosstalk in the detection system as much as possible, thereby improving the detection accuracy of the detection system.
The following introduces a possible detection system architecture to which this application is applicable. The detection system may include a light source array and a pixel array. The light source array may include m×n light sources, the pixel array may include m×n pixels, the m×n light sources correspond to the m×n pixels, and m and n are both integers greater than 1. It should be understood that the m×n light sources may be all or part of the light source array, and/or the m×n pixels may be all or part of the pixel array. In other words, the light source array may form a regular pattern or an irregular pattern, which is not limited in this application; likewise, the pixel array may form a regular pattern or an irregular pattern, which is not limited in this application.
Further, optionally, a fixed optical mapping relationship is usually adopted between the light source array and the pixel array. Specifically, the detection system may also include a transmitting optical system and a receiving optical system. The gated light sources in the light source array are used to emit signal light. The transmitting optical system is used to propagate the signal light from the light source array to the detection area; specifically, the transmitting optical system may collimate and/or homogenize and/or shape the signal light from the light source array and/or modulate its energy distribution in angular space. The receiving optical system is used to propagate the echo signal from the detection area to the pixel array, where the echo signal is the reflected light obtained when a target in the detection area reflects the signal light. The gated pixels in the pixel array photoelectrically convert the received echo signal to obtain an electrical signal used to determine the associated information of the target, where the associated information of the target includes, but is not limited to, the distance information of the target, the orientation of the target, the speed of the target, and/or the grayscale information of the target.
FIG. 3 is a schematic architectural diagram of a detection system to which this application is applicable. The light source array is shown with 7×7 light sources as an example, and the pixel array with 7×7 pixels as an example, where the 7×7 light sources correspond to the 7×7 pixels. In other words, light source 11 corresponds to pixel 11, light source 12 corresponds to pixel 12, and so on, up to light source 66 corresponding to pixel 66. It can also be understood that the echo signal of the signal light emitted by light source 11 reflected back by a target in the detection area can be received by pixel 11, the echo signal of the signal light emitted by light source 12 reflected back by a target in the detection area can be received by pixel 12, and so on, up to the echo signal of the signal light emitted by light source 66 being received by pixel 66. Further, the first column of light sources corresponds to the first column of pixels, the second column of light sources corresponds to the second column of pixels, and so on, up to the seventh column of light sources corresponding to the seventh column of pixels; similarly, the first row of light sources corresponds to the first row of pixels, the second row of light sources corresponds to the second row of pixels, and so on, up to the seventh row of light sources corresponding to the seventh row of pixels.
It should be noted that the signal light emitted by one light source can be projected into the detection area to form one spot; therefore, based on the light source array shown in FIG. 3, 7×7 corresponding spots can be formed in the detection area. In addition, the emission field of view of each light source and the energy distribution of the signal light spot in angular space may be designed in advance.
The structures in the above detection system are introduced separately below to give exemplary specific implementations.
1. Light source array
In a possible implementation, the light sources in the light source array may be, for example, vertical cavity surface emitting lasers (VCSEL), edge emitting lasers (EEL), diode pumped solid state lasers (DPSS), or fiber lasers.
Further, optionally, the light source array may support independent addressing, where independent addressing means that the light sources in the light source array can be gated (that is, lit, turned on, or powered) independently, and the gated light sources can be used to emit signal light. For example, addressing may be implemented by electrical scanning; specifically, a drive current may be input to the light source to be gated.
In a possible implementation, the addressing modes of the light source array include, but are not limited to, gating light sources point by point, column by column, row by row, or by region of interest. It should be noted that the addressing mode of the light source array is related to the physical connection of the light sources. For example, if the light sources in the light source array are connected in parallel, the light sources can be gated point by point (see FIG. 4a), column by column (see FIG. 4b), row by row (see FIG. 4c), diagonally (see FIG. 4d), or by region of interest (see FIG. 4e), where the region of interest may gate light sources in a specific pattern or specific order. As another example, if the light sources within the same column of the light source array are connected in series and different columns are connected in parallel, the light sources can be gated column by column; see FIG. 4b. As another example, if the light sources within the same row are connected in series and different rows are connected in parallel, the light sources can be gated row by row; see FIG. 4c. As another example, if the light sources on each diagonal are connected in series and different diagonals are connected in parallel, the light sources can be gated diagonal by diagonal; see FIG. 4d. It should be understood that when gating light sources point by point, the gating interval between adjacent light sources may be small; therefore, optical crosstalk may also occur with point-by-point gating. To reduce optical crosstalk as much as possible, the time between gating adjacent light sources can be set larger when gating point by point.
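As an illustrative sketch (not part of the application) of a few of the gating orders described above for an m×n array, expressed as groups of (row, col) indices gated together; the function names are assumptions:

```python
def column_order(m, n):
    """Column-by-column gating: one group per column."""
    return [[(r, c) for r in range(m)] for c in range(n)]

def row_order(m, n):
    """Row-by-row gating: one group per row."""
    return [[(r, c) for c in range(n)] for r in range(m)]

def diagonal_order(m, n):
    """Diagonal gating: cells on the same anti-diagonal share r + c."""
    return [[(r, d - r) for r in range(m) if 0 <= d - r < n]
            for d in range(m + n - 1)]

groups = diagonal_order(2, 2)  # three diagonals for a 2x2 array
```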
需要说明的是,逐点选通光源阵列可以实现按点扫描探测区域,逐列选通光源阵列可以实现按列扫描探测区域,逐行选通光源阵列可以实现按列扫描探测区域,按感兴趣的区域选通光源可以实现扫描探测区域的特定视场。当光源阵列中的全部光源被选通后,可实现对探测区域的全视场的扫描。也可以理解为,光源阵列中每个光源的发射视场拼接可得到探测系统的全视场。其中,光源的发射视场可根据探测系统的应用场景预先设计。例如,探测系统主要应用于远距离探测场景中,光源的发射视场可以大于0.2度;探测系统主要应用于中距离探测场景中,光源的发射视场可以为0.1~0.25度;探测系统主要应用于近距离探测场景中,光源的发射视场可以小于0.15度。再比如,还可根据探测系统的应用场景所需要的角分辨率来设计光源的发射视场,例如可以设计为0.01°~2°。
应理解,射向探测区域的信号光的光斑在角空间中的能量分布(即光源发射的信号光在空间中任意目标的表面上的能量分布)通常不能完全集中于特定角度范围内而不“泄漏”。在一种可能的实现方式中,射向探测区域的信号光的光斑在角空间中的能量分布的具体形式可根据实际需求或能量链路仿真进行设计。也可以理解为,可通过能量链路仿真或实际需求,设计信号光的光斑在角空间中能量分布的具体形状。在一种可能的实现方式中,信号光的光斑在角空间中能量分布可由光源自身的特定决定,光源的相干性越高其发射的信号光的光斑在角空间的能量分布越接近高斯线型。在另一种可能的实现方式中,信号光的光斑在角空间的能量分布还可以通过发射光学系统来控制。例如光源发射的信号光在角空间中的能量分布为高斯线型或者类似高斯线型,但发散角较大,可经发射光学系统做进一步的空间调制以实现对光斑在角空间的能量的分布的调整,例如发射光学系统可将发散角调整为满足需求的发散角。关于发射光学系统对光源发射的信号光的调整可参见下述的相关的介绍,此处不再赘述。
如图5a所示,为本申请提供的一种信号光的光斑在角空间中能量分布示意图。该光斑的能量分布近似为高斯线型,高斯线型的光斑能量大部分能量集中在发散角内(即发射视场内),发散角是指光源发射的信号光的水平角分辨率或垂直角分辨率,也可以称为单个信号光的发射视场。发散角的范围例如可以为0.01~2°。应理解,高斯线型的光斑能量衰减可以延伸至无穷远,随着向无穷远的方向的延伸,光斑的能量越来越弱,延伸至一定角度后,光斑的能量甚至可以忽略不计。
如图5b所示,为本申请提供的另一种信号光的光斑在角空间中能量分布示意图。高斯线型光斑的大部分能量集中在发散角内,还有少部分能量设计在发散角之外。也可以理解为,信号光的光斑在角空间的能量分布可以通过调制使其呈现中心高的形态(即设计发散角内集中大部分能量),并在发散角之外存在局部的极大值峰。应理解,基于此种光斑的能量分布,可能会对探测系统的最远探测距离有一定的影响。为了保证探测装置的性能不受影响,需要提高光源发射的信号光的总能量。
在一种可能的实现方式中,光斑的能量集中度可用能量隔离度来表征,单位为分贝(dB)。能量隔离度是指发散角内的峰值能量与发散角外的局部最大峰值能量的比,或者是指发散角内的峰值能量与发散角外的平均能量之比。也可以理解为,能量隔离度越大,发散角外的能量越弱。
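能量隔离度的计算可以用一段Python代码示意(假设光斑能量已按角度采样;函数名、采样参数与高斯线型参数均为示例性假设,非本申请限定):

```python
import numpy as np

def energy_isolation_db(angles, energy, fov):
    # 能量隔离度(dB):发散角内的峰值能量与发散角外的局部最大峰值能量之比
    inside = np.abs(angles) <= fov / 2.0
    peak_in = energy[inside].max()
    peak_out = energy[~inside].max()
    return 10.0 * np.log10(peak_in / peak_out)  # 能量比取10log10换算为dB

# 示例:近似高斯线型的光斑,发散角(发射视场)取0.2度
angles = np.linspace(-1.0, 1.0, 2001)          # 角度采样,单位:度
energy = np.exp(-angles**2 / (2 * 0.05**2))    # 高斯线型能量分布
iso = energy_isolation_db(angles, energy, fov=0.2)
```

隔离度越大表示发散角外的能量越弱,与上文“能量隔离度越大,发散角外的能量越弱”一致。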
需要说明的是,信号光在角空间的能量分布的高斯线型或类高斯线型的中心处于光源的发射视场内。通常,在探测系统应用于远距离探测场景以及中距离探测场景中,信号光的光斑在角空间中能量分布可设计为上述图5a所示的形式。在探测系统应用于近距离探测场景中,光斑在角空间中能量分布可设计为上述图5b所示的形式。
在一种可能的实现方式中,通过设计合理的上升沿速率,在环境噪声一定的情况下,可提高探测系统的信噪比。应理解,上升沿越陡峭(即上升沿速率越大),探测系统的信噪比越高。另外,探测系统的探测能力(如探测的远近)与峰值功率相关,峰值功率越大,探测系统可以探测的距离越远。
二、像素阵列
在一种可能的实现方式中,像素阵列中的像素可以包括一个或多个感光单元(cell),感光单元例如可以是SPAD或SiPM。其中,感光单元为像素阵列中的最小单元。参阅图6,示例性的示出了一个像素包括3×3个SPAD。也可以理解为,3×3个SPAD进行Binning组成一个像素,即3×3个SPAD输出的信号叠加在一起以一个像素的方式被读出。需要说明的是,像素也可以是由行方向或列方向的感光单元Binning得到的。
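感光单元Binning的信号叠加过程可用如下Python代码示意(以NumPy模拟感光单元计数阵列;21×21个SPAD按3×3 Binning得到7×7像素,规模仅为与图3、图6一致的假设):

```python
import numpy as np

def bin_cells(cell_counts, k=3):
    # 将(H, W)的感光单元计数按k×k Binning叠加为像素输出,H、W须为k的整数倍
    h, w = cell_counts.shape
    return cell_counts.reshape(h // k, k, w // k, k).sum(axis=(1, 3))

# 示例:21×21个SPAD、每个单元计数为1,按3×3 Binning得到7×7像素
cells = np.ones((21, 21), dtype=int)
pixels = bin_cells(cells, k=3)
```

行方向或列方向的Binning相当于把k×k改为1×k或k×1的分块求和,实现方式类似。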
在一种可能的实现方式中,像素阵列选通像素的方式包括但不限于逐点选通(可参阅图7a)、或者逐列选通(请参阅图7b)、或者逐行选通(请参阅图7c)、或者按倾斜选通(请参阅图7d)、或者按ROI选通(参阅图7e),其中,感兴趣的区域可以是按特定图案或特定顺序选通像素等。
需要说明的是,像素阵列选通像素的方式与光源阵列选通光源的方式需要一致。例如,光源阵列逐行选通光源,像素阵列也逐行选通像素,即光源阵列采用上述图4c的选通方式,像素阵列采用上述图7c的选通方式。进一步,可以是按从第一行向最后一行的顺序选通,或者也可以是按从最后一行向第一行的顺序选通,或者也可以是从中间某一行开始向边缘行选通,等等,本申请对按行选通的顺序不做限定。再比如,光源阵列逐列选通光源,像素阵列也逐列选通像素,即光源阵列采用上述图4b的选通方式,像素阵列采用上述图7b的选通方式。进一步,可以是按从第一列向最后一列的顺序选通,或者也可以是按从最后一列向第一列的顺序选通,或者也可以是从中间某一列开始向边缘列选通,等等,本申请对按列选通的顺序不做限定。另外,还需要说明的是,上述光源阵列和像素阵列是同时被选通工作的。
在一种可能的实现方式中,光源阵列中的每个光源的发射视场与像素阵列中的每个像素的接收视场在空间上一一对应。即一个像素对应一个接收视场,一个光源对应一个发射视场,接收视场与发射视场在空间一一对准。为了保证回波信号可以被尽可能的接收到,通常会设计接收视场略大于发射视场。
进一步,可基于焦平面成像的光学原理实现发射视场与接收视场的一一对准。即光源阵列中的每个光源位于光学成像系统的物面,像素阵列中的每个像素的光敏面位于光学成像系统的像面。光学成像系统可包括发射光学系统和接收光学系统,光源阵列中的光源位于发射光学系统的物方焦平面,像素阵列中像素的光敏面位于接收光学系统的像方焦平面上。光源阵列中的光源发射的信号光经发射光学系统传播至探测区域,探测区域中的目标反射信号光得到的回波信号经接收光学系统可成像在像方焦平面上。如此,发射光学系统和接收光学系统相对较简单,可模块化,从而可以使得探测系统实现小体积,高集成度等。基于此,发射光学系统与接收光学系统一般采用相同的光学镜头。
如图8所示,为本申请提供的一种光学镜头的结构示意图。该光学镜头包括至少一个镜片,镜片例如可以是透镜,图8以光学镜头包括4片透镜为例的。光学镜头的光轴是指过图8所示的各个透镜的球面球心的直线。
需要说明的,光学镜头可以是关于光轴旋转对称的。例如,光学镜头中的镜片可以是单片的球面透镜,也可以是多片球面透镜的组合(例如凹透镜的组合、凸透镜的组合或凸透镜和凹透镜的组合等)。通过多片球面透镜的组合,有助于提高探测系统的成像质量,降低光学成像系统的像差。应理解,凸透镜和凹透镜有多种不同的类型,例如凸透镜有双凸透镜,平凸透镜以及凹凸透镜,凹透镜有双凹透镜,平凹透镜以及凹凸透镜。如此,有助于提高探测系统的光学器件的复用率,且便于探测系统的装调。
需要说明的是,光学镜头中的镜片也可以是单片非球面透镜、或多片非球面透镜的组合,本申请对此不作限定。
在一种可能的实现方式中,光学镜头中的镜片的材料可以是玻璃、树脂或者晶体等光学材料。当镜片的材料为树脂时,有助于减轻探测系统的质量。当镜片的材料为玻璃时,有助于进一步提高探测系统的成像质量。进一步,为了有效抑制温漂,光学镜头中包括至少一个玻璃材料的镜片。
应理解,发射光学系统的结构也可以是其它可以实现对光源发射的信号光进行准直和/或扩束和/或在角空间的能量分布的调制的结构,例如微透镜阵列(请参见图9)或者粘贴于光源表面的微光学系统,此处不再逐一赘述。其中,微透镜阵列可以是一列、也可以多列,本申请对此不作限定。需要说明的是,发射光学系统和接收光学系统也可以是不同的架构,本申请对此不作限定。
进一步,可选的,该探测系统还可包括控制模组。控制模组可以是中央处理单元(central processing unit,CPU),还可以是其它通用处理器(如微处理器,也可以是任何常规的处理器)、现场可编程门阵列(field programmable gate array,FPGA)、数字信号处理(digital signal processing,DSP)电路、专用集成电路(application specific integrated circuit,ASIC)、晶体管逻辑器件、或者其他可编程逻辑器件、或者其任意组合。
在一种可能的实现方式中,当探测系统应用于车辆时,控制模组可用于根据确定出的探测区域的关联信息,进行行驶路径的规划,例如躲避将要行驶的路径上的障碍物等。
需要说明的是,上述给出的探测系统的架构仅是示例,本申请对探测系统的架构不做限定,例如,探测系统中的光源阵列也可以是一行或一列,进一步,该探测系统还可包括扫描器。扫描器每处于一个扫描角度,这一行或一列光源按对应的一个功率发射信号光。例如,扫描器处于扫描角度A,这一行或一列光源按功率A发射信号光A;扫描器处于扫描角度B,这一行或一列光源按功率B发射信号光B。
基于上述内容,下面给出了本申请中探测系统可能的应用场景。
在一种可能应用场景中,探测系统可以为激光雷达。激光雷达可以被安装在车辆(例如无人车、智能车、电动车、或数字汽车等)上作为车载激光雷达,请参阅图10a。激光雷达可以部署于车辆前、后、左、右四个方向中任一方向或任多个方向,以实现对车辆周围环境信息的捕获。图10a是以激光雷达部署于车辆的前方为例示例的。激光雷达可感知到的区域可称为激光雷达的探测区域,对应的视场可称为全视场。激光雷达可以实时或周期性地获取自车的经纬度、速度、朝向、或一定范围内的目标(例如周围其它车辆)的关联信息(例如目标的距离、目标的移动速度、目标的姿态或目标的灰度图等)。激光雷达或车辆可根据这些关联信息确定车辆的位置和/或路径规划等。例如,利用经纬度确定车辆的位置,或利用速度和朝向确定车辆在未来一段时间的行驶方向和目的地,或利用周围物体的距离确定车辆周围的障碍物数量、密度等。进一步,可选地,还可结合高级驾驶辅助系统(advanced driving assistant system,ADAS)的功能实现车辆的辅助驾驶或自动驾驶等。应理解,激光雷达探测目标的关联信息的原理是:激光雷达以一定方向发射信号光,若在该激光雷达的探测区域内存在目标,目标可将接收到的信号光反射回激光雷达(被反射的信号光可以称为回波信号),激光雷达再根据回波信号确定目标的关联信息。
在另一种可能的应用场景中,探测系统可以为摄像机。摄像机也可被安装在车辆(例如无人车、智能车、电动车、数字汽车等)上,作为车载摄像机,请参阅上述图10b。摄像机可以实时或周期性地获取探测区域中的目标的距离、目标的速度等测量信息,从而可为车道纠偏、车距保持、倒车等操作提供必要信息。车载摄像机可以实现:a)目标识别与分类,例如各类车道线识别、红绿灯识别以及交通标志识别等;b)可通行空间检测(FreeSpace),例如,可对车辆行驶的安全边界(可行驶区域)进行划分,主要对车辆、普通路边沿、侧石边沿、没有障碍物可见的边界、未知边界进行划分等;c)对横向移动目标的探测能力,例如对十字路口横穿的行人以及车辆的探测和追踪;d)定位与地图创建,例如基于视觉同步定位与地图构建(simultaneous localization and mapping,SLAM)技术的定位与地图创建;等等。
需要说明的是,如上应用场景只是举例,本申请所提供的探测系统还可以应用在多种其它可能场景,而不限于上述示例出的场景。例如,激光雷达还可以安装在无人机上,作为机载雷达。再比如,激光雷达也可以安装在路侧单元(road side unit,RSU)上,作为路边交通激光雷达,可以实现智能车路协同通信。再比如,激光雷达可以安装在自动导引运输车(automated guided vehicle,AGV)上,其中,AGV指装备有电磁或光学等自动导航装置,能够沿规定的导航路径行驶,具有安全保护以及各种移载功能的运输车。此处不再一一列举。应理解,本申请所描述的应用场景是为了更加清楚的说明本申请的技术方案,并不构成对本申请提供的技术方案的限定,本领域普通技术人员可知,随着新的应用场景的出现,本申请提供的技术方案对于类似的技术问题,同样适用。
基于上述内容,应用场景可应用于无人驾驶、自动驾驶、辅助驾驶、智能驾驶、网联车、安防监控、远程交互、测绘或人工智能等领域。
需要说明的是,本申请的方法可应用于目标与探测系统是相对静止的场景,或者探测系统采集图像的帧率相对于目标和探测系统的相对速度而言较低的场景。
基于上述内容,图11为本申请提供的一种控制探测方法的方法流程示意图。该控制探测方法可由控制装置执行,该控制装置可以属于探测系统(如上述控制模组),或者也可以是独立于探测系统的控制装置,例如芯片或芯片系统等。当该控制装置属于车辆时,该控制装置可以是车辆中的域处理器,或者也可以是车辆中的电子控制单元(electronic control unit,ECU)等。该方法可应用于上述任一实施例中的探测系统,可包括以下步骤:
步骤1101,控制装置控制第一光源区域的光源按第一功率发射第一信号光,控制第二光源区域的光源按第二功率发射第二信号光。
其中,第一光源区域对应第一像素区域,第一目标的空间位置与第一像素区域对应,第二光源区域对应第二像素区域。也可以理解为,第一光源区域中的光源发射的第一信号光,经探测区域中的第一目标反射得到的第一回波信号可被第一像素区域中的像素接收;第二光源区域中的光源发射的第二信号光,经探测区域中的第二目标反射得到的第二回波信号可被第二像素区域中的像素接收。换言之,第一像素区域中的像素用于接收第一回波信号,第一回波信号包括第一信号光经由第一目标反射得到的反射光;第二像素区域中的像素用于接收第二回波信号,第二回波信号包括第二信号光经由第二目标反射得到的反射光。应理解,第一像素区域和第二像素区域为像素阵列中两个不同的区域,第一光源区域和第二光源区域为光源阵列中两个不同的区域。
结合上述图3,第一像素区域例如可以是像素43、像素44和像素45形成的区域,即第一像素区域中的像素包括像素43、像素44和像素45,对应的第一光源区域包括的光源包括光源43、光源44和光源45。需要说明的是,像素区域可以用像素的行列编号来表示,如第一像素区域可表示为(4,3)~(4,5)。光源区域也可以用光源的行列编号来表示,如第一光源区域可表示为(4,3)~(4,5)。另外,第一像素区域的形状可以是矩形,或者也可以是正方形、或者也可能是其它的图形。应理解,像素阵列中的每个像素均有对应的标识。
在一种可能的实现方式中,第二像素区域可以是像素阵列中除第一像素区域外的全部像素形成的区域(结合上述图3,可以是像素阵列中除像素43、像素44和像素45外的像素形成的区域),或者也可以是除第一像素区域外的部分像素形成的区域。
在一种可能的实现方式中,第一目标对射向其的第一信号光反射得到的第一回波信号的能量(或强度)较大。应理解,影响回波信号的能量的因素包括但不限于目标与探测系统的距离、目标反射的回波信号分布情况(例如朗伯体的目标对接收到的信号光在各个方向均匀反射,即回波信号沿各个方向均匀分布)、目标的反射率等。示例性地,第一目标可以是距离探测系统较近的目标;或者,第一目标是反射率较大的目标(如镜面反射体、金属反射体、角反体、混合反射体且漫反射成分较弱);或者,第一目标是沿探测系统的方向上反射的回波信号较集中的目标;或者,第一目标是与探测系统距离较近且反射率较大的目标;或者,第一目标是与探测系统距离较近且沿探测系统的方向上反射的回波信号较集中的目标;或者,第一目标是反射率较大且沿探测系统的方向上反射的回波信号较集中的目标;或者,第一目标是与探测系统距离较近且反射率较大、且沿探测系统的方向上反射的回波信号较集中的目标。当该方法所应用的探测系统应用于车辆上,反射率较大的目标包括但不限于公路上的指示牌、警示牌、路标牌,路边的安全柱、防护栏、转角的凸面镜、车辆的车牌以及车身上的高反涂层贴纸等。
例如,若第一目标和第二目标与探测系统的距离相同、且反射的回波信号的分布相同(如第一目标和第二目标都是朗伯体),第一目标的反射率大于第二目标的反射率。再比如,若第一目标和第二目标的反射率相同、且反射的回波信号的分布相同,第一目标比第二目标更靠近探测系统。再比如,若第一目标和第二目标与探测系统的距离相同、且反射率相同,第一目标的第一回波信号较多的集中于沿探测系统的方向(即沿探测系统的方向的第一回波信号分布较集中),其它方向的第一回波信号分布较少,第二目标沿探测系统的方向的第二回波信号分布较少,或者第二目标沿各个方向的第二回波信号分布均匀。此处不再一一列举。
此处,第二功率大于第一功率。一种可选的方式中,第一功率和第二功率可以是探测系统预先设置的工作参数,例如可以预先存储在探测系统(如预先存储于探测系统的配置表中),控制装置可通过查表等方式获得第一功率和第二功率。另一种可选的方式中,第一功率相比第二功率的降低量可以是控制装置根据自反馈等方式获得的,具体的:控制装置可通过之前采集的数据来确定第一功率相比于第二功率的降低量。示例性地,第二功率可以为光源的峰值功率。
该步骤1101的一种可能的实现可以是:控制装置向第一光源区域中的光源发送第一控制信号,并向第二光源区域中的光源发送第二控制信号,其中,第一控制信号用于控制第一光源区域中的光源按第一功率发射第一信号光,第二控制信号用于控制第二光源区域中的光源按第二功率发射第二信号光。
相应的,第一光源区域中的光源可基于第一控制信号,按第一功率发射第一信号光。第二光源区域中的光源可基于第二控制信号,按第二功率发射第二信号光。
进一步,第一光源区域可以是按第一选通方式选通的,选通的第一光源区域中的光源可按第一功率向探测区域中发射第一信号光。第二光源区域中的光源可以是按第一选通方式选通的,选通的第二光源区域中的光源可按第二功率向探测区域发射第二信号光。此处,第一选通方式例如可以是逐点、逐行、逐列、按区域(ROI)、或按特定顺序等;或者也可以是一次选通多行,这多行可以是相邻的,也可以是等间隔的、或者也可以是不等间隔的;或者也可以一次选通第一光源区域中的全部光源;或者也可以是一次选通第二光源区域中的全部光源;等等。应理解,第一选通方式与光源阵列中的光源的物理连接关系相关,具体可参见前述相关描述,此处不再赘述。另外,光源阵列具体采用哪种选通方式可以是在第一控制信号(和第二控制信号)中携带指示信息,例如,指示信息可以是第一光源区域中的光源的寻址时序(和第二光源区域中的光源的寻址时序)。即第一控制信号还可用于控制第一光源区域的寻址时序,第二控制信号还可用于控制第二光源区域的寻址时序。再比如,光源阵列具体采用哪种选通方式也可以是预先设置或预先约定的,本申请对此不作限定。
需要说明的是,关于如何确定第一像素区域、第二像素区域的可能的方式可参见下述方式1至方式4中的描述,此处不再赘述。
步骤1102,控制装置控制第一像素区域的像素接收包括第一信号光经由第一目标反射后得到的第一回波信号。
在一种可能的实现方式中,控制装置可控制选通第一像素区域中的像素,即控制读取第一像素区域中的像素基于第一回波信号采集的数据,选通的第一像素区域中的像素可用于接收第一回波信号。结合上述图3,控制装置可控制选通第一像素区域的像素43、像素44和像素45。
示例性地,控制装置可向像素阵列中的第一像素区域发送第七控制信号,第七控制信号用于控制选通像素阵列中的第一像素区域。一种可能的实现中,第七控制信号可以是选通第一像素区域中的像素的时序信号。
在一种可能的实现方式中,第一像素区域选通像素的方式与第一光源区域选通光源的方式一致。
需要说明的是,上述步骤1101和步骤1102不表示先后顺序,可以是同步执行的。示例性地,控制装置还可分别向第一光源区域和第一像素区域发送第一同步信号(即同一时钟信号),以指示第一光源区域与第一像素区域同步进行选通。
步骤1103,控制装置还可控制第二像素区域的像素接收包括第二信号光经由第二目标反射后得到的第二回波信号。
该步骤1103为可选步骤。
示例性地,控制装置可向像素阵列中的第二像素区域发送第八控制信号,第八控制信号用于控制选通像素阵列中的第二像素区域。
在一种可能的实现方式中,第二像素区域选通像素的方式与第二光源区域选通光源的方式一致。
通过上述步骤1101至步骤1103,基于第一目标的空间位置,通过降低第一目标的空间位置对应的第一像素区域对应的第一光源区域中的光源的第一功率,有助于减小第一信号光的强度(或称为能量),从而可减小第一回波信号的强度(或称为能量),进而有助于减小第一回波信号进入除第一像素区域外的其它像素,如此,有助于减小第一回波信号对除第一像素区域外的像素(如第二像素区域中的像素)的串扰。
进一步,可选的,第一像素区域中的像素可对接收到的第一回波信号进行光电转换,得到第一电信号。第二像素区域中的像素可对接收到的第二回波信号进行光电转换,得到第二电信号。控制装置可接收来自第一像素区域的第一电信号、以及接收来自第二像素区域的第二电信号,并根据第一电信号和第二电信号,确定探测区域的关联信息。其中,探测区域的关联信息包括但不限于第一目标的距离信息、第一目标的方位、第一目标的速度、第一目标的灰度信息、第二目标的距离信息、第二目标的方位、第二目标的速度、或者第二目标的灰度信息等中的一项或多项。
下面示例的示出了四种可能的确定第一像素区域的方式。
方式1,基于获取到的强度信息确定第一像素区域。
也可以理解为,基于获取到的强度信息确定第一像素区域包括哪些像素,换言之,基于获取到的强度信息确定哪些像素属于第一像素区域。
如图12所示,为本申请提供的一种确定第一像素区域的方法流程示意图。该方法包括以下步骤:
步骤1201,控制装置控制光源阵列的光源按第三功率发射第三信号光。
其中,第三功率可以等于第二功率,例如,第三功率也可以是峰值功率。
在一种可能的实现方式中,控制装置可向光源阵列发送第三控制信号,第三控制信号用于控制光源阵列中的光源按第三功率发射第三信号光。
进一步,可选的,光源阵列可以按第二选通方式选通光源、并按第三功率向探测区域发射第三信号光。需要说明的是,第二选通方式可以与第一选通方式相同,或者也可以不相同,本申请对此不作限定。示例性地,第二选通方式可以是携带在第三控制信号中的指示信息,例如指示信息可以是光源阵列中光源的寻址时序,即第三控制信号还可以用于控制光源阵列寻址的时序;或者第二选通方式也可以是预先设置或预先约定的,本申请对此不作限定。
示例性地,控制装置向光源阵列发送第三控制信号,第三控制信号用于控制光源阵列按第二选通方式选通光源、且按第三功率发射第三信号光。
进一步,可选的,光源阵列可基于第三控制信号生成第三驱动信号,光源阵列可在驱动信号(例如电流)的驱动下,按第二选通方式且按第三功率发射第三信号光。应理解,驱动信号与光源阵列中光源的寻址时序是一致的。
结合上述图3,光源阵列包括7×7个光源,7×7个光源可按第二选通方式、选通的光源按第三功率向探测区域发射第三信号光。光源阵列中的光源全部被选通后,可向探测区域发射7×7个第三信号光(在探测区域可形成7×7个光斑),7×7个第三信号光可能会被探测区域中的第一目标和/或第二目标反射,从而得到7×7个第三回波信号。
步骤1202,控制装置控制像素阵列的像素接收第三回波信号。
其中,第三回波信号包括第三信号光经由第一目标和/或第二目标反射的反射光。换言之,第三回波信号可能是第三信号光经第一目标反射的反射光,或者也可能是第三信号光经第二目标反射的反射光,或者也可能既包括经第一目标反射的反射光、也包括经第二目标反射的反射光。
在一种可能的实现方式中,控制装置可向像素阵列发送第四控制信号,第四控制信号用于控制像素阵列按第二选通方式选通像素。示例性地,该第四控制信号可以用于控制像素阵列选通像素的时序,一种可能的实现中,第四控制信号可以是选通像素阵列中像素的时序信号。
此处,该步骤1202中像素阵列选通像素的第二选通方式与上述步骤1201中光源阵列选通光源第二选通方式一致,具体可参见前述光源阵列选通光源的方式与像素阵列选通像素的方式需要一致的相关描述。例如,步骤1201中选通光源阵列中的第1列光源,该步骤1202选通像素阵列中与第1列光源对应的第1列像素。再比如,步骤1201中选通光源阵列中的第1行光源,该步骤1202选通像素阵列中与第1行光源对应的第1行的像素。
需要说明的是,控制装置还需要分别向光源阵列和像素阵列发送第二同步信号(如同一时钟信号),以指示光源阵列与像素阵列同步进行选通。
步骤1203,像素阵列的像素对接收到的第三回波信号进行光电转换,得到第三电信号。
其中,像素阵列中的每个像素可输出一个第三电信号。结合上述图3,像素阵列可输出7×7第三电信号。换言之,一个像素对应一个第三电信号。
步骤1204,像素阵列向控制装置发送第三电信号。
步骤1205,控制装置根据第三电信号可确定第一强度。
也可以理解为,第三电信号中携带第三回波信号的强度信息,称为第一强度。
此处,一个第三电信号对应一个第一强度。结合上述图3,控制装置根据7×7个第三电信号可确定出7×7个第一强度。
在一种可能的实现方式中,控制装置(如信号采集电路)将采集到的第三电信号(原始信号)进行处理得到有效的数据格式和可处理的信号形式,再由处理电路及算法模块对信号采集电路得到的有效的数据进行计算,可得到目标的关联信息,例如用于表征目标反射率的回波信号的强度等。结合上述图2b,统计直方图的纵坐标可记录强度。应理解,TDC计数是有上限的,第一目标反射的第三回波信号可能会使TDC超过计数上限,即达到计数饱和。
步骤1206,控制装置可确定第一强度中大于或等于第一预设值的强度对应的像素为第一像素区域中的像素。
进一步,可选的,控制装置还可确定第一强度中小于第一预设值的强度对应的像素为第二像素区域中的像素。
也可以理解为,与第一像素区域中的像素对应的第一强度大于或等于第一预设值,和/或,与第二像素区域中的像素对应的第一强度小于第一预设值。
本领域技术人员可知,上述第一预设值也可以替换为第一预设范围,根据第一强度是否属于该第一预设范围来判断像素是否为第一像素区域或者第二像素区域中的像素。例如,控制装置可确定不属于第一预设范围的强度对应的像素为第一像素区域中的像素。进一步,可选的,控制装置还可确定属于所述第一预设范围的强度对应的像素为第二像素区域中的像素。又如,控制装置可确定属于第一预设范围的强度对应的像素为第一像素区域中的像素。进一步,可选的,控制装置还可确定不属于所述第一预设范围的强度对应的像素为第二像素区域中的像素。再如,控制装置可确定属于第一预设范围的强度对应的像素为第一像素区域中的像素。进一步,可选的,控制装置还可确定属于第二预设范围的强度对应的像素为第二像素区域中的像素。基于具体的实现,可能还存在第三像素区域等更多的像素区域,以区别处理不同的信号强度范围,本申请对此不做具体限定。
结合上述图3,例如控制装置确定像素43、像素44和像素45输出的第三电信号对应的第一强度大于或等于第一预设值,从而可确定第一像素区域中的像素包括像素43、像素44和像素45。应理解,像素阵列可向控制装置输出像素编号和第三电信号的对应关系。例如,像素43可向控制装置发送第三电信号和像素编号43。
此处,第一预设值可以为统计直方图的纵坐标接近饱和的值或已饱和的值。
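方式1的判定逻辑可用如下Python代码示意(强度矩阵与第一预设值均为虚构的示例数据,阈值取接近计数饱和的值仅为假设):

```python
import numpy as np

def split_regions_by_intensity(intensity, preset1):
    # 第一强度大于或等于第一预设值的像素判定为第一像素区域,
    # 其余像素判定为第二像素区域(返回两个布尔掩码)
    first = intensity >= preset1
    return first, ~first

# 示例:7×7的第一强度,第4行第3~5列的像素(像素43、44、45)接近计数饱和
intensity = np.full((7, 7), 100.0)
intensity[3, 2:5] = 4000.0      # 下标从0起,对应第4行、第3~5列
first, second = split_regions_by_intensity(intensity, preset1=3500.0)
```

若改用第一预设范围,仅需把比较条件换为区间判断,判定框架不变。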
方式2,基于获取的距离信息与强度信息确定第一像素区域。
如图13所示,为本申请提供的另一种确定第一像素区域的方法流程示意图。该方法包括以下步骤:
步骤1301,控制装置控制光源阵列的光源按第三功率发射第三信号光。
该步骤1301可参见上述步骤1201的介绍,此处不再赘述。
步骤1302,控制装置控制像素阵列的像素接收第三回波信号。
该步骤1302可参见上述步骤1202的介绍,此处不再赘述。
步骤1303,像素阵列的像素对接收到的第三回波信号进行光电转换,得到第三电信号。
该步骤1303可参见上述步骤1203的介绍,此处不再赘述。
步骤1304,像素阵列向控制装置发送第三电信号。
步骤1305,控制装置根据第三电信号,确定第一距离和第一强度。
此处,一个第三电信号对应一个第一距离、对应一个第一强度。也可以理解为,像素、第三电信号、第一距离、第一强度四者之间一一对应。
在一种可能的实现方式中,控制装置(如信号采集电路)将采集到的第三电信号(原始信号)进行处理得到有效的数据格式和可处理的信号形式,再由处理电路及算法模块对信号采集电路得到的有效的数据进行计算,可得到目标的关联信息,例如用于表征目标反射率的回波信号的强度、回波信号的飞行时间等,进一步,可基于飞行时间确定第一距离。结合上述图2b,飞行时间和强度可以用统计直方图的方式表示。统计直方图的纵坐标可记录强度,飞行时间可通过TDC采集并记录,TDC的最大位数决定了其可记录的最大数据量。应理解,TDC计数是有上限的,第一目标反射的第三回波信号可能会使TDC超过计数上限,即达到计数饱和。
步骤1306,控制装置确定第一距离相同的像素对应的第一强度,将第一距离相同的像素对应的第一强度中两两相减,将差值大于或等于第二预设值的较大强度对应的像素确定为第一像素区域的像素。
进一步,可将差值小于第二预设值的强度对应的像素确定为第二像素区域中的像素。还可以将差值大于或等于第二预设值的较小强度对应的像素确定为第二像素区域中的像素。
也可以理解为,与第一像素区域中的像素对应的第一强度和与第二像素区域中的像素对应的第一强度的差值大于或等于第二预设值,且与第一像素区域中的像素对应的第一距离和与第二像素区域中的像素对应的第一距离相同。
结合上述图3,控制装置可先确定7×7个第一距离中哪些距离相同,进一步,再两两相减,确定这些第一距离相同的像素对应的第一强度的差值,将差值大于第二预设值的两个第一强度中强度较大的一个对应的像素确定为第一像素区域中的像素。此处,第二预设值可以小于第一预设值。
需要说明的是,在探测系统的探测区域内,相同距离的目标反射的回波信号的强度差异较小甚至相同。若强度差异较大,说明可能存在第一目标。也可以理解为,在探测区域中存在第一目标时,像素阵列中的像素采集到该第一目标反射的第三信号光得到的第一强度往往比相同距离的第二目标反射第三信号光的强度大很多,甚至比距离更近的第二目标反射第三信号光得到的第一强度也大、甚至出现计数饱和。其中,强度差异这一指标可人为定义且与实际工况条件(如环境光强度等)有关,例如不超过±10%为差异较小,超出±10%即认为差异较大。
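方式2中“距离相同的像素两两比较强度差”的判定可用如下Python代码示意(数据与第二预设值均为虚构示例;距离是否“相同”用一个容差dist_tol近似,属于示例性假设):

```python
import numpy as np

def find_first_region_by_pairs(distance, intensity, preset2, dist_tol=0.0):
    # 方式2:在第一距离相同的像素之间两两比较第一强度,
    # 差值大于或等于第二预设值时,强度较大者判定为第一像素区域中的像素
    n = distance.size
    d = distance.ravel()
    s = intensity.ravel()
    first = np.zeros(n, dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            if abs(d[i] - d[j]) <= dist_tol and abs(s[i] - s[j]) >= preset2:
                first[i if s[i] > s[j] else j] = True  # 较大强度对应的像素
    return first.reshape(distance.shape)

# 示例:前两个像素距离相同,强度差远超第二预设值
dist = np.array([[10.0, 10.0, 20.0]])
inten = np.array([[4000.0, 120.0, 130.0]])
first = find_first_region_by_pairs(dist, inten, preset2=1000.0)
```

其余像素即可按上文的规则划入第二像素区域,此处不再展开。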
方式3,基于获取到的点云图判断。
需要说明的是,如果探测区域内存在第一目标,控制装置获取到的点云图会受到影响。由于第一目标会反射较强的第三回波信号(包括自身反射形成的回波信号和反射的背景噪声形成的回波信号),第三回波信号除了会触发第一目标的空间位置对应的第一像素区域中的像素响应并输出(可能会导致饱和)第三电信号,还会造成光学串扰影响到第一像素区域周围的其它像素。因此,输出的点云图像中第一目标的尺寸和轮廓边缘的清晰度、锐度会变差,存在大量杂散点分布,整体轮廓存在拉伸、拖尾等现象,即第一目标的空间位置的周围(3D点云图中的前后左右上下方向可能存在点云分布的延伸、拉线等)对应的点云图异常分布等。
进一步,控制装置可确定杂散点分布的区域中的杂散点的第一强度,将第一强度中大于第三预设值的强度对应的点确定为第一像素区域中的像素。其中,第三预设值可以是预先设置的,例如第三预设值等于饱和强度×小于1的系数。在一种可能的实现方式中,第三预设值可以为点云图中的一个强度等高线,将杂散点分布的区域中强度等高线范围内的点确定为第一像素区域中的像素。
方式4,基于至少两帧图像,确定第一像素区域。
如图14所示,为本申请提供的另一种确定第一像素区域的方法流程示意图。该方法包括以下步骤:
步骤1401,控制装置控制光源阵列的光源按第三功率发射第三信号光。
该步骤1401可参见上述步骤1201的介绍,此处不再赘述。
步骤1402,控制装置控制像素阵列的像素接收第三回波信号。
该步骤1402可参见上述步骤1202的介绍,此处不再赘述。
步骤1403,像素阵列的像素对接收到的第三回波信号进行光电转换,得到第三电信号。
该步骤1403可参见上述步骤1203的介绍,此处不再赘述。
步骤1404,控制装置根据第三回波信号确定第三像素区域。
其中,第三像素区域中的像素对应的第一强度大于或等于第四预设值。其中,第四预设值可以等于第一预设值,例如也可以是统计直方图的纵坐标接近饱和的值或已饱和的值。
该步骤1404的可能的实现方式可参见上述方式1中根据第一强度确定像素区域的步骤,或者也可以参见上述方式2中的步骤1305和步骤1306。
基于上述步骤1401至步骤1404可确定出第三像素区域。也可以理解为,基于上述步骤1401至步骤1404可以获得一帧图像,可称为第一图像。第三像素区域包括第一像素区域,进一步,还可包括受第一目标反射的回波信号串扰的像素。换言之,第三像素区域中可能已经包括了受第一目标反射的回波信号的光学串扰影响的像素。结合上述图3,第三像素区域中的像素包括像素33、像素34、像素35、像素43、像素44、像素45、像素53、像素54和像素55。这些像素中可能已经包括了受第一目标反射的回波信号串扰的像素。为了进一步识别出第一目标的空间位置对应的第一像素区域,换言之,为了识别出第三像素区域中的像素中哪些是受第一目标的回波信号串扰的像素,还可执行下述步骤1405至步骤1407。
需要说明的是,若像素阵列的选通方式是逐列选通像素,则可能会受光学串扰的是列方向上的近邻像素;若像素阵列的选通方式是逐行选通像素,则可能会受光学串扰的是行方向上的近邻像素;若像素阵列的选通方式是按斜对角线选通像素,则可能会受光学串扰的是斜对角线上近邻的像素。
步骤1405,控制装置控制第三光源区域的光源按第四功率发射第四信号光、及控制第四光源区域的光源按第五功率发射第五信号光。
其中,第五功率大于第四功率。在一种可能的实现方式中,第五功率可等于上述第二功率,或者也可以等于上述第三功率,第四功率可等于上述第一功率。进一步,可选地,第五功率可以是峰值功率。通过降低第三光源区域的光源发射第四信号光的功率,有助于减小经第一目标反射的第四回波信号的强度,从而可减小第四回波信号对第五回波信号的串扰。
其中,第三光源区域与第三像素区域对应。也可以理解为,第三光源区域中的光源发射的第四信号光,经探测区域中的第一目标反射得到的第四回波信号可被第三像素区域中的像素接收。
在一种可能的实现方式中,控制装置可向第三光源区域中的光源发送第五控制信号,向第四光源区域的光源发送第六控制信号,第五控制信号用于控制第三光源区域中的光源按第四功率发射第四信号光,第六控制信号用于控制第四光源区域中的光源按第五功率发射第五信号光。
进一步,第三光源区域可以是按第三选通方式选通的,选通的第三光源区域中的光源可按第四功率向探测区域中发射第四信号光。第四光源区域中的光源可以是按第三选通方式选通的,选通的第四光源区域中的光源可按第五功率向探测区域发射第五信号光。
需要说明的是,第三选通方式可以是携带在第五控制信号中的指示信息,例如指示信息可以是光源阵列中光源的寻址时序。即第五控制信号还可用于控制第三光源区域中的光源的寻址时序,第六控制信号还可用于控制第四光源区域中的光源的寻址时序。再比如,光源阵列具体采用哪种选通方式也可以是预先设置或预先约定的,本申请对此不作限定。另外,第三选通方式可以与第一选通方式相同,或者也可以不相同,第三选通方式与第二选通方式可以相同,也可以不相同,本申请对此也不作限定。
示例性地,控制装置向第三光源区域发送第五控制信号,第五控制信号用于控制第三光源区域按第三选通方式选通光源、且按第四功率发射第四信号光。控制装置向第四光源区域发送第六控制信号,第六控制信号用于控制第四光源区域按第三选通方式选通光源、且按第五功率发射第五信号光。
步骤1406,控制装置控制像素阵列接收第四回波信号和第五回波信号。
其中,第四回波信号包括第四信号光经第一目标反射的反射光,第五回波信号包括第五信号光经第二目标反射的反射光。
关于该步骤1406可能的实现方式可参见下述图15或图16的介绍,此处不再赘述。
步骤1407,控制装置根据第四回波信号和第五回波信号,确定第一像素区域和第二像素区域。
在一种可能的实现方式中,控制装置可将第四回波信号和第五回波信号进行光电转换,得到第四电信号和第五电信号。其中,第四电信号中携带第四回波信号的强度(可称为第二强度),第五电信号中携带第五回波信号的强度(可称为第三强度)。应理解,一个第四电信号对应一个第二强度,一个第五电信号对应一个第三强度。结合上述图3,控制装置根据7×7个电信号(包括第四电信号和第五电信号)可确定出7×7个强度(包括第二强度和第三强度)。
进一步,控制装置可将第二强度和第三强度中大于或等于第五预设值的强度对应的像素确定为第一像素区域中的像素,并将第二强度和第三强度中小于第五预设值的强度对应的像素确定为第二像素区域中的像素。其中,第五预设值可以等于第一预设值。
在另一种可能的实现方式中,控制装置可将第四回波信号和第五回波信号进行光电转换,得到第四电信号和第五电信号。其中,第四电信号中携带第四回波信号的强度(可称为第二强度)和第二距离,第五电信号中携带第五回波信号的强度(可称为第三强度)和第一距离。应理解,一个第四电信号对应一个第二强度、对应一个第二距离,一个第五电信号对应一个第三强度、对应一个第一距离。结合上述图3,控制装置根据7×7个电信号(包括第四电信号和第五电信号)可确定出7×7个强度(包括第二强度和第三强度)、及7×7个距离(包括第二距离和第一距离)。
进一步,控制装置确定第二距离和第一距离中距离相同的像素对应的强度,将距离相同的像素对应的强度两两相减,并将差值大于或等于第二预设值的较大强度对应的像素确定为第一像素区域中的像素,并将差值小于第二预设值的强度对应的像素、以及差值大于或等于第二预设值的较小强度对应的像素确定为第二像素区域中的像素。
结合上述图3,例如控制装置确定像素43、像素44、像素45为第一像素区域中的像素,将像素阵列中除像素43、像素44、像素45外的像素确定为第二像素区域。
需要说明的是,基于第三像素区域确定第一像素区域中的像素包括但不限于上述给出的可能的方式,例如还可以通过置心算法从第三像素区域中确定第一像素区域,具体的,可将第三像素区域的中间区域确定为第一像素区域;或者,将第三像素区域的中心区域的像素确定为准第一像素区域,再将准第一像素区域中强度差异较大的像素中的强度较大的像素确定为第一像素区域中的像素。
通过上述步骤1401至步骤1407,可先确定出第三像素区域,由于第三像素区域中可能包括了已被第一目标反射的反射光串扰的像素,通过进一步适应性的调整第三像素区域对应的第三光源区域中的光源的功率,可从第三像素区域中准确的确定出第一目标的空间位置对应的第一像素区域,从而有助于获得全视场完整且精确的探测区域的关联信息(如第一目标和第二目标的关联信息等)。
需要说明的是,上述步骤1405至步骤1407中基于第三像素区域确定第一像素区域的过程可以理解为是获取第二帧图像(可称为第二图像),具体过程可分五个阶段,即选通至第三像素区域的第一边缘区域之前的区域的第一阶段,选通至第三像素区域的第一边缘区域的第二阶段,选通至第三像素区域的第三阶段,以及选通至第三像素区域的第二边缘区域的第四阶段,选通至第三像素区域的第二边缘区域之后的区域的第五阶段。换言之,第一阶段选通的是第三像素区域的第一边缘区域之前的像素(如第一边缘区域的前一行或前一列的像素等),第二阶段选通的是第三像素区域的第一边缘区域的像素,第三阶段选通的是第三像素区域的像素,第四阶段选通的是第三像素区域的第二边缘区域的像素,第五阶段选通的是第三像素区域的第二边缘区域之后的像素(如第二边缘区域的后一行或后一列的像素等)。
下面结合具体的示例,对获取第二图像的过程进行介绍。
如图15所示,为本申请提供的一种基于第三像素区域确定第一像素区域的方法流程示意图。该方法中像素阵列和光源阵列按列选通方式、且从第一列开始选通为例。
在下文的介绍中,第三像素区域以包括像素阵列的第(a_i~a_j)行中的第(b_i~b_j)列为例,a_i和b_i均为大于1的整数,a_j为大于a_i的整数,b_j为大于b_i的整数。应理解,像素阵列的第(a_i~a_j)行中的第(b_i~b_j)列包括的像素是:像素阵列中行为第a_i行至a_j行、且列为第b_i列至第b_j列对应的像素。另外,第(a_i~a_j)行中的第(b_i~b_j)列与第(b_i~b_j)列中的第(a_i~a_j)行所对应的像素是相同的。
结合上述图3,第三像素区域包括像素阵列中的第(3~5)行中的第(3~5)列,即第三像素区域中包括像素33、像素34、像素35、像素43、像素44、像素45、像素53、像素54和像素55。
第一阶段,控制装置控制光源阵列逐列选通光源,选通的光源列按第五功率发射第五信号光;相应地,控制装置控制像素阵列逐列选通对应的像素列,选通的像素接收来自探测区域的第五回波信号。第一阶段的过程可参见前述方式1的步骤1201至步骤1203,得到部分第五电信号。
结合上述图3,控制装置控制第1列光源按第五功率发射第五信号光;相应的,控制装置控制选通第1列像素。
第二阶段,控制装置可执行下述步骤1501。
步骤1501,控制装置控制光源阵列的第b_i-1列的光源按第五功率发射第五信号光,控制选通像素阵列的第(b_i-1~b_i)列的像素。
其中,第b_i-1列的光源的发射视场与第b_i-1列的像素的接收视场对应。应理解,第b_i-1列的像素为第三像素区域的第一边缘区域的像素。
结合上述图3,控制装置控制光源阵列中的第2列光源按第五功率发射第五信号光;相应的,控制装置控制选通像素阵列中的第2列像素及第3列像素。换言之,第2列像素和第3列像素可共同用于接收来自探测区域的第四回波信号和第五回波信号。
在第二阶段,第b_i-1列的光源按第五功率发射第五信号光,第五信号光的强度较强,对应的第五回波信号的强度也较强,由于第b_i-1列的光源的发射视场与第b_i-1列的像素的接收视场对应,因此,第五回波信号的大部分能量射向了第b_i-1列的像素,会有部分第五回波信号进入到第b_i列的像素。通过同时选通相邻的两列,使得后续选通像素的方式为错位选通,从而可降低第一目标反射的回波信号对探测区域中的其它目标(如第二目标)的回波信号的串扰影响。
第三阶段,控制装置可执行下述步骤1502和步骤1503。
步骤1502,控制装置控制依次选通像素阵列的第(b_i+1~b_j)列中的第(a_i~a_j)行的像素;依次控制光源阵列的第(b_i~b_j-1)列中的第(a_i~a_j)行的光源按第四功率发射第四信号光。
此处,依次控制第(b_i~b_j-1)列中的第(a_i~a_j)行的光源,可以理解为,第i时刻控制第b_i列中的第(a_i~a_j)行的光源按第四功率发射第四信号光,相应的,控制选通第b_i+1列中的第(a_i~a_j)行的像素;第i+1时刻控制第b_i+1列中的第(a_i~a_j)行的光源按第四功率发射第四信号光,相应的,控制选通第b_i+2列中的第(a_i~a_j)行的像素;依次类推,第j-1时刻控制第b_j-1列中的第(a_i~a_j)行的光源按第四功率发射第四信号光,相应的,控制选通第b_j列中的第(a_i~a_j)行的像素。
结合上述图3,控制装置控制光源阵列的第3列中的第(3~5)行的光源(即光源33、光源43、光源53)按第四功率发射第四信号光,相应的,控制选通像素阵列的第4列中的第(3~5)行的像素(即像素34、像素44和像素54)。依次类推,控制装置控制光源阵列的第4列中的第(3~5)行的光源(即光源34、光源44、光源54)按第四功率发射第四信号光,相应的,控制选通像素阵列的第5列中的第(3~5)行的像素(即像素35、像素45和像素55)。需要说明的是,选通的像素所在列与选通的光源所在的列是错位的,具体的,选通的像素所在的列比选通的光源所在的列靠后一列。
在一种可能的实现方式中,第四功率小于第五功率。示例性的,第五功率可以是峰值功率。
步骤1503,控制装置控制光源阵列中除第(b_i~b_j-1)列中的第(a_i~a_j)行外的光源按第五功率发射第五信号光,控制选通像素阵列中除第(b_i+1~b_j)列中的第(a_i~a_j)行的像素外的像素。
结合上述图3,控制装置控制光源阵列中除第(3~4)列中的第(3~5)行的光源(即光源33、光源43、光源53、光源34、光源44和光源54)外的光源按第五功率发射第五信号光,相应的,控制选通像素阵列中除第(4~5)列中的第(3~5)行的像素(即像素34、像素44、像素54、像素35、像素45和像素55)外的像素。
需要说明的是,上述步骤1502也可以是控制装置控制依次选通像素阵列的第(b_i+1~b_j)列的像素,控制光源阵列的第(b_i~b_j-1)列的光源按第四功率发射第四信号光。相应的,该步骤1503可以是控制装置控制选通像素阵列中除第(b_i+1~b_j)列的像素外的像素,控制光源阵列中除第(b_i~b_j-1)列的光源外的光源按第五功率发射第五信号光。
结合上述图3,上述步骤1502也可以是控制装置控制光源阵列的第3列的光源按第四功率发射第四信号光,相应的,控制选通像素阵列的第4列的像素。依次类推,控制装置控制光源阵列的第4列的光源按第四功率发射第四信号光,相应的,控制选通像素阵列的第5列的像素。上述步骤1503也可以是控制光源阵列中除第3列和第4列的光源外的光源按第五功率发射第五信号光,相应的,控制选通像素阵列中除第4列和第5列外的像素。
第四阶段,控制装置可执行步骤1504。
步骤1504,控制装置控制光源阵列的第b_j列的光源按第六功率发射第六信号光,停止选通像素阵列的第b_j+1列的像素。
其中,第六功率可以是大于0的任意功率。
通过控制第b_j列的光源按第六功率发射第六信号光,此时不选通像素,可以使得在选通第三像素区域之后的像素(如第b_j+1列的像素)时,选通的像素列和光源不再错位,是对齐的。结合上述图3,控制光源阵列的第5列的光源按第六功率发射第六信号光,相应的,停止选通像素阵列中的任何像素。基于此,选通至第三像素区域之后的像素时,控制光源阵列的第6列光源按第五功率发射第五信号光;相应的,控制选通像素阵列的第6列像素。
第五阶段可重复上述第一阶段的过程。结合上述图3,控制装置控制第6列光源按第五功率发射第五信号光,控制装置控制选通像素阵列中第6列像素,选通的第6列像素可接收第五回波信号,依次类推,直至扫描完光源阵列的最后一列。
需要说明的是,该示例中每个阶段均是以一列为例说明的,若每个阶段存在多列,则重复示例给出的对应阶段的一列的过程即可,本申请中不再重复赘述。
基于上述步骤1501至步骤1504,通过降低第三光源区域中的光源的发射功率,可降低第一目标反射的回波信号的能量,从而可降低第一目标反射的回波信号对周围像素的串扰。同时利用光源发射的信号光的光斑的能量分布特性,通过错位选通像素阵列中的像素,可进一步降低了第一目标反射回波信号的串扰。
需要说明的是,若第三像素区域是从像素阵列中的第1列像素开始的,则不需要错位选通像素,可从第1列开始依次选通像素阵列中的像素列,并控制光源阵列中对应列的光源按第四功率发射第四信号光。若第三像素区域的最后一列为像素阵列的最后一列,则不需要执行第四阶段和第五阶段。
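图15的五阶段按列错位选通时序可用如下Python代码示意(此处采用步骤1502、步骤1503末尾所述“整列”的变体,即不区分第(a_i~a_j)行;列号从1起,各功率数值仅为示例假设):

```python
def staggered_schedule(n_cols, b_i, b_j, p4, p5, p6):
    # 返回 (点亮的光源列, 选通的像素列列表, 发射功率) 的时序序列,
    # b_i~b_j为第三像素区域所在列, p4<p5, p6为大于0的任意功率
    sched = []
    # 第一阶段:第三像素区域之前,光源列与像素列对齐,按第五功率发射
    for c in range(1, b_i - 1):
        sched.append((c, [c], p5))
    # 第二阶段:第b_i-1列光源发光,同时选通第(b_i-1~b_i)列像素
    sched.append((b_i - 1, [b_i - 1, b_i], p5))
    # 第三阶段:光源列与像素列错位一列,按第四功率发射
    for c in range(b_i, b_j):
        sched.append((c, [c + 1], p4))
    # 第四阶段:第b_j列光源按第六功率发射,不选通像素
    sched.append((b_j, [], p6))
    # 第五阶段:恢复对齐选通,按第五功率发射
    for c in range(b_j + 1, n_cols + 1):
        sched.append((c, [c], p5))
    return sched

# 示例:7列阵列,第三像素区域为第3~5列(与图3的示例一致)
sched = staggered_schedule(7, 3, 5, p4=0.3, p5=1.0, p6=0.1)
```

按行选通的图16流程与此对称,把“列”换成“行”即可。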
如图16所示,为本申请提供的另一种基于第三像素区域确定第一像素区域的方法流程示意图。该方法中像素阵列和光源阵列按行选通方式、且从第一行开始选通为例。
第三像素区域以包括像素阵列的第(a_i~a_j)行中的第(b_i~b_j)列为例,具体可参见前述相关介绍。
第一阶段,控制装置控制光源阵列逐行选通光源,选通的光源按第五功率发射第五信号光;相应地,控制装置控制像素阵列逐行选通像素,选通的像素接收来自探测区域的第五回波信号。第一阶段的过程可参见前述方式1的步骤1201至步骤1203,得到部分第五电信号。
结合上述图3,控制装置控制第1行光源按第五功率发射第五信号光;相应的,控制装置控制选通第1行的像素。
第二阶段,控制装置可执行下述步骤1601。
步骤1601,控制装置控制光源阵列的第a_i-1行光源按第五功率发射第五信号光,控制选通像素阵列的第(a_i-1~a_i)行的像素。
其中,第a_i-1行的光源的发射视场与第a_i-1行的像素的接收视场对应。应理解,第a_i-1行的像素为第三像素区域的第一边缘区域的像素。
结合上述图3,控制装置控制光源阵列中的第2行光源按第五功率发射第五信号光;相应的,控制装置控制选通像素阵列中的第2行的像素及第3行的像素。换言之,第2行的像素和第3行的像素可共同用于接收来自探测区域的第四回波信号和第五回波信号。
第三阶段,控制装置可执行下述步骤1602和步骤1603。
步骤1602,控制装置控制光源阵列的第(a_i~a_j-1)行中的第(b_i~b_j)列的光源按第四功率发射第四信号光,控制选通像素阵列的第(a_i+1~a_j)行中的第(b_i~b_j)列的像素。
结合上述图3,控制装置控制光源阵列的第3行中的第(3~5)列的光源(即光源33、光源34、光源35)按第四功率发射第四信号光,相应的,控制选通像素阵列的第4行中的第(3~5)列的像素(即像素43、像素44和像素45)。依次类推,控制装置控制光源阵列的第4行中的第(3~5)列的光源(即光源43、光源44、光源45)按第四功率发射第四信号光,相应的,控制选通像素阵列的第5行中的第(3~5)列的像素(即像素53、像素54和像素55)。需要说明的是,选通的像素所在行与选通的光源所在的行是错位的,具体的,选通的像素所在的行比选通的光源所在的行靠后一行。
步骤1603,控制装置控制选通像素阵列中除第(a_i+1~a_j)行中的第(b_i~b_j)列的像素外的像素,控制光源阵列中除第(a_i~a_j-1)行中的第(b_i~b_j)列外的光源按第五功率发射第五信号光。
该步骤1603可参见前述步骤1503,具体可将上述步骤1503中的“行”用“列”替换,将“列”用“行”替换。
第四阶段,控制装置可执行步骤1604。
步骤1604,控制装置控制光源阵列的第a_j行的光源按第六功率发射第六信号光,停止选通像素阵列的第a_j+1行的像素。
该步骤1604可参见前述步骤1504,具体可将上述步骤1504中的“行”用“列”替换,将“列”用“行”替换。
第五阶段可重复上述第一阶段的过程。结合上述图3,控制装置控制第6行光源按第五功率发射第五信号光,控制装置控制选通像素阵列中第6行的像素,选通的第6行的像素可接收第五回波信号,依次类推,直至扫描完光源阵列的最后一行。
需要说明的是,该示例中每个阶段均是以一行为例说明的,若每个阶段存在多行,则重复示例给出的对应阶段的一行的过程即可,本申请中不再重复赘述。
需要说明的是,若第三像素区域是从像素阵列中的第1行的像素开始的,则不需要错位选通像素,可从第1行开始依次选通像素阵列中的像素行,并控制光源阵列中对应行的光源按第四功率发射第四信号光。若第三像素区域的最后一行为像素阵列的最后一行,则不需要执行第四阶段和第五阶段。
在确定出第一目标的空间位置对应的第一像素区域后,通过调整第一像素区域对应的第一光源区域中的光源的功率,以及错位选通第一像素区域中的像素,可获得较精确的探测区域的关联信息。具体可分如下五个阶段,即选通至第一像素区域的第一边缘区域之前的区域的第A阶段,选通至第一像素区域的第一边缘区域的第B阶段,选通至第一像素区域的第C阶段,以及选通至第一像素区域的第二边缘区域的第D阶段,选通至第一像素区域的第二边缘区域之后的区域的第E阶段。换言之,第A阶段选通的是第一像素区域的第一边缘区域之前的像素(如第一边缘区域的前一行或前一列的像素等),第B阶段选通的是第一像素区域的第一边缘区域的像素,第C阶段选通的是第一像素区域的像素,第D阶段选通的是第一像素区域的第二边缘区域的像素,第E阶段选通的是第一像素区域的第二边缘区域之后的像素(如第二边缘区域的后一行或后一列的像素等)。
如图17所示,为本申请提供的一种获取探测区域中关联信息的方法流程示意图。该方法中像素阵列和光源阵列按列选通方式、且均从第一列开始选通为例。
在下文的介绍中,第一像素区域以包括像素阵列的第(A_i~A_j)行中的第(B_i~B_j)列为例,A_i和B_i均为大于1的整数,A_j为大于A_i的整数,B_j为大于B_i的整数。应理解,像素阵列的第(A_i~A_j)行中的第(B_i~B_j)列包括的像素是:像素阵列中行为第A_i行至A_j行、且列为第B_i列至第B_j列对应的像素。另外,第(A_i~A_j)行中的第(B_i~B_j)列与第(B_i~B_j)列中的第(A_i~A_j)行所对应的像素是相同的。
结合上述图3,第一像素区域包括像素阵列中的第4行、第(3~5)列,即第一像素区域中包括像素43、像素44和像素45。
第A阶段,控制装置控制光源阵列逐列选通光源,选通的光源列按第一功率发射第一信号光。相应的,控制装置逐列控制选通像素阵列中对应的像素列。结合上述图3,控制装置控制第1列光源按第一功率发射第一信号光;相应的,控制选通第1列像素,选通的第1列像素可接收来自探测区域的第一回波信号。
第B阶段,控制装置控制选通至第一像素区域的第一边缘区域的像素。
步骤1701,控制装置控制光源阵列的第B_i-1列的光源按第二功率发射第二信号光,控制选通像素阵列的第(B_i-1~B_i)列的像素。
此处,选通的第(B_i-1~B_i)列的像素用于接收来自探测区域的第二回波信号。
其中,第B_i-1列的光源的发射视场与第B_i-1列的像素的接收视场对应。应理解,第B_i-1列的像素为第一像素区域的第一边缘区域的像素。
结合上述图3,控制装置控制光源阵列中的第2列光源按第二功率发射第二信号光;相应的,控制装置控制选通像素阵列中的第2列像素及第3列像素。换言之,第2列像素和第3列像素可共同用于接收来自探测区域的第二回波信号。
第C阶段,控制装置控制选通至第一像素区域的像素。
步骤1702,控制装置依次控制第(B_i~B_j-1)列中的第(A_i~A_j)行的光源按第一功率发射第一信号光,依次控制选通第(B_i+1~B_j)列中的第(A_i~A_j)行的像素。
此处,依次控制第(B_i~B_j-1)列中的第(A_i~A_j)行的光源,可以理解为,第i时刻控制第B_i列中的第(A_i~A_j)行的光源按第一功率发射第一信号光,相应的,控制选通第B_i+1列中的第(A_i~A_j)行的像素;第i+1时刻控制第B_i+1列中的第(A_i~A_j)行的光源按第一功率发射第一信号光,相应的,控制选通第B_i+2列中的第(A_i~A_j)行的像素;依次类推,第j-1时刻控制第B_j-1列中的第(A_i~A_j)行的光源按第一功率发射第一信号光,相应的,控制选通第B_j列中的第(A_i~A_j)行的像素。
结合上述图3,控制装置控制光源阵列的第3列中的第4行的光源(即光源43)按第一功率发射第一信号光;相应的,控制选通像素阵列的第4列中的第4行的像素(即像素44)。依次类推,控制装置控制光源阵列的第4列中的第4行的光源(即光源44)按第一功率发射第一信号光,相应的,控制选通像素阵列的第5列中的第4行的像素(即像素45)。需要说明的是,选通的像素所在列与选通的光源所在的列是错位的,具体的,选通的像素所在的列比选通的光源所在的列靠后一列。通过错位选通像素阵列的列,利用回波信号的光斑的边缘能量,可降低第一目标反射的回波信号串扰影响探测区域中的其它目标(如第二目标)反射的回波信号,可改善串扰现象,从而可以实现对探测系统的全视场范围内的有效探测。
步骤1703,控制装置控制光源阵列中除第(B_i~B_j-1)列中的第(A_i~A_j)行的光源外的光源按第二功率发射第二信号光,控制选通像素阵列中除第(B_i+1~B_j)列中的第(A_i~A_j)行的像素外的像素。
结合上述图3,控制装置控制光源阵列中除第3列中的第4行的光源外的光源按第二功率发射第二信号光;相应的,控制选通像素阵列中除第4列中的第4行的像素外的像素。
需要说明的是,上述步骤1702也可以是控制装置控制依次选通像素阵列的第(B_i+1~B_j)列的像素,控制光源阵列的第(B_i~B_j-1)列的光源按第一功率发射第一信号光。相应的,该步骤1703可以是控制装置控制选通像素阵列中除第(B_i+1~B_j)列的像素外的像素,控制光源阵列中除第(B_i~B_j-1)列的光源外的光源按第二功率发射第二信号光。
第D阶段,控制装置控制选通至第一像素区域的第二边缘区域的像素。
步骤1704,控制装置控制光源阵列的第B_j列的光源按第二功率发射第二信号光,停止选通像素阵列的第B_j+1列的像素。
通过控制第B_j列的光源按第二功率发射第二信号光,此时不选通像素,可以使得在选通第一像素区域之后的像素(如第B_j+1列的像素)时,选通的像素列和光源列不再错位,是对齐的。结合上述图3,控制光源阵列的第5列的光源按第二功率发射第二信号光,相应的,停止选通像素阵列中的任何像素。基于此,选通完第一像素区域中的像素后,控制光源阵列的第6列光源按第二功率发射第二信号光;相应的,控制选通像素阵列的第6列像素。
第E阶段可重复上述第A阶段的过程。结合上述图3,控制装置控制第6列光源按第二功率发射第二信号光,控制装置控制选通像素阵列中第6列像素,选通的第6列像素可接收第二回波信号,依次类推,直至扫描完光源阵列的最后一列。
通过上述步骤1701至步骤1704,降低第一光源区域中的光源的功率,可降低第一目标反射回的第一回波信号的能量。而且,第一像素区域是第一目标的空间位置对应的像素,即第一像素区域中不包括受第一回波信号串扰的像素,因此,可以获得精确且完整的探测区域的关联信息。
需要说明的是,上述图17所示的方法也可以是按行扫描的,可将上述图17中的“行”用“列”替换,将“列”用“行”替换,具体过程类似上述图17,此处不再赘述。
上述图17是以与上述图15所给出的扫描方式相同为例说明的。应理解,确定出第一光源区域及第一像素区域后,也可以按其它可能的方式获取探测区域的关联信息,本申请对此不作限定。需要说明的是,扫描方式涉及到探测系统的硬件层面如驱动的设计、数据读取电路设计、热评估、能量利用率及对探测系统的性能的影响等,其中,探测系统的性能包括但不限于探测的准确度、探测的距离等。
在一种可能的实现方式中,上述图15至图17所示的方法中,控制装置控制光源阵列中的某些光源按某一功率发射信号光具体可以是控制装置向光源阵列发送对应的控制信号。控制装置控制选通像素阵列中的某些像素具体可以是控制装置向像素阵列发送对应的控制信号。应理解,控制装置向光源阵列发送控制信号,和/或向像素阵列发送控制信号均是示例,本申请对具体如何控制不作限定。
应理解,上述图15至图17所示的方法中均是以错位一行或错位一列为例说明的,本申请对错位的行数和列数不作限定,例如,也可以错位两行或两列,甚至多于两行或两列。
基于上述内容和相同构思,图18和图19为本申请的提供的可能的控制装置的结构示意图。这些控制装置可以用于实现上述方法实施例中控制装置的功能,因此也能实现上述方法实施例所具备的有益效果。
如图18所示,该控制装置1800包括处理模块1801和收发模块1802。控制装置1800用于实现上述图11、图12、图13、图14、图15、图16或图17中所示的方法实施例中控制装置的功能。
当控制装置1800用于实现图11所示的方法实施例的控制装置的功能时:处理模块1801用于通过收发模块1802控制第一光源区域的光源按第一功率发射第一信号光,控制第二光源区域的光源按第二功率发射第二信号光,及控制第一像素区域的像素接收包括第一信号光经由第一目标反射后得到的第一回波信号,第一目标的空间位置与第一像素区域对应,第一光源区域对应第一像素区域,第二光源区域对应第二像素区域,第二功率大于第一功率。
有关上述处理模块1801和收发模块1802更详细的描述可以参考图11所示的方法实施例中相关描述直接得到,此处不再一一赘述。上述光源区域和像素区域的解释参见上文涉及的光源阵列和像素阵列,这里不再赘述。
应理解,本申请实施例中的处理模块1801可以由处理器或处理器相关电路模块实现,收发模块1802可以由接口电路或接口电路相关电路模块实现。
基于上述内容和相同构思,如图19所示,本申请还提供一种控制装置1900。该控制装置1900可包括至少一个处理器1901和接口电路1902。处理器1901和接口电路1902之间相互耦合。可以理解的是,接口电路1902可以为输入输出接口。可选地,控制装置1900还可包括存储器1903,用于存储处理器1901执行的指令或存储处理器1901运行指令所需要的输入数据或存储处理器1901运行指令后产生的数据。
当控制装置1900用于实现图11所示的方法时,处理器1901用于执行上述处理模块1801的功能,接口电路1902用于执行上述收发模块1802的功能。
基于上述内容和相同构思,图20为本申请的提供的可能的激光雷达的架构示意图。该激光雷达2000可包括发射模组2001、接收模组2002、以及用于执行上述任意方法实施例的控制装置2003。其中,发射模组2001用于按第一功率发射第一信号光,并按第二功率发射第二信号光;接收模组2002用于接收来自探测区域的第一回波信号,第一回波信号包括第一信号光经由第一目标反射的反射光;控制装置2003的功能可参见前述相关描述,此处不再赘述。发射模组2001可能的实现可参见前述发射模组的介绍,接收模组2002可能的实现可参见前述接收模组的介绍,此处不再赘述。
基于上述内容和相同构思,本申请提供一种终端设备。该终端设备可包括用于执行上述任意方法实施例的控制装置。进一步,可选的,该终端设备还可包括存储器,存储器用于存储程序或指令。当然,该终端设备还可以包括其他器件,例如无线控制装置等。其中,控制装置可参见上述控制装置的描述,此处不再赘述。
在一种可能的实现方式中,该终端设备还可包括上述发射模组2001和接收模组2002。也就是说,该终端设备可包括上述激光雷达2000。
示例性地,该终端设备例如可以是车辆(例如无人车、智能车、电动车、或数字汽车等)、机器人、测绘设备、无人机、智能家居设备(例如电视、扫地机器人、智能台灯、音响系统、智能照明系统、电器控制系统、家庭背景音乐、家庭影院系统、对讲系统、或视频监控等)、智能制造设备(例如工业设备)、智能运输设备(例如AGV、无人运输车、或货车等)、或智能终端(手机、计算机、平板电脑、掌上电脑、台式机、耳机、音响、穿戴设备、车载设备、虚拟现实设备、增强现实设备等)等。
可以理解的是,为了实现上述实施例中功能,控制装置包括了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本申请中所公开的实施例描述的各示例的模块及方法步骤,本申请能够以硬件或硬件和计算机软件相结合的形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用场景和设计约束条件。
本申请的实施例中的方法步骤可以通过硬件的方式来实现,也可以由处理器执行软件指令的方式来实现。软件指令可以由相应的软件模块组成,软件模块可以被存放于随机存取存储器(random access memory,RAM)、闪存、只读存储器(read-only memory,ROM)、可编程只读存储器(programmable ROM,PROM)、可擦除可编程只读存储器(erasable PROM,EPROM)、电可擦除可编程只读存储器(electrically EPROM,EEPROM)、寄存器、硬盘、移动硬盘、CD-ROM或者本领域熟知的任何其它形式的存储介质中。一种示例性的存储介质耦合至处理器,从而使处理器能够从该存储介质读取信息,且可向该存储介质写入信息。当然,存储介质也可以是处理器的组成部分。处理器和存储介质可以位于ASIC中。另外,该ASIC可以位于控制装置中。当然,处理器和存储介质也可以作为分立模块存在于控制装置中。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。计算机程序产品包括一个或多个计算机程序或指令。在计算机上加载和执行计算机程序或指令时,全部或部分地执行本申请实施例的流程或功能。计算机可以是通用计算机、专用计算机、计算机网络或者其它可编程装置。计算机程序或指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,计算机程序或指令可以从一个网站站点、计算机、服务器或数据中心通过有线或无线方式向另一个网站站点、计算机、服务器或数据中心进行传输。计算机可读存储介质可以是计算机能够存取的任何可用介质或者是集成一个或多个可用介质的服务器、数据中心等数据存储设备。可用介质可以是磁性介质,例如,软盘、硬盘、磁带;也可以是光介质,例如,数字视频光盘(digital video disc,DVD);还可以是半导体介质,例如,固态硬盘(solid state drive,SSD)。
在本申请的各个实施例中,如果没有特殊说明以及逻辑冲突,不同的实施例之间的术语和/或描述具有一致性、且可以相互引用,不同的实施例中的技术特征根据其内在的逻辑关系可以组合形成新的实施例。
本申请中,“均匀”不是指绝对的均匀,可以允许有一定工程上的误差。“垂直”不是指绝对的垂直,可以允许有一定工程上的误差。“至少一个”是指一个或者多个,“多个”是指两个或两个以上。“和/或”,描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B的情况,其中A,B可以是单数或者复数。在本申请的文字描述中,字符“/”,一般表示前后关联对象是一种“或”的关系。在本申请的公式中,字符“/”,表示前后关联对象是一种“相除”的关系。另外,在本申请中,“示例性地”一词用于表示作例子、例证或说明。本申请中被描述为“示例”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。或者可理解为,使用示例的一词旨在以具体方式呈现概念,并不对本申请构成限定。
可以理解的是,在本申请中涉及的各种数字编号仅为描述方便进行的区分,并不用来限制本申请的实施例的范围。上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定。术语“第一”、“第二”等类似表述,是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元。方法、系统、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。
尽管结合具体特征及其实施例对本申请进行了描述,显而易见的,在不脱离本申请的精神和范围的情况下,可对其进行各种修改和组合。相应地,本说明书和附图仅仅是对所附权利要求所界定的方案进行的示例性说明,且视为已覆盖本申请范围内的任意和所有修改、变化、组合或等同物。
显然,本领域的技术人员可以对本申请进行各种改动和变型而不脱离本发明的精神和范围。这样,倘若本申请实施例的这些修改和变型属于本申请权利要求及其等同技术的范围之内,则本申请也意图包含这些改动和变型在内。

Claims (18)

  1. 一种控制探测方法,其特征在于,所述方法包括:
    控制第一光源区域的光源按第一功率发射第一信号光,控制第二光源区域的光源按第二功率发射第二信号光,所述第二功率大于所述第一功率,所述第一光源区域对应第一像素区域,所述第二光源区域对应第二像素区域;
    控制所述第一像素区域的像素接收所述第一信号光经由第一目标反射后得到的第一回波信号,所述第一目标的空间位置与所述第一像素区域对应。
  2. 如权利要求1所述的方法,其特征在于,所述方法还包括:
    控制所述第二像素区域的像素接收包括所述第二信号光经由第二目标反射后得到的第二回波信号。
  3. 如权利要求1或2所述的方法,其特征在于,所述方法应用于探测系统,所述探测系统包括光源阵列和像素阵列,所述光源阵列包括m×n个光源,所述像素阵列包括m×n个像素,所述光源阵列的光源对应所述像素阵列的像素,所述m和n均为大于1的整数。
  4. 如权利要求2或3所述的方法,其特征在于,所述方法还包括:
    控制光源阵列的光源按第三功率发射第三信号光,所述光源阵列包括所述第一光源区域和所述第二光源区域;
    控制像素阵列的像素接收第三回波信号,所述第三回波信号包括所述第三信号光经由所述第一目标和/或所述第二目标反射的反射光,所述像素阵列包括所述第一像素区域和所述第二像素区域;
    其中,与所述第一像素区域中的像素对应的第三回波信号的强度大于或等于第一预设值,和/或,与所述第二像素区域中的像素对应的第三回波信号的强度小于所述第一预设值。
  5. 如权利要求2或3所述的方法,其特征在于,所述方法还包括:
    控制光源阵列的光源按第三功率发射第三信号光,所述光源阵列包括所述第一光源区域和所述第二光源区域;
    控制像素阵列的像素接收第三回波信号,所述第三回波信号包括所述第三信号光经由所述第一目标和/或所述第二目标反射的反射光,所述像素阵列包括所述第一像素区域和所述第二像素区域;
    其中,与所述第一像素区域中的像素对应的第三回波信号的强度和与所述第二像素区域中的像素对应的第三回波信号的强度的差值大于或等于第二预设值,且与所述第一像素区域中的像素对应的第一距离和与所述第二像素区域中的像素对应的第一距离相同。
  6. 如权利要求2或3所述的方法,其特征在于,所述方法还包括:
    控制光源阵列的光源按第三功率发射第三信号光,所述光源阵列包括所述第一光源区域和所述第二光源区域;
    基于接收到第三回波信号,确定第三像素区域,所述第三回波信号包括所述第三信号光经由所述第一目标和/或所述第二目标反射的反射光,对应于所述第三像素区域的第三回波信号的强度大于或等于第四预设值,所述第三像素区域包括所述第一像素区域、及被所述第一目标反射得到的所述第三回波信号串扰的像素;
    控制第三光源区域的光源按第四功率发射第四信号光、及控制第四光源区域的光源按第五功率发射第五信号光,所述第五功率大于所述第四功率,所述第三光源区域与所述第三像素区域对应;
    控制所述像素阵列接收第四回波信号和第五回波信号,所述第四回波信号包括所述第四信号光经所述第一目标反射的反射光,所述第五回波信号包括所述第五信号光经所述第二目标反射的反射光,所述像素阵列包括所述第一像素区域和所述第二像素区域;
    根据所述第四回波信号和所述第五回波信号,确定所述第一像素区域和所述第二像素区域。
  7. 如权利要求6所述的方法,其特征在于,所述第三像素区域包括所述像素阵列的第(a_i~a_j)行、第(b_i~b_j)列,所述a_i和所述b_i均为大于1的整数,所述a_j为大于a_i的整数,所述b_j为大于b_i的整数;
    所述控制所述第四光源区域的光源按第五功率发射第五信号光,包括:
    控制所述光源阵列的第b_i-1列的光源按所述第五功率发射所述第五信号光;
    所述控制所述像素阵列获取第四回波信号和第五回波信号,包括:
    控制选通所述像素阵列的第(b_i-1~b_i)列的像素;
    其中,所述第b_i-1列的光源的发射视场与所述第b_i-1列的像素的接收视场对应。
  8. 如权利要求7所述的方法,其特征在于,所述控制所述像素阵列获取第四回波信号及第五回波信号,还包括:
    控制选通所述像素阵列的第(b_i+1~b_j)列中的第(a_i~a_j)行的像素;
    所述控制第三光源区域的光源按第四功率发射第四信号光,包括:
    控制所述光源阵列的第(b_i~b_j-1)列中的第(a_i~a_j)行的光源按所述第四功率发射所述第四信号光。
  9. 如权利要求8所述的方法,其特征在于,所述控制所述像素阵列获取第四回波信号及第五回波信号,还包括:
    控制选通所述像素阵列中除所述第(b_i+1~b_j)列中的第(a_i~a_j)行的像素外的像素;
    所述控制所述第四光源区域的光源按第五功率发射第五信号光,还包括:
    控制所述光源阵列中除所述第(b_i~b_j-1)列中的第(a_i~a_j)行外的光源按所述第五功率发射所述第五信号光。
  10. 如权利要求9所述的方法,其特征在于,所述方法还包括:
    停止选通所述像素阵列的第b_j+1列的像素;
    控制所述光源阵列的第b_j列的光源按第六功率发射第六信号光。
  11. 如权利要求6所述的方法,其特征在于,所述第三像素区域包括所述像素阵列的第(a_i~a_j)行、第(b_i~b_j)列,所述a_i和所述b_i均为大于1的整数,所述a_j为大于a_i的整数,所述b_j为大于b_i的整数;
    所述控制所述第四光源区域的光源按第五功率发射第五信号光,包括:
    控制所述光源阵列的第a_i-1行光源按所述第五功率发射所述第五信号光;
    所述控制所述像素阵列获取第四回波信号及第五回波信号,包括:
    控制选通所述像素阵列的第(a_i-1~a_i)行的像素;
    所述第a_i-1行的光源的发射视场与所述第a_i-1行的像素的接收视场对应。
  12. 如权利要求11所述的方法,其特征在于,所述控制所述像素阵列获取第四回波信号及第五回波信号,还包括:
    控制选通所述像素阵列的第(a_i+1~a_j)行中的第(b_i~b_j)列的像素;
    所述控制第三光源区域的光源按第四功率发射第四信号光,包括:
    控制所述光源阵列的第(a_i~a_j-1)行中的第(b_i~b_j)列的光源按所述第四功率发射所述第四信号光。
  13. 如权利要求12所述的方法,其特征在于,所述控制所述像素阵列获取第四回波信号及第五回波信号,还包括:
    控制选通所述像素阵列中除第(a_i+1~a_j)行中的第(b_i~b_j)列的像素外的像素;
    所述控制所述第四光源区域的光源按第五功率发射第五信号光,还包括:
    控制所述光源阵列中除所述第(a_i~a_j-1)行中的第(b_i~b_j)列外的光源按所述第五功率发射所述第五信号光。
  14. 如权利要求11~13任一项所述的方法,其特征在于,所述方法还包括:
    停止选通所述像素阵列的第a_j+1行的像素;
    控制所述光源阵列的第a_j行的光源按第六功率发射第六信号光。
  15. 一种控制装置,其特征在于,包括至少一个处理器和接口电路,所述处理器用于执行如权利要求1~14中任一项所述的方法。
  16. 一种激光雷达,其特征在于,包括发射模组、接收模组、以及用于执行如权利要求1~14中任一项所述方法的控制装置;
    所述发射模组,用于发射所述第一信号光和所述第二信号光;
    所述接收模组,用于接收来自探测区域的第一回波信号,所述第一回波信号包括所述第一信号光经由所述第一目标反射的反射光。
  17. 一种终端设备,其特征在于,包括用于执行如权利要求1~14中任一项所述方法的控制装置。
  18. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中存储有计算机程序或指令,当所述计算机程序或指令被控制装置执行时,使得所述控制装置执行如权利要求1~14中任一项所述的方法。
PCT/CN2022/121114 2021-10-08 2022-09-23 一种控制探测方法、控制装置、激光雷达及终端设备 WO2023056848A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111169740.4 2021-10-08
CN202111169740.4A CN115963514A (zh) 2021-10-08 2021-10-08 一种控制探测方法、控制装置、激光雷达及终端设备

Publications (2)

Publication Number Publication Date
WO2023056848A1 WO2023056848A1 (zh) 2023-04-13
WO2023056848A9 true WO2023056848A9 (zh) 2023-08-31

Family

ID=85803902


Country Status (2)

Country Link
CN (1) CN115963514A (zh)
WO (1) WO2023056848A1 (zh)


Also Published As

Publication number Publication date
CN115963514A (zh) 2023-04-14
WO2023056848A1 (zh) 2023-04-13

Similar Documents

Publication Publication Date Title
US11353588B2 (en) Time-of-flight sensor with structured light illuminator
US20230014366A1 (en) Laser transceiver system, lidar, and autonomous driving apparatus
CN111175786B Wide-field-of-view high-resolution solid-state lidar with multi-channel crosstalk elimination
CN104991255A Multi-point laser ranging radar based on the visual principle
CN114200426A Light receiving module, light receiving method, lidar system, and vehicle
WO2023083198A1 Echo signal processing method and apparatus, device, and storage medium
WO2023056848A9 Control detection method, control apparatus, lidar and terminal device
CN110554398B Lidar and detection method
WO2023015563A1 Receiving optical system, lidar system, and terminal device
US11860317B1 Optical adjustment for image fusion LiDAR systems
WO2023015562A1 Lidar and terminal device
WO2024036582A1 Transmitting module, receiving module, detection apparatus, and terminal device
CN115201788A Method for operating a lidar system, lidar system, computer program, and machine-readable storage medium
EP4168986A1 Projector for diffuse illumination and structured light
WO2023056585A1 Detection system, terminal device, control detection method, and control apparatus
WO2024044905A1 Detection apparatus and terminal device
WO2023060374A1 Scanning system, detection system, and terminal device
WO2024044997A1 Optical receiving module, receiving system, detection apparatus, and terminal device
US11841516B2 Anamorphic receiver optical design for LIDAR line sensors
WO2023201596A1 Detection apparatus and terminal device
US20230236290A1 Lidar sensor for detecting an object and a method for a lidar sensor
WO2023155048A1 Detection apparatus, terminal device, and resolution adjustment method
WO2023066052A1 Detection method and apparatus
CN115512099B Laser point cloud data processing method and apparatus
WO2022252057A1 Detection method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22877867

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2022877867

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2022877867

Country of ref document: EP

Effective date: 20240417