WO2023061386A1 - Lidar, receiving system, transmitting system, and control method - Google Patents

Lidar, receiving system, transmitting system, and control method

Info

Publication number
WO2023061386A1
Authority
WO
WIPO (PCT)
Prior art keywords: array, receiving, detectors, receiving array, output
Application number: PCT/CN2022/124749
Other languages: English (en), Chinese (zh)
Inventor: 王超 (Wang Chao)
Original Assignee: 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2023061386A1

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/02: Systems using the reflection of electromagnetic waves other than radio waves
    • G01S 17/06: Systems determining position data of a target
    • G01S 17/08: Systems determining position data of a target for measuring distance only
    • G01S 17/88: Lidar systems specially adapted for specific applications
    • G01S 17/89: Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S 7/00: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S 7/48: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S 7/481: Constructional features, e.g. arrangements of optical elements

Definitions

  • The present application relates to the field of lidar, and in particular to a lidar, a receiving system, a transmitting system, and a control method.
  • Lidar is regarded as a key enabler of unmanned/intelligent driving owing to its high resolution, strong interference immunity, wide detection range, and near all-weather operation.
  • The biggest advantage of lidar is that it can build a three-dimensional map of the environment around the vehicle in real time while simultaneously measuring the distance, speed, acceleration, angular velocity, and other attributes of surrounding vehicles.
  • Lidar scanning methods fall into three schemes: mechanical rotary scanning, semi-solid-state scanning, and all-solid-state scanning. Mechanical rotary and semi-solid-state scanning struggle to meet the stability and reliability requirements of harsh environments, so all-solid-state technology is the preferred solution.
  • Short-range, medium-range, and long-range application scenarios place different requirements on the ranging capability of a lidar.
  • Multiple lidars therefore usually need to be deployed together in order to build a complete three-dimensional map of the environment around the vehicle.
  • Embodiments of the present application disclose a lidar, a receiving system, a transmitting system, and a control method that are compatible with different ranging capabilities.
  • In a first aspect, an embodiment of the present application provides a lidar, including: a transmitting array for generating a transmit beam, the transmitting array including a first transmitting array and a second transmitting array; a first receiving array for receiving a first echo beam, where the first echo beam corresponds to the beam emitted by the first transmitting array, K detectors in the first receiving array output in a first pixel-binning manner, the K detectors correspond to at least one laser in the first transmitting array, and K is an integer greater than 0; and a second receiving array for receiving a second echo beam, where the second echo beam corresponds to the beam emitted by the second transmitting array, F detectors in the second receiving array output in a second pixel-binning manner, the F detectors correspond to at least one laser in the second transmitting array, F is an integer greater than 0, and K is not equal to F.
  • The first echo beam corresponds to the at least one laser in the first transmitting array that corresponds to the K detectors.
  • The second echo beam corresponds to the at least one laser in the second transmitting array that corresponds to the F detectors.
  • The first receiving array may include two or more groups of detectors, and the K detectors in each group output in the first pixel-binning manner.
  • Outputting the K detectors in the first pixel-binning manner may mean that the K detectors adopt a pixel-binning output mode (pixel binning, hereinafter referred to as binning).
  • The second receiving array may likewise include two or more groups of detectors, and the F detectors in each group output in the second pixel-binning manner; that is, the F detectors also use binning.
  • The K detectors in the first receiving array thus output in the first pixel-binning manner, and the F detectors in the second receiving array output in the second pixel-binning manner.
  • Because K differs from F, the ranging capability achieved when the K detectors output in the first pixel-binning manner differs from the ranging capability achieved when the F detectors output in the second pixel-binning manner. In other words, the ranging capability of the first receiving array differs from that of the second receiving array, so the lidar provided by the embodiments of the present application is compatible with different ranging capabilities, i.e., it has two or more ranging capabilities.
  • the first field of view corresponding to the first receiving array is different from the second field of view corresponding to the second receiving array.
  • the first field of view corresponding to the first receiving array is different from the second field of view corresponding to the second receiving array, and the lidar can be compatible with different ranging capabilities and field of view angles.
  • the first receiving array and the second receiving array share one or more detectors, or the first receiving array and the second receiving array share no detectors.
  • the first receiving array and the second receiving array may be located in or belong to the same array.
  • When the first receiving array and the second receiving array share one or more detectors, the detectors are multiplexed and detector utilization is high; this is suitable for scenarios where the field of view corresponding to the first receiving array overlaps the field of view corresponding to the second receiving array.
  • When the first receiving array and the second receiving array do not share any detector, the circuit is simple; this is applicable to scenarios where the field of view corresponding to the first receiving array does not overlap the field of view corresponding to the second receiving array.
  • In a possible implementation, the detectors included in the second receiving array surround the detectors included in the first receiving array, or are located around the detectors included in the first receiving array, where K is less than F.
  • The ranging capability achieved when the K detectors output in the first pixel-binning manner is better than the ranging capability achieved when the F detectors output in the second pixel-binning manner.
  • That is, the ranging capability of the first receiving array is better than that of the second receiving array. Since the detectors of the second receiving array surround, or are located around, the detectors of the first receiving array, the second field of view corresponding to the second receiving array is larger than the first field of view corresponding to the first receiving array.
  • The field of view of the first receiving array is therefore smaller than that of the second receiving array, while its ranging capability is better. In this implementation, the first receiving array realizes long-range detection over a small field of view, and the second receiving array realizes short-range detection over a large field of view.
  • Alternatively, the detectors included in the first receiving array surround the detectors included in the second receiving array, or are located around the detectors included in the second receiving array, where K is less than F.
  • In this case, the first receiving array can realize long-range detection over a large field of view, and the second receiving array can realize short-range detection over a small field of view.
  • The lidar further includes a driving circuit configured to drive the first transmitting array and the second transmitting array with different current magnitudes, respectively.
  • Driving the first and second transmitting arrays with different current magnitudes enables different ranging capabilities over different fields of view; a configuration sketch is given below.
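  • Purely as an illustration (the driver interface, names, and current values below are hypothetical and not part of the document), configuring two emitter regions with different drive currents might look like this:

```python
from dataclasses import dataclass

@dataclass
class EmitRegionConfig:
    name: str
    drive_current_ma: float   # peak drive current for the lasers in this region
    fov_deg: float            # field of view this region is projected into

# Hypothetical values: higher current for the long-range, narrow-FOV region.
first_transmit_array = EmitRegionConfig("long_range_center", drive_current_ma=40.0, fov_deg=25.0)
second_transmit_array = EmitRegionConfig("short_range_periphery", drive_current_ma=15.0, fov_deg=120.0)

def apply(config: EmitRegionConfig) -> None:
    # Stand-in for writing the current setpoint to the real driver hardware.
    print(f"driving {config.name} at {config.drive_current_ma} mA over {config.fov_deg} deg FOV")

for cfg in (first_transmit_array, second_transmit_array):
    apply(cfg)
```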
  • The first receiving array is configured to output a first electrical signal according to the first echo beam, and the second receiving array is configured to output a second electrical signal according to the second echo beam.
  • The lidar further includes a processing module configured to generate a first point cloud according to the first electrical signal and the second electrical signal.
  • As above, the K detectors in the first receiving array output in the first pixel-binning manner, the F detectors in the second receiving array output in the second pixel-binning manner, and the corresponding ranging capabilities differ.
  • Because the processing module generates the first point cloud from both the first and second electrical signals, the resulting point cloud is compatible with different ranging capabilities.
  • Alternatively, the first receiving array is configured to output a first electrical signal according to the first echo beam, and the second receiving array is configured to output a second electrical signal according to the second echo beam.
  • The lidar also includes a processing module configured to generate a second point cloud from the first electrical signal and a third point cloud from the second electrical signal, and further configured to generate a fourth point cloud from the second point cloud and the third point cloud.
  • In other words, the processing module fuses the second point cloud and the third point cloud into the fourth point cloud, obtaining a better point cloud than either alone; a fusion sketch is given below.
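  • As an illustrative sketch only (the fusion rule, array shapes, and point values below are assumptions, not taken from the document), fusing two point clouds of different resolution and field of view could be as simple as concatenating them and tagging each point with its source region:

```python
import numpy as np

def fuse_point_clouds(second_pc: np.ndarray, third_pc: np.ndarray) -> np.ndarray:
    """Fuse two (N, 3) xyz point clouds into one combined point cloud.

    Hypothetical rule: keep all points and append a column identifying the
    source array (0 = long-range array, 1 = short-range array), so downstream
    consumers can weight the two sources differently.
    """
    tagged_second = np.hstack([second_pc, np.zeros((len(second_pc), 1))])
    tagged_third = np.hstack([third_pc, np.ones((len(third_pc), 1))])
    return np.vstack([tagged_second, tagged_third])

# Tiny example with made-up points.
second = np.array([[10.0, 0.1, 0.2], [12.0, -0.3, 0.1]])                 # sparse, far targets
third = np.array([[2.0, 1.5, 0.0], [1.8, -1.2, 0.1], [2.2, 0.0, 0.3]])   # dense, near targets
fourth = fuse_point_clouds(second, third)
print(fourth.shape)  # (5, 4)
```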
  • The lidar further includes a processing module configured to acquire area adaptation information, where the area adaptation information is used to instruct the K detectors in the first receiving array to output in the first pixel-binning manner and the F detectors in the second receiving array to output in the second pixel-binning manner.
  • With the area adaptation information, the processing module can accurately determine the output mode of the detectors in the first receiving array and in the second receiving array.
  • The area adaptation information is further used to indicate the first transmitting array corresponding to the first receiving array and/or the second transmitting array corresponding to the second receiving array.
  • The first transmitting array and/or the second transmitting array can thus be flexibly configured to meet different application requirements; one illustrative structure for the area adaptation information is sketched below.
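  • The document does not define a concrete encoding for the area adaptation information; the following Python sketch is only one hypothetical way to represent it, recording the binning factor and the transmit region paired with each receive region:

```python
from dataclasses import dataclass

@dataclass
class AreaAdaptation:
    receive_region: str    # which receiving array this entry configures
    binning_factor: int    # K or F: number of detectors merged into one output pixel
    transmit_region: str   # transmitting array paired with this receiving array

# Hypothetical configuration matching the two-region example in the text.
area_adaptation_info = [
    AreaAdaptation("first_receiving_array", binning_factor=4, transmit_region="first_transmitting_array"),
    AreaAdaptation("second_receiving_array", binning_factor=16, transmit_region="second_transmitting_array"),
]

for entry in area_adaptation_info:
    print(f"{entry.receive_region}: bin {entry.binning_factor} cells, paired with {entry.transmit_region}")
```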
  • The detectors in the first receiving array correspond to the arrangement positions of the lasers in the first transmitting array, and/or the detectors in the second receiving array correspond to the arrangement positions of the lasers in the second transmitting array.
  • the first receiving array and the second receiving array are located on the same conjugate imaging plane.
  • The first receiving array is specifically configured to receive the first echo beam from a receiving optical assembly, and the second receiving array is specifically configured to receive the second echo beam from the same receiving optical assembly.
  • the first receiving array and the second receiving array share the receiving optical components, which can realize the multiplexing of the receiving optical components.
  • The first electrical signal is used to generate a point cloud with a first resolution, the second electrical signal is used to generate a point cloud with a second resolution, the first resolution differs from the second resolution, and the first point cloud includes both the point cloud of the first resolution and the point cloud of the second resolution.
  • the first electrical signal output by the first receiving array is used to generate a point cloud with a first resolution (corresponding to the output of K detectors in a first pixel binning manner).
  • the second electrical signal output by the second receiving array is used to generate a point cloud with a second resolution (corresponding to the output of the F detectors in a second pixel binning manner).
  • the first point cloud includes a point cloud with a first resolution and a point cloud with a second resolution (that is, includes point clouds with different resolutions), and can be compatible with different ranging capabilities.
  • The resolution of the second point cloud differs from that of the third point cloud, and the field of view corresponding to the second point cloud differs from the field of view corresponding to the third point cloud.
  • The fourth point cloud is generated from the second and third point clouds of different resolutions, combining their advantages to obtain a better point cloud.
  • An embodiment of the present application further provides a receiving system applied to a lidar, including: a first receiving array for receiving a first echo beam, where K detectors in the first receiving array output in a first pixel-binning manner and K is an integer greater than 0; and a second receiving array for receiving a second echo beam, where F detectors in the second receiving array output in a second pixel-binning manner, F is an integer greater than 0, and K is not equal to F.
  • the receiving system provided by the embodiment of the present application can be compatible with different ranging capabilities. That is to say, the receiving system provided by the embodiment of the present application has two or more ranging capabilities.
  • the first field of view corresponding to the first receiving array is different from the second field of view corresponding to the second receiving array.
  • The ranging capability in the first field of view corresponding to the first receiving array differs from the ranging capability in the second field of view corresponding to the second receiving array.
  • the first field of view corresponding to the first receiving array is different from the second field of view corresponding to the second receiving array, and the lidar can be compatible with different ranging capabilities and field of view angles.
  • the first receiving array and the second receiving array share one or more detectors, or the first receiving array and the second receiving array share no detectors.
  • When the first receiving array and the second receiving array share one or more detectors, the detectors are multiplexed and detector utilization is high; this is suitable for scenarios where the field of view corresponding to the first receiving array overlaps the field of view corresponding to the second receiving array.
  • When the first receiving array and the second receiving array do not share any detector, the circuit is simple; this is applicable to scenarios where the field of view corresponding to the first receiving array does not overlap the field of view corresponding to the second receiving array.
  • In a possible implementation, the detectors included in the second receiving array surround the detectors included in the first receiving array, or are located around the detectors included in the first receiving array, where K is less than F.
  • In this case, the first receiving array can realize long-range detection over a small field of view, and the second receiving array can realize short-range detection over a large field of view.
  • Alternatively, the detectors included in the first receiving array surround the detectors included in the second receiving array, or are located around the detectors included in the second receiving array, where K is less than F.
  • In this case, the first receiving array can realize long-range detection over a large field of view, and the second receiving array can realize short-range detection over a small field of view.
  • The first receiving array is configured to output a first electrical signal according to the first echo beam, and the second receiving array is configured to output a second electrical signal according to the second echo beam; the first electrical signal and the second electrical signal are used to generate the same point cloud or different point clouds.
  • The first electrical signal and the second electrical signal may be used to generate point clouds of different resolutions.
  • When the first and second electrical signals are used to generate the same point cloud, different parts of that point cloud have different resolutions.
  • When they are used to generate different point clouds, a better point cloud can be obtained by merging the point cloud generated from the first electrical signal with the point cloud generated from the second electrical signal.
  • The receiving system further includes a processing module configured to acquire area adaptation information, where the area adaptation information is used to instruct the K detectors in the first receiving array to output in the first pixel-binning manner and the F detectors in the second receiving array to output in the second pixel-binning manner.
  • With the area adaptation information, the processing module can accurately determine the output mode of the detectors in the first receiving array and in the second receiving array.
  • The first receiving array is specifically configured to receive the first echo beam from a receiving optical assembly, and the second receiving array is specifically configured to receive the second echo beam from the same receiving optical assembly.
  • the first receiving array and the second receiving array share the receiving optical components, which can realize the multiplexing of the receiving optical components.
  • The first electrical signal is used to generate a point cloud with a first resolution, the second electrical signal is used to generate a point cloud with a second resolution, and together they are used to generate a first point cloud that includes both the point cloud of the first resolution and the point cloud of the second resolution.
  • Because the first point cloud includes point clouds of both resolutions, it can cover the detection scene more completely.
  • Alternatively, the first electrical signal is used to generate a second point cloud with a first resolution, and the second electrical signal is used to generate a third point cloud with a second resolution; the first resolution differs from the second resolution, and the field of view corresponding to the second point cloud differs from the field of view corresponding to the third point cloud.
  • the first electrical signal and the second electrical signal are used to generate point clouds with different resolutions, so as to combine the advantages of the two point clouds with different resolutions to obtain a better point cloud.
  • An embodiment of the present application further provides a transmitting system applied to a lidar, including: a transmitting array for generating a transmit beam, the transmitting array including a first transmitting array and a second transmitting array; and a driving circuit for driving the first transmitting array and the second transmitting array with different current magnitudes, respectively.
  • the driving circuit respectively drives the first emitting array and the second emitting array through different current magnitudes, so as to be able to emit optical signals with different intensities.
  • the first emitting array and the second emitting array share one or more lasers, or the first emitting array and the second emitting array do not share a laser.
  • When the first transmitting array and the second transmitting array share one or more lasers, the lasers are multiplexed and laser utilization is high.
  • When the first transmitting array and the second transmitting array do not share any laser, the circuit structure is simple.
  • In a possible implementation, the lasers included in the second transmitting array surround the lasers included in the first transmitting array, or are located around the lasers included in the first transmitting array.
  • the viewing angle corresponding to the first emitting array is included in the viewing angle corresponding to the second emitting array.
  • K lasers in the first transmitting array output in a first pixel-binning manner, and F lasers in the second transmitting array output in a second pixel-binning manner, where K is an integer greater than 0, F is an integer greater than 0, and K is not equal to F.
  • The ranging capability achieved when the K lasers output in the first pixel-binning manner differs from the ranging capability achieved when the F lasers output in the second pixel-binning manner. The transmitting system provided by the embodiments of the present application is therefore compatible with different ranging capabilities, i.e., it has two or more ranging capabilities.
  • The transmitting system further includes a processing module configured to acquire area adaptation information, where the area adaptation information is used to indicate the first transmitting array and/or the second transmitting array.
  • the first emitting array and/or the second emitting array can be flexibly configured to meet different application requirements.
  • An embodiment of the present application further provides a lidar control method, applied to the lidar, the method including: receiving a first echo beam through a first receiving array to obtain a first electrical signal, where the first echo beam corresponds to the beam emitted by a first transmitting array, K detectors in the first receiving array output in a first pixel-binning manner, the K detectors correspond to at least one laser in the first transmitting array, and K is an integer greater than 0; receiving a second echo beam through a second receiving array to obtain a second electrical signal, where the second echo beam corresponds to the beam emitted by a second transmitting array, F detectors in the second receiving array output in a second pixel-binning manner, the F detectors correspond to at least one laser in the second transmitting array, F is an integer greater than 0, and K is not equal to F; and generating a target point cloud according to the first electrical signal and the second electrical signal.
  • the corresponding ranging capabilities output by the K detectors in the first pixel combining manner are different from the corresponding ranging capabilities output by the F detectors in the second pixel combining manner. That is to say, the ranging capability corresponding to the first receiving array is different from the ranging capability corresponding to the second receiving array.
  • Because the target point cloud is generated from both the first and second electrical signals, the method is compatible with different ranging capabilities.
  • The method further includes driving the first transmitting array and the second transmitting array with different current magnitudes, so that the first transmitting array emits a first transmit beam and the second transmitting array emits a second transmit beam; the first transmit beam corresponds to the first echo beam, and the second transmit beam corresponds to the second echo beam.
  • The driving circuit drives the first and second transmitting arrays with different current magnitudes so that they can emit optical signals of different intensities; the overall detection cycle is sketched below.
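  • The following Python sketch is only a schematic of one detection cycle following the control method above; the driver and readout functions are hypothetical stand-ins, not an interface defined by the document:

```python
import numpy as np

def capture_frame(drive, read_first_region, read_second_region, build_point_cloud):
    """One detection cycle: fire both transmit regions, read both receive regions, fuse.

    `drive(current_ma, region)` fires a transmitting array, the two `read_*` callables
    return the first/second electrical signals, and `build_point_cloud` produces the
    target point cloud from both signals.
    """
    drive(current_ma=40.0, region="first_transmitting_array")    # long-range region
    drive(current_ma=15.0, region="second_transmitting_array")   # short-range region
    first_signal = read_first_region()    # K-binned output of the first receiving array
    second_signal = read_second_region()  # F-binned output of the second receiving array
    return build_point_cloud(first_signal, second_signal)        # target point cloud

# Trivial stand-ins so the sketch runs on its own.
frame = capture_frame(
    drive=lambda current_ma, region: None,
    read_first_region=lambda: np.zeros((8, 8)),
    read_second_region=lambda: np.zeros((4, 4)),
    build_point_cloud=lambda a, b: {"first": a, "second": b},
)
print(sorted(frame.keys()))  # ['first', 'second']
```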
  • the first receiving array and the second receiving array share one or more detectors, or the first receiving array and the second receiving array share no detectors.
  • When the first receiving array and the second receiving array share one or more detectors, the detectors are multiplexed and detector utilization is high.
  • When the first receiving array and the second receiving array do not share any detector, the circuit structure is simple.
  • In a possible implementation, the detectors included in the second receiving array surround the detectors included in the first receiving array, or are located around the detectors included in the first receiving array, where K is less than F.
  • In this case, the first receiving array can realize long-range detection over a small field of view, and the second receiving array can realize short-range detection over a large field of view.
  • Alternatively, the detectors included in the first receiving array surround the detectors included in the second receiving array, or are located around the detectors included in the second receiving array, where K is less than F.
  • In this case, the first receiving array can realize long-range detection over a large field of view, and the second receiving array can realize short-range detection over a small field of view.
  • The method further includes acquiring area adaptation information, where the area adaptation information is used to indicate that the K detectors in the first receiving array output in the first pixel-binning manner and the F detectors in the second receiving array output in the second pixel-binning manner.
  • the area adaptation information is acquired to accurately determine the output mode of the detectors in the first receiving array and the output mode of the second receiving array.
  • The area adaptation information is further used to indicate the first transmitting array corresponding to the first receiving array and/or the second transmitting array corresponding to the second receiving array.
  • the first emitting array and/or the second emitting array can be flexibly configured to meet different application requirements.
  • an embodiment of the present application provides a terminal device, where the terminal device includes the lidar described in the above first aspect and any possible implementation manner of the above first aspect.
  • the terminal device may be a vehicle (such as a smart car), a surveying and mapping device, an aircraft (such as a drone), a ship, etc. deployed with a lidar.
  • FIG. 1 is a schematic diagram of a focal plane imaging optical system provided by the present application
  • FIG. 2 is a schematic diagram of an example of an equivalent pixel area of a receiving end provided by an embodiment of the present application
  • FIG. 3 is a schematic diagram of a focal length of a transmitting end and a field of view area of the transmitting end provided by an embodiment of the present application;
  • FIGS. 4A to 4K are schematic diagrams of examples of the first receiving array and the second receiving array provided by the embodiment of the present application.
  • FIG. 5 is a schematic diagram of an example of a first receiving array and a second receiving array provided by an embodiment of the present application
  • FIG. 6 is a flow chart of a control method of a laser radar according to an embodiment of the present application.
  • an embodiment means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application.
  • the occurrences of this phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is understood explicitly and implicitly by those skilled in the art that the embodiments described herein can be combined with other embodiments.
  • At least one (item) means one or more
  • “multiple” means two or more
  • "at least two (items)" means two or more (items)
  • "and/or" describes an association between associated objects and indicates that three relationships are possible; for example, "A and/or B" can mean: only A exists, only B exists, or both A and B exist, where A and B can be singular or plural.
  • the character “/” generally indicates that the contextual objects are an "or” relationship.
  • “At least one of the following” or similar expressions refer to any combination of these items. For example, at least one item (piece) of a, b or c can mean: a, b, c, "a and b", “a and c", “b and c", or "a and b and c ".
  • The present application provides a lidar compatible with different ranging capabilities.
  • The lidar provided by the present application has two or more ranging capabilities.
  • The present application also provides a lidar compatible with different ranging capabilities and different fields of view.
  • The lidar provided in this application has two or more fields of view, and different fields of view have different ranging capabilities.
  • some knowledge related to lidar is introduced below.
  • Binning is an image readout mode that adds the charges induced by adjacent pixels together and reads out in a pixel mode. Binning is divided into horizontal binning and vertical binning. Binning in the horizontal direction is to add the charges of adjacent rows together for readout, while binning in the vertical direction is to add the charges of adjacent columns together for readout.
  • The advantage of binning is that several pixels can be combined into one, improving sensitivity and output speed at the cost of resolution. When binning is applied to both rows and columns, the aspect ratio of the image does not change; for example, with 2:2 binning the image resolution is reduced by 75%. A generic binning example is sketched below.
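  • The following Python sketch is illustrative only (it is not taken from the document): it performs 2:2 binning of an image array, reducing the pixel count by 75% while preserving the aspect ratio.

```python
import numpy as np

def bin_2x2(image: np.ndarray) -> np.ndarray:
    """Sum each 2x2 block of pixels into one output pixel (2:2 binning)."""
    h, w = image.shape
    assert h % 2 == 0 and w % 2 == 0, "this illustration assumes even dimensions"
    return image.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

image = np.arange(64, dtype=np.float64).reshape(8, 8)
binned = bin_2x2(image)
print(image.shape, "->", binned.shape)  # (8, 8) -> (4, 4): 75% fewer pixels, same aspect ratio
```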
  • Time-to-digital converter (TDC)
  • A time-to-digital converter is an instrument that records the time of an event and converts it into a digital value. It is widely used in laser pulse statistics, particle collision timing, quantum optics, quantum key distribution, light detection, lidar ranging, and other scientific fields. A sketch of how such a timestamp maps to distance is given below.
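  • For illustration only (the numbers below are hypothetical and not from the document), a TDC timestamp of an echo is typically converted to distance with the round-trip time-of-flight relation d = c·t/2:

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def tof_distance_m(echo_time_s: float, emit_time_s: float = 0.0) -> float:
    """Distance from a round-trip time-of-flight measurement: d = c * (t_echo - t_emit) / 2."""
    return SPEED_OF_LIGHT_M_PER_S * (echo_time_s - emit_time_s) / 2.0

# A TDC reading of a 200 ns round trip corresponds to a target roughly 30 m away.
print(f"{tof_distance_m(200e-9):.2f} m")  # ~29.98 m
```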
  • a frame of point cloud data (that is, laser point cloud) can be composed of multiple two-dimensional point cloud data obtained by sequential scanning along the scanning direction of the laser radar within one scanning cycle.
  • the lighting sequence and time of the array light source are determined by the injection sequence of the driving current and the working area.
  • Lidar (light detection and ranging, LiDAR)
  • Lidar is a radar system that emits laser beams to detect the position, speed, and other characteristics of a target; it is also known as laser radar or LADAR. Its working principle is to transmit a detection signal (a laser beam) toward the target and then compare the received signal (the target echo) reflected from the target with the transmitted signal. After suitable processing, information about the target can be obtained, such as distance, azimuth, height, speed, attitude, and even shape.
  • Lidar can include a transmitter and a receiver. The transmitting end collimates and shapes the outgoing light of the transmitting array through the transmitting optical system, and then projects it to the detection field of view.
  • the receiving end receives the signal reflected back from the outgoing light of the transmitting array through the receiving optical system and the receiving array.
  • the transmitting array is deployed on the transmitting chip, and the receiving array is deployed on the receiving chip.
  • lidar can be replaced by other detection devices that measure the position, speed and other information of the target object by transmitting signals to the target object and receiving signals reflected from the target object.
  • the basic optical architecture of the laser radar transceiver optical system can be designed based on the focal plane imaging array.
  • the transceiver optical system adopted by lidar can include a transmitting optical system and a receiving optical system.
  • LiDAR can include a transmit array and a receive array.
  • the emitting optical system collimates and shapes the outgoing light of the emitting array, and then projects it to the detection field of view.
  • the design of the receiving optical system can adopt a rotationally symmetric imaging optical design.
  • The specifications of the receiving chip (i.e., the chip including the receiving array), the pixel size, and the maximum field of view to be realized by the receiving optical system determine the main design parameters of the receiving optical system.
  • the transmitting array (or the light source at the transmitting end) realizes the electrical scanning light emission (that is, the transmitted signal) in a certain way under the addressing logic control unit and driving control.
  • the receiving array uses the same strategy for receiving echo energy (corresponding to the transmitted signal). Because the divergence angle of the outgoing light of the transmitting array is small after collimation and shaping, the optical system for transmitting and receiving realizes a point-to-point projection design on the optical energy link.
  • the transmitting end of the lidar can use a vertical cavity surface emitting laser (vertical cavity surface emitting laser, VCSEL) array, and the receiving end can use a single photon avalanche diode (single photon avalanche diode, SPAD) array.
  • the VCSEL array can be replaced by a laser diode (laser diode, LD), tunable laser, etc.
  • the SPAD array can be replaced by an avalanche photodiode (avalanche photodiode, APD), a silicon photomultiplier (silicon photomultiplier, SiPM), etc.
  • The pixel sizes of the transmit array (e.g., a VCSEL array) and the receive array (e.g., a SPAD array) can be the same or can differ by a fixed proportion.
  • one pixel in the transmit array corresponds to two or more pixels in the receive array.
  • two or more pixels in the transmitting array correspond to one pixel in the receiving array.
  • a pixel in a receiving array may refer to a detector on the receiving array.
  • a pixel in a transmit array may refer to a laser on the transmit array.
  • a pixel in the receiving array can be a receiving cell (such as a SPAD cell), or it can be obtained by binning multiple receiving cells, that is, binning multiple receiving cells as one pixel. It can be understood that a detector on the receiving array can be a receiving cell, and can also be obtained by binning multiple receiving cells.
  • the receiving cell refers to a receiving unit that is the smallest unit in the receiving array.
  • each pixel in the SPAD array is obtained by multiple SPAD cells (ie, a receiving cell).
  • a pixel in the emission array can be an emission cell, and can also be obtained by binning multiple emission cells, that is, multiple emission cell binning is regarded as one pixel.
  • the emission cell refers to the emission unit which is the smallest unit in the emission array.
  • Each pixel in the VCSEL array is obtained from multiple VCSEL cells (i.e., emission cells). It can be understood that one laser on the transmitting array can be one emission cell, or can be obtained by binning multiple emission cells; the transmit/receive pixel correspondence is sketched below.
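  • Purely as an illustration (the 1-to-2×2 mapping below is hypothetical, not specified in the document), the correspondence between one transmit pixel and the block of receive pixels binned as its equivalent pixel can be represented as a simple lookup table:

```python
from typing import Dict, List, Tuple

# Hypothetical mapping: one VCSEL (transmit) pixel -> the 2x2 block of SPAD (receive)
# pixels that are binned together as its equivalent pixel.
RX_PER_TX = 2  # each transmit pixel maps to a 2x2 block of receive pixels in this sketch

tx_to_rx_equivalent_pixel: Dict[Tuple[int, int], List[Tuple[int, int]]] = {}
for tx_row in range(4):
    for tx_col in range(4):
        tx_to_rx_equivalent_pixel[(tx_row, tx_col)] = [
            (tx_row * RX_PER_TX + dr, tx_col * RX_PER_TX + dc)
            for dr in range(RX_PER_TX)
            for dc in range(RX_PER_TX)
        ]

print(tx_to_rx_equivalent_pixel[(0, 0)])  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```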
  • FIG. 1 is a schematic diagram of a focal plane imaging optical system provided by the present application.
  • the focal plane imaging optical system in Figure 1 is a possible transceiver optical system for lidar.
  • 101 represents a transmitting array, such as a VCSEL array
  • 102 represents a receiving array, such as a SPAD array
  • one or more pixels in the transmitting array correspond to one or more pixels in the receiving array.
  • When one pixel in the transmitting array (one equivalent pixel) corresponds to a plurality of pixels in the receiving array (together forming one equivalent pixel), the number of equivalent pixels in the transmitting array is equal to the number of equivalent pixels in the receiving array.
  • An equivalent pixel refers to the smallest corresponding unit between a transmitting pixel (i.e., a pixel in the transmitting array) and a receiving pixel (i.e., a pixel in the receiving array).
  • For example, when one VCSEL pixel in the VCSEL array corresponds to one SPAD pixel in the SPAD array, each pixel of the SPAD array is an equivalent pixel, and each pixel of the VCSEL array is an equivalent pixel.
  • When 1 VCSEL pixel corresponds to 4 SPAD pixels, the 4 SPAD pixels corresponding to the same VCSEL pixel form one equivalent pixel of the SPAD array, and each pixel of the VCSEL array is an equivalent pixel.
  • When 4 VCSEL pixels correspond to 1 SPAD pixel, each pixel of the SPAD array is an equivalent pixel, and the 4 VCSEL pixels corresponding to the same SPAD pixel form one equivalent pixel of the VCSEL array.
  • the pixels in the transmitting array and receiving array of the lidar in this application can adopt the equal-area design of the transmitting and receiving pixels.
  • one VCSEL pixel corresponds to several SPAD pixels (that is, one VCSEL pixel is one VCSEL equivalent pixel, and several SPAD pixels are one SPAD equivalent pixel).
  • Several SPAD pixels in the SPAD array can work in binning mode in order to improve ranging performance.
  • Some possible scenario requirements include: close-range detection with a large field of view (FOV), and long-distance detection with a small field of view.
  • FOV: field of view
  • The strategy for short-range detection with a large FOV is for the transmitting end (or receiving end) to adopt an adapted short-focus lens.
  • In this case, the equivalent pixel area at the receiving end increases (the number of SPADs in the active area that receive energy also increases), but the total energy at the receiving end does not change, so the energy averaged over a single SPAD (i.e., a SPAD cell) decreases, and with it the ranging capability.
  • the equivalent pixel area at the receiving end refers to the sum of the areas of the SPADs in the receiving array that receive energy (ie echo signals).
  • For long-range detection with a small FOV, the transmitting end (or receiving end) adopts an adapted telephoto lens; the equivalent pixel area at the receiving end is then reduced, and the number of SPADs (SPAD cells) in the active area receiving energy also decreases. As the energy averaged over each SPAD increases, the ranging capability of the lidar increases.
  • A reasonable transmit/receive equivalent pixel can be adapted according to the energy-link simulation of the transceiving optical system, i.e., how many pixels in the receiving array are binned to form one pixel that corresponds spatially to H pixels in the transmitting array, where H is an integer greater than 0.
  • For example, the transmitting end of the lidar adopts a VCSEL array and the receiving end adopts a SPAD array.
  • Adapting a reasonable transmit/receive equivalent pixel for the receiving chip (i.e., the chip including the SPAD array) then means configuring how many SPAD-binned pixels spatially correspond to H pixels in the VCSEL array.
  • A reasonable transmit/receive equivalent pixel for the receiving chip can be estimated from the farthest required ranging capability and the system requirements of the transceiving optical system, which in turn yields the parameter requirements for the transmitting array and the receiving array; a toy energy-budget sketch is given below.
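  • The following Python sketch is only a toy model (the energy value and the equal-sharing assumption are hypothetical): it illustrates that, for a fixed echo energy falling on one equivalent pixel, the energy averaged over each SPAD cell scales inversely with the number of cells binned into that equivalent pixel.

```python
def energy_per_spad(echo_energy_pj: float, cells_per_equivalent_pixel: int) -> float:
    """Toy model: echo energy on one equivalent pixel shared equally among its SPAD cells."""
    return echo_energy_pj / cells_per_equivalent_pixel

ECHO_ENERGY_PJ = 1.0  # hypothetical echo energy collected by one equivalent pixel
for n_cells in (4, 16, 36, 64):  # e.g. 2x2, 4x4, 6x6, 8x8 equivalent pixel areas
    print(f"{n_cells:>2} cells binned -> {energy_per_spad(ECHO_ENERGY_PJ, n_cells):.3f} pJ per SPAD cell")
```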
  • the receiving array takes the equivalent pixel as an output unit.
  • FIG. 2 is a schematic diagram of an example of an equivalent pixel area of a receiving end provided by an embodiment of the present application.
  • the rectangular frame of each smallest unit represents an equivalent pixel (such as a SPAD)
  • 201 represents the first equivalent pixel area (corresponding to 2*2 SPADs)
  • 202 represents the second equivalent pixel area (corresponding to For 4*4 SPADs)
  • 203 represents the third equivalent pixel area (corresponding to 6*6 SPADs)
  • 204 represents the fourth equivalent pixel area (corresponding to 8*8 SPADs).
  • the first equivalent pixel area is the equivalent pixel area of the receiving end when the lidar adopts the first lens.
  • the second equivalent pixel area is the equivalent pixel area of the receiving end when the lidar adopts the second lens.
  • the third equivalent pixel area is the equivalent pixel area of the receiving end when the lidar adopts the third lens.
  • the fourth equivalent pixel area is the equivalent pixel area of the receiving end when the lidar adopts the fourth lens.
  • the first equivalent pixel area is smaller than the second equivalent pixel area
  • the second equivalent pixel area is smaller than the third equivalent pixel area
  • the third equivalent pixel area is smaller than the fourth equivalent pixel area.
  • the first field of view angle when the lidar is adapted to the first lens is greater than the second field of view when the lidar is adapted to the second lens.
  • the second field of view is larger than the third field of view when the lidar is adapted to the third lens.
  • the third field of view is larger than the fourth field of view when the lidar is adapted to the fourth lens.
  • If the receiving array has a sufficient array scale and a reasonable design size and cooperates with a suitable receiving optical system, a theoretically large FOV can be realized.
  • When the focal length of the transmitting end changes, the field of view projected by the transmitting end (and correspondingly the equivalent pixel area at the receiving end) changes: a telephoto lens gives a small FOV, and a short-focus lens gives a large FOV.
  • The following describes how the equivalent pixel in the receiving array corresponding to a pixel in the transmitting array changes when the focal length of the transmitting end changes.
  • FIG. 3 is a schematic diagram of a focal length of a transmitter and a field of view of the transmitter provided by an embodiment of the present application.
  • 301 represents VCSEL array
  • A, B, C, D represent an equivalent pixel in VCSEL array respectively
  • 302 represents the field-of-view grid of the SPAD array; within this grid, the four cells labeled 1, 2, 3, and 4 represent the equivalent pixels in the SPAD array corresponding to the four equivalent pixels A, B, C, and D when the transmitting end adopts the telephoto system (the cell labeled 1 corresponds to equivalent pixel A, the cell labeled 2 to B, the cell labeled 3 to C, and the cell labeled 4 to D).
  • When the transmitting end adopts the short-focus system, the upper-left 2×2 cells of the grid represented by 302 represent the equivalent pixel in the SPAD array corresponding to equivalent pixel A.
  • In other words, with the telephoto system one cell of the grid represented by 302 is one equivalent pixel in the SPAD array (corresponding to one equivalent pixel in the VCSEL array), whereas with the short-focus system 2×2 cells of the grid together form one equivalent pixel in the SPAD array (corresponding to one equivalent pixel in the VCSEL array).
  • one grid in the field of view grid can represent one equivalent pixel of the SPAD array.
  • An equivalent pixel in the SPAD array can be obtained by binning a plurality of SPAD pixels (may be referred to simply as pixels), that is, binning a plurality of SPAD pixels serves as an equivalent pixel.
  • Multiple SPAD cells in the SPAD array can be binned as one pixel, and multiple pixels can be binned as one equivalent pixel. Since each pixel in the SPAD array contains one or several SPAD cells (several SPAD cells binned as one pixel), an equivalent pixel in the SPAD array can be obtained by binning multiple SPAD cells, i.e., multiple SPAD cells binned together form one equivalent pixel.
  • each grid in the field of view grid contains one or more pixels in the SPAD array, and each pixel contains one or several SPAD cells.
  • For example, each grid cell can contain 4 SPAD cells (corresponding to one or more pixels), and the smallest unit grid in Figure 3 represents one SPAD cell.
  • the angle of view 1 represents the angle of view when the transmitter uses a short-focus system (corresponding to position 1)
  • the angle of view 2 represents the angle of view when the transmitter adopts a telephoto system (corresponding to position 2).
  • When the transmitting end switches from the short-focus system to the telephoto system, the equivalent pixel area corresponding to the four equivalent pixels A, B, C, and D in the SPAD array shrinks from the area occupied by 16 grid cells to the area occupied by 4 grid cells, which meets the needs of long-range detection.
  • It should be understood that if an even longer-focus emission system is adopted, H VCSEL pixels (i.e., one VCSEL equivalent pixel) will on average map to fewer SPAD cells in the SPAD array, the energy apportioned to each SPAD cell will be higher, and the detection distance may be farther. However, this trend does not hold indefinitely.
  • If H VCSEL pixels correspond to only one SPAD cell, the transceiving optical system is easily blinded by ambient light and performance may degrade. How many SPAD cells should be merged into the equivalent pixel corresponding to H VCSEL pixels is therefore best determined from the system link simulation design.
  • the maximum detection FOV of lidar can be determined by the receiving optical system.
  • Figure 3 shows the fields of view corresponding to a telephoto system (long focal length) and a short-focus system (short focal length) at the transmitting end when the field of view at the receiving end remains unchanged (i.e., the focal length of the receiving optical lens remains unchanged).
  • the focal length of the optical lens at the transmitting end of the lidar affects its field of view.
  • the focal length of the optical lens at the receiving end of the lidar also affects its field of view.
  • In general, the larger the field of view of the lidar (i.e., the shorter the focal length of the transmitting end), the larger the equivalent pixel area at the receiving end and the lower the ranging capability.
  • Conversely, the smaller the field of view of the lidar (i.e., the longer the focal length of the transmitting end), the smaller the equivalent pixel area at the receiving end and the stronger the ranging capability. A small numerical sketch of the focal-length/FOV relation is given below.
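  • For illustration only (the chip width below is a hypothetical value, not taken from the document), the full field of view projected through a lens of focal length f by an emitter chip of width w follows the usual relation FOV = 2·arctan(w / (2f)), so a longer focal length gives a smaller FOV:

```python
import math

def full_fov_deg(chip_width_mm: float, focal_length_mm: float) -> float:
    """Full field of view of a chip of width w behind a lens of focal length f."""
    return math.degrees(2.0 * math.atan(chip_width_mm / (2.0 * focal_length_mm)))

CHIP_WIDTH_MM = 10.0  # hypothetical emitter-array width
for f_mm in (5.0, 10.0, 20.0):  # short focus -> telephoto
    print(f"f = {f_mm:>4.1f} mm -> FOV = {full_fov_deg(CHIP_WIDTH_MM, f_mm):.1f} deg")
```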
  • FOV within a certain range can be realized based on a set of transmitting chips and receiving chips, but the field of view may not be compatible with all scene requirements.
  • the lidar can be adapted to different focal lengths of the transmitter optical lens and/or the receiver optical lens to meet the needs of different scenarios. That is to say, one strategy that lidar may adopt to meet the needs of different scenarios is to adapt the transmitter optical lens and/or the receiver optical lens with different focal lengths for the lidar.
  • the optical lens at the transmitting end and/or the optical lens at the receiving end of the lidar is pluggable. That is to say, the user can replace the lidar-adapted transmitter optical lens and/or the receiver optical lens as needed.
  • The lidar includes two or more receiving arrays with different ranging capabilities, or the lidar includes two or more transmitting arrays with different ranging capabilities.
  • For example, the lidar includes a first receiving array and a second receiving array; the K detectors in the first receiving array output in the first pixel-binning manner, the F detectors in the second receiving array output in the second pixel-binning manner, the ranging capability of the first receiving array differs from that of the second receiving array, and K and F are different integers.
  • Alternatively, the lidar includes a third transmitting array and a fourth transmitting array; K lasers in the third transmitting array output in a first pixel-binning manner, F lasers in the fourth transmitting array output in a second pixel-binning manner, the ranging capability of the third transmitting array differs from that of the fourth transmitting array, and K and F are different integers.
  • the lidar includes two or more receiving arrays with different ranging capabilities, and the fields of view corresponding to any two receiving arrays do not completely overlap .
  • the lidar includes two or more transmitting arrays with different ranging capabilities, and the fields of view corresponding to any two transmitting arrays do not completely overlap.
  • For example, the lidar includes a first receiving array and a second receiving array; K detectors in the first receiving array (corresponding to one equivalent pixel of the first receiving array) output in the first pixel-binning manner, F detectors in the second receiving array (corresponding to one equivalent pixel of the second receiving array) output in the second pixel-binning manner, the ranging capabilities of the two receiving arrays differ, and the first field of view corresponding to the first receiving array does not completely overlap the second field of view corresponding to the second receiving array.
  • Alternatively, the lidar includes a third transmitting array and a fourth transmitting array; K lasers in the third transmitting array output in a first pixel-binning manner, F lasers in the fourth transmitting array output in a second pixel-binning manner, the ranging capability of the third transmitting array differs from that of the fourth transmitting array, and the third field of view corresponding to the third transmitting array does not completely overlap the fourth field of view corresponding to the fourth transmitting array.
  • K and F are different integers. Possible implementations of a lidar compatible with different ranging capabilities and fields of view are described in detail later; a schematic two-region receiving configuration is sketched below.
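  • The following Python sketch is purely illustrative (the chip size, region geometry, binning factors, and names are all assumptions): it partitions one detector chip into an inner region read out with a small binning factor and an outer region read out with a larger binning factor, mirroring the two-receiving-array arrangement described above.

```python
import numpy as np

ARRAY_H, ARRAY_W = 64, 64   # hypothetical SPAD chip size (cells)
INNER = slice(24, 40)       # hypothetical central window: the first receiving array

# Binning factor per cell: K cells per pixel in the centre, F cells per pixel outside.
K_FACTOR, F_FACTOR = 4, 16
binning_map = np.full((ARRAY_H, ARRAY_W), F_FACTOR, dtype=int)   # second receiving array
binning_map[INNER, INNER] = K_FACTOR                              # first receiving array

inner_cells = int((binning_map == K_FACTOR).sum())
outer_cells = int((binning_map == F_FACTOR).sum())
print(f"inner region: {inner_cells} cells at {K_FACTOR}-cell binning (long range, small FOV)")
print(f"outer region: {outer_cells} cells at {F_FACTOR}-cell binning (short range, large FOV)")
```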
  • A lidar provided in an embodiment of the present application includes: a transmitting array for generating a transmit beam, the transmitting array including a first transmitting array and a second transmitting array; a first receiving array for receiving a first echo beam, where the first echo beam corresponds to the beam emitted by the first transmitting array, K detectors in the first receiving array (corresponding to one equivalent pixel) output in a first pixel-binning manner, the K detectors correspond to at least one laser in the first transmitting array (corresponding to one equivalent pixel), and K is an integer greater than 0; and a second receiving array for receiving a second echo beam, where the second echo beam corresponds to the beam emitted by the second transmitting array, F detectors in the second receiving array (corresponding to one equivalent pixel) output in a second pixel-binning manner, the F detectors correspond to at least one laser in the second transmitting array, F is an integer greater than 0, and K is not equal to F.
  • the first receiving array may include two or more groups of detectors, and the K detectors included in each group are output in the manner of the first pixel combination.
  • the second receiving array may include two or more groups of detectors, and the F detectors included in each group are output in a manner of combining the second pixels.
  • the corresponding ranging capabilities output by the K detectors in the first pixel combining manner are different from the corresponding ranging capabilities output by the F detectors in the second pixel combining manner.
  • Each group of detectors in the first receiving array can be regarded as one equivalent pixel of the first receiving array, and each group of detectors in the second receiving array can be regarded as one equivalent pixel of the second receiving array.
  • the equivalent pixels in the first receiving array and the equivalent pixels in the second receiving array are of different sizes. That is to say, the ranging capability corresponding to the first receiving array is different from the ranging capability corresponding to the second receiving array.
  • the lidar provided in the embodiment of the present application is compatible with different ranging capabilities.
  • the first receiving array and the first transmitting array are a set of transceiving arrays
  • the second receiving array and the second transmitting array are another set of transmitting and receiving arrays.
  • These two sets of transmitting and receiving arrays have different ranging capabilities. That is to say, lidar includes at least two sets of transceiver arrays with different ranging capabilities.
  • the lidar provided in this application may include two or more groups of transceiver arrays with different ranging capabilities. There is no essential difference between the lidar including two sets of transceiver arrays with different ranging capabilities and including more sets of transceiver arrays with different ranging capabilities.
  • the following mainly takes two sets of transceiver arrays with different ranging capabilities as an example to introduce the lidar provided by this application that is compatible with different ranging capabilities.
  • the first field of view corresponding to the first receiving array is different from the second field of view corresponding to the second receiving array, that is, they do not overlap completely.
  • the viewing angle of the first viewing field is smaller or larger than the viewing angle of the second viewing field.
  • the ranging capability of lidar in the first field of view is different from that in the second field of view.
  • the first field of view corresponding to the first receiving array is different from the second field of view corresponding to the second receiving array, and the lidar can be compatible with different ranging capabilities and field of view angles.
  • the first receiving array and the second receiving array share one or more detectors, or the first receiving array and the second receiving array share no detectors.
  • the first receiving array and the second receiving array may be located in or belong to the same array, that is, the first receiving array and the second receiving array are different parts of the same receiving array (ie, the same receiving chip).
  • FIG. 4A to 4K are schematic diagrams of examples of the first receiving array and the second receiving array provided by the embodiments of the present application.
  • the shaded area represents the second receiving array
  • the white area represents the first receiving array.
  • the hatched area indicates the second receiving array
  • the white area indicates the first receiving array
  • the vertical line area indicates the third receiving array.
  • the G detectors in the third receiving array output in a third pixel binning manner, where G is an integer greater than 0 and is neither equal to K nor equal to F. It should be understood that FIG. 4A to FIG. 4K are only some examples provided by the embodiments of the present application, rather than all examples.
  • the first receiving array and the second receiving array may share one or more detectors, or may not share detectors.
  • the first receiving array and the second receiving array can independently receive echo energy in their own ways (that is, use different circuits to receive the echo energy), or the same circuit can be used to receive the echo energy for the first receiving array and the second receiving array in their respective ways.
  • the first transmitting array realizes electrical scanning light emission in a certain way, and the first receiving array adopts a corresponding way to receive the echo energy (corresponding to the outgoing light of the first transmitting array).
  • the second emitting array realizes electrical scanning light emission in a certain way, and the second receiving array adopts a corresponding method to receive the echo energy (corresponding to the second emitting array outgoing light).
  • the first receiving array and the second receiving array can independently receive echo energy in their own ways.
  • the first emission array realizes the electrical scanning light emission in a certain manner and the second emission array realizes the electrical scanning light emission in a certain manner, which can be regarded as being completed within the same time period.
  • the first receiving array and the second receiving array may respectively use the one or more detectors at different times.
  • the first transmitting array realizes electrical scanning light emission in a certain way, and the first receiving array adopts a corresponding method to receive the echo energy (corresponding to the outgoing light of the first transmitting array).
  • the second emitting array realizes electrical scanning light emission in a certain way, and the second receiving array adopts a corresponding method to receive the echo energy (corresponding to the outgoing light of the second emitting array).
  • under the control of the addressing logic control unit and the driver, the second transmitting array first realizes electrical scanning light emission in a certain way, and the second receiving array receives the corresponding echo energy (corresponding to the outgoing light of the second transmitting array). Then, under the control of the addressing logic control unit and the driver, the first emitting array realizes electrical scanning light emission in a certain way, and the first receiving array receives the corresponding echo energy (corresponding to the outgoing light of the first emitting array). It should be understood that the first receiving array and the second receiving array can time-division multiplex one or more shared detectors.
  • the first emission array realizes the electrical scanning light emission according to a certain method and the second emission array realizes the electrical scanning light emission according to a certain method, which can be considered to be completed within the same time period.
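The time-division multiplexing of shared detectors described above can be pictured with the minimal sketch below. The two-slot split of one scanning period and the placeholder function names stand in for the addressing-logic and driver control calls, which the patent does not specify; they are assumptions used only for illustration.

```python
# Minimal sketch of the time-division multiplexing described above: within one
# scanning period, the second transmit/receive group is driven first and the
# first group afterwards, so shared detectors are used by only one group at a
# time. The two-slot split is an assumption for illustration.

def scan_slot(drive_array, read_array):
    # placeholder standing in for addressing-logic / driver control calls
    return f"drove {drive_array}, read echo on {read_array}"

def one_scanning_period():
    log = []
    # slot 1: second transmitting array emits, second receiving array reads
    log.append(scan_slot("second_transmitting_array", "second_receiving_array"))
    # slot 2: first transmitting array emits, first receiving array reads
    log.append(scan_slot("first_transmitting_array", "first_receiving_array"))
    return log

print(one_scanning_period())
```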
  • the first receiving array and the second receiving array share one or more detectors, which can realize the multiplexing of detectors with a high utilization rate of the detectors, and is suitable for scenarios where the field of view corresponding to the first receiving array and the field of view corresponding to the second receiving array overlap.
  • the first receiving array and the second receiving array do not share a detector, the circuit is simple, and is applicable to a scene where the field of view corresponding to the first receiving array and the field of view corresponding to the second receiving array do not overlap.
  • the detectors included in the second receiving array surround the detectors included in the first receiving array, or the detectors included in the second receiving array are located around the detectors included in the first receiving array, wherein the K is less than the F.
  • the first receiving array can be responsible for a small field of view range
  • the second receiving array can be responsible for a large field of view range.
  • the surrounding refers to the part surrounding the center.
  • the detectors included in the second receiving array are located around the detectors included in the first receiving array can be understood as: the detectors included in the second receiving array are located where the detectors included in the first receiving array are located Around the closed area, the area where the detectors included in the second receiving array are located is a closed area or a non-closed area.
  • the detectors included in the second receiving array surrounding the detectors included in the first receiving array can also be understood as: the detectors included in the second receiving array are located close to the detectors included in the first receiving array Around the area, the area where the detectors included in the second receiving array are located is a closed area or a non-closed area.
  • a closed area refers to a bounded, closed region; a non-closed area refers to a region that is not closed.
  • FIG. 4B and FIG. 4C are examples in which the detectors included in the second receiving array surround the detectors included in the first receiving array. In other words, FIG. 4B and FIG. 4C are examples in which the detectors included in the second receiving array are located around the detectors included in the first receiving array. In FIG. 4B and FIG. 4C, the area where the detectors included in the second receiving array are located is a closed area.
  • FIG. 4G and FIG. 4K are also examples in which the detectors included in the second receiving array surround the detectors included in the first receiving array.
  • FIG. 4G and FIG. 4K are also examples in which the detectors included in the second receiving array are located around the detectors included in the first receiving array.
  • in FIG. 4G and FIG. 4K, the area where the detectors included in the second receiving array are located is a non-closed area.
  • FIG. 5 is a schematic diagram of an example of a first receiving array and a second receiving array provided by an embodiment of the present application.
  • each smallest-unit grid represents one or more SPAD pixels, and each SPAD pixel includes several (corresponding to K) SPAD cells; each SPAD pixel can adopt a 3D-stacked structure and a back-side illumination (BSI) process, with its signal output directly connected to a TDC (time-to-digital converter).
  • the oblique-line area in the center is the area corresponding to the first receiving array, and the equivalent pixels in the first receiving array (each corresponding to one grid, for example) are responsible for the small FOV range, while the white area (i.e., the area outside the oblique-line area in FIG. 5) corresponds to the second receiving array and is responsible for the large FOV range.
  • each grid in the shaded area represents an equivalent pixel (corresponding to K detectors) in the first receiving array.
  • each equivalent pixel in the first receiving array is a SPAD pixel (corresponding to one grid), and each SPAD pixel adopts a (2×2) SPAD-cell binning structure.
  • adjacent (2×2) grids in the white area represent one equivalent pixel (corresponding to F detectors) in the second receiving array.
  • for example, the 4 grids occupied by four 1s (or four 2s, 3s, 4s, or x's) represent one equivalent pixel in the second receiving array.
  • each equivalent pixel in the first receiving array corresponds to one grid in FIG. 5, and one equivalent pixel in the second receiving array corresponds to (2×2) grids in FIG. 5.
  • each equivalent pixel in the first receiving array can be regarded as adopting a (2×2) SPAD-cell binning architecture, and each equivalent pixel in the second receiving array can be regarded as adopting a (2×2) binning of the equivalent pixels of the first receiving array, that is, the second binning architecture.
  • each SPAD cell contained in each equivalent pixel in the oblique-line area is connected to a TDC, while the white area uses an N×N second binning architecture (2×2 in FIG. 5) to generate equivalent pixels with a lower resolution.
  • each equivalent pixel in the first receiving array adopts the (2×2) SPAD-cell binning structure, that is, in the first receiving array, 4 (corresponding to K) detectors output in the first pixel binning manner; each SPAD pixel in the second receiving array adopts the (2×2) second binning structure, that is, 16 (corresponding to F) detectors in the second receiving array output in the second pixel binning manner.
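The following is a minimal illustrative sketch, not taken from the patent, of how such a two-zone binning layout can be expressed in software. The 16×16 cell array, the 8×8 central block, and all function and variable names are assumptions chosen only to mirror the K = 4 / F = 16 example above.

```python
# Illustrative sketch: mapping SPAD cells to equivalent pixels for two regions
# with different binning, assuming a 16 x 16 cell array whose central 8 x 8
# block uses 2 x 2 binning (K = 4) and whose surrounding cells use 4 x 4
# binning (F = 16).
from collections import defaultdict

def equivalent_pixel_id(row, col, center=(4, 12)):
    """Return a (region, pixel_row, pixel_col) tuple for a SPAD cell."""
    lo, hi = center
    if lo <= row < hi and lo <= col < hi:
        # first receiving array: 2 x 2 cell binning
        return ("first", (row - lo) // 2, (col - lo) // 2)
    # second receiving array: 4 x 4 cell binning
    return ("second", row // 4, col // 4)

# Group all cells of the 16 x 16 array by equivalent pixel.
pixels = defaultdict(list)
for r in range(16):
    for c in range(16):
        pixels[equivalent_pixel_id(r, c)].append((r, c))

# Every equivalent pixel of the first array merges 4 cells, every one of the
# second array merges 16 cells.
assert all(len(cells) == 4 for key, cells in pixels.items() if key[0] == "first")
assert all(len(cells) == 16 for key, cells in pixels.items() if key[0] == "second")
```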
  • the SPAD cells between equivalent pixels can share a set of conflict detection logic circuits (for example, the SPAD cells with different shading but the same number in FIG. 5).
  • a single-photon event detected by any SPAD cell with the same number can be used as one of the four output point clouds of the equivalent pixel, and the results of the four different numbers are finally merged and sorted to obtain a (2×2) array of point clouds.
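A hedged sketch of this numbered-cell grouping is given below. The 4×4 cell size, the repeating 2×2 label pattern, and the merge-and-sort reduction are assumptions used only to illustrate how same-numbered cells can feed a shared output.

```python
# Hedged sketch of the numbered-cell grouping described above, assuming a
# 4 x 4 SPAD-cell equivalent pixel whose cells carry labels 1..4 in a
# repeating 2 x 2 pattern. Events from all cells sharing a label are merged,
# so the equivalent pixel still yields a 2 x 2 grid of outputs.

def cell_label(row, col):
    # label depends only on position parity within the 2 x 2 repeating pattern
    return 1 + (row % 2) * 2 + (col % 2)

def merge_events(events):
    """events: iterable of (row, col, timestamp) single-photon detections
    inside one 4 x 4 equivalent pixel. Returns, per label, the sorted
    timestamps contributed by any cell carrying that label."""
    merged = {1: [], 2: [], 3: [], 4: []}
    for row, col, ts in events:
        merged[cell_label(row, col)].append(ts)
    return {label: sorted(ts_list) for label, ts_list in merged.items()}

# Example: three detections, two of which land on cells with the same label.
print(merge_events([(0, 0, 12.5), (2, 2, 13.0), (1, 3, 12.7)]))
# cells (0, 0) and (2, 2) both carry label 1, so their events are merged.
```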
  • the pixels in the first emitting array corresponding to the first receiving array need to emit a sufficiently high peak optical power.
  • the VCSEL pixels of the first emission array can be controlled to have a larger injection current and output a higher peak power.
  • the second transmitting array corresponding to the second receiving array can use a small injection current.
  • the slope efficiency of a typical multi-junction VCSEL is 2-6 W/A; limited by the device structure and operating principle, it is difficult in theory to achieve a much higher slope efficiency.
  • the peak power of the equivalent pixel pair between the transmitting and receiving ends directly determines the detection capability of the lidar. Therefore, according to the long-distance detection requirement of the small central FOV and the short-distance detection requirement of the surrounding large FOV, the required peak power of the light source at the transmitting end can be calculated, as sketched below.
  • for the VCSEL chip at the transmitting end, the device only needs to support the lidar's modulation range of the VCSEL peak emission, which is relatively easy to achieve.
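The patent only states that the peak-power demand can be derived from the per-zone range requirement. The scaling used in the sketch below (required peak power roughly proportional to range squared, with all other link-budget terms folded into one reference operating point) and the 50 m / 10 W reference values are assumptions, not figures from the patent.

```python
# Hedged sketch: scale a reference operating point to each FOV zone's range
# requirement using P ~ R^2 (a common simplification, assumed here).

def required_peak_power_w(range_m, reference_range_m=50.0, reference_power_w=10.0):
    """Scale an assumed reference point (50 m at 10 W peak) to another range."""
    return reference_power_w * (range_m / reference_range_m) ** 2

center_zone_power = required_peak_power_w(200.0)   # long-range, small central FOV
outer_zone_power = required_peak_power_w(50.0)     # short-range, large surrounding FOV
print(center_zone_power, outer_zone_power)         # 160.0 W vs 10.0 W

# With a slope efficiency of roughly 2-6 W/A, the drive-current demand of each
# zone follows directly: I_peak ~ P_peak / slope_efficiency.
```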
  • the actual FOV partition of lidar does not necessarily only have two types: the center (small FOV) and the surrounding (large FOV).
  • custom adaptation of the FOV area can be achieved by controlling the data reading and addressing logic of the SPAD array.
  • it only needs to control the peak injection currents of different regions of the light source at the emitting end (corresponding to the emitting array) to be different.
  • the following introduces some possible ways to implement custom adaptation of the FOV area.
  • the lidar further includes: a processing module, configured to acquire area adaptation information, where the area adaptation information is used to indicate that the K detectors in the first receiving array output in the first pixel binning manner and that the F detectors in the second receiving array output in the second pixel binning manner.
  • the processing module acquires the area adaptation information, it can configure the K detectors in the first receiving array to output in the first pixel combination according to the area adaptation information, and configure the second receiving array according to the area adaptation information The F detectors in the array output in the manner of the second pixel binning.
  • the processing module configures the data reading mode and addressing logic of the SPAD array according to the area adaptation information, so as to realize the output of the K detectors in the first receiving array in the first pixel binning mode and the second receiving The F detectors in the array output in the manner of the second pixel binning.
  • the area adaptation information may include information indicating the first receiving array, information indicating the second receiving array, information indicating that the K detectors in the first receiving array output in the first pixel binning manner, and information indicating that the F detectors in the second receiving array output in the second pixel binning manner.
  • the detectors included in the first receiving array form a rectangular area
  • the detectors included in the second receiving array also form a rectangular area
  • the area adaptation information includes a first coordinate, a second coordinate, K, a third coordinate, a fourth coordinate, and F, where the first coordinate is the coordinate of the detector in the upper left corner of the first receiving array, the second coordinate is the coordinate of the detector in the lower right corner of the first receiving array, the third coordinate is the coordinate of the detector in the upper left corner of the second receiving array, and the fourth coordinate is the coordinate of the detector in the lower right corner of the second receiving array.
  • the first coordinates and the second coordinates are information indicating the first receiving array, and the processing module may determine the first receiving array according to the first coordinates and the second coordinates.
  • the third coordinate and the fourth coordinate are information indicating the second receiving array, and the processing module may determine the second receiving array according to the third coordinate and the fourth coordinate.
  • K is the information indicating that the K detectors in the first receiving array output in the first pixel binning manner, and F is the information indicating that the F detectors in the second receiving array output in the second pixel binning manner.
  • the area adaptation information may also include other information that can indicate that the K detectors in the first receiving array output in the first pixel binning manner and that the F detectors in the second receiving array output in the second pixel binning manner, which is not limited in this application.
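As an illustration of one possible encoding of such area adaptation information, the sketch below packs the four corner coordinates together with K and F and resolves, for any detector coordinate, which receiving array it belongs to and how many detectors its equivalent pixel merges. The field names, the corner-coordinate convention, and the overlap rule (the first array wins where the rectangles overlap) are assumptions, not the patent's required format.

```python
# Illustrative encoding of area adaptation information; names and conventions
# are assumptions for the sketch only.
from dataclasses import dataclass

@dataclass
class AreaAdaptationInfo:
    first_top_left: tuple      # first coordinate  (row, col)
    first_bottom_right: tuple  # second coordinate (row, col)
    k: int                     # binning size of the first receiving array
    second_top_left: tuple     # third coordinate
    second_bottom_right: tuple # fourth coordinate
    f: int                     # binning size of the second receiving array

def binning_for_detector(info, row, col):
    """Return which receiving array a detector belongs to and how many
    detectors are merged into its equivalent pixel."""
    (r0, c0), (r1, c1) = info.first_top_left, info.first_bottom_right
    if r0 <= row <= r1 and c0 <= col <= c1:
        return "first", info.k
    (r0, c0), (r1, c1) = info.second_top_left, info.second_bottom_right
    if r0 <= row <= r1 and c0 <= col <= c1:
        return "second", info.f
    return None, 0

info = AreaAdaptationInfo((4, 4), (11, 11), 4, (0, 0), (15, 15), 16)
print(binning_for_detector(info, 5, 5))   # ('first', 4)
print(binning_for_detector(info, 0, 0))   # ('second', 16)
```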
  • in other words, the output mode of the detectors in the first receiving array is not fixed to the first pixel binning of K detectors, and the output mode of the detectors in the second receiving array is not fixed to the second pixel binning of F detectors; both can be configured through the area adaptation information.
  • the output mode of the detectors in the first receiving array and the output mode of the detectors in the second receiving array can be conveniently configured by the user or developer through the area adaptation information.
  • the processing module is configured to obtain area adaptation information; it can accurately determine the output mode of the detectors in the first receiving array and the output mode of the detectors in the second receiving array.
  • the area adaptation information is further used to indicate the first transmitting array corresponding to the first receiving array and/or the second transmitting array corresponding to the second receiving array.
  • the area adaptation information may also include information indicative of the first transmit array and/or information indicative of the second transmit array.
  • the area adaptation information also includes a fifth coordinate, a sixth coordinate, a seventh coordinate, and an eighth coordinate, where the fifth coordinate is the coordinate of the laser in the upper left corner of the first transmitting array, the sixth coordinate is the coordinate of the laser in the lower right corner of the first transmitting array, the seventh coordinate is the coordinate of the laser in the upper left corner of the second transmitting array, and the eighth coordinate is the coordinate of the laser in the lower right corner of the second transmitting array.
  • the fifth coordinate and the sixth coordinate are information indicating the first emitting array, and the processing module may determine the first emitting array according to the fifth coordinate and the sixth coordinate.
  • the seventh coordinate and the eighth coordinate are information indicating the second emitting array, and the processing module may determine the second emitting array according to the seventh coordinate and the eighth coordinate.
  • the area adaptation information is further used to indicate the first transmitting array corresponding to the first receiving array and/or the second transmitting array corresponding to the second receiving array.
  • the first emitting array and/or the second emitting array can be flexibly configured to meet different application requirements.
  • the detectors included in the first receiving array surround the detectors included in the second receiving array, or the detectors included in the first receiving array are located around the detectors included in the second receiving array, wherein the K is less than the F.
  • FIG. 4D and FIG. 4E are examples in which the detectors included in the first receiving array surround the detectors included in the second receiving array.
  • in other words, FIG. 4D and FIG. 4E are examples in which the detectors included in the first receiving array are located around the detectors included in the second receiving array.
  • the first receiving array can realize long-distance detection with a large field of view
  • the second receiving array can realize short-distance detection with a small field of view.
  • the lidar further includes: a driving circuit, configured to respectively drive the first emitting array and the second emitting array through different current magnitudes.
  • users or developers can configure the magnitude of the current driving the first emission array and the magnitude of the current driving the second emission array according to actual needs.
  • for example, if the field of view corresponding to the first receiving array needs to detect a longer distance than the field of view corresponding to the second receiving array, the driving circuit drives the first emitting array with a current greater than the current driving the second emitting array.
  • the first transmitting array and the second transmitting array are respectively driven by different current magnitudes, so as to realize different ranging capabilities for different fields of view.
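A minimal configuration sketch of this per-zone drive follows. The 6 A and 2 A peak currents, the simple L-I laser model, and its parameters are placeholder assumptions meant only to show a larger drive current being assigned to the longer-range zone.

```python
# Minimal sketch of per-zone drive configuration, assuming the driver exposes a
# peak-current setting per emitting-array region; values are placeholders.

drive_config = {
    "first_emitting_array": {"peak_current_a": 6.0},   # long-range zone
    "second_emitting_array": {"peak_current_a": 2.0},  # short-range zone
}

def peak_power_estimate(peak_current_a, slope_efficiency_w_per_a=4.0,
                        threshold_current_a=0.5):
    # simple laser L-I model: P ~ eta * (I - I_th), an assumed approximation
    return max(0.0, slope_efficiency_w_per_a * (peak_current_a - threshold_current_a))

for name, cfg in drive_config.items():
    print(name, peak_power_estimate(cfg["peak_current_a"]), "W peak (approx.)")
```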
  • the K detectors in the first receiving array outputting in the first pixel binning manner may mean that K adjacent detectors in the first receiving array output in the first pixel binning manner, where K is an integer greater than 0.
  • the F detectors in the second receiving array outputting in the second pixel binning manner may mean that F adjacent detectors in the second receiving array output in the second pixel binning manner, where F is an integer greater than 0.
  • the plurality of adjacent detectors in the first receiving array output in the first pixel binning manner, so that several detectors can be combined and used as one detector, thereby improving sensitivity and output speed at the cost of a reduced resolution.
  • lidar provided by this application is compatible with different ranging capabilities.
  • a major use of lidar is to output a point cloud that reflects the detected scene.
  • the following describes a possible way for the laser radar provided by the embodiment of the present application to generate a point cloud.
  • the first receiving array is configured to output a first electrical signal according to the first echo beam;
  • the second receiving array is configured to output a second electrical signal according to the second echo beam;
  • the lidar further includes: a processing module configured to generate a first point cloud according to the first electrical signal and the second electrical signal.
  • the first electrical signal and the second electrical signal are electrical signals output by the lidar in the same scanning period, and the first point cloud is a frame of point cloud obtained by the lidar in the scanning period.
  • the corresponding ranging capabilities output by the K detectors in the first pixel combining manner are different from the corresponding ranging capabilities output by the F detectors in the second pixel combining manner.
  • the ranging capability corresponding to the first electrical signal is different from the ranging capability corresponding to the second electrical signal.
  • the processing module generating the first point cloud according to the first electrical signal and the second electrical signal may be: first performing fusion processing on the first electrical signal and the second electrical signal to obtain a target signal, and then according to the target The signal generates the first point cloud.
  • the first electrical signal may reflect the situation that the detectors in the first receiving array receive echo energy
  • the second electrical signal may reflect the situation that the detectors in the second receiving array receive echo energy.
  • performing fusion processing on the first electrical signal and the second electrical signal to obtain the target signal may mean fusing the first electrical signal and the second electrical signal so as to obtain a signal that can reflect both the echo energy received by the detectors in the first receiving array and the echo energy received by the detectors in the second receiving array.
  • the present application does not limit the manner of performing fusion processing on the first electrical signal and the second electrical signal.
  • the first electrical signal is used to generate a point cloud with a first resolution
  • the second electrical signal is used to generate a point cloud with a second resolution
  • the first point cloud includes the point cloud of the first resolution and the point cloud of the second resolution.
  • the processing module is configured to generate the first point cloud according to the first electrical signal and the second electrical signal; it can be compatible with different ranging capabilities.
  • the first receiving array is configured to output a first electrical signal according to the first echo beam; the second receiving array is configured to output a second electrical signal according to the second echo beam;
  • the lidar also includes: a processing module configured to generate a second point cloud based on the first electrical signal; the processing module is also configured to generate a third point cloud based on the second electrical signal.
  • the processing module is further configured to generate a fourth point cloud according to the second point cloud and the third point cloud.
  • the first electrical signal and the second electrical signal are two sets of electrical signals output by the lidar in the same scanning period, and the second point cloud and the third point cloud are two frame points obtained by the lidar in the scanning period cloud.
  • the first electrical signal and the second electrical signal are two sets of electrical signals output by the laser radar in different scanning periods (for example, two adjacent scanning periods), and the second point cloud and the third point cloud are laser Two frames of point clouds obtained by the radar at two different scan cycles.
  • generating the fourth point cloud may be understood as fusing two frames of point clouds into one frame of point cloud.
  • generating the fourth point cloud may be: performing fusion processing on the second point cloud and the third point cloud to obtain a fourth point cloud.
  • the processing module can perform fusion processing on the second point cloud and the third point cloud in any manner, which is not limited in this application.
  • the resolution of the second point cloud may be different from that of the third point cloud, and the field angle corresponding to the second point cloud may be different from the field angle corresponding to the third point cloud.
  • the fourth point cloud is generated according to the second point cloud and the third point cloud with different resolutions; the advantages of the second point cloud and the third point cloud can be combined to obtain a better point cloud.
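One possible, deliberately simple way to realize such a fusion is sketched below: the two per-array point clouds are merged on a coarse angular grid, and where they overlap the point from the higher-resolution cloud is kept. The cell size and the "prefer the higher-resolution cloud" rule are assumptions; the patent leaves the fusion method open.

```python
# Hedged sketch of fusing the second and third point clouds into a fourth one.

def fuse_point_clouds(cloud_hi, cloud_lo, cell_deg=0.2):
    """Each cloud is a list of (azimuth_deg, elevation_deg, range_m)."""
    def cell(p):
        return (round(p[0] / cell_deg), round(p[1] / cell_deg))
    fused = {cell(p): p for p in cloud_lo}        # low-resolution points first ...
    fused.update({cell(p): p for p in cloud_hi})  # ... overwritten by high-res points
    return list(fused.values())

fourth = fuse_point_clouds(
    cloud_hi=[(0.0, 0.0, 180.2), (0.2, 0.0, 181.0)],   # central, long range
    cloud_lo=[(0.0, 0.0, 179.9), (10.0, 5.0, 22.3)],   # surrounding, short range
)
print(fourth)
```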
  • in a first manner, each of the K detectors outputs an analog pulse after receiving the echo energy.
  • the analog pulses output by the K detectors are superimposed together to obtain a superimposed analog signal.
  • the superimposed analog signal is processed by an analog to digital converter (ADC) and converted into a digital signal.
  • the digital signals output by the ADC are sampled to obtain output signals corresponding to the K detectors.
  • for example, if the K detectors are adjacent (4×4) detectors in the first receiving array and the K detectors detect 8 single-photon events, the 8 analog pulses output by the K detectors are superimposed together to obtain a superimposed analog signal, which is converted into a digital signal through ADC processing, and the digital signal is sampled to obtain an output signal of 8 corresponding to the (4×4) detectors.
  • the output signal of 8 corresponding to the (4×4) detectors obtained through the first manner indicates that 8 single-photon events have been detected.
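A minimal sketch of this first manner follows, under simplifying assumptions: each detector contributes a unit-amplitude rectangular pulse, the ADC is idealized, and the number of events is estimated from the total sampled pulse area. Pulse width, sampling step, and window length are illustrative values, not figures from the patent.

```python
# Sketch of the first manner: analog superposition, idealized ADC, and an
# area-based estimate of the number of single-photon events.

def superimpose_and_count(pulse_times, pulse_width=2.0, dt=0.5, window=100.0):
    """pulse_times: arrival times (ns) of single-photon pulses from the K
    binned detectors. Sums the pulses, samples them, and estimates the number
    of events from the total sampled area (an assumed reduction)."""
    n_samples = int(window / dt)
    waveform = [0.0] * n_samples
    for t0 in pulse_times:
        for i in range(n_samples):
            t = i * dt
            if t0 <= t < t0 + pulse_width:
                waveform[i] += 1.0                    # analog superposition
    digital = [round(v) for v in waveform]            # idealized ADC
    return round(sum(digital) * dt / pulse_width)     # ~ number of pulses

# Eight pulses from a (4 x 4) group yield an output signal of 8.
print(superimpose_and_count([10.0, 10.0, 13.0, 40.0, 41.0, 55.0, 70.0, 90.0]))
```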
  • in a second manner, each of the K detectors outputs an analog pulse after receiving the echo energy, and the signal output of each detector is connected to a TDC. That is to say, the analog pulse output by each detector is processed by the TDC to obtain a digital signal and a time stamp.
  • the receiving chip combines and processes the two or more digital signals according to the time stamps corresponding to the two or more digital signals output by each TDC connected to the K detectors, and obtains output signals corresponding to the K detectors.
  • for example, if the K detectors are adjacent (4×4) detectors in the first receiving array and the K detectors detect 8 single-photon events, the output signal corresponding to the K detectors obtained through the second manner corresponds to a statistical histogram of height 8.
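The sketch below illustrates this second manner: the time stamps produced by the TDCs attached to the K binned detectors are merged into one histogram for the equivalent pixel. The 1 ns bin width and the example time stamps are assumptions chosen so that the peak bin reaches a height of 8, mirroring the example above.

```python
# Sketch of the second manner: merge per-detector TDC time stamps into one
# statistical histogram for the equivalent pixel.
from collections import Counter

def build_histogram(tdc_timestamps_ns, bin_width_ns=1.0):
    """tdc_timestamps_ns: all time stamps reported by the TDCs attached to the
    K binned detectors during one measurement. Returns {bin_index: count}."""
    return Counter(int(t // bin_width_ns) for t in tdc_timestamps_ns)

# Eight single-photon events, all near 120 ns of flight time, give a histogram
# whose peak bin has height 8.
hist = build_histogram([120.1, 120.3, 120.2, 120.4, 120.0, 120.2, 120.5, 120.6])
print(hist)
print(max(hist.values()))   # 8
```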
  • the output of the F detectors in the second pixel combination may be similar to that of the K detectors in the first pixel combination, which will not be described in detail here. It should be understood that the output of the K detectors in the first pixel combining manner is to combine the outputs of the K detectors into one output, that is, the K detectors can be equivalent to one detector. Similarly, the output of the F detectors in the second pixel-combining manner is to combine the outputs of the F detectors into one output, that is, the F detectors can be equivalent to one detector.
  • Ways 1 and 2 can be understood as some possible ways of determining the number of single-photon events detected by the K detectors (the K detectors in the first receiving array outputting in the first pixel binning manner) provided by the present application. It should be understood that the lidar may also use other methods to determine the number of single-photon events detected by the K detectors in the first receiving array, which is not limited in this application.
  • the output of the K detectors in the first pixel binning manner may also be: outputting the detection result of the K detectors according to the number of single-photon events detected by the K detectors.
  • if the number of single-photon events detected by the K detectors reaches a target threshold, the detection result of the K detectors is that a single-photon event is detected; otherwise, the detection result is that no single-photon event is detected.
  • the target threshold may be 1, the smallest integer greater than (K/2), the smallest integer greater than (3K/5), or the like.
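The threshold rule can be expressed compactly as in the sketch below; the rule names and the choice of K = 16 in the usage lines are illustrative assumptions.

```python
# Sketch of the threshold rule: report a detection only if the number of
# single-photon events registered by the K binned detectors reaches the
# target threshold named in the text above.
import math

def binned_detection(event_count, k, threshold_rule="majority"):
    if threshold_rule == "any":
        threshold = 1
    elif threshold_rule == "majority":
        threshold = math.floor(k / 2) + 1          # smallest integer > K/2
    else:  # "three_fifths"
        threshold = math.floor(3 * k / 5) + 1      # smallest integer > 3K/5
    return event_count >= threshold

print(binned_detection(8, k=16, threshold_rule="majority"))      # False (needs 9)
print(binned_detection(8, k=16, threshold_rule="three_fifths"))  # False (needs 10)
print(binned_detection(8, k=16, threshold_rule="any"))           # True
```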
  • the present application also provides a receiving system, a transmitting system, and a control method for the lidar.
  • the receiving system provided by this application is introduced below.
  • the receiving system provided by the present application can be applied to a lidar and includes: a first receiving array for receiving a first echo beam, where the K detectors in the first receiving array output in a first pixel binning manner and K is an integer greater than 0; and a second receiving array for receiving a second echo beam, where the F detectors in the second receiving array output in a second pixel binning manner, F is an integer greater than 0, and K is not equal to F.
  • One possible product form of the receiving system is the receiving chip. That is to say, the receiving chip including the first receiving array and the second receiving array is a product to be protected in this application.
  • the corresponding ranging capabilities output by the K detectors in the first pixel combining manner are different from the corresponding ranging capabilities output by the F detectors in the second pixel combining manner.
  • the receiving system provided by the embodiment of the present application is compatible with different ranging capabilities. That is to say, the receiving system provided by the embodiment of the present application has two or more ranging capabilities.
  • the first field of view corresponding to the first receiving array is different from the second field of view corresponding to the second receiving array.
  • the ranging capability of the first field of view corresponding to the first receiving array is different from the ranging capability of the second field of view corresponding to the second receiving array.
  • the first field of view corresponding to the first receiving array is different from the second field of view corresponding to the second receiving array, and the laser radar deployed with the first receiving array and the second receiving array can be compatible with different ranging capabilities and field of view.
  • the first receiving array and the second receiving array share one or more detectors, or the first receiving array and the second receiving array share no detectors.
  • the first receiving array and the second receiving array share one or more detectors, which can realize the multiplexing of detectors with a high utilization rate of the detectors, and is suitable for scenarios where the field of view corresponding to the first receiving array and the field of view corresponding to the second receiving array overlap.
  • the first receiving array and the second receiving array do not share a detector, the circuit is simple, and is applicable to a scene where the field of view corresponding to the first receiving array and the field of view corresponding to the second receiving array do not overlap.
  • the detectors included in the second receiving array surround the detectors included in the first receiving array, or the detectors included in the second receiving array are located around the detectors included in the first receiving array, wherein the K is less than the F.
  • the first receiving array can realize long-distance detection with a small field of view
  • the second receiving array can realize short-distance detection with a large field of view
  • the detectors included in the first receiving array surround the detectors included in the second receiving array, or the detectors included in the first receiving array are located around the detectors included in the second receiving array, wherein the K is less than the F.
  • the first receiving array can realize long-distance detection with a large field of view
  • the second receiving array can realize short-distance detection with a small field of view
  • the first receiving array is configured to output a first electrical signal according to the first echo beam;
  • the second receiving array is configured to output a second electrical signal according to the second echo beam; the first electrical signal and the second electrical signal are used to generate the same point cloud or different point clouds.
  • if the first electrical signal and the second electrical signal are used to generate the same point cloud, different parts of the point cloud have different resolutions.
  • if the first electrical signal and the second electrical signal are used to generate different point clouds, a better point cloud can be obtained by merging the point cloud generated from the first electrical signal and the point cloud generated from the second electrical signal.
  • the receiving system further includes: a processing module, configured to acquire area adaptation information, where the area adaptation information is used to indicate that the K detectors in the first receiving array output in the first pixel binning manner and that the F detectors in the second receiving array output in the second pixel binning manner.
  • the processing module is configured to obtain the area adaptation information; it can accurately determine the output mode of the detectors in the first receiving array and the output mode of the detectors in the second receiving array.
  • the first receiving array is specifically used to receive the first echo beam from the receiving optical assembly, and the second receiving array is specifically used to receive the second echo beam from the receiving optical assembly.
  • the first receiving array and the second receiving array share the receiving optical components, which can realize the multiplexing of the receiving optical components.
  • the first electrical signal is used to generate a point cloud with a first resolution
  • the second electrical signal is used to generate a point cloud with a second resolution
  • the first electrical signal and the second electrical signal are used to generate a first point cloud comprising the point cloud of the first resolution and the point cloud of the second resolution.
  • the first electrical signal and the second electrical signal are used to generate a first point cloud comprising a point cloud of the first resolution and a point cloud of the second resolution, and the first point cloud can cover a more complete Probe scene.
  • the first electrical signal is used to generate a second point cloud with a first resolution
  • the second electrical signal is used to generate a third point cloud with a second resolution
  • the first resolution is different from the second resolution
  • the field angle corresponding to the second point cloud is different from the field angle corresponding to the third point cloud.
  • the first electrical signal and the second electrical signal are used to generate point clouds with different resolutions, so as to combine the advantages of the two point clouds with different resolutions to obtain a better point cloud.
  • the transmitting system provided by the embodiment of the present application can be applied to a lidar, and the transmitting system includes: a transmitting array for generating a transmitting beam, where the transmitting array includes a first transmitting array and a second transmitting array; and a driving circuit for driving the first transmitting array and the second transmitting array respectively with different current magnitudes.
  • a possible product form of the transmitting system is the transmitting chip. That is to say, the transmitting chip including the transmitting array and the driving circuit is a product to be protected in this application.
  • the driving circuit respectively drives the first emitting array and the second emitting array through different current magnitudes, so as to be able to emit optical signals with different intensities.
  • the first emitting array and the second emitting array share one or more lasers, or the first emitting array and the second emitting array do not share a laser.
  • the first emitting array and the second emitting array share one or more lasers, so that multiplexing of lasers can be realized, and the utilization rate of the lasers is high.
  • the first emitting array and the second emitting array do not share a laser, and the circuit structure is simple.
  • the lasers included in the second emitting array surround the lasers included in the first emitting array, or the lasers included in the second emitting array are located around the lasers included in the first emitting array.
  • the viewing angle corresponding to the first emitting array is included in the viewing angle corresponding to the second emitting array.
  • the K lasers in the first emitting array (corresponding to one equivalent pixel in the first emitting array) output in a first pixel binning manner, and the F lasers in the second emitting array (corresponding to one equivalent pixel in the second emitting array) output in a second pixel binning manner, where K is an integer greater than 0, F is an integer greater than 0, and K is not equal to F.
  • the output of the K lasers in the first binning manner may be similar to the output of the K detectors in the first binning manner.
  • the output of the F lasers in the second binned manner may be similar to the output of the F detectors in the second binned manner.
  • the K lasers output the corresponding ranging capability in the first pixel-merging manner, which is different from the F lasers outputting the corresponding ranging capability in the second pixel-merging manner. Therefore, the transmitting system provided by the embodiment of the present application can be compatible with different ranging capabilities. That is to say, the transmitting system provided in the embodiment of the present application has two or more ranging capabilities.
  • the first emitting array may include two or more groups of lasers, and the K lasers included in each group output in the first pixel binning manner.
  • the second emitting array may include two or more groups of lasers, and the F lasers included in each group output in the manner of the second pixel combination.
  • the K lasers output the corresponding ranging capability in the first pixel-merging manner, which is different from the F lasers outputting the corresponding ranging capability in the second pixel-merging manner.
  • Each group of lasers in the first emitting array can be regarded as an equivalent pixel in the first emitting array
  • each group of lasers in the second emitting array can be regarded as an equivalent pixel in the second emitting array. Since K is not equal to F, the equivalent pixels in the first transmit array and the equivalent pixels in the second transmit array are of different sizes.
  • the transmitting system further includes: a processing module, configured to acquire area adaptation information, where the area adaptation information is used to indicate the first transmitting array and/or the second transmitting array.
  • the first emitting array and/or the second emitting array can be flexibly configured to meet different application requirements.
  • FIG. 6 is a flow chart of a control method of a laser radar according to an embodiment of the present application. As shown in Figure 6, the method includes:
  • the lidar receives a first echo light beam through a first receiving array to obtain a first electrical signal.
  • the first echo beam corresponds to the beam emitted by the first emitting array, and the K detectors in the first receiving array output in a first pixel-merging manner.
  • the K detectors correspond to at least one laser in the first emitting array, and K is an integer greater than 0.
  • the lidar receiving the first echo beam through the first receiving array to obtain the first electrical signal can be understood as the receiving chip in the lidar receiving the first echo beam to obtain the first electrical signal.
  • the first receiving array and the second receiving array share one or more detectors, or the first receiving array and the second receiving array share no detectors.
  • the first receiving array and the second receiving array share one or more detectors, so that the multiplexing of detectors can be realized, and the utilization rate of the detectors is high.
  • the first receiving array and the second receiving array do not share a detector, and the circuit structure is simple.
  • the detectors included in the second receiving array surround the detectors included in the first receiving array, or the detectors included in the second receiving array are located around the detectors included in the first receiving array , wherein the K is less than the F.
  • the detectors included in the first receiving array surround the detectors included in the second receiving array, or the detectors included in the first receiving array are located around the detectors included in the second receiving array, wherein the K is less than the F.
  • the method further includes: respectively driving the first emitting array and the second emitting array with different current magnitudes, the first emitting array emits a first emitting beam and the The second transmit array emits a second transmit beam; the first transmit beam corresponds to the first echo beam, and the second transmit beam corresponds to the second echo beam.
  • the driving circuit respectively drives the first emitting array and the second emitting array through different current magnitudes, so that the first emitting array and the second emitting array can emit optical signals with different intensities.
  • the laser radar receives the second echo light beam through the second receiving array to obtain a second electrical signal.
  • the second echo light beam corresponds to the light beam emitted by the second transmitting array, and the F detectors in the second receiving array output in a second pixel-merging manner, and the F detectors correspond to the second emit at least one laser in the array; the F is an integer greater than 0, and the K is not equal to the F.
  • the lidar receiving the second echo beam through the second receiving array to obtain the second electrical signal can be understood as the receiving chip in the lidar receiving the second echo beam to obtain the second electrical signal.
  • the lidar generates a target point cloud according to the first electrical signal and the second electrical signal.
  • one possible implementation of step 603 is as follows: the lidar generates a second point cloud according to the first electrical signal, generates a third point cloud according to the second electrical signal, and then generates the target point cloud (corresponding to the fourth point cloud) according to the second point cloud and the third point cloud.
  • another possible implementation of step 603 is as follows: first perform fusion processing on the first electrical signal and the second electrical signal to obtain a target signal, and then generate the target point cloud (corresponding to the first point cloud) according to the target signal.
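The two implementations of step 603 can be summarized in the hedged end-to-end sketch below. The function bodies are placeholders (the patent does not prescribe the per-array processing or the fusion method), and the list-based signals are purely illustrative.

```python
# Hedged sketch of the control flow of FIG. 6 around step 603; the processing
# and fusion steps are placeholders, not the patent's prescribed method.

def generate_point_cloud(electrical_signal):
    return electrical_signal            # placeholder for per-array processing

def control_cycle(first_electrical_signal, second_electrical_signal,
                  fuse_signals_first=False):
    if fuse_signals_first:
        # implementation 2 of step 603: fuse the electrical signals, then
        # generate one target point cloud (the "first point cloud")
        target_signal = first_electrical_signal + second_electrical_signal
        return generate_point_cloud(target_signal)
    # implementation 1 of step 603: per-array point clouds, then fuse them
    second_cloud = generate_point_cloud(first_electrical_signal)
    third_cloud = generate_point_cloud(second_electrical_signal)
    return second_cloud + third_cloud   # the "fourth point cloud"

print(control_cycle([(0.0, 0.0, 180.2)], [(10.0, 5.0, 22.3)]))
```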
  • the method further includes: acquiring area adaptation information by lidar; configuring K detectors in the first receiving array according to the area adaptation information to output in the manner of the first pixel combination , and configure F detectors in the second receiving array to output in the second pixel binning manner according to the area adaptation information.
  • the area adaptation information is used to indicate that the K detectors in the first receiving array output in the first pixel binning manner, and the F detectors in the second receiving array output in the second Output by pixel binning.
  • the area adaptation information is acquired so as to accurately determine the output mode of the detectors in the first receiving array and the output mode of the detectors in the second receiving array.
  • the area adaptation information may also be used to indicate the first transmitting array corresponding to the first receiving array and/or the second transmitting array corresponding to the second receiving array.
  • the lidar may also configure the first transmitting array corresponding to the first receiving array and/or the second transmitting array corresponding to the second receiving array according to the area adaptation information.
  • for example, the lidar configures the drive current magnitude of the first transmitting array corresponding to the first receiving array and/or the drive current magnitude of the second transmitting array corresponding to the second receiving array.
  • the area adaptation information is further used to indicate the first transmitting array corresponding to the first receiving array and/or the second transmitting array corresponding to the second receiving array.
  • the first emitting array and/or the second emitting array can be flexibly configured to meet different application requirements.
  • the corresponding ranging capabilities output by the K detectors in the first pixel combining manner are different from the corresponding ranging capabilities output by the F detectors in the second pixel combining manner. That is to say, the ranging capability corresponding to the first receiving array is different from the ranging capability corresponding to the second receiving array.
  • the target point cloud is generated according to the first electrical signal and the second electrical signal; it can be compatible with different ranging capabilities.
  • the lidar in this application may be a non-solid-state lidar or a solid-state lidar.
  • solid-state lidar is radar with no moving parts at all.
  • one disadvantage of solid-state lidar is its limited scanning angle: being solid-state means the lidar cannot rotate 360 degrees and can only detect what is in front of it. Therefore, to achieve omnidirectional scanning, multiple (at least two) solid-state lidars need to be arranged in different directions.
  • the lidar provided in this application has two or more fields of view, and the ranging capability of each field of view is different. That is to say, one lidar provided by this application is equivalent to two or more solid-state lidars, which can better solve the problem of limited scanning angle of solid-state lidars.
  • solid-state lidar is positioned as a highly integrated, low-cost lidar product. However, because solid-state lidar involves many types of materials and components with different performance parameters, even products from the same manufacturer mostly lack platform-based and modular design concepts. Research shows that realizing a platform-based, modular lidar design mainly requires solving the following problems:
  • the transmitting end and/or receiving end of the lidar are adapted to lenses with different focal lengths to obtain different FOV solutions.
  • FOV of the lidar can be adjusted by adjusting the optical lens of the receiver to which the lidar is adapted.
  • the laser radar provided by this application only needs to ensure that the scale of the transmitting array and the receiving array is large enough and the resolution is high enough (for example, 1080p); the parameter specifications of the transmitting chip and of the optical lens at the transmitting end do not need to be changed.
  • the laser radar provided by this application can solve the problem of how to improve the component reuse rate of the solid-state laser radar based on platform design. Since the lidar provided in this application has two or more fields of view, and the ranging capabilities of each field of view are different, one lidar can simultaneously meet the needs of different scenarios. That is to say, the laser radar provided by this application can solve the problem of how to realize the coupling of various performance parameters on the same platform product. Since one laser radar provided by this application is equivalent to two or more solid-state laser radars, the cost can be greatly reduced.
  • lidars suitable for different scenarios can be obtained by adapting different receiving-end optical lenses for the lidar and by controlling the output mode of the receiving array through software or a control strategy.
  • the lidar provided by this application is easier to realize the platform and modular design of solid-state lidar, thereby reducing the cost and development cycle.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention relates to a laser radar, a receiving system, a transmitting system and a control method. The laser radar is applied to the field of intelligent vehicles. The laser radar comprises: a transmitting array (101) for generating a transmitted light beam, the transmitting array (101) comprising a first transmitting array (101) and a second transmitting array (101); a first receiving array (102) for receiving a first echo light beam, the first echo light beam corresponding to a light beam emitted by the first transmitting array (101), and K detectors in the first receiving array (102) outputting in a first pixel binning manner; and a second receiving array (102) for receiving a second echo light beam, the second echo light beam corresponding to a light beam emitted by the second transmitting array (101), F detectors in the second receiving array (102) outputting in a second pixel binning manner, F being an integer greater than 0 and K not being equal to F. The laser radar according to the invention can be compatible with different ranging capabilities.
PCT/CN2022/124749 2021-10-15 2022-10-12 Radar laser, système de réception, système d'émission et procédé de commande WO2023061386A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111204614.8A CN115980778A (zh) 2021-10-15 2021-10-15 激光雷达、接收系统、发射系统以及控制方法
CN202111204614.8 2021-10-15

Publications (1)

Publication Number Publication Date
WO2023061386A1 true WO2023061386A1 (fr) 2023-04-20

Family

ID=85974605

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/124749 WO2023061386A1 (fr) 2021-10-15 2022-10-12 Radar laser, système de réception, système d'émission et procédé de commande

Country Status (2)

Country Link
CN (1) CN115980778A (fr)
WO (1) WO2023061386A1 (fr)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104049255A (zh) * 2014-05-05 2014-09-17 南京大学 一种基于编码调制的激光三维雷达装置
CN107728130A (zh) * 2017-09-14 2018-02-23 中国科学院上海光学精密机械研究所 多通道宽幅度合成孔径激光成像雷达收发系统
CN109946710A (zh) * 2019-03-29 2019-06-28 中国科学院上海技术物理研究所 一种双波长多偏振激光成像装置
US20210124049A1 (en) * 2019-10-29 2021-04-29 Hexagon Technology Center Gmbh Multi-beam measuring device
CN113176555A (zh) * 2020-01-24 2021-07-27 上海禾赛科技有限公司 激光雷达的发射单元、激光雷达以及测距方法
CN111830530A (zh) * 2020-06-04 2020-10-27 深圳奥锐达科技有限公司 一种距离测量方法、系统及计算机可读存储介质

Also Published As

Publication number Publication date
CN115980778A (zh) 2023-04-18

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22880322

Country of ref document: EP

Kind code of ref document: A1