WO2023193408A1 - LiDAR and LiDAR Control Method - Google Patents

LiDAR and LiDAR Control Method

Info

Publication number
WO2023193408A1
Authority
WO
WIPO (PCT)
Prior art keywords
image acquisition
rotor
laser transceiver
laser
exposure
Prior art date
Application number
PCT/CN2022/120814
Other languages
English (en)
French (fr)
Inventor
陈杰
向少卿
Original Assignee
Shanghai Hesai Technology Co., Ltd. (上海禾赛科技有限公司)
Priority date
Filing date
Publication date
Application filed by Shanghai Hesai Technology Co., Ltd.
Publication of WO2023193408A1 publication Critical patent/WO2023193408A1/zh

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01S — RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 — Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48 — Details of systems according to group G01S17/00
    • G01S7/481 — Constructional features, e.g. arrangements of optical elements
    • G01S7/4811 — Constructional features, e.g. arrangements of optical elements common to transmitter and receiver
    • G01S7/4817 — Constructional features, e.g. arrangements of optical elements relating to scanning

Definitions

  • the embodiments of this specification relate to the field of radar technology, and in particular, to a laser radar and a laser radar control method.
  • LiDAR has become an important piece of equipment in environmental perception.
  • the point cloud of lidar can provide three-dimensional spatial information of the external environment.
  • the useful information that lidar can provide is limited.
  • LiDAR can achieve 360° environmental perception perpendicular to the axis by rotating around the axis.
  • the image acquisition device is fixed and has a limited field of view; therefore, there is a large field-of-view difference between the image and the point cloud.
  • In order to solve the above problems, there are currently two solutions:
  • the one-dimensional linear array image sensor is installed on the rotor of the lidar, rotates together with the linear array detector of the lidar, and shares a set of optical components with the linear array detector.
  • the linear array image sensor and linear array detector can respectively receive incident light from the same field of view area, and the field of view matching between the image and depth information does not require a complicated calibration process.
  • the time for the linear array image sensor and the linear array detector to receive the same incident light is reduced, thereby shortening the exposure time of the linear array image sensor.
  • since the linear array image sensor has multiple monochromatic channels (such as a red channel, a green channel, and a blue channel), the exposure time of each monochromatic channel is shortened even further, seriously affecting the imaging quality of the image.
  • embodiments of this specification provide a lidar and a lidar control method that can improve the data quality and synchronization of the lidar, thereby improving the performance of the lidar.
  • Embodiments of this specification provide a laser radar, including: a rotor, a laser transceiver device, an image acquisition device, and a control device; the laser transceiver device and the image acquisition device are disposed on the rotor and arranged around the axis of the rotor; where:
  • the rotor is adapted to rotate around the axis
  • the laser transceiver device includes a transceiver optical component, adapted to emit detection signals and receive echo signals formed by reflections of the detection signals;
  • the image acquisition device includes an imaging optical component and is suitable for exposing the target area
  • the control device is adapted to generate depth information based on the detection signal and the echo signal, and generate an image based on the exposure result of the image acquisition device.
  • control device is also adapted to obtain an image containing depth information based on the relative pose relationship between the laser transceiver device and the image acquisition device.
  • the image acquisition device further includes a plurality of pixel acquisition modules arranged in a first direction, and each of the pixel acquisition modules includes a plurality of photosensitive units arranged in a second direction; the first direction is a direction parallel to the axis, and the second direction is a direction perpendicular to the axis.
  • the angle that the pixel acquisition module rotates between two adjacent exposures is equivalent to the angular resolution of the pixel acquisition module in the second direction.
  • control device is also adapted to control the plurality of photosensitive units of each of the pixel acquisition modules to sequentially expose the same field of view scanning area during the rotation process.
  • the pixel acquisition module is adapted to superimpose the exposure charges generated by the multiple photosensitive units sequentially exposing the field of view scanning area to output as the exposure result.
  • the pixel acquisition module also includes: a charge shift register unit and a conversion output unit;
  • the charge shift register unit includes a plurality of charge storage areas, which correspond to the plurality of photosensitive units and are coupled in sequence; the charge shift register unit is adapted to store and output the exposure charges generated by the sequential exposure of the multiple photosensitive units to the field-of-view scanning area;
  • the conversion output unit is coupled to the charge shift register unit and is adapted to sample the exposure charge output by the charge shift register unit and convert it into an electrical signal for output.
  • control device is adapted to read the exposure result to generate an image
  • the duration of the exposure period of the pixel acquisition module is greater than the sum of: a single exposure time, a single charge transfer time, and a single reading time.
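The timing constraint above can be sketched as a simple check (a minimal illustration; the function and value names are ours, not the patent's):

```python
# Exposure-period constraint: the period must exceed the sum of a single
# exposure time, a single charge-transfer time, and a single read time.
def min_exposure_period(t_exposure: float, t_transfer: float, t_read: float) -> float:
    """Return the minimum admissible exposure period in seconds."""
    return t_exposure + t_transfer + t_read

def period_is_valid(t_period: float, t_exposure: float,
                    t_transfer: float, t_read: float) -> bool:
    """True when the chosen period satisfies the constraint."""
    return t_period > min_exposure_period(t_exposure, t_transfer, t_read)

# e.g. 50 us exposure + 5 us transfer + 10 us read -> the period must exceed 65 us
```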
  • the laser transceiver device includes: a first laser transceiver module and a second laser transceiver module; the first laser transceiver module includes a first transceiver optical component, and the second laser transceiver module includes a second transceiver optical component; the focal length of the first transceiver optical component is greater than the focal length of the second transceiver optical component.
  • the first laser transceiver module and the second laser transceiver module are arranged around the axis.
  • control device is also adapted to perform quality evaluation on the generated image, and adjust the exposure time of a plurality of the pixel acquisition modules according to the evaluation results.
  • the laser radar further includes: a light filling module, which is provided on the rotor and is suitable for filling light for the image acquisition device.
  • the embodiment of this specification also provides a lidar control method.
  • the lidar includes: a rotor, a laser transceiver device, an image acquisition device and a control device.
  • the laser transceiver device includes a transceiver optical component;
  • the image acquisition device includes an imaging optical component; the laser transceiver device and the image acquisition device are disposed on the rotor and arranged around the axis of the rotor;
  • the lidar control method includes:
  • A3) Generate depth information based on the detection signal and echo signal
  • A5) Generate an image based on the exposure result of the image acquisition device.
  • the lidar control method also includes:
  • the image acquisition device further includes a plurality of pixel acquisition modules arranged in a first direction, and each of the pixel acquisition modules includes a plurality of photosensitive units arranged in a second direction;
  • the first direction is a direction parallel to the axis of the rotor, and the second direction is a direction perpendicular to the axis of the rotor;
  • the step A1) includes:
  • the rotation speed of the rotor is controlled so that the angle rotated by the pixel acquisition module between two adjacent exposures is equivalent to the angular resolution of the pixel acquisition module in the second direction.
  • step A4) includes:
  • the plurality of photosensitive units of each of the pixel acquisition modules are controlled to expose the same field of view scanning area in sequence.
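The speed-matching condition in step A1) — the angle rotated between two adjacent exposures equals the module's angular resolution in the second direction — can be sketched as follows (an illustrative calculation with hypothetical values, not from the patent):

```python
# If one exposure period must span exactly one angular-resolution step,
# the required rotor angular velocity follows directly.
def rotor_speed_for_resolution(angular_resolution_deg: float,
                               exposure_period_s: float) -> float:
    """Angular velocity (deg/s) so the module rotates one resolution step
    per exposure period."""
    return angular_resolution_deg / exposure_period_s

# e.g. 0.1 deg resolution with a 20 us exposure period -> 5000 deg/s,
# i.e. roughly 13.9 revolutions per second
speed_deg_s = rotor_speed_for_resolution(0.1, 20e-6)
rev_per_s = speed_deg_s / 360.0
```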
  • the laser transceiver device and the image acquisition device can be disposed on the rotor and arranged around the axis of the rotor, and the rotor can rotate around the axis; the laser transceiver device can emit a detection signal through the transceiver optical component and receive an echo signal formed by reflection of the detection signal; the image acquisition device exposes the target area through the imaging optical component; and the control device can generate depth information based on the detection signal and the echo signal, and generate an image based on the exposure result of the image acquisition device.
  • both the laser transceiver device and the image acquisition device can achieve 360° perception in the direction perpendicular to the axis.
  • the laser transceiver device and the image acquisition device are jointly mounted on the rotor with a fixed relative pose relationship, reducing the calibration complexity between the image and depth information and improving data processing efficiency; the rotation speed of the rotor is adapted to the working timing of the laser transceiver device and of the image acquisition device, improving the time synchronization between the image and the depth information; and because the laser transceiver device and the image acquisition device operate through independently configured optical components, sufficient exposure time can be provided for the image acquisition device to ensure the imaging quality of the image. The above structure can therefore improve the data quality and synchronization of the lidar, thereby improving its performance, and can also reduce the hardware cost of the lidar.
  • Figure 1 is a schematic structural diagram of a laser radar in an embodiment of this specification.
  • Figure 2 is a schematic structural diagram of the lidar rotor shown in Figure 1 rotated 180° counterclockwise.
  • Figure 3 is a front view of an image acquisition device provided by an embodiment of this specification.
  • Figure 4 is a top view of the image acquisition device shown in Figure 3.
  • Figures 5 and 6 are schematic diagrams of the rotation process of the image acquisition device shown in Figure 4.
  • Figure 7 is a schematic diagram of the image acquisition device shown in Figure 3 obtaining the exposure results of the same field-of-view scanning area.
  • Figure 8 is a schematic structural diagram of a pixel acquisition module provided by an embodiment of this specification.
  • Figure 9 is a schematic structural diagram of a laser transceiver device provided by an embodiment of this specification.
  • Figure 10 is a schematic structural diagram of another lidar provided by an embodiment of this specification.
  • Figure 11 is a flow chart of a lidar control method provided by an embodiment of this specification.
  • the laser transceiver device and the image acquisition device can be disposed on the rotor and arranged around the axis of the rotor.
  • the rotor can rotate around the axis;
  • the laser transceiver device can emit a detection signal through a transceiver optical component and receive an echo signal formed by reflection of the detection signal;
  • the image acquisition device can expose the target area through an imaging optical component, and the control device can generate depth information based on the detection signal and the echo signal, and generate an image based on the exposure result of the image acquisition device.
  • the data quality and synchronization of the lidar can be improved, thereby improving the performance of the lidar.
  • the laser radar LS1 may include: a rotor M11, a laser transceiver device M12, an image acquisition device M13, and a control device M14;
  • the laser transceiver M12 and the image acquisition device M13 are disposed on the rotor M11 and arranged around the axis of the rotor M11; in the perspective shown in Figure 1, the axis of the rotor M11 is perpendicular to the plane of Figure 1.
  • Said rotor M11 is adapted to rotate about said axis.
  • the laser transceiver M12 includes a transceiver optical component M121, and the laser transceiver M12 is adapted to transmit a detection signal X1 to the outside world and receive an echo signal X2 formed by the detection signal X1 being reflected by an external obstacle W1.
  • the image acquisition device M13 includes an imaging optical component M131, and the image acquisition device M13 is adapted to expose the target area F1.
  • the control device M14 generates depth information based on the detection signal X1 and the echo signal X2, and generates an image based on the exposure result of the image acquisition device M13.
  • target area of the image acquisition device described in this specification can be understood as: the area covered by the field of view of the image acquisition device in the external environment within one exposure cycle.
  • the target area of the image acquisition device is not an area defined by real boundaries; during the rotation of the image acquisition device driven by the rotor, the field of view of the image acquisition device rotates, and accordingly, the target area of the image acquisition device also changes dynamically.
  • data fusion can be performed based on the relative pose relationship between the laser transceiver device and the image acquisition device to obtain an image containing depth information.
  • the high-precision detection results of the laser transceiver device and the high-precision exposure results of the image acquisition device can be fully combined, thereby improving the accuracy of depth information and images, and effectively ensuring the data quality and information volume provided by lidar.
  • the laser transceiver device and the image acquisition device can be calibrated, so that the relative pose relationship between them can be obtained; data fusion is then performed based on this relative pose relationship to obtain an image containing depth information.
  • the calibration methods may include: manual calibration, algorithm automatic calibration, etc. The embodiments of this specification do not specifically limit this.
  • the calibration between the laser transceiver device and the image acquisition device is completed before the lidar operates, so that the result can be invoked while the lidar is working; relative pose calibration therefore need not be repeated during subsequent data processing, which reduces algorithm complexity and improves data fusion efficiency.
  • the specific process of data fusion may include: projecting the depth information onto the image based on the relative pose relationship between the laser transceiver device and the image acquisition device, or projecting the image onto the lidar point cloud and combining it with the depth information, to form an image containing depth information.
  • the laser transceiver device and the image acquisition device can use the same coordinate system, and both the depth information and the image can be in the same coordinate system.
  • the coordinate system is represented by polar coordinates.
  • the relative posture relationship between the laser transceiver device and the image acquisition device can be expressed by the angle between them.
  • the relative posture relationship between the two can be determined by the angle between the optical axis of the transceiver optical component in the laser transceiver device and the optical axis of the imaging optical component in the image acquisition device. It should be noted that, for convenience of description and understanding, the direction perpendicular to the rotor axis is defined as the horizontal direction.
  • the synchronously working laser transceiver device and the image acquisition device do not correspond to the same target area.
  • after the laser transceiver device rotates through the horizontal angle between the two, it corresponds to the target area occupied by the image acquisition device before that rotation; conversely, after the image acquisition device rotates through that horizontal angle, it corresponds to the target area occupied by the laser transceiver device before the rotation.
  • the target area of the laser transceiver device can be understood as the area covered by the field of view of the laser transceiver device in the external environment within a detection cycle. It can be seen that there is a horizontal angular delay between the laser transceiver device and the image acquisition device.
  • the field of view range of the laser transceiver device and the field of view range of the image acquisition device may be different, so there is a horizontal angle difference between the horizontal angle corresponding to the data point providing depth information and the horizontal angle of different pixels in the horizontal direction in the image.
  • the horizontal angle difference between the depth information and the image in the same coordinate system can be determined; after rotating the depth information or the image by this horizontal angle difference, image pixels and depth information corresponding to the same field of view can be matched, thereby achieving data fusion and obtaining an image with depth information.
  • the field of view range of the laser transceiver device and the field of view range of the image acquisition device can be expressed by angles in polar coordinates.
  • the laser transceiver M12 and the image acquisition device M13 are respectively disposed on opposite sides of the axis, and the angle between the optical axes of the transceiver optical component M121 and the imaging optical component M131 is denoted α.
  • the rotor M11 rotates counterclockwise around said axis.
  • the laser transceiver M12 can also include a laser transmitting module and a laser receiving module (not shown in the figure), wherein the laser transmitting module can emit multiple beams of detection light at different divergence angles, which together form the detection light directed at the transceiver optical component; after the transceiver optical component transmits and shapes the detection light, a detection signal is formed and emitted into the external environment.
  • the transceiver optical component can transmit and focus the echo light formed by reflection of the detection signal, thereby obtaining the echo signal and directing it to the laser receiving module.
  • the laser receiving module is suitable for photoelectric detection of echo signals and outputs electrical signals obtained by photoelectric detection to the control device.
  • the laser transmitting module includes multiple lasers
  • the laser receiving module includes multiple detectors.
  • the multiple lasers and the multiple detectors are each arranged along a specified direction (such as the direction parallel to the axis of the rotor M11), and the emission directions of the detection light from the multiple lasers form different angles with the specified direction.
  • Each detector has a corresponding relationship with each laser.
  • the control device M14 records the time at which the corresponding laser emits the detection light and the time at which the detector receives the echo signal; from these, the light flight time can be calculated, the spatial distance between the laser transceiver device M12 and external obstacles determined, and the depth information obtained.
  • Different detectors are used to receive echo signals at different angles relative to the specified direction. Therefore, the data points measured by different detectors can know their corresponding angles relative to the specified direction according to the position of the detector.
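The time-of-flight relation described above reduces to d = c·(t_rx − t_tx)/2. A minimal sketch (function and variable names are illustrative):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_from_tof(t_emit: float, t_receive: float) -> float:
    """Range from round-trip time of flight: d = c * (t_rx - t_tx) / 2.
    The division by two accounts for the out-and-back path."""
    tof = t_receive - t_emit
    return C * tof / 2.0

# An echo arriving ~667 ns after emission corresponds to roughly 100 m.
```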
  • the rotor M11 drives the laser transceiver M12 to rotate.
  • multiple lasers patrol in sequence and emit detection light.
  • the detector detects the echo signal after its corresponding laser emits detection light.
  • the depth information corresponding to the Field of View (FOV) range in the specified direction of the detector at the horizontal angle is obtained.
  • the field of view of the image acquisition device M13 can cover a two-dimensional area with a certain specified direction angle and a certain rotation direction angle.
  • the control device M14 generates the image P1 based on the exposure result of the image acquisition device M13 at the position shown in FIG. 1 .
  • the center of the field of view of the laser transceiver device M12 corresponds to the center of the target area F1, as shown in Figure 2.
  • the detection signal X1' emitted by the laser transceiver device M12 within the field of view shown in Figure 2 and the received echo signal X2'.
  • the laser transceiver M12 can scan and detect the target area F1. As shown in Figure 1, it rotates counterclockwise, so the laser transceiver M12 scans the target area F1 sequentially from the right edge to the left edge.
  • the control device M14 can obtain the depth information D1 within the angular range of the target area F1 based on the detection signal X1' transmitted by the laser transceiver M12 to the target area F1 and the received echo signal X2'.
  • the horizontal angle differences between the different pixels in the horizontal direction of the image P1 from the image acquisition device M13 and the data points providing the depth information D1 from the laser transceiver device M12 can be determined, so that the depth information D1 can be projected onto the image P1 to form an image PD1 containing the depth information.
  • the laser transceiver M12 performs 360° detection in the horizontal direction.
  • the image acquisition device M13 performs 360° imaging in the horizontal direction.
  • the depth information of the laser transceiver M12 is represented in the same coordinate system as the image of the image acquisition device M13; the depth information or the image information is then compensated by the horizontal angle, so that fusion of the depth information and the image can be achieved and an image containing depth information over 360° in the horizontal direction can be obtained.
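The horizontal-angle compensation and fusion described above can be sketched as a nearest-azimuth match in polar coordinates (an illustrative simplification, not the patent's algorithm; all names are ours, and azimuth wrap-around at 0°/360° is handled only approximately):

```python
import bisect

def fuse_depth_to_image(column_azimuths_deg, depth_points, alpha_deg):
    """column_azimuths_deg: sorted azimuth (deg, 0..360) of each image column.
    depth_points: list of (azimuth_deg, depth_m) from the laser transceiver.
    alpha_deg: fixed horizontal angle between the two optical axes.
    Returns {column_index: depth_m} mapping depth onto image columns."""
    fused = {}
    for az, d in depth_points:
        az_img = (az + alpha_deg) % 360.0  # compensate the mounting angle
        i = bisect.bisect_left(column_azimuths_deg, az_img)
        # pick the nearer of the two neighbouring columns
        candidates = [j for j in (i - 1, i) if 0 <= j < len(column_azimuths_deg)]
        best = min(candidates, key=lambda j: abs(column_azimuths_deg[j] - az_img))
        fused[best] = d
    return fused
```

In a real system the match would be done per pixel row as well, and the depth would typically be interpolated rather than snapped to the nearest column.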
  • the image acquisition device may further include a plurality of pixel acquisition modules arranged in a first direction, and each of the pixel acquisition modules includes a plurality of photosensitive units arranged in a second direction; the first The direction is a direction parallel to the axis, and the second direction is a direction perpendicular to the axis.
  • the area covered by the field of view of each photosensitive unit in the external environment is part of the target area
  • the area covered by the field of view of the photosensitive unit in the external environment is called the "field of view scanning area"
  • the target area of the image acquisition device is composed of the field-of-view scanning areas of all photosensitive units that perform exposure.
  • as the rotor drives the image acquisition device to rotate, controlling the multiple pixel acquisition modules exposes the field-of-view scanning areas at different first-direction positions within the target area, and controlling the photosensitive units in each pixel acquisition module exposes the scanning areas at different second-direction positions, thereby improving exposure flexibility and effectively ensuring exposure efficiency.
  • Figures 3 and 4 are a front view and a top view of an image acquisition device provided by embodiments of this specification. It should be noted that, in order to facilitate description and understanding of the positional relationship between multiple pixel acquisition modules in the image acquisition device, the imaging optical component is not shown in FIG. 3 .
  • the image acquisition device 30 may include: an imaging optical component 31 and M pixel acquisition modules, namely pixel acquisition module 321 to pixel acquisition module 32M, where M is a positive integer greater than 1.
  • the pixel acquisition modules 321 to 32M are arranged in a first direction, and the first direction is a direction parallel to the axis of the rotor (not shown in the figure).
  • Each of the pixel acquisition modules includes a plurality of photosensitive units, and the plurality of photosensitive units are arranged in the second direction.
  • the pixel acquisition module 321 includes N photosensitive units, namely photosensitive units 321-1 to 321-N. According to this description of the arrangement of the photosensitive units in the pixel acquisition module 321, the arrangement of the photosensitive units in the other pixel acquisition modules (i.e., pixel acquisition module 322 to pixel acquisition module 32M) can be deduced by analogy and will not be repeated here.
  • in the perspective shown in Figure 3, the axis of the rotor, the first direction, and the second direction are all parallel to the plane of Figure 3; in the perspective shown in Figure 4, the axis of the rotor and the first direction are perpendicular to the plane of Figure 4, while the second direction is parallel to the plane of Figure 4.
  • Figure 3 only schematically shows the positional relationship between the multiple pixel acquisition modules; it does not limit their actual physical positions in the lidar. In practical applications, the multiple pixel acquisition modules and the photosensitive units in each pixel acquisition module can be arranged in the lidar according to specific circumstances. For example, multiple pixel acquisition modules can be disposed on the same substrate; as another example, the photosensitive units of all pixel acquisition modules can be disposed on the same substrate.
  • the angle at which the pixel acquisition module rotates between two adjacent exposures is equivalent to the angular resolution of the pixel acquisition module in the second direction.
  • the angular resolution of the pixel acquisition module in the second direction is related to the field of view of the photosensitive unit in the second direction.
  • the corresponding angle of the angular resolution of the pixel acquisition module in the second direction is the difference between the field of view angles of two adjacent photosensitive units.
  • the exposure timing of the pixel acquisition module is adapted to the rotation speed of the rotor to improve the exposure quality.
  • the difference in field of view between the photosensitive unit 321-1 and the photosensitive unit 321-2 is θ, so the angle corresponding to the angular resolution of the pixel acquisition module 321 in the second direction is θ. Therefore, by controlling the single exposure time of the pixel acquisition module 321 to be less than the time in which the pixel acquisition module 321 rotates through the angle θ, the exposure timing of the pixel acquisition module can be adapted to the rotation speed of the rotor to improve the exposure quality.
  • control device may control multiple photosensitive units of each of the pixel acquisition modules to sequentially expose the same field of view scanning area during the rotation process.
  • the pixel acquisition module also rotates accordingly, and the fields of view of multiple photosensitive units in the same pixel acquisition module can cover the same area in different periods of time.
  • with the first photosensitive unit at the end of the module as the starting unit, each time the pixel acquisition module rotates through an angle corresponding to the angular resolution in the second direction, the next photosensitive unit in sequence is controlled to perform exposure, so that the multiple photosensitive units in each pixel acquisition module can sequentially expose the same field-of-view scanning area.
  • the exposure time of a single photosensitive unit to the same field of view scanning area can be reduced, and rich environmental information can be obtained.
  • the exposure time of the photosensitive units 321 - 1 to 321 -N to the same field of view scanning area f1 can be reduced, and rich environmental information can be obtained.
  • the photosensitive surface of the photosensitive unit is used to sense incident photons, so that the photosensitive unit is excited by the incident photons to generate exposure charges.
  • the pixel acquisition module can superimpose the exposure charges generated by the multiple photosensitive units sequentially exposing the field-of-view scanning area and output the sum as the exposure result. Specifically, since the relative position between two adjacent photosensitive units in the same pixel acquisition module is fixed, after the time difference with which adjacent photosensitive units expose the same field-of-view scanning area is determined from the rotation speed, the exposure charges for the same field-of-view scanning area can be superimposed and output as the exposure result of that scanning area.
  • the exposure quality of the same field of view scanning area can be ensured, thereby improving image accuracy.
  • the pixel acquisition module can store, transfer, and convert the exposure charges of the multiple photosensitive units, so that the exposure charges of the same field-of-view scanning area can be superimposed and the exposure result output.
  • the pixel acquisition module 80 may include: P photosensitive units (photosensitive units 81-1 to 81-P in Fig. 8), a charge shift register unit 82, and a conversion output unit 83;
  • the charge shift register unit 82 includes multiple charge storage areas (charge storage areas 82-1 to 82-P in Fig. 8) in one-to-one correspondence with the multiple photosensitive units (in Fig. 8, charge storage area 82-1 corresponds to photosensitive unit 81-1), and the multiple charge storage areas are coupled in sequence (in Fig. 8, charge storage area 82-1 is coupled to charge storage area 82-2); the charge shift register unit 82 can store and output the exposure charges generated by the sequential exposure of the field-of-view scanning area by photosensitive units 81-1 to 81-P.
  • the conversion output unit 83 is coupled to the charge shift register unit 82 and can sample the exposure charges output by the charge shift register unit 82, convert them into an electrical signal, and output the electrical signal as the exposure result.
  • when the pixel acquisition module 80 rotates, according to the order of exposure of the same field-of-view scanning area, photosensitive unit 81-1 is the first to expose it; charge storage area 82-1, which corresponds to photosensitive unit 81-1, stores the exposure charge generated by unit 81-1, and after unit 81-1 completes the exposure, the exposure charge it generated is output to charge storage area 82-2 for storage, so that the exposure charge generated by unit 81-1 is transferred to charge storage area 82-2.
  • after the pixel acquisition module 80 rotates through the angle corresponding to the angular resolution in the second direction, photosensitive unit 81-2 is the second to expose the same field-of-view scanning area; charge storage area 82-2, which corresponds to photosensitive unit 81-2, superimposes the exposure charge generated by unit 81-2 on the stored charge generated by unit 81-1 and stores the sum, and after unit 81-2 completes the exposure, the superimposed exposure charges (i.e., the charges generated by units 81-1 and 81-2) are output to charge storage area 82-3 (not shown in the figure) for storage, thereby transferring the superimposed exposure charge to charge storage area 82-3.
  • by analogy, photosensitive unit 81-P is the last to expose the same field-of-view scanning area; charge storage area 82-P, which corresponds to photosensitive unit 81-P, superimposes the exposure charge generated by unit 81-P on the stored superimposed charges (i.e., the charges generated by units 81-1 through 81-(P-1)) and stores the sum.
  • the superimposed exposure charges are output to the conversion output unit 83 .
  • the conversion output unit 83 samples the superimposed exposure charges output by the charge shift register unit 82 (i.e., the charges generated by photosensitive units 81-1 through 81-P), converts them into an electrical signal, and outputs it.
  • the conversion output unit 83 can sample and output the superimposed exposure charges generated by P photosensitive units sequentially exposing the same field of view scanning area in each exposure cycle.
  • the exposure process of P photosensitive units and the exposure charge transfer process between P photosensitive units are continuous.
  • the conversion output unit can thus continuously receive exposure charges that have each been superimposed over P exposures.
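The store-shift-superimpose scheme of Fig. 8 behaves like time-delay integration (TDI). Below is a minimal simulation sketch, assuming a simple per-cycle shift of the running sums toward the output; all charge values, and the function name, are hypothetical illustrations rather than anything specified in the patent:

```python
from collections import deque

def tdi_pipeline(frames):
    """Pipelined TDI-style readout over successive exposure cycles.

    frames[k][p] is the charge generated by photosensitive unit p in
    exposure cycle k. Each cycle, every stored running sum shifts one
    charge storage area toward the conversion output unit, the sum that
    has passed all P units is sampled, and the unit now facing each
    scanning area adds its charge to the sum travelling with that area.
    """
    P = len(frames[0])
    register = deque([0] * P)   # charge storage areas 82-1 .. 82-P
    outputs = []
    for frame in frames:
        register.appendleft(0)      # a new scanning area enters at unit 1
        outputs.append(register.pop())  # fully accumulated sum is sampled
        for p in range(P):
            register[p] += frame[p]     # unit p superimposes its charge
    return outputs
```

The first few outputs are pipeline-fill partial sums; in steady state each output has accumulated charge from all P units, which is how the superposition preserves exposure quality even though each single unit's exposure of the area is short.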
  • multiple pixel acquisition modules can be controlled so that the photosensitive units in the same order in the multiple pixel acquisition modules are exposed synchronously.
  • at time t01, the photosensitive unit 321-1 of pixel acquisition module 321, the photosensitive unit 322-1 of pixel acquisition module 322, ..., and the photosensitive unit 32M-1 of pixel acquisition module 32M are controlled to perform exposure; at time t02, the photosensitive unit 321-2 of module 321, the photosensitive unit 322-2 of module 322, ..., and the photosensitive unit 32M-2 of module 32M are controlled to perform exposure; by analogy, at time t0N, the photosensitive unit 321-N of module 321, the photosensitive unit 322-N of module 322, ..., and the photosensitive unit 32M-N of module 32M are controlled to perform exposure.
  • the plurality of pixel acquisition modules can be controlled so that all photosensitive units are continuously exposed according to a preset exposure cycle, that is, in each exposure cycle, all photosensitive units are exposed simultaneously.
  • Each of the pixel acquisition modules can continuously output exposure results corresponding to different field of view scanning areas, thereby obtaining high-precision two-dimensional images.
  • control device can read the exposure result output by the pixel acquisition module to generate an image.
  • control device may include an imaging readout circuit adapted to the image acquisition device.
  • the imaging readout circuit can separately read the exposure results output by each pixel acquisition module in the image acquisition device.
  • the duration of the exposure period of the pixel acquisition module is at least greater than the sum of: a single exposure time, a single charge transfer time, and a single reading time.
  • between two adjacent exposures, the pixel acquisition module can thus perform one exposure, one charge transfer, and one reading of the exposure result by the control device.
  • the rotation speed of the lidar rotor is related to the angular resolution of the pixel acquisition module in the second direction and the length of the exposure period.
  • the angular velocity of the rotor of the lidar = (the angle corresponding to the angular resolution of the pixel acquisition module in the second direction) / (the length of the exposure period).
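That constraint can be checked numerically. A small sketch, assuming illustrative timing values (the 0.1° resolution and the microsecond durations below are hypothetical, not from the patent):

```python
def max_rotor_speed_dps(angular_resolution_deg, exposure_s, transfer_s, read_s):
    """Upper bound on rotor angular velocity, in degrees per second.

    The exposure period must at least cover one exposure, one charge
    transfer and one readout, and the module must rotate exactly one
    second-direction angular-resolution step per exposure period.
    """
    min_period_s = exposure_s + transfer_s + read_s
    return angular_resolution_deg / min_period_s

# Example: 0.1 deg resolution, 50 us exposure, 10 us transfer, 40 us read
speed = max_rotor_speed_dps(0.1, 50e-6, 10e-6, 40e-6)  # 1000 deg/s
```

A faster rotor would sweep past more than one angular-resolution step per exposure period, smearing the exposure; a slower rotor simply trades frame rate for margin.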
  • the specific device type of the pixel acquisition module can be determined according to specific conditions and needs.
  • the pixel acquisition module can include: a photosensitive unit formed from metal-oxide-semiconductor (MOS, from Metal-Oxide-Semiconductor Field-Effect Transistor, MOSFET) capacitors, a shift register circuit formed from MOS capacitors, and a conversion output circuit formed from MOS capacitors.
  • the pixel acquisition module may include a charge-coupled device (CCD).
  • the specific device type in the imaging optical component that implements the incident photon transmission function can also be determined based on specific circumstances and needs.
  • the imaging optical components can include one or more optical devices such as a lens or lens group, a reflector, and a beam splitter.
  • the laser emitting module may include one or more lasers from a vertical-cavity surface-emitting laser (VCSEL) and an edge-emitting laser (Edge Emitting Laser, EEL).
  • the laser receiving module may include: one or more detectors of single photon avalanche diode (Single Photon Avalanche Diode, SPAD) and avalanche photodiode (Avalanche Photo Diode, APD).
  • the laser receiving module may also include a filter element, which is disposed upstream of the optical path of the detector to filter out ambient light contained in the echo signal.
  • the transceiver optical components may include one or more optical devices among lenses or lens groups, reflective mirrors, semi-transparent mirrors, rotating mirrors, and beam splitters.
  • the laser transceiver device 90 includes: a laser transmitting module 91 , a laser receiving module 92 and a transceiver optical component 93 (not labeled in the figure).
  • the transceiver optical component 93 may include lenses 93-1 to 93-4 and reflecting mirrors 93-5 and 93-6.
  • the laser emission module 91 can emit detection light from different divergence angles to form a beam of detection light and emit it to the lens 93-1.
  • after the detection light passes through the lens 93-1, the reflector 93-5, and the lens 93-2, the detection signal X9-1 is formed and emitted to the external environment.
  • an echo signal is obtained and transmitted to the laser receiving module 92.
  • the laser receiving module 92 performs photoelectric detection on the echo signal, and outputs the electrical signal obtained by photoelectric detection to the control device (not shown in Figure 9).
  • the laser receiving module 92 includes a detector and a filter element.
  • the filter element is disposed upstream of the detector in the optical path to filter out the ambient light contained in the echo signal, improving the signal-to-noise ratio of detection.
  • the focal length of the transceiver optical components is positively related to the detection distance of the laser transceiver device. That is, for a beam of detection light with the same divergence angle, the greater the focal length of the transceiver optical components, the smaller the divergence angle of the detection signal after shaping by the transceiver optical components, and hence the greater the detection distance.
  • the field-of-view range FOV of the laser transceiver device can be expressed approximately as: FOV ≈ a / f,
  • where a is the height of the light-emitting surface of a single laser in the laser emitting module, or of the photosensitive surface of a single detector in the laser receiving module (the height can also be called the vertical dimension, i.e., the dimension in the direction parallel to the rotor axis), and f is the focal length of the transceiver optical components. It can be seen that for lasers and detectors of the same size, the field-of-view range FOV of the laser transceiver device is inversely related to the focal length f of the transceiver optical components.
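The inverse relation between FOV and focal length can be sketched with the standard pinhole relation FOV = 2·arctan(a / (2f)); the patent only states the proportionality between FOV, element height a, and focal length f, so the exact formula used here is an assumption:

```python
import math

def fov_deg(a_mm, f_mm):
    """Approximate field of view of a single emitter/detector channel.

    a_mm: height of the emitting or photosensitive surface (mm).
    f_mm: focal length of the transceiver optical components (mm).
    Uses the pinhole relation FOV = 2*atan(a / (2f)); for small angles
    this reduces to a / f radians.
    """
    return math.degrees(2.0 * math.atan(a_mm / (2.0 * f_mm)))
```

For the same element height, doubling the focal length roughly halves the channel's field of view, which is the trade-off between the long-focal-length (long-range) and short-focal-length (wide-FOV) transceiver modules described below.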
  • one or more optical components can be installed in the laser transceiver device.
  • the hardware structure of the laser transceiver device can be easily adjusted, and the focal length of the optical components can be set according to specific detection requirements to meet various detection needs.
  • the laser transceiver device may include: a first laser transceiver module and a second laser transceiver module, the first laser transceiver module including first transceiver optical components and the second laser transceiver module including second transceiver optical components, wherein the focal length of the first transceiver optical components is greater than the focal length of the second transceiver optical components.
  • first laser transceiver module and the second laser transceiver module may also include other components.
  • the first laser transceiver module may include a first laser transmitting module and a first laser receiving module;
  • the second laser transceiver module may also include a second laser transmitting module and a second laser receiving module.
  • the specific structures of the first laser transmitting module and the second laser transmitting module may be the same or different, and the specific structures of the first laser receiving module and the second laser receiving module may be the same or different.
  • the specific types of devices included in the first laser transceiver module and the second laser transceiver module can be determined according to specific circumstances.
  • since the focal length of the first transceiver optical components of the first laser transceiver module is greater than that of the second transceiver optical components of the second laser transceiver module, the first laser transceiver module can, through the first transceiver optical components, emit a detection signal with a larger beam diameter and correspondingly receive an echo signal with a larger beam diameter, thereby increasing the detection distance; and the second laser transceiver module can, through the second transceiver optical components, emit detection signals with a larger field of view, thereby expanding the detection range.
  • by using first and second transceiver optical components with different focal lengths, the detection ranges of the first and second laser transceiver modules can be made different, thereby meeting the needs of both long-distance detection and wide-range detection and realizing all-round environmental detection.
  • the structure of the first laser transceiver module can be adjusted individually according to the actual detection-distance requirements, and the structure of the second laser transceiver module can be adjusted individually according to the actual detection-range requirements.
  • the structure is easy to adjust and has a high degree of integration, so that it can meet a variety of detection needs when the internal space of the lidar is limited, which is conducive to optimizing the detection performance of the lidar.
  • the first laser transceiver module, the second laser transceiver module and the image acquisition device do not interfere with each other, thereby improving imaging accuracy while taking into account the needs of long-distance detection and large-scale detection.
  • the first laser transceiver module, the second laser transceiver module, and the image acquisition device can share some components of the lidar (such as the rotor, the power supply module, and the signal transmission circuit), thereby effectively controlling the overall size and overall cost of the lidar.
  • first laser transceiver module and the second laser transceiver module may be arranged around the axis.
  • the angle between the first laser transceiver module and the second laser transceiver module can be set according to specific circumstances.
  • the lidar LS2 includes: a rotor M21, a laser transceiver device M22 (not labeled in the figure), an image acquisition device M23, and a control device M24.
  • the laser transceiver device M22 includes: a first laser transceiver module M22-1 and a second laser transceiver module M22-2. Among them, the included angles between the first laser transceiver module M22-1, the second laser transceiver module M22-2 and the image acquisition device M23 are all 60°.
  • for the functions and effects of each component in the lidar LS2 (such as the rotor M21, the laser transceiver device M22, the image acquisition device M23, and the control device M24), reference can be made to the relevant parts above, which are not repeated here.
  • from the horizontal angle between the first laser transceiver module and the second laser transceiver module in the same coordinate system, the relative pose relationship between the two can be determined, so as to determine the field-of-view range of the first laser transceiver module, the field-of-view range of the second laser transceiver module, and the relationship between the fields of view of the two modules.
  • from the relative pose relationship, the depth information of the first laser transceiver module and the depth information of the second laser transceiver module at the same horizontal angle in the same coordinate system can be determined, facilitating data fusion of the two sets of depth information.
  • the angle between the emission direction of the detection light emitted at different divergence angles and the direction parallel to the rotor axis is different.
  • in the detection signals emitted by the second laser transceiver module, the angle between the emission direction of the detection light emitted from different positions and the direction parallel to the rotor axis also differs.
  • the detection light whose emission direction is parallel to the horizontal direction in the first laser transceiver module and in the second laser transceiver module can be aligned in position, ensuring that the two modules share the same reference datum (i.e., the detection light parallel to the horizontal direction), which is conducive to data fusion.
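Expressing both modules' depth samples in the shared polar coordinate system then reduces to adding each module's fixed mounting angle around the rotor axis; a minimal sketch (the function name and the 60° offset in the example are illustrative assumptions, not values from the patent):

```python
def align_depth_points(points, module_offset_deg):
    """Shift one module's depth samples into the shared polar frame.

    points: list of (horizontal_angle_deg, depth_m) measured in the
    module's own frame; module_offset_deg is that module's fixed
    mounting angle around the rotor axis, known from calibration.
    """
    return [((ang + module_offset_deg) % 360.0, d) for ang, d in points]

# Example: a module mounted 60 deg around the axis from the reference
aligned = align_depth_points([(350.0, 5.0), (10.0, 2.0)], 60.0)
```

After both modules' samples are expressed this way, points at the same shared horizontal angle can be fused directly.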
  • control device can also perform quality evaluation on the generated image, and adjust the exposure time of multiple pixel acquisition modules according to the evaluation results. For example, if it is determined that the image is overexposed, the exposure time of the pixel acquisition module can be reduced; if it is determined that the image is underexposed, the exposure time of the pixel acquisition module can be extended. Thus, based on the image evaluation results, feedback control is performed on the exposure time of the pixel acquisition module, thereby improving imaging quality.
  • the specific quality assessment method of the image can be determined according to specific situations and needs.
  • the quality of the image can be assessed by calculating parameters such as the brightness and the brightness histogram of the image. This specification does not limit the specific quality assessment method used.
  • the lidar may further include: a fill-light module, disposed on the rotor and adapted to provide fill light for the image acquisition device. Thus, when the exposure time of the pixel acquisition module cannot be extended, or extending it still cannot provide sufficient exposure, fill light can be provided for the pixel acquisition module.
  • the specific position and number of the light supplement modules on the rotor can be determined according to the lighting conditions.
  • fill-light modules can be provided on both sides of the image acquisition device, so that at least one of them can be selectively turned on according to the lighting conditions, providing fill light for the image acquisition device and improving both the flexibility of the fill light and the uniformity of the lighting conditions.
  • the type of device used to implement the light filling function in the light filling module can be determined according to specific circumstances.
  • for example, the fill-light module may include a light-emitting diode; for another example, the fill-light module may include a light-emitting module and a lens. This specification does not specifically restrict the structure of the fill-light module.
  • the lidar control method provided by the embodiment of the present application is introduced below.
  • the lidar control method described below can be applied to any lidar described in the embodiments of this specification, and the content of the method described below corresponds to the related content of the lidar described above.
  • the lidar may include: a rotor, a laser transceiver device, an image acquisition device and a control device,
  • the laser transceiver device includes a transceiver optical component;
  • the image acquisition device includes an imaging optical component;
  • the laser transceiver device and the image acquisition device are arranged on the rotor and arranged around the axis of the rotor.
  • the lidar control method may include:
  • A1) Control the rotor to rotate about the axis.
  • A2) Control the laser transceiver device to emit a detection signal and receive the echo signal formed by reflection of the detection signal.
  • A3) Generate depth information based on the detection signal and echo signal.
  • A4) Control the image acquisition device to expose the target region.
  • A5) Generate an image based on the exposure result of the image acquisition device.
  • both the laser transceiver device and the image acquisition device can achieve 360° environmental perception in the direction perpendicular to the axis.
  • the laser transceiver device and the image acquisition device are jointly installed on the rotor with a fixed relative pose relationship, reducing the calibration complexity between image and depth information and improving data-processing efficiency; the rotation speed of the rotor is adapted to the working timing of the laser transceiver device and of the image acquisition device, improving the time synchronization between image and depth information; and since the laser transceiver device and the image acquisition device operate through independently configured optical components, sufficient exposure time can be provided for the image acquisition device to ensure imaging quality. The above method can therefore improve the data quality and synchronization of the lidar, thereby improving its performance.
  • the lidar control method may also include:
  • completing the calibration between the laser transceiver device and the image acquisition device before the lidar operates allows the calibration to be invoked while the lidar is working, so that it need not be repeated during subsequent data processing, reducing algorithmic complexity and improving data-fusion efficiency.
  • the image acquisition device further includes a plurality of pixel acquisition modules arranged in a first direction, and each of the pixel acquisition modules includes a plurality of photosensitive units arranged in a second direction; the first direction is a direction parallel to the axis of the rotor, and the second direction is a direction perpendicular to the axis of the rotor.
  • step A1) may include: controlling the rotation speed of the rotor so that the angle through which the pixel acquisition module rotates between two adjacent exposures is comparable to the angular resolution of the pixel acquisition module in the second direction.
  • in this way, the exposure timing of the pixel acquisition module is adapted to the rotation speed of the rotor to improve the exposure quality.
  • the step A4) may include: controlling the plurality of photosensitive units of each of the pixel acquisition modules to sequentially expose the same field of view scanning area during the rotation process.
  • the exposure time of a single photosensitive unit to the same field of view scanning area can be reduced, and rich environmental information can be obtained.
  • first and second in the embodiments of the specification are only used for descriptive purposes and cannot be understood as indicating or implying relative importance or implicitly indicating the number of indicated technical features.
  • first, second, etc. are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It is to be understood that the data so used are interchangeable under appropriate circumstances so that the embodiments of the specification described herein are capable of being practiced in sequences other than those illustrated or described herein.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

This specification provides a lidar and a lidar control method. The lidar includes: a rotor, a laser transceiver device, an image acquisition device, and a control device; the laser transceiver device and the image acquisition device are disposed on the rotor and arranged around the axis of the rotor; the rotor is adapted to rotate about the axis; the laser transceiver device includes transceiver optical components and is adapted to emit a detection signal and receive an echo signal formed by reflection of the detection signal; the image acquisition device includes imaging optical components and is adapted to expose a target region; the control device is adapted to generate depth information based on the detection signal and the echo signal, and to generate an image based on the exposure result of the image acquisition device. The above solution can improve the data quality and synchronization of the lidar, thereby improving its performance.

Description

Lidar and lidar control method. Technical field

The embodiments of this specification relate to the field of radar technology, and in particular to a lidar and a lidar control method.

Background

At present, lidar has become an important device for environmental perception. The point cloud of a lidar can provide three-dimensional spatial information about the external environment. However, limitations in the resolution of the lidar point cloud and the accuracy of its reflectivity information restrict the useful information a lidar can provide.

Meanwhile, image acquisition technology is more advanced: both the hardware performance of image acquisition devices (such as size, weight, power consumption, service life, and responsive spectral range) and their software performance (such as data resolution, data reading, data conversion, and data processing) have improved significantly, so that the images they generate can provide richly layered, realistic, and intuitive two-dimensional planar information.

Therefore, target recognition tasks (such as recognizing road signs, traffic lights, and zebra crossings) still rely heavily on image acquisition devices, and to compensate for the spatial limitations of image acquisition devices, the lidar point cloud is needed to provide corresponding depth information for the image.

A lidar can achieve 360° environmental perception perpendicular to its axis by rotating about the axis, whereas an image acquisition device is stationary and has a limited field of view; there is therefore a large field-of-view difference between the image and the point cloud. Two solutions currently exist for this problem:

1) Arrange multiple image sensors in the image acquisition device and combine them to form a 360° field of view perpendicular to the lidar's rotation axis, or combine multiple image acquisition devices to form such a 360° field of view.

However, this solution requires multi-image stitching. Moreover, since the image acquisition device and the lidar run independently, their coordinate systems differ and their working timing deviates, and the field of view of a single image does not match that of the point cloud's depth information; the stitched 360° image and the depth information must be calibrated against each other, a computationally complex process, and the higher the accuracy of the lidar and the image acquisition device, the larger the computation. The calibration complexity between image and depth information therefore causes excessive data-processing latency.

2) Mount a one-dimensional linear-array image sensor on the lidar rotor so that it rotates together with the lidar's linear-array detector and shares a single set of optical components with it. The linear-array image sensor and the linear-array detector can then each receive incident light from the same field-of-view region, the fields of view of image and depth information match, and no complex calibration process is needed.

However, with shared optical components, the time during which the linear-array image sensor and the linear-array detector each receive the same incident light is reduced, shortening the image sensor's exposure time; further, if the image sensor has multiple monochrome channels (such as red, green, and blue channels), the exposure time of each channel is shortened even more, severely degrading imaging quality.

In summary, existing technical solutions cannot preserve imaging quality while reducing the calibration complexity between image and depth information.
Summary of the invention

In view of this, the embodiments of this specification provide a lidar and a lidar control method that can improve the data quality and synchronization of the lidar, thereby improving its performance.

An embodiment of this specification provides a lidar, including: a rotor, a laser transceiver device, an image acquisition device, and a control device; the laser transceiver device and the image acquisition device are disposed on the rotor and arranged around the axis of the rotor; wherein:

the rotor is adapted to rotate about the axis;

the laser transceiver device includes transceiver optical components and is adapted to emit a detection signal and receive an echo signal formed by reflection of the detection signal;

the image acquisition device includes imaging optical components and is adapted to expose a target region;

the control device is adapted to generate depth information based on the detection signal and the echo signal, and to generate an image based on the exposure result of the image acquisition device.

Optionally, the control device is further adapted to obtain an image containing depth information based on the relative pose relationship between the laser transceiver device and the image acquisition device.

Optionally, the image acquisition device further includes multiple pixel acquisition modules arranged in a first direction, and each pixel acquisition module includes multiple photosensitive units arranged in a second direction; the first direction is parallel to the axis, and the second direction is perpendicular to the axis.

Optionally, the angle through which the pixel acquisition module rotates between two adjacent exposures is comparable to the angular resolution of the pixel acquisition module in the second direction.

Optionally, the control device is further adapted to control, during rotation, the multiple photosensitive units of each pixel acquisition module to sequentially expose the same field-of-view scanning area.

Optionally, the pixel acquisition module is adapted to superimpose the exposure charges generated by the sequential exposure of the field-of-view scanning area by the multiple photosensitive units, and to output the sum as the exposure result.

Optionally, the pixel acquisition module further includes: a charge shift register unit and a conversion output unit;

the charge shift register unit includes multiple charge storage areas in one-to-one correspondence with the multiple photosensitive units and coupled in sequence; the charge shift register unit is adapted to store and output the exposure charges generated by the sequential exposure of the field-of-view scanning area by the multiple photosensitive units;

the conversion output unit is coupled to the charge shift register unit and is adapted to sample the exposure charges output by the charge shift register unit and convert them into an electrical signal for output.

Optionally, the control device is adapted to read the exposure result to generate an image;

the duration of the exposure period of the pixel acquisition module is at least greater than the sum of: a single exposure time, a single charge transfer time, and a single readout time.

Optionally, the laser transceiver device includes: a first laser transceiver module and a second laser transceiver module; the first laser transceiver module includes first transceiver optical components and the second laser transceiver module includes second transceiver optical components, and the focal length of the first transceiver optical components is greater than the focal length of the second transceiver optical components.

Optionally, the first laser transceiver module and the second laser transceiver module are arranged around the axis.

Optionally, the control device is further adapted to evaluate the quality of the generated image and adjust the exposure time of the multiple pixel acquisition modules according to the evaluation result.

Optionally, the lidar further includes: a fill-light module, disposed on the rotor and adapted to provide fill light for the image acquisition device.

An embodiment of this specification also provides a lidar control method. The lidar includes: a rotor, a laser transceiver device, an image acquisition device, and a control device; the laser transceiver device includes transceiver optical components; the image acquisition device includes imaging optical components; the laser transceiver device and the image acquisition device are disposed on the rotor and arranged around the axis of the rotor;

the lidar control method includes:

A1) controlling the rotor to rotate about the axis;

A2) controlling the laser transceiver device to emit a detection signal and receive the echo signal formed by reflection of the detection signal;

A3) generating depth information based on the detection signal and the echo signal;

A4) controlling the image acquisition device to expose a target region;

A5) generating an image based on the exposure result of the image acquisition device.

Optionally, the lidar control method further includes:

A6) obtaining an image containing depth information based on the relative pose relationship between the laser transceiver device and the image acquisition device.

Optionally, the image acquisition device further includes multiple pixel acquisition modules arranged in a first direction, and each pixel acquisition module includes multiple photosensitive units arranged in a second direction; the first direction is parallel to the rotor axis, and the second direction is perpendicular to the rotor axis;

step A1) includes:

controlling the rotation speed of the rotor so that the angle through which the pixel acquisition module rotates between two adjacent exposures is comparable to the angular resolution of the pixel acquisition module in the second direction.

Optionally, step A4) includes:

controlling, during rotation, the multiple photosensitive units of each pixel acquisition module to sequentially expose the same field-of-view scanning area.
With the lidar provided by the embodiments of this specification, the laser transceiver device and the image acquisition device can be disposed on the rotor and arranged around its axis, and the rotor can rotate about the axis; the laser transceiver device can emit a detection signal through the transceiver optical components and receive the echo signal formed by reflection of the detection signal; the image acquisition device exposes the target region through the imaging optical components; and the control device can generate depth information based on the detection signal and the echo signal, and generate an image based on the exposure result of the image acquisition device. As the rotor rotates, it carries the laser transceiver device and the image acquisition device with it, so that both can achieve 360° environmental perception perpendicular to the axis. Since the two devices are mounted on the rotor together, their relative pose relationship is fixed, which reduces the calibration complexity between image and depth information and improves data-processing efficiency; the rotation speed of the rotor is adapted to the working timing of the laser transceiver device and of the image acquisition device, which improves the time synchronization between image and depth information; and because the laser transceiver device and the image acquisition device operate through independently configured optical components, the image acquisition device can be given sufficient exposure time to ensure imaging quality. The above structure can therefore improve the data quality and synchronization of the lidar, thereby improving its performance. In addition, the above structure can reduce the hardware cost of the lidar.
Brief description of the drawings

To explain the technical solutions of the embodiments of this specification more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of this specification, and those of ordinary skill in the art can obtain other drawings from them without creative effort.

Fig. 1 is a schematic structural diagram of a lidar in an embodiment of this specification.

Fig. 2 is a schematic structural diagram of the lidar of Fig. 1 after its rotor has rotated 180° counterclockwise.

Fig. 3 is a front view of an image acquisition device provided by an embodiment of this specification.

Fig. 4 is a top view of the image acquisition device of Fig. 3.

Figs. 5 and 6 are schematic diagrams of the rotation process of the image acquisition device of Fig. 4.

Fig. 7 is a schematic diagram of the image acquisition device of Fig. 3 obtaining the exposure result of the same field-of-view scanning area.

Fig. 8 is a schematic structural diagram of a pixel acquisition module provided by an embodiment of this specification.

Fig. 9 is a schematic structural diagram of a laser transceiver device provided by an embodiment of this specification.

Fig. 10 is a schematic structural diagram of another lidar provided by an embodiment of this specification.

Fig. 11 is a flowchart of a lidar control method provided by an embodiment of this specification.
Detailed description

As noted in the background section, existing technical solutions cannot preserve imaging quality while reducing the calibration complexity between image and depth information.

To solve this problem, the embodiments of this specification provide a lidar structure in which the laser transceiver device and the image acquisition device can be disposed on the rotor and arranged around its axis, and the rotor can rotate about the axis; the laser transceiver device can emit a detection signal through the transceiver optical components and receive the echo signal formed by reflection of the detection signal; the image acquisition device exposes the target region through the imaging optical components; and the control device can generate depth information based on the detection signal and the echo signal, and generate an image based on the exposure result of the image acquisition device. This can improve the data quality and synchronization of the lidar, thereby improving its performance.

To help those skilled in the art understand and implement the concepts, implementations, and advantages of the embodiments of this specification more clearly, a detailed description is given below with reference to the drawings.

Referring to Fig. 1, a schematic structural diagram of a lidar in an embodiment of this specification, the lidar LS1 may include: a rotor M11, a laser transceiver device M12, an image acquisition device M13, and a control device M14; the laser transceiver device M12 and the image acquisition device M13 are disposed on the rotor M11 and arranged around its axis. In the view of Fig. 1, the axis of the rotor M11 is perpendicular to the plane of the figure.

The rotor M11 is adapted to rotate about the axis.

The laser transceiver device M12 includes transceiver optical components M121, and is adapted to emit a detection signal X1 to the outside and receive an echo signal X2 formed when the detection signal X1 is reflected by an external obstacle W1.

The image acquisition device M13 includes imaging optical components M131, and is adapted to expose a target region F1.

The control device M14 generates depth information based on the detection signal X1 and the echo signal X2, and generates an image based on the exposure result of the image acquisition device M13.

As can be seen from the above, as the rotor rotates, it carries the laser transceiver device and the image acquisition device with it, so that both can achieve 360° environmental perception perpendicular to the axis. Since the two devices are mounted on the rotor together, their relative pose relationship is fixed, which reduces the calibration complexity between image and depth information and improves data-processing efficiency; the rotation speed of the rotor is adapted to the working timing of the laser transceiver device and of the image acquisition device, which improves the time synchronization between image and depth information; and because the laser transceiver device and the image acquisition device operate through independently configured optical components, the image acquisition device can be given sufficient exposure time to ensure imaging quality. The above structure can therefore improve the data quality and synchronization of the lidar, thereby improving its performance, and can also reduce the hardware cost of the lidar.

It should be noted that the target region of the image acquisition device described in this specification can be understood as: the region of the external environment covered by the field of view of the image acquisition device within one exposure period.

It should also be noted that the target region of the image acquisition device is not a region with real, fixed boundaries; as the rotor rotates the image acquisition device, the device's field of view turns, and its target region changes dynamically accordingly.

In a specific implementation, data fusion can be performed based on the relative pose relationship between the laser transceiver device and the image acquisition device to obtain an image containing depth information. This makes full use of the high-precision detection results of the laser transceiver device and the high-precision exposure results of the image acquisition device, improving the accuracy of the depth information and the image and effectively guaranteeing the data quality and information content provided by the lidar.

In a specific implementation, since the relative pose (i.e., position and angle) between the laser transceiver device and the image acquisition device is fixed, the two can be calibrated before the lidar operates, yielding their relative pose relationship. Data fusion is then performed based on the obtained relative pose relationship to produce an image containing depth information. The calibration method may include: manual calibration, automatic algorithmic calibration, etc. The embodiments of this specification do not specifically restrict it.

Completing the calibration between the laser transceiver device and the image acquisition device before the lidar operates allows the calibration to be invoked while the lidar is working, so that relative-pose calibration need not be repeated during subsequent data processing, reducing algorithmic complexity and improving data-fusion efficiency.

In an optional example, the data fusion may include: projecting the depth information onto the image based on the relative pose relationship between the laser transceiver device and the image acquisition device, or projecting the image into the lidar point cloud and combining it with the depth information, thereby forming an image containing depth information.

In a specific implementation, since both the laser transceiver device and the image acquisition device are mounted on the rotor, to further reduce calibration complexity the two can use the same coordinate system, and both the depth information and the image can be expressed in polar coordinates in that system. The relative pose relationship between the two can then be expressed by the angle between them. In an optional example, this relationship can be determined from the angle between the optical axis of the transceiver optical components in the laser transceiver device and the optical axis of the imaging optical components in the image acquisition device. For ease of description and understanding, the direction perpendicular to the rotor axis is defined as the horizontal direction.

During rotation, because of the fixed horizontal angle between the laser transceiver device and the image acquisition device, the two do not correspond to the same target region while working synchronously. Only after the laser transceiver device has rotated through that horizontal angle does it correspond to the target region the image acquisition device had before the rotation; conversely, after the image acquisition device rotates through the angle, it corresponds to the target region the laser transceiver device had before. The target region of the laser transceiver device can be understood as: the region of the external environment covered by its field of view within one detection period. There is thus a horizontal-angle delay between the laser transceiver device and the image acquisition device.

In addition, the field-of-view ranges of the laser transceiver device and the image acquisition device may differ, so there is a horizontal angle difference between the horizontal angles of the data points providing depth information and the horizontal angles of different pixels along the horizontal direction of the image.

Based on this, from the field-of-view ranges of the laser transceiver device and the image acquisition device and the relative pose relationship between them, the horizontal angle difference between the obtained depth information and the image in the same coordinate system can be determined; after the depth information or the image is rotated by this angle difference, image pixels and depth information corresponding to the same field-of-view range are obtained, achieving data fusion and an image with depth information. Both field-of-view ranges can be expressed as angles in polar coordinates.
In an optional example, still referring to FIG. 1, the laser transceiver M12 and the image acquisition device M13 are disposed on opposite sides of the axis, the angle between the optical axes of the transceiver optical assembly M121 and the imaging optical assembly M131 is θ, and the rotor M11 rotates counterclockwise about the axis.
The laser transceiver M12 may further include a laser emission module and a laser receiving module (not shown), wherein: the laser emission module may emit detection light at different divergence angles, the multiple detection beams forming one beam of detection light that is emitted to the transceiver optical assembly; after transmitting and shaping this beam, the transceiver optical assembly forms a detection signal and emits it toward the external environment. The transceiver optical assembly can transmit and focus the echo light formed by reflection of the detection signal, obtaining an echo signal that is transmitted to the laser receiving module. The laser receiving module is adapted to perform photoelectric detection on the echo signal and output the resulting electrical signal to the control device.
The laser emission module includes a plurality of lasers and the laser receiving module includes a plurality of detectors; the lasers and the detectors are each arranged along a specified direction (e.g., a direction parallel to the axis of the rotor M11). The emission directions of the detection light from the different lasers form different angles with the specified direction. Each detector corresponds to a laser: after a laser emits detection light, its corresponding detector receives the echo signal. When the detector receives the echo signal, the control device M14 can compute the light time of flight from the emission time of the corresponding laser and the reception time of the detector, and thereby determine the spatial distance between the laser transceiver M12 and the external obstacle, obtaining depth information.
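The time-of-flight computation above reduces to range = c · Δt / 2, since the pulse travels out and back. A minimal sketch (the function name and the sample timestamps are illustrative, not from the patent):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(t_emit_ns, t_receive_ns):
    """One-way range from a round-trip time of flight: c * Δt / 2."""
    dt_s = (t_receive_ns - t_emit_ns) * 1e-9
    return C * dt_s / 2.0

# A pulse whose echo arrives ~667 ns after emission corresponds to roughly 100 m.
d = tof_distance(0.0, 667.0)
```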
Different detectors receive echo signals at different angles relative to the specified direction, so the data point measured by each detector is associated, according to the detector's position, with its angle relative to the specified direction.
The rotor M11 drives the laser transceiver M12 to rotate. During this rotation, the lasers emit detection light in turn, and each detector detects the echo signal after its corresponding laser fires. After all detectors have completed one polling round, depth information covering the field of view (FOV) along the specified direction at that horizontal angle is obtained. By the time one detection round completes, the laser transceiver M12 has rotated from one angle to another and begins the next polling round. The horizontal angle difference between two adjacent detections by the same detector can therefore be expressed as the detector's angular resolution in the horizontal direction.
Correspondingly, the field of view of the image acquisition device M13 can cover a two-dimensional region spanning certain angles in the specified direction and in the rotation direction. The control device M14 generates an image P1 based on the exposure result of the image acquisition device M13 at the position shown in FIG. 1.
After the rotor M11 rotates through the angle θ, the center of the field of view of the laser transceiver M12 corresponds to the center of the target region F1; as shown in FIG. 2, the laser transceiver M12 emits a detection signal X1' and receives an echo signal X2' within the field of view shown in FIG. 2. As the rotor M11 rotates, the laser transceiver M12 scans the target region F1; since FIG. 1 shows counterclockwise rotation, the laser transceiver M12 scans the target region F1 from its right edge to its left edge. Based on the detection signal X1' emitted toward and the echo signal X2' received from the target region F1, the control device M14 can obtain the depth information D1 within the angular range of the target region F1.
Based on the field-of-view range of the laser transceiver M12, the field-of-view range of the image acquisition device M13, and the relative pose relationship between the two, the horizontal angle differences between the different pixels along the horizontal direction of the image P1 and the data points providing the depth information D1 can be determined, so that the depth information D1 can be projected onto the image P1 to form an image PD1 containing depth information.
Further, as the rotor M11 rotates, the laser transceiver M12 performs 360° horizontal detection while the image acquisition device M13 performs 360° horizontal imaging. Using the relative pose relationship described above, the depth information of the laser transceiver M12 and the image of the image acquisition device M13 are expressed in the same coordinate system; after compensating the depth information or the image information by the horizontal angle, the fusion of depth information and image is achieved, yielding a 360° horizontal image containing depth information.
Furthermore, a data-output time difference may exist between the laser transceiver M12 and the image acquisition device M13. By computing the angle through which the rotor M11 turns within this data-output time difference and using it, together with the horizontal field-of-view angle difference between the two devices, to compute the horizontal angle to be compensated, the field-of-view matching precision of the depth-image fusion can be improved.
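The total compensation angle just described combines the fixed field-of-view angle difference with the angle the rotor turns during the data-output delay. A sketch under assumed figures (60° offset, 600 rpm, 5 ms delay are illustrative values, not from the patent):

```python
def compensation_angle_deg(theta_fov_deg, rpm, output_delay_ms):
    """Horizontal angle to compensate before fusing depth and image:
    the fixed field-of-view angle difference plus the angle the rotor
    turns during the data-output time difference."""
    omega_deg_per_ms = rpm * 360.0 / 60_000.0  # degrees per millisecond
    return theta_fov_deg + omega_deg_per_ms * output_delay_ms

# Hypothetical figures: 60° mounting offset, 600 rpm, 5 ms output lag.
angle = compensation_angle_deg(60.0, 600, 5.0)
```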
It can be understood that the above embodiments and drawings are merely illustrative and do not limit the rotation direction of the rotor, the arrangement of the laser transceiver and the image acquisition device on the rotor, or the data types generated by the control device. For example, the rotor may rotate clockwise; the angle between the laser transceiver and the image acquisition device may vary; and the control device may also generate angle information and reflectivity information based on the detection signal and the echo signal. Those skilled in the art can adapt the technical solutions provided by the embodiments and drawings of this specification to specific situations.
In a specific implementation, the image acquisition device may further include a plurality of pixel acquisition modules arranged along a first direction, each pixel acquisition module including a plurality of photosensitive units arranged along a second direction; the first direction is a direction parallel to the axis, and the second direction is a direction perpendicular to the axis.
Given the arrangement of the pixel acquisition modules and of the photosensitive units within each module, within one exposure period the region of the external environment covered by the field of view of each photosensitive unit is part of the target region. For ease of description, the region covered by a photosensitive unit's field of view is called the "field-of-view scan region"; that is, within one exposure period, the target region of the image acquisition device consists of the field-of-view scan regions of all photosensitive units that perform exposure.
Thus, while the rotor carries the image acquisition device around, controlling the multiple pixel acquisition modules allows exposing field-of-view scan regions at different positions along the first direction within the target region, and controlling the photosensitive units within each module allows exposing scan regions at different positions along the second direction, improving exposure flexibility and effectively guaranteeing exposure efficiency.
In an optional example, FIG. 3 and FIG. 4 show a front view and a top view of an image acquisition device provided by an embodiment of this specification. Note that, to simplify the description of the positional relationship among the pixel acquisition modules, the imaging optical assembly is not shown in FIG. 3.
Referring to FIG. 3 and FIG. 4 together, the image acquisition device 30 may include: an imaging optical assembly 31 and M pixel acquisition modules, namely pixel acquisition module 321 through pixel acquisition module 32M, where M is a positive integer greater than 1.
Pixel acquisition modules 321 through 32M are arranged along the first direction, which is parallel to the axis of the rotor (not shown). Each pixel acquisition module includes a plurality of photosensitive units arranged along the second direction. Taking pixel acquisition module 321 as an example, it includes N photosensitive units, namely photosensitive unit 321-1 through photosensitive unit 321-N. The arrangement of the photosensitive units in the other pixel acquisition modules (modules 322 through 32M) can be inferred from that of module 321 and is not repeated here.
In the view of FIG. 3, the axis of the rotor, the first direction, and the second direction are all parallel to the plane of FIG. 3; in the view of FIG. 4, the axis of the rotor and the first direction are perpendicular to the plane of FIG. 4, and the second direction is parallel to it.
Note that FIG. 3 only schematically shows the positional relationship among the pixel acquisition modules and does not limit their actual physical positions within the lidar. In practice, the pixel acquisition modules and the photosensitive units within each module can be arranged in the lidar as appropriate; for example, the modules may be disposed on one substrate, or the photosensitive units of all modules may be disposed on one substrate.
In a specific implementation, the angle through which a pixel acquisition module rotates between two adjacent exposures corresponds to the module's angular resolution in the second direction. This angular resolution is related to the field of view of the photosensitive units in the second direction; specifically, the angle corresponding to the module's angular resolution in the second direction equals the difference between the field-of-view angles of two adjacent photosensitive units.
Thus, by keeping the single-exposure time of the pixel acquisition module shorter than the time it takes the module to rotate through one second-direction angular-resolution angle, the exposure timing of the module is matched to the rotor speed, improving exposure quality.
In an optional example, still referring to FIG. 3 and FIG. 4, the difference between the field-of-view angles of photosensitive units 321-1 and 321-2 is α, so the angle corresponding to the angular resolution of module 321 in the second direction is α. Keeping the single-exposure time of module 321 shorter than the time it takes to rotate through one angular-resolution angle α matches the module's exposure timing to the rotor speed and improves exposure quality.
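The timing constraint above can be checked numerically: the single-exposure time must be below α / ω. A minimal sketch; the values α = 0.2° and 600 rpm are assumptions for illustration only.

```python
def max_single_exposure_s(alpha_deg, rpm):
    """Upper bound on the single-exposure time: the module must finish one
    exposure before rotating through one second-direction resolution angle α."""
    omega_deg_per_s = rpm * 360.0 / 60.0
    return alpha_deg / omega_deg_per_s

def exposure_matches_rotation(t_exp_s, alpha_deg, rpm):
    # True when the exposure timing is matched to the rotor speed.
    return t_exp_s < max_single_exposure_s(alpha_deg, rpm)

# Hypothetical: α = 0.2°, 600 rpm → bound of 0.2/3600 ≈ 55.6 µs.
limit = max_single_exposure_s(0.2, 600)
ok = exposure_matches_rotation(40e-6, 0.2, 600)
```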
In practice, depending on the situation and requirements, a pixel acquisition module may activate all of its photosensitive units during a single exposure or only a specified subset; this specification imposes no specific limitation. The number and positions of the photosensitive units that expose can thus be flexibly adjusted to meet different exposure needs.
In a specific implementation, the control device may, during rotation, control the plurality of photosensitive units of each pixel acquisition module to successively expose the same field-of-view scan region.
Specifically, as the image acquisition device rotates, each pixel acquisition module rotates with it, so the fields of view of the multiple photosensitive units in one module can cover the same region at different times. By controlling the exposure timing of each photosensitive unit, with the first unit at one end as the starting unit, each time the module rotates through one second-direction angular-resolution angle the next unit in sequence is triggered to expose, so that the multiple photosensitive units of each module successively expose the same field-of-view scan region.
This reduces the exposure time of any single photosensitive unit on a given scan region while ensuring that rich environmental information is obtained.
In an optional example, still referring to FIG. 3 and FIG. 4 together with FIG. 5 and FIG. 6, take pixel acquisition module 321 as an example. As module 321 rotates, at time t01 it reaches the position shown in FIG. 4, and photosensitive unit 321-1 is controlled to expose the field-of-view scan region f1. With unit 321-1 as the starting unit, at time t02 the module has rotated through one second-direction angular-resolution angle α, and unit 321-2 is controlled to expose scan region f1. By analogy, at time t0N the module has rotated through N angular-resolution angles α, and unit 321-N is controlled to expose scan region f1.
This reduces the exposure time of units 321-1 through 321-N on the same scan region f1 while ensuring rich environmental information.
In a specific implementation, during exposure the photosensitive surface of a photosensitive unit senses incident photons, so that the unit, excited by the incident photons, produces exposure charge. The pixel acquisition module may superimpose the exposure charges produced by the successive exposures of the multiple photosensitive units to the scan region and output the sum as the exposure result. Specifically, since the relative positions of adjacent photosensitive units in one module are fixed, once the time difference between adjacent units exposing the same scan region is determined from the rotational speed, the exposure charges for that scan region can be superimposed and output as its exposure result.
By superimposing the exposure results of multiple photosensitive units for the same scan region, the exposure quality of that region is guaranteed, improving image precision.
In an optional example, referring to FIG. 3 through FIG. 7, superimposing the exposure charge produced by unit 321-1 at time t01, the charge produced by unit 321-2 at time t02, ..., and the charge produced by unit 321-N at time t0N yields the exposure result df1 of scan region f1. Superimposing the results of units 321-1 through 321-N for the same scan region f1 guarantees its exposure quality and improves image precision.
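The superposition itself is a plain sum over the N successive exposures of one scan region. A trivial sketch with made-up charge values (arbitrary units, not measured data):

```python
def accumulate_exposures(charges_per_unit):
    """Sum the exposure charges that the N photosensitive units produced
    for the same field-of-view scan region at times t01 .. t0N."""
    return sum(charges_per_unit)

# Hypothetical charges from four successive units exposing region f1.
df1 = accumulate_exposures([120, 118, 123, 119])
```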
In a specific implementation, the pixel acquisition module can store, transfer, and convert the exposure charges of the multiple photosensitive units, so that the charges for the same scan region can be superimposed and the exposure result output.
In an optional example, FIG. 8 is a schematic structural diagram of a pixel acquisition module provided by an embodiment of this specification. The pixel acquisition module 80 may include: P photosensitive units (units 81-1 through 81-P in FIG. 8), a charge shift register unit 82, and a conversion output unit 83.
The charge shift register unit 82 includes a plurality of charge storage regions (regions 82-1 through 82-P in FIG. 8); the storage regions correspond one-to-one to the photosensitive units (e.g., region 82-1 corresponds to unit 81-1) and are coupled in sequence (e.g., region 82-1 is coupled to region 82-2). The charge shift register unit 82 can store and output the exposure charges produced by the successive exposures of units 81-1 through 81-P to the field-of-view scan region.
The conversion output unit 83, coupled to the charge shift register unit 82, can sample the exposure charge output by the charge shift register unit 82 and convert it into an electrical signal, which is output as the exposure result.
As the pixel acquisition module 80 rotates, in the order of exposure of the same scan region, photosensitive unit 81-1 exposes first; its corresponding storage region 82-1 stores the charge produced by unit 81-1 and, after unit 81-1 finishes exposing, outputs that charge to storage region 82-2 for storage, thereby transferring the charge produced by unit 81-1 to region 82-2.
After module 80 rotates through one second-direction angular-resolution angle, unit 81-2 exposes the same scan region second; its corresponding storage region 82-2, on top of the already stored charge from unit 81-1, additionally stores the charge produced by unit 81-2. After unit 81-2 finishes exposing, the superimposed charge (i.e., the charges produced by units 81-1 and 81-2) is output to storage region 82-3 (not shown) for storage, thereby transferring the superimposed charge to region 82-3.
By analogy, after module 80 has rotated through P second-direction angular-resolution angles, unit 81-P exposes the same scan region; its corresponding storage region 82-P adds the charge produced by unit 81-P to the already superimposed charge (i.e., the charges produced by units 81-1 through 81-(P-1)), and after unit 81-P finishes exposing, outputs the superimposed charge to the conversion output unit 83.
The conversion output unit 83 samples the superimposed exposure charge output by the charge shift register unit 82 (i.e., the charges produced by units 81-1 through 81-P) and converts it into an electrical signal for external output.
With the rotor rotating steadily, in every exposure period the conversion output unit 83 can sample and output the superimposed exposure charge of the P photosensitive units successively exposing the same scan region.
As can be seen, the exposure of the P photosensitive units and the charge transfers between them are continuous. After P exposures the conversion output unit receives charge that has been superimposed P times; without changing the rotor speed or the exposure time of each unit, this is equivalent to multiplying the module's exposure time on one scan region by P. Even in bad weather or low-light environments, clear imaging is therefore possible, improving imaging quality.
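The pipeline of P storage regions behaves like a time-delay-integration (TDI) shift register: each step accumulates new charge onto the stored charge and shifts the whole register one position toward the output, so that after a fill-in phase every readout carries a P-fold accumulated charge. The following is a behavioral simulation under simplified assumptions (uniform unit charge, no transfer loss); it is a sketch, not the patent's circuit.

```python
def tdi_readout(exposures_per_step):
    """Simulate the charge shift register of a P-unit pixel module.

    exposures_per_step[k][i] is the charge photosensitive unit i produces at
    step k. Each step: every storage region adds its unit's new charge, the
    last region feeds the conversion output unit, and the register shifts one
    position toward the output. Returns the sequence of output samples.
    """
    p = len(exposures_per_step[0])
    regions = [0] * p
    out = []
    for step in exposures_per_step:
        for i in range(p):
            regions[i] += step[i]     # accumulate onto stored charge
        out.append(regions[-1])       # last region feeds the output unit
        regions = [0] + regions[:-1]  # shift charges toward the output
    return out

# With unit charge 1 per exposure, the steady-state output is P = 3,
# i.e. a three-fold accumulation per exposure period.
outputs = tdi_readout([[1, 1, 1]] * 6)
```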
It can be understood that the above examples describe only the exposure of one scan region by the multiple photosensitive units of a single pixel acquisition module. In practice, within one module, after a photosensitive unit finishes exposing one scan region it can be controlled to expose the next scan region, achieving dynamic exposure in which multiple photosensitive units expose different scan regions at the same moment.
In practice, the multiple pixel acquisition modules can be controlled so that the photosensitive units occupying the same position in each module expose synchronously.
In an optional example, referring to FIG. 3, for pixel acquisition modules 321 through 32M: at time t01, unit 321-1 of module 321, unit 322-1 of module 322, through unit 32M-1 of module 32M are controlled to expose; at time t02, unit 321-2 of module 321, unit 322-2 of module 322, through unit 32M-2 of module 32M are controlled to expose; by analogy, at time t0N, unit 321-N of module 321, unit 322-N of module 322, through unit 32M-N of module 32M are controlled to expose.
In an optional example, the pixel acquisition modules may be controlled so that all photosensitive units expose continuously at a preset exposure period, i.e., in each exposure period all units expose simultaneously. Each module can then continuously output exposure results for different scan regions, yielding a high-precision two-dimensional image.
In a specific implementation, the control device can read the exposure results output by the pixel acquisition modules to generate an image. Specifically, the control device may include an imaging readout circuit adapted to the image acquisition device; the imaging readout circuit reads the exposure result output by each pixel acquisition module separately.
In a specific implementation, the duration of the exposure period of a pixel acquisition module is at least greater than the sum of a single exposure time, a single charge-transfer time, and a single readout time. In other words, between two adjacent exposures the module can perform one exposure, one charge transfer, and one readout of the exposure result by the control device.
This ensures that the charge transfers among the photosensitive units of a module match each unit's exposure timing, improving exposure quality.
In a specific implementation, the rotational speed of the lidar's rotor is related to the module's angular resolution in the second direction and the duration of the exposure period. Specifically: angular velocity of the rotor = angle corresponding to the module's second-direction angular resolution / duration of the exposure period.
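The matching condition just stated can be evaluated directly. A sketch; the values α = 0.1° and a 50 µs exposure period are assumptions chosen for illustration.

```python
def rotor_speed_rpm(alpha_deg, exposure_period_s):
    """Rotor speed implied by the matching condition:
    angular velocity = angular-resolution angle / exposure-period length."""
    omega_deg_per_s = alpha_deg / exposure_period_s
    return omega_deg_per_s * 60.0 / 360.0  # convert deg/s to rpm

# Hypothetical: α = 0.1°, exposure period 50 µs → 2000 °/s ≈ 333.3 rpm.
rpm = rotor_speed_rpm(0.1, 50e-6)
```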
In practice, the specific device type of the pixel acquisition module can be determined by situation and requirement; for example, the pixel acquisition module may include: photosensitive units formed from metal-oxide-semiconductor field-effect transistor (MOSFET, abbreviated MOS) capacitors, a shift register circuit formed from MOS capacitors, and a conversion output circuit formed from MOS capacitors. In a specific implementation, the pixel acquisition module may comprise a charge-coupled device (CCD). Similarly, the specific devices implementing incident-photon transmission in the imaging optical assembly can be determined as needed; for example, the imaging optical assembly may include one or more of a lens or lens group, a mirror, and a beam splitter.
In practice, the specific device types in each module of the laser transceiver can be determined by scenario and requirement. For example, the laser emission module may include one or more of a vertical-cavity surface-emitting laser (VCSEL) and an edge-emitting laser (EEL). The laser receiving module may include one or more of a single-photon avalanche diode (SPAD) and an avalanche photodiode (APD). The laser receiving module may further include a filter element, placed upstream of the detector in the optical path, to filter out the ambient light contained in the echo signal. The transceiver optical assembly may include one or more of a lens or lens group, a mirror, a half mirror, a rotating mirror, and a beam splitter.
In an optional example, FIG. 9 is a schematic structural diagram of a laser transceiver provided by an embodiment of this specification. In FIG. 9, the laser transceiver 90 includes: a laser emission module 91, a laser receiving module 92, and a transceiver optical assembly 93 (not labeled). The transceiver optical assembly 93 may include lenses 93-1 through 93-4 as well as mirrors 93-5 and 93-6.
The laser emission module 91 can emit detection light at different divergence angles, forming one beam that is emitted to lens 93-1; after transmission and shaping by lens 93-1, mirror 93-5, and lens 93-2, the beam forms the detection signal X9-1 and exits toward the external environment. The beam of echo light X9-2 formed by multiple echo rays is transmitted and focused by lens 93-4, mirror 93-6, and lens 93-3, yielding the echo signal, which is transmitted to the laser receiving module 92. The laser receiving module 92 performs photoelectric detection on the echo signal and outputs the resulting electrical signal to the control device (not shown in FIG. 9). In an optional example, the laser receiving module 92 includes a detector and a filter element; the filter element is placed upstream of the detector in the optical path to filter out the ambient light in the echo signal and improve the detection signal-to-noise ratio.
In practice, the focal length of the transceiver optical assembly is positively correlated with the detection range of the laser transceiver: for a beam of detection light with the same divergence angle, the larger the focal length, the larger the beam diameter of the detection signal shaped by the transceiver optical assembly and, correspondingly, the larger the beam diameter of the echo signal that can be focused by the transceiver optical assembly onto the laser receiving module; the receiving module can therefore collect an echo signal with more optical energy, increasing the detection range. In addition, the field-of-view range FOV of the laser transceiver can be expressed as:
$$\mathrm{FOV} = 2\arctan\!\left(\frac{a}{2f}\right)$$
where a is the height of the light-emitting surface of a single laser in the laser emission module or of the photosensitive surface of a single detector in the laser receiving module (height being the vertical dimension, i.e., the dimension in the direction parallel to the rotor axis), and f is the focal length of the transceiver optical assembly. Thus, for lasers and detectors of the same size, the field-of-view range FOV of the laser transceiver is inversely related to the focal length f.
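The inverse relation between FOV and focal length can be checked numerically with the FOV expression above. A sketch; the 2 mm emitter height and the 10 mm / 40 mm focal lengths are illustrative values, not taken from the patent.

```python
import math

def fov_deg(a_mm, f_mm):
    """Field of view of a single emitter/detector of height a behind an
    optical assembly of focal length f: FOV = 2 * arctan(a / (2 f))."""
    return math.degrees(2.0 * math.atan(a_mm / (2.0 * f_mm)))

wide = fov_deg(2.0, 10.0)    # short focal length -> wider FOV
narrow = fov_deg(2.0, 40.0)  # long focal length -> narrower FOV, longer range
```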
Based on the above, in practice one or more transceiver optical assemblies may be provided in the laser transceiver; the hardware structure of the laser transceiver is easy to adjust, and the focal length of each transceiver optical assembly can be set according to the specific detection needs, so that various detection requirements can be met.
In an optional example, the laser transceiver may include: a first laser transceiver module and a second laser transceiver module; the first laser transceiver module includes a first transceiver optical assembly, and the second laser transceiver module includes a second transceiver optical assembly, wherein the focal length of the first transceiver optical assembly is greater than that of the second.
Note that the first and second laser transceiver modules may include other components; for example, the first module may include a first laser emission module and a first laser receiving module, and the second module may include a second laser emission module and a second laser receiving module. The structures of the first and second laser emission modules may be the same or different, as may those of the first and second laser receiving modules. The specific device types contained in the two modules can be determined as appropriate.
Accordingly, since the focal length of the first transceiver optical assembly is greater than that of the second, the first laser transceiver module can, through its optical assembly, emit a detection signal with a larger beam diameter and correspondingly receive an echo signal with a larger beam diameter, increasing the detection range; the second laser transceiver module can, through its optical assembly, emit a detection signal covering a larger field of view, widening the detection coverage.
With this scheme, the first and second transceiver optical assemblies of different focal lengths give the first and second laser transceiver modules different detection ranges, so that the needs of long-range detection and wide-coverage detection can both be met, achieving all-around environmental detection. Moreover, because the optical paths of the two modules are mutually independent, the structure of the first module can be adjusted alone according to the actual detection-range requirement, and that of the second module alone according to the actual coverage requirement; the structure is easy to adjust and highly integrated, so multiple detection requirements can be satisfied within the limited internal space of the lidar, helping optimize its detection performance.
In addition, the first laser transceiver module, the second laser transceiver module, and the image acquisition device do not interfere with one another, so imaging precision can be improved while long-range and wide-coverage detection needs are both met. Furthermore, they can share some lidar components (such as the rotor, the power supply module, and the signal transmission circuit), effectively controlling the overall size and cost of the lidar.
In a specific implementation, the first and second laser transceiver modules may be arranged around the axis, the angle between them being set as appropriate.
In an optional example, FIG. 10 is a schematic structural diagram of another lidar provided by an embodiment of this specification. In FIG. 10, the lidar LS2 includes: a rotor M21, a laser transceiver M22 (not labeled), an image acquisition device M23, and a control device M24. The laser transceiver M22 includes a first laser transceiver module M22-1 and a second laser transceiver module M22-2, and the angles between M22-1, M22-2, and the image acquisition device M23 are all 60°.
It can be understood that the specific implementations, functions, and effects of the components of lidar LS2 (such as the rotor M21, the laser transceiver M22, the image acquisition device M23, and the control device M24) can be found in the relevant parts above and are not repeated here.
In practice, when the first and second laser transceiver modules use the same coordinate system, their relative pose relationship can be determined from their horizontal angle in that system; then, from the field-of-view ranges of the two modules and their relative pose relationship, the depth information of the first module and of the second module at the same horizontal angle in the same coordinate system can be determined, facilitating the fusion of the two sets of depth information.
In a specific implementation, among the detection signals emitted by the first laser transceiver module, the detection light emitted at different divergence angles forms different angles with the direction parallel to the rotor axis; similarly, among the detection signals emitted by the second module, the detection light emitted from different positions also forms different angles with that direction.
Accordingly, when mounting the first and second laser transceiver modules on the rotor, the positions of the detection light whose emission direction is parallel to the horizontal direction can be aligned between the two modules, ensuring that the two modules share the same reference (i.e., the detection light parallel to the horizontal direction), which benefits data fusion.
In a specific implementation, the control device may also perform quality evaluation on the generated image and adjust the exposure times of the pixel acquisition modules according to the evaluation result. For example, if the image is determined to be over-exposed, the exposure time of the pixel acquisition module can be reduced; if the image is determined to be under-exposed, the exposure time can be lengthened. Feedback control of the exposure time based on the image evaluation result thus improves imaging quality.
It can be understood that, in practice, the specific quality-evaluation method can be determined by scenario and requirement; for example, the image can be evaluated by computing parameters such as its brightness or brightness histogram. This specification does not limit the quality-evaluation method used.
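The feedback loop described above can be sketched as a simple controller on mean image brightness. The thresholds (80/180 on an 8-bit scale) and the adjustment factor are illustrative assumptions, not values from the patent.

```python
def adjust_exposure(t_exp_s, mean_brightness, lo=80.0, hi=180.0, step=1.2):
    """Feedback control of exposure time from a simple quality metric:
    shorten exposure when the image is over-exposed, lengthen it when
    under-exposed, leave it unchanged otherwise."""
    if mean_brightness > hi:   # over-exposed -> reduce exposure time
        return t_exp_s / step
    if mean_brightness < lo:   # under-exposed -> lengthen exposure time
        return t_exp_s * step
    return t_exp_s

t1 = adjust_exposure(50e-6, 220.0)  # too bright -> shorter
t2 = adjust_exposure(50e-6, 40.0)   # too dark -> longer
t3 = adjust_exposure(50e-6, 120.0)  # acceptable -> unchanged
```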
In a specific implementation, the lidar may further include: a fill-light module, disposed on the rotor, adapted to provide fill light for the image acquisition device. Thus, when the exposure time of a pixel acquisition module cannot be lengthened, or lengthening it still cannot provide sufficient exposure, fill light can be supplied to the module.
In a specific implementation, the specific positions and number of fill-light modules on the rotor can be determined from the lighting conditions. For example, fill-light modules may be placed on both sides of the image acquisition device; at least one of them can then be selectively switched on according to the lighting conditions to provide fill light for the image acquisition device, improving the flexibility of fill lighting and the uniformity of illumination.
In practice, the device type implementing the fill-light function can be determined as appropriate; for example, the fill-light module may include a light-emitting diode, or it may include a light-emitting module and a lens. This specification imposes no specific limitation on the structure of the fill-light module.
It should be noted that the foregoing describes multiple embodiment schemes provided by the embodiments of this specification; the optional modes introduced by the different schemes can, where not conflicting, be combined and cross-referenced with one another, thereby extending to a variety of possible embodiment schemes, all of which may be regarded as embodiment schemes disclosed by the embodiments of this specification.
The lidar control method provided by the embodiments of this application is introduced below. The lidar control method described below can be applied to any lidar described in the embodiments of this specification, and its content may be cross-referenced with the related content of the lidar described above.
In a specific implementation, FIG. 11 is a flowchart of a lidar control method provided by an embodiment of this specification, wherein the lidar may include: a rotor, a laser transceiver, an image acquisition device, and a control device; the laser transceiver includes a transceiver optical assembly; the image acquisition device includes an imaging optical assembly; the laser transceiver and the image acquisition device are disposed on the rotor and arranged around the axis of the rotor.
Referring to FIG. 11, the lidar control method may include:
A1) controlling the rotor to rotate about the axis;
A2) controlling the laser transceiver to emit a detection signal and receive an echo signal formed by reflection of the detection signal;
A3) generating depth information based on the detection signal and the echo signal;
A4) controlling the image acquisition device to expose a target region;
A5) generating an image based on an exposure result of the image acquisition device.
As can be seen from the above, as the rotor rotates it carries the laser transceiver and the image acquisition device with it, so that both can achieve 360° environmental perception about the axis. Because the two are mounted together on the rotor, their relative pose relationship is fixed, which reduces the calibration complexity between image and depth information and improves data-processing efficiency. Moreover, the rotor speed is matched to the working timing of the laser transceiver and of the image acquisition device, improving the temporal synchronization between image and depth information. In addition, the laser transceiver and the image acquisition device operate through independently configured optical assemblies, providing the image acquisition device with sufficient exposure time and ensuring imaging quality. The above method therefore improves the data quality and synchronization of the lidar and hence its performance.
In a specific implementation, as shown in FIG. 11, the lidar control method may further include:
A6) obtaining an image containing depth information based on the relative pose relationship between the laser transceiver and the image acquisition device.
Completing the calibration between the laser transceiver and the image acquisition device before the lidar operates makes the result available at run time, so the calibration need not be repeated during subsequent data processing, reducing algorithmic complexity and improving data-fusion efficiency.
In a specific implementation, the image acquisition device further includes a plurality of pixel acquisition modules arranged along a first direction, each pixel acquisition module including a plurality of photosensitive units arranged along a second direction; the first direction is a direction parallel to the axis of the rotor, and the second direction is a direction perpendicular to the axis of the rotor.
Accordingly, step A1) may include: controlling the rotational speed of the rotor so that the angle through which a pixel acquisition module rotates between two adjacent exposures corresponds to the module's angular resolution in the second direction.
Thus, by keeping the module's single-exposure time shorter than the time it takes to rotate through one second-direction angular-resolution angle, the module's exposure timing is matched to the rotor speed, improving exposure quality.
In a specific implementation, step A4) may include: controlling, during rotation, the plurality of photosensitive units of each pixel acquisition module to successively expose the same field-of-view scan region.
This reduces the exposure time of any single photosensitive unit on a given scan region while ensuring that rich environmental information is obtained.
It can be understood that "one embodiment" or "an embodiment" referred to in the embodiments of this specification means a particular feature, structure, or characteristic that may be included in at least one implementation of this specification.
It should be noted that, in the description of this specification, terms indicating orientation or positional relationships such as "center", "longitudinal", "transverse", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "clockwise", and "counterclockwise" are based on the orientations or positional relationships shown in the drawings, are used only for convenience and simplicity of description, and do not indicate or imply that the devices or elements referred to must have a particular orientation or be constructed and operated in a particular orientation; they therefore cannot be construed as limiting this specification. Furthermore, the terms "first" and "second" in the embodiments of the specification are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly specifying the number of the indicated technical features. Moreover, terms such as "first" and "second" are used to distinguish similar objects and need not describe a particular order or sequence; it should be understood that data so used are interchangeable where appropriate, so that the embodiments of this specification described here can be implemented in orders other than those illustrated or described here.
Although the embodiments of the present invention are disclosed as above, the present invention is not limited thereto. Any person skilled in the art may make various changes and modifications without departing from the spirit and scope of the present invention; the protection scope of the present invention shall therefore be as defined by the claims.

Claims (16)

  1. A lidar, comprising: a rotor, a laser transceiver, an image acquisition device, and a control device; the laser transceiver and the image acquisition device being disposed on the rotor and arranged around the axis of the rotor; wherein:
    the rotor is adapted to rotate about the axis;
    the laser transceiver comprises a transceiver optical assembly and is adapted to emit a detection signal and receive an echo signal formed by reflection of the detection signal;
    the image acquisition device comprises an imaging optical assembly and is adapted to expose a target region;
    the control device is adapted to generate depth information based on the detection signal and the echo signal, and to generate an image based on an exposure result of the image acquisition device.
  2. The lidar according to claim 1, wherein the control device is further adapted to obtain an image containing depth information based on a relative pose relationship between the laser transceiver and the image acquisition device.
  3. The lidar according to claim 1, wherein the image acquisition device further comprises a plurality of pixel acquisition modules arranged along a first direction, and each pixel acquisition module comprises a plurality of photosensitive units arranged along a second direction; the first direction is a direction parallel to the axis, and the second direction is a direction perpendicular to the axis.
  4. The lidar according to claim 3, wherein the angle through which the pixel acquisition module rotates between two adjacent exposures corresponds to the angular resolution of the pixel acquisition module in the second direction.
  5. The lidar according to claim 3, wherein the control device is further adapted to control, during rotation, the plurality of photosensitive units of each pixel acquisition module to successively expose the same field-of-view scan region.
  6. The lidar according to claim 5, wherein the pixel acquisition module is adapted to superimpose the exposure charges produced by the successive exposures of the plurality of photosensitive units to the field-of-view scan region, and to output the superimposed charge as the exposure result.
  7. The lidar according to claim 6, wherein the pixel acquisition module further comprises: a charge shift register unit and a conversion output unit;
    the charge shift register unit comprises a plurality of charge storage regions, the plurality of charge storage regions corresponding one-to-one to the plurality of photosensitive units and being coupled in sequence; the charge shift register unit is adapted to store and output the exposure charges produced by the successive exposures of the plurality of photosensitive units to the field-of-view scan region;
    the conversion output unit is coupled to the charge shift register unit and is adapted to sample the exposure charge output by the charge shift register unit and convert it into an electrical signal for output.
  8. The lidar according to claim 7, wherein the control device is adapted to read the exposure result to generate an image;
    the duration of the exposure period of the pixel acquisition module is at least greater than the sum of: a single exposure time, a single charge-transfer time, and a single readout time.
  9. The lidar according to claim 1, wherein the laser transceiver comprises: a first laser transceiver module and a second laser transceiver module, the first laser transceiver module comprising a first transceiver optical assembly and the second laser transceiver module comprising a second transceiver optical assembly, wherein the focal length of the first transceiver optical assembly is greater than the focal length of the second transceiver optical assembly.
  10. The lidar according to claim 9, wherein the first laser transceiver module and the second laser transceiver module are arranged around the axis.
  11. The lidar according to claim 8, wherein the control device is further adapted to perform quality evaluation on the generated image and adjust the exposure times of the plurality of pixel acquisition modules according to the evaluation result.
  12. The lidar according to any one of claims 1-11, further comprising: a fill-light module, disposed on the rotor, adapted to provide fill light for the image acquisition device.
  13. A lidar control method, wherein the lidar comprises: a rotor, a laser transceiver, an image acquisition device, and a control device; the laser transceiver comprises a transceiver optical assembly; the image acquisition device comprises an imaging optical assembly; the laser transceiver and the image acquisition device are disposed on the rotor and arranged around the axis of the rotor;
    the lidar control method comprising:
    A1) controlling the rotor to rotate about the axis;
    A2) controlling the laser transceiver to emit a detection signal and receive an echo signal formed by reflection of the detection signal;
    A3) generating depth information based on the detection signal and the echo signal;
    A4) controlling the image acquisition device to expose a target region;
    A5) generating an image based on an exposure result of the image acquisition device.
  14. The lidar control method according to claim 13, further comprising:
    A6) obtaining an image containing depth information based on a relative pose relationship between the laser transceiver and the image acquisition device.
  15. The lidar control method according to claim 13, wherein the image acquisition device further comprises a plurality of pixel acquisition modules arranged along a first direction, and each pixel acquisition module comprises a plurality of photosensitive units arranged along a second direction; the first direction is a direction parallel to the axis of the rotor, and the second direction is a direction perpendicular to the axis of the rotor;
    step A1) comprising:
    controlling the rotational speed of the rotor so that the angle through which the pixel acquisition module rotates between two adjacent exposures corresponds to the angular resolution of the pixel acquisition module in the second direction.
  16. The lidar control method according to claim 15, wherein step A4) comprises:
    controlling, during rotation, the plurality of photosensitive units of each pixel acquisition module to successively expose the same field-of-view scan region.
PCT/CN2022/120814 2022-04-07 2022-09-23 Lidar and lidar control method WO2023193408A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210360008.3 2022-04-07
CN202210360008.3A CN116930920A (zh) 2022-04-07 2022-04-07 Lidar and lidar control method

Publications (1)

Publication Number Publication Date
WO2023193408A1 true WO2023193408A1 (zh) 2023-10-12

Family

ID=88244013

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/120814 WO2023193408A1 (zh) 2022-04-07 2022-09-23 激光雷达及激光雷达控制方法

Country Status (2)

Country Link
CN (1) CN116930920A (zh)
WO (1) WO2023193408A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011227029A (ja) * 2010-04-23 2011-11-10 Honda Motor Co Ltd Vehicle periphery monitoring device
CN107219533A (zh) * 2017-08-04 2017-09-29 清华大学 Lidar point cloud and image fusion detection system
CN107991662A (zh) * 2017-12-06 2018-05-04 江苏中天引控智能系统有限公司 3D laser and 2D imaging synchronous scanning device and scanning method
CN208421236U (zh) * 2018-07-23 2019-01-22 上海禾赛光电科技有限公司 Distance measuring device
CN111736169A (zh) * 2020-06-29 2020-10-02 杭州海康威视数字技术股份有限公司 Data synchronization method, device, and system
CN114114317A (zh) * 2020-08-28 2022-03-01 上海禾赛科技有限公司 Lidar, data processing method, data processing module, and medium

Also Published As

Publication number Publication date
CN116930920A (zh) 2023-10-24

Similar Documents

Publication Publication Date Title
AU2021200905B2 (en) Synchronized spinning lidar and rolling shutter camera system
US11838689B2 (en) Rotating LIDAR with co-aligned imager
US5682229A (en) Laser range camera
CN101451833B (zh) 激光测距装置及方法
US11977167B2 (en) Efficient algorithm for projecting world points to a rolling shutter image
CN111623725A (zh) 一种跟踪式三维扫描系统
WO2023015880A1 (zh) 训练样本集的获取方法、模型训练方法及相关装置
WO2023207756A1 (zh) 图像重建方法和装置及设备
WO2023193408A1 (zh) 激光雷达及激光雷达控制方法
CN111654626B (zh) 一种包含深度信息的高分辨率相机
WO2022017441A1 (zh) 深度数据测量设备和结构光投射装置
CN114063111A (zh) 图像融合激光的雷达探测系统及方法
WO2024113328A1 (zh) 探测方法、阵列探测器、阵列发射器、探测装置及终端
CN103700678A (zh) 一种液晶基电调空间分辨率全色成像探测芯片
WO2023061386A1 (zh) 激光雷达、接收系统、发射系统以及控制方法
US20230170430A1 (en) Substrate for optical device, method of manufacturing the same, optical device including the substrate for optical device, method of manufacturing the same, and electronic apparatus including optical device
CN116753861A (zh) 基于多波长超表面元件的三维重建系统及三维重建方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22936336

Country of ref document: EP

Kind code of ref document: A1