WO2022042197A1 - Lidar, data processing method and data processing module, and medium - Google Patents

Lidar, data processing method and data processing module, and medium

Info

Publication number
WO2022042197A1
Authority
WO
WIPO (PCT)
Prior art keywords
module
image acquisition
acquisition module
imaging unit
echo detection
Prior art date
Application number
PCT/CN2021/109212
Other languages
English (en)
French (fr)
Inventor
朱雪洲
孟飞
孙恺
向少卿
Original Assignee
上海禾赛科技有限公司
Priority date
Filing date
Publication date
Application filed by 上海禾赛科技有限公司
Publication of WO2022042197A1


Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01S: RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00: Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02: Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06: Systems determining position data of a target
    • G01S17/46: Indirect determination of position data
    • G01S17/48: Active triangulation systems, i.e. using the transmission and reflection of electromagnetic waves other than radio waves
    • G01S17/88: Lidar systems specially adapted for specific applications
    • G01S17/89: Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/894: 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G01S7/00: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/48: Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
    • G01S7/481: Constructional features, e.g. arrangements of optical elements
    • G01S7/4817: Constructional features, e.g. arrangements of optical elements relating to scanning

Definitions

  • the embodiments of this specification relate to the technical field of lidar, and in particular, to lidar, a data processing method, a data processing module, and a medium.
  • lidar and image acquisition devices can be integrated into the autonomous driving system.
  • lidar (Light Detection And Ranging, LIDAR, or Laser Detection And Ranging, LADAR) data or image acquisition device data can assist the autonomous driving system.
  • the data of the two need to be fused and used.
  • the field of view (Field of View, FoV) and the time of the data collected by the two need to be calibrated and adjusted, so that the two fields of view are matched and time-synchronized.
  • the lidar and the image acquisition device are two sets of relatively independent devices, which are usually installed in different positions of the vehicle to operate.
  • because the lidar and the image acquisition device operate independently, there is a large difference between the field-of-view angle of the lidar and that of the image acquisition module, and the data obtained by the two must be matched in the field of view to obtain the final result.
  • the difference between the data obtained by the two will be relatively large, and fully corresponding field-of-view information cannot be obtained even if data fusion is performed; at the same time, the additional components make the system structure more complicated and increase its volume.
  • the embodiments of this specification provide a laser radar, a data processing method, a data processing module, and a medium, which can improve the field-of-view matching and time synchronization between image data and point cloud data, and can ensure the accuracy of the image data and the point cloud data.
  • the embodiments of this specification provide a laser radar, the laser radar includes a rotating scanning mechanism, and the laser radar further includes: a receiving optical module, an echo detection module, and an image acquisition module.
  • the rotary scanning mechanism is adapted to be rotated by a mechanical device.
  • the receiving optical module is adapted to transmit incident light to the echo detection module and the image acquisition module during the rotation of the rotary scanning mechanism.
  • the echo detection module is adapted to acquire echo signals from incident light incident from the receiving optical module to obtain echo detection information.
  • the image acquisition module is adapted to convert the incident light incident through the receiving optical module into a corresponding electrical signal to obtain image information.
  • the laser radar includes at least one of the following: a rotating mirror scanning laser radar, wherein the rotating mirror is driven to rotate by the rotating scanning mechanism; a mechanical rotating laser radar, wherein the receiving optical module, the echo detection module and the image acquisition module are driven to rotate by the rotary scanning mechanism.
  • the laser radar is a rotating mirror scanning laser radar, and the echo detection module and the image acquisition module are located on the same side of the rotating mirror of the laser radar.
  • the laser radar is a rotating mirror scanning laser radar
  • the echo detection module and the image acquisition module are respectively located on both sides of the rotating mirror of the laser radar.
  • the echo detection module and the image acquisition module are arranged in the lidar in any of the following ways: the echo detection module and the image acquisition module are arranged on the same silicon wafer on the same substrate; the echo detection module and the image acquisition module are arranged on different silicon wafers of the same substrate; the echo detection module and the image acquisition module are arranged on different substrates of the same printed circuit board; the echo detection module and the image acquisition module are arranged on the substrates of different printed circuit boards.
  • the substrate is covered with a plastic sealing layer, and the plastic sealing layer is provided with a first light transmission window corresponding to the echo detection module and a second light transmission window corresponding to the image acquisition module.
  • the image acquisition module includes a pixel-level filter module adapted to filter incident light, and the pixel-level filter module is implemented using a semiconductor process.
  • the echo detection module and the image acquisition module are respectively disposed on the silicon wafer using any one of the following structure types: a front-illuminated structure; a back-illuminated structure; and a stacked structure.
  • the echo detection module and the image acquisition module disposed on the same silicon wafer adopt the same structure type.
  • the image acquisition module includes: an imaging unit array composed of N × M imaging units, where N and M are both positive integers, N represents the number of rows, and M represents the number of columns.
  • the row-direction field-of-view angle of the echo detection module is consistent with the row-direction field of view of the image acquisition module or has a definite corresponding relationship with it.
  • the imaging unit array includes a plurality of imaging unit groups, the imaging unit group includes at least one imaging unit, and the exposure results of each imaging unit group are processed by signal integration to obtain image information.
  • the signal integration processing is time delay integration processing.
  • the imaging unit array is adapted to trigger the corresponding imaging unit to sense incident light in response to the control instruction.
  • the imaging unit array is adapted to trigger a corresponding imaging unit in response to the control instruction, and control the corresponding imaging unit to sense incident light within a corresponding exposure time according to preset exposure control parameters.
  • the imaging unit array is adapted to trigger each imaging unit group in sequence according to a preset timing sequence in response to the control instruction, so that each triggered imaging unit group collects image information of the corresponding field of view scanning area; the imaging unit group includes at least one imaging unit.
  • both the echo detection module and the image acquisition module are located on the focal plane of the receiving optical module.
  • the receiving optical module includes a flat mirror that reflects the incident light to the echo detection module or the image acquisition module.
  • the echo detection module includes at least one of the following: a SPAD (single-photon avalanche diode) array; a SiPM (silicon photomultiplier); an APD (avalanche photodiode) array.
  • the image acquisition module includes at least one of the following: CIS array; CCD array.
  • the lidar adopts a one-dimensional scanning manner.
  • the embodiments of this specification also provide a data processing method, which is applied to any one of the above-mentioned lidars. The data processing method includes the following steps: calculating the scanning interval time between the image acquisition module and the echo detection module for the corresponding field of view scanning area; based on the scanning interval time, obtaining echo detection information and image information in the corresponding field of view scanning area; and performing data processing on the acquired echo detection information and image information.
  • the acquiring of echo detection information and image information in the corresponding field of view scanning area based on the scanning interval time includes: determining, based on the scanning interval time, the correspondence between the detection frame times of the echo detection module and the acquisition frame times of the image acquisition module; obtaining corresponding echo detection information based on the detection frame times of the echo detection module; acquiring corresponding image information based on the acquisition frame times of the image acquisition module; and determining the echo detection information and image information in the corresponding field of view scanning area based on the correspondence between the detection frame times and the acquisition frame times.
  • the image acquisition module includes an imaging unit array having a plurality of imaging unit groups, and each imaging unit group includes at least one imaging unit; before the corresponding image information is acquired based on the acquisition frame times of the image acquisition module, the method further includes: performing signal integration processing on the exposure results of each imaging unit group to obtain the image information.
  • the signal integration processing is time delay integration processing.
  • the image acquisition module includes an imaging unit array which, in response to the control instruction, sequentially triggers each imaging unit group according to a preset timing, so that each triggered imaging unit group acquires the image information of the corresponding field of view scanning area.
  • the imaging unit group includes at least one imaging unit.
  • the determining, based on the scanning interval time, of the correspondence between the detection frame times of the echo detection module and the acquisition frame times of the image acquisition module includes: determining, according to the control instruction, the acquisition frame times corresponding to the image information of the corresponding field of view scanning area, to obtain an acquisition frame time set for the field of view scanning area; determining a starting acquisition frame time from the acquisition frame time set; and, based on the scanning interval time, determining the detection frame time corresponding to the starting acquisition frame time and establishing a correspondence with each acquisition frame time in the acquisition frame time set.
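  • as a rough illustration of the pairing step above, the following Python sketch matches acquisition frame times to detection frame times using a known scanning interval time; the function name, tolerance, and example timings are illustrative assumptions, not the patent's implementation.

```python
def pair_frame_times(acq_frame_times, det_frame_times, scan_interval, tol=1e-4):
    # For each acquisition frame time, the matching detection frame is assumed
    # to have occurred one scanning interval earlier; take the closest
    # detection frame time within `tol` seconds.
    pairs = []
    for t_acq in acq_frame_times:
        target = t_acq - scan_interval
        t_det = min(det_frame_times, key=lambda t: abs(t - target))
        if abs(t_det - target) <= tol:
            pairs.append((t_det, t_acq))
    return pairs

# Example: detection frames every 0.1 s; the image module trails by 2 ms.
det = [0.0, 0.1, 0.2, 0.3]
acq = [0.002, 0.102, 0.202, 0.302]
print(pair_frame_times(acq, det, scan_interval=0.002))
# [(0.0, 0.002), (0.1, 0.102), (0.2, 0.202), (0.3, 0.302)]
```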
  • the acquiring of corresponding image information based on the acquisition frame times of the image acquisition module includes: acquiring, from an imaging unit at a specified position and based on the acquisition frame times of the image acquisition module, the image information of the corresponding field of view scanning area.
  • both the echo detection module and the image acquisition module are located on the same side of the receiving optical module.
  • the step of calculating the scanning interval time between the image acquisition module and the echo detection module for the corresponding field of view scanning area includes: calculating, based on the distance between the echo detection module and the image acquisition module, their distance from the receiving optical module, and the scanning angular velocity of the lidar, the scanning interval time with which the field of view of the image acquisition module and the field of view of the echo detection module cover the corresponding field of view scanning area.
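  • a minimal numeric sketch of this geometric calculation, assuming the angular offset between the two fields of view can be approximated from the lateral spacing of the two modules and their common distance to the receiving optics (all names and values below are illustrative):

```python
import math

def scanning_interval_s(module_spacing_m, optics_distance_m, omega_deg_per_s):
    # Angular offset between the two fields of view, approximated from the
    # lateral module spacing and the distance to the receiving optical module.
    delta_deg = math.degrees(math.atan2(module_spacing_m, optics_distance_m))
    # Dividing the angular offset by the scan rate gives the scanning interval time.
    return delta_deg / omega_deg_per_s

# Example: 2 mm spacing, 20 mm to the optics, 10 Hz scan (3600 deg/s).
print(f"{scanning_interval_s(0.002, 0.020, 3600.0) * 1e3:.3f} ms")  # ~1.586 ms
```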
  • performing data processing on the acquired echo detection information and image information includes any one of the following: processing the echo detection information and the image information in the corresponding field of view scanning area separately, to obtain corresponding point cloud data and image data; or fusing the echo detection information and the image information in the corresponding field of view scanning area to obtain fusion information, and performing data processing on it to obtain fusion data.
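  • an illustrative sketch of the fusion option, assuming the echo detection information and the image information are already aligned to the same field of view scanning area; the field names are invented for illustration:

```python
def fuse(points, image):
    # points: echo detection results, each with a pixel location and a range;
    # image: 2D pixel array covering the same field of view scanning area.
    return [{**p, "intensity": image[p["row"]][p["col"]]} for p in points]

image = [[10, 20], [30, 40]]
points = [{"row": 0, "col": 1, "range_m": 12.5}]
print(fuse(points, image))
# [{'row': 0, 'col': 1, 'range_m': 12.5, 'intensity': 20}]
```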
  • the data processing method further includes at least one of the following: adjusting the image acquisition parameters of the image acquisition module based on the echo detection information of the echo detection module; adjusting the echo signal detection parameters of the echo detection module based on the image information of the image acquisition module.
  • the data processing method further includes: determining, based on the image information collected by the image acquisition module, whether the image information conforms to a preset imaging condition; and, when it is determined that the imaging condition is not met, adjusting the exposure control parameters of the image acquisition module.
  • the determining of whether the image information conforms to a preset imaging condition based on the image information collected by the image acquisition module includes: acquiring the exposure amount in the image information and determining whether it falls within the exposure amount interval of the imaging condition; if it does not fall within the exposure amount interval, the image information does not meet the imaging condition.
  • adjusting the exposure control parameters of the image acquisition module includes at least one of the following: if the exposure amount in the image information is less than the minimum endpoint value of the exposure amount interval, increasing the exposure control parameter of the image acquisition module; if the exposure amount in the image information is greater than the maximum endpoint value of the exposure amount interval, decreasing the exposure control parameter of the image acquisition module.
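  • a minimal sketch of this adjustment rule; the multiplicative step size and the names are assumptions for illustration, not values from the patent:

```python
def adjust_exposure_param(exposure, interval, param, step=0.1):
    lo, hi = interval
    if exposure < lo:    # below the exposure amount interval: increase the parameter
        return param * (1 + step)
    if exposure > hi:    # above the exposure amount interval: decrease the parameter
        return param * (1 - step)
    return param         # imaging condition met: leave the parameter unchanged

print(adjust_exposure_param(exposure=0.2, interval=(0.3, 0.8), param=5.0))  # 5.5
```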
  • the data processing method further includes: before increasing the exposure control parameter of the image acquisition module, judging whether the exposure control parameter of the image acquisition module and the scanning period of the corresponding imaging unit in the image acquisition module are equal; if they are equal, each imaging unit group is obtained according to the grouping setting of the imaging units in the image acquisition module and is used to sequentially acquire image information of the corresponding field of view scanning area according to the preset time sequence.
  • the data processing method further includes: detecting the current lighting situation to obtain a corresponding lighting intensity value; judging, based on the lighting intensity value, whether the current exposure control parameters of the image acquisition module meet the imaging conditions under that lighting intensity; and, when it is determined that the imaging conditions are not met, adjusting the exposure control parameters of the image acquisition module.
  • the embodiments of this specification also provide a data processing module, including a memory and a processor; the data processing module is applied to a lidar, the memory of the data processing module is adapted to store one or more computer instructions, and the processor performs the steps of any of the methods described above when the computer instructions are run.
  • the embodiments of the present specification further provide a computer-readable storage medium on which computer instructions are stored, and when the computer instructions are executed, the steps of any one of the above-mentioned methods are executed.
  • Embodiments of the present specification further provide a laser radar, including: the above-mentioned data processing module, where the data processing module is adapted to perform data processing on the information collected by the laser radar.
  • the rotary scanning mechanism and the receiving optical module of the lidar are multiplexed, so that the incident light is collected and transmitted to both the echo detection module and the image acquisition module; this can greatly reduce the error between the field of view of the echo detection module and the field of view of the image acquisition module, keeps the time difference between the two when processing the incident light of the corresponding field of view scanning area small, and ensures that the scanning trajectories of the two are consistent.
  • the lidar provided by the embodiments of the specification can improve the matching degree of the field of view and the time synchronization between the image data and the point cloud data, and can ensure the accuracy of the image data and the point cloud data.
  • the image acquisition module may include an imaging unit array, and the imaging unit array can trigger the corresponding imaging units to sense incident light in response to a control instruction, so as to flexibly control the imaging units and dynamically adjust the acquisition accuracy of the image acquisition module to meet various resolution requirements.
  • the echo detection module and the image acquisition module can be arranged on the same silicon wafer on the same substrate, on different silicon wafers on the same substrate, or on different substrates on the same printed circuit board. It can also be arranged on the substrates of different printed circuit boards, so that the echo detection module and the image acquisition module can be flexibly arranged in the lidar, which is not limited by the existing layout.
  • the echo detection module can adjust echo signal detection parameters according to the image information of the image acquisition module to obtain point cloud data with better quality; the image acquisition module can be based on the echo detection module The echo detection information, adjust the image acquisition parameters to obtain better quality image data.
  • the image acquisition module may include a CIS array and/or a CCD array, so that the cost of the lidar can be reduced and color information can be obtained, thereby generating a color image with better visual effects.
  • the lidar adopts a one-dimensional scanning method, which can accurately control the scanning trajectory; this helps suppress field-of-view distortion of moving objects and reduce dynamic blur, making the subsequently generated point cloud data and/or image data more convenient for object identification.
  • FIG. 1 is a schematic structural diagram of a laser radar in an embodiment of the present specification.
  • FIG. 2a is a schematic structural diagram of an image acquisition module in an embodiment of the present specification.
  • FIG. 2b is a schematic diagram of a field of view in a row direction corresponding to the image acquisition module in FIG. 2a.
  • FIG. 2c is a schematic diagram corresponding to the column-wise acquisition of the imaging unit array in FIG. 2a.
  • FIG. 2d is an exposure timing diagram of each imaging unit column for the corresponding field of view scanning area in FIG. 2a.
  • FIG. 2e is another exposure timing diagram of each imaging unit column for the corresponding field of view scanning area in FIG. 2a.
  • FIG. 2f is another exposure timing diagram of each imaging unit column in FIG. 2a for the corresponding field of view scanning area.
  • FIG. 3 is a schematic structural diagram of an echo detection module in an embodiment of the present specification.
  • FIG. 4 is a schematic diagram of an integration manner of an echo detection module and an image acquisition module in an embodiment of the present specification.
  • FIG. 5 is a schematic diagram of another integration manner of the echo detection module and the image acquisition module in the embodiment of the present specification.
  • FIG. 6 is a schematic diagram of another integration manner of the echo detection module and the image acquisition module in the embodiment of the present specification.
  • FIG. 7 is a schematic diagram of another integration manner of the echo detection module and the image acquisition module in the embodiment of the present specification.
  • FIG. 8 is a schematic structural diagram of a front-illuminated integrated chip in an embodiment of the present specification.
  • FIG. 9 is a schematic structural diagram of a back-illuminated integrated chip in an embodiment of the present specification.
  • FIG. 10 is a schematic structural diagram of a stacked integrated chip according to an embodiment of the present specification.
  • FIG. 11 is a schematic structural diagram of another stacked integrated chip in the embodiment of the present specification.
  • FIG. 12 is a schematic diagram of an application scenario of a mechanically rotating laser radar in an embodiment of the present specification.
  • FIG. 13 is a schematic diagram of an application scenario of a mirror scanning laser radar in an embodiment of the present specification.
  • FIG. 14 is a schematic diagram of an application scenario of another mirror scanning laser radar in the embodiment of this specification.
  • FIG. 15 is a schematic diagram of an application scenario of a mirror scanning laser radar in an embodiment of the present specification.
  • FIG. 16 is a schematic diagram of an application scenario of another mirror scanning laser radar in the embodiment of this specification.
  • FIG. 17 is a flowchart of a data processing method in the embodiment of the present specification.
  • FIG. 18 is a flowchart of establishing a corresponding relationship between the acquisition frame time and the detection frame time in the embodiment of the present specification.
  • FIG. 19 is a flowchart of another data processing method in the embodiment of this specification.
  • FIG. 20 is a flowchart of an imaging condition determination method in the embodiment of the present specification.
  • FIG. 21 is a flowchart of another data processing method in the embodiment of this specification.
  • FIG. 22 is a flowchart of a method for judging light intensity conditions in the embodiment of the present specification.
  • the data of the lidar and the data of the image acquisition device need to be fused and used.
  • the effect of field-of-view matching and time synchronization between the lidar and the image acquisition device is not ideal.
  • hardware resources are wasted, which affects the results of data fusion.
  • the embodiments of this specification provide a lidar structure, which transmits the incident light from the corresponding field of view scanning area to the echo detection module and the image acquisition module for signal acquisition, thereby improving the field-of-view matching and time synchronization between the image data and the point cloud data and ensuring the accuracy of the image data and the point cloud data.
  • a lidar structure at least includes a rotating scanning mechanism, a receiving optical module, an echo detection module and an image acquisition module.
  • the receiving optical module includes: part or all of the optical devices that echo signals and/or ambient light pass through together from entering the lidar to reaching the echo detection module and/or the image acquisition module.
  • the optical devices may include, but are not limited to, lenses or lens groups, mirrors, half mirrors, turning mirrors, beam splitters, and other optical devices.
  • when the echo detection module and the image acquisition module are located on the same side of the receiving optical module, they can completely share the same set of optical systems; that is, the echo signal and the ambient light signal incident on the lidar reach the echo detection module and the image acquisition module through the same receiving optical module. Meanwhile, the echo detection module and the image acquisition module are both located on the same plane corresponding to the receiving optical module. As a more preferred solution, both the echo detection module and the image acquisition module are located on the same focal plane of the receiving optical module.
  • when the echo detection module and the image acquisition module are located on the same side of the receiving optical module, they are closely arranged.
  • the deviation of the field of view due to the distance between the two is less than a pre-set threshold.
  • the size of this threshold is determined by the application scenario of lidar.
  • the echo detection module and the image acquisition module are miniaturized, chip-scale devices fabricated by a semiconductor manufacturing process; their physical dimensions are in millimeters, as is the physical spacing between the two modules.
  • when the echo detection module and the image acquisition module are located on two sides of the receiving optical module, they share part of the optical modules in the lidar; that is, the receiving optical module includes part of the optical modules in the lidar.
  • the receiving optical module may include a rotating mirror. That is, the rotating mirror reflects the incident light (echo signal and/or ambient signal) to the echo detection modules and image acquisition modules on both sides, respectively.
  • the echo detection module and the image acquisition module may also correspond to different subsequent optical device groups respectively, so as to guide the incident light from the rotating mirror to the echo detection module and the image acquisition module, respectively.
  • the subsequent optical device groups corresponding to the two modules respectively use devices of the same specification configured with the same parameters, so that the optical paths on the two sides are better synchronized.
  • the field-of-view scanning area may be: a range in which the laser radar scans the outside world in a frame sampling period, and the frame sampling period is the duration of a single acquisition of frame information by the laser radar.
  • the field-of-view scanning area may include: within one frame sampling period, the areas of the outside world that the echo detection module and the image acquisition module each scan according to their own fields of view; depending on the actual hardware structure of the echo detection module and the image acquisition module, the horizontal field of view of the echo detection module is consistent with the horizontal field of view of the image acquisition module or has a definite corresponding relationship with it.
  • the frame information collected by the lidar may include: echo detection information of the echo detection module and image information of the image acquisition module.
  • the echo detection information may include point cloud information. For example, distance information, position coordinate information, etc. of each detection point.
  • the echo detection information may further include other information related to each detection point. For example, speed information, etc.
  • the scanning area of the field of view is not an area defined by a real boundary, but a dynamic area that changes according to the scanning changes of the lidar.
  • a field of view angle difference may exist between the field of view angle of the echo detection module and the field of view angle of the image acquisition module.
  • when the field-of-view angle difference falls within the deviation threshold range, the field-of-view angle of the echo detection module can be considered the same as the field-of-view angle of the image acquisition module; in one frame sampling period, the field of view scanning areas corresponding to the echo detection module and the image acquisition module are then the same field of view scanning area.
  • the laser radar 10 may include: a rotating scanning mechanism 11, a receiving optical module 12, an echo detection module 13 and an image acquisition module 14.
  • the rotary scanning mechanism 11 is adapted to be rotated by a mechanical device.
  • the receiving optical module 12 is adapted to transmit the incident light 1A to the echo detection module 13 and the image acquisition module 14 during the rotation of the rotary scanning mechanism 11 .
  • the echo detection module 13 is adapted to acquire echo signals from the incident light 1A incident from the receiving optical module 12 to obtain echo detection information.
  • the image acquisition module 14 is adapted to convert the incident light 1A incident through the receiving optical module 12 into a corresponding electrical signal to obtain image information.
  • the rotary scanning mechanism and the receiving optical module of the lidar are multiplexed, so that the incident light is collected and transmitted to both the echo detection module and the image acquisition module; this can greatly reduce the error between the field of view of the echo detection module and the field of view of the image acquisition module, keeps the time difference between the two when processing the incident light of the corresponding field of view scanning area small, and ensures that the scanning trajectories of the two are consistent.
  • the lidar provided by the embodiments of the specification can improve the matching degree of the field of view and the time synchronization between the image data and the point cloud data, and can ensure the accuracy of the image data and the point cloud data.
  • FIG. 1 is only an illustration.
  • the relative motion relationship between the rotating scanning mechanism and the receiving optical module, the echo detection module and the image acquisition module can be different.
  • when the laser radar is a rotating mirror scanning laser radar, the rotating mirror in the receiving optical module is driven to rotate by the rotating scanning mechanism, and the rotating scanning mechanism does not drive the receiving optical module, the echo detection module or the image acquisition module to rotate.
  • when the laser radar is a mechanical rotating laser radar, the receiving optical module, the echo detection module and the image acquisition module are driven to rotate by the rotating scanning mechanism.
  • the embodiments of this specification do not specifically limit the rotation manner of the lidar.
  • the image acquisition module 20 may include an imaging unit array 21; N × M imaging units can be arranged on the side of the imaging unit array 21 used for sensing incident light, and the hatched part in FIG. 2a is one imaging unit, namely the imaging unit 211.
  • N and M are positive integers
  • N represents the number of rows
  • M represents the number of columns.
  • each imaging unit on the imaging unit array 21 is used to sense the incident light transmitted by the receiving optical module and to convert the sensed light signal into a corresponding electrical signal, thereby obtaining image information. It can be understood that FIG. 2a is only an example; in actual processing, the distance between the imaging units may be very small, depending on the level of processing technology.
  • the image acquisition module may further include various circuits or components adapted to the imaging unit array, such as an imaging readout circuit adapted to the imaging unit array, which may be used to collect the electrical signals generated by each imaging unit.
  • the side of the imaging unit array used for sensing incident light can be called the imaging photosensitive surface, which is composed of the imaging photosensitive surfaces of the individual imaging units. Scanning is performed across the imaging photosensitive surface: the row direction of the imaging unit array is parallel to the scanning direction of the incident light on the imaging unit array, and the column direction of the imaging unit array is not parallel to the row direction on the imaging photosensitive surface.
  • the imaging units in the same row direction may be referred to as a row of imaging units, and the imaging units in the same column direction may be referred to as a column of imaging units; for example, reference may be made to the column of imaging units 21A in which the imaging unit 211 is located in FIG. 2a.
  • the number of rows or columns of the imaging unit array can be adjusted; for example, the number of rows or columns of the imaging unit array can be reduced, so that the N × M dimensions of the imaging unit array change accordingly.
  • the imaging unit array may be an imaging unit line array, which refers to an imaging unit array whose number of rows and number of columns differ greatly; for example, the number of rows N of the imaging unit array may be much larger than the number of columns M.
  • the degree of the gap can be defined by setting a numerical limit, and whether the gap between the number of rows and the number of columns is "large" is judged accordingly; in some scenarios, a five-fold difference is regarded as a large gap, for example a number of rows N ≥ 5*M; in other scenarios, a hundred-fold difference between the number of rows and the number of columns is regarded as a large gap, for example N ≥ 100*M.
  • the image acquisition module can be composed of a CIS (CMOS Image Sensor) and/or a CCD (Charge-Coupled Device), etc.
  • the imaging unit array can be implemented by any of the following types: 1) a CIS array formed by an independent CIS as an imaging unit; 2) a CCD array formed by an independent CCD as an imaging unit.
  • the image acquisition module may include a CIS array and/or a CCD array.
  • the image acquisition module may include a CIS array, which is compatible with other hardware related to the Complementary Metal Oxide Semiconductor (CMOS) process.
  • thus, the cost of the lidar can be reduced and image information can be obtained, and then a black-and-white image with a smaller data amount or a color image with better visual effects can be generated according to actual needs.
  • from the perspective of hardware design, in the lidar structure solution provided by the embodiments of this specification, the rotating scanning mechanism can be any device capable of rotating scanning.
  • the optical module can be any module that can realize the optical convergence function.
  • the echo detection module included in the lidar can be any module that can realize the echo detection function; combined with the other infrastructure of the lidar, such as the transmitter, the data processing device and the transmission device, it can realize the normal operation of the lidar.
  • the image acquisition module is set on the path along which the receiving optical module transmits the incident light, and the existing lidar's receiving optical module, rotating scanning mechanism, transmitting part, data processing device and transmission device are reused; that is, there is no need to change the original optical path setting of the lidar, preferably a laser radar with a rotating scanning mechanism.
  • the existing laser radar only includes an echo detection module; according to an embodiment of the present invention, a module consisting of an echo detection module and an image acquisition module can be obtained, and it is only necessary to replace the existing detection module of the lidar with this combined module to realize the functional improvement and enhancement.
  • the relevant parameters of the image acquisition module can be adjusted individually, or an image acquisition module that meets the requirements can be selected, so that image information of different precision and chromaticity can be obtained; the image acquisition module may include a CIS array and/or a CCD array. Therefore, the lidar provided by the embodiments of this specification can adapt to more changeable scenarios and conditions and has a wider application range.
  • the resolution of the echo detection module is only at the thousand or ten-thousand level, such as 1024 pixels or 65536 pixels; by contrast, the resolution of the image acquisition module can reach the level of tens of millions or even hundreds of millions, such as 30 million pixels or 100 million pixels.
  • the CIS array and the CCD array can collect higher-precision image information.
  • both the CIS array and the CCD array can collect color information, and then can selectively generate black and white or color images.
  • the lidar using the embodiments of this specification is more flexible and can acquire high-quality image information without increasing hardware costs.
  • the imaging unit array is adapted to trigger the corresponding imaging unit to sense the incident light in response to the control instruction and convert it into a corresponding electrical signal.
  • the imaging unit can be flexibly controlled, and the acquisition accuracy of the image acquisition module can be dynamically adjusted to meet various resolution requirements.
  • the range that the image acquisition module scans in the outside world is determined by the field-of-view angle produced by the imaging unit array through the receiving optical module, and the field-of-view angle of the imaging unit array is formed by combining the field-of-view angles of the triggered imaging units.
  • the field-of-view angle of the imaging unit array may include: a row-direction field-of-view angle for scanning along the row direction; the row-direction field-of-view angle of the imaging unit array is formed by combining the row-direction field-of-view angles of the triggered imaging units.
  • incident light passing through the center of the receiving optical module is not refracted and reaches the imaging photosensitive surface of the triggered imaging units according to its incident angle; therefore, to facilitate calculating the field-of-view angle of the imaging unit array, the maximum included angle between rays that pass through the center of the receiving optical module and reach the edges of the imaging photosensitive surfaces of the triggered imaging units can be taken as the field-of-view angle of the imaging unit array.
  • FIG. 2b is a schematic diagram of the row-direction field of view obtained after the imaging unit array in FIG. 2a triggers all imaging units; after the incident light 2A passes through the geometric center point 22A of the receiving optical module 22, the maximum included angle at which it can reach the edge of the imaging photosensitive surface of the triggered imaging units in the row direction is the row-direction field-of-view angle.
  • the scanning range of the imaging units in the row direction is related to the scanning range of the lidar; for example, if the lidar can scan 360°, the imaging units can also scan 360° in the row direction.
  • the width of the imaging photosensitive surface of the imaging unit can be regarded as the arc length swept once by the incident light in the row direction, and the row-direction field-of-view angle of the imaging unit can be approximated as FOV_row ≈ (A/L) × (360°/2π), where A is the width of the imaging photosensitive surface and L is the distance between the imaging unit and the receiving optical module.
  • the distance between the imaging unit array and the receiving optical module can be taken as the distance between each imaging unit and the receiving optical module, thereby simplifying the calculation.
  • the distance between the imaging unit array and the receiving optical module is the focal length between the imaging unit array and the receiving optical module.
  • the outer dimension of the imaging unit can be used as the width of the imaging photosensitive surface of the imaging unit, which is convenient for calculation.
  • the vertical distance between the imaging photosensitive surface of the imaging unit array and the geometric center point of the receiving optical module can be used as the distance between the two.
  • for the distance between the two, reference may be made to the distance between the geometric center point 22A and the imaging unit array in FIG. 2b.
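  • a small numeric sketch of the row-direction field-of-view formula FOV_row ≈ (A/L) × (360°/2π) quoted above; the example dimensions are illustrative only:

```python
import math

def fov_row_deg(surface_width_m, distance_m):
    # Small-angle approximation: A/L is the angle in radians, converted to degrees.
    return (surface_width_m / distance_m) * (360.0 / (2.0 * math.pi))

# Example: a 10 µm wide imaging unit, 20 mm from the receiving optical module.
print(f"{fov_row_deg(10e-6, 0.020):.4f} deg")  # ~0.0286 deg
```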
  • the interval between adjacent imaging units makes the row-direction field-of-view angles of adjacent imaging units differ; however, because the imaging units are arranged very close together, the interval between adjacent imaging units is very small, so the row-direction field-of-view angle difference between adjacent imaging units can be ignored.
  • the control device of the lidar generates corresponding control commands according to the resolution requirements.
  • when the imaging unit array responds to the control commands, it can control the working state of each imaging unit so that each triggered imaging unit collects image information, and the image information collected by each triggered imaging unit can be processed to generate an image. Therefore, by controlling which imaging units in the imaging unit array are triggered, the acquisition accuracy of the image acquisition module can be dynamically adjusted.
  • the control device can trigger at least one imaging unit in the same row of the imaging unit array through control instructions, and the image information collected by the triggered imaging units in the same row is used as the image information of that row.
  • the control device can simultaneously trigger, through control instructions, at least two imaging units in different row directions across multiple rows of the imaging unit array, and the image information collected by the triggered imaging units in the multiple rows is used as the image information of one row after logic operations. Therefore, by triggering multiple imaging units to collect image information of the same area, the resolution requirement on each triggered imaging unit can be reduced, and the accuracy of image information collected in dark environments, such as at night or on cloudy days, can be improved.
  • the above embodiments are only illustrative; in the actual application of the present invention, the number of triggered imaging units in different rows or columns can be controlled according to the resolution requirements, and the logic operation mode, such as summation or weighted averaging, can also be set according to the actual situation, which is not limited in the embodiments of this specification.
  • since the field-of-view angle of the imaging unit array is combined from the field-of-view angles of the triggered imaging units, and the rotating scanning mechanism of the lidar rotates at the preset scanning angular velocity and thereby changes the scanning angle, the lidar can receive incident light from different sources and azimuths; the imaging unit array then photoelectrically converts the sensed incident light to obtain image information.
  • the time that the imaging unit array senses the incident light once is called the exposure time.
  • the imaging unit array can trigger the corresponding imaging units after responding to the control instruction, and control each triggered imaging unit according to the preset exposure control parameters, so that the incident light from the corresponding field of view scanning area is sensed within the corresponding exposure time and the image information of the corresponding field of view scanning area is collected.
  • the time the laser radar takes to scan across the row-direction field of view of the imaging unit can be set as the exposure control parameter of the imaging unit.
  • the imaging unit can then sense the incident light for the corresponding row scan time; afterwards, the imaging unit can be turned off until the rotary scanning mechanism of the lidar rotates so that the imaging unit corresponds to the incident light of the next field of view scanning area. As a result, the power consumption of the lidar can be reduced.
  • when the image acquisition module captures image information of a dynamically changing scene, motion blur occurs because the scene changes during the exposure; an overly long exposure time will damage the hardware of the image acquisition module and also prolong the image generation time, so it cannot meet the need for fast image acquisition in dynamic application scenarios such as automatic driving.
  • the imaging unit array is adapted to trigger each imaging unit group in sequence according to a preset timing sequence in response to the control instruction, so that each triggered imaging unit group collects image information of the corresponding field of view scanning area respectively. Then, by fusing the image information collected by each triggered imaging unit group for the corresponding field of view scanning area within its respective exposure time, the fused image information of the field of view scanning area can be obtained. If the field-of-view angle differences of the imaging unit groups fall within the preset difference range, the field-of-view angles of the imaging unit groups can be considered the same, and the field of view scanning areas collected by the imaging unit groups are the same field of view scanning area.
  • the imaging unit group may include at least one imaging unit; the timing sequence may be set according to the time difference with which the imaging unit groups acquire the corresponding field of view scanning area and the scanning direction of the incident light on the imaging unit array; the time difference with which adjacent imaging unit groups acquire the corresponding field of view scanning area is determined by the separation distance between them.
  • the imaging unit array may include a plurality of imaging unit groups, and the imaging unit group includes at least one imaging unit.
  • imaging units in the same column are grouped into the same imaging unit group.
  • the exposure results of multiple imaging unit groups can be processed by signal integration, and finally image information can be obtained.
  • a Time Delay and Integration (TDI) processing method is used to process the exposure information of each imaging unit group, so that image information is obtained after the exposure information of multiple imaging units is accumulated.
  • an imaging unit array with TDI function can be used, for example, a CCD array with TDI function, or a CIS array with TDI function.
  • the outputs of each column of imaging unit groups can be respectively connected to an integrating circuit with corresponding functions (such as TDI), and corresponding image information is output after being processed by the integrating circuit.
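  • a simplified software model of the TDI accumulation described above, assuming the per-group exposures are already aligned to the same field of view scanning area; it models the effect of the integrating circuit, not the circuit itself:

```python
def tdi_accumulate(group_exposures):
    # group_exposures: one exposure list per imaging unit group; the integrated
    # line is the element-wise sum, so the signal accumulates across groups.
    line = [0.0] * len(group_exposures[0])
    for exposure in group_exposures:
        for i, value in enumerate(exposure):
            line[i] += value
    return line

# Three short exposures of the same 4-pixel line are combined into one.
print(tdi_accumulate([[1, 2, 1, 0], [1, 3, 1, 0], [2, 2, 1, 1]]))  # [4.0, 7.0, 3.0, 1.0]
```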
  • the time difference with which adjacent imaging unit groups acquire the corresponding field of view scanning area can be determined. Specifically, between adjacent imaging unit groups, the row scan time of the imaging unit group that comes earlier in the scanning sequence is the time difference between the adjacent imaging unit groups for the corresponding field of view scanning area.
  • the exposure start time difference corresponding to the adjacent imaging unit groups can be determined, so that the exposure start time of each imaging unit group can be set according to the time difference.
  • each triggered imaging unit group collects image information for the corresponding field of view scanning area respectively, realizing multiple exposures of that area; the overall exposure time of the imaging unit array is thus divided into the exposure times of the individual triggered imaging unit groups, reducing the overall exposure time of the imaging unit array for the corresponding field of view scanning area, while the fused image information obtained through multiple exposures carries richer information content, which can offset the image blur caused by rotation and further improve picture clarity and image quality while ensuring image generation efficiency.
  • the imaging units in the imaging unit array 21 are grouped in the column direction.
  • the imaging unit 211 and the imaging units in the same column direction are regarded as an imaging unit group 21A
  • the imaging unit 212 and the imaging units in the same column direction are regarded as one imaging unit group 21B
  • the imaging unit 21M and the imaging units in the same column direction are regarded as one imaging unit group 21M.
  • the imaging unit array 21 is thus divided into M imaging unit groups 21A to 21M.
  • the corresponding fields of view of the imaging unit groups 21A to 21M in the imaging unit array 21 in the outside world W are FA to FM, respectively.
  • when the rotating scanning mechanism scans in the rotation direction shown in FIG. 2c, the scanning direction of the incident light on the imaging unit array is from imaging unit group 21A to 21M.
  • each imaging unit group may be sorted, the imaging unit group 21A is the first group, the imaging unit group 21B is the second group, and so on, the imaging unit group 21M is Group M.
  • for convenience of description, the imaging unit group 21A sorted as the first group may be referred to as the first column of imaging units, the imaging unit group 21B sorted as the second group may be referred to as the second column of imaging units, and so on, until the imaging unit group 21M sorted as the M-th group is referred to as the M-th column of imaging units.
  • as the rotary scanning mechanism rotates, the scanning direction of the laser radar changes, and during the rotation each column of imaging units corresponds in turn, according to its arrangement order, to the same field of view acquisition area.
  • the first column of imaging units 21A and the second column of imaging units 21B are used as examples for illustration. It can be seen from FIG. 2b that, if the size error and the interval of each imaging unit can be ignored, the widths of the imaging photosensitive surfaces of the first column of imaging units 21A and the second column of imaging units 21B are both a, and the distance between the imaging unit array 21 and the receiving optical module 22 is b. If the scanning angular velocity of the lidar is ω, these parameters can be entered into the above-mentioned row-direction field-of-view formula to calculate the row-direction field-of-view angles of the first column of imaging units 21A and the second column of imaging units 21B, and from these the time difference Δt1 with which the two columns acquire the corresponding field of view scanning area.
  • if the exposure start time tc1 of the first column of imaging units 21A is set to t0, the exposure start time tc2 of the second column of imaging units can be set to (t0 + Δt1), and so on; the exposure start time tcM of the M-th column of imaging units can be set to [t0 + (M-1)·Δt1].
  • the triggering timing of each column of imaging units can be set according to the time difference with which the columns of imaging units acquire the corresponding field of view scanning area and the scanning direction of the incident light on the imaging unit array.
  • the first column of imaging units 21A to the M-th column of imaging units are sequentially triggered according to a preset time sequence and capture images of the outside world W according to preset exposure control parameters.
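  • a minimal sketch of this staggered triggering schedule, with t0 and Δt1 as illustrative values:

```python
def exposure_start_times(t0, dt1, num_columns):
    # Column m (1-based) starts exposing at t0 + (m - 1) * dt1.
    return [t0 + (m - 1) * dt1 for m in range(1, num_columns + 1)]

# Example: t0 = 0 s, dt1 = 50 microseconds line scan time, M = 4 columns.
print(exposure_start_times(0.0, 50e-6, 4))
# [0.0, 5e-05, 0.0001, 0.00015000000000000001]
```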
  • the exposure time of each column of imaging units for the corresponding field of view scanning area may be the same or different, which is not limited in the embodiments of the present specification.
  • the exposure process of each column of imaging units for the corresponding field of view scanning area is described below through several embodiments.
  • the field of view scanning area F is used as an example for description.
  • the exposure time of each column of imaging units may be set to the corresponding row scanning time ts_1.
  • the first column of imaging units 21A senses the incident light of the field of view scanning area F within the exposure time ts_1 starting from the exposure start time t_c1; then the second column of imaging units 21B senses the incident light of the field of view scanning area F within the exposure time ts_1 starting from the exposure start time t_c2; and so on, until the M-th column of imaging units 21M senses the incident light of the field of view scanning area F within the exposure time ts_1 starting from the exposure start time t_cM.
  • the image information collected by each column of imaging units at the corresponding moment is acquired respectively, and fusion processing is performed, so as to obtain the fusion image information corresponding to the scanning area F of the field of view.
  • the exposure time of each column of imaging units may be set to be shorter than the row scanning time ts_1.
  • the first column of imaging units 21A senses the incident light of the field of view scanning area F within the exposure time ts_1' starting from the exposure start time t_c1; then the second column of imaging units 21B senses the incident light of the field of view scanning area F within the exposure time ts_2' starting from the exposure start time t_c2; and so on, until the M-th column of imaging units 21M senses the incident light of the field of view scanning area F within the exposure time ts_M' starting from the exposure start time t_cM.
  • the image information collected by each column of imaging units at the corresponding moment is acquired respectively, and fusion processing is performed, so as to obtain the fusion image information corresponding to the scanning area F of the field of view.
  • the exposure time of each column of imaging units may be set to be greater than the row scanning time ts_1.
  • the first column of imaging units 21A senses the incident light of the field of view scanning area F within the exposure time ts_1″ starting from the exposure start time t_c1; then the second column of imaging units 21B senses the incident light of the field of view scanning area F within the exposure time ts_2″ starting from the exposure start time t_c2; and so on, until the M-th column of imaging units 21M senses the incident light of the field of view scanning area F within the exposure time ts_M″ starting from the exposure start time t_cM.
  • the image information collected by each column of imaging units at the corresponding moment is acquired respectively, and fusion processing is performed, so as to obtain the fusion image information corresponding to the scanning area F of the field of view.
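  • as an illustrative sketch of the fusion step (the patent does not fix a particular fusion algorithm; averaging the aligned per-column exposures is an assumed choice, and the function name is illustrative):

```python
import numpy as np

def fuse_column_images(column_images):
    """Fuse the images of field-of-view scanning area F collected by
    each column of imaging units at its corresponding moment (sketch).

    column_images -- list of M equally shaped arrays, one per column,
                     already aligned to scanning area F.
    Averaging is one possible fusion; combining several short, aligned
    exposures of the same area also damps rotation-induced blur.
    """
    return np.mean(np.stack(column_images), axis=0)
```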
  • the exposure time of each column of imaging units can be adjusted according to the obtained image information and the illumination intensity of the current environment.
  • for the specific adjustment process, please refer to the description of the data processing method below, which will not be repeated here.
  • each imaging unit group can dynamically collect image information of a different field of view scanning area in real time during scanning.
  • imaging units may be grouped according to actual scenarios, such as grouping by column, grouping by row, grouping by block, and the like. Moreover, some or all of the imaging units in the imaging unit array can be grouped according to actual requirements. For example, referring to FIG. 2a, all imaging units in the imaging unit array can be grouped by columns to obtain M imaging unit groups; alternatively, only the imaging units in the first x columns can be grouped by columns to obtain x imaging unit groups, and each of these groups is then set in time sequence, where x is a non-zero natural number not greater than M. The embodiments of the present specification do not limit the grouping manner and grouping quantity of the imaging units.
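  • a minimal sketch of column grouping follows (function and variable names are illustrative; grouping by row or by block is analogous):

```python
def group_first_x_columns(unit_ids, x):
    """Group the first x columns of an imaging unit array by column.

    unit_ids -- 2D list, unit_ids[row][col] identifying each imaging
                unit; x must be a non-zero natural number <= the
                number of columns M.
    Returns x imaging unit groups, each containing one column of units.
    """
    return [[row[col] for row in unit_ids] for col in range(x)]
```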
  • the echo detection module 31 may include a detection unit array 31; on the side of the detection unit array 31 used for sensing incident light, P×Q detection units can be arranged (the slashed part in FIG. 3 is one detection unit, that is, the detection unit 311), where P and Q are both positive integers, P representing the number of rows and Q the number of columns.
  • each detection unit on the detection unit array 31 is used for detecting echo signals from the incident light incident from the receiving optical module, so as to obtain echo detection information. It can be understood that FIG. 2a is only an example; in actual processing, the spacing between the imaging units may be very small depending on the level of the processing technology.
  • the echo detection module may further include various circuits or components adapted to the detection unit array, such as a detection readout circuit adapted to the detection unit array, which is used to collect the electrical signals generated by each detection unit.
  • the side of the detection unit array used for sensing incident light can be called the detection photosensitive surface, which is composed of the detection photosensitive surface of each detection unit.
  • the incident light is transmitted to the detection photosensitive surface through the receiving optical module and, according to the transmission angle, can scan across the detection photosensitive surface; the row direction of the detection unit array is parallel to the scanning direction of the incident light on the detection unit array, and the column direction and the row direction of the detection unit array are not parallel to each other on the detection photosensitive surface.
  • the detection unit array may be a detection unit line array, which refers to a detection unit array with a large gap between the number of rows and the number of columns, for example, an array whose number of rows P is much larger than its number of columns Q.
  • the degree of the gap can be delimited by setting a numerical threshold, and whether there is a "large gap" is judged by the difference between the number of rows and the number of columns. For example, in some scenarios a fivefold difference between the number of rows and the number of columns is regarded as a large gap, such as P ≥ 5·Q for the detection unit array; in other scenarios a hundredfold difference between the number of rows and the number of columns is regarded as a large gap, such as P ≥ 100·Q.
  • the field of view of the echo detection module to the outside world is determined by the field of view angle that the detection unit array produces through the receiving optical module, and the field of view angle of the detection unit array is composed of the combined field angles of the triggered detection units.
  • the field of view angle of the detection unit array may include a row-direction field angle of the detection unit array scanned along the row direction, which is composed of the combined row-direction field angles of the triggered detection units.
  • the echo detection module can be composed of SPADs (Single Photon Avalanche Diodes) and/or APDs (Avalanche Photo Diodes).
  • the detection unit array can be implemented as any of the following types: (1) a SPAD array formed by independent SPADs serving as detection units; (2) a SiPM (Silicon Photomultiplier); (3) an APD array formed by independent APDs serving as detection units.
  • both the SPAD array and the SiPM can include multiple SPADs.
  • each SPAD in the SPAD array acts as a detection unit and can be individually addressed.
  • each detection unit in the SiPM is composed of multiple SPADs connected in parallel; these paralleled SPADs cannot be individually addressed and can only be addressed as a whole.
  • the echo detection module can include an array of SPADs and/or an array of APDs.
  • the echo detection module may include an array of SPADs; since SPAD arrays can be manufactured in a CMOS process, the module can be compatible with other CMOS-process hardware.
  • the echo detection module and the image acquisition module can be prepared according to the set process flow.
  • the echo detection module and the image acquisition module are arranged in the laser radar in any one of the following ways.
  • the echo detection module and the image acquisition module are arranged on the same silicon wafer on the same substrate.
  • a silicon wafer 41 is arranged on the substrate 40; a detection unit array 411, an imaging unit array 412, a detection readout circuit adapted to the detection unit array 411, and an imaging readout circuit adapted to the imaging unit array 412 are arranged on the silicon wafer 41. The area of the silicon wafer 41 where the detection unit array 411 and its adapted detection readout circuit are located can be regarded as the echo detection module, and the area where the imaging unit array 412 and its adapted imaging readout circuit are located can be regarded as the image acquisition module; the echo detection module and the image acquisition module are thus integrated in the same chip and packaged on the same substrate.
  • the echo detection module and the image acquisition module are arranged on different silicon wafers of the same substrate.
  • a silicon wafer 51 and a silicon wafer 52 are arranged on the substrate 50 , and a detection unit array 511 and a detection readout circuit adapted to the detection unit array 511 are arranged on the silicon wafer 51 ; the silicon wafer 52 An imaging unit array 521 and an imaging readout circuit adapted to the imaging unit array 521 are provided on the above.
  • the silicon wafer 51 provided with the detection unit array 511 and the detection readout circuit can be regarded as an echo detection module
  • the silicon wafer 52 provided with the imaging unit array 521 and the imaging readout circuit can be regarded as an image acquisition module
  • the echo detection module and the image acquisition module are arranged on different substrates of the same printed circuit board (Printed Circuit Board, PCB).
  • a substrate 61 and a substrate 62 are arranged on the printed circuit board 60; a silicon wafer 611 is arranged on the substrate 61, and a detection unit array 6111 and a detection readout circuit adapted to the detection unit array 6111 are arranged on the silicon wafer 611.
  • the substrate 62 is provided with a silicon wafer 621, and the silicon wafer 621 is provided with an imaging unit array 6211 and an imaging readout circuit adapted to the imaging unit array 6211.
  • the silicon wafer 611 provided with the detection unit array 6111 and the detection readout circuit can be regarded as an echo detection module
  • the silicon wafer 621 provided with the imaging unit array 6211 and the imaging readout circuit can be regarded as an image acquisition module
  • the echo detection module and the image acquisition module are arranged on substrates of different printed circuit boards.
  • the printed circuit board 7A is provided with a substrate 71, the substrate 71 is provided with a silicon wafer 711, and the silicon wafer 711 is provided with a detection unit array 7111 and a detection readout circuit adapted to the detection unit array 7111.
  • a substrate 72 is arranged on the printed circuit board 7B, a silicon wafer 721 is arranged on the substrate 72, and an imaging unit array 7211 and an imaging readout circuit adapted to the imaging unit array 7211 are arranged on the silicon wafer 721.
  • the silicon wafer 711 provided with the detection unit array 7111 and the detection readout circuit can be regarded as an echo detection module
  • the silicon wafer 721 provided with the imaging unit array 7211 and the imaging readout circuit can be regarded as an image acquisition module
  • the echo detection module is separately integrated in one chip and the image acquisition module is separately integrated in another chip; the echo detection module and the image acquisition module can be packaged on their respective substrates and finally connected to different printed circuit boards.
  • the echo detection module and the image acquisition module can be arranged on the same silicon wafer on the same substrate, on different silicon wafers on the same substrate, or on different silicon wafers on the same printed circuit board. It can also be arranged on the substrates of different printed circuit boards, so that the echo detection module and the image acquisition module can be flexibly arranged in the lidar, which is not limited by the existing layout.
  • the echo detection module and the image acquisition module are respectively arranged on the silicon wafer using any of the following structural types.
  • the front-illuminated structure may include a metal wiring layer and a light receiving layer; the metal wiring layer is located on the light receiving layer, the metal wiring layer includes a readout circuit, the light receiving layer may include an array (a detection unit array or an imaging unit array), and the incident light reaches the light receiving layer after passing through the metal wiring layer.
  • referring to FIG. 8, it is a schematic structural diagram of a front-illuminated integrated chip.
  • the echo detection module and the image acquisition module are disposed on the same silicon wafer 81 of the same substrate 80 using a front-illuminated structure.
  • the front-illuminated integrated chip 8A includes: a light receiving layer 811 and a metal wiring layer 812 , and the light receiving layer 811 is located under the metal wiring layer 812 .
  • the detection unit array 8111 of the echo detection module is located in the light receiving layer 811, and the detection readout circuit 8121 is located in the metal wiring layer 812; the imaging unit array 8112 of the image acquisition module is located in the light receiving layer 811, and the imaging readout circuit 8122 is located in the metal wiring layer 812.
  • the back-illuminated structure can include a metal wiring layer and a light receiving layer; the metal wiring layer is located under the light receiving layer, the metal wiring layer and the light receiving layer can be connected by bonding wires, and the metal wiring layer includes a readout circuit.
  • the light receiving layer includes an array (a detection unit array or an imaging unit array), and the incident light can directly reach the light receiving layer.
  • referring to FIG. 9, it is a schematic structural diagram of a back-illuminated integrated chip.
  • the echo detection module and the image acquisition module are arranged on the same silicon wafer 91 of the same substrate 90 using a back-illuminated structure.
  • the back-illuminated integrated chip 9A may include: a light-receiving layer 911 and a metal wiring layer 912; the light-receiving layer 911 is located on the metal wiring layer 912, and the metal wiring layer 912 and the light-receiving layer 911 are connected by bonding wires.
  • the detection unit array 9111 of the echo detection module is located in the light receiving layer 911, and the detection readout circuit 9121 is located in the metal wiring layer 912; the imaging unit array 9112 of the image acquisition module is located in the light receiving layer 911, and the imaging readout circuit 9122 is located in the metal wiring layer 912.
  • the stacked structure places the metal wiring layer and the light receiving layer on different silicon wafers, with the silicon wafer containing the metal wiring layer stacked under the silicon wafer containing the light receiving layer; the metal wiring layer and the light receiving layer can be connected by bonding wires, the metal wiring layer includes a readout circuit, the light receiving layer includes an array (a detection unit array or an imaging unit array), and the incident light can directly reach the silicon wafer where the light receiving layer is located.
  • referring to FIG. 10, it is a schematic structural diagram of a stacked integrated chip.
  • the echo detection module and the image acquisition module are disposed on the same substrate 100 in a stacked structure.
  • the silicon wafer 1011 includes a light receiving layer
  • the silicon wafer 1012 includes a metal wiring layer
  • the silicon wafer 1011 is located on the silicon wafer 1012
  • the silicon wafer 1011 and the silicon wafer 1012 are connected by bonding wires.
  • the detection unit array 10111 of the echo detection module is located on the silicon wafer 1011
  • the detection readout circuit 10121 is located on the silicon wafer 1012;
  • in some embodiments, the silicon wafer may be cut down around the area where the array is located, so as to reduce the occupied volume.
  • referring to FIG. 11, it is a schematic structural diagram of another stacked integrated chip; the difference from FIG. 10 is that the silicon wafer 1011 is cut to reduce the volume occupied by the silicon wafer 1011.
  • the above-mentioned embodiments are only illustrative; in practical applications of the present invention, the setting methods and structural types described in this specification can be reasonably cross-selected in combination with specific scenarios, and, depending on the specific scenario, the structure types adopted by the echo detection module and the image acquisition module may be the same or different.
  • the echo detection module and the image acquisition module disposed on the same silicon wafer may adopt the same structure type.
  • structure types such as the front-illuminated structure, the back-illuminated structure, and the stacked structure may be implemented by a CMOS (Complementary Metal Oxide Semiconductor) process.
  • after the echo detection module and the image acquisition module are arranged, packaging can be performed.
  • the substrate is covered with a plastic sealing layer, and a first light transmission window corresponding to the echo detection module and a second light transmission window corresponding to the image acquisition module are opened on the plastic sealing layer.
  • the substrate 80 is covered with a plastic sealing layer 8a, and a first light transmission window corresponding to the echo detection module and a second light transmission window corresponding to the image acquisition module are opened in the plastic sealing layer 8a.
  • the substrate 80 and the printed circuit board 8B are connected by soldering.
  • the silicon wafer 81 and the substrate 80 are connected by bonding wires.
  • the substrate 90 is covered with a plastic sealing layer 9a, and the plastic sealing layer 9a is provided with a first light transmission window corresponding to the echo detection module and a second light transmission window corresponding to the image acquisition module .
  • the substrate 90 and the printed circuit board 9B are connected by soldering.
  • the silicon wafer 91 and the substrate 90 are connected by bonding wires.
  • the substrate 100 is covered with a plastic sealing layer 10a, and a first light transmission window corresponding to the echo detection module and a second light transmission window corresponding to the image acquisition module are opened in the plastic sealing layer 10a.
  • the substrate 100 and the printed circuit board 10B are connected by soldering.
  • the silicon wafer 1012 and the substrate 100 are connected by bonding wires.
  • the substrate and the printed circuit board are connected by soldering.
  • the substrate and the printed circuit board are soldered in a ball grid array (Ball Grid Array, BGA) manner.
  • a narrow-band filter module (refer to the narrow-band filter module 82 in FIG. 8, the narrow-band filter module 92 in FIG. 9, and the narrow-band filter module 102 in FIGS. 10 and 11) is adapted to perform wavelength filtering on the incident light and transmit the wavelength-filtered incident light to the echo detection module, thereby reducing ambient light noise.
  • the bandpass of the narrow-band filter module is related to the laser wavelength emitted by the laser of the lidar and needs to cover the emission wavelength range of the laser; the bandpass of the narrow-band filter module can be in the range of several nanometers to several tens of nanometers.
  • the image acquisition module includes a pixel-level filter module
  • the pixel-level filter module is adapted to filter incident light and transmit the filtered incident light to the imaging unit array, thereby improving image quality.
  • the pixel-level filter module (refer to the pixel-level filter module 83 in FIG. 8, the pixel-level filter module 93 in FIG. 9, and the pixel-level filter module 103 in FIGS. 10 and 11) filters the incident light from the second light transmission window.
  • the pixel-level filter module can filter the incident light using an RGGB (Red-Green-Green-Blue), RYYB (Red-Yellow-Yellow-Blue), or RWWB (Red-White-White-Blue) filter.
  • a pixel-level filter module can be formed on the imaging unit array using a semiconductor process.
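  • for illustration, RGGB is the standard 2×2 Bayer-style tile; RYYB and RWWB substitute yellow or white for the two green sites. The names in this sketch are illustrative:

```python
# One repeating 2x2 tile of an RGGB pixel-level color filter array.
RGGB_TILE = (("R", "G"),
             ("G", "B"))

def filter_color(row, col, tile=RGGB_TILE):
    """Color component passed to the imaging unit at (row, col)."""
    return tile[row % 2][col % 2]
```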
  • the detection unit array and the imaging unit array may be arranged in the lidar in at least one of the following manners: 1) the detection unit array and the imaging unit array are parallel to each other along the column direction; 2) the number of columns M of the imaging unit array is equal to the number of columns Q of the detection unit array.
  • the detection unit array and the imaging unit array are parallel to each other along the column direction to ensure that the field of view directions of the echo detection module and the image acquisition module are consistent; the number of columns M of the imaging unit array and the number of columns Q of the detection unit array are set equal to facilitate collecting data by column.
  • since the image acquisition module and the echo detection module are both used to collect the incident light from the corresponding field of view scanning area transmitted by the receiving optical module, they appear in pairs in the lidar.
  • the echo detection module and the image acquisition module can be located on the same plane or different planes in the lidar.
  • referring to FIG. 12, it is a schematic diagram of an application scenario applied to a mechanical rotating lidar.
  • the mechanical rotating lidar 120 may include a transmitting part 121 , a receiving part 122 and a rotating scanning mechanism (not shown in FIG. 12 ).
  • the rotating scanning mechanism can drive the transmitting part 121 and the receiving part 122 to rotate.
  • the transmitting part 121 may include a transmitting module 1211 .
  • a light-emitting unit array 12111 is arranged in the emission area of the emission module 1211; the laser light generated by the light-emitting unit array 12111 is processed by the emission optical module 1212 and output to the outside world as outgoing light; after the outgoing light encounters an object, the object reflects the outgoing light.
  • the receiving part 122 may include a receiving optical module 1221 , an echo detection module 1222 and an image acquisition module 1223 .
  • the receiving optical module 1221 converges and transmits the incident light to the echo detection module 1222 or the image acquisition module 1223 .
  • the echo detection module 1222 detects the echo signal in the received incident light (that is, the signal of the outgoing light reflected by the object), and obtains the echo detection information, and the image acquisition module 1223 converts the sensed optical signal of the incident light into corresponding electrical signals to obtain image information.
  • the transmitting part and the receiving part may be arranged in the optical-mechanical rotor of the lidar.
  • referring to FIG. 13, it is a schematic diagram of an application scenario applied to a mirror scanning laser radar.
  • the mirror scanning laser radar 130 may include: a transmitting part 131 , a receiving part 132 and a rotating scanning mechanism 133 .
  • the transmitting part 131 may include a transmitting module 1311 and a transmitting optical module (not marked in FIG. 13 ), and a light-emitting unit array 13111 is provided in the transmitting area of the transmitting module 1311 ; the receiving part 132 may include a receiving optical module (not marked in FIG. 13 ) , echo detection module 1322 and image acquisition module 1323 .
  • the transmitting optical module and the receiving optical module constitute the optical system of the laser radar 130, and a set of rotating mirrors can be shared in the process of transmitting and receiving; therefore, viewed from the transmitting side and the receiving side respectively, it can be said that the rotating mirror 130a is included in the transmitting optical module, and it can also be said that the rotating mirror 130a is included in the receiving optical module.
  • the transmitting optical module may further include a transmitting lens 1312; the receiving optical module may further include a receiving lens 1321; the echo detection module 1322 and the image acquisition module 1323 are located on the same side of the rotating mirror 130a.
  • the rotary scanning mechanism 133 is rotated by a mechanical device, and the rotary scanning mechanism 133 also drives the rotating mirror 130a to rotate.
  • the laser light generated by the light-emitting unit array 13111 is processed by the emission lens 1312 and refracted by the rotating mirror 130a, and is output to the outside world as outgoing light. After encountering an object (such as object 13A in Figure 13), the object reflects the outgoing light.
  • the rotary scanning mechanism 133 rotates in the set direction and moves with the loading platform (such as a high-precision vehicle); when an object (such as the object 13A in FIG. 13) reflects the outgoing light, the reflected outgoing light can be transmitted to the corresponding receiving part; therefore, the outgoing light reflected by the object and the ambient light can be used as the incident light of the receiving part 132.
  • the receiving lens 1321 converges and transmits the incident light to the echo detection module 1322 or the image acquisition module 1323 .
  • the echo detection module 1322 detects the echo signal in the received incident light (that is, the signal of the outgoing light reflected by the object), and obtains the echo detection information, and the image acquisition module 1323 converts the sensed optical signal of the incident light into the corresponding optical signal. electrical signals to obtain image information.
  • the rotating mirror 130a of the lidar may adopt a double-sided rotating mirror.
  • the rotating mirror scanning laser radar can adopt different rotating mirrors, such as three-sided rotating mirrors, four-sided rotating mirrors, etc., according to the actual situation. And according to the light transmission direction of the rotating mirror, the positions of the transmitting part and the receiving part relative to the rotating mirror can be adjusted, and the embodiment of this specification does not limit the type of the rotating mirror.
  • referring to FIG. 14, it is a schematic diagram of another application scenario applied to the rotating mirror scanning laser radar.
  • the rotating mirror 140a of the rotating mirror scanning laser radar 140 shown in FIG. 14 adopts a four-sided rotating mirror; according to the light transmission direction of the rotating mirror 140a, the echo detection module 1422 and the image acquisition module 1423 are respectively located on both sides of the rotating scanning mechanism.
  • the mirror scanning laser radar 140 may include: a transmitting part 141 , a receiving part 142 and a rotating scanning mechanism (not marked in FIG. 14 ).
  • the transmitting part 141 may include a transmitting module 1411 and a transmitting optical module (not marked in FIG. 14 ), and a light-emitting unit array 14111 is provided in the transmitting area of the transmitting module 1411 ;
  • the receiving part 142 may include a receiving optical module (not marked in FIG. 14 ) , echo detection module 1422 and image acquisition module 1423 .
  • the transmitting optical module and the receiving optical module constitute the optical system of the laser radar 140, and a set of rotating mirrors can be shared in the process of transmitting and receiving; therefore, viewed from the transmitting side and the receiving side respectively, it can be said that the rotating mirror 140a is included in the transmitting optical module, and it can also be said that the rotating mirror 140a is included in the receiving optical module.
  • the transmitting optical module may further include a transmitting lens 1412; the receiving optical module may further include a receiving lens 1421; the echo detection module 1422 and the image acquisition module 1423 are located on the same side of the rotating mirror 140a.
  • the rotary scanning mechanism 143 is rotated by a mechanical device, and the rotary scanning mechanism 143 also drives the rotating mirror 140a to rotate, and the laser light generated by the light-emitting unit array 14111 is processed by the emission lens 1412 and refracted by the rotating mirror 140a, and is output to the outside as outgoing light. After encountering an object, such as object 14A in Figure 14, the object reflects the outgoing light.
  • the rotary scanning mechanism 143 rotates in the set direction and moves with the loading platform (such as a high-precision vehicle); when an object (such as the object 14A in FIG. 14) reflects the outgoing light, the reflected outgoing light can be transmitted to the corresponding receiving part; therefore, the outgoing light reflected by the object and the ambient light can be used as the incident light of the receiving part 142.
  • the receiving lens 1421 converges and transmits the incident light to the echo detection module 1422 or the image acquisition module 1423 .
  • the echo detection module 1422 detects the echo signal in the received incident light (that is, the signal of the outgoing light reflected by the object) to obtain the echo detection information, and the image acquisition module 1423 converts the sensed optical signal of the incident light into corresponding electrical signals to obtain image information.
  • the structure division of the lidar in the above-mentioned embodiments is only an example; according to actual requirements and description methods, the structure of the lidar can be divided along different dimensions, such as structure division from the dimension of function or from the dimension of connection mode, and the embodiments of this specification do not specifically limit the structure division rules in the lidar.
  • the echo detection module and the image acquisition module can be located on the same side of the receiving optical module, and there is a row-direction field angle difference θ between the echo detection module and the image acquisition module (refer to the angle θ in FIGS. 12 to 14). The row-direction field angle difference is related to the distance between the imaging unit array and the receiving optical module (refer to the distance b in FIGS. 12 to 14), the distance between the echo detection module and the image acquisition module (refer to the distance c in FIGS. 12 to 14), and the scanning angular velocity of the lidar.
  • the row-direction field angle difference between the echo detection module and the image acquisition module is proportional to the spacing c between the two modules and inversely proportional to their distance b from the receiving optical module.
  • referring to FIG. 12, it is assumed that the distance b between the image acquisition module and the receiving optical module is 50 mm and the distance c between the echo detection module and the image acquisition module is 2 mm; the row-direction field angle difference between the echo detection module and the image acquisition module is then θ ≈ (c/b)×(360°/2π) ≈ (2/50)×(360°/2π) ≈ 2.3°. It can be seen that the field angle difference between the echo detection module and the image acquisition module is small and can be ignored in most cases; therefore, it can be considered that the field of view of the echo detection module matches the field of view of the image acquisition module.
  • assuming the time interval of one frame (one full 360° scan) is 100 ms, the scanning interval time between the image acquisition module and the echo detection module in the corresponding field of view scanning area is ΔT ≈ 100 ms × 2.3°/360° ≈ 0.6 ms. It can be seen that the time difference between the image acquisition module and the echo detection module is at the sub-millisecond level and can be ignored in most cases; therefore, it can be considered that the echo detection module and the image acquisition module collect data synchronously in time.
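  • the two estimates above can be reproduced directly (values taken from the example in this specification; variable names are illustrative):

```python
import math

b = 50.0                 # mm, image acquisition module <-> receiving optics
c = 2.0                  # mm, echo detection module <-> image acquisition module
frame_period_ms = 100.0  # one full 360-degree scan

theta_deg = (c / b) * (360.0 / (2 * math.pi))   # ~2.3 degrees
dt_ms = frame_period_ms * theta_deg / 360.0     # ~0.64 ms scan interval

print(f"field angle difference ~ {theta_deg:.1f} deg, "
      f"scan interval ~ {dt_ms:.2f} ms")
```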
  • the relative position between the image acquisition module and the echo detection module remains fixed: the hardware substrates where the two are located either rotate synchronously with the rotating scanning mechanism of the lidar or remain stationary relative to the lidar, so the relative position between the image acquisition module and the echo detection module is not affected by the rotation. Since the relative position of the two sensors does not change, the scanning interval time between the two is stable and the information obtained by the two maintains a fixed correspondence in time, which avoids the jitter effect that would arise if the image acquisition module and the echo detection module were placed in different independent devices, and improves the stability of both.
  • reducing the distance between the echo detection module and the image acquisition module can, during the installation and adjustment stage, reduce or eliminate the error between the field of view of the echo detection module and the field of view of the image acquisition module, so as to ensure that the two fields of view match.
  • reducing the distance between the echo detection module and the image acquisition module can shorten the time difference between the two for processing incident light to a negligible level, so that the echo detection module and the image acquisition module can Acquire data synchronously.
  • the lidar structure provided by the embodiments of this specification can improve the matching degree and time synchronization of the field of view between the image data and the point cloud data.
  • the echo detection module and the image acquisition module are both located on the focal plane of the receiving optical module.
  • the echo detection module and the image acquisition module can be located on different planes of the lidar when limited by the volume and layout of the lidar, which will be described in detail below through embodiments.
  • the laser radar 150 is a mirror scanning laser radar, and the echo detection module 1512 and the image acquisition module 1522 are respectively located on the two sides of the rotating mirror 150a of the mirror scanning laser radar 150. The receiving part (not marked in FIG. 15) includes two areas: the first area 151 may include a first receiving optical module 1511 and the echo detection module 1512, and the second area 152 may include a second receiving optical module 1521 and the image acquisition module 1522.
  • the rotating mirror 150a of the lidar refracts the incident light from the corresponding scanning area of the field of view to the first area 151 or the second area 152 .
  • the first receiving optical module 1511 converges and transmits the incident light refracted by the rotating mirror 150a to the echo detection module 1512, and the echo detection module 1512 detects the echo signal in the received incident light to obtain echo detection information.
  • the second receiving optical module 1521 converges and transmits the incident light refracted by the rotating mirror 150a to the image acquisition module 1522, and the image acquisition module 1522 converts the sensed optical signal of the incident light into a corresponding electrical signal to obtain image information.
  • the echo detection module 1512 and the transmitting unit may be on the same vertical plane.
  • since the echo detection module and the image acquisition module are located on the two sides of the rotating mirror, respectively, more space is available for each.
  • the position of the echo detection module or the image acquisition module can be adjusted flexibly, which reduces the restrictions on their size and location.
  • the field angle difference between the echo detection module and the image acquisition module can be eliminated, and the field of view of the echo detection module can be made completely consistent with the field of view of the image acquisition module, so that the fields of view of the echo detection module and the image acquisition module are basically completely synchronized.
  • the transmission direction of incident light can be adjusted by a plane mirror.
  • the receiving part may include two areas: the first area 161 may include a first receiving optical module 1611 and an echo detection module 1612, and the first receiving optical module 1611 may include a first plane mirror 16111 and a first convex lens 16112.
  • the second area 162 may include a second receiving optical module 1621 and an image acquisition module 1622, and the second receiving optical module 1621 may include a second plane mirror 16211 and a second convex lens 16212.
  • the rotating mirror 160a of the lidar refracts the incident light from the corresponding scanning area of the field of view to the first area 161 or the second area 162 .
  • the first plane mirror 16111 transmits the incident light refracted by the rotating mirror 160a to the first convex lens 16112, the first convex lens 16112 converges and transmits the refracted incident light to the echo detection module 1612, and the echo detection module 1612 detects the echo signal in the received incident light to obtain echo detection information.
  • the second plane mirror 16211 transmits the incident light refracted by the rotating mirror 160a to the second convex lens 16212, the second convex lens 16212 converges and transmits the refracted incident light to the image acquisition module 1622, and the image acquisition module 1622 converts the sensed optical signal of the incident light into a corresponding electrical signal to obtain image information.
  • the above-mentioned embodiments are merely illustrative.
  • the number of convex lenses and plane mirrors included in the receiving optical module can be changed according to the actual situation, and the plane mirror can be applied to the other above-mentioned embodiments, so as to adjust the transmission direction of the incident light, so that the echo detection module and/or Or the distribution scheme of the image acquisition module is more diverse, and the embodiment of this specification does not limit the application scenarios of the plane mirror.
  • the lidar may use a one-dimensional scanning manner to scan clockwise or counterclockwise in a specified rotation direction. Therefore, the scanning trajectory can be precisely controlled, which is beneficial to suppress the distortion of the field of view of the moving object, reduce the dynamic blur, and make the subsequently generated point cloud data and/or image data more convenient for object recognition.
  • the parameters of the echo detection module and the image acquisition module can also be debugged.
  • the echo detection module can adjust its echo signal detection parameters according to the image information of the image acquisition module; the image acquisition module can likewise adjust its image acquisition parameters according to the echo detection information of the echo detection module.
  • the echo detection module can dynamically adjust the echo signal detection parameters (such as sensitivity parameters, range parameters, etc.) according to the information obtained from the area already scanned by the image acquisition module (such as ambient light level information, object size information, etc.), so that more accurate echo detection information can be collected and higher-quality point cloud data can be generated.
  • similarly, the image acquisition module can dynamically adjust the image acquisition parameters (such as exposure control parameters, dynamic range parameters, and gain parameters) according to the information obtained from the area already scanned by the echo detection module (such as distance information, reflectivity information, etc.), so that more accurate image information can be collected and higher-quality image data can be generated.
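  • a hedged sketch of this mutual adjustment loop follows; the patent names the parameter families but not the update rules, so the thresholds, factors, and function names below are assumptions:

```python
def adjust_detection_params(det_params, ambient_light):
    """Adjust echo signal detection parameters from image-side info
    (sketch; the scaling rule is an assumed example).

    ambient_light -- normalized ambient light level from the image
                     acquisition module's already scanned area.
    """
    if ambient_light > 0.8:                 # bright scene: back off
        det_params["sensitivity"] *= 0.9
    elif ambient_light < 0.2:               # dark scene: boost
        det_params["sensitivity"] *= 1.1
    return det_params

def adjust_image_params(img_params, mean_reflectivity):
    """Adjust image acquisition parameters from echo-side info
    (sketch; raising gain for low-reflectivity areas is assumed)."""
    if mean_reflectivity < 0.1:
        img_params["gain"] *= 1.2
    return img_params
```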
  • the echo detection module may further include a time-to-digital converter (Time-to-Digital Converter, TDC); the detection unit array in the echo detection module detects the echo signal in the received incident light and transmits the resulting electrical signal to the TDC.
  • the time-synchronized TDC records the time at which the detection unit array generates the electrical signal, and the processing device of the lidar can then calculate the distance information between the lidar and the object using a direct time-of-flight (Direct Time Of Flight, DTOF) algorithm.
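  • the DTOF range calculation itself is the standard round-trip relation, distance = c·Δt/2; a minimal sketch (timestamp names are illustrative):

```python
C_M_PER_S = 299_792_458.0  # speed of light

def dtof_distance_m(t_emit_ns, t_echo_ns):
    """Distance from the round-trip time recorded by the TDC (sketch)."""
    tof_s = (t_echo_ns - t_emit_ns) * 1e-9
    return C_M_PER_S * tof_s / 2.0  # halve: light travels out and back
```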
  • the data processing method provided by the embodiments of the present application will be introduced below.
  • the data processing method described below can be applied to any of the lidars described in the embodiments of this specification.
  • the content of the data processing method described below and the related content of the lidar described above may refer to each other.
  • the lidar of this specification can collect frame information according to the set frame sampling period while ensuring that the image acquisition module and the echo detection module are in a state of field-of-view matching and time synchronization; during subsequent data processing, the collected image information and echo detection information can be optimized in order to obtain higher-precision data.
  • the data processing method may include steps S171 to S173 .
  • S171 Calculate the scanning interval time between the image acquisition module and the echo detection module in the corresponding field of view scanning area.
  • the distribution positions of the echo detection module and the image acquisition module of the lidar can be: located on the same side of the receiving optical module (refer to FIGS. 12 to 14 ), or, respectively, located on both sides of the rotating mirror (refer to Figures 15 to 16).
  • the horizontal field of view angles of the image acquisition module and the echo detection module always maintain a fixed horizontal field angle difference.
  • the row-direction field angle difference can be estimated by the formula θ ≈ (c/b)×(360°/2π), where b is the distance between the image acquisition module and the receiving optical module and c is the distance between the echo detection module and the image acquisition module; with v denoting the scanning angular velocity of the lidar, the scanning interval time can then be estimated as θ/v.
  • the echo detection module and the image acquisition module are located on the same side of the receiving optical module, the echo detection module and the image acquisition module share the same group of receiving optical modules, and they have the same distance from the group of receiving optical modules. More preferably, both are located on the focal plane of the same group of receiving optical modules.
  • the receiving optical module includes all the optical devices that the echo signals and/or ambient light pass through between entering the lidar and arriving at the echo detection module/image acquisition module, including but not limited to lenses, mirrors, half mirrors, rotating mirrors, beam splitters, and the like.
  • the distance between the echo detection module and the image acquisition module can be minimized, thereby achieving high synchronization between the signals obtained by the echo detection module and the image acquisition module.
  • the field of view of the echo detection module and the field of view of the image acquisition module can match each other; during the rotation of the lidar, the image acquisition module and the echo detection module can simultaneously receive the incident light in the corresponding field of view scanning area, so the scanning interval time of the image acquisition module and the echo detection module for the corresponding field of view scanning area is 0.
  • S172 based on the scanning interval time, acquire echo detection information and image information in the corresponding scanning area of the field of view.
  • based on the scanning interval time, the corresponding relationship between the detection frame moments of the echo detection module and the acquisition frame moments of the image acquisition module may be determined; based on the detection frame moments of the echo detection module, the corresponding echo detection information is obtained; based on the acquisition frame moments of the image acquisition module, the corresponding image information is obtained; then, based on the corresponding relationship between the detection frame moments and the acquisition frame moments, the echo detection information and the image information of the corresponding field of view scanning area are determined.
  • the acquisition frame time is used to represent the time information when the image acquisition module acquires the image information of the corresponding field of view scanning area, and has a corresponding relationship with the exposure start time of the triggered imaging unit.
  • the detection frame moment is used to represent the time information at which the echo detection module collects the echo detection information of the scanning area of the corresponding field of view.
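  • a minimal sketch of pairing detection frame moments with acquisition frame moments via the scanning interval time (the tolerance and all names are assumptions, not the patent's implementation):

```python
def match_frames(detect_times, acquire_times, scan_interval, tol):
    """Pair each detection frame moment with the acquisition frame
    moment closest to (detection time + scan_interval), within tol.

    Returns (detect_time, acquire_time) pairs whose frames cover the
    same field of view scanning area.
    """
    pairs = []
    for td in detect_times:
        target = td + scan_interval
        ta = min(acquire_times, key=lambda t: abs(t - target))
        if abs(ta - target) <= tol:
            pairs.append((td, ta))
    return pairs
```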
  • S173 Perform data processing on the acquired echo detection information and the image information.
  • data processing can be performed for the echo detection information and image information in the corresponding field of view scanning area, respectively, to obtain corresponding point cloud data and image data;
  • the echo detection information and image information are fused to obtain fusion information, and data processing is performed to obtain fusion data.
  • the triggered imaging unit may sense incident light within the corresponding exposure time.
  • the imaging unit array is adapted to trigger each imaging unit group in sequence according to a preset timing sequence in response to the control instruction, so that each triggered imaging unit group senses incident light within a corresponding exposure time, and collects For the image information of the corresponding field of view scanning area, the imaging unit group includes at least one imaging unit.
  • the image acquisition module may include an imaging unit array having a plurality of imaging unit groups, the imaging unit group including at least one imaging unit.
  • the method further includes: performing signal integration processing on the exposure results of each imaging unit group to acquire the image information.
  • the signal integration processing is time delay integration processing.
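  • a minimal sketch of time delay integration over the group exposures (summation of aligned, staggered exposures is the defining operation of TDI; names are illustrative):

```python
import numpy as np

def tdi_integrate(group_exposures):
    """Time delay integration of the exposure results of each imaging
    unit group (sketch).

    group_exposures -- list of aligned exposures of the same field of
                       view strip, taken at staggered start times.
    Summing accumulates signal across groups without lengthening any
    single group's exposure time.
    """
    return np.sum(np.stack(group_exposures, axis=0), axis=0)
```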
  • if the image acquisition module adopts the grouping-control triggering scheme of the imaging unit array to perform image acquisition, then based on the scanning interval time and the grouping-control triggering scheme, the acquisition frame moments corresponding to the image information of the corresponding field of view scanning area collected by each imaging unit group are determined, and a corresponding relationship with the detection frame moments of the echo detection module is established, so that after acquiring the echo detection information and the image information in the corresponding field of view scanning area, the image information collected by each imaging unit group can be obtained.
  • the corresponding relationship between the detection frame moments of the echo detection module and the acquisition frame moments of the image acquisition module is determined based on the scanning interval time; specifically, the following steps S181 to S183 may be included.
  • S182 Determine the start acquisition frame time from the collection frame time set.
  • the corresponding acquisition frame moments can be determined, so that the image information collected by each imaging unit for the corresponding field of view scanning area can be obtained.
  • the image information of each imaging unit group collected in the corresponding field of view scanning area is fused to obtain more abundant image information, and can offset the problem of image blur caused by rotation, thereby improving picture clarity and image quality.
  • the overall exposure time of the imaging unit array for the corresponding field of view scanning area can be reduced while ensuring picture clarity and image quality, thereby ensuring image generation efficiency; this can also reduce the power consumption of the lidar and allow the imaging units a rest buffer time, extending the life of the hardware.
  • the control device of the lidar can generate corresponding control instructions, so as to control the working state of each imaging unit in the imaging unit array.
  • the imaging unit array triggers the corresponding imaging unit to sense incident light in response to the control instruction.
  • the imaging unit can be flexibly controlled, and the acquisition accuracy of the image acquisition module can be dynamically adjusted to meet various resolution requirements.
  • the control device can trigger at least one imaging unit in each row of the imaging unit array through a control instruction, and the image information collected by the triggered imaging unit in each row is used as the image information of the row.
  • the number of triggers of the imaging unit can be reduced.
  • the control device can also simultaneously trigger multiple imaging unit rows in the imaging unit array through control instructions, and the image information collected by the triggered imaging unit rows is used, after a logical operation, as the image information of one row. Therefore, by triggering multiple imaging units to collect image information at the same position, the resolution requirements on each triggered imaging unit can be reduced, and the accuracy of image information collected in dark environments such as night and cloudy days can be improved.
  • the imaging unit in the specified position can acquire the image information of the corresponding field of view scanning area.
  • the data processing method may further include: based on the echo detection information of the echo detection module, adjusting the image acquisition parameters of the image acquisition module; and/or, based on the image of the image acquisition module information, and adjust the echo signal detection parameters of the echo detection module.
  • for the relevant description of the lidar, reference may be made to the foregoing embodiments, which will not be repeated here.
  • the method may include steps S191 to S192 .
  • the exposure control parameters of the image acquisition module need to be adjusted, so that the exposure time of the image acquisition module can be dynamically changed, and the picture clarity and image quality of the image acquisition module can be improved.
  • the above-mentioned embodiment only shows the case where it is determined that the imaging condition is not met; in actual processing, there is also the case where the imaging condition is met.
  • corresponding operation steps can be set according to the actual situation, and after it is determined that the imaging conditions are met, jump to the corresponding steps for execution. For example, if the imaging conditions are met, the current imaging condition judging process can be ended, and after acquiring new image information, a new imaging condition judging process can be entered.
  • by matching the exposure amount in the image information with the exposure amount interval in the imaging conditions, it can be determined whether the image information meets the preset imaging conditions. Specifically, as shown in FIG. 20, determining whether the image information meets the preset imaging conditions may include the following steps S1911 to S1914.
  • step S1912: determine whether the exposure amount belongs to the exposure amount interval; if it does not belong to the exposure amount interval, continue to execute step S1913, otherwise jump to step S1914.
  • the image information does not meet the imaging conditions.
  • the image information meets the imaging conditions.
  • the size of the exposure control parameter of the corresponding imaging unit may be increased or decreased according to the image information: if the exposure amount in the image information is less than the minimum endpoint value of the exposure amount interval, the exposure control parameter of the imaging unit corresponding to the image information is increased; if the exposure amount in the image information is greater than the maximum endpoint value of the exposure amount interval, the exposure control parameter of the imaging unit corresponding to the image information is reduced.
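  • a minimal sketch of this adjustment rule (the step size is an assumption; the same pattern applies to the illumination-intensity check in the method of FIG. 21/FIG. 22 below):

```python
def adjust_exposure_ctrl(exposure, interval, exposure_ctrl, step=0.1):
    """Adjust an imaging unit's exposure control parameter (sketch).

    interval -- (min_endpoint, max_endpoint) of the exposure interval.
    Increase the control parameter below the minimum endpoint,
    decrease it above the maximum endpoint, otherwise keep it.
    """
    lo, hi = interval
    if exposure < lo:
        return exposure_ctrl + step
    if exposure > hi:
        return exposure_ctrl - step
    return exposure_ctrl
```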
  • each imaging unit group is obtained according to the grouping setting of each imaging unit in the image acquisition module, which is used to sequentially acquire image information of the corresponding field of view scanning area according to a preset time sequence.
  • the overall exposure time of the imaging unit array for the corresponding field of view scanning area is divided into the exposure time of each imaging unit group, thereby ensuring image generation efficiency and improving picture clarity and image quality.
  • FIG. 21 it is a flowchart of another data processing method provided in the embodiment of the present specification, and the data processing method may include steps S211 to S213 .
  • the detection of the current illumination situation can be performed by an illumination intensity detection device, and the illumination intensity detection device may be located in the lidar, or may be located in other loading platforms connected to the lidar.
  • the above-mentioned embodiment only shows the case where it is determined that the imaging condition is not met; in actual processing, there is also the case where the imaging condition is met.
  • corresponding operation steps can be set according to the actual situation, and after it is determined that the imaging conditions are met, jump to the corresponding steps for execution. For example, if the imaging conditions are met, the current imaging condition determination process may be ended, and a new imaging condition determination process may be entered after a preset detection period.
  • the following steps S221 to S224 may be included.
  • step S222: determine whether the illumination intensity value belongs to the illumination intensity interval; if it does not belong to the illumination intensity interval, continue to perform step S223, otherwise jump to step S224.
  • the illumination intensity value does not meet the illumination intensity condition, and it is determined that the image information formed corresponding to the image acquisition parameter does not meet the imaging condition.
  • the illumination intensity value conforms to the illumination intensity condition, and it is determined that the image information formed corresponding to the image acquisition parameter conforms to the imaging condition.
  • In a specific implementation, if the illumination intensity value does not meet the illumination intensity condition, the exposure control parameter of the image acquisition module may be increased or decreased according to the illumination intensity value. For example, if the illumination intensity value is less than the minimum endpoint value of the illumination intensity interval, the exposure control parameter of the image acquisition module is increased; conversely, if the illumination intensity value is greater than the maximum endpoint value of the illumination intensity interval, the exposure control parameter of the image acquisition module is reduced.
  • Likewise, before increasing the exposure control parameter, it may be determined whether the exposure control parameter is already equal to the scan period of the corresponding imaging unit; if so, each imaging unit group is obtained according to the grouping setting of the imaging units in the image acquisition module, and the groups sequentially acquire image information of the corresponding field-of-view scanning area according to a preset time sequence.
  • In this way, the overall exposure time of the imaging unit array for the corresponding field-of-view scanning area is divided into the exposure times of each imaging unit group, thereby ensuring image generation efficiency and improving picture clarity and image quality; a sketch combining these two checks follows.
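  • A sketch of this pre-capture variant under assumed values, folding in the scan-period guard of the preceding two paragraphs (the interval, step size, and names are illustrative assumptions):

```python
def adjust_for_illumination(intensity, interval, param, scan_period, step=0.1):
    """Steps S221-S224 with the scan-period guard: returns the new exposure
    control parameter and whether grouped (time-sequenced) acquisition is needed."""
    lo, hi = interval
    if lo <= intensity <= hi:            # S224: imaging condition met
        return param, False
    if intensity < lo:                   # S223: too dark
        if param == scan_period:         # cannot expose longer than one scan
            return param, True           # period -> switch to imaging unit groups
        return min(param + step, scan_period), False
    return max(param - step, 0.0), False # S223: too bright

param, use_groups = adjust_for_illumination(120.0, (300.0, 800.0),
                                            param=0.0006, scan_period=0.0006)
# param unchanged, use_groups True: divide the overall exposure time among groups
```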
  • The embodiments of the present specification also provide a data processing module; the data processing module is applied to the lidar and is connected to the receiving part of the lidar.
  • The data processing module may include a memory and a processor; the memory is adapted to store one or more computer instructions, and when the processor runs the computer instructions, the steps of the data processing method in any of the foregoing embodiments are executed. For the specific steps, reference may be made to the foregoing embodiments, which are not repeated here; an illustrative sketch of such a module is given below.
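  • As an illustrative sketch of such a module, the frame pairing below follows the scan-interval computation described earlier in this specification; the class name, tolerance, and data layout are assumptions, and the scan speed in the example (10 revolutions per second) is likewise assumed:

```python
import math

class DataProcessingModule:
    """Pairs echo detection frames with image frames that cover the same
    field-of-view scanning area, separated by the scan interval dT."""

    def __init__(self, b_mm, c_mm, omega_deg_s):
        # alpha ~ (c/b) * (360 / 2*pi): row field-of-view angle difference
        alpha = (c_mm / b_mm) * (360.0 / (2.0 * math.pi))
        self.dT = alpha / omega_deg_s  # scan interval between the two modules

    def pair_frames(self, echo_frames, image_frames, tol=1e-4):
        """echo_frames / image_frames: lists of (timestamp, payload) tuples."""
        pairs = []
        for t_e, echo in echo_frames:
            for t_i, img in image_frames:
                if abs((t_i - t_e) - self.dT) <= tol:
                    pairs.append((echo, img))
        return pairs

# Example with the spacings used in the specification: b = 50 mm, c = 2 mm,
# and an assumed scan speed of 3600 deg/s (10 revolutions per second)
dpm = DataProcessingModule(b_mm=50.0, c_mm=2.0, omega_deg_s=3600.0)
print(round(dpm.dT * 1000, 3), "ms")  # ~0.637 ms, matching the ~0.6 ms estimate
```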
  • Optionally, the processor may be a processing chip such as a CPU (Central Processing Unit), a GPU (Graphics Processing Unit) or an FPGA (Field Programmable Gate Array), an ASIC (Application Specific Integrated Circuit), or one or more integrated circuits configured to implement the embodiments of the present invention.
  • Optionally, the memory may include high-speed RAM, and may also include non-volatile memory, such as at least one disk storage.
  • In a specific implementation, the data processing module may further include an expansion interface, adapted to connect with other modules in the lidar (e.g., the acquisition module, the control module, etc.) to realize data interaction.
  • The embodiments of the present specification further provide a computer-readable storage medium on which computer instructions are stored; when the computer instructions are run, the steps of the data processing method described in any of the foregoing embodiments are executed. For the specific steps, reference may be made to the foregoing embodiments, which are not repeated here.
  • The computer-readable storage medium may include, for example, any suitable type of memory unit, memory device, memory article, memory medium, storage device, storage article, storage medium and/or storage unit, e.g., memory, removable or non-removable media, erasable or non-erasable media, writable or rewritable media, digital or analog media, hard disk, floppy disk, compact disc read-only memory (CD-ROM), compact disc recordable (CD-R), compact disc rewritable (CD-RW), optical disc, magnetic media, magneto-optical media, removable memory cards or disks, various types of digital versatile disc (DVD), magnetic tape, cassette, and the like.
  • The computer instructions may include any suitable type of code implemented using any suitable high-level, low-level, object-oriented, visual, compiled and/or interpreted programming language, e.g., source code, compiled code, interpreted code, executable code, static code, dynamic code, encrypted code, and the like.
  • An embodiment of the present specification further provides a lidar, including the data processing module according to any one of the above embodiments, where the data processing module is adapted to perform data processing on the information collected by the lidar.
  • The data processing module may be placed in a data processing device, or in other hardware devices (such as a control device), and the data processing module may also process other data of the lidar; the embodiments of this specification impose no restriction on this.
  • It can be understood that "one embodiment" or "an embodiment" referred to in the embodiments of the present specification means that a specific feature, structure or characteristic described in connection with the embodiment may be included in at least one implementation of the present specification.
  • The terms "first" and "second" in the embodiments of the specification are used for description purposes only, and cannot be understood as indicating or implying relative importance or implying the number of the indicated technical features.
  • Thus, a feature defined as "first" or "second" may expressly or implicitly include one or more of that feature.
  • Moreover, "first", "second", etc. are used to distinguish between similar objects, and are not necessarily used to describe a particular order or precedence. It is to be understood that data so used may be interchanged under appropriate circumstances, so that the embodiments of the specification described herein can be practiced in sequences other than those illustrated or described herein.

Abstract

A lidar (10), comprising: a rotary scanning mechanism (11), a receiving optical module (12), an echo detection module (13) and an image acquisition module (14). The rotary scanning mechanism (11) is adapted to rotate by means of a mechanical device; the receiving optical module (12) is adapted to transmit incident light (1A) to the echo detection module (13) and the image acquisition module (14) while the rotary scanning mechanism (11) rotates; the echo detection module (13) is adapted to obtain an echo signal from the incident light (1A) entering through the receiving optical module (12), so as to obtain echo detection information; and the image acquisition module (14) is adapted to convert the incident light (1A) entering through the receiving optical module (12) into a corresponding electrical signal, so as to obtain image information. Also provided are a data processing method, a data processing module and a medium, which can improve the field-of-view matching and time synchronization between image data and point cloud data, and can guarantee the accuracy of the image data and the point cloud data.

Description

激光雷达、数据处理方法及数据处理模块、介质 技术领域
本说明书实施例涉及激光雷达技术领域,尤其涉及激光雷达、数据处理方法及数据处理模块、介质。
背景技术
在自动驾驶应用领域中,自动驾驶系统中可以集成激光雷达和图像采集装置,通过激光雷达(Light Detection And Range,LIDAR或Laser Detection And Range,LADAR)的数据或者图像采集装置的数据能够辅助自动驾驶系统。在某些应用场景中,需要将二者的数据融合之后使用,此时,需要对二者的视场(Field of View,FoV)进行校准和调整,并且需要对二者收集的数据的时间进行校准和调整,使得二者视场匹配且时间同步。
在传统的技术里,激光雷达和图像采集装置作为两套相对独立的装置,通常分别安装于车辆的不同位置进行运作。
在激光雷达和图像采集装置独立运作的情况下,激光雷达与图像采集模块之间的视场角存在着较大的差异,两者获得的数据需要进行视场匹配以获得最终的结果。然而由于视场角相差较大,在某些高速运动的场景下,诸如汽车在高速上行驶的情况下,两者所获得的数据的差异会比较大,即使进行数据融合也无法获得完全对应的视场信息;同时过多的组件也会导致系统结构较为复杂,体积增加。
如何改善图像数据和点云数据之间视场匹配度和时间同步性成为了本领域技术人员亟待解决的问题。
为了解决该问题,出现一种通过单光子雪崩二极管(Single Photon Avalanche Diode,SPAD)阵列同时捕获图像信息和回波探测信息的解决方案。此种方案使用单个SPAD阵列同时捕获激光回波信息和图像信息。但是,由于SPADs阵列只能感应是否有光子被某个SPAD单元捕获,因此,仅能获得低分辨率的黑白图像信息。并且,回波探测信息与图像信息在捕获过程中对于环境光的需求相互矛盾,即更高精度的回波探测信息需要更少的环境光,而更高精度的图像信息需要更多的环境光。所以使用单SPAD阵列的传统方案无法同时保障二者的精度,局限性较大。此外,SPAD的高单价增加了系统硬件成本。
由上可知,通过SPADs阵列同时捕获图像信息和回波探测信息,虽然能够改进图像数据和点云数据之间视场匹配度和时间同步性,但是大大降低了图像采集质量,难以兼顾点云数据的质量和图像数据的精度,且增加系统硬件成本。
技术问题
因此,如何同时获得高质量的点云数据和图像数据,同时兼顾良好的同步性和较低的系统硬件成本,已成为本领域技术人员的技术难点。
技术解决方案
有鉴于此,本说明书实施例提供一种激光雷达、数据处理方法及数据处理模块、介质,能够提高图像数据和点云数据之间视场匹配度和时间同步性,并且可以保障图像数据和点云数据的精确度。
本说明书实施例提供了一种激光雷达,所述激光雷达包括旋转扫描机构,激光雷达还包括:接收光学模块、回波探测模块和图像采集模块。
所述旋转扫描机构,适于通过机械装置进行旋转。
所述接收光学模块,适于在所述旋转扫描机构进行旋转的过程中,将入射光传递至所述回波探测模块和所述图像采集模块。
所述回波探测模块,适于由所述接收光学模块入射的入射光中获取回波信号,得到回波探测信息。
所述图像采集模块,适于将通过所述接收光学模块入射的入射光转换为相应的电信号,得到图像信息。
可选地,所述激光雷达包括以下至少一种:转镜扫描式激光雷达,其中的转镜由所述旋转扫描机构带动旋转;机械转动式激光雷达,其中,所述接收光学模块、回波探测模块以及图像采集模块由所述旋转扫描机构带动旋转。
可选地,所述激光雷达为转镜扫描式激光雷达,所述回波探测模块和图像采集模块位于所述激光雷达的转镜的同侧。
可选地,所述激光雷达为转镜扫描式激光雷达,所述回波探测模块和图像采集模块分别位于所述激光雷达的转镜的两侧。
可选地,所述回波探测模块和所述图像采集模块采用如下任意一种方式设置于所述激光雷达中:所述回波探测模块和所述图像采集模块设置于同一基板的同一硅片上;所述回波探测模块和所述图像采集模块设置于同一基板的不同硅片上;所述回波探测模块和所述图像采集模块设置于同一印刷线路板的不同的基板上;所述回波探测模块和所述图像采集模块设置于不同印刷电路板的基板上。
可选地,所述基板上覆盖有塑封层,在所述塑封层开设对应于回波探测模块的第一透光窗和对应于图像采集模块的第二透光窗。
可选地,所述图像采集模块包括像素级滤光模块,适于对入射光进行滤光,所述像素级滤光模块采用半导体工艺实现。
可选地,所述回波探测模块和所述图像采集模块分别采用以下任意一种结构类型设置于硅片上:前照式结构;后照式结构;堆栈式结构。
可选地,设置于同一硅片上的所述回波探测模块和所述图像采集模块采用相同的结构类型。
可选地,所述图像采集模块包括:由N×M个成像单元组成的成像单元阵列,N和M均为正整数,N表示行数,M表示列数。
可选地,所述成像单元阵列的行数N≥M。
可选地,所述回波探测模块的行向视场角与所述图像采集模块的行向视场角一致或存在确定的对应关系。
可选地,所述成像单元阵列包括多个成像单元组,所述成像单元组包括至少一个成像单元,且各个成像单元组的曝光结果经过信号积分处理后得到图像信息。
可选地,所述信号积分处理为时间延迟积分处理。
可选地,所述成像单元阵列适于响应于控制指令,触发相应的成像单元感应入射光。
可选地,所述成像单元阵列适于响应于所述控制指令,触发相应的成像单元,并根据预设曝光控制参数控制相应的成像单元在相应的曝光时间内感应入射光。
可选地,所述成像单元阵列适于响应于控制指令,按照预设的时序依次触发各成像单元组,使被触发的各成像单元组采集相对应的视场扫描区域的图像信息,所述成像单元组包括至少一个成像单元。
可选地,所述回波探测模块和图像采集模块均位于所述接收光学模块的焦平面上。
可选地,所述接收光学模块包括平面镜,所述平面镜将所述入射光反射至所述回波探测模块或所述图像采集模块。
可选地,所述回波探测模块包括如下至少一种:SPADs阵列;SiPM;APD阵列。
并且,所述图像采集模块包括以下至少一种:CIS阵列;CCD阵列。
可选地,所述激光雷达采用一维扫描方式。
本说明书实施例还提供了一种数据处理方法,应用于上述任一项所述激光雷达,所述数据处理方法包括以下步骤:计算所述图像采集模块和所述回波探测模块之间处于相对应的视场扫描区域的扫描间隔时间;基于所述扫描间隔时间,获取处于所述相对应的视场扫描区域的回波探测信息和图像信息;对所获得的所述回波探测信息和所述图像信息进行数据处理。
可选地,所述基于所述扫描间隔时间,获取处于所述相对应的视场扫描区域的回波探测信息和图像信息,包括:基于所述扫描间隔时间,确定所述回波探测模块的探测帧时刻和所述图像采集模块的采集帧时刻之间的对应关系;基于所述回波探测模块的探测帧时刻,获取相应的回波探测信息;基于所述图像采集模块的采集帧时刻,获取相应的图像信息;基于所述探测帧时刻和所述采集帧时刻之间的对应关系,确定处于所述相对应的视场扫描区域的回波探测信息和图像信息。
可选地,所述图像采集模块包括具有多个成像单元组的成像单元阵列,所述成像单元组包括至少一个成像单元;在所述基于所述图像采集模块的采集帧时刻,获取相应的图像信息之前,还包括:对每个成像单元组的曝光结果进行信号积分处理,获得所述图像信息。
可选地,所述信号积分处理为时间延迟积分处理。
可选地,所述图像采集模块包括成像单元阵列,响应于控制指令,按照预设的时序依次触发各成像单元组,使被触发各成像单元组采集所述相对应的视场扫描区域的图像信息,所述成像单元组包括至少一个成像单元。
所述基于所述扫描间隔时间,确定所述回波探测模块的探测帧时刻和所述图像采集模块的采集帧时刻之间的对应关系,包括:根据所述控制指令,确定采集于所述相对应的视场扫描区域的图像信息对应的采集帧时刻,得到所述视场扫描区域的采集帧时刻集合;从所述采集帧时刻集合中确定起始采集帧时刻;基于所述扫描间隔时间,确定与所述起始采集帧时刻对应的探测帧时刻,与所述采集帧时刻集合中各采集帧时刻建立对应关系。
可选地,所述基于所述图像采集模块的采集帧时刻,获取相应的图像信息,包括:基于所述图像采集模块的采集帧时刻,获取指定位置的成像单元采集所述相对应的视场扫描区域的图像信息。
可选地,所述回波探测模块和图像采集模块均位于所述接收光学模块同一侧。
所述计算所述图像采集模块和所述回波探测模块之间处于所述相对应的视场扫描区域的扫描间隔时间的步骤,包括:基于所述图像采集模块和所述接收光学模块之间的间距、所述回波探测模块和所述图像采集模块之间的间距、以及所述激光雷达的扫描角速度,计算所述图像采集模块的视场角和所述回波探测模块视场角对于所述相对应的视场扫描区域的扫描间隔时间。
可选地,所述对所获得的所述回波探测信息和所述图像信息进行数据处理,包括以下任意一种:对处于所述相对应的视场扫描区域的回波探测信息和图像信息分别进行数据处理,得到相应的点云数据和图像数据;对处于所述相对应的视场扫描区域的回波探测信息和图像信息,将所述回波探测信息和所述图像信息进行融合,得到融合信息,并进行数据处理,得到融合数据。
可选地,所述的数据处理方法还包括以下至少一种:基于所述回波探测模块的回波探测信息,调整所述图像采集模块的图像采集参数;基于所述图像采集模块的图像信息,调整所述回波探测模块的回波信号探测参数。
可选地,所述数据处理方法还包括:基于所述图像采集模块采集的图像信息,确定所述图像信息是否符合预设的成像条件;在确定不符合所述成像条件时,调整所述图像采集模块的曝光控制参数。
可选地,所述基于所述图像采集模块采集的图像信息,确定所述图像信息是否符合预设的成像条件,包括:获取所述图像信息中的曝光量,确定是否属于所述成像条件中的曝光量区间;若不属于所述曝光量区间,则所述图像信息不符合成像条件。
可选地,所述在确定不符合所述成像条件时,调整所述图像采集模块的曝光控制参数,包括以下至少一种:若所述图像信息中的曝光量小于所述曝光量区间的最小端点值,则增大所述图像采集模块的曝光控制参数;若所述图像信息中的曝光量大于所述曝光量区间的最大端点值,则减小所述图像采集模块的曝光控制参数。
可选地,所述的数据处理方法还包括:在增大所述图像采集模块的曝光控制参数之前,判断所述图像采集模块的曝光控制参数的大小与所述图像采集模块中相应的成像单元的扫描周期是否相等,若相等,则根据所述图像采集模块中各成像单元的分组设定,得到各成像单元组,用以按照预设的时序依次采集相对应的视场扫描区域的图像信息。
可选地,所述数据处理方法还包括:检测当前光照情况,得到相应的光照强度值;基于所述光照强度值,判断在所述光照强度值下,所述图像采集模块当前的曝光控制参数是否符合所述成像条件;在确定不符合所述成像条件时,调整所述图像采集模块的曝光控制参数。
本说明书实施例还提供了一种数据处理模块,包括存储器和处理器;所述数据处理模块应用于激光雷达,所述数据处理模块的存储器适于存储一条或多条计算机指令,所述处理器运行所述计算机指令时执行上述任一项所述方法的步骤。
本说明书实施例还提供了一种计算机可读存储介质,其上存储有计算机指令,所述计算机指令运行时执行上述任一项所述方法的步骤。
本说明书实施例还提供了一种激光雷达,包括:上述数据处理模块,所述数据处理模块适于对所述激光雷达采集的信息进行数据处理。
有益效果
采用本说明书实施例提供的激光雷达结构,复用激光雷达的旋转扫描机构和接收光学模块,从而将入射光进行汇聚后传递至回波探测模块和图像采集模块,能够大幅度减小回波探测模块的视场和图像采集模块的视场之间的误差,使得二者处理相对应的视场扫描区域的入射光的时间差较小,且能够确保二者的扫描轨迹一致,综上可知,本说明书实施例提供的激光雷达能够提高图像数据和点云数据之间视场匹配度和时间同步性,并且可以保障图像数据和点云数据的精确度。
进一步地,所述图像采集模块可以包括成像单元阵列,成像单元阵列能够响应于控制指令,触发相应的成像单元感应入射光,从而灵活控制像成像单元,能够动态调整图像采集模块的采集精度,满足各种分辨率要求。
进一步地,所述回波探测模块和所述图像采集模块可以设置于同一基板的同一硅片上,又可以设置于同一基板的不同硅片上,也可以设置于同一印刷线路板的不同的基板上,还可以设置于不同印刷电路板的基板上,使所述回波探测模块和所述图像采集模块能够灵活布置于激光雷达中,不受到现有布局的限制。
进一步地,所述回波探测模块可以根据所述图像采集模块的图像信息,调整回波信号探测参数,以获得质量更好的点云数据;所述图像采集模块可以根据所述回波探测模块的回波探测信息,调整图像采集参数,以获得质量更好的图像数据。
进一步地,所述回波探测模块可以包括CIS阵列和/或CCD阵列,从而可以降低激光雷达的成本,并且可以获得颜色信息,进而生成视觉效果更好的彩色图像。
进一步地,所述激光雷达采用一维扫描方式,能够精确控制扫描轨迹,有利于抑制运动物体的视场畸变,减少动态模糊的情况,使后续生成的点云数据和/或图像数据更便于物体识别。
附图说明
为了更清楚地说明本说明书实施例的技术方案,下面将对本说明书实施例或现有技术描述中所需要使用的附图作简单地介绍,显而易见地,下面所描述的附图仅仅是本说明书的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本说明书实施例中一种激光雷达的结构示意图。
图2a是本说明书实施例中一种图像采集模块的结构示意图。
图2b是对应于图2a中图像采集模块的一种行向视场角示意图。
图2c是对应于图2a中成像单元阵列按列采集的示意图。
图2d是图2a中各成像单元列对于相对应的视场扫描区域的一种曝光时序图。
图2e是图2a中各成像单元列对于相对应的视场扫描区域的另一种曝光时序图。
图2f是图2a中各成像单元列对于相对应的视场扫描区域的又一种曝光时序图。
图3是本说明书实施例中一种回波探测模块的结构示意图。
图4是本说明书实施例中一种回波探测模块和图像采集模块的集成方式示意图。
图5是本说明书实施例中另一种回波探测模块和图像采集模块的集成方式示意图。
图6是本说明书实施例中另一种回波探测模块和图像采集模块的集成方式示意图。
图7是本说明书实施例中另一种回波探测模块和图像采集模块的集成方式示意图。
图8是本说明书实施例中一种前照式集成芯片的结构示意图。
图9是本说明书实施例中一种后照式集成芯片的结构示意图。
图10是本说明书实施例中一种堆栈式集成芯片的结构示意图。
图11是本说明书实施例中另一种堆栈式集成芯片的结构示意图。
图12是本说明书实施例中一种机械转动式激光雷达的应用场景示意图。
图13是本说明书实施例中一种转镜扫描式激光雷达的应用场景示意图。
图14是本说明书实施例中另一种转镜扫描式激光雷达的应用场景示意图。
图15是本说明书实施例中一种转镜扫描式激光雷达的应用场景示意图。
图16是本说明书实施例中另一种转镜扫描式激光雷达的应用场景示意图。
图17是本说明书实施例中一种数据处理方法的流程图。
图18是本说明书实施例中一种采集帧时刻与探测帧时刻建立对应关系的流程图。
图19是本说明书实施例中另一种数据处理方法的流程图。
图20是本说明书实施例中一种成像条件判断方法的流程图。
图21是本说明书实施例中另一种数据处理方法的流程图。
图22是本说明书实施例中一种光照强度条件判断方法的流程图。
本发明的实施方式
由背景技术部分可知,在某些应用场景中需要将激光雷达的数据和图像采集装置的数据融合之后使用,然而激光雷达和图像采集装置之间视场匹配和时间同步的效果不理想,反而造成硬件资源浪费,影响数据融合结果。
因此,如何改善图像数据和点云数据之间视场匹配度和时间同步性成为了本领域技术人员亟待解决的问题。
为了解决上述问题,本说明书实施例提供了一种激光雷达结构,将来自视场相对应的视场扫描区域的入射光分别传递至回波探测模块和图像采集模块进行信号采集,从而提高图像数据和点云数据之间视场匹配度和时间同步性,并且可以保障图像数据和点云数据的精确度。
根据本发明一个优选实施例的激光雷达结构,至少包括旋转扫描机构、接收光学模块、回波探测模块和图像采集模块。
其中,所述接收光学模块包括:回波信号和/或环境光从进入激光雷达至到达所述回波探测模块和/或图像采集模块之间共同经过的部分或全部光学器件。例如,可包括但不限于透镜或透镜组、反射镜、半透半反镜、转镜、分光镜等等光学器件。
优选地,根据本发明的一个优选实施例,当回波探测模块和图像采集模块位于接收光学模块的同一侧时,该回波探测模块和图像采集模块可完全共用同一套光学系统,亦即,入射至激光雷达的回波信号和环境光信号通过完全相同的光学接收模块分别到达回波探测模块和图像采集模块。同时,回波探测模块和图像采集模块均位于该光学接收模块相对应的同一个平面上。作为一个更优选地方案,回波探测模块和图像采集模块均位于该光学接收模块相对应的同一个焦平面上。
根据本发明的一个实施例,当回波探测模块和图像采集模块位于接收光学模块的同一侧时,回波探测模块和图像采集模块紧密排列。由于两者的间距而造成的视场的偏差小于一个提前设定的阈值。此阈值的大小由激光雷达的应用场景而决定。
优选地,回波探测模块和图像采集模块小型化,芯片化,由半导体制造工艺加工而成。他们的物理尺寸在毫米级,两个模块间的物理间距也在毫米级。
根据又一实施例,当回波探测模块和图像采集模块位于接收光学模块的同一侧时,该回波探测模块与图像采集模块共用所述激光雷达中的部分光学模块,亦即,所述接收光学模块包括激光雷达中的部分光学模块。
更优选地,当激光雷达为转镜式雷达且回波探测模块和图像采集模块分别位于转镜的两侧时,所述接收光学模块可以包括转镜。亦即,转镜将入射光(回波信号和/或环境信号)分别反射至两侧的回波探测模块和图像采集模块。而作为优选方案,此时,回波探测模块和图像采集模块还可分别对应不同的后续光学器件组,以将来自转镜的入射光分别引导至回波探测模块和图像采集模块。更为优选地,该两个模块分别对应的后续光学器件组采用相同规格的器件和相同参数的设置方式,以使得两侧的光路的同步性更好。
其中,所述视场扫描区域可以为:激光雷达在一个帧采样周期内对外界进行向扫描的范围,所述帧采样周期为激光雷达单次采集帧信息的时长。具体而言,所述视场扫描区域可以包括:在一个帧采样周期内,回波探测模块和图像采集模块分别根据自身的视场角在外界进行扫描的范围,根据回波探测模块和图像采集模块实际硬件结构,所述回波探测模块的行向视场角与所述图像采集模块的行向视场角一致或存在确定的对应关系。换而言之,回波探测模块的视场角和图像采集模块的视场角可以相同,从而回波探测模块的视场角和图像采集模块的视场角完全重叠),回波探测模块的视场角和图像采集模块的视场角也可以不完全相同,从而回波探测模块的视场角和图像采集模块的视场角部分重叠。激光雷达采集的帧信息可以包括:回波探测模块的回波探测信息和图像采集模块的图像信息。其中,所述回波探测信息可以包括点云信息。诸如,各个探测点的距离信息、位置坐标信息等。优选地,所述回波探测信息还可包括与各个探测点相关其他信息。例如,速度信息等。
可以理解的是,所述视场扫描区域不是一个真实存在边界限定的区域,而是根据激光雷达的扫描变化而变化的动态区域。在本发明的实际应用中,回波探测模块的视场角和图像采集模块的视场角之间可以存在视场角度差,在视场角度差符合偏差阈值范围时,可以认为回波探测模块的视场角和图像采集模块的视场角相同,在一个帧采样周期内,回波探测模块和图像采集模块相对应的视场扫描区域为同一视场扫描区域。
为使本领域技术人员更加清楚地了解及实施本说明书实施例的构思、实现方案及优点,以下参照附图,通过具体应用场景进行详细说明。
参照图1所示的本说明书实施例中一种激光雷达的结构示意图,在本说明书实施例中,激光雷达10可以包括:旋转扫描机构11、接收光学模块12、回波探测模块13和图像采集模块14。
所述旋转扫描机构11,适于通过机械装置进行旋转。
所述接收光学模块12,适于在所述旋转扫描机构11进行旋转的过程中,将入射光1A传递至所述回波探测模块13和图像采集模块14。
所述回波探测模块13,适于由所述接收光学模块12入射的入射光1A中获取回波信号,得到回波探测信息。
所述图像采集模块14,适于将通过所述接收光学模块12入射的入射光1A转换为相应的电信号,得到图像信息。
采用本说明书实施例提供的激光雷达结构,复用激光雷达的旋转扫描机构和接收光学模块,从而将入射光进行汇聚后传递至回波探测模块或图像采集模块,能够大幅度减小回波探测模块的视场和图像采集模块的视场之间的误差,使得二者处理相对应的视场扫描区域的入射光的时间差较小,且能够确保二者的扫描轨迹一致,综上可知,本说明书实施例提供的激光雷达能够提高图像数据和点云数据之间视场匹配度和时间同步性,并且可以保障图像数据和点云数据的精确度。
可以理解的是,图1仅为示例说明,在根据本发明的实际应用中,根据激光雷达的旋转方式,旋转扫描机构与接收光学模块、回波探测模块以及图像采集模块之间的相对运动关系可以不同。例如,若所述激光雷达为转镜扫描式激光雷达,则接收光学模块中的转镜由所述旋转扫描机构带动旋转,旋转扫描机构不会带动接收光学模块、回波探测模块或图像采集模块旋转;若所述激光雷达为机械转动式激光雷达,其中的接收光学模块、回波探测模块以及图像采集模块由所述旋转扫描机构带动旋转。本说明书实施例对激光雷达的旋转方式不做具体限制。
具体实施中,如图2a所示,为本说明书实施例中一种图像采集模块的结构示意图,所述图像采集模块20可以包括:成像单元阵列21,所述成像单元阵列21用于感应入射光的一面可以排列有N×M个成像单元,如图2 a中斜线部分所示为一个成像单元,即成像单元211。其中,N和M均为正整数,N表示行数,M表示列数。
所述成像单元阵列21上的各成像单元用于感应所述接收光学模块传递的入射光,并将感应到的光信号转换为相应的电信号,从而获得图像信息。可以理解的是,图2a仅为示例说明,在实际加工中,根据加工工艺水平,各成像单元之间的间距可以非常小。
在具体实施中,所述图像采集模块还可以包括:适配成像单元阵列的各种电路或各种元器件,如与成像单元阵列适配的成像读出电路,所述成像读出电路可以用于采集各成像单元生成的电信号。
需要说明的是,所述成像单元阵列用于感应入射光的一面可以称为成像光敏面,由各成像单元的成像光敏面组成,入射光经过接收光学模块传递到成像光敏面,并且根据传递角度可以在成像光敏面上进行扫描,所述成像单元阵列的行方向与入射光在成像单元阵列上的扫描方向平行,所述成像单元阵列的列方向与所述行方向在成像光敏面上互不平行。
为了便于描述,可以将处于同一行方向的成像单元称为一行成像单元,并可以将处于同一列方向的成像单元称为一列成像单元。例如,可参考图2a中成像单元211所在的一列成像单元21A。
可选地,为了减小图像采集模块的体积,使得图像采集模块能够根据实际需求灵活设置于激光雷达内,可以调整所述成像单元阵列的行数或列数,例如,减少所述成像单元阵列的列数,使得所述成像单元阵列的行数N≥M。进一步地,所述成像单元阵列可以为成像单元线列,所述成像单元线列是指行数与列数差距较大的成像单元阵列,例如,所述成像单元阵列的行数N可以远远大于列数M。
可以理解的是,根据实际情景,可以通过设定数值界限划分差距的程度,从通过行数与列数的差值判断是否为“较大差距”,例如,在一些情景中,行数与列数相差5倍视为较大差距,如所述成像单元阵列的行数N≥5*M;在另一些情景中,行数与列数相差100倍视为较大差距,如所述成像单元阵列的行数N≥100*M。
在具体实施中,图像采集模块可以由CIS(CMOS image sensor,CMOS图像传感器)和/或CCD(Charge-coupled Device,电荷耦合器件)等来实现。
相应地,成像单元阵列可以采用以下任一种类型来实现:1)由独立的CIS作为成像单元形成的CIS阵列;2)由独立的CCD作为成像单元形成的CCD阵列。
在一可选示例中,图像采集模块可以包括CIS阵列和/或CCD阵列。
进一步地,为了提高图像采集模块的兼容性,所述图像采集模块可以包括CIS阵列。从而可以兼容互补金属氧化物半导体(Complementary Metal Oxide Semiconductor,CMOS)工艺相关的其他硬件。
采用上述方案,可以降低激光雷达的成本,并且可以获得图像信息,进而根据实际需求生成数据量较小的黑白图像或视觉效果更好的彩色图像。
而本说明书实施例提供的激光雷达结构方案相较于上述解决方案,从硬件设计角度而言,本说明书实施例的激光雷达包含的旋转扫描机构可以是任意一种能够实现旋转扫描的装置,接收光学模块可以是任意一种能够实现光学汇聚功能的模块,激光雷达包含的回波探测模块可以是任意一种能够实现回波探测功能的模块,结合激光雷达的其他基础设施,如发射部、数据处理装置和传输装置等,可以实现激光雷达的正常工作。
将图像采集模块设置于接收光学模块传递入射光的路径上,并复用现有激光雷达的接收光学模块、旋转扫描机构、发射部、数据处理装置和传输装置等设施,亦即,无需改变激光雷达(优选地,为具有旋转扫描机构的激光雷达)原本的光路设置,也不需要改变激光雷达内的各个器件本身的布局或相对位置等,即可实现360°图像采集功能,且无需在激光雷达中增加额外的硬件设施,节约硬件成本。
现有的激光雷达中仅包含回波探测模块,而按照一个本发明的一个实施例,可以得到由回波探测模块和图像探测模块组成的模组。只需将现有的激光雷达的探测模块替换成本发明实施例中回波探测模块和图像探测模块组成的模组,即可实现功能的改进和提升。
从需求角度而言,由于图像信息和回波探测信息通过相应的模块分开获取,根据实际图像采集需求,可以单独调整图像采集模块的相关参数,或者选用符合需求的图像采集模块,从而能够获取不同精度和色度的图像信息。例如,根据实际图像采集需求,图像采集模块可以包括CIS阵列和/或CCD阵列。因而,本说明书实施例提供的激光雷达可以适应更加多变的场景和条件,具有更广泛的应用范围。
从图像质量角度而言,由于技术发展程度不同,通常使用SPAD采集图像时,分辨率仅为千级别和万级别,如1024像素、65536像素等,而使用CIS或CCD采集图像时,分辨率可以达到千万级别甚至是亿万级别,如3千万像素、1亿像素等。由此可知,CIS阵列和CCD阵列能够采集更高精度的图像信息。并且,CIS阵列和CCD阵列均可以采集色彩信息,进而可以选择性地生成黑白或彩色的图像。
故相比于上述通过SPADs阵列同时捕获图像信息和回波探测信息的方案,采用本说明书实施例的激光雷达更加灵活多变,可以获取高质量的图像信息,且不增加硬件成本。
在具体实施中,所述成像单元阵列适于响应于控制指令,触发相应的成像单元感应入射光,并转换为相应的电信号。从而灵活控制像成像单元,能够动态调整图像采集模块的采集精度,满足各种分辨率要求。
在本发明的实际应用中,图像采集模块对外界进行扫描的范围由成像单元阵列通过接收光学模块产生的视场角决定,成像单元阵列的视场角通过被触发成像单元的视场角组合而成。
其中,成像单元阵列的视场角可以包括:成像单元阵列沿着行方向扫描的行向视场角,所述成像单元阵列的行向视场角通过被触发成像单元的行向视场角组合而成。
入射光通过接收光学模块的中心不发生折射,能够按照入射角度传递到被触发成像单元阵列的成像光敏面上,由此,为了便于计算成像单元阵列的视场角,可以计算经过接收光学模块中心且能够传递到被触发成像单元的成像光敏面边缘的最大夹角,将计算得到的最大夹角作为成像单元阵列的视场角。
例如,参照图2b所示,为图2a中成像单元阵列触发所有成像单元后得到的行向视场角的示意图,入射光2A通过接收光学模块22的几何中心点22A之后,行方向能够传递到被触发成像单元的成像光敏面边缘的最大夹角为θ,即行向视场角为θ。
在具体实施中，成像单元的行向视场角可以由成像单元在行方向的感光宽度（即成像单元的成像光敏面宽度）、所述成像单元和所述接收光学模块之间的间距决定，即行向视场角FOV_行=2×arctg(A/2L)，其中，A为成像单元的成像光敏面宽度，L为所述成像单元和所述接收光学模块之间的间距。由于成像单元的行方向与入射光在成像单元阵列上的扫描方向平行，因此，成像单元在行方向的扫描范围与激光雷达的扫描范围相关，如激光雷达可以进行360°扫描，则成像单元在行方向也可以进行360°扫描。由此，成像单元的成像光敏面宽度可以视为入射光在行方向单次扫过的弧长，成像单元的行向视场角计算公式可以近似为FOV_行≈(A/L)×(360°/2π)。
可以理解的是,由于各成像单元的排布较为紧密,为了便于计算视场角,可以将成像单元阵列和接收光学模块之间的间距作为所述成像单元和所述接收光学模块之间的间距,从而简化计算难度。进一步地,若成像单元阵列处于接收光学模块的焦平面上,成像单元阵列和接收光学模块之间的间距即为成像单元阵列和接收光学模块之间的焦距。此外,由于成像单元的尺寸通常较小,可以将成像单元的外形尺寸作为成像单元的成像光敏面宽度,便于计算。
需要说明的是,为了便于实际测量,可以将成像单元阵列的成像光敏面到接收光学模块几何中心点的垂直距离作为二者的间距,例如,参照图2b中的几何中心点22A与成像单元阵列21的成像光敏面之间的距离b。
在具体实施中,相邻成像单元之间的间隔使得相邻成像单元存在行向视场角度差,由于各成像单元的排布较为紧密,相邻成像单元之间的间隔非常小,因此,可以忽略相邻成像单元之间的行向视场角度差。
在具体实施中,激光雷达的控制装置根据分辨率要求生成相应的控制指令,成像单元阵列响应于控制指令后,可以控制各成像单元的工作状态,使各被触发成像单元采集图像信息,各被触发成像单元采集的图像信息经过数据处理后可以生成图像。因此,控制所述成像单元阵列中被触发的成像单元,可以动态调整图像采集模块的采集精度。
例如,对于分辨率要求不高的情况,控制装置通过控制指令可以触发成像单元阵列同一行中至少一个成像单元,同一行中被触发成像单元采集到的图像信息作为该行的图像信息。由此,通过控制成像单元的触发数量,可以降低激光雷达的功耗。
又例如,对于要求高分辨率的情况,控制装置通过控制指令可以同时触发成像单元阵列多行中至少两个处于不同行方向的成像单元,多行中被触发成像单元采集到的图像信息经过逻辑运算后,作为其中一行的图像信息。由此,通过触发多个成像单元来采集同一区域的图像信息,可以降低各被触发成像单元的分辨率要求,提高在黑夜、阴天等暗光环境下采集的图像信息的精度。
可以理解的是,上述实施例仅为示例说明,在本发明的实际应用过程中,可以根据分辨率要求可以控制不同行或列的成像单元的触发数量,还可以根据实际情景设定逻辑运算方式,如求和、加权平均等,本说明书实施例对此不做限制。
在具体实施中,由于成像单元阵列的视场角通过被触发成像单元的视场角组合而成,而激光雷达的旋转扫描机构按照预设的扫描角速度进行转动,改变扫描角度,使激光雷达能够接收不同来源和方位的入射光,成像单元阵列将感应到的入射光进行光电转换后,得到图像信息,成像单元阵列单次感应到入射光的时间称为曝光时间。
通过预设成像单元阵列中各成像单元的曝光控制参数,可以使成像单元阵列在响应于所述控制指令后,触发相应的成像单元,并根据预设曝光控制参数控制各被触发的成像单元在相应的曝光时间内感应来自相对应的视场扫描区域的入射光,采集相对应的视场扫描区域的图像信息。
在一可实现示例中,可以将激光雷达扫过所述成像单元的行向视场角的行向扫描时间设置为所述成像单元曝光控制参数,换而言之,成像单元在触发后可以在相应的行向扫描时间内感应入射光。
其中，所述行向扫描时间根据所述成像单元的成像光敏面宽度、所述成像单元阵列和所述接收光学模块之间的间距、以及所述激光雷达的扫描角速度计算得到，即行向扫描时间ts=FOV_行/v≈(A/L)×(360°/2π)×(1/v)，其中v为所述激光雷达的扫描角速度。
进一步地,若成像单元的曝光时间小于行向扫描时间,则在完成当前视场扫描区域的曝光后,可以关闭该成像单元,直至激光雷达的旋转扫描机构转动使成像单元对应下一个视场扫描区域的入射光。由此,可以降低激光雷达的功耗。
图像采集模块在捕捉动态变化场景的图像信息时,由于曝光过程中场景发生变化,从而产生运动模糊(motion blur)的问题。为了提高图像的清晰度,需要增加图像采集模块的曝光时间,曝光时间越长,图像采集模块采集到的图像信息越多,经过数据处理生成的图像越清晰,然而过长的曝光时间会损坏图像采集模块的硬件,也延长了图像生成时间,无法满足自动驾驶等动态应用场景对于快速采集图像的需求。
为了解决上述运动模糊的问题,所述成像单元阵列适于响应于所述控制指令,按照预设的时序依次触发各成像单元组,使被触发的各成像单元组分别采集相对应的视场扫描区域的图像信息。然后,通过融合各被触发的成像单元在各自曝光时间内对相对应的视场扫描区域采集的图像信息,可得到该视场扫描区域经过融合的图像信息。若各成像单元组的视场角度差符合预设差值范围时,可以认为各成像单元组的视场角相同,各成像单元组采集的视场扫描区域为同一视场扫描区域。
其中,所述成像单元组可以包括至少一个成像单元;所述时序可以根据各成像单元组采集相对应的视场扫描区域的时间差和入射光在成像单元阵列上的扫描方向进行设定;各成像单元组采集相对应的视场扫描区域的时间差由相邻成像单元组之间的间隔距离决定。
在具体实施中,所述成像单元阵列可包括多个成像单元组,所述成像单元组包括至少一个成像单元。例如,将同一列的成像单元归为同一成像单元组。并且,多个成像单元组的曝光结果可进行信号积分处理,并最终获得图像信息。可选地,采用时间延迟积分(Time Delay and Integration,TDI)处理方式来将各成像单元组的曝光信息进行处理,以得到多个成像单元的曝光信息累积后的图像信息。
作为一种可选方案,可以采用具有TDI功能的成像单元阵列,例如,具有TDI功能的CCD阵列,或者具有TDI功能的CIS阵列。
作为另一种可选方案,可将各列成像单元组的输出分别接入具有相应功能(诸如TDI)的积分电路中,经由该积分电路处理后输出相应的图像信息。通过对多个成像单元组的输出进行积分处理,能够弥补弱环境光条件下的成像困难,对于弱光情况下输出图像信息的信噪比具有较为显著的提升。
在具体实施中,若忽略各相邻成像单元之间的行向视场角度差,则根据入射光在成像单元阵列上的扫描先后顺序,可以确定相邻成像单元组之间对于相对应的视场扫描区域的时间差。具体而言,相邻成像单元组之间,扫描顺序在前的成像单元组的行扫描时间即为相邻成像单元组之间对于相对应的视场扫描区域的时间差。
从而能够确定相邻成像单元组分别对应的曝光起始时刻差,由此,根据时间差,可以设定各成像单元组的曝光起始时刻。
采用上述方案,通过各被触发的成像单元组对于相对应的视场扫描区域分别进行图像信息采集,从而实现对相对应的视场扫描区域的多次曝光,进而将成像单元阵列的整体曝光时间划分为各被触发的成像单元组对于相对应的视场扫描区域的曝光时间,减少成像单元阵列对于相对应的视场扫描区域的整体曝光时间,通过多次曝光得到的融合图像信息具有更加丰富的信息内容,可以抵消转动带来的图像模糊的问题,进而在保障图像生成效率的情况下,可以提高画面清晰度和图像质量。
为了使本领域技术人员能够清楚地理解和实施上述技术方案,以下通过具体实施例进行阐述。
在本说明书一实施例中,参照图2a,按列方向对成像单元阵列21中的成像单元进行分组,例如,将成像单元211和处于同一列方向的成像单元作为一个成像单元组21A,将成像单元212和处于同一列方向的成像单元作为一个成像单元组21B,将成像单元21M和处于同一列方向的成像单元作为一个成像单元组21M。由此,从而将成像单元阵列21分成M个成像单元组21A~21M。
如图2c所示,成像单元阵列21中各成像单元组21A~21M在外界W中对应的视野范围分别为FA~FM。若旋转扫描机构按照图2c中所示的转动方向进行扫描,则入射光在成像单元阵列上的扫描方向为从成像单元组21A至21M。根据入射光在成像单元阵列上的扫描先后顺序,可有对各成像单元组进行排序,成像单元组21A为第一组,成像单元组21B为第二组,以此类推,成像单元组21M为第M组。
为了便于描述，可以将排序为第一组的成像单元组21A称为第一列成像单元，排序为第二组的成像单元组21B称为第二列成像单元……排序为第M组的成像单元组21M称为第M列成像单元。各列成像单元在旋转扫描机构的转动下改变激光雷达的扫描方向，各列成像单元在旋转扫描机构的转动过程中按照排列顺序依次对应同一视场采集区域进行采集。
结合参考图2a至图2d，以第一列成像单元21A和第二列成像单元21B进行举例说明。由图2b可知，若各成像单元的尺寸误差和间隔可以忽略不计，则第一成像单元列21A和第二成像单元列21B的成像光敏面宽度均为a，所述成像单元阵列21和所述接收光学模块22之间的间距为b，若所述激光雷达的扫描角速度为ω，则将参数带入上述行向视场角的计算公式，可以计算得出第一列成像单元21A和第二列成像单元21B的行向视场角均为FOV_行≈(a/b)×(360°/2π)；第一列成像单元21A和第二列成像单元21B之间对于相对应的视场扫描区域的时间差即为第一列成像单元21A的行向扫描时间，即Δt_1=FOV_行/ω。
由此得到各相邻列对应同一视场F的时间差均为Δt_1。从而在确定第一列成像单元21A的曝光起始时刻t_c1设定为t_0后，第二列成像单元的曝光起始时刻t_c2可以设定为(t_0+Δt_1)……依次类推，第M列成像单元的曝光起始时刻t_cM可以设定为[t_0+(M-1)×Δt_1]。
由此可知,根据各列成像单元采集相对应的视场扫描区域的时间差和入射光在成像单元阵列上的扫描方向,可以对各列成像单元的时序进行设定。
继续参照图2c,第一列成像单元21A至第M列成像单元按照预设的时序依次触发并根据预设的曝光控制参数对外界W进行图像采集。
其中,各列成像单元对于相对应的视场扫描区域的曝光时间可以相同也可以不相同,本说明书实施例对此不做限制。以下通过几个实施例示例说明各列成像单元对于相对应的视场扫描区域的曝光过程,为了便于理解和描述,以视场扫描区域F为示例进行说明。
在本说明书一实施例中，结合图2c和图2d，各列成像单元的曝光时间可以设置为相应的行向扫描时间ts_1。
如图2d所示，第一列成像单元21A可以从曝光起始时刻t_c1起，在曝光时间ts_1内感应视场扫描区域F的入射光，然后第二列成像单元21B可以在曝光起始时刻t_c2起，在曝光时间ts_1内感应视场扫描区域F的入射光……依次类推，第M列成像单元21M可以在曝光起始时刻t_cM起，在曝光时间ts_1内感应视场扫描区域F的入射光。分别获取各列成像单元在相应时刻采集的图像信息并进行融合处理，从而得到对应视场扫描区域F的融合图像信息。
在本说明书另一实施例中，结合图2c和图2e，在曝光量充足的情况下，各列成像单元的曝光时间可以设置为小于行向扫描时间ts_1。
如图2e所示，第一列成像单元21A可以从曝光起始时刻t_c1起，在曝光时间ts_1'内感应视场扫描区域F的入射光，然后第二列成像单元21B可以在曝光起始时刻t_c2起，在曝光时间ts_2'内感应视场扫描区域F的入射光……依次类推，第M列成像单元21M可以在曝光起始时刻t_cM起，在曝光时间ts_M'内感应视场扫描区域F的入射光。分别获取各列成像单元在相应时刻采集的图像信息并进行融合处理，从而得到对应视场扫描区域F的融合图像信息。
在本说明书另一实施例中，结合图2c和图2f，在曝光量不足的情况下，各列成像单元的曝光时间可以设置为大于行向扫描时间ts_1。
如图2f所示，第一列成像单元21A可以从曝光起始时刻t_c1起，在曝光时间ts_1''内感应视场扫描区域F的入射光，然后第二列成像单元21B可以在曝光起始时刻t_c2起，在曝光时间ts_2''内感应视场扫描区域F的入射光……依次类推，第M列成像单元21M可以在曝光起始时刻t_cM起，在曝光时间ts_M''内感应视场扫描区域F的入射光。分别获取各列成像单元在相应时刻采集的图像信息并进行融合处理，从而得到对应视场扫描区域F的融合图像信息。
进一步地,各列成像单元的曝光时间可以根据得到的图像信息和当前环境的光照强度进行调整,具体可参考数据处理方法的描述,在此不进行赘述。
需要说明的是,虽然上述实施例中仅描述了各成像单元组对于相对应的视场扫描区域F进行图像信息采集的过程,但这种描述并非限制各成像单元组的采集过程。在本发明的实际应用中,各成像单元组可以实时动态采集不同视场扫描区域的图像信息,如在第一成像单元组完成视场扫描区域F的图像信息采集后,还可以对后续的视场扫描区域进行图像信息采集。
可以理解的是,上述实施例仅为示例说明,在本发明的实际应用中,可以根据实际情景对成像单元进行分组,如可以按列分组、按行分组、按块分组等。并且,可以根据实际需求,将成像单元阵列中部分成像单元或全部成像单元进行分组。例如,参照图2a,可以将成像单元阵列中全部成像单元按列进行分组,得到M个成像单元组;也可以将前x列成像单元按列进行分组,得到x个成像单元组,从而对x个成像单元组进行时序设置,其中,x为不大于M的非零自然数。本说明书实施例对于成像单元的分组方式和分组数量不进行限定。
具体实施中，如图3所示，为本说明书实施例中一种回波探测模块的结构示意图，所述回波探测模块31可以包括：探测单元阵列31，所述探测单元阵列31用于感应入射光的一面可以排列有P×Q个探测单元（如图3中斜线部分所示为一个探测单元，即探测单元311），P和Q均为正整数，P表示行数，Q表示列数。
所述探测单元阵列31上的各探测单元用于由所述接收光学模块入射的入射光中检测回波信号，从而获得回波探测信息。可以理解的是，图3仅为示例说明，在实际加工中，根据加工工艺水平，各探测单元之间的间距可以非常小。
在具体实施中,所述回波探测模块还可以包括:适配探测单元阵列的各种电路或各种元器件,如与探测单元阵列适配的探测读出电路,所述探测读出电路用于采集各探测单元生成的电信号。
需要说明的是,所述探测单元阵列用于感应入射光的一面可以称为探测光敏面,由各探测单元的探测光敏面组成,入射光经过接收光学模块传递到探测光敏面,并且根据传递角度可以在探测光敏面上进行扫描,所述探测单元阵列的行方向与入射光在探测单元阵列上的扫描方向平行,所述探测单元阵列的列方向与所述行方向在探测光敏面上互不平行。
可选地,为了减小回波探测模块的体积,使得回波探测模块能够根据实际需求灵活设置于激光雷达内,所述探测单元阵列可以为探测单元线列,所述探测单元线列是指行数与列数差距较大的探测单元阵列,例如,所述探测单元阵列的行数P远远大于列数Q。
可以理解的是，根据实际情景，可以通过设定数值界限划分差距的程度，即通过行数与列数的差值判断是否为"较大差距"，例如，在一些情景中，行数与列数相差5倍视为较大差距，如所述探测单元阵列的行数P≥5*Q；在另一些情景中，行数与列数相差100倍视为较大差距，如所述探测单元阵列的行数P≥100*Q。
在本发明的实际应用中,回波探测模块对外界的视野范围由探测单元阵列通过接收光学模块产生的视场角决定,探测单元阵列的视场角通过被触发探测单元的视场角组合而成。
其中,探测单元阵列的视场角可以包括:探测单元阵列沿着行方向扫描的行向视场角,所述探测单元阵列的行向视场角通过被触发探测单元的行向视场角组合而成。
在具体实施中,所述回波探测模块可以由SPAD(Single Photon Avalanche Diode,单光子雪崩二极管)和/或APD(Avalanche Photo Diode,雪崩光电二极管)来实现。
相应地,探测单元阵列可以采用以下任一种类型来实现:(1)由独立的SPAD作为探测单元形成的SPADs阵列;(2)由并联的多个SPAD作为探测单元形成的SiPM(Silicon Photo-Multiplier,硅光电倍增管);(3)由独立的APD作为探测单元形成的APD阵列。
需要说明的是,SPADs阵列和SiPM均可以包括多个SPAD,二者的区别在于:SPADs阵列中各SPAD分别作为探测单元,可以进行单独寻址,而SiPM中的每个探测单元由并联的多个SPAD组成,探测单元中并联的SPAD不能分别进行单独寻址,只能作为整体进行寻址。
根据实际需求,回波探测模块可以包括SPADs阵列和/或APD阵列。为了提高回波探测模块的兼容性,所述回波探测模块可以包括SPADs阵列。从而可以兼容CMOS工艺相关的其他硬件。
在具体实施中,回波探测模块和图像采集模块可以根据设定的工艺流程进行制备。
其中,对于回波探测模块和图像采集模块的半导体工艺流程,根据实际需求,所述回波探测模块和所述图像采集模块采用如下任意一种方式设置于所述激光雷达中。
(1)所述回波探测模块和所述图像采集模块设置于同一基板的同一硅片上。
参考图4所示的一个实施例,基板40上设置有硅片41,硅片41上设置有探测单元阵列411和成像单元阵列412,硅片41上设置有与探测单元阵列411适配的探测读出电路以及与成像单元阵列412适配的成像读出电路,硅片41上探测单元阵列411和适配的探测读出电路所在区域可以视为回波探测模块,硅片41上成像单元阵列412和适配的成像读出电路所在区域可以视为图像采集模块,回波探测模块和图像采集模块集成于同一个芯片中,且封装于同一个基板上。
(2)所述回波探测模块和所述图像采集模块设置于同一基板的不同硅片上。
参考图5所示的一个实施例,基板50上设置有硅片51和硅片52,硅片51上设置有探测单元阵列511以及与探测单元阵列511适配的探测读出电路;硅片52上设置有成像单元阵列521以及与成像单元阵列521适配的成像读出电路。设置有探测单元阵列511以及探测读出电路的硅片51可以视为回波探测模块,设置有成像单元阵列521以及成像读出电路的硅片52可以视为图像采集模块,回波探测模块单独集成于一个芯片中,图像采集模块单独集成于另一个芯片中,然后可以封装于同一个基板上。
(3)所述回波探测模块和所述图像采集模块设置于同一印刷线路板(Printed Circuit Board,PCB)的不同的基板上。
参考如图6所示的实施例，印刷线路板60上设置有基板61和基板62，基板61上设置有硅片611，硅片611上设置有探测单元阵列6111以及与探测单元阵列6111适配的探测读出电路；基板62上设置有硅片621，硅片621上设置有成像单元阵列6211以及与成像单元阵列6211适配的成像读出电路。设置有探测单元阵列6111以及探测读出电路的硅片611可以视为回波探测模块，设置有成像单元阵列6211以及成像读出电路的硅片621可以视为图像采集模块，回波探测模块单独集成于一个芯片中，图像采集模块单独集成于另一个芯片中，且回波探测模块和图像采集模块可以分别在相应的基板上进行封装，最后连接到同一印刷电路板上。
(4)所述回波探测模块和所述图像采集模块设置于不同印刷电路板的基板上。
参考如图7所示的实施例，印刷线路板7A上设置有基板71，基板71上设置有硅片711，硅片711上设置有探测单元阵列7111以及与探测单元阵列7111适配的探测读出电路；印刷线路板7B上设置有基板72，基板72上设置有硅片721，硅片721上设置有成像单元阵列7211以及与成像单元阵列7211适配的成像读出电路。设置有探测单元阵列7111以及探测读出电路的硅片711可以视为回波探测模块，设置有成像单元阵列7211以及成像读出电路的硅片721可以视为图像采集模块，回波探测模块单独集成于一个芯片中，图像采集模块单独集成于另一个芯片中，且回波探测模块和图像采集模块可以分别在相应的基板上进行封装，最后连接到不同的印刷电路板上。
由上述方案可知,所述回波探测模块和所述图像采集模块可以设置于同一基板的同一硅片上,又可以设置于同一基板的不同硅片上,也可以设置于同一印刷线路板的不同的基板上,还可以设置于不同印刷电路板的基板上,使所述回波探测模块和所述图像采集模块能够灵活布置于激光雷达中,不受到现有布局的限制。
同时,对于回波探测模块和图像采集模块的封装工艺流程,根据实际情景,所述回波探测模块和所述图像采集模块分别采用以下任意一种结构类型设置于硅片上。
(1)前照式(Front-Side Illumination,FSI)结构。前照式结构可以包括金属排线层和光接收层,金属排线层位于光接收层之上,其中,金属排线层中包括读出电路,光接收层可以包括阵列(探测单元阵列或成像单元阵列),通过金属排线层的入射光才能到达光接收层。
作为一可选示例,如图8所示,为一种前照式集成芯片的结构示意图。在前照式集成芯片8A中,回波探测模块和图像采集模块采用前照式结构设置于同一基板80的同一硅片81上。其中,前照式集成芯片8A包括:光接收层811和金属排线层812,光接收层811位于金属排线层812之下。回波探测模块的探测单元阵列8111位于光接收层811,探测读出电路8121位于金属排线层812;图像采集模块的成像单元阵列8112位于光接收层811,成像读出电路8122位于金属排线层812。
(2)后照式（Back-Side Illumination，BSI）结构。后照式结构可以包括金属排线层和光接收层，金属排线层位于光接收层之下，金属排线层和光接收层之间可以通过键合线连接，金属排线层中包括读出电路，光接收层包括阵列（探测单元阵列或成像单元阵列），入射光能够直接到达光接收层。
作为一可选示例，如图9所示，为一种后照式集成芯片的结构示意图，在后照式集成芯片9A中，回波探测模块和图像采集模块采用后照式结构设置于同一基板90的同一硅片91上。其中，后照式集成芯片9A可以包括：光接收层911和金属排线层912，光接收层911位于金属排线层912之上，金属排线层912和光接收层911之间通过键合线连接。回波探测模块的探测单元阵列9111位于光接收层911，探测读出电路9121位于金属排线层912；图像采集模块的成像单元阵列9112位于光接收层911，成像读出电路9122位于金属排线层912。
(3)堆栈式(Stacked)结构。堆栈式结构可以将金属排线层和光接收层分别置于不同的硅片上,再将包含金属排线层的硅片堆叠于包含光接收层的硅片之下,金属排线层和光接收层之间可以通过键合线连接,金属排线层中包括读出电路,光接收层包括阵列(探测单元阵列或成像单元阵列),入射光能够直接到达光接收层所在的硅片。
作为一可选示例，如图10所示，为一种堆栈式集成芯片的结构示意图，在堆栈式集成芯片10A中，回波探测模块和图像采集模块采用堆栈式结构设置于同一基板上。硅片1011包括光接收层，硅片1012包括金属排线层，硅片1011位于硅片1012之上，硅片1011和硅片1012之间通过键合线连接。回波探测模块的探测单元阵列10111位于硅片1011，探测读出电路10121位于硅片1012；图像采集模块的成像单元阵列10112位于硅片1011，成像读出电路10122位于硅片1012。
在具体实施中,为了减少器件体积,在模块采用后照式结构或堆叠式结构时,可以对阵列所在区域的硅片进行切割。例如,如图11所示,为另一种堆栈式集成芯片的结构示意图,与图10的区别点在于:将硅片1011进行了切割,减小了硅片1011所占体积。
可以理解的是,上述实施例仅为示例说明,在本发明的实际应用中,本说明书所述的设置方式和结构类型可以结合具体情景,合理地进行交叉选用;并且,根据具体情景,回波探测模块和图像采集模块采用的结构类型可以相同,也可以不相同。可选地,为了便于封装,设置于同一硅片上的所述回波探测模块和所述图像采集模块可以采用相同的结构类型。
在具体实施中，前照式结构、后照式结构和堆栈式结构等结构类型可以通过CMOS（Complementary Metal Oxide Semiconductor，互补金属氧化物半导体）工艺实现。
在确定回波探测模块和图像采集模块的结构类型和设置方式之后,可以进行封装。所述基板上覆盖有塑封层,在所述塑封层开设对应于回波探测模块的第一透光窗和对应于图像采集模块的第二透光窗。
具体地,继续参考图8至图11,先以图8为例,所述基板80上覆盖有塑封层8a,在所述塑封层8a开设对应于回波探测模块的第一透光窗和对应于图像采集模块的第二透光窗。基板80与印刷电路板8B之间通过焊接方式连接。硅片81和基板80之间通过键合线连接。
同样地,在图9中,所述基板90上覆盖有塑封层9a,在所述塑封层9a开设对应于回波探测模块的第一透光窗和对应于图像采集模块的第二透光窗。基板90与印刷电路板9B之间通过焊接方式连接。硅片91和基板90之间通过键合线连接。
而在图10和图11中,所述基板100上覆盖有塑封层10a,在所述塑封层10a开设对应于回波探测模块的第一透光窗和对应于图像采集模块的第二透光窗。基板100与印刷电路板10B之间通过焊接方式连接。硅片1012和基板100之间通过键合线连接。
随后,所述基板与印刷电路板通过焊接连接。例如,继续参考图8至图11,所述基板与印刷电路板之间通过球栅阵列封装(Ball Grid Array,BGA)方式焊接。
在一可选示例中,在所述第一光窗中设置窄带滤光模块(可以参考图8中的窄带滤光模块82,图9中的窄带滤光模块92,图10和11中的窄带滤光模块102),所述窄带滤光模块适于对所述入射光进行波长过滤,并将波长过滤后的入射光传递至所述回波探测模块,从而减小环境光噪声。
其中,所述窄带滤光模块的带通与所述激光雷达的激光器发射的激光波长相关,需要能够覆盖激光器的发射波长范围,例如,窄带滤光模块的带通可以在几纳米到几十纳米之间。
在另一可选示例中,所述图像采集模块包括像素级滤光模块,所述像素级滤光模块适于对入射光进行滤光,并将滤光后的入射光传递至所述成像单元阵列,从而改善图像质量。具体地,可以参考图8中的像素级滤光模块83、图9中的像素级滤光模块93、图10和11中的像素级滤光模块103,所述像素级滤光模块对通过所述第二透光窗的入射光进行滤光。
其中,像素级滤光模块可以对入射光进行RGGB(Red-Green-Green-Blue)滤光、RYYB(Red-Yellow- Yellow -Blue)滤光或RWWB(Red-White- White -Blue)滤光。此外,可以采用半导体工艺在成像单元阵列上形成像素级滤光模块。
在具体实施中,为了确保所述探测单元阵列和所述成像单元阵列能够采集相对应的视场扫描区域的信号,所述探测单元阵列和所述成像单元阵列可以采用以下至少一种方式设置于激光雷达:1)所述探测单元阵列和所述成像单元阵列沿列方向相互平行;2)所述成像单元阵列的列数M与所述探测单元阵列的列数Q相等。
采用上述方案,探测单元阵列和成像单元阵列沿列方向相互平行可以确保回波探测模块和图像采集模块的视场方向一致;将所述成像单元阵列的列数M与所述探测单元阵列的列数Q设置相等,可以便于按列采集数据。
由于图像采集模块和回波探测模块用于采集接收光学模块传递的来自相对应的视场扫描区域的入射光,并在激光雷达中成对出现。根据激光雷达的扫描方式类型,回波探测模块和图像采集模块可以在激光雷达中位于同一平面,也可以位于不同平面。
为了便于本领域技术人员理解和实施,以下先通过几个具体实施例详细说明回波探测模块和图像采集模块在激光雷达中位于同一平面的情况。
在本说明书一实施例中，如图12所示，为一种应用于机械转动式激光雷达的应用场景示意图。机械转动式激光雷达120可以包括发射部121、接收部122和旋转扫描机构（在图12中未示出）。旋转扫描机构可以带动发射部121和接收部122进行旋转。
发射部121可以包括发射模块1211。在发射模块1211的发射区域设置有发光单元阵列12111,发光单元阵列12111生成的激光经过发射光学模块1212处理,作为出射光向外界输出,在遇到物体(如图12中实线部分的物体12A)后,物体将出射光反射。
由于机械转动式激光雷达自身按照设定方向旋转,并随着装载平台(如高精车辆)移动,物体(如图12中虚线部分的物体12A)反射出射光时,对应于接收部122,因此,物体反射的出射光以及环境光可以作为接收部122的入射光,被接收部122接收。接收部122可以包括接收光学模块1221、回波探测模块1222和图像采集模块1223。接收光学模块1221将入射光汇聚传递至回波探测模块1222或图像采集模块1223。回波探测模块1222检测接收到的入射光中的回波信号(即物体反射的出射光的信号),得到回波探测信息,图像采集模块1223将感应到的入射光的光信号转换为相应的电信号,得到图像信息。
此外,在本发明实施例的实际应用中,所述发射部和接收部可以设置于激光雷达的光机转子中。
在本说明书另一实施例中,如图13所示,为一种应用于转镜扫描式激光雷达的应用场景示意图。转镜扫描式激光雷达130可以包括:发射部131、接收部132和旋转扫描机构133。
发射部131可以包括发射模块1311和发射光学模块(图13中未标注),并且在发射模块1311的发射区域设置有发光单元阵列13111;接收部132可以包括接收光学模块(图13中未标注)、回波探测模块1322和图像采集模块1323。
发射光学模块和接收光学模块构成了激光雷达130的光学系统,在发射和接收的过程中可以共用一组转镜,因此,分别从收发角度出发,可以说转镜130a包含于发射光学模块,也可以说转镜130a包含于接收光学模块。
此外,发射光学模块还可以包括发射透镜1312;接收光学模块还可以包括接收透镜1321;所述回波探测模块1322和图像采集模块1323位于转镜130a的同侧。
旋转扫描机构133通过机械装置进行旋转,并且旋转扫描机构133还带动转镜130a进行旋转,发光单元阵列13111生成的激光经过发射透镜1312处理和转镜130a的折射,作为出射光向外界输出,在遇到物体(如图13中的物体13A)后,物体将出射光反射。
由于旋转扫描机构133按照设定方向旋转,并随着装载平台(如高精车辆)移动,物体(如图13中的物体13A)反射出射光时,可以将反射的出射光传递到对应的接收部132,因此,物体反射的出射光以及环境光可以作为接收部132的入射光。
接收透镜1321将入射光汇聚传递至回波探测模块1322或图像采集模块1323。回波探测模块1322检测接收到的入射光中的回波信号(即物体反射的出射光的信号),得到回波探测信息,图像采集模块1323将感应到的入射光的光信号转换为相应的电信号,得到图像信息。
其中,所述激光雷达的转镜130a可以采用双面转镜。
可以理解的是,在本发明的实际应用中,所述转镜扫描式激光雷达可以根据实际情景采用不同的转镜,如三面转镜、四面转镜等。且根据转镜的光传递方向,可以调整发射部和接收部相对于转镜的位置,本说明书实施例对于转镜的类型不做限制。
例如，如图14所示，为另一种应用于转镜扫描式激光雷达的应用场景示意图，与图13相比，图14所示的转镜扫描式激光雷达140的转镜140a采用四面转镜，根据转镜140a的光传递方向，回波探测模块1422和图像采集模块1423分别位于旋转扫描机构的两侧。
具体而言,转镜扫描式激光雷达140可以包括:发射部141、接收部142和旋转扫描机构(图14中未标注)。发射部141可以包括发射模块1411和发射光学模块(图14中未标注),并且在发射模块1411的发射区域设置有发光单元阵列14111;接收部142可以包括接收光学模块(图14中未标注)、回波探测模块1422和图像采集模块1423。
发射光学模块和接收光学模块构成了激光雷达140的光学系统,在发射和接收的过程中可以共用一组转镜,因此,分别从收发角度出发,可以说转镜140a包含于发射光学模块,也可以说转镜140a包含于接收光学模块。
此外，发射光学模块还可以包括发射透镜1412；接收光学模块还可以包括接收透镜1421；所述回波探测模块1422和图像采集模块1423分别位于转镜140a的两侧。
旋转扫描机构143通过机械装置进行旋转,并且旋转扫描机构143还带动转镜140a进行旋转,发光单元阵列14111生成的激光经过发射透镜1412处理和转镜140a的折射,作为出射光向外界输出,在遇到物体(如图14中的物体14A)后,物体将出射光反射。
由于旋转扫描机构143按照设定方向旋转,并随着装载平台(如高精车辆)移动,物体(如图14中的物体14A)反射出射光时,可以将反射的出射光传递到对应的接收部142,因此,物体反射的出射光以及环境光可以作为接收部142的入射光。
接收透镜1421将入射光汇聚传递至回波探测模块1422或图像采集模块1423。回波探测模块1422检测接收到的入射光中的回波信号(即物体反射的出射光的信号),得到回波探测信息,图像采集模块1423将感应到的入射光的光信号转换为相应的电信号,得到图像信息。
需要说明的是,上述实施例中对于激光雷达的结构划分仅为示例说明,根据实际需求和描述方式,可以对激光雷达进行不同维度的结构划分,例如,从功能维度进行结构划分、从材料维度进行结构划分、从连接方式维度进行结构划分等,本说明书实施例对于激光雷达中的结构划分规则不做具体限定。
在上述三个实施例中,回波探测模块和图像采集模块可以位于接收光学模块的同一侧,回波探测模块和图像采集模块之间存在行向视场角度差(参照图12至14中的行向视场角度差α),行向视场角度差与所述成像单元阵列和所述接收光学模块之间的间距(参照图12至14中的间距b)、所述回波探测模块和所述图像采集模块之间的间距(参照图12至14中的间距c)、以及所述激光雷达的扫描角速度相关。在所述所述成像单元阵列和所述接收光学模块之间的间距以及所述激光雷达的扫描角速度不变情况下,所述回波探测模块和所述图像采集模块之间的行向视场角度差与所述回波探测模块和所述接收光学模块之间的间距成正比。
以图12为例，假设所述图像采集模块和所述接收光学模块之间的间距b为50(mm)，所述回波探测模块和所述图像采集模块之间的间距c为2(mm)，则所述回波探测模块和图像采集模块之间的行向视场角度差α≈(c/b)×(360°/2π)≈(2/50)×(360°/2π)≈2.3°。由此可见，回波探测模块和图像采集模块之间的行向视场角度差较小，在大多数情况下可以忽略不计，因此，可以认为回波探测模块的视场和图像采集模块的视场匹配。
再假设激光雷达的扫描频率为10Hz（即每秒旋转10圈），则一帧的时间间隔为100ms，所述图像采集模块和所述回波探测模块之间处于相对应视场扫描区域的扫描间隔时间ΔT≈100×2.3°/360°≈0.6(ms)。由此可见，回波探测模块和图像采集模块之间的时间差为亚毫秒级别，在大多数情况下可以忽略不计，因此，可以认为回波探测模块和图像采集模块在时间上同步采集数据。
继续参考图12~14,在激光雷达扫描过程中,所述图像采集模块和所述回波探测模块之间的相对位置保持固定,并且二者所处的硬件基板与激光雷达的旋转扫描机构同步旋转,或者,二者所处的硬件基板相对于激光雷达而言保持静止,因此,所述图像采集模块和所述回波探测模块之间的相对位置不受到旋转的影响,在同一个硬件基板上的相对位置不发生变化,进而使得二者之间的扫描间隔时间稳定,二者得到的信息在时间上保持固定的对应关系,避免所述图像采集模块和所述回波探测模块分别置于不同的独立设备中产生的抖动影响,提高二者的稳定性。
综上可知,在发射模块装调阶段,减小所述回波探测模块和所述图像采集模块之间的间距,能够减小或消除回波探测模块的视场和图像采集模块的视场之间的误差,确保二者的视场相匹配。并且,减小所述回波探测模块和所述图像采集模块之间的间距,可以将二者之间处理入射光的时间差缩短到忽略不计的程度,从而使回波探测模块和图像采集模块能够同步采集数据。综上,本说明书实施例提供的激光雷达结构可以提高图像数据和点云数据之间视场匹配度和时间同步性。
进一步地,为了使回波探测模块和图像采集模块能够同步接收到入射光,所述回波探测模块和图像采集模块均位于所述接收光学模块的焦平面上。
由于图像采集模块主要采集的是环境光,因此,在受到激光雷达体积、布局等因素限制时,回波探测模块和图像采集模块可以位于激光雷达的不同平面,以下通过实施例详细说明。
在本说明书一实施例中,如图15所示,所述激光雷达150为转镜扫描式激光雷达,所述回波探测模块1512和图像采集模块1522分别位于所述转镜扫描式激光雷达150的转镜150a的两侧。具体而言,所述接收部(图15中未标注)包括两个区域,第一区域151可以包括第一接收光学模块1511和回波探测模块1512,第二区域152可以包括第二接收光学模块1521和图像采集模块1522。
激光雷达的转镜150a将来自相对应的视场扫描区域的入射光折射至第一区域151或第二区域152。第一接收光学模块1511将转镜150a折射的入射光汇聚传递至回波探测模块1512,回波探测模块1512检测接收到的入射光中的回波信号,得到回波探测信息。第二接收光学模块1521将转镜150a折射的入射光汇聚传递至图像采集模块1522,图像采集模块1522将感应到的入射光的光信号转换为相应的电信号,得到图像信息。
可选地,所述回波探测模块1512可以与发射部(图15未示出)处于同一垂直平面上。
采用上述方案,所述回波探测模块和图像采集模块分别位于转镜的两侧,具有更多可用空间,根据实际需求,可以对回波探测模块或图像采集模块分别进行灵活调整,减少尺寸与位置的限制。并且经过装调后,可以消除回波探测模块和图像采集模块之间的行向视场角度差,回波探测模块的视场能够与图像采集模块的视场完全保持一致,使得回波探测模块和图像采集模块之间视场的基本完全同步。
在具体实施中,为了可以实现更多回波探测模块和图像采集模块的分布方案,可以通过平面镜来调整入射光的传递方向。
具体而言,如图16所示,在激光雷达160中,所述接收部可以包括两个区域,第一区域161可以包括第一接收光学模块1611和回波探测模块1612,第一接收光学模块可以包括第一平面镜16111和第一凸透镜16112。第二区域162可以包括第二接收光学模块1621和图像采集模块1622,第二接收光学模块1621可以包括第二平面镜16211和第二凸透镜16212。
激光雷达的转镜160a将来自相对应的视场扫描区域的入射光折射至第一区域161或第二区域162。第一平面镜16111将转镜160a折射的入射光传递至第一凸透镜16112,第一凸透镜16112将折射的入射光汇聚传递至回波探测模块1612,回波探测模块1612检测接收到的入射光中的回波信号,得到回波探测信息。第二平面镜16211将转镜160a折射的入射光传递至第二凸透镜16212,第二凸透镜16212将折射的入射光汇聚传递至图像采集模块1622,图像采集模块1622将感应到的入射光的光信号转换为相应的电信号,得到图像信息。
可以理解的是,上述实施例仅为示例说明。在本发明的实际应用中,接收光学模块包括的凸透镜和平面镜的数量可以根据实际情景变化,且平面镜可以应用于上述其他实施例中,从而调整入射光的传递方向,使得回波探测模块和/或图像采集模块的分布方案更加多样化,本说明书实施例对平面镜的应用场景不作限制。
在具体实施中,所述激光雷达可以采用一维扫描方式,在指定旋转方向上进行顺时针或逆时针扫描。从而能够精确控制扫描轨迹,有利于抑制运动物体的视场畸变,减少动态模糊的情况,使后续生成的点云数据和/或图像数据更便于物体识别。
在实际装调中,除了对回波探测模块和图像采集模块进行空间分布调试,还可以对回波探测模块和图像采集模块进行参数调试,例如,所述回波探测模块可以根据所述图像采集模块的图像信息,调整回波信号探测参数;所述图像采集模块也可以根据所述回波探测模块的回波探测信息,调整图像采集参数。
具体而言,回波探测模块可以根据图像采集模块的已扫过的区域获得的信息,如环境光水平信息、物体尺寸信息等,动态调整回波信号探测参数,如灵敏度参数、测程参数等,从而能够采集更加精确的回波探测信息,进而生成质量更高的点云数据。
而图像采集模块也可以根据回波探测模块的已扫过区域获得的信息,如距离信息、反射率信息等,动态调节图像采集参数,如曝光控制参数、动态范围参数、高光度(gain)参数等,从而能够采集更加精确的图像信息,进而生成质量更高的图像数据。
在具体实施中，回波探测模块还可以包括：时间数字转换电路（Time Digital Converter，TDC），通过回波探测模块中的探测单元阵列检测接收到的入射光中的回波信号，通过与发射部时间同步的TDC记录探测单元阵列产生电信号的时间，然后激光雷达的处理装置可以采用直接测量飞行时间（Direct Time Of Flight，DTOF）算法，计算得出激光雷达与物体之间的距离信息。
需要知道的是,上文描述了本说明书实施例提供的多个实施例方案,各实施例方案介绍的各可选方式可在不冲突的情况下相互结合、交叉引用,从而延伸出多种可能的实施例方案,这些均可认为是本说明书实施例披露、公开的实施例方案。
下面对本申请实施例提供的数据处理方法进行介绍,下文描述的数据处理方法可以应用于本说明书实施例所述的任意一种激光雷达,下文描述的数据处理方法的内容,可与上文描述的激光雷达的相关内容相互对应参照。
在实际工作中,采用本说明书的激光雷达可以按照设定的帧采样周期进行帧信息采集,并且确保图像采集模块和回波探测模块处于视场匹配和时间同步的状态。在后续数据处理时,为了能够得到的更高精确度的数据,可以对采集到的图像信息和回波探测信息进行优化处理。
具体而言,如图17所示,为本说明书实施例提供的一种数据处理方法的流程图,所述数据处理方法可以包括步骤S171至步骤S173。
S171,计算所述图像采集模块和所述回波探测模块之间处于相对应的视场扫描区域的扫描间隔时间。
在具体实施中,由上述激光雷达部分的相关实施例可知,激光雷达的回波探测模块和图像采集模块的分布位置可以为:位于接收光学模块同一侧(参考图12至14),或者,分别位于转镜的两侧(参考图15至16)。
根据一种优选实施例，对于回波探测模块和图像采集模块位于接收光学模块同一侧的情况，回波探测模块和图像采集模块之间存在行向视场角度差，并且，在激光雷达旋转过程中，所述图像采集模块和所述回波探测模块行向视场角一直保持固定的行向视场角度差，此时，可以通过公式α≈(c/b)×(360°/2π)来估算行向视场角度差。然后，根据公式ΔT=α/v，可以计算所述图像采集模块和所述回波探测模块对于相对应的视场扫描区域的扫描间隔时间。其中，b为所述图像采集模块和所述接收光学模块之间的间距，c为回波探测模块和所述图像采集模块之间的间距，v为所述激光雷达的扫描角速度。
当回波探测模块和图像采集模块位于接收光学模块同一侧时,回波探测模块和图像采集模块共用同一组接收光学模块,并且,两者与该组接收光学模块有相同的间距。更优选地,两者均位于该同一组接收光学模块的焦平面上。
其中,根据本实施例的接收光学模块包括回波信号和/或环境光进入激光雷达至到达所述回波探测模块/图像采集模块之间的所经过的所有光学器件。包括但不限于透镜、反射镜、半透半反镜、转镜、分光镜等等。
可以理解,根据本发明的该种结构,可以使得回波探测模块和所述图像采集模块之间的间距最小化,进而实现回波探测模块和图像采集模块所获得的信号之间的高度同步。
根据另一种优选实施例,对于在激光雷达的回波探测模块和图像采集模块分别位于转镜两侧的情况,经过装调后,回波探测模块的视场能够与图像采集模块的视场相互匹配,在激光雷达旋转过程中,所述图像采集模块和所述回波探测模块能够同步接收相对应的视场扫描区域的入射光,从而得到所述图像采集模块和所述回波探测模块对于相对应的视场扫描区域的扫描间隔时间为0。
S172,基于所述扫描间隔时间,获取处于所述相对应的视场扫描区域的回波探测信息和图像信息。
在具体实施中,基于所述扫描间隔时间,可以确定所述回波探测模块的探测帧时刻和所述图像采集模块的采集帧时刻之间的对应关系;基于所述回波探测模块的探测帧时刻,获取相应的回波探测信息;基于所述图像采集模块的采集帧时刻,获取相应的图像信息;然后,基于所述探测帧时刻和所述采集帧时刻之间的对应关系,确定处于相对应的视场扫描区域的回波探测信息和图像信息。
其中,采集帧时刻用于表征图像采集模块采集相应视场扫描区域的图像信息的时间信息,并且与被触发的成像单元的曝光起始时刻存在对应关系。探测帧时刻用于表征回波探测模块采集相应视场扫描区域的回波探测信息的时间信息。
可以理解的是,上述内容仅为示意说明,在本发明的实际应用中,确定探测帧时刻和采集帧时刻之间的对应关系、获取回波探测信息和获取图像信息的步骤之间不存在先后执行顺序,可以同时执行,也可以按照设定顺序执行。
S173,对所获得的所述回波探测信息和所述图像信息进行数据处理。
在具体实施中,对处于相对应的视场扫描区域的回波探测信息和图像信息,可以分别进行数据处理,得到相应的点云数据和图像数据;也可以将处于相对应的视场扫描区域回波探测信息和图像信息进行融合,得到融合信息,并进行数据处理,得到融合数据。
采用上述方案,通过计算所述扫描间隔时间,可以确保获取的回波探测信息和图像信息处于相对应的视场扫描区域,进而提高图像数据和点云数据之间时间同步性,保障图像数据和点云数据的精确度。
在具体实施中,由上述激光雷达部分的相关实施例可知,根据预设的曝光控制参数,所述成像单元阵列在响应于所述控制指令,触发相应的成像单元后,被触发的成像单元可以在相应的曝光时间内感应入射光。
在具体实施中,所述成像单元阵列适于响应于所述控制指令,按照预设的时序依次触发各成像单元组,使被触发的各成像单元组在相应的曝光时间内感应入射光,采集相对应的视场扫描区域的图像信息,所述成像单元组包括至少一个成像单元。
在一示例中,所述图像采集模块可以包括具有多个成像单元组的成像单元阵列,所述成像单元组包括至少一个成像单元。
在所述基于所述图像采集模块的采集帧时刻,获取相应的图像信息之前,还包括:对每个成像单元组的曝光结果进行信号积分处理,获得所述图像信息。
其中,所述信号积分处理为时间延迟积分处理。
若图像采集模块采用上述成像单元阵列的分组控制触发方案进行图像采集,则在基于所述扫描间隔时间,确定所述回波探测模块的探测帧时刻和所述图像采集模块的采集帧时刻之间的对应关系时,根据分组控制触发方案,确定各成像单元组采集相对应的视场扫描区域的图像信息对应的采集帧时刻,与所述回波探测模块的探测帧时刻建立对应关系,从而在获取处于相对应的视场扫描区域的回波探测信息和图像信息后,能够获取各成像单元组采集的图像信息。
在一可选示例中,如图18所示,所述基于所述扫描间隔时间,确定所述回波探测模块的探测帧时刻和所述图像采集模块的采集帧时刻之间的对应关系,具体可以包括以下步骤S181至步骤S183。
S181,根据所述控制指令,确定采集于所述相对应的视场扫描区域的图像信息对应的采集帧时刻,得到所述视场扫描区域的采集帧时刻集合。
S182,从所述采集帧时刻集合中确定起始采集帧时刻。
S183,基于所述扫描间隔时间,确定与所述起始采集帧时刻对应的探测帧时刻,与所述采集帧时刻集合中各采集帧时刻建立对应关系。
在具体实施时,根据各成像单元组的曝光控制参数和曝光起始时刻,可以确定对应的采集帧时刻,从而能够获取各成像单元对于相对应的视场扫描区域采集的图像信息,通过将相对应的视场扫描区域采集的各成像单元组的图像信息进行融合处理,可以得到更加丰富的图像信息,且能够抵消转动带来的图像模糊的问题,从而提升画面清晰度和图像质量。而且,通过获取各成像单元的图像信息进行融合的方式,可以在保证画面清晰度和图像质量的情况下,减少成像单元阵列对于相对应的视场扫描区域的整体曝光时间,从而能够保障图像生成效率,且可以降低激光雷达的功耗,并使成像单元有休息缓冲的时间,延长硬件寿命。
在具体实施中,根据分辨率要求,激光雷达的控制装置可以生成相应的控制指令,从而控制成像单元阵列中各成像单元的工作状态。所述成像单元阵列响应于控制指令,触发相应的成像单元感应入射光。从而灵活控制像成像单元,能够动态调整图像采集模块的采集精度,满足各种分辨率要求。
例如,对于低分辨率的情况,控制装置通过控制指令可以触发成像单元阵列每一行中至少一个成像单元,每一行中被触发成像单元采集到的图像信息作为该行的图像信息。由此,通过控制成像单元的触发数量,可以降低激光雷达的功耗。
又例如,对于高分辨率的情况,控制装置通过控制指令可以同时触发成像单元阵列中多个成像单元行,被触发成像单元行采集到的图像信息经过逻辑运算后,作为其中一行的图像信息。由此,通过触发多个成像单元来采集同一位置的图像信息,可以降低各被触发成像单元的分辨率要求,提高在黑夜、阴天等暗光环境下采集的图像信息的精度。
因此,在获取相应的图像信息时,根据相应的控制指令和所述图像采集模块的采集帧时刻,可以获取指定位置的成像单元采集所述相对应的视场扫描区域的图像信息。
在具体实施中,所述数据处理方法还可以包括:基于所述回波探测模块的回波探测信息,调整所述图像采集模块的图像采集参数;和/或,基于所述图像采集模块的图像信息,调整所述回波探测模块的回波信号探测参数。具体可以参考激光雷达相关描述内容,在此不再赘述。
在具体实施中,如图19所示,为本说明书实施例提供的另一种数据处理方法的流程图,所述方法可以包括步骤S191至步骤S192。
S191,基于所述图像采集模块采集的图像信息,确定所述图像信息是否符合预设的成像条件。
S192，在确定不符合所述成像条件时，调整所述图像采集模块的曝光控制参数。
采用上述方案,通过将采集的图像信息作为反馈控制信息,确定是否需要调整图像采集模块的曝光控制参数,从而使得图像采集模块的曝光时间能够动态变化,提升图像采集模块的画面清晰度和图像质量。
可以理解的是,上述实施例仅示出了确定不符合所述成像条件的情况,在确定所述图像信息是否符合预设的成像条件时,还可能存在符合所述成像条件的情况。对于符合所述成像条件的情况,可以根据实际情景设定相应的操作步骤,并在判断出符合所述成像条件后,跳转到相应步骤进行执行。例如,若符合所述成像条件,可以结束当前成像条件判断流程,并在获取新的图像信息后,进入新的成像条件判断流程。
在具体实施中,通过图像信息中的曝光量和所述成像条件中的曝光量区间进行匹配,可以确定所述图像信息是否符合预设的成像条件。具体而言,如图20所示,在确定所述图像信息是否符合预设的成像条件时,可以包括以下步骤S1911至步骤S1914。
S1911,获取所述图像信息中的曝光量。
S1912,确定是否属于所述曝光量区间,若不属于所述曝光量区间,则继续执行步骤S1913,否则跳转至步骤S1914。
S1913,所述图像信息不符合成像条件。
S1914,所述图像信息符合成像条件。
在具体实施中,若所述图像信息不符合预设的成像条件,根据所述图像信息可以增大或减小相应成像单元的曝光控制参数的大小。例如,所述图像信息中的曝光量小于所述曝光量区间的最小端点值,则增大所述图像信息相应的成像单元的曝光控制参数;又例如,若所述图像信息中的曝光量大于所述曝光量区间的最大端点值,则减小所述图像信息相应的成像单元的曝光控制参数。
在具体实施中,在增大所述图像采集模块的曝光控制参数之前,可以判断所述图像采集模块的曝光控制参数的大小与所述图像采集模块中相应的成像单元的扫描周期是否相等,若相等,则根据所述图像采集模块中各成像单元的分组设定,得到各成像单元组,用以按照预设的时序依次采集相对应的视场扫描区域的图像信息。
采用上述方案,将成像单元阵列对于相对应的视场扫描区域的整体曝光时间划分成各成像单元组的曝光时间,从而能够保障图像生成效率,提升画面清晰度和图像质量。
如图21所示,为本说明书实施例提供的另一种数据处理方法的流程图,所述数据处理方法可以包括步骤S211至步骤S213。
S211,检测当前光照情况,得到相应的光照强度值。
其中,可以通过光照强度检测装置进行当前光照情况的检测,光照强度检测装置可以位于所述激光雷达内,也可以位于与激光雷达连接的其他装载平台。
S212,基于所述光照强度值,确定图像采集参数对应形成的图像信息是否符合所述成像条件。
S213,在确定不符合所述成像条件时,调整所述图像采集模块的曝光控制参数。
采用上述方案,通过将当前光照情况作为反馈控制信息,在进行图像信息采集之前确定是否需要调整图像采集模块的曝光控制参数,使得图像采集模块的曝光时间能够动态变化,提升图像采集模块的画面清晰度和图像质量。
可以理解的是，上述实施例仅示出了确定不符合所述成像条件的情况，在确定所述光照强度值是否符合预设的成像条件时，还可能存在符合所述成像条件的情况。对于符合所述成像条件的情况，可以根据实际情景设定相应的操作步骤，并在判断出符合所述成像条件后，跳转到相应步骤进行执行。例如，若符合所述成像条件，可以结束当前成像条件判断流程，并在预设检测周期后，进入新的成像条件判断流程。
在具体实施中,通过将检测得到的光照强度值和所述光照强度条件中的光照强度区间进行匹配,可以确定图像采集参数对应形成的图像信息是否符合预设的成像条件。具体而言,如图22所示,可以包括以下步骤S221至步骤S224。
S221,检测当前光照情况,得到相应的光照强度值。
S222,确定所述光照强度值是否属于所述光照强度区间,若不属于所述光照强度区间,则继续执行步骤S223,否则跳转至步骤S224。
S223,所述光照强度值不符合光照强度条件,并确定所述图像采集参数对应形成的图像信息不符合所述成像条件。
S224,所述光照强度值符合光照强度条件,并确定所述图像采集参数对应形成的图像信息符合所述成像条件。
在具体实施中,若所述光照强度值不符合光照强度条件,根据所述光照强度值可以增大或减小相应成像单元的曝光控制参数的大小。例如,若所述光照强度值小于所述光照强度区间的最小端点值,则增大所述图像采集模块的曝光控制参数;又例如,若所述光照强度值大于所述光照强度区间的最大端点值,则减小所述图像采集模块的曝光控制参数。
在具体实施中,在增大所述图像采集模块的曝光控制参数的大小之前,判断所述图像采集模块的曝光控制参数的大小与所述图像采集模块中相应的成像单元的扫描周期是否相等,若相等,则根据所述图像采集模块中各成像单元的分组设定,得到各成像单元组,用以按照预设的时序依次采集相对应的视场扫描区域的图像信息。
采用上述方案,将成像单元阵列对于相对应的视场扫描区域的整体曝光时间划分成各成像单元组的曝光时间,从而能够保障图像生成效率,提升画面清晰度和图像质量。
需要知道的是,上文描述了本说明书实施例提供的多个实施例方案,各实施例方案介绍的各可选方式可在不冲突的情况下相互结合、交叉引用,从而延伸出多种可能的实施例方案,这些均可认为是本说明书实施例披露、公开的实施例方案。
本说明书实施例还提供了一种数据处理模块,所述数据处理模块应用于激光雷达,并与所述激光雷达的接收部连接。
所述数据处理模块可以包括存储器和处理器,所述存储器适于存储一条或多条计算机指令,所述处理器运行所述计算机指令时执行前述任一实施例所述数据处理方法的步骤。具体步骤可以参照前述实施例,此处不再赘述。
可选地,处理器可以为CPU(中央处理器)、GPU(Graphics Processing Unit,图形处理器)、FPGA(Field Programmable Gate Array,现场可编程逻辑门阵列)等处理芯片,特定集成电路ASIC(Application Specific Integrated Circuit)或者是被配置成实施本发明实施例的一个或多个集成电路等。
可选地,存储器可以包含高速RAM存储器,也可以还包括非易失性存储器(non-volatile memory),例如至少一个磁盘存储器。
在具体实施中,所述数据处理模块还可以包括扩展接口,适于与激光雷达中其他模块(如采集模块、控制模块等)进行连接,实现数据交互。
本说明书实施例还提供了一种计算机可读存储介质,其上存储有计算机指令,所述计算机指令运行时执行前述任一实施例所述数据处理方法的步骤。具体步骤可以参照前述实施例,此处不再赘述。
所述计算机可读存储介质可以包括例如任何合适类型的存储器单元、存储器设备、存储器物品、存储器介质、存储设备、存储物品、存储介质和/或存储单元,例如,存储器、可移除的或不可移除的介质、可擦除或不可擦除介质、可写或可重写介质、数字或模拟介质、硬盘、软盘、光盘只读存储器(CD-ROM)、可刻录光盘(CD-R)、可重写光盘(CD-RW)、光盘、磁介质、磁光介质、可移动存储卡或磁盘、各种类型的数字通用光盘(DVD)、磁带、盒式磁带等。
计算机指令可以包括通过使用任何合适的高级、低级、面向对象的、可视化的、编译的和/或解释的编程语言来实现的任何合适类型的代码,例如,源代码、编译代码、解释代码、可执行代码、静态代码、动态代码、加密代码等。
本说明书实施例还提供了一种激光雷达,包括:上述任一项实施例所述的数据处理模块,所述数据处理模块适于对所述激光雷达采集的信息进行数据处理。具体可参照数据处理方法部分的相关描述,在此不再赘述。
其中,所述数据处理模块可以置于数据处理装置中,也可以置于其他硬件装置(如控制装置)中,并且,所述数据处理模块可以处理激光雷达的其他数据,本说明书实施例对此不做限制。
可以理解的是,本说明书实施例所称的“一个实施例”或“实施例”是指可包含于本说明书至少一个实现方式中的特定特征、结构或特性。在本说明书的描述中。
需要理解的是,术语“上”、“下”、“顶”、“底”等指示的方位或位置关系为基于附图所示的方位或位置关系,仅是为了便于描述本说明书实施例以及简化描述,而不是指示或暗示所指的装置或模块必须具有特定的方位、以特定的方位构造和操作,因此不能理解为对本说明书实施例的限制。
此外,说明书实施例中的术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含的包括一个或者更多个该特征。而且,术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以使这里描述的本说明书的实施例能够以除了在这里图示或描述的那些以外的顺序实施。
虽然本说明书实施例披露如上,但本说明书实施例并非限定于此。任何本领域技术人员,在不脱离本说明书实施例的精神和范围内,均可作各种更动与修改,因此本说明书实施例的保护范围应当以权利要求所限定的范围为准。

Claims (38)

  1. 一种激光雷达,所述激光雷达包括旋转扫描机构,其特征在于,激光雷达还包括:接收光学模块、回波探测模块和图像采集模块,其中:
    所述旋转扫描机构,适于通过机械装置进行旋转;
    所述接收光学模块,适于在所述旋转扫描机构进行旋转的过程中,将入射光传递至所述回波探测模块和所述图像采集模块;
    所述回波探测模块,适于由所述接收光学模块入射的入射光中获取回波信号,得到回波探测信息;
    所述图像采集模块,适于将通过所述接收光学模块入射的入射光转换为相应的电信号,得到图像信息。
  2. 根据权利要求1所述的激光雷达,其特征在于,所述激光雷达包括以下至少一种:
    转镜扫描式激光雷达;其中的转镜由所述旋转扫描机构带动旋转;
    机械转动式激光雷达;其中,所述接收光学模块、回波探测模块以及图像采集模块由所述旋转扫描机构带动旋转。
  3. 根据权利要求2所述的激光雷达,其特征在于,所述激光雷达为转镜扫描式激光雷达,所述回波探测模块和图像采集模块位于所述激光雷达的转镜的同侧。
  4. 根据权利要求2所述的激光雷达,其特征在于,所述激光雷达为转镜扫描式激光雷达,所述回波探测模块和图像采集模块分别位于所述激光雷达的转镜的两侧。
  5. 根据权利要求1至4中任一项所述的激光雷达,其特征在于,所述回波探测模块和所述图像采集模块采用如下任意一种方式设置于所述激光雷达中:
    所述回波探测模块和所述图像采集模块设置于同一基板的同一硅片上;
    所述回波探测模块和所述图像采集模块设置于同一基板的不同硅片上;
    所述回波探测模块和所述图像采集模块设置于同一印刷线路板的不同的基板上;
    所述回波探测模块和所述图像采集模块设置于不同印刷电路板的基板上。
  6. 根据权利要求5所述的激光雷达,其特征在于,所述基板上覆盖有塑封层,在所述塑封层开设对应于回波探测模块的第一透光窗和对应于图像采集模块的第二透光窗。
  7. 根据权利要求1所述的激光雷达,其特征在于,所述图像采集模块包括像素级滤光模块,适于对入射光进行滤光,所述像素级滤光模块采用半导体工艺实现。
  8. 根据权利要求1至4中任一项所述的激光雷达,其特征在于,所述回波探测模块和所述图像采集模块分别采用以下任意一种结构类型设置于硅片上:
    - 前照式结构;
    - 后照式结构;
    - 堆栈式结构。
  9. 根据权利要求8所述的激光雷达,其特征在于,设置于同一硅片上的所述回波探测模块和所述图像采集模块采用相同的结构类型。
  10. 根据权利要求1所述的激光雷达,其特征在于,所述图像采集模块包括:由N×M个成像单元组成的成像单元阵列,N和M均为正整数,N表示行数,M表示列数。
  11. 根据权利要求10所述的激光雷达,其特征在于,所述成像单元阵列的行数N≥M。
  12. 根据权利要求10所述的激光雷达,其特征在于,所述回波探测模块的行向视场角与所述图像采集模块的行向视场角一致或存在确定的对应关系。
  13. 根据权利要求10-12任一项所述的激光雷达,其特征在于,所述成像单元阵列包括多个成像单元组,所述成像单元组包括至少一个成像单元,且各个成像单元组的曝光结果经过信号积分处理后得到图像信息。
  14. 根据权利要求13所述的激光雷达,其特征在于,所述信号积分处理为时间延迟积分处理。
  15. 根据权利要求10-12任一项所述的激光雷达,其特征在于,所述成像单元阵列适于响应于控制指令,触发相应的成像单元感应入射光。
  16. 根据权利要求15所述的激光雷达,其特征在于,所述成像单元阵列适于响应于所述控制指令,触发相应的成像单元,并根据预设曝光控制参数控制相应的成像单元在相应的曝光时间内感应入射光。
  17. 根据权利要求10-12任一项所述的激光雷达,其特征在于,所述成像单元阵列适于响应于控制指令,按照预设的时序依次触发各成像单元组,使被触发的各成像单元组采集相对应的视场扫描区域的图像信息,所述成像单元组包括至少一个成像单元。
  18. 根据权利要求1所述的激光雷达,其特征在于,所述回波探测模块和图像采集模块均位于所述接收光学模块的焦平面上。
  19. 根据权利要求1所述的激光雷达,其特征在于,所述接收光学模块包括平面镜,所述平面镜将所述入射光反射至所述回波探测模块或所述图像采集模块。
  20. 根据权利要求1所述的激光雷达,其特征在于,所述回波探测模块包括如下至少一种:
    - SPADs阵列;
    - SiPM;
    - APD阵列;
    并且,所述图像采集模块包括以下至少一种:
    - CIS阵列;
    - CCD阵列。
  21. 根据权利要求1所述的激光雷达,其特征在于,所述激光雷达采用一维扫描方式。
  22. 一种数据处理方法,其特征在于,应用于上述权利要求1-19任一项所述激光雷达,所述数据处理方法包括以下步骤:
    计算所述图像采集模块和所述回波探测模块之间处于相对应的视场扫描区域的扫描间隔时间;
    基于所述扫描间隔时间,获取处于所述相对应的视场扫描区域的回波探测信息和图像信息;
    对所获得的所述回波探测信息和所述图像信息进行数据处理。
  23. 根据权利要求22所述的数据处理方法,其特征在于,所述基于所述扫描间隔时间,获取处于所述相对应的视场扫描区域的回波探测信息和图像信息,包括:
    基于所述扫描间隔时间,确定所述回波探测模块的探测帧时刻和所述图像采集模块的采集帧时刻之间的对应关系;
    基于所述回波探测模块的探测帧时刻,获取相应的回波探测信息;
    基于所述图像采集模块的采集帧时刻,获取相应的图像信息;
    基于所述探测帧时刻和所述采集帧时刻之间的对应关系,确定处于所述相对应的视场扫描区域的回波探测信息和图像信息。
  24. 根据权利要求23所述的数据处理方法,其特征在于,所述图像采集模块包括具有多个成像单元组的成像单元阵列,所述成像单元组包括至少一个成像单元;
    在所述基于所述图像采集模块的采集帧时刻,获取相应的图像信息之前,还包括:
    对每个成像单元组的曝光结果进行信号积分处理,获得所述图像信息。
  25. 根据权利要求24所述的数据处理方法,其特征在于,所述信号积分处理为时间延迟积分处理。
  26. 根据权利要求23所述的数据处理方法,其特征在于,所述图像采集模块包括成像单元阵列,响应于控制指令,按照预设的时序依次触发各成像单元组,使被触发各成像单元组采集所述相对应的视场扫描区域的图像信息,所述成像单元组包括至少一个成像单元;
    所述基于所述扫描间隔时间,确定所述回波探测模块的探测帧时刻和所述图像采集模块的采集帧时刻之间的对应关系,包括:
    根据所述控制指令,确定采集于所述相对应的视场扫描区域的图像信息对应的采集帧时刻,得到所述视场扫描区域的采集帧时刻集合;
    从所述采集帧时刻集合中确定起始采集帧时刻;
    基于所述扫描间隔时间,确定与所述起始采集帧时刻对应的探测帧时刻,与所述采集帧时刻集合中各采集帧时刻建立对应关系。
  27. 根据权利要求23所述的数据处理方法,其特征在于,所述基于所述图像采集模块的采集帧时刻,获取相应的图像信息,包括:
    基于所述图像采集模块的采集帧时刻,获取指定位置的成像单元采集所述相对应的视场扫描区域的图像信息。
  28. 根据权利要求22所述的数据处理方法,其特征在于,所述回波探测模块和图像采集模块均位于所述接收光学模块同一侧;
    所述计算所述图像采集模块和所述回波探测模块之间处于所述相对应的视场扫描区域的扫描间隔时间的步骤,包括:
    基于所述图像采集模块和所述接收光学模块之间的间距、所述回波探测模块和所述图像采集模块之间的间距、以及所述激光雷达的扫描角速度,计算所述图像采集模块的视场角和所述回波探测模块视场角对于所述相对应的视场扫描区域的扫描间隔时间。
  29. 根据权利要求22所述的数据处理方法,其特征在于,所述对所获得的所述回波探测信息和所述图像信息进行数据处理,包括以下任意一种:
    对处于所述相对应的视场扫描区域的回波探测信息和图像信息分别进行数据处理,得到相应的点云数据和图像数据;
    对处于所述相对应的视场扫描区域的回波探测信息和图像信息,将所述回波探测信息和所述图像信息进行融合,得到融合信息,并进行数据处理,得到融合数据。
  30. 根据权利要求22所述的数据处理方法,其特征在于,还包括以下至少一种:
    基于所述回波探测模块的回波探测信息,调整所述图像采集模块的图像采集参数;
    基于所述图像采集模块的图像信息,调整所述回波探测模块的回波信号探测参数。
  31. 根据权利要求22所述的数据处理方法,其特征在于,所述数据处理方法还包括:
    基于所述图像采集模块采集的图像信息,确定所述图像信息是否符合预设的成像条件;
    在确定不符合所述成像条件时,调整所述图像采集模块的曝光控制参数。
  32. 根据权利要求31所述的数据处理方法,其特征在于,所述基于所述图像采集模块采集的图像信息,确定所述图像信息是否符合预设的成像条件,包括:
    获取所述图像信息中的曝光量,确定是否属于所述成像条件中的曝光量区间;
    若不属于所述曝光量区间,则所述图像信息不符合成像条件。
  33. 根据权利要求32所述的数据处理方法,其特征在于,所述在确定不符合所述成像条件时,调整所述图像采集模块的曝光控制参数,包括以下至少一种:
    若所述图像信息中的曝光量小于所述曝光量区间的最小端点值,则增大所述图像采集模块的曝光控制参数;
    若所述图像信息中的曝光量大于所述曝光量区间的最大端点值,则减小所述图像采集模块的曝光控制参数。
  34. 根据权利要求33所述的数据处理方法,其特征在于,还包括:
    在增大所述图像采集模块的曝光控制参数之前,判断所述图像采集模块的曝光控制参数的大小与所述图像采集模块中相应的成像单元的扫描周期是否相等,若相等,则根据所述图像采集模块中各成像单元的分组设定,得到各成像单元组,用以按照预设的时序依次采集相对应的视场扫描区域的图像信息。
  35. 根据权利要求33所述的数据处理方法,其特征在于,还包括:
    检测当前光照情况,得到相应的光照强度值;
    基于所述光照强度值,判断在所述光照强度值下,所述图像采集模块当前的曝光控制参数是否符合所述成像条件;
    在确定不符合所述成像条件时,调整所述图像采集模块的曝光控制参数。
  36. 一种数据处理模块,包括存储器和处理器;其特征在于,所述数据处理模块应用于激光雷达,所述数据处理模块的存储器适于存储一条或多条计算机指令,所述处理器运行所述计算机指令时执行权利要求22-35中任一项所述方法的步骤。
  37. 一种计算机可读存储介质,其上存储有计算机指令,其特征在于,所述计算机指令运行时执行权利要求22-35任一项所述方法的步骤。
  38. 一种激光雷达,其特征在于,包括:上述权利要求36所述数据处理模块,所述数据处理模块适于对所述激光雷达采集的信息进行数据处理。
PCT/CN2021/109212 2020-08-28 2021-07-29 激光雷达、数据处理方法及数据处理模块、介质 WO2022042197A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010887687.0A CN114114317B (zh) 2020-08-28 2020-08-28 激光雷达、数据处理方法及数据处理模块、介质
CN202010887687.0 2020-08-28

Publications (1)

Publication Number Publication Date
WO2022042197A1 true WO2022042197A1 (zh) 2022-03-03

Family

ID=80354515

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/109212 WO2022042197A1 (zh) 2020-08-28 2021-07-29 激光雷达、数据处理方法及数据处理模块、介质

Country Status (2)

Country Link
CN (1) CN114114317B (zh)
WO (1) WO2022042197A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114415123A (zh) * 2022-04-01 2022-04-29 北京海兰信数据科技股份有限公司 一种基于非相参邻域加权脉冲积累处理方法及系统
CN117452433A (zh) * 2023-12-25 2024-01-26 之江实验室 基于单点单光子探测器的360度三维成像装置及方法

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117616309A (zh) * 2021-09-23 2024-02-27 华为技术有限公司 信号处理方法、信号传输方法及装置
WO2023193407A1 (zh) * 2022-04-07 2023-10-12 上海禾赛科技有限公司 固态激光雷达及固态激光雷达控制方法
CN116930920A (zh) * 2022-04-07 2023-10-24 上海禾赛科技有限公司 激光雷达及激光雷达控制方法
CN117665742A (zh) * 2022-08-30 2024-03-08 上海禾赛科技有限公司 激光雷达及其扫描控制方法
CN115902818A (zh) * 2023-02-21 2023-04-04 探维科技(北京)有限公司 图像融合激光的信号探测系统、雷达系统及其探测方法
CN116962890B (zh) * 2023-09-21 2024-01-09 卡奥斯工业智能研究院(青岛)有限公司 点云图像的处理方法、装置、设备和存储介质

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4218249B2 (ja) * 2002-03-07 2009-02-04 株式会社日立製作所 表示装置
CN101455070B (zh) * 2006-05-22 2014-06-11 汤姆森特许公司 图像传感器和用于读出图像传感器的像素的方法
US8648702B2 (en) * 2010-08-20 2014-02-11 Denso International America, Inc. Combined time-of-flight and image sensor systems
JP2014060573A (ja) * 2012-09-18 2014-04-03 Sony Corp 固体撮像素子、制御方法、および電子機器
CN105487082B (zh) * 2015-11-19 2018-04-10 中国空间技术研究院 一种用于远距离目标探测的激光雷达
CN106713754B (zh) * 2016-12-29 2019-04-16 中国科学院长春光学精密机械与物理研究所 基于面阵cmos图像传感器的运动场景成像方法及系统
DE102017208052A1 (de) * 2017-05-12 2018-11-15 Robert Bosch Gmbh Senderoptik für ein LiDAR-System, optische Anordnung für ein LiDAR-System, LiDAR-System und Arbeitsvorrichtung
CN107219533B (zh) * 2017-08-04 2019-02-05 清华大学 激光雷达点云与图像融合式探测系统
CN207557465U (zh) * 2017-08-08 2018-06-29 上海禾赛光电科技有限公司 基于转镜的激光雷达系统
KR102135560B1 (ko) * 2018-05-16 2020-07-20 주식회사 유진로봇 카메라와 라이다를 이용한 융합 센서 및 이동체
WO2019079211A1 (en) * 2017-10-19 2019-04-25 DeepMap Inc. LIDAR-CAMERA CALIBRATION TO GENERATE HIGH DEFINITION MAPS
US10739462B2 (en) * 2018-05-25 2020-08-11 Lyft, Inc. Image sensor processing using a combined image and range measurement system
WO2020097748A1 (zh) * 2018-11-12 2020-05-22 深圳市汇顶科技股份有限公司 一种光学传感装置和终端
CN109375237B (zh) * 2018-12-12 2019-11-19 北京华科博创科技有限公司 一种全固态面阵三维成像激光雷达系统
CN109557550B (zh) * 2018-12-25 2021-06-29 武汉万集信息技术有限公司 三维固态激光雷达装置及系统
CN109618113B (zh) * 2019-03-11 2019-05-21 上海奕瑞光电子科技股份有限公司 自动曝光控制方法及自动曝光控制组件系统
CN111050041B (zh) * 2019-11-25 2021-03-26 Oppo广东移动通信有限公司 图像传感器、控制方法、摄像头组件及移动终端
CN110971799B (zh) * 2019-12-09 2021-05-07 Oppo广东移动通信有限公司 控制方法、摄像头组件及移动终端
CN111405204B (zh) * 2020-03-11 2022-07-26 Oppo广东移动通信有限公司 图像获取方法、成像装置、电子设备及可读存储介质

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2419759A (en) * 2003-07-11 2006-05-03 Omnicom Engineering Ltd Laser scanning surveying and measuring system
JP6520407B2 (ja) * 2015-05-29 2019-05-29 株式会社デンソーウェーブ レーザレーダ装置
CN105549029A (zh) * 2016-01-19 2016-05-04 中国工程物理研究院流体物理研究所 一种照明扫描叠加成像系统及方法
CN106341586A (zh) * 2016-10-14 2017-01-18 安徽协创物联网技术有限公司 一种具有三轴云台的全景相机
CN108020825A (zh) * 2016-11-03 2018-05-11 岭纬公司 激光雷达、激光摄像头、视频摄像头的融合标定系统及方法
CN206848481U (zh) * 2017-07-03 2018-01-05 百度在线网络技术(北京)有限公司 车载信息采集系统
CN107610084A (zh) * 2017-09-30 2018-01-19 驭势科技(北京)有限公司 一种对深度图像和激光点云图进行信息融合的方法与设备
CN107991662A (zh) * 2017-12-06 2018-05-04 江苏中天引控智能系统有限公司 一种3d激光和2d成像同步扫描装置及其扫描方法
CN108957478A (zh) * 2018-07-23 2018-12-07 上海禾赛光电科技有限公司 多传感器同步采样系统及其控制方法、车辆

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114415123A (zh) * 2022-04-01 2022-04-29 北京海兰信数据科技股份有限公司 一种基于非相参邻域加权脉冲积累处理方法及系统
CN114415123B (zh) * 2022-04-01 2022-07-19 北京海兰信数据科技股份有限公司 一种基于非相参邻域加权脉冲积累处理方法及系统
CN117452433A (zh) * 2023-12-25 2024-01-26 之江实验室 基于单点单光子探测器的360度三维成像装置及方法
CN117452433B (zh) * 2023-12-25 2024-03-05 之江实验室 基于单点单光子探测器的360度三维成像装置及方法

Also Published As

Publication number Publication date
CN114114317A (zh) 2022-03-01
CN114114317B (zh) 2023-11-17

Similar Documents

Publication Publication Date Title
WO2022042197A1 (zh) 激光雷达、数据处理方法及数据处理模块、介质
US10658405B2 (en) Solid-state image sensor, electronic apparatus, and imaging method
US10804301B2 (en) Differential pixel circuit and method of computer vision applications
US10768301B2 (en) System and method for determining a distance to an object
KR20190055238A (ko) 물체까지의 거리를 결정하기 위한 시스템 및 방법
KR20190057124A (ko) 물체까지의 거리를 결정하기 위한 시스템
KR20110033567A (ko) 거리 센서를 포함하는 이미지 센서
US11531094B2 (en) Method and system to determine distance using time of flight measurement comprising a control circuitry identifying which row of photosensitive image region has the captured image illumination stripe
EP3519855A1 (en) System for determining a distance to an object
CN104681569A (zh) 固体摄像装置以及摄像系统
US20220018946A1 (en) Multi-function time-of-flight sensor and method of operating the same
US20210281791A1 (en) Pixel and image sensor including the same
US11665439B2 (en) Image sensor, a mobile device including the same and a method of controlling sensing sensitivity of an image sensor
KR20210132364A (ko) 이미지 센서
US11671722B2 (en) Image sensing device
US20220208815A1 (en) Image sensing device
US11860279B2 (en) Image sensing device and photographing device including the same
WO2021077374A1 (zh) 图像传感器、成像装置及移动平台
US20240118399A1 (en) Image sensor related to measuring distance
WO2019041250A1 (zh) 电子器件及包括其的测距装置和电子设备
EP4307378A1 (en) Imaging device and ranging system
US20220223639A1 (en) Image sensing device
US20230388650A1 (en) Image sensing device and image sensing method thereof
WO2021157386A1 (ja) 固体撮像素子および撮像装置
WO2021157393A1 (ja) 測距装置および測距方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21860037

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21860037

Country of ref document: EP

Kind code of ref document: A1