WO2021213432A1 - Data fusion - Google Patents

Data fusion

Info

Publication number
WO2021213432A1
WO2021213432A1 PCT/CN2021/088652 CN2021088652W WO2021213432A1 WO 2021213432 A1 WO2021213432 A1 WO 2021213432A1 CN 2021088652 W CN2021088652 W CN 2021088652W WO 2021213432 A1 WO2021213432 A1 WO 2021213432A1
Authority
WO
WIPO (PCT)
Prior art keywords
image sensor
rotation angle
information
image
point cloud
Prior art date
Application number
PCT/CN2021/088652
Other languages
English (en)
French (fr)
Inventor
宋爽
Original Assignee
北京三快在线科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京三快在线科技有限公司 filed Critical 北京三快在线科技有限公司
Priority to US17/908,921 (US20230093680A1)
Priority to EP21792770.6A (EP4095562A4)
Publication of WO2021213432A1

Links

Images

Classifications

    • G01S17/89: Lidar systems specially adapted for specific applications for mapping or imaging
    • G01S17/894: 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G01C21/1652: Dead reckoning by integrating acceleration or speed, i.e. inertial navigation, combined with non-inertial navigation instruments with ranging devices, e.g. LIDAR or RADAR
    • G01C21/32: Structuring or formatting of map data
    • G01S17/42: Simultaneous measurement of distance and other co-ordinates
    • G01S17/86: Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S7/4814: Constructional features, e.g. arrangements of optical elements, of transmitters alone
    • G01S7/4817: Constructional features, e.g. arrangements of optical elements, relating to scanning
    • G01S17/931: Lidar systems specially adapted for anti-collision purposes of land vehicles

Definitions

  • This specification relates to the field of computer technology, and in particular to data fusion.
  • Typically, for the same target, data collected by different sensors can be fused to obtain new data, and the new data can contain richer information.
  • Taking the field of unmanned driving as an example, unmanned equipment can usually collect image data through image sensors and point cloud data through lidar. Image data has rich color information but lacks accurate depth information (that is, distance information), whereas point cloud data has accurate distance information but no color information. Unmanned equipment can therefore obtain both image data and point cloud data and combine their respective advantages to obtain data that fuses distance information with color information, so that the unmanned equipment can be controlled based on that fused data.
  • This specification provides a data fusion method applicable to a system that includes a vehicle and a lidar and at least one image sensor provided on the vehicle, where the lidar collects a point cloud by rotating a laser transmitter. The method includes: obtaining the rotation angle of the laser transmitter determined by the rotation angle measurer of the lidar; selecting, according to a predetermined correspondence between rotation angle intervals and image sensors, the image sensor corresponding to the rotation angle interval in which the obtained rotation angle of the laser transmitter falls as the designated image sensor, where the rotation angle interval corresponding to each image sensor is the value interval of the rotation angle of the laser transmitter within the collection overlap area of the lidar and that image sensor; sending a trigger signal to the designated image sensor so that the designated image sensor collects an image; receiving the image and the point cloud collected and returned by the lidar within that rotation angle interval; and fusing the information of the pixels in the image and the information of the points in the point cloud according to the pose change information of the vehicle during collection of the image and the point cloud.
  • Optionally, sending a trigger signal to the designated image sensor includes: taking the image sensor corresponding to the rotation angle interval in which the rotation angle of the laser transmitter obtained at the previous moment falls as the reference image sensor; and, if the designated image sensor is not the reference image sensor, sending the trigger signal to the designated image sensor.
  • Optionally, fusing the information of the pixels in the image and the information of the points in the point cloud according to the pose change information of the vehicle includes: obtaining the pose information of the vehicle when the designated image sensor collects the image as reference pose information, and the pose information of the vehicle when the lidar finishes collecting the point cloud within the collection overlap area corresponding to the designated image sensor as offset pose information; determining compensation pose information for the offset pose information according to the reference pose information; and fusing the information of the pixels in the image and the information of the points in the point cloud according to the compensation pose information.
  • Optionally, fusing the information of the pixels in the image and the information of the points in the point cloud according to the compensation pose information includes: determining mapping parameters according to the compensation pose information and the relative position of the designated image sensor and the lidar; and fusing the information of the pixels in the image and the information of the points in the point cloud according to the mapping parameters.
  • Optionally, fusing the information of the pixels in the image and the information of the points in the point cloud according to the mapping parameters includes: performing coordinate transformation on the spatial position information of the points in the point cloud and/or the information of the pixels in the image according to the mapping parameters; and, for each point in the point cloud, determining the pixel in the image corresponding to that point according to the result of the coordinate transformation and fusing the spatial position information of the point with the pixel value of the corresponding pixel in the image to obtain the fused data.
  • Optionally, the method is executed by a processor configured on the vehicle, where the processor includes a field-programmable gate array (FPGA).
  • This specification provides a data fusion system that includes a processor, a lidar, at least one image sensor, an inertial measurement unit (IMU), and a vehicle.
  • the vehicle is equipped with the processor, the lidar, the at least one image sensor, and the IMU, wherein the lidar collects a point cloud by rotating a laser transmitter;
  • The processor is configured to: obtain the rotation angle of the laser transmitter determined by the rotation angle measurer of the lidar; select, according to the predetermined correspondence between rotation angle intervals and image sensors, the image sensor corresponding to the rotation angle interval in which the obtained rotation angle of the laser transmitter falls as the designated image sensor, where the rotation angle interval corresponding to each image sensor is the value interval of the rotation angle of the laser transmitter within the collection overlap area of the lidar and that image sensor; send a trigger signal to the designated image sensor so that the designated image sensor collects an image; receive the image and the point cloud collected and returned by the lidar within that rotation angle interval; and fuse the information of the pixels in the image and the information of the points in the point cloud according to the pose change information of the vehicle during collection of the image and the point cloud.
  • the lidar is configured to collect a point cloud and provide the processor with the rotation angle of the laser transmitter determined by the rotation angle measurer;
  • the at least one image sensor is configured to receive the trigger signal sent by the processor, and collect and return an image according to the trigger signal;
  • the IMU is used to collect the pose information of the vehicle.
  • This specification provides a data fusion device. The vehicle on which the device is located is provided with a lidar and at least one image sensor, where the lidar collects point clouds by rotating a laser transmitter. The device includes:
  • an obtaining module configured to obtain the rotation angle of the laser transmitter determined by the rotation angle measurer of the lidar;
  • a selection module configured to select, according to the predetermined correspondence between rotation angle intervals and image sensors, the image sensor corresponding to the rotation angle interval in which the obtained rotation angle of the laser transmitter falls as the designated image sensor, where the rotation angle interval corresponding to each image sensor is the value interval of the rotation angle of the laser transmitter within the collection overlap area of the lidar and that image sensor;
  • a sending module configured to send a trigger signal to the designated image sensor, so that the designated image sensor collects an image;
  • a receiving module configured to receive the image and the point cloud collected and returned by the lidar within the rotation angle interval in which the obtained rotation angle of the laser transmitter falls; and
  • a fusion module configured to fuse the information of the pixels in the image and the information of the points in the point cloud according to the pose change information of the vehicle during collection of the image and the point cloud.
  • This specification provides a computer-readable storage medium that stores a computer program, and when the computer program is executed by a processor, the above-mentioned data fusion method is realized.
  • This specification provides an electronic device that includes a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor implements the above data fusion method when executing the program.
  • In this specification, the vehicle is equipped with a lidar and at least one image sensor, and the lidar collects the point cloud by rotating a laser transmitter. The processor can obtain the rotation angle of the laser transmitter determined by the rotation angle measurer of the lidar and, according to the predetermined correspondence between rotation angle intervals and image sensors, select the image sensor corresponding to the interval in which the obtained rotation angle falls as the designated image sensor, where the rotation angle interval corresponding to each image sensor is the value interval of the rotation angle of the laser transmitter within the collection overlap area of the lidar and that image sensor. The processor sends a trigger signal to the designated image sensor so that it collects an image, receives the image and the point cloud collected and returned by the lidar within that rotation angle interval, and fuses the information of the pixels in the image and the information of the points in the point cloud according to the pose change information of the vehicle during collection of the image and the point cloud.
  • With the data fusion method provided in this specification, when the rotation angle of the lidar's laser transmitter falls within the rotation angle interval corresponding to the collection overlap area of the lidar and the designated image sensor, a trigger signal is sent to the designated image sensor so that it collects an image; the information of the pixels in the image and the information of the points in the point cloud can then be fused according to the pose change information of the vehicle during image collection and point cloud collection, so the fused data is more accurate.
  • Fig. 1 is a schematic diagram of a data fusion system provided by an embodiment of this specification;
  • Fig. 2 is a flowchart of a data fusion method provided by an embodiment of this specification;
  • Fig. 3 is a schematic diagram of the collection overlap areas between the collection areas of several image sensors and the collection area of the lidar provided by an embodiment of this specification;
  • Fig. 4 is a schematic structural diagram of a data fusion device provided by an embodiment of this specification;
  • Fig. 5 is a schematic diagram of an electronic device, corresponding to Fig. 1, provided by an embodiment of this specification.
  • The data fusion method provided in this specification mainly involves fusing the information of the pixels in the image collected by the image sensor with the information of the points in the point cloud collected by the lidar.
  • The information of a pixel in the image can include rich color information, which can be determined by the pixel value of that pixel; for example, the color information can be expressed in the RGB color mode, in which each pixel value ranges over [0, 255].
  • The information of a point in the point cloud can include accurate distance information, reflection intensity information, and so on, and the distance information can be determined from the spatial position information of the points in the point cloud. Therefore, the pixel information in the image and the point information in the point cloud can be fused to obtain fused data that has rich color information together with accurate distance information and reflection intensity information.
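  • As a concrete illustration of what one fused record could hold, the following minimal sketch (in Python, with illustrative field names that are not taken from the patent) combines the spatial position and reflection intensity of a lidar point with the RGB value of its corresponding image pixel:

        from dataclasses import dataclass

        @dataclass
        class FusedPoint:
            # spatial position of the lidar point (accurate distance information)
            x: float
            y: float
            z: float
            # reflection intensity reported by the lidar for this point
            intensity: float
            # color of the corresponding image pixel, RGB values in [0, 255]
            r: int
            g: int
            b: int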
  • The data fusion method provided in this specification can be widely used in a variety of scenarios. For example, it can be applied to unmanned driving: the unmanned equipment can fuse the information of the pixels in the image collected by the image sensor and the information of the points in the point cloud collected by the lidar to obtain fused data, so that the unmanned equipment can be controlled based on the fused data. Taking trajectory planning for unmanned equipment as an example, the unmanned equipment can perform image segmentation and other processing on the fused data, detect obstacles in its surrounding environment, and plan its trajectory based on the obstacle information.
  • As another example, the data fusion method provided in this specification can be applied to generating high-precision maps. The processor can fuse the information of the pixels in the image collected by the image sensor mounted on the vehicle and the information of the points in the point cloud collected by the mounted lidar to obtain fused data, determine the semantic information of the fused data, and generate a more accurate high-precision map based on that semantic information.
  • As yet another example, the data fusion method provided in this specification can be applied to machine learning model training. The processor can fuse the information of the pixels in the image collected by the image sensor and the information of the points in the point cloud collected by the lidar to obtain fused data, and use the fused data as training samples. Compared with using image data or point cloud data alone, the fused data expands the feature dimensions of the training samples, so the machine learning model can be trained to better effect.
  • Fig. 1 is a schematic diagram of a data fusion system provided by an embodiment of this specification.
  • The data fusion system includes a vehicle 1, a lidar 2, a number of image sensors 3, an inertial measurement unit (IMU) 4, and a processor 5.
  • The vehicle 1 may be an unmanned device such as an unmanned vehicle, or an ordinary vehicle. Of course, a vehicle is used here only as an example; the carrier may also be a flying device such as a drone.
  • The lidar 2 may be a mechanical lidar that collects point clouds by rotating a laser transmitter, and its collection area may overlap with the collection area of each image sensor 3.
  • Since the lidar 2 can collect a panoramic point cloud, the lidar 2 and the several image sensors 3 can be installed on the top of the vehicle 1; a schematic diagram of the installation positions is shown in Fig. 1.
  • For each image sensor 3, the overlap between the collection area of the lidar 2 and the collection area of that image sensor 3 is taken as the collection overlap area corresponding to that image sensor 3, that is, the collection overlap area of the lidar 2 and that image sensor 3. For example, the semi-closed area enclosed by the dotted lines in Fig. 1 is the collection overlap area of the lidar 2 and the image sensor 3 on the right.
  • The IMU 4 and the processor 5 can be installed on the top of the vehicle 1 or at other positions on the vehicle 1.
  • The lidar 2 may include a laser transmitter, a rotation angle measurer, a motor, and so on (not shown in Fig. 1).
  • The lidar 2 can collect a panoramic point cloud by rotating the laser transmitter.
  • While the lidar 2 collects the point cloud, the rotation angle measurer (that is, the photoelectric code disc in a mechanical lidar) can determine the rotation angle of the laser transmitter, so as to provide the processor 5 with the rotation angle of the laser transmitter in real time.
  • The image sensor 3 can receive a trigger signal sent by the processor 5, collect an image according to the trigger signal, and send the collected image to the processor 5.
  • For data collection in the collection overlap area, the time the image sensor 3 needs to collect an image is much shorter than the time the lidar 2 needs to collect the point cloud by rotating the laser transmitter with the motor; the image collection time of the image sensor 3 can therefore be ignored.
  • The IMU 4 can collect the pose information of the vehicle in real time, and its installation position on the vehicle 1 can be determined according to the actual situation. The pose information may include the displacement information of the vehicle, the yaw angle information of the vehicle, and so on.
  • The processor 5 may include a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a central processing unit (CPU), and so on. Since the delay of an FPGA is at the nanosecond (ns) level and its synchronization performance is therefore better, the processor 5 may be an FPGA.
  • The processor 5 may fuse the information of the points in the point cloud collected by the lidar 2 and the information of the pixels in the image collected by the image sensor 3.
  • Specifically, the processor 5 may first obtain the rotation angle of the laser transmitter determined by the rotation angle measurer of the lidar 2. Then, according to the predetermined correspondence between rotation angle intervals and image sensors 3, it selects the image sensor 3 corresponding to the rotation angle interval in which the obtained rotation angle of the laser transmitter falls as the designated image sensor 3; for each image sensor 3, the corresponding rotation angle interval is the value interval of the rotation angle of the laser transmitter within the collection overlap area of the lidar 2 and that image sensor 3. After that, the processor 5 may send a trigger signal to the designated image sensor 3 so that it collects an image, and may receive the collected image as well as the point cloud collected and returned by the lidar 2 within that rotation angle interval. Finally, the information of the pixels in the image and the information of the points in the point cloud can be fused according to the pose change information of the vehicle 1 during collection of the image and the point cloud.
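  • This control flow can be summarized with the following hedged sketch in Python; the sample stream, the interval table, and the trigger_sensor and fuse callables are illustrative assumptions rather than interfaces defined by the patent:

        def run_fusion_loop(samples, intervals, trigger_sensor, fuse):
            """Sketch of the processor's loop.

            samples:   iterable of (angle_deg, new_points, vehicle_pose) read in real time
            intervals: list of (start_deg, end_deg, sensor_id) with the end exclusive
            """
            current, buffered, ref_pose = None, [], None
            for angle, new_points, pose in samples:
                a = angle % 360.0
                hit = next(((s, e, sid) for s, e, sid in intervals if s <= a < e), None)
                if hit != current:
                    if current is not None:
                        # The laser just left the previous collection overlap area, so its
                        # image and point cloud are complete: hand them to the fusion step
                        # together with the reference pose and the current (offset) pose.
                        fuse(current[2], buffered, ref_pose, pose)
                    current, buffered, ref_pose = hit, [], pose
                    if hit is not None:
                        trigger_sensor(hit[2])  # single trigger on entering a new interval
                if current is not None:
                    buffered.extend(new_points)  # accumulate the point cloud for this interval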
  • The process by which the processor 5 fuses the information of the points in the point cloud collected by the lidar 2 and the information of the pixels in the image collected by the image sensor 3 may be as shown in Fig. 2.
  • Fig. 2 is a flowchart of a data fusion method provided by an embodiment of this specification, which may include steps S200 to S208.
  • S200: The processor obtains the rotation angle of the laser transmitter determined by the rotation angle measurer of the lidar.
  • In this specification, the lidar collects the point cloud in real time by rotating the laser transmitter, so the rotation angle measurer located inside the lidar can determine the rotation angle of the laser transmitter in real time.
  • When the lidar is a mechanical lidar, the rotation angle measurer can also be called a photoelectric code disc.
  • The processor may be an FPGA, so that the time at which the image sensor collects an image is synchronized, with low delay, with the time at which the laser transmitter rotates to the edge of the collection overlap area corresponding to that image sensor.
  • The processor can obtain the rotation angle of the laser transmitter determined by the rotation angle measurer in real time.
  • S202: According to the predetermined correspondence between rotation angle intervals and image sensors, select the image sensor corresponding to the rotation angle interval in which the obtained rotation angle of the laser transmitter falls as the designated image sensor, where, for each image sensor, the rotation angle interval corresponding to that image sensor is the value interval of the rotation angle of the laser transmitter within the collection overlap area of the lidar and that image sensor.
  • In this specification, the processor may determine the correspondence between rotation angle intervals and image sensors in advance. Specifically, the processor may first set a number (that is, an image sensor identifier) for each image sensor. Then, for each image sensor, the processor can determine the overlap between the collection area of that image sensor and the collection area of the lidar as the collection overlap area corresponding to that image sensor; that is, the processor can determine the correspondence between image sensor identifiers and collection overlap areas. Finally, the value interval of the rotation angle of the laser transmitter within the collection overlap area corresponding to each image sensor is taken as that image sensor's rotation angle interval. The processor can thus determine the correspondence between rotation angle intervals and image sensors, that is, the three-way correspondence among image sensor identifier, rotation angle interval, and collection overlap area.
  • Fig. 3 is a schematic diagram of the overlap between the collection areas of several image sensors and the collection area of the lidar provided by an embodiment of this specification.
  • In Fig. 3, 31 and 32 are two image sensors, whose identifiers can be denoted 31 and 32 respectively.
  • Point O is the position from which the laser is emitted by the lidar. The semi-closed area I enclosed by line OA and line OB (indicated by the dashed lines and shading) is the overlap between the collection area of the image sensor 31 and the collection area of the lidar, and the semi-closed area II enclosed by line OB and line OC (indicated by the dashed lines and shading) is the overlap between the collection area of the image sensor 32 and the collection area of the lidar. Since the lidar emits laser light by rotating the laser transmitter, the rotation direction can be either clockwise or counterclockwise; for ease of description, the following assumes a counterclockwise rotation direction.
  • As can be seen from Fig. 3, line OA and line OB form the two edges of the collection overlap area corresponding to the image sensor 31, and line OB and line OC form the two edges of the collection overlap area corresponding to the image sensor 32.
  • If it is calibrated in advance that the rotation angle of the laser transmitter ranges from 45° to 90° within the collection overlap area corresponding to the image sensor 31 and from 90° to 135° within the collection overlap area corresponding to the image sensor 32, then, since the rotation direction is counterclockwise, the rotation angle interval corresponding to the image sensor 31 can be taken as [45°, 90°) and the rotation angle interval corresponding to the image sensor 32 as [90°, 135°).
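  • Under the Fig. 3 example, the pre-computed correspondence could be stored as a simple lookup table; the structure below is only an illustrative sketch, not a data layout specified by the patent:

        # correspondence: image sensor identifier -> rotation angle interval [start, end)
        SENSOR_INTERVALS = {
            "31": (45.0, 90.0),    # collection overlap area of the lidar and sensor 31
            "32": (90.0, 135.0),   # collection overlap area of the lidar and sensor 32
        }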
  • In addition to determining the correspondence between rotation angle intervals and image sensors in advance, the processor can also pre-calibrate the installation positions of the lidar, each image sensor, and the IMU, and can pre-calibrate the parameters of each image sensor, where the parameters of an image sensor can include intrinsic parameters, extrinsic parameters, distortion coefficients, and so on.
  • After the processor obtains the rotation angle of the laser transmitter determined by the rotation angle measurer of the lidar, it can determine the rotation angle interval in which the obtained rotation angle falls. According to the correspondence between rotation angle intervals and image sensors and the rotation angle interval in which the obtained rotation angle falls, a designated image sensor can be selected from among the image sensors; that is, when the obtained rotation angle of the laser transmitter falls within the rotation angle interval corresponding to an image sensor, that image sensor can be taken as the designated image sensor.
  • Following the example of Fig. 3, when the obtained rotation angle of the laser transmitter is 45° or any value in [45°, 90°), the image sensor 31 can be selected as the designated image sensor.
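  • Selecting the designated image sensor from that table is then a simple interval lookup; the sketch below assumes the SENSOR_INTERVALS table above and treats each interval end as exclusive:

        def designated_sensor(rotation_angle_deg, sensor_intervals=SENSOR_INTERVALS):
            """Return the identifier of the image sensor whose rotation angle interval
            contains the obtained rotation angle, or None when the laser is currently
            outside every collection overlap area."""
            a = rotation_angle_deg % 360.0
            for sensor_id, (start, end) in sensor_intervals.items():
                if start <= a < end:
                    return sensor_id
            return None

        # designated_sensor(45.0) -> "31"; designated_sensor(100.0) -> "32"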
  • S204: Send a trigger signal to the designated image sensor, so that the designated image sensor collects an image.
  • After selecting the designated image sensor, the processor may send a trigger signal to it. After receiving the trigger signal sent by the processor, the designated image sensor may collect an image, where the image collection area of the designated image sensor includes the collection overlap area corresponding to the designated image sensor. Following the example above, after the image sensor 31 receives the trigger signal, it collects an image of the dashed region enclosed by line OA and line OB.
  • Specifically, while the laser light emitted by the lidar is inside the collection overlap area corresponding to the designated image sensor, the processor only needs to send the trigger signal to the designated image sensor once, without repeating it. The processor may take the image sensor corresponding to the rotation angle interval in which the rotation angle of the laser transmitter obtained at the previous moment falls as the reference image sensor, and then judge whether the designated image sensor is the reference image sensor. If the judgment result is yes, the processor already sent a trigger signal to the designated image sensor at the previous moment and does not need to send it again at the current moment; if the judgment result is no, the processor has not yet sent a trigger signal to the designated image sensor and the designated image sensor has not yet collected an image, so the processor can send the trigger signal so that the designated image sensor collects an image upon receiving it.
  • Alternatively, the processor can judge whether the rotation angle of the laser transmitter is an endpoint of the rotation angle interval corresponding to the designated image sensor. If the judgment result is yes, the laser emitted by the laser transmitter coincides with an edge of the collection overlap area corresponding to the designated image sensor, and a trigger signal can be sent to the designated image sensor; if the judgment result is no, the processor has already sent the trigger signal to the designated image sensor and does not need to send it again.
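  • The single-trigger rule based on the reference image sensor can be sketched as follows; send_trigger stands in for whatever mechanism actually drives the trigger line and is an assumption made for illustration:

        def maybe_trigger(designated, reference, send_trigger):
            """designated: sensor selected at the current moment (may be None);
            reference:  sensor selected at the previous moment.
            Returns the reference image sensor to use at the next moment."""
            if designated is not None and designated != reference:
                # The laser has just entered this sensor's collection overlap area,
                # so the trigger signal is sent exactly once.
                send_trigger(designated)
            return designated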
  • S206: Receive the image and the point cloud collected and returned by the lidar within the rotation angle interval in which the rotation angle of the laser transmitter falls.
  • After the processor sends a trigger signal to the designated image sensor, the designated image sensor can collect an image according to the received trigger signal and send the collected image to the processor, and the processor can receive the image sent by the designated image sensor. Because the lidar collects the point cloud within the collection overlap area corresponding to the designated image sensor by physically rotating the laser transmitter, its collection takes considerably longer. Therefore, when the laser emitted by the lidar coincides with one edge of the collection overlap area corresponding to the designated image sensor, the designated image sensor collects the image while the lidar continues to collect the point cloud; when the laser emitted by the lidar coincides with the other edge of that collection overlap area, the processor can receive the point cloud collected and returned by the lidar within the rotation angle interval in which the rotation angle of the laser transmitter falls.
  • Following the example above, when the obtained rotation angle of the laser transmitter is 45°, the processor can send a trigger signal to the image sensor 31, the image sensor 31 collects an image according to the trigger signal, the processor receives the image sent by the image sensor 31, and the lidar continues to collect the point cloud within the collection overlap area corresponding to the image sensor 31. When the rotation angle of the laser transmitter reaches 90° or exceeds 90°, the lidar has finished collecting the point cloud within the collection overlap area corresponding to the image sensor 31 and sends that point cloud to the processor.
  • S208: Fuse the information of the pixels in the image and the information of the points in the point cloud according to the pose change information of the vehicle during collection of the image and the point cloud.
  • After the processor receives the image collected and returned by the designated image sensor and the point cloud collected by the lidar within the collection overlap area corresponding to the designated image sensor, it can fuse the information of the pixels in the image and the information of the points in the point cloud according to the pose information of the vehicle collected by the IMU.
  • Specifically, since the IMU can collect the pose information of the vehicle in real time, the processor can first take the pose information of the vehicle collected by the IMU when the designated image sensor collects the image as the reference pose information, and the pose information of the vehicle collected by the IMU when the lidar finishes collecting the point cloud within the collection overlap area corresponding to the designated image sensor as the offset pose information. Following the example above, the processor can take the pose information of the vehicle collected by the IMU when the rotation angle of the laser transmitter is 45° as the reference pose information, and the pose information collected when the rotation angle is 90° as the offset pose information.
  • Then, according to the reference pose information, the processor may determine the compensation pose information for the offset pose information. Specifically, since the pose information may include the displacement information of the vehicle and the yaw angle information of the vehicle, the difference between the offset pose information and the reference pose information can be taken as the compensation pose information for the offset pose information. For example, if the vehicle is driving in a straight line and the reference pose information and the offset pose information collected by the IMU differ by 1 meter, the compensation pose information may include a displacement of 1 meter, a yaw angle of 0°, and so on.
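  • With the pose represented, purely for illustration, as a planar (x, y, yaw) triple, the compensation pose information is simply the difference between the offset pose and the reference pose:

        def compensation_pose(reference_pose, offset_pose):
            """Both poses are (x_m, y_m, yaw_deg) triples collected by the IMU."""
            rx, ry, ryaw = reference_pose
            ox, oy, oyaw = offset_pose
            return (ox - rx, oy - ry, (oyaw - ryaw) % 360.0)

        # Driving straight ahead for 1 meter between the two collection moments:
        # compensation_pose((0.0, 0.0, 0.0), (1.0, 0.0, 0.0)) -> (1.0, 0.0, 0.0)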
  • Next, the processor can determine the mapping parameters according to the compensation pose information and the relative position of the designated image sensor and the lidar. Specifically, the processor can use the compensation pose information to compensate the pose information of the vehicle at the time the lidar collected the point cloud, so as to ensure that the pose information of the vehicle when the lidar collects each point in the point cloud is the same as the pose information of the vehicle when the designated image sensor collects the image. The mapping parameters are then determined according to the compensated pose information of the vehicle at the time the lidar collected the point cloud and the pre-calibrated relative position of the designated image sensor and the lidar.
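  • One way such mapping parameters could be assembled, assuming a planar compensation pose and a pre-calibrated 4x4 camera-from-lidar extrinsic matrix (both of which are assumptions made for illustration), is to undo the accumulated vehicle motion and then apply the extrinsic:

        import numpy as np

        def motion_matrix(dx, dy, dyaw_deg):
            """4x4 transform from the vehicle frame at the end of the sweep to the
            vehicle frame at the image-trigger moment, assuming planar motion with
            (dx, dy) expressed in the image-trigger frame."""
            c, s = np.cos(np.radians(dyaw_deg)), np.sin(np.radians(dyaw_deg))
            m = np.eye(4)
            m[:2, :2] = [[c, -s], [s, c]]
            m[:2, 3] = [dx, dy]
            return m

        def mapping_matrix(compensation, extrinsic_cam_from_lidar):
            """Compose the motion compensation (a single pose for the whole cloud, a
            simplification) with the pre-calibrated camera-from-lidar extrinsic."""
            return extrinsic_cam_from_lidar @ motion_matrix(*compensation)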
  • Finally, the processor can fuse the information of the pixels in the image and the information of the points in the point cloud according to the mapping parameters.
  • Specifically, according to the mapping parameters, the processor may perform coordinate transformation on the spatial position information of the points in the point cloud and/or the information of the pixels in the image. For each point in the point cloud, the pixel in the image corresponding to that point is determined according to the result of the coordinate transformation, and the spatial position information of the point is fused with the pixel value of the corresponding pixel to obtain the fused data.
  • For each point in the point cloud, the processor can perform coordinate transformation on the spatial position information of that point according to the mapping parameters to obtain the mapped position of that point in the image, and take the pixel located at that mapped position in the image as the pixel in the image corresponding to that point.
  • In addition, for each pixel in the image, the processor can perform coordinate transformation on the information of that pixel according to the mapping parameters to obtain the mapped position of that pixel in the point cloud, and take the point located at that mapped position in the point cloud as the point in the point cloud corresponding to that pixel.
  • Of course, the processor can also establish a world coordinate system centered on the vehicle and, according to the mapping parameters, perform coordinate transformation on the spatial position information of each point in the point cloud and the information of each pixel in the image to obtain the position of each point in the point cloud and of each pixel in the image in the world coordinate system; the spatial position information of a point in the point cloud and the pixel value of a pixel in the image located at the same position in the world coordinate system can then be fused to obtain the fused data.
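  • A minimal sketch of this last step, assuming a pinhole camera model with an intrinsic matrix K and the mapping matrix from the previous sketch (both assumptions, since the patent leaves the exact form of the mapping parameters open), projects each point into the image and attaches the color of the pixel it lands on:

        import numpy as np

        def fuse_point_cloud_with_image(points_xyz, intensities, image, mapping, K):
            """points_xyz: iterable of (x, y, z) lidar points; intensities: matching
            reflection intensities; image: HxWx3 RGB array; mapping: 4x4 matrix from
            the lidar/vehicle frame to the camera frame; K: 3x3 camera intrinsics."""
            fused = []
            for (x, y, z), intensity in zip(points_xyz, intensities):
                cam = mapping @ np.array([x, y, z, 1.0])   # point in the camera frame
                if cam[2] <= 0.0:
                    continue                               # behind the image plane
                u, v, w = K @ cam[:3]
                col, row = int(u / w), int(v / w)          # mapped pixel position
                if 0 <= row < image.shape[0] and 0 <= col < image.shape[1]:
                    r, g, b = image[row, col]
                    # fuse the point's spatial position and intensity with the pixel value
                    fused.append((x, y, z, float(intensity), int(r), int(g), int(b)))
            return fused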
  • The data fusion method provided in this specification can be applied to the field of delivery using unmanned equipment, for example, scenarios in which unmanned equipment is used for express delivery, takeaway delivery, and the like.
  • Specifically, in the above scenarios, an autonomous fleet composed of multiple pieces of unmanned equipment can be used for delivery.
  • Based on the data fusion method shown in Fig. 2, an embodiment of this specification correspondingly provides a schematic structural diagram of a data fusion device, as shown in Fig. 4.
  • Fig. 4 is a schematic structural diagram of a data fusion device provided by an embodiment of this specification. The vehicle on which the device is located is provided with a lidar and at least one image sensor, where the lidar collects point clouds by rotating a laser transmitter. The device includes:
  • the obtaining module 401 is configured to obtain the rotation angle of the laser transmitter determined by the rotation angle measurer of the lidar;
  • the selection module 402 is configured to select, according to the predetermined correspondence between rotation angle intervals and image sensors, the image sensor corresponding to the rotation angle interval in which the obtained rotation angle of the laser transmitter falls as the designated image sensor, where the rotation angle interval corresponding to each image sensor is the value interval of the rotation angle of the laser transmitter within the collection overlap area of the lidar and that image sensor;
  • the sending module 403 is configured to send a trigger signal to the designated image sensor, so that the designated image sensor collects an image;
  • the receiving module 404 is configured to receive the image and the point cloud collected and returned by the lidar within the rotation angle interval where the rotation angle of the laser transmitter is acquired;
  • the fusion module 405 is configured to fuse the information of the pixels in the image and the information of the points in the point cloud according to the pose change information of the vehicle during collection of the image and the point cloud.
  • Optionally, the sending module 403 is configured to take the image sensor corresponding to the rotation angle interval in which the rotation angle of the laser transmitter obtained at the previous moment falls as the reference image sensor, and, if the designated image sensor is not the reference image sensor, send the trigger signal to the designated image sensor.
  • Optionally, the fusion module 405 is configured to obtain the pose information of the vehicle when the designated image sensor collects the image as the reference pose information, and the pose information of the vehicle when the lidar finishes collecting the point cloud within the collection overlap area corresponding to the designated image sensor as the offset pose information; determine the compensation pose information for the offset pose information according to the reference pose information; and fuse the information of the pixels in the image and the information of the points in the point cloud according to the compensation pose information.
  • Optionally, the fusion module 405 is configured to determine mapping parameters according to the compensation pose information and the relative position of the designated image sensor and the lidar, and to fuse the information of the pixels in the image and the information of the points in the point cloud according to the mapping parameters.
  • Optionally, the fusion module 405 is configured to perform coordinate transformation on the spatial position information of the points in the point cloud and/or the information of the pixels in the image according to the mapping parameters and, for each point in the point cloud, determine the pixel in the image corresponding to that point according to the result of the coordinate transformation and fuse the spatial position information of the point with the pixel value of the corresponding pixel in the image to obtain the fused data.
  • Optionally, the processor includes a field-programmable gate array (FPGA).
  • An embodiment of this specification further provides a computer-readable storage medium that stores a computer program, and the computer program can be used to execute the data fusion method provided in Fig. 2 above.
  • Based on the data fusion method shown in Fig. 2, an embodiment of this specification further provides a schematic structural diagram of the electronic device shown in Fig. 5.
  • As shown in Fig. 5, at the hardware level the electronic device includes a processor, an internal bus, a network interface, memory, and non-volatile storage, and of course may also include hardware required by other services.
  • The processor reads the corresponding computer program from the non-volatile storage into memory and then runs it, so as to implement the data fusion method described with reference to Fig. 2 above.
  • In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (for example, an improvement to a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement to a method flow). With the development of technology, however, improvements to many of today's method flows can be regarded as direct improvements to hardware circuit structures: designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized with a hardware entity module. For example, a programmable logic device (PLD), such as a field-programmable gate array (FPGA), is an integrated circuit whose logic function is determined by the user programming the device. Designers program a digital system onto a single PLD by themselves without asking a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, instead of making integrated circuit chips by hand, this programming is nowadays mostly done with "logic compiler" software, which is similar to the software compilers used in program development; the original code to be compiled must be written in a specific programming language called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. Those skilled in the art should also understand that a hardware circuit implementing a logical method flow can easily be obtained merely by lightly programming the method flow in one of the above hardware description languages and programming it into an integrated circuit.
  • The processor can be implemented in any suitable manner. For example, the processor can take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of processors include, but are not limited to, the following microcontrollers: ARC625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicone Labs C8051F320. Those skilled in the art also know that, in addition to implementing the processor purely with computer-readable program code, it is entirely possible to logically program the method steps so that the processor realizes the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a processor can therefore be regarded as a hardware component, and the devices included in it for realizing various functions can also be regarded as structures within the hardware component; or the devices for realizing various functions can even be regarded as both software modules for implementing the method and structures within the hardware component.
  • a typical implementation device is a computer.
  • The computer may be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or any combination of these devices.
  • This specification can be provided as a method, a system, or a computer program product. Therefore, this specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, this specification may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, and the instruction device implements the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing equipment, so that a series of operation steps are executed on the computer or other programmable equipment to produce computer-implemented processing, and the instructions executed on the computer or other programmable equipment thus provide steps for implementing the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
  • the computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • The memory may include non-persistent storage in computer-readable media, random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
  • Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage can be realized by any method or technology.
  • the information can be computer-readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassette tape, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by a computing device.
  • As defined here, computer-readable media do not include transitory media such as modulated data signals and carrier waves.
  • program modules include routines, programs, objects, components, data structures, etc. that perform specific tasks or implement specific abstract data types.
  • This specification can also be practiced in distributed computing environments where tasks are performed by remote processing devices connected through a communication network.
  • program modules can be located in local and remote computer storage media including storage devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Automation & Control Theory (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Traffic Control Systems (AREA)

Abstract

A data fusion method, apparatus, and applicable system. The applicable system includes a vehicle (1) and a lidar (2) and at least one image sensor (3) provided on the vehicle (1), where the lidar (2) collects a point cloud by rotating a laser transmitter. A processor (5) can obtain the rotation angle of the laser transmitter of the lidar (2) (S200); select, according to a predetermined correspondence between rotation angle intervals and image sensors (3), the image sensor (3) corresponding to the interval in which the obtained rotation angle falls as the designated image sensor (3) (S202); send a trigger signal to the designated image sensor (3) so that the designated image sensor (3) collects an image (S204); receive the image and the point cloud collected and returned by the lidar (2) within the rotation angle interval in which the obtained rotation angle falls (S206); and fuse the information of the pixels in the image and the information of the points in the point cloud according to the pose change information of the vehicle (1) during collection of the image and the point cloud (S208).

Description

数据融合 技术领域
本说明书涉及计算机技术领域,尤其涉及数据融合。
背景技术
通常,针对同一目标,不同传感器采集到的数据可通过融合的方式得到新的数据,得到的新数据可包含更丰富的信息。
以无人驾驶领域为例,通常,无人设备可通过图像传感器采集图像数据,通过激光雷达采集点云数据。其中,图像数据具有丰富的颜色信息,但是图像数据没有较为准确的景深信息(也即,距离信息),而点云数据具有准确的距离信息,但是,点云数据没有颜色信息,因此,无人设备可获取图像数据与点云数据,结合图像数据与点云数据的优势,得到融合了距离信息与颜色信息的数据,以便于根据融合了距离信息与颜色信息的数据,对无人设备进行控制。
发明内容
本说明书提供的一种数据融合的方法,所述方法适用于一种系统,所述系统包括车辆以及所述车辆上设有的激光雷达和至少一个图像传感器,其中,所述激光雷达通过旋转激光发射器的方式采集点云,所述方法包括:
获取所述激光雷达的旋转角度测量器确定的、所述激光发射器的旋转角;
根据预先确定的各旋转角区间与各图像传感器之间的对应关系,选择获取的所述激光发射器的旋转角所在的旋转角区间对应的图像传感器作为指定图像传感器,其中,各图像传感器对应的旋转角区间为:所述激光发射器的旋转角在所述激光雷达与该图像传感器的采集重合区域内的取值区间;
向所述指定图像传感器发送触发信号,以使所述指定图像传感器采集图像;
接收所述图像以及所述激光雷达在获取的所述激光发射器的旋转角所在的旋转角区间内采集并返回的点云;
根据在采集所述图像以及所述点云的过程中所述车辆的位姿变化信息,对所述图像中的像素点的信息以及所述点云中的点的信息进行融合。
可选地,向所述指定图像传感器发送触发信号,包括:
将上一时刻获取的所述激光发射器的旋转角所在的旋转角区间对应的图像传感器作为参考图像传感器;
若所述指定图像传感器不是所述参考图像传感器,则向所述指定图像传感器发送所述触发信号。
可选地,根据在采集所述图像以及所述点云的过程中所述车辆的位姿变化信息,对所述图像中的像素点的信息以及所述点云中的点的信息进行融合,包括:
获取在所述指定图像传感器采集所述图像时所述车辆的位姿信息作为基准位姿信息、以及在所述激光雷达在所述指定图像传感器对应的采集重合区域内完成采集所述点云时所述车辆的位姿信息作为偏移位姿信息;
根据所述基准位姿信息,确定所述偏移位姿信息的补偿位姿信息;
根据所述补偿位姿信息,对所述图像中的像素点的信息以及所述点云中的点的信息进行融合。
可选地,根据所述补偿位姿信息,对所述图像中的像素点的信息以及所述点云中的点的信息进行融合,包括:
根据所述补偿位姿信息以及所述指定图像传感器与所述激光雷达的相对位置,确定映射参数;
根据所述映射参数,对所述图像中的像素点的信息以及所述点云中的点的信息进行融合。
可选地,根据所述映射参数,对所述图像中的像素点的信息以及所述点云中的点的信息进行融合,包括:
根据所述映射参数,将所述点云中的点的空间位置信息和/或所述图像中的像素点的信息进行坐标变换;
针对点云中的各点,根据坐标变换的结果,确定该点对应的所述图像中的像素点,并将该点的空间位置信息以及该点对应的所述图像中的像素点的像素值进行融合,得到融合后的数据。
可选地,所述方法由所述车辆上配置的处理器来执行,其中,所述处理器包括现场可编程逻辑门阵列(Field Programmable Gate Array,FPGA)。
本说明书提供一种数据融合的系统,所述系统包括:处理器、激光雷达、至少一个图像传感器、惯性测量单元(Inertial measurement unit,IMU)、车辆;
所述车辆安装有所述处理器、所述激光雷达、所述至少一个图像传感器、所述IMU,其中,所述激光雷达通过旋转激光发射器的方式采集点云;
所述处理器用于,获取所述激光雷达的旋转角度测量器确定的、所述激光发射器的旋转角,根据预先确定的各旋转角区间与各图像传感器之间的对应关系,选择获取的所述激光发射器的旋转角所在的旋转角区间对应的图像传感器作为指定图像传感器,其中,各图像传感器对应的旋转角区间为:所述激光发射器的旋转角在所述激光雷达与该图像传感器的采集重合区域内的取值区间,向所述指定图像传感器发送触发信号,以使所述指定图像传感器采集图像,接收所述图像以及所述激光雷达在获取的所述激光发射器的旋转角所在的旋转角区间内采集并返回的点云,根据在采集所述图像以及所述点云的过程中所述车辆的位姿变化信息,对所述图像中的像素点的信息以及所述点云中的点的信息进行融合;
所述激光雷达用于,采集点云并向所述处理器提供所述旋转角度测量器确定的、所述激光发射器的旋转角;
所述至少一个图像传感器用于,接收所述处理器发送的所述触发信号,根据所述触发信号采集并返回图像;
所述IMU用于,采集所述车辆的位姿信息。
本说明书提供一种数据融合的装置,所述装置所在的车辆上设有激光雷达和至少一个图像传感器,其中,所述激光雷达通过旋转激光发射器的方式采集点云,所述装置包括:
获取模块,用于获取所述激光雷达的旋转角度测量器确定的、所述激光发射器的旋转角;
选择模块,用于根据预先确定的各旋转角区间与各图像传感器之间的对应关系,选择获取的所述激光发射器的旋转角所在的旋转角区间对应的图像传感器作为指定图像传感器,其中,各图像传感器对应的旋转角区间为:所述激光发射器的旋转角在所述激光雷达与该图像传感器的采集重合区域内的取值区间;
发送模块,用于向所述指定图像传感器发送触发信号,以使所述指定图像传感器采集图像;
接收模块,用于接收所述图像以及所述激光雷达在获取的所述激光发射器的旋转角所在的旋转角区间内采集并返回的点云;
融合模块,用于根据在采集所述图像以及所述点云的过程中所述车辆的位姿变化信息,对所述图像中的像素点的信息以及所述点云中的点的信息进行融合。
本说明书提供的一种计算机可读存储介质,所述存储介质存储有计算机程序,所述计算机程序被处理器执行时实现上述数据融合的方法。
本说明书提供的一种电子设备,包括存储器、处理器及存储在存储器上并可在处理器上运行的计算机程序,所述处理器执行所述程序时实现上述数据融合的方法。
本说明书实施例采用的上述至少一个技术方案能够达到以下有益效果:
本说明书中,车辆上设有激光雷达和至少一个图像传感器,激光雷达通过旋转激光发射器的方式采集点云,处理器可获取激光雷达的旋转角度测量器确定的、激光发射器的旋转角,根据预先确定的旋转角区间与各图像传感器之间的对应关系,选择获取的激光发射器的旋转角所在的区间对应的图像传感器作为指定图像传感器,其中,各图像传感器对应的旋转角区间为:激光发射器的旋转角在激光雷达与该图像传感器的采集重合区域内的取值区间,向指定图像传感器发送触发信号以使指定图像传感器采集图像,接收图像以及激光雷达在获取的激光发射器的旋转角所在的旋转角区间内采集并返回的点云,根据在采集图像以及点云的过程中车辆的位姿变化信息,对图像中的像素点的信息以及点云中的点的信息进行融合。本说明书提供的数据融合方法,当激光雷达的激光发射器的旋转角的取值属于激光雷达与指定图像传感器的采集重合区域对应的旋转角区间时,向指定图像传感器发送触发信号使指定图像传感器采集图像,然后可根据在图像采集以及点云采集的过程中车辆的位姿变化信息,融合图像中的像素点的信息以及点云中的点的信息,融合得到的数据更精确。
附图说明
图1为本说明书实施例提供的一种数据融合的系统示意图;
图2为本说明书实施例提供的一种数据融合的方法流程图;
图3为本说明书实施例提供的若干个图像传感器的采集区域与激光雷达的采集区域的采集重合区域示意图;
图4为本说明书实施例提供的一种数据融合的装置的结构示意图;
图5为本说明书实施例提供的对应于图1的电子设备示意图。
具体实施方式
为使本说明书的目的和优点更加清楚,下面将结合本说明书具体实施例及相应的附图对本说明书进行描述。显然,所描述的实施例仅是本说明书一部分实施例,而不是全部的实施例。基于本说明书中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本说明书保护的范围。
本说明书提供的数据融合的方法,主要涉及将图像传感器采集的图像中的像素点的信息与激光雷达采集的点云中的点的信息进行融合。图像中的像素点的信息可包含丰富的颜色信息,颜色信息可通过像素点的像素值确定,例如,颜色信息可通过RGB色彩模式(RGB color mode)表示,像素值的取值范围为[0,255]。点云中的点的信息可包含精确的距离信息以及反射强度信息等,距离信息可通过点云中的点的空间位置信息确定。因此,可融合图像中的像素点的信息以及点云中的点的信息,得到融合后的数据,其中,融合后的数据具有丰富的颜色信息以及精确的距离信息、反射强度信息等。
本说明书提供的数据融合的方法,可广泛应用于多种场景中,例如,本说明书提供的数据融合的方法可应用于无人驾驶场景中。具体地,无人设备可将图像传感器采集的图像中的像素点的信息以及激光雷达采集的点云中的点的信息进行融合,得到融合后的数据,以便根据融合后的数据,实现对无人设备的控制。以无人设备轨迹规划为例,无人设备可对融合后的数据进行图像分割等处理,检测无人设备周围环境中的障碍物,根据障碍物的信息规划无人设备的轨迹。
又如,本说明书提供的数据融合的方法可应用于生成高精地图的场景中。具体地,处理器可将车辆上配备的图像传感器采集的图像中的像素点的信息以及配备的激光雷达采集的点云中的点的信息进行融合,得到融合后的数据,可确定融合后的数据的语义信息,根据融合后的数据的语义信息,生成精度更高的高精地图。
再如,本说明书提供的数据融合的方法可应用于机器学习模型训练的场景中。具体地,处理器可将图像传感器采集的图像中的像素点的信息以及激光雷达采集的点云中的点的信息进行融合,得到融合后的数据,将融合后的数据作为训练样本进行训练,相对于单独使用图像数据或者点云数据,使用融合后的数据扩大了训练样本的特征维度,对机器学习模型进行训练的效果会更好。
以下结合附图,详细说明本说明书各实施例。
图1为本说明书实施例提供的一种数据融合的系统示意图。在图1中,数据融合的 系统包括车辆1、激光雷达2、若干个图像传感器3、惯性测量单元(Inertial measurement unit,IMU)4、处理器5。其中,车辆1可以为诸如无人车等无人设备,也可以为普通的车辆。当然,这里仅以车辆举例,也可以是无人机等飞行设备。激光雷达2可为机械式激光雷达。激光雷达2可通过旋转激光发射器的方式,采集点云。激光雷达2的采集区域可能与每一个图像传感器3的采集区域存在重合。由于激光雷达2可采集全景点云,因此,可将激光雷达2与若干个图像传感器3安装在车辆1的顶部,安装位置的示意图如图1所示。针对各图像传感器3,将激光雷达2的采集区域与该图像传感器3的采集区域的重合区域作为该图像传感器3对应的采集重合区域,即,激光雷达2与该图像传感器3的采集重合区域,例如,图1中虚线所包围的半封闭区域为激光雷达2与右侧图像传感器3的采集重合区域。IMU4与处理器5可安装在车辆1的顶部或者车辆1的其他位置。
激光雷达2中可包括激光发射器、旋转角度测量器、马达等(图1中未示出)。激光雷达2可通过旋转激光发射器的方式,实现采集全景点云的功能。在激光雷达2采集点云时,通过旋转角度测量器(也即,机械式激光雷达中的光电码盘)可确定激光发射器的旋转角,以向处理器5实时提供激光发射器的旋转角。
图像传感器3可接收处理器5发送的触发信号,根据触发信号采集图像,并将所采集的图像发送给处理器5。针对采集重合区域的数据采集,图像传感器3采集图像的采集时间远小于激光雷达2通过马达旋转激光发射器来采集点云所需要的采集时间,因此,可忽略图像传感器3采集图像的采集时间。
IMU4可实时采集车辆的位姿信息,IMU4在车辆1上的安装位置可根据实际情况确定。其中,位姿信息可包括车辆的位移信息、车辆的偏转角度信息等。
处理器5可包括现场可编程逻辑门阵列(Field Programmable Gate Array,FPGA)、专用集成电路(Application Specific Integrated Circuit,ASIC)、中央处理器(central processing unit,CPU)等。由于FPGA的延迟为纳秒(ns)级别,同步效果比较好,因此,处理器5可为FPGA。处理器5可对激光雷达2采集的点云中的点的信息以及图像传感器3采集的图像中的像素点的信息进行融合。
具体地,处理器5可首先获取激光雷达2的旋转角度测量器确定的、激光发射器的旋转角。然后,根据预先确定的各旋转角区间与各图像传感器3之间的对应关系,选择获取的激光发射器的旋转角所在的旋转角区间对应的图像传感器3作为指定图像传感器3,其中,针对每个图像传感器3,与该图像传感器3对应的旋转角区间为:激光发射器的旋转角在激光雷达2与该图像传感器3的采集重合区域内的取值区间。之后,处理器5可向指定图像传感器3发送触发信号,以使指定图像传感器3采集图像,接收所采集的图像以及激光雷达2在激光发射器的旋转角所在的旋转角区间内采集并返回的点云。最后,可根据在采集图像以及点云的过程中车辆1的位姿变化信息,对图像中的像素点的信息以及点云中的点的信息进行融合。
其中,处理器5对激光雷达2采集的点云中的点的信息以及图像传感器3采集的图像中的像素点的信息进行融合的方法的过程,可如图2所示。
图2为本说明书实施例提供的一种数据融合的方法流程图,可包括步骤S200至S208。
S200:处理器获取所述激光雷达的旋转角度测量器确定的、所述激光发射器的旋转角。
在本说明书中,激光雷达通过旋转激光发射器的方式,实时采集点云。因此,位于激光雷达内部的旋转角度测量器可实时确定激光发射器的旋转角。其中,当激光雷达为机械式激光雷达时,旋转角度测量器又可称为光电码盘。
处理器可为FPGA,使得以较低的延迟确保图像传感器采集图像的时间与激光发射器旋转至图像传感器对应的采集重合区域边缘的时间同步。处理器可实时获取旋转角度测量器确定的、激光发射器的旋转角。
S202:根据预先确定的各旋转角区间与各图像传感器之间的对应关系,选择获取的所述激光发射器的旋转角所在的旋转角区间对应的图像传感器作为指定图像传感器,其中,针对每个图像传感器,与该图像传感器对应的旋转角区间为:所述激光发射器的旋转角在所述激光雷达与该图像传感器的采集重合区域内的取值区间。
在本说明书中,处理器可预先确定各旋转角区间与各图像传感器之间的对应关系。具体地,首先,处理器可预先对每个图像传感器设置编号(也即,图像传感器标识)。然后,针对每个图像传感器,处理器可确定该图像传感器的采集区域与激光雷达的采集区域的重合区域作为该图像传感器对应的采集重合区域,也即,处理器可确定各图像传感器标识与各采集重合区域之间的对应关系。最后,将激光发射器的旋转角在各图像传感器对应的采集重合区域内的取值区间作为旋转角区间。处理器可确定各旋转角区间与各图像传感器之间的对应关系,也即,处理器可确定图像传感器标识、旋转角区间、采集重合区域三者之间的对应关系。
图3为本说明书实施例提供的若干个图像传感器的采集区域与激光雷达的采集区域的重合区域示意图。如图3所示,31、32为两个图像传感器,可将两个图像传感器标识分别表示为31、32。点O为激光从激光雷达中发出的位置,由直线OA与直线OB围成的半封闭区域部分Ⅰ(由虚线以及阴影指示)为图像传感器31的采集区域与激光雷达的采集区域的重合区域,由直线OB与直线OC围成的半封闭区域部分Ⅱ(由虚线以及阴影指示)为图像传感器32的采集区域与激光雷达的采集区域的重合区域。由于激光雷达以旋转激光发射器的方式发射激光,因此,旋转方向可以为顺时针方向,也可为逆时针方向,为便于描述,以下内容统一以旋转方向为逆时针方向为准。
由图3可知,直线OA、直线OB形成了图像传感器31对应的采集重合区域的两个边缘,直线OB、直线OC形成了图像传感器32对应的采集重合区域的两个边缘,若预先标定激光发射器的旋转角在图像传感器31对应的采集重合区域的取值范围为45°~90°,激光发射器的旋转角在图像传感器32对应的采集重合区域的取值范围为90°~135°,由于旋转方向为逆时针方向,因此,可认为图像传感器31对应的旋转角区间为[45°,90°),可认为图像传感器32对应的旋转角区间为[90°,135°)。
另外,除了预先确定各旋转角区间与各图像传感器之间的对应关系之外,处理器还可预先对激光雷达、各图像传感器、IMU的安装位置进行标定。并且,处理器还可预先对各图像传感器的参数进行标定。图像传感器的参数可包括内参、外参、畸变系数等。关于对激光雷达、各图像传感器、IMU的安装位置以及图像传感器的参数的标定过程, 本说明书不再赘述。
在处理器获取激光雷达的旋转角度测量器确定的、激光发射器的旋转角后,可确定获取的激光发射器的旋转角所在的旋转角区间。根据各旋转角区间与各图像传感器之间的对应关系以及获取的激光发射器的旋转角所在的旋转角区间,可在各图像传感器中,选择指定图像传感器。也即,当获取的激光发射器的旋转角的取值属于该图像传感器对应的旋转角区间时,可将该图像传感器作为指定图像传感器。沿用图3的例子,当获取的激光发射器的旋转角的取值为45°或者属于[45°,90°)中的任一值时,可选择图像传感器31作为指定图像传感器。
S204:向所述指定图像传感器发送触发信号,以使所述指定图像传感器采集图像。
在选择指定图像传感器之后,处理器可向指定图像传感器发送触发信号。指定图像传感器接收到处理器发送的触发信号之后,可采集图像,其中,指定图像传感器的图像采集区域包括指定图像传感器对应的采集重合区域。沿用上例,图像传感器31接收到触发信号后,采集直线OA与直线OB围成的虚线区域部分的图像。
具体地,当激光雷达发射的激光光线在指定图像传感器对应的采集重合区域内时,处理器可向指定图像传感器发送一个触发信号,无需重复发送触发信号。处理器可将上一时刻获取的激光发射器的旋转角所在的旋转角区间对应的图像传感器作为参考图像传感器。同时,判断指定图像传感器是否为参考图像传感器。若判断结果为是,则说明处理器已经在上一时刻向指定图像传感器发送了触发信号,无需在当前时刻向指定图像传感器重复发送触发信号。若判断结果为否,则说明处理器在上一时刻未向指定图像传感器发送触发信号,指定图像传感器尚未采集图像,可向指定图像传感器发送触发信号以使指定图像传感器接收到触发信号后采集图像。
当然,处理器还可判断激光发射器的旋转角是否为指定图像传感器对应的旋转角区间端点,若判断结果为是,则说明激光发射器发射的激光与指定图像传感器对应的采集重合区域的边缘重合,可向指定图像传感器发送触发信号,若判断结果为否,则说明处理器已经向指定图像传感器发送了触发信号,无需再次发送触发信号。
S206:接收所述图像以及所述激光雷达在所述激光发射器的旋转角所在的旋转角区间内采集并返回的点云。
处理器向指定图像传感器发送触发信号后,指定图像传感器可根据接收到的触发信号采集图像,并将采集的图像发送给处理器。处理器可接收指定图像传感器发送的图像。由于激光雷达在指定图像传感器对应的采集重合区域内,通过物理旋转激光发射器的方式采集点云的采集时间较长。因此,当激光雷达发射的激光与指定图像传感器对应的采集重合区域的一个边缘重合时,指定图像传感器采集图像,激光雷达继续采集点云,当激光雷达发射的激光与指定图像传感器对应的采集重合区域的另一个边缘重合时,处理器可接收激光雷达在激光发射器的旋转角所在的旋转角区间内采集并返回的点云。
沿用上例,当获取的激光发射器的旋转角为45°时,处理器可向图像传感器31发送触发信号,图像传感器31根据触发信号采集图像,处理器接收图像传感器31发送的图像,激光雷达继续在图像传感器31对应的采集重合区域内采集点云。当激光发射器 的旋转角为90°或者大于90°时,说明激光雷达在图像传感器31对应的采集重合区域内完成采集点云,并将在图像传感器31对应的采集重合区域内采集的点云发送给处理器。
S208:根据在采集所述图像以及所述点云的过程中所述车辆的位姿变化信息,对所述图像中的像素点的信息以及所述点云中的点的信息进行融合。
处理器接收到指定图像传感器采集并返回的图像以及激光雷达在指定图像传感器对应的采集重合区域内采集的点云后,可根据IMU采集的车辆的位姿信息,对图像中的像素点的信息以及点云中的点的信息进行融合。
具体地,首先,由于IMU可实时采集车辆的位姿信息,因此,处理器可将IMU在指定图像传感器采集图像时采集的车辆的位姿信息作为基准位姿信息,并且,将在激光雷达在指定图像传感器对应的采集重合区域内完成采集点云时IMU采集的车辆的位姿信息,作为偏移位姿信息。沿用上例,处理器可将激光发射器的旋转角为45°时IMU采集的车辆的位姿信息作为基准位姿信息,将激光发射器的旋转角为90°时IMU采集的车辆的位姿信息作为偏移位姿信息。
Then, based on the base pose information, the processor may determine compensation pose information for the offset pose information. Specifically, since the pose information may include the displacement of the vehicle, the deflection angle of the vehicle, and the like, the difference between the offset pose information and the base pose information may be taken as the compensation pose information. For example, if the vehicle travels in a straight line and the base pose information and the offset pose information collected by the IMU differ by 1 meter, the compensation pose information may include a displacement of 1 meter, a deflection angle of 0°, and so on.
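A minimal sketch of computing the compensation pose information as the difference between the offset pose and the base pose follows. The planar (x, y, yaw) parameterization is an assumption made only for illustration; the embodiments do not prescribe a particular pose representation.

import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float     # vehicle position, meters
    y: float     # vehicle position, meters
    yaw: float   # vehicle heading (deflection angle), radians

def compensation_pose(base: Pose, offset: Pose) -> Pose:
    """Pose change of the vehicle between the moment the image is collected
    (base) and the moment the point cloud sweep ends (offset)."""
    dyaw = math.atan2(math.sin(offset.yaw - base.yaw),
                      math.cos(offset.yaw - base.yaw))   # wrap to (-pi, pi]
    return Pose(offset.x - base.x, offset.y - base.y, dyaw)

# Straight-line example from the text: 1 m of displacement, 0 deg of deflection.
assert compensation_pose(Pose(0.0, 0.0, 0.0), Pose(1.0, 0.0, 0.0)) == Pose(1.0, 0.0, 0.0)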
Next, the processor may determine mapping parameters according to the compensation pose information and the relative position of the designated image sensor and the LiDAR. Specifically, the processor may use the compensation pose information to compensate the pose information of the vehicle at the time the LiDAR collected the point cloud, so as to ensure that the pose of the vehicle when the LiDAR collected each point of the point cloud is the same as the pose of the vehicle when the designated image sensor collected the image. The mapping parameters are then determined according to the compensated vehicle pose during point cloud collection and the pre-calibrated relative position of the designated image sensor and the LiDAR.
Finally, the processor may fuse the information of the pixels in the image with the information of the points in the point cloud according to the mapping parameters. Specifically, according to the mapping parameters, the processor may apply a coordinate transformation to the spatial position information of the points in the point cloud and/or the information of the pixels in the image. For each point in the point cloud, the pixel in the image corresponding to that point is determined according to the result of the coordinate transformation, and the spatial position information of that point is fused with the pixel value of the corresponding pixel in the image to obtain the fused data.
For each point in the point cloud, the processor may, according to the mapping parameters, apply a coordinate transformation to the spatial position information of that point to obtain the mapped position of that point in the image, and take the pixel located at that mapped position as the pixel in the image corresponding to that point. Likewise, for each pixel in the image, the processor may, according to the mapping parameters, apply a coordinate transformation to the information of that pixel to obtain its mapped position in the point cloud, and take the point located at that mapped position as the point in the point cloud corresponding to that pixel. The processor may also establish a world coordinate system centered on the vehicle and, according to the mapping parameters, transform the spatial position information of each point in the point cloud and the information of each pixel in the image into position information in the world coordinate system; the spatial position information of a point in the point cloud and the pixel value of a pixel in the image located at the same position in the world coordinate system can then be fused to obtain the fused data.
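Below is a minimal sketch of the point-to-pixel direction of the fusion: each LiDAR point is motion-compensated with a transform built from the compensation pose, mapped into the camera frame with the pre-calibrated relative position (extrinsics), projected with a pinhole model, and fused with the pixel value at the mapped position. The pinhole projection, the 4x4 and 3x3 matrix shapes, and the function name are assumptions for illustration; the embodiments only require some set of calibrated mapping parameters.

import numpy as np

def fuse_points_with_image(points_lidar, image, T_cam_lidar, K, T_comp):
    """points_lidar: (N, 3) point spatial positions in the LiDAR frame;
    image: (H, W, 3) pixel array; T_cam_lidar: 4x4 LiDAR-to-camera extrinsics;
    K: 3x3 camera intrinsics; T_comp: 4x4 motion compensation transform built
    from the compensation pose. Returns fused records (x, y, z, r, g, b)."""
    n = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])    # homogeneous coordinates
    pts_cam = (T_cam_lidar @ T_comp @ pts_h.T)[:3]        # compensate, then to camera frame
    fused = []
    for i in range(n):
        x, y, z = pts_cam[:, i]
        if z <= 0:                                        # point behind the camera
            continue
        u, v, _ = K @ np.array([x / z, y / z, 1.0])       # pinhole projection
        u, v = int(round(u)), int(round(v))
        if 0 <= v < image.shape[0] and 0 <= u < image.shape[1]:
            r, g, b = image[v, u]                         # pixel value at the mapped position
            fused.append((*points_lidar[i], r, g, b))
    return fused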
The data fusion method provided in this specification can be applied to the field of delivery using unmanned devices, for example, scenarios in which unmanned devices are used for express delivery, takeout delivery, and the like. Specifically, in such scenarios, an autonomous fleet composed of multiple unmanned devices may be used for delivery.
Based on the data fusion method shown in FIG. 2, an embodiment of this specification further provides a corresponding schematic structural diagram of a data fusion apparatus, as shown in FIG. 4.
FIG. 4 is a schematic structural diagram of a data fusion apparatus provided by an embodiment of this specification. The vehicle on which the apparatus is located is provided with a LiDAR and at least one image sensor, where the LiDAR collects a point cloud by rotating a laser transmitter. The apparatus includes:
an obtaining module 401, configured to obtain the rotation angle of the laser transmitter determined by the rotation angle measurer of the LiDAR;
a selection module 402, configured to select, according to the predetermined correspondence between rotation angle intervals and image sensors, the image sensor corresponding to the rotation angle interval in which the obtained rotation angle of the laser transmitter falls as the designated image sensor, where the rotation angle interval corresponding to each image sensor is the value range of the rotation angle of the laser transmitter within the acquisition overlap region of the LiDAR and that image sensor;
a sending module 403, configured to send a trigger signal to the designated image sensor so that the designated image sensor collects an image;
a receiving module 404, configured to receive the image and the point cloud collected and returned by the LiDAR within the rotation angle interval in which the obtained rotation angle of the laser transmitter falls; and
a fusion module 405, configured to fuse the information of the pixels in the image with the information of the points in the point cloud according to the pose change information of the vehicle during the collection of the image and the point cloud.
Optionally, the sending module 403 is configured to take the image sensor corresponding to the rotation angle interval in which the rotation angle of the laser transmitter obtained at the previous moment fell as a reference image sensor, and, if the designated image sensor is not the reference image sensor, send the trigger signal to the designated image sensor.
Optionally, the fusion module 405 is configured to obtain the pose information of the vehicle at the time the designated image sensor collects the image as base pose information and the pose information of the vehicle at the time the LiDAR finishes collecting the point cloud within the acquisition overlap region corresponding to the designated image sensor as offset pose information; determine the compensation pose information of the offset pose information according to the base pose information; and fuse the information of the pixels in the image with the information of the points in the point cloud according to the compensation pose information.
Optionally, the fusion module 405 is configured to determine mapping parameters according to the compensation pose information and the relative position of the designated image sensor and the LiDAR, and fuse the information of the pixels in the image with the information of the points in the point cloud according to the mapping parameters.
Optionally, the fusion module 405 is configured to apply, according to the mapping parameters, a coordinate transformation to the spatial position information of the points in the point cloud and/or the information of the pixels in the image, and, for each point in the point cloud, determine the pixel in the image corresponding to that point according to the result of the coordinate transformation and fuse the spatial position information of that point with the pixel value of the corresponding pixel in the image to obtain the fused data.
Optionally, the processor includes a Field Programmable Gate Array (FPGA).
An embodiment of this specification further provides a computer-readable storage medium storing a computer program, and the computer program can be used to execute the data fusion method provided in FIG. 2 above.
Based on the data fusion method shown in FIG. 2, an embodiment of this specification further provides a schematic structural diagram of the electronic device shown in FIG. 5. As shown in FIG. 5, at the hardware level, the electronic device includes a processor, an internal bus, a network interface, a memory, and a non-volatile storage, and may of course also include hardware required by other services. The processor reads the corresponding computer program from the non-volatile storage into the memory and then runs it to implement the data fusion method described with reference to FIG. 2.
Of course, in addition to a software implementation, this specification does not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution entity of the above processing flow is not limited to logic units and may also be hardware or logic devices.
In the 1990s, an improvement to a technology could be clearly distinguished as a hardware improvement (for example, an improvement to circuit structures such as diodes, transistors, and switches) or a software improvement (an improvement to a method flow). With the development of technology, however, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be implemented with a hardware entity module. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. Designers program to "integrate" a digital system onto a single PLD, without needing a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, instead of manually fabricating integrated circuit chips, such programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compilers used in program development, while the source code to be compiled must be written in a specific programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most widely used. Those skilled in the art will also appreciate that a hardware circuit implementing a logical method flow can easily be obtained simply by applying slight logic programming to the method flow in one of the above hardware description languages and programming it into an integrated circuit.
The processor may be implemented in any suitable manner. For example, the processor may take the form of a microprocessor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of processors include, but are not limited to, the following microcontrollers: ARC625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicone Labs C8051F320. Those skilled in the art will also appreciate that, in addition to implementing a processor purely as computer-readable program code, the method steps can be logically programmed so that the processor achieves the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a processor can therefore be regarded as a hardware component, and the means included within it for implementing various functions can also be regarded as structures within the hardware component, or even as both software modules for implementing a method and structures within the hardware component.
The systems, apparatuses, modules, or units described in the above embodiments may be implemented by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer. Specifically, the computer may be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an e-mail device, a game console, a tablet computer, a wearable device, or any combination of these devices.
For convenience of description, the above apparatus is described by dividing its functions into various units. Of course, when implementing this specification, the functions of the units may be implemented in one or more pieces of software and/or hardware.
Those skilled in the art will appreciate that the embodiments of this specification may be provided as a method, a system, or a computer program product. Therefore, this specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, this specification may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) containing computer-usable program code.
This specification is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of this specification. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce a means for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction means that implements the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps is performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include non-persistent memory in computer-readable media, random access memory (RAM), and/or non-volatile memory such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage can be implemented by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, such that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or further includes elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.
This specification may be described in the general context of computer-executable instructions executed by a computer, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types. This specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including storage devices.
The embodiments in this specification are described in a progressive manner; for identical or similar parts among the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. In particular, since the system embodiment is substantially similar to the method embodiment, its description is relatively brief, and reference may be made to the relevant parts of the description of the method embodiment.
The above are merely embodiments of this specification and are not intended to limit this specification. Various modifications and variations of this specification will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of this specification shall fall within the scope of this specification.

Claims (10)

  1. A data fusion method, wherein the method is applicable to a system, the system comprising a vehicle and a LiDAR and at least one image sensor provided on the vehicle, wherein the LiDAR collects a point cloud by rotating a laser transmitter,
    the method comprising:
    obtaining a rotation angle of the laser transmitter determined by a rotation angle measurer of the LiDAR;
    selecting, according to a predetermined correspondence between rotation angle intervals and image sensors, the image sensor corresponding to the rotation angle interval in which the obtained rotation angle of the laser transmitter falls as a designated image sensor, wherein the rotation angle interval corresponding to each image sensor is a value range of the rotation angle of the laser transmitter within an acquisition overlap region of the LiDAR and that image sensor;
    sending a trigger signal to the designated image sensor so that the designated image sensor collects an image;
    receiving the image and a point cloud collected and returned by the LiDAR within the rotation angle interval in which the obtained rotation angle of the laser transmitter falls; and
    fusing information of pixels in the image with information of points in the point cloud according to pose change information of the vehicle during collection of the image and the point cloud.
  2. The method according to claim 1, wherein sending the trigger signal to the designated image sensor comprises:
    taking the image sensor corresponding to the rotation angle interval in which the rotation angle of the laser transmitter obtained at a previous moment fell as a reference image sensor; and
    sending the trigger signal to the designated image sensor if the designated image sensor is not the reference image sensor.
  3. The method according to claim 1, wherein fusing the information of the pixels in the image with the information of the points in the point cloud according to the pose change information of the vehicle during collection of the image and the point cloud comprises:
    obtaining pose information of the vehicle at the time the designated image sensor collects the image as base pose information, and pose information of the vehicle at the time the LiDAR finishes collecting the point cloud within the acquisition overlap region corresponding to the designated image sensor as offset pose information;
    determining compensation pose information of the offset pose information according to the base pose information; and
    fusing the information of the pixels in the image with the information of the points in the point cloud according to the compensation pose information.
  4. The method according to claim 3, wherein fusing the information of the pixels in the image with the information of the points in the point cloud according to the compensation pose information comprises:
    determining mapping parameters according to the compensation pose information and a relative position of the designated image sensor and the LiDAR; and
    fusing the information of the pixels in the image with the information of the points in the point cloud according to the mapping parameters.
  5. The method according to claim 4, wherein fusing the information of the pixels in the image with the information of the points in the point cloud according to the mapping parameters comprises:
    applying, according to the mapping parameters, a coordinate transformation to spatial position information of the points in the point cloud and/or the information of the pixels in the image; and
    for each point in the point cloud, determining the pixel in the image corresponding to that point according to a result of the coordinate transformation, and fusing the spatial position information of that point with a pixel value of the corresponding pixel in the image to obtain fused data.
  6. The method according to any one of claims 1 to 5, wherein the method is executed by a processor provided on the vehicle,
    wherein the processor comprises a Field Programmable Gate Array (FPGA).
  7. A data fusion system, comprising: a processor, a LiDAR, at least one image sensor, an inertial measurement unit (IMU), and a vehicle;
    wherein the processor, the LiDAR, the at least one image sensor, and the IMU are mounted on the vehicle, and the LiDAR collects a point cloud by rotating a laser transmitter;
    the processor is configured to: obtain a rotation angle of the laser transmitter determined by a rotation angle measurer of the LiDAR; select, according to a predetermined correspondence between rotation angle intervals and image sensors, the image sensor corresponding to the rotation angle interval in which the obtained rotation angle of the laser transmitter falls as a designated image sensor, wherein the rotation angle interval corresponding to each image sensor is a value range of the rotation angle of the laser transmitter within an acquisition overlap region of the LiDAR and that image sensor; send a trigger signal to the designated image sensor so that the designated image sensor collects an image; receive the image and a point cloud collected and returned by the LiDAR within the rotation angle interval in which the obtained rotation angle of the laser transmitter falls; and fuse information of pixels in the image with information of points in the point cloud according to pose change information of the vehicle during collection of the image and the point cloud;
    the LiDAR is configured to collect the point cloud and provide the processor with the rotation angle of the laser transmitter determined by the rotation angle measurer;
    the at least one image sensor is configured to receive the trigger signal sent by the processor and collect and return an image according to the trigger signal; and
    the IMU is configured to collect the pose information of the vehicle.
  8. A data fusion apparatus, wherein a vehicle on which the apparatus is located is provided with a LiDAR and at least one image sensor, the LiDAR collects a point cloud by rotating a laser transmitter, and the apparatus comprises:
    an obtaining module, configured to obtain a rotation angle of the laser transmitter determined by a rotation angle measurer of the LiDAR;
    a selection module, configured to select, according to a predetermined correspondence between rotation angle intervals and image sensors, the image sensor corresponding to the rotation angle interval in which the obtained rotation angle of the laser transmitter falls as a designated image sensor, wherein the rotation angle interval corresponding to each image sensor is a value range of the rotation angle of the laser transmitter within an acquisition overlap region of the LiDAR and that image sensor;
    a sending module, configured to send a trigger signal to the designated image sensor so that the designated image sensor collects an image;
    a receiving module, configured to receive the image and a point cloud collected and returned by the LiDAR within the rotation angle interval in which the obtained rotation angle of the laser transmitter falls; and
    a fusion module, configured to fuse information of pixels in the image with information of points in the point cloud according to pose change information of the vehicle during collection of the image and the point cloud.
  9. A computer-readable storage medium, wherein the storage medium stores a computer program, and the computer program, when executed by a processor, implements the method according to any one of claims 1 to 6.
  10. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the method according to any one of claims 1 to 6.
PCT/CN2021/088652 2020-04-21 2021-04-21 数据融合 WO2021213432A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US17/908,921 US20230093680A1 (en) 2020-04-21 2021-04-21 Data fusion
EP21792770.6A EP4095562A4 (en) 2020-04-21 2021-04-21 Data fusion

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010316189.0 2020-04-21
CN202010316189.0A CN111522026B (zh) 2020-04-21 2020-04-21 一种数据融合的方法及装置

Publications (1)

Publication Number Publication Date
WO2021213432A1 true WO2021213432A1 (zh) 2021-10-28

Family

ID=71903944

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/088652 WO2021213432A1 (zh) 2020-04-21 2021-04-21 数据融合

Country Status (4)

Country Link
US (1) US20230093680A1 (zh)
EP (1) EP4095562A4 (zh)
CN (1) CN111522026B (zh)
WO (1) WO2021213432A1 (zh)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111522026B (zh) * 2020-04-21 2022-12-09 北京三快在线科技有限公司 一种数据融合的方法及装置
CN113763478B (zh) * 2020-09-09 2024-04-12 北京京东尚科信息技术有限公司 无人车相机标定方法、装置、设备、存储介质及系统
CN112861653B (zh) * 2021-01-20 2024-01-23 上海西井科技股份有限公司 融合图像和点云信息的检测方法、系统、设备及存储介质
CN112904863B (zh) * 2021-01-27 2023-10-20 北京农业智能装备技术研究中心 基于激光与图像信息融合的行走控制方法及装置
CN115718307A (zh) * 2021-02-02 2023-02-28 华为技术有限公司 一种探测装置、控制方法、融合探测系统及终端
CN115457353A (zh) * 2021-05-21 2022-12-09 魔门塔(苏州)科技有限公司 一种多传感器数据的融合方法及装置
CN113643321A (zh) * 2021-07-30 2021-11-12 北京三快在线科技有限公司 一种无人驾驶设备的传感器数据采集方法及装置
WO2024011408A1 (zh) * 2022-07-12 2024-01-18 阿波罗智能技术(北京)有限公司 同步采集数据的方法和同步确定方法、装置、自动驾驶车

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105372717B (zh) * 2015-10-30 2017-12-26 中国民用航空总局第二研究所 一种基于雷达和图像信号的fod融合探测方法及装置
CN205195875U (zh) * 2015-12-08 2016-04-27 湖南纳雷科技有限公司 一种雷达与视频融合的大范围监控系统
CN107071341A (zh) * 2016-12-09 2017-08-18 河南中光学集团有限公司 小型雷达和光电转台的联动控制系统及其控制方法
CN107703506B (zh) * 2017-08-31 2020-09-25 安徽四创电子股份有限公司 一种一体化摄像雷达及其监测预警方法
US11644834B2 (en) * 2017-11-10 2023-05-09 Nvidia Corporation Systems and methods for safe and reliable autonomous vehicles
CN107972585A (zh) * 2017-11-30 2018-05-01 惠州市德赛西威汽车电子股份有限公司 结合雷达信息的自适应3d环视场景重建系统与方法
CN109816702A (zh) * 2019-01-18 2019-05-28 苏州矽典微智能科技有限公司 一种多目标跟踪装置和方法
CN110275179A (zh) * 2019-04-09 2019-09-24 安徽理工大学 一种基于激光雷达以及视觉融合的构建地图方法

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100157280A1 (en) * 2008-12-19 2010-06-24 Ambercore Software Inc. Method and system for aligning a line scan camera with a lidar scanner for real time data fusion in three dimensions
US9215382B1 (en) * 2013-07-25 2015-12-15 The United States Of America As Represented By The Secretary Of The Navy Apparatus and method for data fusion and visualization of video and LADAR data
CN106043169A (zh) * 2016-07-01 2016-10-26 百度在线网络技术(北京)有限公司 环境感知设备和应用于环境感知设备的信息获取方法
CN107609522A (zh) * 2017-09-19 2018-01-19 东华大学 一种基于激光雷达和机器视觉的信息融合车辆检测系统
CN108957478A (zh) * 2018-07-23 2018-12-07 上海禾赛光电科技有限公司 多传感器同步采样系统及其控制方法、车辆
CN109859154A (zh) * 2019-01-31 2019-06-07 深兰科技(上海)有限公司 一种数据融合方法、装置、设备及介质
CN111522026A (zh) * 2020-04-21 2020-08-11 北京三快在线科技有限公司 一种数据融合的方法及装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4095562A4 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114475650A (zh) * 2021-12-01 2022-05-13 中铁十九局集团矿业投资有限公司北京信息技术分公司 一种车辆行驶行为确定方法、装置、设备及介质
CN117310756A (zh) * 2023-11-30 2023-12-29 宁波路特斯机器人有限公司 多传感器融合定位方法及系统、机器可读存储介质
CN117310756B (zh) * 2023-11-30 2024-03-29 宁波路特斯机器人有限公司 多传感器融合定位方法及系统、机器可读存储介质

Also Published As

Publication number Publication date
EP4095562A1 (en) 2022-11-30
CN111522026A (zh) 2020-08-11
CN111522026B (zh) 2022-12-09
US20230093680A1 (en) 2023-03-23
EP4095562A4 (en) 2023-06-28

Similar Documents

Publication Publication Date Title
WO2021213432A1 (zh) 数据融合
US11042762B2 (en) Sensor calibration method and device, computer device, medium, and vehicle
US20220198806A1 (en) Target detection method based on fusion of prior positioning of millimeter-wave radar and visual feature
CN109345596B (zh) 多传感器标定方法、装置、计算机设备、介质和车辆
US11074706B2 (en) Accommodating depth noise in visual slam using map-point consensus
CN109613543B (zh) 激光点云数据的修正方法、装置、存储介质和电子设备
WO2022127532A1 (zh) 一种激光雷达与imu的外参标定方法、装置及设备
CN110458112B (zh) 车辆检测方法、装置、计算机设备和可读存储介质
US11227395B2 (en) Method and apparatus for determining motion vector field, device, storage medium and vehicle
WO2022183685A1 (zh) 目标检测方法、电子介质和计算机存储介质
US20220319146A1 (en) Object detection method, object detection device, terminal device, and medium
CN111080784B (zh) 一种基于地面图像纹理的地面三维重建方法和装置
KR101890612B1 (ko) 적응적 관심영역 및 탐색창을 이용한 객체 검출 방법 및 장치
US20220301277A1 (en) Target detection method, terminal device, and medium
CN109978954A (zh) 基于箱体的雷达和相机联合标定的方法和装置
CN112312113A (zh) 用于生成三维模型的方法、装置和系统
CN114705121A (zh) 车辆位姿测量方法、装置及电子设备、存储介质
CN112907745B (zh) 一种数字正射影像图生成方法及装置
CN117250956A (zh) 一种多观测源融合的移动机器人避障方法和避障装置
WO2023143132A1 (zh) 传感器数据的标定
CN117557999A (zh) 一种图像联合标注方法、计算机设备及介质
CN112097742B (zh) 一种位姿确定方法及装置
WO2022256976A1 (zh) 稠密点云真值数据的构建方法、系统和电子设备
CN114299147A (zh) 一种定位方法、装置、存储介质及电子设备
CN112598736A (zh) 一种基于地图构建的视觉定位方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21792770

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021792770

Country of ref document: EP

Effective date: 20220822

NENP Non-entry into the national phase

Ref country code: DE