CN111209956A - Sensor data fusion method, and vehicle environment map generation method and system

Sensor data fusion method, and vehicle environment map generation method and system

Info

Publication number
CN111209956A
Authority
CN
China
Prior art keywords
data
vehicle
sensor
environment map
fusion
Prior art date
Legal status
Pending
Application number
CN202010004760.5A
Other languages
Chinese (zh)
Inventor
姜浙
陈新
Current Assignee
Beijing Automotive Group Co Ltd
Beijing Automotive Research Institute Co Ltd
Original Assignee
Beijing Automotive Group Co Ltd
Beijing Automotive Research Institute Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Automotive Group Co Ltd, Beijing Automotive Research Institute Co Ltd filed Critical Beijing Automotive Group Co Ltd
Priority to CN202010004760.5A
Publication of CN111209956A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/25 - Fusion techniques
    • G06F 18/251 - Fusion techniques of input or preprocessed data
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01C - MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00 - Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00
    • G01C 21/26 - Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00 specially adapted for navigation in a road network
    • G01C 21/28 - Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C 21/30 - Map- or contour-matching
    • G01C 21/32 - Structuring or formatting of map data
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F 16/29 - Geographical information databases

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application provides a sensor data fusion method, a vehicle environment map generation method, and a vehicle environment map generation system. The sensor data fusion method comprises the following steps: acquiring first data and second data collected by a first sensor and a second sensor, respectively, in the same space over the same time period; determining time unified fusion data in which the first data and the second data are synchronized at a plurality of time points within the time period; determining spatial unified fusion data of the first data and the second data in the same coordinate system; and determining space-time fusion data that fuses the first data and the second data according to the time unified fusion data and the spatial unified fusion data. By acquiring data collected by different sensors in the same space over the same time period, and fusing the data of the different sensors in both the time dimension (multiple time points within the period) and the space dimension (conversion to a common coordinate system), the accuracy of the fused data can be ensured as far as possible and accurate space-time fusion data can be provided.

Description

Sensor data fusion method, and vehicle environment map generation method and system
Technical Field
The application relates to the field of automobiles, in particular to a sensor data fusion method, a vehicle environment map generation method and a vehicle environment map generation system.
Background
Platooning (queue driving) is a common scenario in autonomous driving. A platoon consists of a group of vehicles: the lead vehicle serves as the pilot and is driven by a human driver at the front of the fleet, while the following vehicles can operate in an automated driving mode.
Current approaches to sensor data fusion struggle to produce accurate fused data, which makes the vehicle's autonomous-driving perception data inaccurate, introduces errors into the data used for automated driving decisions, and can ultimately cause autonomous driving to fail. How to fuse sensor data effectively remains a difficult problem; the prior art often only proposes the idea of sensor fusion without providing a concrete implementation of data fusion for the sensors.
Disclosure of Invention
An object of the embodiments of the present application is to provide a sensor data fusion method, a vehicle environment map generation method, and a vehicle environment map generation system, so that sensor data can be well fused, and accurate fusion data can be obtained.
In order to achieve the above object, embodiments of the present application are implemented as follows:
in a first aspect, an embodiment of the present application provides a sensor data fusion method, including: acquiring first data and second data which are acquired by a first sensor and a second sensor in the same space in the same time period; determining time unified fusion data of the first data and the second data which are synchronous at a plurality of time points in the time period; determining spatial unified fusion data of the first data and the second data under the same coordinate system; and determining space-time fusion data fusing the first data and the second data according to the time unified fusion data and the space unified fusion data.
The data acquired by different sensors in the same space in the same time period are acquired, and the data of different sensors are fused in the time dimension (a plurality of time points in the time period) and the space dimension (converted into the same coordinate system), so that the accuracy of the fused data can be ensured as much as possible, the data of the sensors can be fused well, and accurate space-time fused data can be provided.
With reference to the first aspect, in a first possible implementation manner of the first aspect, determining time unified fusion data in which the first data and the second data are synchronized at multiple time points in the time period includes: determining a first acquisition frame rate at which the first sensor acquires the first data and a second acquisition frame rate at which the second sensor acquires the second data; determining a plurality of time points from the time period according to the first acquisition frame rate and the second acquisition frame rate, wherein first data and second data are acquired at each time point in the plurality of time points; and fusing the first data and the second data at the same time point according to the plurality of determined time points to determine the time unified fused data in the time period.
A plurality of time points within the time period are determined from the acquisition frame rates at which the different sensors collect data, and these time points serve as the benchmarks for fusing the sensor data in the time dimension, so that the accuracy of fusing the data collected by different sensors in the time dimension can be ensured as far as possible. Because the time points are determined precisely from the sensors' acquisition frame rates, the accuracy of the fusion of the data collected by different sensors in the time dimension is guaranteed.
With reference to the first possible implementation manner of the first aspect, in a second possible implementation manner of the first aspect, determining a plurality of time points from the time period according to the first acquisition frame rate and the second acquisition frame rate includes: determining a reference time point from the time period, wherein the first sensor acquires first data and the second sensor acquires second data at the reference time point; determining a greatest common divisor between the first acquisition frame rate and the second acquisition frame rate, or determining a least common multiple between a first time length required for acquiring a frame of first data based on the first acquisition frame rate and a second time length required for acquiring a frame of second data based on the second acquisition frame rate; and determining a plurality of time points in the time period based on the greatest common divisor or the least common multiple, in combination with the reference time point.
A plurality of time points within the time period are determined by calculating the greatest common divisor of the acquisition frame rates of the different sensors, or the least common multiple of their per-frame acquisition durations; and because a reference time point is also determined, the plurality of time points in the time period can be determined accurately, which guarantees the accuracy of the sensor data in the time dimension when the data are fused.
With reference to the first aspect, in a third possible implementation manner of the first aspect, when the first data and the second data are both image data, determining spatially unified fused data of the first data and the second data in the same coordinate system includes: determining first coordinate data in a world coordinate system based on the first data; determining second coordinate data in a world coordinate system based on the second data; and fusing the first coordinate data and the second coordinate data in the world coordinate system to determine spatial unified fusion data in the world coordinate system.
The data collected by different sensors are converted under a world coordinate system, so that the data collected by different sensors are based on the data of one coordinate system, errors caused by space inconsistency can be avoided as far as possible, and the accuracy of fused data is further ensured.
With reference to the third possible implementation manner of the first aspect, in a fourth possible implementation manner of the first aspect, when the second sensor is a vision sensor, the second data is vision sensor data, and second coordinate data in a world coordinate system is determined based on the second data, including: determining a conversion relation matrix between an image coordinate system and a world coordinate system; and converting the second data in the image coordinate system into second coordinate data in the world coordinate system according to the conversion relation matrix.
When the sensor is a vision sensor, the conversion matrix between the image coordinate system and the world coordinate system of the data acquired by the vision sensor is calculated, and the conversion of the data from the image coordinate system to the world coordinate system is realized according to the conversion matrix, so that the method is simple, fast and accurate.
In a second aspect, an embodiment of the present application provides an environment map generation method for a vehicle, where a vision sensor and a millimeter wave radar are installed on the vehicle, the method including: acquiring visual sensor data and millimeter wave radar data; determining the space-time fusion data according to the vision sensor data and the millimeter wave radar data by the sensor data fusion method of any one of the possible implementation manners of the first aspect or the first aspect; and generating a first environment map with a lane line reflecting the driving environment of the vehicle according to the space-time fusion data.
Sensor data fusion of the millimeter wave radar data and the vision sensor data determines the space-time fusion data, from which a first environment map with a lane line reflecting the driving environment of the vehicle can further be generated. Therefore, accurate sensing data can be given to the vehicle, which facilitates automated driving of the vehicle.
With reference to the second aspect, in a first possible implementation manner of the second aspect, the vehicle is further mounted with a laser radar and an inertial sensor, and the method further includes: acquiring laser radar data and inertial sensor data; determining the motion track of the vehicle according to the inertial sensor data and the laser radar data; and generating a second environment map which reflects the running environment of the vehicle based on vision according to the motion trail and the vision sensor data.
The movement track of the vehicle is calculated based on the inertial sensor data and the laser radar data, and then a second environment map which reflects the running environment of the vehicle based on vision is generated by combining the vision sensor data, so that the vehicle can have accurate vision perception data in the automatic driving process, and automatic driving of the vehicle is facilitated.
With reference to the first possible implementation manner of the second aspect, in a second possible implementation manner of the second aspect, after the first environment map and the second environment map reflecting a vehicle traveling environment are generated, the method further includes: and fusing the first environment map and the second environment map to determine a vehicle driving environment map with a lane line based on visual reflection of the vehicle driving environment.
Through further fusion of the first environment map and the second environment map, the vehicle driving environment map with the lane lines based on the visual reflection of the vehicle driving environment can be more accurate and contain richer information, so that the data can be corrected more accurately, and more accurate perception data can be provided for the vehicle.
With reference to the second possible implementation manner of the second aspect, in a third possible implementation manner of the second aspect, the vehicle is further provided with a positioning module, and the method further includes: acquiring positioning data; determining positioning information of the vehicle according to the positioning data, the vision sensor data and the laser radar data; and determining a positioning result containing the lane position relation according to the vehicle running environment map and the positioning information.
The positioning information of the vehicle is determined according to the positioning data, the visual sensor data and the laser radar data, and the accurate positioning of the lane level of the automobile can be realized by combining the vehicle running environment map.
In a third aspect, an embodiment of the present application provides an environment map generation system for a vehicle on which a vision sensor, a millimeter wave radar, a laser radar, an inertial sensor, and a positioning module are installed. The environment map generation system includes: a positioning information module, configured to determine the positioning information of the vehicle according to the acquired positioning data, vision sensor data, and laser radar data; and an environment map generation module, configured to determine the space-time fusion data from the vision sensor data and the acquired millimeter wave radar data by the sensor data fusion method according to the first aspect or any of its possible implementations, and to generate, based on the space-time fusion data, a first environment map with a lane line reflecting the vehicle driving environment. The environment map generation module is further configured to determine the motion track of the vehicle according to the acquired laser radar data and inertial sensor data, to generate a second environment map which reflects the vehicle driving environment based on vision according to the motion track and the vision sensor data, and to fuse the first environment map and the second environment map so as to determine a vehicle driving environment map with a lane line which reflects the vehicle driving environment based on vision. The positioning information module is further configured to determine a positioning result containing a lane position relation according to the vehicle driving environment map and the positioning information.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and that those skilled in the art can also obtain other related drawings based on the drawings without inventive efforts.
Fig. 1 is a schematic diagram of a sensor distribution and a sensing range of a vehicle according to an embodiment of the present disclosure.
Fig. 2 is a flowchart of a sensor data fusion method according to an embodiment of the present disclosure.
Fig. 3 is a schematic diagram of a relative position relationship between an image coordinate system and a pixel coordinate system according to an embodiment of the present disclosure.
Fig. 4 is a schematic diagram of a relationship between a camera coordinate system and an image coordinate system according to an embodiment of the present application.
Fig. 5 is a flowchart of a method for generating an environment map of a vehicle according to an embodiment of the present application.
Fig. 6 is an operation flowchart of a method for generating an environment map of a vehicle according to an embodiment of the present application.
Fig. 7 is a block diagram of a vehicle environment map generation system according to an embodiment of the present application.
Reference numerals: 10 - vehicle; 11 - millimeter wave radar; 12 - vision sensor; 13 - four-line lidar; 14 - sixteen-line lidar; 20 - environment map generation system of the vehicle; 21 - positioning information module; 22 - environment map generation module.
Detailed Description
Platooning (queue driving) is a common scenario in autonomous driving. A platoon consists of a group of vehicles: the lead vehicle serves as the pilot and is driven by a human driver at the front of the fleet, while the following vehicles can operate in an automated driving mode. Current approaches to sensor data fusion struggle to produce accurate fused data, which makes the vehicle's autonomous-driving perception data inaccurate, introduces errors into the data used for automated driving decisions, and can ultimately cause autonomous driving (platooning) to fail. How to fuse sensor data effectively remains a difficult problem; the prior art often only proposes the idea of sensor fusion without providing a concrete implementation of data fusion for the sensors.
Based on the above, the inventor of the present application provides a sensor data fusion method, a vehicle environment map generation method and a vehicle environment map generation system, so as to fuse data collected by different sensors, determine more accurate fusion data and provide the fusion data to a vehicle, so that the vehicle can better realize automatic driving.
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
In order to realize automated driving of a vehicle, various sensors are generally required to collect corresponding data so as to make the vehicle's perception of its environment as complete as possible. Referring to fig. 1, fig. 1 is a schematic diagram illustrating the sensor distribution and sensing range of a vehicle 10 according to an embodiment of the present disclosure. In the present embodiment, the sensors may include a millimeter wave radar 11, a vision sensor 12, and a laser radar.
Illustratively, the millimeter wave radar 11 may be installed right in front of the head of the vehicle for collecting millimeter wave radar data. The millimeter-wave radar 11 may be a dual-mode millimeter-wave radar, such as a millimeter-wave radar with model number ARS408-21, but should not be considered as limiting the present application. The range of the millimeter wave radar 11 for collecting data may refer to the coverage of the dual-mode millimeter wave radar shown in fig. 1. It should be noted that the range of the millimeter wave radar 11 for collecting data is merely exemplary, and is not limited thereto.
For example, the vision sensor 12 may be installed at a central position of an upper edge of a front windshield in the vehicle interior, and may refer to an installation position of a drive recorder. The vision sensor 12 is used to collect vision sensor data, the range of which can be seen in the coverage shown in fig. 1. It should be noted that the range of the data collected by the vision sensor 12 is only exemplary and not limited thereto. And the visual sensor 12 may be a camera, a webcam, a scanning device, etc., in this embodiment, the camera is taken as an example, and a camera with a model number of L3824 may be used, but should not be considered as a limitation to the present application.
Illustratively, the lidar may be sixteen-line lidar 14, such as model VLP-16, and two sixteen-line lidar 14 may be used in this embodiment, and may be mounted near the corners of the vehicle head (e.g., the positions of the vehicle lights). Sixteen-line lidar 14 is used to collect lidar data and the range over which the data is collected may be referred to as the coverage area of sixteen-line lidar 14 shown in fig. 1. Alternatively, a four-line lidar 13 may be used, such as a type LUX-4L four-line lidar 13, mounted directly in front of the vehicle head, and the range of data acquisition may be as shown in fig. 1 for the coverage of the four-line lidar 13. It should be noted that the type selection and the installation position of the lidar should not be considered as a limitation of the present application, and the lidar may also be installed at other positions (for example, a position of the vehicle head near two sides of the vehicle body) to collect lidar data.
In addition, a positioning module may be mounted on the vehicle 10 for implementing satellite positioning of the vehicle 10. The positioning module may be a GPS (Global Positioning System) positioning module, a GNSS (Global Navigation Satellite System) positioning module, or the like; the present embodiment takes the GNSS positioning module as an example, but this should not be construed as limiting the present application.
In the present embodiment, before describing how the fused sensor data is used to generate an environment map of the vehicle that contributes to automated driving, the sensor data fusion method itself is described for ease of understanding.
Referring to fig. 2, fig. 2 is a flowchart of a sensor data fusion method according to an embodiment of the present disclosure. The sensor data fusion method can be operated by a vehicle-mounted computer on the vehicle (when the vehicle-mounted computer operates the sensor data fusion method, the dependence on the internet is low, and even when the vehicle-mounted computer is not connected with the internet, the sensor data fusion method can also be operated, so that the sensor data fusion method can be suitable for most vehicle running environments); the server may also operate and send the operation result to the vehicle-mounted computer (when the server operates the sensor data fusion method, the server operates at a high speed, so that the sensor data fusion method can be rapidly operated, and the efficiency of sensor data fusion is improved). In the present embodiment, the sensor data fusion method includes step S11, step S12, step S13, and step S14.
In this embodiment, during driving of the vehicle, the sensors mounted on the vehicle may collect environmental data. Based on this, the in-vehicle computer may perform step S11.
Step S11: and acquiring first data and second data which are acquired by the first sensor and the second sensor in the same space in the same time period.
In this embodiment, the first sensor and the second sensor may both be sensors on the vehicle. They are not the same sensor, but they may be of the same type; for example, both may be lidar, and the data collected by a four-line lidar and by a sixteen-line lidar can be fused with the sensor data fusion method. Sensors installed at different positions may likewise have their data fused: the sensor data fusion method fuses the perception data of sensors at different positions over the region where their coverage overlaps, which improves the accuracy of the fused data. Of course, data collected by sensors of different types and at different installation positions may also be fused to build richer and more accurate perception data for the vehicle, for example fusion of vision sensor data with lidar data, or fusion of vision sensor data with millimeter wave radar data, and the like, which is not limited here.
In this embodiment, the vehicle-mounted computer may acquire the first data and the second data collected by the first sensor and the second sensor, respectively, in the same space over the same time period. Note that the first data and the second data do not refer to specific individual data items, but to the data acquired by the first sensor and the second sensor, respectively, when observing the same space over the same time period.
For example, the first data collected by the first sensor and the second data collected by the second sensor over the same time period may be: the data respectively collected by the first sensor and the second sensor during the period from 11:11:11 to 11:11:16 on November 11, 2019.
It should be noted that the same space may refer to a range centered on the vehicle, for example, a range of 20 meters around the vehicle. Of course, in consideration of the special characteristics of the vehicle, the same space in the present embodiment may also refer to a space in a certain range (for example, 15 meters, 30 meters, etc., without limitation) in front of the vehicle and on both sides of the vehicle. For the first sensor and the second sensor, monitored coverage ranges can have an intersection, and after sensor data in the intersection of the coverage ranges are fused, the vehicle can sense the environment in the intersection range more accurately, but the definition of the same space is not limited to the intersection of the coverage ranges of the two sensors.
After acquiring the first data and the second data, the in-vehicle computer may execute step S12.
Step S12: and determining time unified fusion data of the first data and the second data which are synchronous at a plurality of time points in the time period.
In order to ensure, as far as possible, the accuracy of fusing the data acquired by the different sensors in the time dimension, in this embodiment the onboard computer may determine a first acquisition frame rate at which the first sensor acquires the first data and a second acquisition frame rate at which the second sensor acquires the second data. The acquisition frame rate indicates how many frames of data the sensor acquires per second; for example, the millimeter wave radar collects data at 20 frames/second, and the camera collects image data at 30 frames/second.
It should be noted that, for sensor data fusion, the first sensor and the second sensor in this embodiment are not the same sensor, but it is not limited that only two kinds of sensor data can be fused. In this embodiment, multiple sensor data may be fused, and only the data acquired in the same space in the same time period needs to be satisfied, so that this embodiment should not be considered as a limitation of the present application.
After the first acquisition frame rate and the second acquisition frame rate are determined, the vehicle-mounted computer can determine a plurality of time points from the time period according to the first acquisition frame rate and the second acquisition frame rate, wherein the first data and the second data are acquired at each of the plurality of time points.
For example, the vehicle-mounted computer may determine a reference time point from the time period (i.e., the same time period during which the first sensor and the second sensor collect data) as a reference for data fusion. Taking millimeter wave radar data collected at 20 frames/second and a camera collecting at 30 frames/second as an example, within the period from 11:11:11 to 11:11:16, 11:11:11 may be determined as the reference time point, because both the millimeter wave radar and the camera collect data at 11:11:11, and the data each collects at that time point is the first frame collected within the period. In order to determine as many time points as possible from the period and thereby improve the accuracy of the fused data, the first time point at which the different sensors simultaneously acquire data within the period is generally used as the reference time point, but this should not be considered a limitation of the present application (the second, third, or a later time point at which the different sensors simultaneously acquire data may also be used as the reference time point, so as to avoid, as far as possible, errors that may exist in the first few frames collected by the sensors).
Before or after the reference time point is determined from the time period, the vehicle-mounted computer may determine the greatest common divisor of the first acquisition frame rate and the second acquisition frame rate. Again taking millimeter wave radar data at 20 frames/second and a camera at 30 frames/second as an example, the vehicle-mounted computer can compute the common divisors of the two rates; the greatest common divisor is 10. Of course, this example computes the greatest common divisor for two acquisition frame rates; when several sensors with several different acquisition frame rates are involved, the greatest common divisor of all the frame rates may be determined in the same way, and this is not limited here.
After the greatest common divisor is determined, the vehicle-mounted computer can determine a plurality of time points in the time period based on the greatest common divisor and by combining the reference time point.
Illustratively, continuing with the above example (millimeter wave radar data at 20 frames/second, a camera at 30 frames/second, the reference time point 11:11:11, and a greatest common divisor of 10), the onboard computer may determine 60 time points including the reference time point (taking 11:11:16 as the end of the example, the window spans 6 seconds, and the first data and the second data are both acquired at 10 time points in each second). Of course, the on-board computer may also select only some of these time points as the time points at which data fusion needs to be performed, which is not limited here.
In this embodiment, another way is provided, so that the onboard computer determines a plurality of time points from the period of time according to the first acquisition frame rate and the second acquisition frame rate.
For example, before or after determining the reference time point from the time period, the vehicle-mounted computer may determine a least common multiple between a first time period required for acquiring a frame of first data based on the first acquisition frame rate and a second time period required for acquiring a frame of second data based on the second acquisition frame rate.
Continuing with the example of millimeter wave radar data at an acquisition frame rate of 20 frames/second and a camera at an acquisition frame rate of 30 frames/second, the time required for the millimeter wave radar to acquire one frame of data is $\frac{1}{20}$ second, the time required for the camera to acquire one frame of data is $\frac{1}{30}$ second, and the least common multiple of the two durations is $\frac{1}{10}$ second. Therefore, the in-vehicle computer can again determine 60 time points including the reference time point (taking 11:11:16 as the end of the example). Similarly, the vehicle-mounted computer may select only some of these time points as the time points at which data fusion needs to be performed, which is not limited here.
A plurality of time points within the time period are determined by calculating the greatest common divisor of the acquisition frame rates of the different sensors, or the least common multiple of their per-frame acquisition durations; and because a reference time point is also determined, the plurality of time points in the time period can be determined accurately, which guarantees the accuracy of the sensor data in the time dimension when the data are fused.
The order in which the reference time point is determined and the greatest common divisor (or least common multiple) is calculated is not strictly limited: the reference time point may be determined first and the greatest common divisor (or least common multiple) calculated afterwards, the calculation may be performed first, or the two may be carried out in parallel; this is not limited here.
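As a purely illustrative sketch (not part of the patent text), the following Python snippet shows one way the shared time points could be derived from two acquisition frame rates and a reference time point, following the greatest-common-divisor / least-common-multiple reasoning above; the function name and argument names are hypothetical.

```python
from math import gcd

def fusion_time_points(rate_a_hz, rate_b_hz, t_ref_s, duration_s):
    """Time points (in seconds) at which both sensors produce a frame.

    rate_a_hz, rate_b_hz : integer acquisition frame rates, e.g. 20 and 30 frames/s.
    t_ref_s              : reference time point at which both sensors sample together.
    duration_s           : length of the common acquisition period, in whole seconds.
    """
    common_rate = gcd(rate_a_hz, rate_b_hz)      # e.g. gcd(20, 30) = 10 Hz
    # The spacing between shared frames; equivalently the least common multiple
    # of the per-frame durations 1/20 s and 1/30 s, i.e. 1/10 s.
    return [t_ref_s + k / common_rate for k in range(duration_s * common_rate)]

points = fusion_time_points(20, 30, 0.0, 6)
print(len(points))   # 60 time points, 10 per second over the 6-second window
print(points[:3])    # [0.0, 0.1, 0.2]
```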
After the plurality of time points in the time period are determined, the first data and the second data at the same time point can be fused according to the determined plurality of time points, so that the time unified fused data in the time period can be determined.
In this embodiment, the first data and the second data at each of the determined multiple time points may be fused, so that the fused data may be consistent in time, and the accuracy of the fused data may be greatly ensured. Therefore, the vehicle-mounted computer can determine the time unified fusion data in the time period.
In addition, in some possible implementations, the acquisition frame rates of the sensors may be adjusted in advance so that they all match the lowest frame rate among the sensor data to be fused (that is, the higher acquisition frame rates are lowered to be consistent with the lowest one; for example, for millimeter wave radar data at 20 frames/second and a camera at 30 frames/second, the camera's acquisition frame rate may be adjusted to 20 frames/second to match the millimeter wave radar). Data acquisition is then performed with the aligned frame rates, and the collected data are used as the basis for fusion. Because the data are acquired at the same frame rate, once a reference time point is determined the data from the different sensors remain consistent on the time scale frame by frame, and they can be fused to determine the time unified fusion data within the period.
Carrying out data fusion in this way is simple, and the frame rate of the fused data can be higher (20 frames/second in this example, versus 10 with the greatest-common-divisor approach), so the quality of the time unified fusion data can be further improved while also improving the efficiency of determining it.
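A minimal sketch of the frame-rate alignment described above, assuming a simple policy of keeping, from the higher-rate stream, the first frame at or after each tick of the lower rate; the patent does not prescribe a specific down-sampling rule, so this is only one possible interpretation.

```python
def downsample_to_rate(frames, dst_rate_hz):
    """Keep only those frames of a higher-rate stream that line up with a lower rate.

    frames      : list of (timestamp_s, data) tuples from the higher-rate sensor,
                  e.g. a camera at 30 frames/s.
    dst_rate_hz : the lower target frame rate, e.g. 20 frames/s to match the radar.
    """
    if not frames:
        return []
    step = 1.0 / dst_rate_hz
    kept, next_t = [], frames[0][0]
    for ts, data in frames:
        if ts + 1e-6 >= next_t:          # first frame at or after each target tick
            kept.append((ts, data))
            next_t += step
    return kept
```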
In other possible implementations, the fusion algorithms themselves have processing frame rates; for example, lane line identification may run at 25 frames/second, a visual classification algorithm at 30 frames/second, and SSD-based (Single Shot MultiBox Detector, a target detection model) multi-target identification at 10 frames/second. In that case, when determining the plurality of time points within the time period, the corresponding processing frame rates may also be taken into account (for example, for millimeter wave radar data at 20 frames/second, a camera at 30 frames/second, and lane line identification at 25 frames/second, the greatest common divisor or least common multiple may be computed over all three rates together).
A plurality of time points within the time period are determined by utilizing the acquisition frame rates of the different sensors during data acquisition, and these time points serve as the benchmarks for fusing the sensor data in the time dimension, so that the accuracy of fusing the data collected by different sensors in the time dimension can be ensured as far as possible. Because the time points are determined precisely from the sensors' acquisition frame rates, the accuracy of the fusion of the data collected by different sensors in the time dimension is guaranteed.
After determining the time-uniform fusion data, the in-vehicle computer may perform step S13.
Step S13: and determining the spatial unified fusion data of the first data and the second data under the same coordinate system.
In this embodiment, the vehicle-mounted computer may determine that the first data and the second data are spatially unified and merged in the same coordinate system.
Since the world coordinate system is an absolute coordinate system of one system (for example, the environment map generation system of the vehicle in the embodiment of the present application), the world coordinate system can be regarded as a unified coordinate system of the first data and the second data. The onboard computer may determine first coordinate data in the world coordinate system based on the first data, and may determine second coordinate data in the world coordinate system based on the second data. Therefore, the first data and the second data can complete the unification of the coordinate system.
For example, the first coordinate data and the second coordinate data in the world coordinate system may be fused to determine spatially unified fused data in the world coordinate system. Because the image coordinate system where the image of the data collected by the vision sensor is located is different from the world coordinate system, the data in the image coordinate system can be converted into the coordinate data in the world coordinate system based on the conversion relation matrix by determining the conversion relation matrix between the image coordinate system and the world coordinate system.
In this embodiment, it is assumed that the first sensor is a millimeter wave radar, the first data is millimeter wave radar data, and the millimeter wave radar data is data in a world coordinate system, and it may be determined that the first data is the first coordinate data without conversion.
Taking the second sensor as the vision sensor as an example, the second data acquired by the second sensor is the vision sensor data, and the vision sensor data belongs to the data in the image coordinate system. Therefore, the conversion of the second data in the image coordinate system to the second coordinate data in the world coordinate system can be realized by determining the conversion relation matrix between the image coordinate system and the world coordinate system.
In the present embodiment, in order to determine the conversion relationship matrix between the image coordinate system and the world coordinate system, the relationship between the image coordinate system and the world coordinate system will be described.
The camera imaging process involves four coordinate systems in total: the pixel plane coordinate system (pixel coordinate system), the image plane coordinate system (image coordinate system), the camera coordinate system, and the world coordinate system, together with the conversions between them.
Denote the four coordinate systems as: the pixel coordinate system (u, v), the image coordinate system (x, y), the camera coordinate system (Xc, Yc, Zc), and the world coordinate system (Xw, Yw, Zw). The conversion relationships between them may be determined by the following method.
Referring to fig. 3, fig. 3 is a schematic diagram illustrating the relative position relationship between the image coordinate system and the pixel coordinate system according to an embodiment of the present disclosure. In fig. 3, the origin (0, 0) of the image coordinate system, i.e., the point $(u_0, v_0)$ in the pixel coordinate system $(u, v)$, is the center of the image plane.
Let $dx$ and $dy$ be the physical dimensions of one pixel along the u-axis and v-axis directions of the pixel coordinate system; then the following relationship holds:

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \dfrac{1}{dx} & 0 & u_0 \\ 0 & \dfrac{1}{dy} & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \tag{1}$$

Equation (1) expresses the relative positional relationship between the image coordinate system and the pixel coordinate system (the symbols are explained later, when the conversion relation matrix between the world coordinate system and the image coordinate system is determined).
Referring to fig. 4, fig. 4 is a schematic diagram of a relationship between a camera coordinate system and an image coordinate system according to an embodiment of the present disclosure.
In this embodiment, the transformation between the camera coordinate system and the image coordinate system can be obtained from the pinhole model of the camera and the principle of similar triangles, giving the following relation:

$$x = f\,\frac{X_c}{Z_c}, \qquad y = f\,\frac{Y_c}{Z_c} \tag{2}$$

Equation (2) can be written in matrix form:

$$Z_c\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} \tag{3}$$

Equation (3) is the conversion relation between the camera coordinate system and the image coordinate system in matrix form (the symbols are explained later, when the conversion relation matrix between the world coordinate system and the image coordinate system is determined).
Because the optical axis of the camera lens has a certain pitch angle relative to the ground plane when installed, the camera coordinate system can be transformed to the world coordinate system through a rotation and a translation. The conversion relationship between the camera coordinate system and the world coordinate system is as follows:

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & t \\ 0^{T} & 1 \end{bmatrix}\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{4}$$

Equation (4) is the conversion relation between the camera coordinate system and the world coordinate system (the symbols are explained later, when the conversion relation matrix between the world coordinate system and the image coordinate system is determined).
By combining equations (1), (3) and (4), the conversion relation matrix between the world coordinate system and the image coordinate system can be determined. For example, the transformation relationship may be:

$$Z_c\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} \dfrac{1}{dx} & 0 & u_0 \\ 0 & \dfrac{1}{dy} & v_0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} R & t \\ 0^{T} & 1 \end{bmatrix}\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{5}$$

Equation (5) gives the conversion relation matrix between the image coordinate system and the world coordinate system, where $R$ denotes the rotation matrix, $t$ the translation, $f$ the focal length, $dx$ and $dy$ the physical length occupied by one pixel along the x and y directions of the image coordinate system, and $u_0$ and $v_0$ the horizontal and vertical pixel offsets between the image center pixel coordinate ($O_1$) and the image origin pixel coordinate ($O_0$).
In this way, a transformation relation matrix between the image coordinate system and the world coordinate system can be determined. Therefore, the vehicle-mounted computer can convert the second data in the image coordinate system into the second coordinate data in the world coordinate system according to the determined conversion relation matrix.
Certainly, in order to save the computing resources, the determined conversion relation matrix may be preset in the vehicle-mounted computer in advance, and when the second data in the image coordinate system acquired by the vision sensor needs to be converted into the second coordinate data in the world coordinate system, the second coordinate data in the world coordinate system may be automatically called and calculated, so as to quickly and accurately determine the second coordinate data in the world coordinate system.
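For illustration only, the sketch below composes the matrices of equation (5) with NumPy and back-projects a pixel to world coordinates; because a single pixel does not determine depth, the back-projection additionally assumes the point lies on the ground plane (Zw = 0), an assumption that is not stated in the patent. Function and parameter names are hypothetical.

```python
import numpy as np

def world_to_pixel_matrix(f, dx, dy, u0, v0, R, t):
    """Compose the 3x4 projection of equation (5): world -> pixel (homogeneous).

    f      : focal length (same length unit as dx and dy).
    dx, dy : physical size of one pixel along the image x / y axes.
    u0, v0 : pixel coordinates of the image-plane centre.
    R, t   : 3x3 rotation and length-3 translation from world to camera coordinates.
    """
    intrinsics = np.array([[1 / dx, 0, u0],
                           [0, 1 / dy, v0],
                           [0, 0, 1]]) @ np.array([[f, 0, 0, 0],
                                                   [0, f, 0, 0],
                                                   [0, 0, 1, 0]], dtype=float)
    extrinsics = np.vstack([np.hstack([R, np.reshape(t, (3, 1))]),
                            [0, 0, 0, 1]])
    return intrinsics @ extrinsics

def pixel_to_ground(P, u, v):
    """Back-project pixel (u, v) to world coordinates, assuming the point lies
    on the ground plane Zw = 0 (extra assumption needed to invert the projection)."""
    H = P[:, [0, 1, 3]]                  # with Zw = 0 the projection is a homography
    sx, sy, s = np.linalg.solve(H, np.array([u, v, 1.0]))
    return sx / s, sy / s
```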
Here, the second data is only used as the data collected by the vision sensor for example, and the data collected by the other sensors may be converted by the conversion relation matrix if the data is in the image coordinate system, which is not limited herein.
When the sensor is a vision sensor, the conversion matrix between the image coordinate system and the world coordinate system of the data acquired by the vision sensor is calculated, and the conversion of the data from the image coordinate system to the world coordinate system is realized according to the conversion matrix, so that the method is simple, fast and accurate.
After the first coordinate data and the second coordinate data in the world coordinate system are determined, the vehicle-mounted computer can fuse them (the first coordinate data and the second coordinate data obtained from first data and second data collected at the same time point likewise correspond to the same time point, and it is these coordinate data that are fused), and thereby determine the spatially unified fusion data.
The data collected by different sensors are converted under a world coordinate system, so that the data collected by different sensors are based on the data of one coordinate system, errors caused by space inconsistency can be avoided as far as possible, and the accuracy of fused data is further ensured.
After the time-unified fusion data and the space-unified fusion data are determined, the in-vehicle computer may perform step S14.
Step S14: and determining space-time fusion data fusing the first data and the second data according to the time unified fusion data and the space unified fusion data.
In this embodiment, the vehicle-mounted computer can fuse the determined time unified fusion data and the determined space unified fusion data, so that the first data and the second data can be fused in a time dimension and a space dimension, and the time-space fusion data with uniform time and space can be accurately determined.
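The patent does not specify how the time-aligned and space-aligned data are finally combined; as one hedged illustration, the sketch below associates, at each shared time point, radar detections with the nearest camera detections (both already expressed in world coordinates) and averages each matched pair. The gating distance and the averaging rule are assumptions, not part of the patent.

```python
import numpy as np

def spatiotemporal_fuse(radar_by_time, camera_by_time, time_points, max_dist=1.5):
    """Illustrative space-time fusion at the shared time points.

    radar_by_time / camera_by_time : dict mapping a shared time point to an (N, 2)
    NumPy array of (Xw, Yw) positions in the world coordinate system.
    max_dist : nearest-neighbour association gate in metres (assumed value).
    """
    fused = {}
    for t in time_points:
        r, c = radar_by_time.get(t), camera_by_time.get(t)
        if r is None or c is None or len(r) == 0 or len(c) == 0:
            continue
        pairs = []
        for p in r:
            d = np.linalg.norm(c - p, axis=1)   # distance to every camera detection
            j = int(np.argmin(d))
            if d[j] <= max_dist:                # simple nearest-neighbour gating
                pairs.append((p + c[j]) / 2.0)  # naive average of the two sources
        fused[t] = np.array(pairs)
    return fused
```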
The sensor data fusion method can ensure the accuracy of fused data as far as possible by acquiring data acquired by different sensors in the same space in the same time period and fusing the data of the different sensors in both time dimension (a plurality of time points in the time period) and space dimension (converted into the same coordinate system), thereby enabling the data of the sensors to be well fused and providing accurate space-time fused data.
The sensor data fusion method has been described above; the vehicle environment map generation method is described below. In the present embodiment, the sensor data fusion method is used in the steps of the vehicle environment map generation method, but the environment map generation method is not limited to this particular fusion method, and this should not be regarded as a limitation of the present application.
Referring to fig. 5, fig. 5 is a flowchart of a method for generating an environment map of a vehicle according to an embodiment of the present disclosure. In this embodiment, the method for generating the environment map of the vehicle may be executed by the in-vehicle computer, or may be executed by the server and then the generated environment map may be transmitted to the in-vehicle computer. The present embodiment is described by taking an example of a method for generating an environment map of a vehicle executed by an on-board computer, but the present invention should not be construed as being limited thereto.
In the present embodiment, the environment map generating method of the vehicle may include step S21, step S22, and step S23.
In order to provide more accurate sensing data to the vehicle so that the vehicle can make more accurate judgment in automatic driving, the vehicle-mounted computer may execute step S21.
Step S21: and acquiring visual sensor data and millimeter wave radar data.
In this embodiment, the in-vehicle computer may acquire the vision sensor data acquired by the vision sensor mounted on the vehicle, and acquire the millimeter wave radar data acquired by the millimeter wave radar mounted on the vehicle.
After acquiring the vision sensor data and the millimeter wave radar data, the in-vehicle computer may perform step S22.
Step S22: and determining the space-time fusion data according to the vision sensor data and the millimeter wave radar data by a sensor data fusion method.
In this embodiment, the vehicle-mounted computer may perform data fusion on the visual sensor data and the millimeter wave radar data by using the sensor data fusion method provided in the embodiment of the present application, and determine space-time fusion data. Since how to fuse the visual sensor data and the millimeter wave radar data is specifically described, the process of how to implement the sensor data fusion is specifically described in the foregoing, and details are not repeated here.
After determining the spatiotemporal fusion data based on the vision sensor data and the millimeter wave radar data, the in-vehicle computer may perform step S23.
Step S23: and generating a first environment map with a lane line reflecting the driving environment of the vehicle according to the space-time fusion data.
In this embodiment, because the vision sensor and the millimeter wave radar can both detect lane lines, fusing the vision sensor data collected by the vision sensor with the millimeter wave radar data collected by the millimeter wave radar yields space-time fusion data that reflect the lane lines in the environment in which the vehicle is driving. From these space-time fusion data, the vehicle-mounted computer can further generate a first environment map with a lane line reflecting the vehicle driving environment. This provides the vehicle with accurate sensing data and facilitates automated driving of the vehicle.
In order to provide more accurate sensing data for the vehicle so that the vehicle can make more accurate judgment during automatic driving, in the embodiment, the vehicle-mounted computer can acquire the lidar data acquired by the lidar mounted on the vehicle and the inertial sensor data acquired by the inertial sensor mounted on the vehicle.
After the laser radar data and the inertial sensor data are obtained, the vehicle-mounted computer can estimate the motion track of the vehicle (which may be a local motion track; the local range may be the coverage of the sensors around the vehicle when collecting data, or may extend beyond or lie within that coverage, and can be set according to actual needs, which is not limited here). For example, the motion track of the vehicle may be determined based on the inertial sensors in combination with the state parameters of the vehicle itself (such as the vehicle speed and the steering wheel input).
After the motion trajectory of the vehicle is determined, the vehicle-mounted computer may perform sensor data fusion on the estimated motion trajectory (which may be represented as image data) and the visual sensor data. It should be noted that the time period of the estimated motion trajectory is the same as the time period of the visual sensor data acquisition. The motion trajectory of the vehicle and the visual sensor data of the vehicle also belong to data for the same space (the visual sensor data is collected by the visual sensor, and the motion trajectory is calculated).
Therefore, the vehicle-mounted computer can also fuse the motion trail with the visual sensor data (the sensor data fusion method provided by the embodiment of the application can be adopted for fusion), and determine the second environment map (namely the environment map of the vehicle). The vision sensor data collected by the vision sensor is mainly used for visually reflecting the running environment of the vehicle, the estimated motion trail belongs to estimation of the running environment of the vehicle, and after the vision sensor data and the estimated motion trail are fused, the generated second environment map can reflect the running environment of the vehicle based on vision. In such a way, the vehicle can have accurate visual perception data in the automatic driving process, thereby being beneficial to the automatic driving of the vehicle.
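As a hedged illustration of the trajectory estimation mentioned above (inertial sensor data combined with the vehicle's own state parameters), the sketch below dead-reckons a 2-D motion track from speed and yaw-rate samples; the patent also mentions laser radar data, which could additionally be used to correct such a track (for example via scan matching), but that is not shown here. The function name and arguments are hypothetical.

```python
import math

def dead_reckon(speeds_mps, yaw_rates_rps, dt, x0=0.0, y0=0.0, heading0=0.0):
    """Integrate vehicle speed and yaw rate into a 2-D motion track.

    speeds_mps    : vehicle speed samples (m/s), e.g. from the vehicle state.
    yaw_rates_rps : yaw-rate samples (rad/s), e.g. from the inertial sensor.
    dt            : sampling interval in seconds.
    """
    x, y, heading = x0, y0, heading0
    track = [(x, y)]
    for v, w in zip(speeds_mps, yaw_rates_rps):
        heading += w * dt                       # update heading from yaw rate
        x += v * math.cos(heading) * dt         # advance along current heading
        y += v * math.sin(heading) * dt
        track.append((x, y))
    return track
```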
In this embodiment, after determining the first environment map reflecting the vehicle driving environment and having the lane lines and the second environment map visually reflecting the vehicle driving environment, the on-board computer may fuse the first environment map and the second environment map (may be fused by using the sensor data fusion method provided in the embodiment of the present application), and determine the vehicle driving environment map visually reflecting the vehicle driving environment and having the lane lines.
Through further fusion of the first environment map and the second environment map, the vehicle driving environment map with the lane lines based on the visual reflection of the vehicle driving environment can be more accurate and contain richer information, so that the data can be corrected more accurately, and more accurate perception data can be provided for the vehicle.
In this embodiment, in order to realize positioning of the vehicle on a lane level (i.e., positioning of the vehicle is accurate to the lane level), the on-board computer may acquire positioning data collected by a positioning module installed on the vehicle.
Illustratively, the vehicle-mounted computer may determine the positioning information of the vehicle based on the positioning data, the vision sensor data, and the lidar data.
After the positioning information of the vehicle is determined, the on-board computer may determine a positioning result including a lane position relationship based on a vehicle driving environment map (a vehicle driving environment map having a lane line that visually reflects the vehicle driving environment) and the determined positioning information. This allows a precise positioning of the vehicle at the lane level.
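The patent does not describe how the lane position relationship is computed from the vehicle driving environment map and the positioning information; the sketch below is one simple possibility, assuming the map's lane lines can be expressed as sorted lateral offsets in a common road-aligned frame. All names and values are illustrative.

```python
import bisect

def lane_position(lane_line_offsets_m, vehicle_lateral_m):
    """Locate the vehicle between lane lines.

    lane_line_offsets_m : sorted lateral offsets (metres) of the lane lines taken
                          from the vehicle driving environment map.
    vehicle_lateral_m   : lateral position of the vehicle in the same frame,
                          taken from the fused positioning information.
    Returns the index of the lane the vehicle occupies (0 = leftmost), or None
    if the vehicle lies outside the mapped lane lines.
    """
    i = bisect.bisect_right(lane_line_offsets_m, vehicle_lateral_m)
    if 0 < i < len(lane_line_offsets_m):
        return i - 1
    return None

# Example: lane lines at -1.75 m, 1.75 m and 5.25 m; a vehicle at 0.4 m is in lane 0.
print(lane_position([-1.75, 1.75, 5.25], 0.4))   # 0
```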
The steps of the method for generating an environment map of a vehicle performed by the vehicle-mounted computer are not limited to the order described above. For example, the steps of determining the positioning information, generating the first environment map and generating the second environment map may be performed simultaneously.
To give an overall description of the environment map generation method of the vehicle and of the process of positioning the vehicle at the lane level, the following description is made with reference to fig. 6.
Referring to fig. 6, fig. 6 is a flowchart illustrating an operation of a method for generating an environment map of a vehicle according to an embodiment of the present disclosure.
In this embodiment, the vehicle-mounted computer can acquire positioning data collected by the GNSS (positioning module), vision sensor data collected by the vision sensor, millimeter wave radar data collected by the millimeter wave radar, laser radar data collected by the laser radar (which may include, for example, two sixteen-line laser radars and one four-line laser radar), and inertial sensor data collected by the inertial sensor. It should be noted that the data collected by the positioning module, the vision sensor, the millimeter wave radar, the laser radar and the inertial sensor may first undergo some preprocessing to obtain the corresponding positioning data, vision sensor data, millimeter wave radar data, laser radar data and inertial sensor data; the method is not limited to using the raw data collected by these sensors or modules, and no limitation is made here.
In order to improve the operating efficiency of the method for generating an environment map of a vehicle, the vehicle-mounted computer may execute the method in multiple threads (e.g., one thread executes the steps for determining the positioning information, another thread executes the steps for generating the first environment map, and so on). For example, the vehicle-mounted computer can perform sensor data fusion on the positioning data, the vision sensor data and the laser radar data to determine the positioning information; it can calculate the motion trajectory of the vehicle from the laser radar data and the inertial sensor data, and perform sensor data fusion on the determined motion trajectory and the vision sensor data to generate the second environment map; and it can perform sensor data fusion on the millimeter wave radar data and the vision sensor data to generate the first environment map.
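As a rough illustration of how the three branches might be run concurrently, the following sketch uses Python's concurrent.futures; the function names and their bodies are placeholders standing in for the fusion steps described above, not an actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def determine_positioning(gnss, vision, lidar):
    # fuse positioning data, vision sensor data and laser radar data
    return {"positioning_info": None}

def build_first_map(mmw_radar, vision):
    # spatio-temporal fusion of millimeter wave radar data and vision sensor data
    return {"first_map": None}

def build_second_map(lidar, imu, vision):
    # estimate the motion trajectory, then fuse it with the vision sensor data
    return {"second_map": None}

def run_pipeline(gnss, vision, mmw_radar, lidar, imu):
    """Run the three branches of the map-generation method concurrently."""
    with ThreadPoolExecutor(max_workers=3) as pool:
        pos = pool.submit(determine_positioning, gnss, vision, lidar)
        first = pool.submit(build_first_map, mmw_radar, vision)
        second = pool.submit(build_second_map, lidar, imu, vision)
        return pos.result(), first.result(), second.result()
```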
After the first environment map and the second environment map are determined, the vehicle-mounted computer can update the visual data by combining the two maps (for example, correcting some data in the first environment map and/or the second environment map). The vehicle-mounted computer can then perform sensor data fusion on the first and second environment maps with updated visual data and the determined positioning information, to generate the vehicle driving environment map with lane lines that visually reflects the vehicle driving environment.
After the vehicle driving environment map is generated, the vehicle-mounted computer can realize the positioning of the vehicle at the lane level and determine the positioning result of the vehicle.
In this embodiment, the determined vehicle driving environment map may also be used to update the feature points and feature maps of the vehicle, such as the determined first environment map, the determined second environment map and the estimated motion trajectory. The vehicle-mounted computer can thus perform feedback adjustment on the first environment map, the second environment map and the estimated motion trajectory according to the determined vehicle driving environment map, and determine a new vehicle driving environment map from the adjusted versions, which helps to further improve the accuracy of the vehicle driving environment map.
In addition, the generated vehicle driving environment map may be used for sensing a vehicle driving environment, determining a safe driving area of the vehicle, and determining and adjusting a pose of the vehicle, which is not limited herein.
In order to realize better automatic driving of the vehicle, the vehicle-mounted computer can also identify the driving environment of the vehicle according to the generated vehicle driving environment map so as to determine the current driving scene of the vehicle.
After the current driving scene of the vehicle is determined, the vehicle-mounted computer can also send it to a reinforcement learning network processing perception system in communication with the vehicle-mounted computer (which may be arranged inside the vehicle-mounted computer or on a cloud server). The reinforcement learning network processing perception system can collect the relationship between a driving scene and the driver's operations and their parameters: for example, a red-light scene corresponds to braking and its parameters, while an overtaking scene corresponds to the turn signal, steering wheel parameters, accelerator pedal parameters and the like.
The reinforcement learning network processing perception system can pair the collected information about the driving surroundings with the recorded driver actions. By modelling the driving behaviour as a Markov process, the system makes reinforcement-learning-based positioning and recognition decisions for automatic driving; it can feed the perception information back to the vehicle-mounted computer, which makes a comprehensive decision and finally converts the result into commands for controlling the steering wheel, accelerator, brake and the like.
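The pairing of driving scenes with driver actions can be pictured with a toy tabular Q-learning sketch such as the one below; the scene and action sets, the reward and the update rule are all assumptions introduced for illustration and are not the reinforcement learning network described here:

```python
import random
from collections import defaultdict

SCENES = ["red_light", "overtake", "free_road"]
ACTIONS = ["brake", "signal_and_steer", "keep_speed"]

q_table = defaultdict(float)          # (scene, action) -> estimated value
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def choose_action(scene: str) -> str:
    """Epsilon-greedy choice over the driver-style actions for a scene."""
    if random.random() < epsilon:                               # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(scene, a)])      # exploit

def update(scene: str, action: str, reward: float, next_scene: str) -> None:
    """Standard one-step Q-learning update."""
    best_next = max(q_table[(next_scene, a)] for a in ACTIONS)
    q_table[(scene, action)] += alpha * (reward + gamma * best_next
                                         - q_table[(scene, action)])

# e.g. reward driver-consistent behaviour: braking at a red light
update("red_light", "brake", reward=1.0, next_scene="free_road")
```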
By pairing a driving scenario with the experience of the driver (driving behavior, driving actions, etc. of the driver in such a scenario), automated driving of the vehicle may be better achieved.
Based on the same inventive concept, an environment map generating system 20 of a vehicle is also provided. Referring to fig. 7, fig. 7 is a block diagram illustrating an environment map generating system 20 of a vehicle according to an embodiment of the present disclosure.
In this embodiment, a vehicle is mounted with a vision sensor, a millimeter wave radar, a laser radar, an inertial sensor and a positioning module, and the environment map generation system 20 includes: a positioning information module 21, configured to determine the positioning information of the vehicle according to the acquired positioning data, vision sensor data and laser radar data; and an environment map generation module 22, configured to determine the space-time fusion data by the sensor data fusion method provided in the embodiments of the present application according to the vision sensor data and the acquired millimeter wave radar data, and to generate, based on the space-time fusion data, a first environment map having lane lines and reflecting the vehicle driving environment. The environment map generation module 22 is further configured to determine the motion trajectory of the vehicle according to the acquired laser radar data and inertial sensor data, to generate, according to the motion trajectory and the vision sensor data, a second environment map that visually reflects the vehicle driving environment, and to fuse the first environment map and the second environment map to determine a vehicle driving environment map with lane lines that visually reflects the vehicle driving environment. The positioning information module 21 is further configured to determine a positioning result including the lane position relationship according to the vehicle driving environment map and the positioning information.
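Purely as a structural illustration, the two modules described above could be organised as follows; the class and method names are assumptions introduced for this sketch and do not correspond to any concrete implementation of the system 20:

```python
class PositioningInfoModule:
    """Sketch of the positioning information module 21."""

    def determine_positioning(self, gnss_data, vision_data, lidar_data):
        """Fuse positioning, vision and laser radar data into positioning info."""
        ...

    def lane_level_result(self, driving_env_map, positioning_info):
        """Return a positioning result that includes the lane position relationship."""
        ...


class EnvironmentMapModule:
    """Sketch of the environment map generation module 22."""

    def first_map(self, vision_data, mmw_radar_data):
        """Space-time fusion of vision and millimeter wave radar data -> lane-line map."""
        ...

    def second_map(self, lidar_data, imu_data, vision_data):
        """Trajectory estimation plus vision fusion -> vision-based map."""
        ...

    def driving_environment_map(self, first_map, second_map):
        """Fuse the two maps into the final vehicle driving environment map."""
        ...
```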
To sum up, the embodiments of the present application provide a sensor data fusion method and a vehicle environment map generation method and system. By acquiring data collected by different sensors in the same space during the same time period, and fusing the different sensor data in the time dimension (at multiple time points within the time period) and in the space dimension (by converting them to the same coordinate system), the accuracy of the fused data can be ensured as far as possible. The sensor data can therefore be fused well, accurate space-time fusion data can be provided, and more accurate perception data can be supplied to the vehicle, which facilitates automatic driving of the vehicle.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the modules is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of modules may be combined or integrated into another system, or some features may be omitted, or not executed.
In addition, the modules described as the separate components may or may not be physically separated, and some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. A method of sensor data fusion, the method comprising:
acquiring first data and second data which are acquired by a first sensor and a second sensor in the same space in the same time period;
determining time unified fusion data of the first data and the second data which are synchronous at a plurality of time points in the time period;
determining spatial unified fusion data of the first data and the second data under the same coordinate system;
and determining space-time fusion data fusing the first data and the second data according to the time unified fusion data and the space unified fusion data.
2. The sensor data fusion method of claim 1, wherein determining time unified fusion data of the first data and the second data which are synchronous at a plurality of time points in the time period comprises:
determining a first acquisition frame rate at which the first sensor acquires the first data and a second acquisition frame rate at which the second sensor acquires the second data;
determining a plurality of time points from the time period according to the first acquisition frame rate and the second acquisition frame rate, wherein first data and second data are acquired at each time point in the plurality of time points;
and fusing the first data and the second data at the same time point according to the plurality of determined time points to determine the time unified fusion data in the time period.
3. The sensor data fusion method of claim 2, wherein determining a plurality of time points from the period of time based on the first frame rate of acquisition and the second frame rate of acquisition comprises:
determining a reference time point from the time period, wherein the first sensor acquires first data at the reference time point and the second sensor acquires second data at the reference time point;
determining a greatest common divisor between the first acquisition frame rate and the second acquisition frame rate, or determining a least common multiple between a first time length required for acquiring a frame of first data based on the first acquisition frame rate and a second time length required for acquiring a frame of second data based on the second acquisition frame rate;
and determining a plurality of time points in the time period by combining the reference time point based on the greatest common divisor or the least common multiple.
4. The sensor data fusion method of claim 1, wherein, when the first data and the second data are both image data, determining spatial unified fusion data of the first data and the second data in the same coordinate system comprises:
determining first coordinate data in a world coordinate system based on the first data;
determining second coordinate data in a world coordinate system based on the second data;
and fusing the first coordinate data and the second coordinate data in the world coordinate system to determine spatial unified fusion data in the world coordinate system.
5. The sensor data fusion method according to claim 4, wherein when the second sensor is a vision sensor, the second data is vision sensor data, and determining second coordinate data in a world coordinate system based on the second data includes:
determining a conversion relation matrix between an image coordinate system and a world coordinate system;
and converting the second data in the image coordinate system into second coordinate data in the world coordinate system according to the conversion relation matrix.
6. An environment map generation method of a vehicle on which a vision sensor and a millimeter wave radar are mounted, the method comprising:
acquiring visual sensor data and millimeter wave radar data;
determining the spatiotemporal fusion data by the sensor data fusion method of any one of claims 1 to 5 from the vision sensor data and the millimeter wave radar data;
and generating a first environment map with a lane line reflecting the driving environment of the vehicle according to the space-time fusion data.
7. The environmental map generation method of a vehicle according to claim 6, wherein a laser radar and an inertial sensor are further mounted on the vehicle, the method further comprising:
acquiring laser radar data and inertial sensor data;
determining the motion trajectory of the vehicle according to the inertial sensor data and the laser radar data;
and generating, according to the motion trajectory and the vision sensor data, a second environment map which reflects the driving environment of the vehicle based on vision.
8. The environment map generation method of a vehicle according to claim 7, wherein after generating the first environment map and the second environment map reflecting a vehicle travel environment, the method further comprises:
and fusing the first environment map and the second environment map to determine a vehicle driving environment map with a lane line which reflects the vehicle driving environment based on vision.
9. The method of claim 8, wherein a positioning module is further mounted on the vehicle, the method further comprising:
acquiring positioning data;
determining positioning information of the vehicle according to the positioning data, the vision sensor data and the laser radar data;
and determining a positioning result containing the lane position relation according to the vehicle driving environment map and the positioning information.
10. An environment map generation system of a vehicle, wherein a vision sensor, a millimeter wave radar, a laser radar, an inertial sensor and a positioning module are mounted on the vehicle, the environment map generation system comprising:
a positioning information module, configured to determine the positioning information of the vehicle according to the acquired positioning data, the vision sensor data and the laser radar data;
an environment map generation module, configured to determine the space-time fusion data by the sensor data fusion method according to any one of claims 1 to 5 according to the vision sensor data and the acquired millimeter wave radar data, and to generate, based on the space-time fusion data, a first environment map having a lane line and reflecting the vehicle driving environment; the environment map generation module being further configured to determine the motion trajectory of the vehicle according to the acquired laser radar data and inertial sensor data, to generate, according to the motion trajectory and the vision sensor data, a second environment map which reflects the vehicle driving environment based on vision, and to fuse the first environment map and the second environment map to determine a vehicle driving environment map with a lane line which reflects the vehicle driving environment based on vision;
the positioning information module being further configured to determine a positioning result containing the lane position relation according to the vehicle driving environment map and the positioning information.
CN202010004760.5A 2020-01-02 2020-01-02 Sensor data fusion method, and vehicle environment map generation method and system Pending CN111209956A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010004760.5A CN111209956A (en) 2020-01-02 2020-01-02 Sensor data fusion method, and vehicle environment map generation method and system

Publications (1)

Publication Number Publication Date
CN111209956A true CN111209956A (en) 2020-05-29

Family

ID=70789525

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010004760.5A Pending CN111209956A (en) 2020-01-02 2020-01-02 Sensor data fusion method, and vehicle environment map generation method and system

Country Status (1)

Country Link
CN (1) CN111209956A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102303605A (en) * 2011-06-30 2012-01-04 中国汽车技术研究中心 Multi-sensor information fusion-based collision and departure pre-warning device and method
CN106204629A (en) * 2016-08-17 2016-12-07 西安电子科技大学 Space based radar and infrared data merge moving target detection method in-orbit
US20180087907A1 (en) * 2016-09-29 2018-03-29 The Charles Stark Draper Laboratory, Inc. Autonomous vehicle: vehicle localization
CN107578066A (en) * 2017-09-07 2018-01-12 南京莱斯信息技术股份有限公司 Civil defence comprehensive situation based on Multi-source Information Fusion shows system and method
CN109188932A (en) * 2018-08-22 2019-01-11 吉林大学 A kind of multi-cam assemblage on-orbit test method and system towards intelligent driving
CN109341706A (en) * 2018-10-17 2019-02-15 张亮 A kind of production method of the multiple features fusion map towards pilotless automobile
CN109682369A (en) * 2018-12-13 2019-04-26 上海航天控制技术研究所 Rotating Platform for High Precision Star Sensor data fusion method based on asynchronous exposure
CN110532896A (en) * 2019-08-06 2019-12-03 北京航空航天大学 A kind of road vehicle detection method merged based on trackside millimetre-wave radar and machine vision

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112639910A (en) * 2020-09-25 2021-04-09 华为技术有限公司 Method and device for observing traffic elements
WO2022078442A1 (en) * 2020-10-15 2022-04-21 左忠斌 Method for 3d information acquisition based on fusion of optical scanning and smart vision
CN113610133A (en) * 2021-07-30 2021-11-05 上海德衡数据科技有限公司 Laser data and visual data fusion method and system
CN113610133B (en) * 2021-07-30 2024-05-28 上海德衡数据科技有限公司 Laser data and visual data fusion method and system
CN113837385A (en) * 2021-09-06 2021-12-24 东软睿驰汽车技术(沈阳)有限公司 Data processing method, device, equipment, medium and product
CN113837385B (en) * 2021-09-06 2024-02-09 东软睿驰汽车技术(沈阳)有限公司 Data processing method, device, equipment, medium and product
CN113900070A (en) * 2021-10-08 2022-01-07 河北德冠隆电子科技有限公司 Method, device and system for automatically drawing target data and accurately outputting radar lane
CN114531554A (en) * 2022-04-24 2022-05-24 浙江华眼视觉科技有限公司 Video fusion synthesis method and device for express code recognizer
CN114531554B (en) * 2022-04-24 2022-08-16 浙江华眼视觉科技有限公司 Video fusion synthesis method and device of express mail code recognizer
CN114897066A (en) * 2022-05-06 2022-08-12 中国人民解放军海军工程大学 Bolt looseness detection method and device based on machine vision and millimeter wave radar
CN115083152A (en) * 2022-06-09 2022-09-20 北京主线科技有限公司 Vehicle formation sensing system, method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination