CN112307594A - Road data acquisition and simulation scene establishment integrated system and method - Google Patents


Info

Publication number
CN112307594A
CN112307594A (application CN202010990734.4A)
Authority
CN
China
Prior art keywords
vehicle
data
information
target object
road
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010990734.4A
Other languages
Chinese (zh)
Other versions
CN112307594B (en)
Inventor
王霁宇
秦孔建
郭魁元
孙航
端帅
俞彦辉
张志强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Automotive Technology and Research Center Co Ltd
CATARC Automotive Test Center Tianjin Co Ltd
Original Assignee
China Automotive Technology and Research Center Co Ltd
CATARC Automotive Test Center Tianjin Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Automotive Technology and Research Center Co Ltd, CATARC Automotive Test Center Tianjin Co Ltd filed Critical China Automotive Technology and Research Center Co Ltd
Priority to CN202010990734.4A priority Critical patent/CN112307594B/en
Publication of CN112307594A publication Critical patent/CN112307594A/en
Application granted granted Critical
Publication of CN112307594B publication Critical patent/CN112307594B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/20 Design optimisation, verification or simulation
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 13/00 Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S 13/88 Radar or analogous systems specially adapted for specific applications
    • G01S 13/93 Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S 13/931 Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/88 Lidar systems specially adapted for specific applications
    • G01S 17/93 Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S 17/931 Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques

Abstract

The invention relates to a road data acquisition and simulation scene establishment integrated system. The integrated system comprises a sensor acquisition unit, a data fusion center and a road environment simulation platform which are connected in sequence. The sensor acquisition unit includes: a component A, comprising a front-view camera and millimeter wave radar fusion group; a component B, a laser radar group mounted at half vehicle height, comprising at least six 110-degree sub laser radar sensors; a component C, a laser radar group mounted on the vehicle roof, comprising three laser radars; and a component D, an IMU and GPS set mounted in the middle of the vehicle. The collected environmental data are used for scene simulation and for generating corresponding test cases. This solves the prior-art problems that road-test data acquisition is costly, that the acquisition process is not coupled with the simulation environment being built, and that environment scenes cannot be generated automatically; the system can broadly cover the potential driving scenes an automatically driven vehicle may encounter and greatly reduces the required road-test mileage.

Description

Road data acquisition and simulation scene establishment integrated system and method
Technical Field
The invention relates to the field of road target data acquisition, in particular to a system and a method for integrating road data acquisition and simulation scene establishment.
Background
In recent years, intelligent driving technology and road test evaluation standards have developed rapidly, placing higher demands on every aspect of intelligent driving road tests. During data acquisition, the perception, storage and synchronous processing of data on target objects in the scene around the road must be of assured quality; when existing data are used to simulate a scene, the requirements on the automation and integration of the simulation process, and on the fidelity and accuracy of the simulation environment, keep rising. An acquisition device with high adaptability, standardization and reliability is therefore both a guarantee of the safety of automatic driving technology and the foundation of a road scene test evaluation system.
The road target perception sensor mainly adopted by the intelligent driving technology at the present stage comprises:
arranging 360-degree cameras around the vehicle or configuring a millimeter wave radar and an ultrasonic radar, transmitting environment data to a data processing center in a distributed manner, and uniformly performing multi-sensor data fusion processing;
the foresight camera and the millimeter wave radar are combined into a set of collected data fusion system, and collected foresight image data and radar output data are subjected to centralized integrated processing;
the front-view camera and the millimeter wave radar fusion system are matched with a laser radar arranged on the roof of the vehicle to form a secondary data fusion system;
The above environment acquisition systems can complete the data acquisition needed for environment perception, but the accuracy of road target data acquisition and the quality of the perception result must be evaluated before road scene restoration simulation, and problems in road scene restoration simulation still need to be improved.
With the acquisition of a large amount of road environment data, software-in-the-loop simulation and simulation with higher precision are required, and the requirements for simulation of the road environment are higher and higher. The establishment of the road environment simulation scene can greatly reduce the pressure of the actual road test of automatic driving, can help to diagnose the problem that the automatic driving system has errors or causes take-over more quickly and accurately for the restoration of the scene, and has important significance for the system test fault diagnosis, the improvement of the automatic driving reliability and the intelligent training of the system.
In current environmental data acquisition and fusion systems, the widely used sensors include vision sensors (cameras), radar, inertial navigation systems and GPS. The visual images collected by the camera serve as the perception link, target objects are detected and ranged by radar, and the vehicle is positioned by fusing GPS with the inertial navigation system. For scene restoration simulation, vehicle environment physical simulation platforms such as Prescan, Carmaker and Carsim are in common use, but building a simulation scene generally involves setting scene environment parameters, configuring sensors, tuning the control system and setting experiment parameters. This workflow suits manually built simulation scenes, while restoration analysis of measured scenes, playback of automatic driving segments and problem localization still require relatively complex manual work.
In view of the above, the present invention is particularly proposed.
Disclosure of Invention
The invention aims to provide a system and method integrating road data acquisition with simulation scene establishment, in which the acquired environmental data are used for scene simulation and for generating corresponding test cases. This solves the prior-art problems that road-test data acquisition is costly, that the acquisition process is not coupled with the simulation environment being built, and that environment scenes cannot be generated automatically; the system can broadly cover the potential types of driving scenes an automatically driven vehicle may encounter and greatly reduces the required road-test mileage.
In order to achieve the purpose, the invention adopts the following technical scheme:
in a first aspect, the invention provides a road data acquisition and simulation scene establishment integrated system, which comprises a sensor acquisition unit, a data fusion center and a road environment simulation platform which are sequentially connected;
the sensor acquisition unit includes:
the component A comprises a front-view camera and a millimeter wave radar fusion group;
the component B is a laser radar group configured at the position of the half vehicle height and comprises at least 6 110-degree sub laser radar sensors, and the laser radar group is used for 360-degree vehicle periphery detection;
the component C is a laser radar group arranged on the roof of the vehicle, comprises three laser radars, and is used for scanning five directions (front, rear, left, right and above) of the three-dimensional space around the vehicle;
and the component D is configured on the IMU and GPS set in the middle of the vehicle and is used for providing real-time pose and absolute position of the vehicle.
In the component A, a front-view camera is arranged on the front windshield of the vehicle and is used for capturing image information data in front of the vehicle and recording video type information;
preferably, in the component a, the millimeter wave radar is mounted on a front bumper of the vehicle and is used for acquiring distance information data between the vehicle and an object in front of the vehicle, sending detection information to the camera, and then performing data pre-fusion with data of the camera;
preferably, the object in front of the vehicle comprises a static obstacle and/or a dynamic obstacle;
preferably, the dynamic obstacle comprises at least one of a motor vehicle, a pedestrian, or a non-motor vehicle.
As a further preferable technical solution, the detection distance of the sub lidar sensor in the component B is 200 meters to 250 meters;
preferably, the lidar bank in assembly C comprises: one 32-line lidar and two 16-line lidar;
preferably, the 32-line lidar is disposed at the center of the roof of the vehicle, and the two 16-line lidar are disposed at both sides of the 32-line lidar, respectively.
As a further preferred technical scheme, the real-time pose provided by the assembly D comprises a pitch angle, a speed and an angular speed.
As a further preferred technical scheme, the data fusion center comprises a time synchronization module, a semantic recognition module and a distance detection module;
the time synchronization module is used for carrying out information time synchronization processing on the information acquired by the sensor acquisition unit to obtain synchronized data;
the semantic recognition module is used for carrying out semantic recognition on the synchronized data to obtain semantic information of the target object;
the distance detection module is used for carrying out distance detection on the synchronized data to obtain the relative motion information of the target object.
As a further preferred technical solution, the road environment simulation platform comprises a vehicle projection to map module, a positioning and track determining module, a static scene generating module and an automatic driving test case generating module;
the vehicle projection-to-map module is used for projecting semantic information of the target object and relative motion information of the target object to a map, the positioning and track determining module is used for positioning and track determining the target object according to the relative motion information of the target object, and the static scene generating module is used for generating a static scene according to static environment data;
the automatic driving test case generation module is used for generating an automatic driving test case according to semantic information of the target object projected to the map, the positioning and track of the target object and a static scene.
As a further preferable technical solution, the road environment simulation platform includes a Carmaker road environment simulation platform.
In a second aspect, the invention provides a method for integrating road data acquisition and simulation scene establishment, which adopts the integrated system to acquire road data and establish simulation scenes.
As a further preferred embodiment, the integration method comprises the steps of:
(a) the sensor acquisition unit acquires vehicle environment data;
(b) and the vehicle environment data is input into a data fusion center, target object fusion data is obtained after processing, and the target object fusion data is input into a road environment simulation platform for simulation.
As a further preferred technical scheme, in the step (b), a D-S evidence theory information synthesis algorithm is adopted for multi-source information fusion, firstly, time synchronization processing is carried out on the measurement information collected by each sensor group, semantic information identification is respectively carried out on the synchronized multi-source information to obtain semantic information and behavior information of a multi-source information target object, and finally, relative motion information of the target object of each sensor group is respectively obtained according to a distance detection module; fusing the processed sensor measurement by using a D-S evidence theory information fusion method, inputting data obtained by each sensor as different evidences, reasoning according to the evidences to obtain a reliability interval and a mass function of the different evidences in a specific scene, combining evidences of high uncertainty evidences and fuzzy evidences, performing synchronous measurement evidence matching on similar evidences, synthesizing the measurement of each sensor acquisition unit component to perform reasoning and synthesis on vehicle data and environment data, performing reliability distribution through the measurement of each sensor acquisition unit component, and finally outputting the fused data.
Compared with the prior art, the invention has the beneficial effects that:
the road data acquisition and simulation scene establishment integrated system provided by the invention is carried out by the configuration and combination of the multiple sensor groups, and can finish the acquisition of the diversity and integrity of the surrounding environment information of the self-vehicle, wherein the acquired environment elements comprise but are not limited to motor vehicles, non-motor vehicles, pedestrians, dynamic and static obstacles and the like near the self-vehicle. And in addition, the information redundancy requirement is considered in the configuration process of the sensor group, the synchronous verification of the acquired information and the matching and association of all elements can be facilitated, and the acquisition system is more reliable.
The data fusion center can perform data-layer fusion and feature-layer fusion on the environment elements: the signals collected by the independently configured sensor acquisition units are input to the synchronization link, the data from different sources are synchronized according to their timestamp information, the synchronized signals are input to the vehicle-mounted data control processing unit for post-fusion, and the vehicle surrounding environment information is output.
The data acquisition process provided by the invention acquires and marks the various complex automatic driving scenes defined in a (file), fully records the response behaviors of the automatic driving vehicle in the various scenes, and facilitates verification during data analysis and scene simulation. The specific acquisition scenarios are defined as follows:
(1) Following scene: the collection vehicle follows a target vehicle ahead. The process can be divided into the target vehicle travelling at constant speed, accelerating, and decelerating, while the collection vehicle keeps a safe distance from the target vehicle and continues to travel. Data acquisition covers the surrounding environment information from the start of the constant-speed phase until several seconds after deceleration;
(2) Target vehicle cut-in scene: the target vehicle is in the lane beside the collection vehicle and accelerates to the left to pass it before cutting in. Data acquisition covers the surrounding environment information for the complete process, including 5 s before and after the cut-in;
(3) Target vehicle cut-out scene: the target vehicle is in the same lane as the collection vehicle, which initially follows it; the target vehicle then changes lane and leaves the lane. Data acquisition covers the surrounding environment information for the complete process, including 5 s before and after the cut-out;
(4) U-turn scene: the collection vehicle makes a U-turn. Data acquisition covers the surrounding environment information for the complete process, including 5 s before and after the turn;
(5) Crossroad scene: the behavior of the collection vehicle at a crossroad is collected. Data acquisition stores the surrounding environment information for the complete process, including 5 s before and after the collection vehicle passes through the crossroad.
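The five acquisition scenarios and their recording windows could be encoded as a small specification table, for instance as below. The class name, field choices and the 5 s windows mapped onto each entry are illustrative assumptions based on the definitions above; the following scene in particular has no fixed pre-event window in the patent text.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ScenarioSpec:
    name: str            # acquisition scenario label
    trigger: str         # event that anchors the recording window
    pre_seconds: float   # environment data kept before the event
    post_seconds: float  # environment data kept after the event

SCENARIOS = [
    ScenarioSpec("following", "target decelerates, safe gap kept", 0.0, 5.0),
    ScenarioSpec("cut_in", "target enters ego lane", 5.0, 5.0),
    ScenarioSpec("cut_out", "target leaves ego lane", 5.0, 5.0),
    ScenarioSpec("u_turn", "ego completes U-turn", 5.0, 5.0),
    ScenarioSpec("crossroad", "ego crosses intersection", 5.0, 5.0),
]
```

A table like this lets the acquisition software tag recorded segments automatically, which is what makes later playback and verification in the simulation platform straightforward.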
the road data acquisition and simulation scene establishment integrated method provided by the invention couples the data acquisition process with the scene restoration simulation process, improves the diversity of the simulation process, supports the simulation of the actual road test environment, is not limited to the manual establishment of the environment scene, and is beneficial to the test of the automatic driving positioning complex environment and the positioning solution of the test problem. The scene simulation provided by the invention is applied to a Carmaker platform, and can accurately restore various environmental information acquired during data acquisition. The various scenes involved in the acquisition can be played back in the scene restoration process, and the data backtracking and simulation of the various scenes can be supported.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic structural diagram of a road data acquisition and simulation scene establishment integrated system provided by the invention.
Icon: 1-a sensor acquisition unit; 2-a data fusion center; 3-a road environment simulation platform; 101-component a; 102-component B; 103-component C; 104-component D; 201-time synchronization module; 202-a semantic recognition module; 203-a distance detection module; 301-vehicle projection to map module; 302-a positioning and trajectory determination module; 303-static scene generation module; 304-automatic driving test case generation module.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should also be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
According to an aspect of the present invention, as shown in fig. 1, there is provided a road data acquisition and simulation scene establishment integrated system, comprising a sensor acquisition unit 1, a data fusion center 2 and a road environment simulation platform 3, which are connected in sequence;
the sensor acquisition unit 1 includes:
the component A101 is a front-view camera and millimeter wave radar fusion group;
a component B102, configured as a laser radar group at a half-car height position, including at least 6 110 ° sub-laser radar sensors, the laser radar group being used for 360 ° car-around detection;
a component C103, a laser radar group arranged on the roof of the vehicle, comprising three laser radars, used for scanning five directions (front, rear, left, right and above) of the three-dimensional space around the vehicle;
and the component D104 is configured on the IMU and GPS set in the middle of the vehicle and is used for providing real-time pose and absolute position of the vehicle.
The integrated system adopts a sensor acquisition unit, a data fusion center and a road environment simulation platform connected in sequence, where the sensor acquisition unit comprises four acquisition components: component A can identify road scene target objects, obtain target object information and output target object acquisition data; component B localizes and identifies surrounding target objects in all directions around the ego vehicle through six laser radar channels and outputs target object data and point cloud data; component C detects target objects in the space around the ego vehicle other than the ground through three laser radars and outputs target object data and point cloud data; component D monitors and positions the ego vehicle's location and real-time pose with high precision through a calibrated IMU and GPS fusion system. The four acquisition components collect environmental data in parallel, can comprehensively capture information on all sides of the vehicle, and feed the data fusion center as four environmental data input sources, meeting the synchronization, comprehensiveness and redundancy requirements of environmental information acquisition. The collected multi-source data are matched and coupled according to their timestamps and fused a second time in the data fusion center, which outputs target object fusion data F that is temporally continuous, synchronized and information-complete, namely the surrounding environment information of the ego vehicle.
The target object fusion data F are then input into the road environment simulation platform to restore the environment information near the vehicle in the road test environment, including static or dynamic targets such as nearby motor vehicles, pedestrians and obstacles. Through scene construction and synchronous simulation, further data analysis of the road test process, such as fault diagnosis, behavior analysis and driving state playback, can be carried out.
The integrated system can systematically acquire environmental data, fuse multi-sensor data and automatically restore to generate a simulation scene, wherein the sensor acquisition unit is strong in robustness and high in integration level. The system couples the data acquisition process with the scene restoration simulation process, improves the diversity of the simulation process, supports the simulation of the actual road test environment, is not limited to the manual construction of the environment scene, and is favorable for the test of the automatic driving positioning complex environment and the positioning solution of the test problem.
It should be noted that:
the above-mentioned "IMU" refers to an Inertial Measurement Unit, Inertial Measurement Unit.
The "GPS" refers to Global Positioning System.
The "half vehicle height position" refers to a position at half vehicle height.
In a preferred embodiment, in the assembly a, a front view camera is mounted on a front windshield of the vehicle for capturing image information data ahead of the vehicle and recording video type information. The front-view camera may obtain visual data that may reflect environmental information and road geometry information.
Preferably, in the assembly a, the millimeter wave radar is mounted on a front bumper of the vehicle and is used for acquiring distance information data between the vehicle and an object in front of the vehicle, sending detection information to the camera, and performing data pre-fusion with data of the camera. The millimeter wave radar can obtain point cloud data, track tracking and target object distance information can be obtained from the point cloud data, and further road geometric information can be obtained from the track tracking.
And the environment information, the road geometric information and the target object distance information are subjected to information fusion to obtain fusion data information.
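One simple way to picture the camera and millimeter wave radar pre-fusion is to attach each radar range measurement to the camera detection whose azimuth it best matches. The function, data layout and the 2-degree gate below are illustrative assumptions, not the patent's actual pre-fusion algorithm.

```python
def prefuse(camera_boxes, radar_targets, max_az_err_deg=2.0):
    """camera_boxes: list of (label, azimuth_deg) for camera detections.
    radar_targets: list of (azimuth_deg, range_m) radar measurements.
    Attach each radar range to the closest camera detection in azimuth,
    leaving the range as None when no radar target falls in the gate."""
    fused = []
    for label, cam_az in camera_boxes:
        best = None
        for rad_az, rng in radar_targets:
            err = abs(rad_az - cam_az)
            if err <= max_az_err_deg and (best is None or err < best[0]):
                best = (err, rng)
        fused.append((label, cam_az, best[1] if best else None))
    return fused

# A car detected near the optical axis picks up the nearby radar range;
# the pedestrian at -12 degrees has no matching radar return.
fused = prefuse([("car", 0.5), ("pedestrian", -12.0)],
                [(0.8, 42.3), (15.0, 7.9)])
```

The result pairs the camera's semantic label with the radar's metric range, which is exactly the complementary information the fusion data described above carries forward.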
Preferably, the object in front of the vehicle comprises a static obstacle and/or a dynamic obstacle. A static obstacle refers to an object with a fixed position, such as a guardrail, a vehicle breakdown warning board, a road bollard or a stone pier; a dynamic obstacle refers to an object that is moving or about to move, such as a motor vehicle, a pedestrian or a non-motor vehicle.
Preferably, the dynamic obstacle comprises at least one of a motor vehicle, a pedestrian, or a non-motor vehicle. Such dynamic obstacles include, but are not limited to, motor vehicles, pedestrians, non-motor vehicles, or any combination thereof.
In a preferred embodiment, the detection range of the sub lidar sensor in assembly B is 200 to 250 meters.
Component B performs perception detection beyond 200 meters using six four-line laser radars arranged around the vehicle body. It captures static or dynamic obstacles around the vehicle body and outputs 360-degree environmental point cloud information, from which semantic information around the vehicle body can be obtained by identifying point cloud features. The outputs of the six laser radars are transmitted independently to the information center and fused synchronously to obtain the data of the environment around the vehicle body. The assembly maintains high durability and reliability even under severe weather conditions.
Each of the six sub laser radar sensors in component B detects the environment in its own sector around the vehicle; after data synchronization, the sector detections are fused to obtain the distance information and semantic information around the vehicle.
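Fusing the six sector detections requires transforming each sub-lidar's points into the common vehicle frame before merging. The sketch below shows the idea in 2D; the mounting table (all sensors at the vehicle origin, yawed 60 degrees apart) and the sample scans are hypothetical calibration values, not the patent's configuration.

```python
import math

def to_vehicle_frame(points, mount_x, mount_y, mount_yaw_deg):
    """Rotate and translate 2D sensor-frame points (x forward, y left)
    into the common vehicle frame using a sub-lidar's mounting pose."""
    yaw = math.radians(mount_yaw_deg)
    c, s = math.cos(yaw), math.sin(yaw)
    return [(mount_x + c * x - s * y, mount_y + s * x + c * y)
            for x, y in points]

# Hypothetical calibration table: six sub-lidars at half vehicle height,
# yawed 60 degrees apart so the 110-degree fields of view overlap and
# jointly cover 360 degrees around the vehicle.
MOUNTS = {i: (0.0, 0.0, 60.0 * i) for i in range(6)}

# Merge two example sector scans (already time-synchronized) into one cloud.
scans = {0: [(10.0, 0.0)], 3: [(5.0, 0.0)]}
merged = [p for lid, pts in scans.items()
          for p in to_vehicle_frame(pts, *MOUNTS[lid])]
```

A point 5 m in front of the rear-facing sensor (yaw 180 degrees) lands 5 m behind the vehicle in the merged cloud, which is what lets one fused frame describe all directions at once.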
In a preferred embodiment, the lidar assembly in assembly C includes: one 32-line lidar and two 16-line lidar.
Preferably, the 32-line lidar is disposed at the center of the roof of the vehicle, and the two 16-line lidar are disposed at both sides of the 32-line lidar, respectively.
The 32-line laser radar of the component C is installed in the center of the top of the vehicle, the two 16-line laser radars are arranged on two sides of the central radar and used for sweeping blind areas on two sides, and the laser radar group is used for capturing information and distance of a target level at a long distance nearby the vehicle. And (4) carrying out data synchronization processing on the information detected by the laser radar in the component C, and fusing to obtain the long-distance information around the vehicle.
In a preferred embodiment, the real-time pose provided by component D includes pitch angle, velocity and angular velocity. The vehicle positioning information obtained by fusing GPS and IMU in component D is synchronized with the acquired comprehensive environment data, and the absolute position and real-time motion information of every object are derived from the real-time absolute position of the collection vehicle output by component D together with the relative information of the other objects. The resulting motion characteristics of the target objects are applied to building the test environment for the driving environment simulator: for example, by reading the vehicle's position information and the relative positions and motion of other road vehicles, the driving simulator can project dynamic target objects onto a map to generate a three-dimensional driving simulation, and the generated test cases can restore a test scene for executing test tasks on driving system modules or components.
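Deriving a target's absolute track and motion from the component D ego pose plus relative detections might look like the flat 2D sketch below. The pose tuple layout and the finite-difference speed estimate are simplifying assumptions for illustration.

```python
import math

def absolute_track(ego_poses, rel_detections):
    """Combine time-stamped GPS/IMU ego poses (t, x, y, yaw_deg) with
    matching relative detections (forward, left) in the ego frame into
    an absolute target track, then estimate the target's ground speed
    by finite differences between consecutive absolute positions."""
    track = []
    for (t, ex, ey, yaw_deg), (fwd, left) in zip(ego_poses, rel_detections):
        yaw = math.radians(yaw_deg)
        track.append((t,
                      ex + fwd * math.cos(yaw) - left * math.sin(yaw),
                      ey + fwd * math.sin(yaw) + left * math.cos(yaw)))
    speeds = [math.hypot(x1 - x0, y1 - y0) / (t1 - t0)
              for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:])]
    return track, speeds

# Ego drives east at 10 m/s; the target is 20 m ahead, then 25 m ahead
# one second later, so its absolute speed works out to 15 m/s.
track, speeds = absolute_track(
    [(0.0, 0.0, 0.0, 0.0), (1.0, 10.0, 0.0, 0.0)],
    [(20.0, 0.0), (25.0, 0.0)])
```

This is the step that lets a purely relative radar/lidar measurement be replayed as an absolute trajectory on the simulator's map.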
In a preferred embodiment, as shown in fig. 1, the data fusion center 2 includes a time synchronization module 201, a semantic recognition module 202, and a distance detection module 203;
the time synchronization module is used for carrying out information time synchronization processing on the information acquired by the sensor acquisition unit to obtain synchronized data;
the semantic recognition module is used for carrying out semantic recognition on the synchronized data to obtain semantic information of the target object;
the distance detection module is used for carrying out distance detection on the synchronized data to obtain the relative motion information of the target object.
The semantic recognition module uses the video data and point cloud data obtained by the sensor acquisition unit to detect and recognize any traffic participant that influences automatic driving decisions in the driving environment, such as other vehicles, pedestrians, and non-motor vehicles, and may also cover static obstacles, signs, signal lamps, and the like. The detection algorithm achieves high-accuracy target detection through deep learning and algorithm acceleration; detection may include feature recognition and tracking on the video image data and point cloud data, with each target identified by a bounding box marked in the data stream.
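Tracking bounding-box detections across frames, so that a target keeps its ID, is commonly done by intersection-over-union (IoU) association. The patent does not name its tracking method; the greedy IoU matcher below is one illustrative possibility:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(prev_boxes, new_boxes, threshold=0.3):
    """Greedily match each new-frame box to the best unused previous-frame
    box whose IoU exceeds the threshold, preserving track identities."""
    matches, used = {}, set()
    for j, nb in enumerate(new_boxes):
        best, best_iou = None, threshold
        for i, pb in enumerate(prev_boxes):
            if i in used:
                continue
            v = iou(pb, nb)
            if v > best_iou:
                best, best_iou = i, v
        if best is not None:
            matches[j] = best
            used.add(best)
    return matches
```

The returned mapping (new index -> previous index) lets each target carry its ID, type, and history forward frame by frame.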
The distance detection module obtains information such as the relative distance and angle between a target object and the collecting vehicle through strictly calibrated radar sensors, and uses it to estimate the target's motion characteristics. By continuously tracking a target through semantic recognition tracking and distance detection, continuous information on the target's relative distance, relative speed, acceleration, heading angle, and so on is obtained; all characteristic information of the target, such as its type (pedestrian, vehicle type, non-motor vehicle, etc.), target ID, time information, and map information, is stored, and the target's position trajectory and motion are derived from the absolute position obtained by the test vehicle's GPS/IMU positioning module.
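The step of turning a tracked range series into relative speed and acceleration can be sketched with finite differences. This is an assumed, simplified estimator; the patent does not specify the differentiation or filtering scheme:

```python
def motion_from_track(times, ranges):
    """Estimate relative speed and acceleration of a tracked target from a
    time-stamped range series using first and second finite differences."""
    speeds = [(ranges[i + 1] - ranges[i]) / (times[i + 1] - times[i])
              for i in range(len(ranges) - 1)]
    # each speed sample sits at the midpoint of its measurement interval
    mid_times = [(times[i] + times[i + 1]) / 2 for i in range(len(times) - 1)]
    accels = [(speeds[i + 1] - speeds[i]) / (mid_times[i + 1] - mid_times[i])
              for i in range(len(speeds) - 1)]
    return speeds, accels
```

A target closing at a constant 2 m/s, for example, yields speeds near -2.0 and accelerations near zero; in practice, raw radar ranges would be smoothed before differencing.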
In a preferred embodiment, as shown in fig. 1, the road environment simulation platform 3 includes a vehicle projection to map module 301, a positioning and track determining module 302, a static scene generating module 303, and an automatic driving test case generating module 304;
the vehicle projection-to-map module is used for projecting semantic information of the target object and relative motion information of the target object to a map, the positioning and track determining module is used for positioning and track determining the target object according to the relative motion information of the target object, and the static scene generating module is used for generating a static scene according to static environment data;
the automatic driving test case generation module is used for generating an automatic driving test case according to semantic information of the target object projected to the map, the positioning and track of the target object and a static scene.
The information from the data fusion center is obtained and stored in time-synchronized frames, and the target information and motion position information of each frame are projected onto a map, generating dynamic configuration data of the ego vehicle and other road users for the driving scene. Specifically, motion parameters such as position trajectory and speed can be smoothed by an optimization method using known prior information about the determined target. The example also covers the important static target information in the automatic driving scene, i.e., the environment information: via flag bits and corresponding positioning information, the test scene data can be stored in frames at the corresponding times and provided to the driving simulation platform, so that the dynamic targets and static environment information are projected onto the environment map synchronously and a three-dimensional test case simulation is generated by the driving simulation platform.
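The trajectory-smoothing step mentioned above can be illustrated with a centered moving average over the projected coordinates. The window-based filter below is an assumed, minimal stand-in for the prior-informed optimization the patent refers to:

```python
def smooth(values, window=3):
    """Centered moving-average smoothing of one trajectory coordinate series.

    The window shrinks near the ends so the output has the same length as
    the input and no frame is dropped from the scene timeline.
    """
    n, half, out = len(values), window // 2, []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out
```

Applying it separately to the x and y series of a target track damps measurement jitter before the track is projected into the simulation map.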
In a preferred embodiment, the road environment simulation platform comprises a Carmaker road environment simulation platform. The Carmaker platform can accurately restore the various environmental information acquired during data collection; the scenes involved in acquisition can be played back during scene restoration, with support for data backtracking and simulation of the various scenes.
According to another aspect of the invention, an integrated method for road data acquisition and simulation scene establishment is provided, which uses the above integrated system to collect road data and establish simulation scenes. Because the integrated method employs the integrated system, it has at least the same advantages as the system.
In a preferred embodiment, the integration method comprises the following steps:
(a) the sensor acquisition unit acquires vehicle environment data;
(b) and the vehicle environment data is input into a data fusion center, target object fusion data is obtained after processing, and the target object fusion data is input into a road environment simulation platform for simulation.
In a preferred embodiment, in step (b), the data fusion center processes the vehicle environment data as follows: multi-source information fusion is performed with an information synthesis algorithm based on D-S (Dempster-Shafer) evidence theory. First, the measurement information collected by each sensor group is time-synchronized; then, semantic recognition is performed on the synchronized multi-source information to obtain the semantic information and behavior information of the targets; finally, the relative motion information of each sensor group's targets is obtained from the distance detection module. The processed sensor measurements are fused with the D-S evidence-theory method: the data obtained by each sensor are input as separate pieces of evidence, and belief intervals and mass functions of the different pieces of evidence in the specific scene are obtained by reasoning. Highly uncertain and fuzzy evidence is combined, similar evidence is matched, the measurements of all sensor acquisition unit components are synthesized to reason over the vehicle data and environment data, belief is assigned across the component measurements, and the fused data are output at the end. A large amount of integrated environmental element information can then be mapped onto a map through a driving simulator to restore the test case scene. By analyzing the captured image data, video data, and point cloud data, the system can identify the various semantic elements contained in the information (e.g., vehicles, pedestrians, non-motor vehicles, obstacles) and determine the motion characteristics of the various road users (including each object's position relative to the collecting vehicle, speed, orientation angle, identification ID, size, etc.).
From this identified information and the collected motion state of the ego vehicle, the motion states of the other road users in the test environment can be determined.
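The evidence-combination step described above rests on Dempster's rule, the core of D-S evidence theory. The sketch below is a minimal illustration of that rule; the hypothesis names and mass values are invented for the example, and the patent's actual belief-assignment scheme is not specified:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset of hypotheses -> mass)
    with Dempster's rule: intersect focal elements, accumulate products,
    and renormalize by 1 - K, where K is the total conflicting mass."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict; evidence cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}
```

For example, if a camera assigns mass 0.7 to {car} and 0.3 to {car, truck}, and a radar assigns 0.6 and 0.4 to the same sets, combining them concentrates 0.88 of the mass on {car}, reflecting the agreement of the two sensors.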
A test experiment was carried out with this information fusion processing method: the root-mean-square errors of the relative distance and relative speed measured by each sensor unit alone, and of the fused measurement computed by the fusion algorithm, were obtained and are summarized in Table 1 and Table 2. The analysis shows that the multi-source data fusion algorithm provided by the invention can effectively combine the strengths of each sensor's measurements and improve the measurement accuracy of the sensor group.
Table 1: relative distance RMS error (table rendered as an image in the original document)
Table 2: relative speed RMS error (table rendered as an image in the original document)
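The RMS-error metric reported in Tables 1 and 2 can be computed as follows. The numeric series in the usage test are illustrative only, not the patent's measured values:

```python
import math

def rmse(estimates, truth):
    """Root-mean-square error of a measurement series against ground truth."""
    return math.sqrt(sum((e - t) ** 2 for e, t in zip(estimates, truth))
                     / len(truth))
```

Comparing the RMSE of a single sensor's range track against that of the fused track is exactly the comparison the tables summarize: the fusion is judged effective when its RMSE is lower than each individual sensor's.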
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features equivalently replaced, and that such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the technical solutions of the embodiments of the present invention.

Claims (10)

1. A road data acquisition and simulation scene establishment integrated system is characterized by comprising a sensor acquisition unit, a data fusion center and a road environment simulation platform which are sequentially connected;
the sensor acquisition unit includes:
the component A comprises a front-view camera and a millimeter wave radar fusion group;
the component B is a laser radar group configured at half vehicle height and comprising at least six 110-degree sub laser radar sensors, the laser radar group being used for 360-degree detection around the vehicle;
the component C is a laser radar group arranged on the roof of the vehicle, comprising three laser radars and used for scanning five directions of the three-dimensional space around the vehicle, namely front, rear, left, right, and above;
and the component D is an IMU and GPS set configured in the middle of the vehicle, used for providing the real-time pose and absolute position of the vehicle.
2. The integrated system of claim 1, wherein in the component a, a front-view camera is mounted on a front windshield of the vehicle for capturing image information data in front of the vehicle and recording video type information;
preferably, in the component a, the millimeter wave radar is mounted on a front bumper of the vehicle and is used for acquiring distance information data between the vehicle and an object in front of the vehicle, sending detection information to the camera, and then performing data pre-fusion with data of the camera;
preferably, the object in front of the vehicle comprises a static obstacle and/or a dynamic obstacle;
preferably, the dynamic barrier comprises at least one of a motor vehicle, a pedestrian, or a non-motor vehicle.
3. The integrated system of claim 1, wherein the sub laser radar sensors in component B have a detection range of 200 to 250 meters;
preferably, the laser radar group in component C comprises: one 32-line lidar and two 16-line lidars;
preferably, the 32-line lidar is disposed at the center of the roof of the vehicle, and the two 16-line lidars are disposed on both sides of the 32-line lidar, respectively.
4. The integrated system of claim 1, wherein the real-time pose provided by component D includes pitch angle, velocity, and angular velocity.
5. The integrated system of claim 1, wherein the data fusion center comprises a time synchronization module, a semantic recognition module, and a distance detection module;
the time synchronization module is used for carrying out information time synchronization processing on the information acquired by the sensor acquisition unit to obtain synchronized data;
the semantic recognition module is used for carrying out semantic recognition on the synchronized data to obtain semantic information of the target object;
the distance detection module is used for carrying out distance detection on the synchronized data to obtain the relative motion information of the target object.
6. The integrated system of claim 5, wherein the road environment simulation platform comprises a vehicle projection to map module, a positioning and trajectory determination module, a static scene generation module, and an automatic driving test case generation module;
the vehicle projection-to-map module is used for projecting semantic information of the target object and relative motion information of the target object to a map, the positioning and track determining module is used for positioning and track determining the target object according to the relative motion information of the target object, and the static scene generating module is used for generating a static scene according to static environment data;
the automatic driving test case generation module is used for generating an automatic driving test case according to semantic information of the target object projected to the map, the positioning and track of the target object and a static scene.
7. The integrated system of any one of claims 1 to 6, wherein the road environment simulation platform comprises a Carmaker road environment simulation platform.
8. An integrated method for road data acquisition and simulated scene establishment, characterized in that the integrated system of any one of claims 1-7 is used for road data acquisition and simulated scene establishment.
9. The integrated process of claim 8, comprising the steps of:
(a) the sensor acquisition unit acquires vehicle environment data;
(b) and the vehicle environment data is input into a data fusion center, target object fusion data is obtained after processing, and the target object fusion data is input into a road environment simulation platform for simulation.
10. The integrated method of claim 9, wherein in step (b), the data fusion center processes the vehicle environment data by: the method comprises the steps of performing multi-source information fusion by using a D-S evidence theory information synthesis algorithm, firstly performing time synchronization processing on measurement information collected by each sensor group, respectively performing semantic information identification on the synchronized multi-source information to obtain semantic information and behavior information of a multi-source information target object, and finally respectively obtaining relative motion information of the target object of each sensor group according to a distance detection module; fusing the processed sensor measurement by using a D-S evidence theory information fusion method, inputting data obtained by each sensor as different evidences, reasoning according to the evidences to obtain a reliability interval and a mass function of the different evidences in a specific scene, combining evidences of high uncertainty evidences and fuzzy evidences, performing synchronous measurement evidence matching on similar evidences, synthesizing the measurement of each sensor acquisition unit component to perform reasoning and synthesis on vehicle data and environment data, performing reliability distribution through the measurement of each sensor acquisition unit component, and finally outputting the fused data.
CN202010990734.4A 2020-09-22 2020-09-22 Road data acquisition and simulation scene establishment integrated system and method Active CN112307594B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010990734.4A CN112307594B (en) 2020-09-22 2020-09-22 Road data acquisition and simulation scene establishment integrated system and method


Publications (2)

Publication Number Publication Date
CN112307594A true CN112307594A (en) 2021-02-02
CN112307594B CN112307594B (en) 2023-03-28

Family

ID=74483291

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010990734.4A Active CN112307594B (en) 2020-09-22 2020-09-22 Road data acquisition and simulation scene establishment integrated system and method

Country Status (1)

Country Link
CN (1) CN112307594B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112948270A (en) * 2021-04-06 2021-06-11 东风小康汽车有限公司重庆分公司 Method, device, equipment and medium for road test simulation analysis of automatic driving vehicle
CN113049267A (en) * 2021-03-16 2021-06-29 同济大学 Physical modeling method for traffic environment fusion perception in-ring VTHIL sensor
CN113139299A (en) * 2021-05-13 2021-07-20 深圳市道通科技股份有限公司 Sensor fusion verification method and device and electronic equipment
CN113704042A (en) * 2021-07-20 2021-11-26 英博超算(南京)科技有限公司 Simulation test equipment and system based on cloud server
CN113703004A (en) * 2021-08-10 2021-11-26 一汽解放汽车有限公司 System and method for detecting running reliability of vehicle-mounted radar and computer equipment
CN113781471A (en) * 2021-09-28 2021-12-10 中国科学技术大学先进技术研究院 Automatic driving test field system and method
CN114167752A (en) * 2021-12-01 2022-03-11 中汽研(天津)汽车工程研究院有限公司 Simulation test method and system device for vehicle active safety system
CN114217104A (en) * 2021-11-24 2022-03-22 深圳市道通智能汽车有限公司 Indoor analog signal generation method and device and analog signal generator
CN115356951A (en) * 2022-10-19 2022-11-18 北京易控智驾科技有限公司 Simulation method, simulation system, storage medium thereof and electronic equipment
CN117521424A (en) * 2024-01-05 2024-02-06 中国电子科技集团公司第十五研究所 Simulation training scene generation method and device
CN113139299B (en) * 2021-05-13 2024-04-26 深圳市道通科技股份有限公司 Sensor fusion verification method and device and electronic equipment

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107862293A (en) * 2017-09-14 2018-03-30 北京航空航天大学 Radar based on confrontation generation network generates colored semantic image system and method
CN107991898A (en) * 2016-10-26 2018-05-04 法乐第(北京)网络科技有限公司 A kind of automatic driving vehicle simulating test device and electronic equipment
US20190011924A1 (en) * 2017-07-07 2019-01-10 Jianxiong Xiao System and method for navigating an autonomous driving vehicle
CN109556615A (en) * 2018-10-10 2019-04-02 吉林大学 The driving map generation method of Multi-sensor Fusion cognition based on automatic Pilot
CN109948523A (en) * 2019-03-18 2019-06-28 中国汽车工程研究院股份有限公司 A kind of object recognition methods and its application based on video Yu millimetre-wave radar data fusion
CN110597711A (en) * 2019-08-26 2019-12-20 湖南大学 Automatic driving test case generation method based on scene and task
CN110779730A (en) * 2019-08-29 2020-02-11 浙江零跑科技有限公司 L3-level automatic driving system testing method based on virtual driving scene vehicle on-ring
CN110796194A (en) * 2019-10-29 2020-02-14 中国人民解放军国防科技大学 Target detection result fusion judgment method for multi-sensor information
CN210129116U (en) * 2019-08-07 2020-03-06 苏州索亚机器人技术有限公司 Be applied to autopilot car of garden scene
CN111160447A (en) * 2019-12-25 2020-05-15 中国汽车技术研究中心有限公司 Multi-sensor perception fusion method of autonomous parking positioning system based on DSmT theory
WO2020103533A1 (en) * 2018-11-20 2020-05-28 中车株洲电力机车有限公司 Track and road obstacle detecting method
CN210882093U (en) * 2019-09-16 2020-06-30 郑州宇通客车股份有限公司 Automatic driving vehicle environment perception system and automatic driving vehicle



Also Published As

Publication number Publication date
CN112307594B (en) 2023-03-28

Similar Documents

Publication Publication Date Title
CN112307594B (en) Road data acquisition and simulation scene establishment integrated system and method
JP7040867B2 (en) System, method and program
CN105946853B (en) The system and method for long range automatic parking based on Multi-sensor Fusion
CN109975035B (en) Whole-vehicle-level in-loop test bench system of L3-level automatic driving vehicle
CN109583415B (en) Traffic light detection and identification method based on fusion of laser radar and camera
CN112700470B (en) Target detection and track extraction method based on traffic video stream
CN110031238B (en) Test method for whole-vehicle-level in-loop test bench of L3-level automatic driving vehicle
CN107798699A (en) Depth map estimation is carried out with stereo-picture
CN106240565A (en) Collision alleviates and hides
CN113340325B (en) System, method and medium for verifying vehicle-road cooperative roadside perception fusion precision
CN112819968B (en) Test method and device for automatic driving vehicle based on mixed reality
CN112693466A (en) System and method for evaluating performance of vehicle environment perception sensor
CN112116031B (en) Target fusion method, system, vehicle and storage medium based on road side equipment
EP3770549B1 (en) Information processing device, movement device, method, and program
Ruder et al. Highway lane change assistant
CN110599853B (en) Intelligent teaching system and method for driving school
CN111445764A (en) Intelligent driving school system for driver road test training and working method
CN115257784A (en) Vehicle-road cooperative system based on 4D millimeter wave radar
WO2004006207A1 (en) Automatic guide apparatus for public transport
DE102023111485A1 (en) TRACKING SEGMENT CLEANUP OF TRACKED OBJECTS
CN115236673A (en) Multi-radar fusion sensing system and method for large vehicle
CN111445725A (en) Blind area intelligent warning device and algorithm for meeting scene
CN114841188A (en) Vehicle fusion positioning method and device based on two-dimensional code
CN116337101A (en) Unmanned environment sensing and navigation system based on digital twin technology
WO2022113196A1 (en) Traffic event reproduction system, server, traffic event reproduction method, and non-transitory computer-readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant