CN114817600A - Method, device and platform for generating automatic driving simulation test scene library - Google Patents
- Publication number
- CN114817600A (application CN202110126331.XA)
- Authority
- CN
- China
- Prior art keywords
- dynamic
- scene
- data
- dynamic scene
- library
- Prior art date
- Legal status
- Pending
Classifications
- G—PHYSICS
  - G01—MEASURING; TESTING
    - G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
      - G01C21/165—Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation, combined with non-inertial navigation instruments
    - G01M—TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
      - G01M17/007—Testing of vehicles; Wheeled or endless-tracked vehicles
  - G06—COMPUTING; CALCULATING OR COUNTING
    - G06F—ELECTRIC DIGITAL DATA PROCESSING
      - G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
        - G06F16/16—File or folder operations, e.g. details of user interfaces specifically adapted to file systems
        - G06F16/51—Indexing; Data structures therefor; Storage structures (still image data)
        - G06F16/535—Querying; Filtering based on additional data, e.g. user or group profiles (still image data)
        - G06F16/55—Clustering; Classification (still image data)
        - G06F16/71—Indexing; Data structures therefor; Storage structures (video data)
        - G06F16/735—Querying; Filtering based on additional data, e.g. user or group profiles (video data)
        - G06F16/75—Clustering; Classification (video data)
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Multimedia (AREA)
- Software Systems (AREA)
- Computational Linguistics (AREA)
- Human Computer Interaction (AREA)
- Automation & Control Theory (AREA)
- Traffic Control Systems (AREA)
Abstract
The embodiments of the present application provide a method, a device and a platform for generating an automatic driving simulation test scene library. The method includes the following steps: acquiring high-precision map data and generating static scene data according to the high-precision map data; acquiring dynamic scene data and segmenting a dynamic scene according to the dynamic scene data to obtain segmented dynamic scene data; identifying dynamic features of the segmented dynamic scene data; and generating a dynamic scene library according to the static scene data and the dynamic features. The method provided by the embodiments can overcome the limited applicability, low precision and high data-processing difficulty of automatic driving simulation test scenes established in the prior art.
Description
Technical Field
The embodiment of the application relates to the technical field of automatic driving tests, in particular to a method, a device and a platform for generating an automatic driving simulation test scene library.
Background
With the development of automatic driving technology, testing automatic driving vehicles has become an important link in ensuring their safety, and the decision-planning system is the core of automatic driving. To improve the safety and efficiency of testing and to reduce its cost, automatic driving vehicles are generally tested in simulation, which requires test scenes to be established.
Currently, a simulation test scene can be established through the following schemes. Various dynamic scenes can be built manually with a simulation platform, but such scenes are simulated rather than based on actual roads, so the test precision is low. A realistic image and a simulated movement pattern can be synthesized automatically from scanned street views and real trajectories, and a public data set is provided based on this technique, in which many scenes are captured in cities with dense traffic and complex road conditions; however, the generated data set is only suitable for a specific platform and cannot be used on other platforms. Data can also be acquired with millimeter-wave radar, a real-time collision avoidance system (Mobileye), a camera and a Global Navigation Satellite System (GNSS), and dynamic scenes can be generated through scene fusion, extraction, marking, analysis and establishment.
Therefore, the automatic driving simulation test scenes established in the prior art suffer from limited applicability, low precision and high data-processing difficulty.
Disclosure of Invention
The embodiments of the present application provide a method, a device and a platform for generating an automatic driving simulation test scene library, so as to solve the problems of limited applicability, low precision and high data-processing difficulty of the automatic driving simulation test scenes established in the prior art.
In a first aspect, an embodiment of the present application provides a method for generating an automatic driving simulation test scenario library, including:
acquiring high-precision map data, and generating static scene data according to the high-precision map data;
acquiring dynamic scene data, and segmenting a dynamic scene according to the dynamic scene data to obtain segmented dynamic scene data;
identifying dynamic characteristics of the segmented dynamic scene data;
and generating a dynamic scene library according to the static scene data and the dynamic characteristics.
In a possible design, the acquiring dynamic scene data and segmenting the dynamic scene according to the dynamic scene data to obtain segmented dynamic scene data includes:
acquiring dynamic scene data, wherein the dynamic scene data comprises video or image data;
and screening the dynamic scene according to the video or image data to obtain a dynamic scene video segment or an image frame segment matched with a preset characteristic category, wherein the dynamic scene video segment or the image frame segment is the dynamic scene data after being segmented.
In one possible design, the identifying the dynamic features of the segmented dynamic scene data includes:
classifying and positioning the dynamic target within the field of view in the dynamic scene video segment or image frame segment, and determining the dynamic characteristics, wherein the dynamic characteristics include the category of the dynamic target and the position information of the dynamic target.
In one possible design, the classifying and locating the dynamic object in the visual field range in the video segment or the image frame segment of the dynamic scene, and determining the dynamic feature includes:
determining the category of the dynamic target and the position information of the dynamic target according to the video segment or the image frame segment of the dynamic scene through a dynamic feature identification model;
the dynamic feature recognition model is obtained by training historical dynamic scene video segments or image frame segments and corresponding historical dynamic target data, wherein the historical dynamic target data comprises the types of the historical dynamic targets and the position information of the historical dynamic targets.
In one possible design, after the determining the dynamic characteristic, the method further includes:
and marking the dynamic target, and tracking the dynamic target according to the marked identifier to obtain a target track for analyzing the behavior of the dynamic target.
In one possible design, the generating a dynamic scene library according to the static scene data and the dynamic feature includes:
according to the dynamic characteristics, marking a classification label of a dynamic scene on the video segment or the image frame segment of the dynamic scene;
editing the dynamic scene video segment or image frame segment according to the static scene data and the marked classification label to obtain a dynamic scene file;
and storing the dynamic scene file in a corresponding category library in the established scene libraries to generate a dynamic scene library, wherein the dynamic scene library comprises the category library corresponding to the preset feature categories, and each category library comprises at least one dynamic scene.
In one possible design, the generating static scene data from the high precision map data includes:
analyzing the high-precision map data to generate a first CSV file and a second CSV file, wherein the first CSV file is used for describing the connection relation of each lane, and the second CSV file is used for describing the corresponding relation between the road boundary and the lane;
and generating the static scene data according to the first CSV file and the second CSV file.
In one possible design, the generating the static scene data according to the first CSV file and the second CSV file includes:
merging the first CSV file and the second CSV file to generate a TXT file containing a road connection relation, wherein the TXT file containing the road connection relation is used for describing the front-back connection relation of a road boundary;
and generating the static scene data according to the TXT file.
In a second aspect, an embodiment of the present application provides an automatic driving simulation test scenario library generating device, including:
the acquisition module is used for acquiring high-precision map data and generating static scene data according to the high-precision map data;
the segmentation module is used for acquiring dynamic scene data and segmenting a dynamic scene according to the dynamic scene data to obtain segmented dynamic scene data;
the identification module is used for identifying the dynamic characteristics of the segmented dynamic scene data;
and the scene generation module is used for generating a dynamic scene library according to the static scene data and the dynamic characteristics.
In a third aspect, an embodiment of the present application provides an automated driving simulation test platform, where the automated driving simulation test platform uses a dynamic scene library generated by the method according to the first aspect and various possible designs of the first aspect.
According to the method, device and platform for generating the automatic driving simulation test scene library, high-precision map data are first acquired and static scene data are generated from them; dynamic scene data are then collected and the dynamic scene is segmented according to the dynamic scene data to obtain segmented dynamic scene data; the dynamic features of the segmented dynamic scene data are identified, and a dynamic scene library is generated from the static scene data and the dynamic features. The production of the dynamic scene library is thus completed by performing scene segmentation, dynamic feature recognition and other steps on the collected dynamic scene data in combination with the collected high-precision map, without a complex data-processing process. Dynamic scene production therefore becomes simple and automated, its efficiency and precision are greatly improved, manual workload is saved, and the generated dynamic scene library can provide scene library data for different simulation test platforms, which solves the limitations of the prior art.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to these drawings without inventive exercise.
Fig. 1 is a schematic diagram of a method for generating an autopilot simulation test scenario library according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a method for generating an autopilot simulation test scenario library according to an embodiment of the present application;
fig. 3 is a schematic diagram of a dynamic scene library provided in an embodiment of the present application;
FIG. 4 is a schematic diagram of neural network detection for dynamic feature recognition according to the present application;
FIG. 5 is a schematic diagram of neural network training and testing for dynamic feature recognition according to the present application;
FIG. 6 is a schematic diagram of an autopilot simulation test scenario library generation system provided herein;
fig. 7 is a schematic structural diagram of an automatic driving simulation test scenario library generation apparatus according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an automatic driving simulation test scenario library generation device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the above-described drawings (if any) are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
A simulation test scene can be established through the following schemes. Various dynamic scenes can be built manually with a simulation platform, but such scenes are simulated rather than based on actual roads, so the test precision is low. A realistic image and a simulated movement pattern can be synthesized automatically from scanned street views and real trajectories, and a public data set is provided based on this technique, in which many scenes are captured in cities with dense traffic and complex road conditions; however, the generated data set is only suitable for a specific platform and cannot be used on other platforms. Data can also be acquired with millimeter-wave radar, a real-time collision avoidance system (Mobileye), a camera and a Global Navigation Satellite System (GNSS), and dynamic scenes can be generated through scene fusion, extraction, marking, analysis and establishment. Therefore, the automatic driving simulation test scenes established in the prior art suffer from limited applicability, low precision and high data-processing difficulty.
In view of these problems, the technical idea of the present application is to complete the production of dynamic scenes through steps such as scene segmentation and dynamic feature recognition, based on high-precision map data and the dynamic data of vehicles on the road, without a complex data-processing process. Dynamic scene production thus becomes simple and automated, its efficiency and precision are greatly improved, and manual workload is saved; at the same time, the generated dynamic scene library can provide scene library data for different simulation test platforms, which solves the limitations of the prior art.
The technical solution of the present application will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 1 is a schematic diagram of a method for generating an automatic driving simulation test scene library according to an embodiment of the present application. In practical applications, scene test cases are mainly reproduced through a virtual simulation environment and a tool chain, so the establishment of a virtual scene database is the key bridge connecting scene data and scene applications. Scene test cases require standards so that scenes can be exchanged between simulation environments. The scope of the scene library should cover various typical scenes, corner-case scenes, accident scenes, and the like.
Referring to fig. 1, high-precision map (HADMAP) data are acquired and automatically converted into the static scenes required by automatic driving simulation; data are acquired with a binocular device, such as a binocular camera, in cooperation with GNSS positioning to obtain dynamic data of vehicles on the road; and the production of the dynamic scene is then completed through steps such as scene segmentation, dynamic feature recognition and dynamic scene editing. This makes the production of dynamic scenes simple and automated, greatly improves production efficiency, and saves manual workload.
Specifically, static scene processing is performed on the high-precision map data to obtain OpenDRIVE data, the OpenDRIVE data are then combined with dynamic scene editing to generate OpenScenario data, and a dynamic scene library is generated based on the established scene library. OpenScenario is an open file format used to describe dynamic content in driving simulation applications. Its main purpose is to describe complex, synchronized traffic participants, involving multiple entities such as vehicles, pedestrians and other road users. The description of a traffic participant may be based on driver behavior (e.g., performing a lane change) or on a trajectory (e.g., obtained from a recorded driving maneuver). Other elements, such as the description of the ego vehicle, driver appearance, pedestrians, and traffic and environmental conditions, are also included in the standard.
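As an illustration of the kind of output involved, the following minimal Python sketch emits an OpenScenario-style dynamic scene file with the standard library; the element and attribute names (FileHeader, Entities, Storyboard, and so on) follow the general structure of the format but are illustrative only and are not validated against the official schema.

```python
# Minimal sketch: emit an OpenScenario-style dynamic scene file with the
# standard library. Element and attribute names are illustrative only and
# are not validated against the official OpenScenario schema.
import xml.etree.ElementTree as ET

def write_dynamic_scene(path, ego_name="Ego", actors=("Vehicle_1",)):
    root = ET.Element("OpenSCENARIO")
    ET.SubElement(root, "FileHeader", description="demo dynamic scene", author="scene-library")
    entities = ET.SubElement(root, "Entities")
    ET.SubElement(entities, "ScenarioObject", name=ego_name)
    for name in actors:
        ET.SubElement(entities, "ScenarioObject", name=name)
    storyboard = ET.SubElement(root, "Storyboard")
    ET.SubElement(storyboard, "Init")   # initial positions and speeds would go here
    ET.SubElement(storyboard, "Story")  # maneuvers (lane change, cut-in, ...) would go here
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)

write_dynamic_scene("overtake_scene.xosc")
```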
Therefore, the method for generating the automatic driving simulation test scene library establishes an automatic driving simulation test scene library that provides abundant test scenes for test and verification environments such as model-in-the-loop, software-in-the-loop, hardware-in-the-loop and driver-in-the-loop environments for automatic driving, supports services such as automatic driving virtual road testing and vehicle-level product inspection and detection, forms an automatic driving simulation test scene standard suitable for actual road conditions, and can cover various roads and road conditions, including expressways, intercity roads, urban roads, campus roads, underground parking lots, and the like.
Specifically, the method for generating the automatic driving simulation test scene library can be used to construct a dynamic scene library for automatic driving simulation testing, and the produced scene library data can be provided to different types of customers and to different simulation test platforms. It can provide training-algorithm services for automobile manufacturers, the automatic driving industry and the like, and can also provide scene restoration and control services as well as evaluation services.
Fig. 2 is a schematic flowchart of a method for generating an autopilot simulation test scenario library according to an embodiment of the present application, where an execution subject of the method may be an autopilot simulation test scenario library generation device or an autopilot simulation test scenario library generation system; the method can comprise the following steps:
s201, obtaining high-precision map data, and generating static scene data according to the high-precision map data.
In this embodiment, high-precision map data are collected and read into the memory of a system (such as the automatic driving simulation test scene library generation device) according to a standard specification; high-precision map elements such as road, link, lane, node and object are distinguished, and these elements are then redefined in a selectable storage format. The conversion of high-precision map data into static scene data is completed automatically, without manual operation.
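A possible in-memory "redefinition" of the parsed map elements is sketched below; the element types (road, lane, object) follow the text above, while all field names are hypothetical and only illustrate one selectable storage format.

```python
# Sketch of redefining parsed high-precision map elements in a chosen
# in-memory storage format. The element types follow the text (road, link,
# lane, node, object); all field names here are hypothetical.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Lane:
    lane_id: str
    centerline: List[Tuple[float, float]]   # discrete reference-line points
    predecessors: List[str] = field(default_factory=list)
    successors: List[str] = field(default_factory=list)

@dataclass
class Road:
    road_id: str
    lanes: List[Lane]
    boundary: List[Tuple[float, float]]     # road boundary polyline

@dataclass
class StaticScene:
    roads: List[Road]
    objects: List[dict]                     # signs, signals, structures, ...
```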
S202, collecting dynamic scene data, and segmenting the dynamic scene according to the dynamic scene data to obtain segmented dynamic scene data.
In this embodiment, binocular equipment may be used to acquire dynamic scene data, and then the dynamic scene may be segmented based on the dynamic scene data to obtain the segmented dynamic scene data.
In a possible design, the acquiring dynamic scene data and segmenting the dynamic scene according to the dynamic scene data to obtain segmented dynamic scene data may be implemented by the following steps:
step a1, acquiring dynamic scene data, the dynamic scene data comprising video or image data.
Step a2, screening a dynamic scene according to the video or image data to obtain a dynamic scene video segment or an image frame segment matched with a preset feature type, wherein the dynamic scene video segment or the image frame segment is the dynamic scene data after being segmented.
In this embodiment, dynamic scene data are acquired through a binocular device that integrates a GNSS antenna, an inertial navigation unit and a camera lens, and the dynamic scene data include video or image data. The positioning precision of the binocular device can reach 30 cm in open areas, with a real-time positioning precision of 1-1.5 m; it supports online transmission and background transmission and adopts a multi-data acquisition scheme, which facilitates more application scenarios and enables detection, three-dimensional scene restoration and construction of road element information in the cloud. The binocular device may include a camera lens, a SIM card slot, a GNSS antenna port, an IMU, an SD card, HDMI, Micro-USB, a network port and a DC power supply: the camera lens acquires images; the SIM card slot holds a 4G card for data transmission; the GNSS antenna port receives GPS signals; the IMU is used for integrated navigation and positioning; the SD card stores data; HDMI displays the acquired images; Micro-USB is used for debugging the device hardware; the network port transmits data over Ethernet; and the DC power supply powers the device.
The binocular device can accurately identify and locate targets such as vehicles, pedestrians, lane lines and traffic signs on the road ahead, and can store video, pictures and structured information locally. The structured information here may include positioning data and the like.
Specifically, scene screening and discrimination are performed on the collected driving-record video or image data, i.e., the dynamic scene data, and the dynamic scene video segments or image frame segments that match the feature categories are intercepted to realize dynamic scene segmentation.
The criteria for scene segmentation depend on the production requirements of the dynamic scenes, and segmentation needs to be performed according to the type of dynamic scene to be extracted. For example, an overtaking scene of the ego vehicle can be segmented, and a sudden-stop scene of the preceding vehicle can also be segmented; this is not specifically limited herein. A minimal sketch of such category-driven segmentation is given after this paragraph.
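The sketch below assumes the recording has already been decoded into frames; the matches_category predicate stands in for whatever rule or classifier the production pipeline actually applies for a given scene type (e.g. overtaking, preceding-vehicle sudden stop).

```python
# Sketch of dynamic scene segmentation: scan recorded frames and keep the
# contiguous spans that match a preset feature category. `matches_category`
# is a placeholder for the real screening rule or classifier.
from typing import Callable, List, Sequence, Tuple

def segment_scenes(frames: Sequence, matches_category: Callable[[object], bool],
                   min_len: int = 10) -> List[Tuple[int, int]]:
    """Return (start, end) frame-index pairs of segments matching the category."""
    segments, start = [], None
    for i, frame in enumerate(frames):
        if matches_category(frame):
            if start is None:
                start = i
        elif start is not None:
            if i - start >= min_len:
                segments.append((start, i))
            start = None
    if start is not None and len(frames) - start >= min_len:
        segments.append((start, len(frames)))
    return segments
```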
And S203, identifying the dynamic characteristics of the segmented dynamic scene data.
In this embodiment, a dynamic feature recognition model may be adopted to recognize the dynamic features of the segmented dynamic scene data. Namely, each frame of image corresponding to the video segment or the image frame segment of the dynamic scene is input into the dynamic feature identification model, and the dynamic feature corresponding to the image is output.
And S204, generating a dynamic scene library according to the static scene data and the dynamic characteristics.
Before the dynamic scene library is generated, a classification label is defined for each intercepted dynamic scene video segment or image frame segment, so that the generated scene can be stored in the dynamic scene library. The label identifies which dynamic scene category the current video segment or image frame segment belongs to, so that it can be added to the corresponding classification when the dynamic scene is generated.
In this embodiment, the high-precision map data may be made into static scene data, then each dynamic scene is generated by using the static scene data and the identified dynamic features of the divided dynamic scene data, and each dynamic scene is stored in a category library of a pre-established scene library.
The dynamic scene may refer to parts such as management and control, traffic flow and the like with dynamic characteristics in simulation, is a key component of a simulation test scene, and mainly may include: traffic management control simulation, motor vehicle simulation, pedestrian and non-motor vehicle simulation, and the like.
In practical application, referring to fig. 3, fig. 3 is a schematic diagram of a dynamic scene library provided in the embodiment of the present application. The dynamic scenes generated in the actual road environment are classified to form a dynamic scene library shown in fig. 3, which comprises 4 primary classifications, 9 secondary classifications and at least 147 fine classifications, and various dynamic scenes can be formed by arbitrary combination according to the fine classifications. It should be noted that the dynamic scenario library shown in fig. 3 is merely an example, and is not limited in this regard.
The 4 primary classifications may include environment, event, vehicle behavior and digital signal. The 9 secondary classifications may include weather, illumination, time and road surface environment in the environment class; traffic event in the event class; own-vehicle behavior, preceding-vehicle behavior and parking in the vehicle behavior class; and digital signal in the digital signal class. The at least 147 fine classifications may include: in the weather category, rain, snow, sunny, cloudy, freezing rain, sand, dust, fog, wind, hail and dryness; in the illumination category, solar altitude and illumination intensity; in the time category, date and time of day; in the road surface category, accumulated water, snow, ice, sandy road surfaces, potholed road surfaces, rutted road surfaces, wavy road surfaces and friction coefficient; in the traffic event category, accidents, obstacles, disasters, equipment failures, congestion, vehicle-type restrictions, lane traffic restrictions, road traffic restrictions, pedestrians, non-motorized vehicles, signal lights, road construction and other moving objects; in the vehicle behavior categories (own vehicle and preceding vehicle), straight driving, sudden cut-in, turning, lane change, left turn, right turn, merging, overtaking, pulling over, braking, lane keeping and constant-speed cruising; in the parking category, parallel parking, perpendicular parking and angled parking; and in the digital signal category, V2X, digital maps, non-digital maps, and the like.
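One possible way to hold this hierarchy in code, so that a labelled segment can be filed into the matching category library, is a nested mapping such as the sketch below; only a handful of the fine classifications from fig. 3 are listed, and the identifier names are illustrative.

```python
# Sketch of the Fig. 3 category hierarchy as a nested mapping, so a labelled
# scene segment can be filed into the matching category library. Only a few
# of the fine classifications are listed here for brevity.
SCENE_TAXONOMY = {
    "environment": {
        "weather": ["rain", "snow", "sunny", "fog", "hail"],
        "illumination": ["solar_altitude", "illumination_intensity"],
        "road_surface": ["standing_water", "ice", "pothole", "friction_coefficient"],
    },
    "event": {
        "traffic_event": ["accident", "obstacle", "congestion", "road_construction"],
    },
    "vehicle_behavior": {
        "own_vehicle": ["lane_change", "overtaking", "braking", "cruise"],
        "preceding_vehicle": ["sudden_cut_in", "sudden_stop", "turning"],
        "parking": ["parallel", "perpendicular", "angled"],
    },
    "digital_signal": {
        "digital_signal": ["v2x", "digital_map", "non_digital_map"],
    },
}

def category_path(primary: str, secondary: str, fine: str) -> str:
    """Return the category-library path a labelled scene file would be stored under."""
    assert fine in SCENE_TAXONOMY[primary][secondary]
    return f"{primary}/{secondary}/{fine}"
```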
According to the method for generating the automatic driving simulation test scene library, high-precision map data are first acquired and static scene data are generated from them; dynamic scene data are then collected and the dynamic scene is segmented according to the dynamic scene data to obtain segmented dynamic scene data; the dynamic features of the segmented dynamic scene data are identified, and a dynamic scene library is generated from the static scene data and the dynamic features. The production of the dynamic scene library is thus completed by performing scene segmentation, dynamic feature recognition and other steps on the collected dynamic scene data in combination with the collected high-precision map, without a complex data-processing process. Dynamic scene production therefore becomes simple and automated, its efficiency and precision are greatly improved, manual workload is saved, and the generated dynamic scene library can provide scene library data for different simulation test platforms, which solves the limitations of the prior art.
In a possible design, the present embodiment describes how to identify the dynamic features of the segmented dynamic scene data in detail on the basis of the above embodiments. The identifying the dynamic feature of the segmented dynamic scene data may include:
classifying and positioning the dynamic target within the field of view in the dynamic scene video segment or image frame segment, and determining the dynamic characteristics, wherein the dynamic characteristics include the category of the dynamic target and the position information of the dynamic target.
After the dynamic characteristics are determined, the method for generating the automatic driving simulation test scene library may further include: and marking the dynamic target, and tracking the dynamic target according to the marked identifier to obtain a target track for analyzing the behavior of the dynamic target.
In this embodiment, dynamic feature extraction mainly locates and classifies the dynamic targets (vehicles and pedestrians) within the field of view of a dynamic scene video segment or image frame segment, and determines the specific category and position of each dynamic target. Each target is then given a unique ID, i.e., an identifier, its trajectory is tracked, and the behavior of the surrounding vehicle is defined. The surrounding vehicle here may be a dynamic target.
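The following sketch illustrates the ID-assignment and tracking idea with a simple greedy IoU matcher; production systems typically use a stronger tracker (for example one with motion prediction), so this is only a minimal example under that assumption.

```python
# Sketch of assigning each detected dynamic target a unique ID and tracking
# it across frames with greedy IoU matching.
from itertools import count

def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

class SimpleTracker:
    def __init__(self, iou_threshold=0.3):
        self.next_id = count(1)
        self.tracks = {}            # track_id -> last box
        self.iou_threshold = iou_threshold

    def update(self, boxes):
        """Return {track_id: box}; unmatched boxes get fresh IDs."""
        assigned, used = {}, set()
        for box in boxes:
            best_id, best_iou = None, self.iou_threshold
            for tid, prev in self.tracks.items():
                if tid in used:
                    continue
                score = iou(box, prev)
                if score > best_iou:
                    best_id, best_iou = tid, score
            if best_id is None:
                best_id = next(self.next_id)
            assigned[best_id] = box
            used.add(best_id)
        self.tracks = assigned
        return assigned
```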
In one possible design, how to classify and locate the dynamic objects and determine the dynamic features can be implemented by the following steps:
and determining the category of the dynamic target and the position information of the dynamic target according to the video segment or the image frame segment of the dynamic scene through a dynamic feature identification model.
The dynamic feature recognition model is obtained by training historical dynamic scene video segments or image frame segments and corresponding historical dynamic target data, wherein the historical dynamic target data comprises the types of the historical dynamic targets and the position information of the historical dynamic targets.
In this embodiment, referring to fig. 4, fig. 4 is a schematic diagram of neural network detection for dynamic feature recognition according to the present application. A neural network structure is built by a deep-learning method and trained to output the position information BBox and category information Cls of a target. The neural network model built here can serve as the dynamic feature recognition model. During construction, the Input module of the network structure, the Backbone module for extracting feature information, the Neck module for fusing feature information from different layers of the network, and the final output Head module of the network need to be determined. Here, the Backbone is the feature-extraction network; the Head is the part that produces the network output, making predictions from the previously extracted features; and the Neck is placed between the Backbone and the Head to make better use of the features extracted by the Backbone.
Specifically, referring to fig. 5, fig. 5 is a schematic diagram of training and testing a neural network for dynamic feature recognition according to the present application. The specific training and testing process comprises the following steps:
A ground-truth data set for training is prepared, such as historical dynamic scene video segments or image frame segments and the corresponding historical dynamic target data, and the ground-truth data are divided into training samples and test samples: the training samples are used to train the dynamic target recognition model (i.e., the dynamic feature recognition model), and the test samples are used to test the effect of the final model. The parameters required for each stage of model training are configured. Once the data and parameters are ready, the designed network can be used to train the dynamic target recognition model. Illustratively, in experiments on a machine with a GTX-1070 GPU, forward inference of the model can reach 25 frames per second, with a recognition accuracy of about 90%.
Specifically, dynamic target data are used as training samples, a detector is trained through feature selection and extraction, and the detector outputs the position information BBox and category information Cls; an input image is used as a test sample, feature selection and extraction are carried out through a scanning window, the image is fed into the detector, and the position information BBox and category information Cls corresponding to the image are output. The training samples are used to train the neural network model, and the test samples are used to test the network model.
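As a hedged stand-in for the patent's custom Input/Backbone/Neck/Head network, the sketch below shows the same train/test flow with a torchvision detector; the dataset wiring is omitted, and train_loader is assumed to yield (images, targets) pairs whose targets carry "boxes" and "labels", corresponding to BBox and Cls.

```python
# Sketch of the train/test flow described above, using a torchvision detector
# as a stand-in for the custom network structure described in the text.
import torch
import torchvision

def build_detector(num_classes: int):
    # Faster R-CNN is only illustrative of "a detector"; the patent builds
    # its own Input/Backbone/Neck/Head structure.
    return torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=num_classes)

def train_one_epoch(model, train_loader, optimizer, device="cuda"):
    model.train()
    for images, targets in train_loader:
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        losses = model(images, targets)          # dict of detection losses
        loss = sum(losses.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

@torch.no_grad()
def detect(model, images, device="cuda"):
    model.eval()
    outputs = model([img.to(device) for img in images])
    # each output holds "boxes" (BBox) and "labels" (Cls) plus "scores"
    return outputs
```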
In a possible design, the present embodiment describes in detail how to generate a dynamic scene library according to the static scene data and the dynamic characteristics based on the above embodiments. The generating of the dynamic scene library according to the static scene data and the dynamic characteristics can be realized by the following steps:
step b1, marking the classification label of the dynamic scene for the dynamic scene video band or image frame band according to the dynamic characteristics.
Step b2, editing the dynamic scene video segment or image frame segment according to the static scene data and the marked classification label to obtain a dynamic scene file.
Step b3, storing the dynamic scene file in a corresponding category library in the established scene libraries to generate a dynamic scene library, wherein the dynamic scene library comprises a category library corresponding to the preset feature categories, and each category library comprises at least one dynamic scene.
In this embodiment, a classification tag of a dynamic scene is defined for an intercepted dynamic scene video segment or image frame segment, so that the generated scene is stored in a dynamic scene library. And after converting the high-precision map data into static scene data, editing the dynamic scene video segment or the dynamic scene image frame segment through the static scene data and the marked classification label to generate a dynamic scene file.
Specifically, the dynamic scene is edited according to the input information of the foregoing steps, such as the static scene data and the marked classification labels, to form an executable dynamic scene XML file, i.e., the dynamic scene file. The dynamic scene file is stored in the corresponding category library of the established scene library to generate the dynamic scene library.
In the dynamic scene production process, the main influencing factors may include the ego vehicle trajectory coordinates and speed point set, and the surrounding vehicle trajectory coordinates and speed point sets. Because of precision issues, the positioning coordinates obtained from dynamic target recognition and from the Real-Time Kinematic (RTK) trajectory are often difficult to match with the static base map when a dynamic scene is generated, so the trajectory coordinate point set and the speed point set need to be processed before visual editing of the dynamic scene, so that the vehicle trajectory matches the static base map.
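A minimal sketch of such pre-processing is given below: each raw coordinate from dynamic target recognition or the RTK trajectory is snapped to the nearest lane centerline point so that the track aligns with the static base map. Real pipelines may interpolate along lane segments or edit interactively; this only shows the basic idea, and the point lists are assumed inputs.

```python
# Sketch of pre-processing a trajectory so it matches the static base map:
# each raw RTK / recognition coordinate is snapped to the nearest point on
# a lane centerline of the static scene.
import math
from typing import List, Tuple

Point = Tuple[float, float]

def snap_trajectory(raw_track: List[Point], centerline: List[Point]) -> List[Point]:
    def nearest(p: Point) -> Point:
        return min(centerline, key=lambda c: math.hypot(c[0] - p[0], c[1] - p[1]))
    return [nearest(p) for p in raw_track]

# The speed point set can stay aligned with the snapped positions by index,
# e.g. list(zip(snap_trajectory(track, lane_centerline), speeds)).
```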
The process of generating static scene data can be realized by the following steps:
and d1, analyzing the mif format data of the high-precision map data according to a reference line, a structure and a lane style respectively according to the high-precision map data, and generating TXT files corresponding to the reference line, the structure and the lane style respectively, wherein the TXT file corresponding to the reference line is used for describing lane and lane attributes, the TXT file corresponding to the structure is used for describing attributes, a lane to which the structure belongs and position information, and the TXT file corresponding to the lane style is used for describing style information of the lane.
D2, analyzing the high-precision map data to generate a first CSV file and a second CSV file, wherein the first CSV file is used for describing the connection relation of each lane, and the second CSV file is used for describing the corresponding relation between the road boundary and the lane.
And d3, merging the first CSV file and the second CSV file to generate a TXT file containing a road connection relationship, where the TXT file containing the road connection relationship is used to describe a front-back connection relationship of a road boundary.
And d4, converting the data in the TXT files respectively corresponding to the reference line, the structure and the lane style and the TXT files containing the road connection relationship into XML files, wherein the XML files are used for describing road elements and connection relationship.
Step d5, converting the XML file into a compatible XODR file, where the XODR file contains the static scene data, and the static scene data at least includes: the method comprises the following steps of lane and lane connection relation description, elevation description, structure description, signal lamp description, reference line description, road type description and road intersection and connection relation description between the intersection and the lane.
Specifically, (1) the mif-format data are parsed and decomposed according to link (reference line), structure and lane style, and converted into TXT files corresponding to the link, the structure and the lane style respectively. The TXT file of each link describes the lanes and lane attributes; the TXT file of the structure describes the attributes, the lane to which the structure belongs, its position, and so on; and the lane-style TXT file describes the style information of a lane, for example, whether a lane line is a double yellow line, a dotted line, a solid line, or the like. Meanwhile, two CSV files need to be generated: one CSV file (i.e., the first CSV file) is used to describe the connection relationship of each lane, and the other CSV file (the second CSV file) is used to describe the correspondence between the road boundaries and the lanes.
(2) The two CSV files are merged to generate a road connection relationship TXT file, which describes the front-back connection relationship of the road boundaries.
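A possible implementation of this merge step with pandas is sketched below; all file and column names (lane_id, boundary_id, successor_lane_id) are hypothetical, since the actual CSV schemas are not given in the text.

```python
# Sketch of step (2): merge the lane-connection CSV with the boundary-to-lane
# CSV and write the road boundary front/back connection relation to a TXT file.
# All column and file names here are hypothetical.
import pandas as pd

lane_conn = pd.read_csv("lane_connection.csv")      # first CSV: lane_id -> successor_lane_id
boundary = pd.read_csv("boundary_to_lane.csv")      # second CSV: boundary_id <-> lane_id

merged = lane_conn.merge(boundary, on="lane_id")
merged = merged.merge(
    boundary.rename(columns={"lane_id": "successor_lane_id",
                             "boundary_id": "successor_boundary_id"}),
    on="successor_lane_id",
)

with open("road_connection.txt", "w", encoding="utf-8") as f:
    for row in merged.itertuples(index=False):
        f.write(f"{row.boundary_id} -> {row.successor_boundary_id}\n")
```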
(3) The data are then comprehensively converted into XML, in which all road elements and connection relationships are described, including the reference lines, the connection relationships between lanes and lane segments, the structures, and the connection relationships between intersections and lanes, and the like.
The XML nodes are configured as follows:
- Road
  - Planview
  - Lanes
    - Section
  - Objects
- Junction
  - Link
(4) The XML is converted into XODR files compatible with multiple test platforms; the discrete reference-line points in the XML are converted into the vector line description used in the XODR files, in which a reference line is described by the coordinates of its starting point, its length and its heading. The converted XODR file contains the static scene data and has a clearer structure, including: descriptions of lanes and lane connection relationships, elevation, structures, signal lights, reference lines, road types, and road intersections together with the connection relationships between intersections and lanes.
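The conversion of discrete reference-line points into the vector (start point, heading, length) description can be sketched as follows, assuming straight-line geometry records only; curvature fitting and the full XODR planView syntax are omitted.

```python
# Sketch of step (4): turn discrete reference-line points into a vector
# (start point, heading, length) description, matching straight-line
# geometry records in the XODR planView. Curvature fitting is omitted.
import math
from typing import List, Tuple

def polyline_to_geometries(points: List[Tuple[float, float]]):
    geoms, s = [], 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        length = math.hypot(x1 - x0, y1 - y0)
        hdg = math.atan2(y1 - y0, x1 - x0)
        geoms.append({"s": s, "x": x0, "y": y0, "hdg": hdg, "length": length})
        s += length
    return geoms
```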
The node of the XODR is configured to:
All of the above processes are automated, which improves conversion efficiency.
In practical applications, taking as an example the case where the execution subject of the method is an automatic driving simulation test scene library generation system, i.e., a dynamic scene production system, refer to fig. 6, which is a schematic diagram of the automatic driving simulation test scene library generation system provided by the present application. The dynamic scene production system may include a static scene generation module, a scene segmentation module, a dynamic feature recognition module and a dynamic scene editing module. The static scene generation module can automatically convert existing HADMAP data into static scene data in OpenDRIVE format; the scene segmentation module can segment the videos or images acquired by the binocular device, set the main segmentation parameters and add labels to dynamic scenes; the dynamic feature recognition module can automatically identify the categories and motion trajectories of targets such as surrounding vehicles or pedestrians from the video segments or image sets obtained in the previous step; and the dynamic scene editing module can edit the dynamic scene according to the static scene data, the categories and motion trajectories of the dynamically identified targets, the trajectory and speed profile of the ego vehicle, and so on, and output a standard dynamic scene file in OpenScenario format. The standard dynamic scene file is stored into the corresponding category library of the dynamic scene library.
In this embodiment, the static scenes required by automatic driving simulation are automatically converted from the acquired high-precision map data, the dynamic data of vehicles on the road are acquired by a binocular camera device in cooperation with GNSS positioning, and the production of dynamic scenes is completed through steps such as scene segmentation, dynamic feature recognition and scene editing. The production method of dynamic scenes is therefore simple and automated, which greatly improves production efficiency and saves manual workload.
Therefore, processing and producing dynamic scenes with the binocular device reduces the amount of hardware compared with the prior art and lowers the cost, makes dynamic scene generation simple and automated, greatly improves the efficiency and precision of dynamic scene production, saves manual workload, and ensures that all scenes come from actual road scenes. Meanwhile, because the dynamic scene files use the standard OpenScenario format, the dynamic scenes established by this method can be used for testing by various automatic driving simulation test platforms, which solves the limitations of the prior art.
In order to implement the method for generating the automatic driving simulation test scene library, the embodiment provides a device for generating the automatic driving simulation test scene library. Referring to fig. 7, fig. 7 is a schematic structural diagram of an automatic driving simulation test scenario library generation apparatus provided in the embodiment of the present application; the automatic driving simulation test scene library generating device 70 includes: an acquisition module 701, a segmentation module 702, an identification module 703 and a scene generation module 704; the acquisition module 701 is used for acquiring high-precision map data and generating static scene data according to the high-precision map data; a segmentation module 702, configured to collect dynamic scene data, and segment a dynamic scene according to the dynamic scene data to obtain segmented dynamic scene data; an identifying module 703, configured to identify a dynamic feature of the segmented dynamic scene data; a scene generating module 704, configured to generate a dynamic scene library according to the static scene data and the dynamic feature.
In this embodiment, the obtaining module 701, the segmentation module 702, the identification module 703 and the scene generation module 704 are provided so as to obtain high-precision map data and generate static scene data from them, to collect dynamic scene data and segment the dynamic scene to obtain segmented dynamic scene data, to identify the dynamic features of the segmented dynamic scene data, and to generate a dynamic scene library from the static scene data and the dynamic features. The production of the dynamic scene library is thus completed by performing scene segmentation, dynamic feature recognition and other steps on the collected dynamic scene data in combination with the collected high-precision map, without a complex data-processing process; dynamic scene production becomes simple and automated, its efficiency and precision are greatly improved, manual workload is saved, and the generated dynamic scene library can provide scene library data for different simulation test platforms, which solves the limitations of the prior art.
The apparatus provided in this embodiment may be used to implement the technical solutions of the above method embodiments, and the implementation principles and technical effects are similar, which are not described herein again.
In one possible design, the segmentation module is specifically configured to: acquiring dynamic scene data, wherein the dynamic scene data comprises video or image data; and screening the dynamic scene according to the video or image data to obtain a dynamic scene video segment or an image frame segment matched with a preset characteristic category, wherein the dynamic scene video segment or the image frame segment is the dynamic scene data after being segmented.
In one possible design, the identification module includes an identification unit, and the identification unit is used for classifying and positioning the dynamic target within the field of view in the dynamic scene video segment or image frame segment and determining the dynamic characteristics, wherein the dynamic characteristics include the category of the dynamic target and the position information of the dynamic target.
In one possible design, the identification unit is specifically configured to: determining the category of the dynamic target and the position information of the dynamic target according to the video segment or the image frame segment of the dynamic scene through a dynamic feature identification model; the dynamic feature recognition model is obtained by training historical dynamic scene video segments or image frame segments and corresponding historical dynamic target data, wherein the historical dynamic target data comprises the types of the historical dynamic targets and the position information of the historical dynamic targets.
In one possible design, the apparatus may further include: a processing module; and the processing module is used for marking the dynamic target after the dynamic characteristics are determined, tracking the dynamic target according to the marked identifier to obtain a target track, and analyzing the behavior of the dynamic target.
In one possible design, the scenario library generation module includes: a marking unit, an editing unit and a scene library generating unit; the marking unit is used for marking the classification label of the dynamic scene on the video band or the image frame band of the dynamic scene according to the dynamic characteristics; the editing unit is used for editing the dynamic scene video segment or the image frame segment according to the static scene data and the marked classification label to obtain a dynamic scene file; and the scene library generating unit is used for storing the dynamic scene file in a corresponding category library in the established scene libraries to generate a dynamic scene library, wherein the dynamic scene library comprises a category library corresponding to the preset feature category, and each category library comprises at least one dynamic scene.
In one possible design, the obtaining module is specifically configured to:
analyzing mif format data of the high-precision map data according to a reference line, a structure and a lane pattern respectively to generate TXT files corresponding to the reference line, the structure and the lane pattern respectively, wherein the TXT file corresponding to the reference line is used for describing lane and lane attributes, the TXT file corresponding to the structure is used for describing attributes, a lane to which the structure belongs and position information, and the TXT file corresponding to the lane pattern is used for describing pattern information of the lane;
analyzing the high-precision map data to generate a first CSV file and a second CSV file, wherein the first CSV file is used for describing the connection relation of each lane, and the second CSV file is used for describing the corresponding relation between the road boundary and the lane;
and generating the static scene data according to the first CSV file and the second CSV file.
In one possible design, the obtaining module is specifically configured to:
merging the first CSV file and the second CSV file to generate a TXT file containing a road connection relation, wherein the TXT file containing the road connection relation is used for describing the front-back connection relation of a road boundary;
and generating the static scene data according to the TXT file.
In one possible design, the obtaining module is specifically configured to:
converting data in the TXT files respectively corresponding to the reference line, the structure and the lane style and the TXT files containing the road connection relationship into XML files, wherein the XML files are used for describing road elements and connection relationship;
and generating the static scene data according to the XML file.
In one possible design, the obtaining module is specifically configured to:
converting the XML file into a compatible XODR file, wherein the XODR file contains the static scene data;
wherein the static scene data at least comprises: the method comprises the following steps of lane and lane connection relation description, elevation description, structure description, signal lamp description, reference line description, road type description and road intersection and connection relation description between the intersection and the lane.
The embodiment of the application provides an automatic driving simulation test platform, which uses the dynamic scene library generated by the automatic driving simulation test scene library method.
In order to implement the method for generating the automatic driving simulation test scenario library, the embodiment provides an automatic driving simulation test scenario library generating device. Fig. 8 is a schematic structural diagram of an automatic driving simulation test scenario library generation device according to an embodiment of the present application. As shown in fig. 8, the automated driving simulation test scenario library generation device 80 of the present embodiment includes: a processor 801 and a memory 802; a memory 802 for storing computer-executable instructions; the processor 801 is configured to execute the computer-executable instructions stored in the memory to implement the steps performed in the above embodiments. Reference may be made in particular to the description relating to the method embodiments described above.
An embodiment of the present application further provides a computer-readable storage medium, where computer-executable instructions are stored in the computer-readable storage medium, and when a processor executes the computer-executable instructions, the method for generating the automatic driving simulation test scene library described above is implemented.
An embodiment of the present application further provides a computer program product, which includes a computer program; when the computer program is executed by a processor, the method for generating the automatic driving simulation test scene library described above is implemented.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules is only one logical division, and other divisions may be realized in practice, for example, a plurality of modules may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form. In addition, functional modules in the embodiments of the present application may be integrated into one processing unit, or each module may exist alone physically, or two or more modules are integrated into one unit. The unit formed by the modules can be realized in a hardware form, and can also be realized in a form of hardware and a software functional unit.
The integrated module implemented in the form of a software functional module may be stored in a computer-readable storage medium. The software functional module is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) or a processor to execute some steps of the methods described in the embodiments of the present application. It should be understood that the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of a method disclosed in connection with the present application may be embodied directly in a hardware processor, or in a combination of hardware and software modules within the processor.
The memory may include high-speed RAM and may further include non-volatile memory (NVM), such as at least one magnetic disk memory; it may also be a USB flash drive, a removable hard disk, a read-only memory, a magnetic disk or an optical disk. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like, and may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, the buses in the figures of the present application are not limited to only one bus or one type of bus. The storage medium may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk. A storage medium may be any available medium that can be accessed by a general purpose or special purpose computer.
An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC), or may reside as discrete components in an electronic device or host device.
Those of ordinary skill in the art will understand that all or a portion of the steps of the above-described method embodiments may be completed by program instructions together with related hardware. The program may be stored in a computer-readable storage medium; when executed, it performs the steps of the method embodiments described above. The aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, or a magnetic or optical disk.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.
Claims (10)
1. A method for generating an automatic driving simulation test scene library is characterized by comprising the following steps:
acquiring high-precision map data, and generating static scene data according to the high-precision map data;
acquiring dynamic scene data, and segmenting a dynamic scene according to the dynamic scene data to obtain segmented dynamic scene data;
identifying dynamic characteristics of the segmented dynamic scene data;
and generating a dynamic scene library according to the static scene data and the dynamic characteristics.
2. The method of claim 1, wherein the acquiring dynamic scene data and segmenting a dynamic scene according to the dynamic scene data to obtain segmented dynamic scene data comprises:
acquiring dynamic scene data, wherein the dynamic scene data comprises video or image data;
and screening the dynamic scene according to the video or image data to obtain a dynamic scene video segment or image frame segment matched with a preset feature category, wherein the dynamic scene video segment or image frame segment is the segmented dynamic scene data.
3. The method of claim 2, wherein the identifying the dynamic characteristics of the segmented dynamic scene data comprises:
classifying and locating the dynamic target within the field of view of the dynamic scene video segment or image frame segment, and determining the dynamic characteristics, wherein the dynamic characteristics comprise the category of the dynamic target and the position information of the dynamic target.
4. The method according to claim 3, wherein the classifying and locating the dynamic target within the field of view of the dynamic scene video segment or image frame segment, and determining the dynamic characteristics, comprises:
determining the category of the dynamic target and the position information of the dynamic target according to the video segment or the image frame segment of the dynamic scene through a dynamic feature identification model;
the dynamic feature recognition model is obtained by training historical dynamic scene video segments or image frame segments and corresponding historical dynamic target data, wherein the historical dynamic target data comprises the types of the historical dynamic targets and the position information of the historical dynamic targets.
5. The method of claim 3, wherein after said determining the dynamic characteristic, the method further comprises:
and marking the dynamic target, and tracking the dynamic target according to the marked identifier to obtain a target track for analyzing the behavior of the dynamic target.
6. The method according to any of claims 2-5, wherein generating a dynamic scene library from the static scene data and the dynamic features comprises:
according to the dynamic characteristics, marking a classification label of a dynamic scene on the video segment or the image frame segment of the dynamic scene;
editing the dynamic scene video segment or the image frame segment according to the static scene data and the marked classification label to obtain a dynamic scene file;
and storing the dynamic scene file in a corresponding category library in the established scene libraries to generate a dynamic scene library, wherein the dynamic scene library comprises the category library corresponding to the preset feature categories, and each category library comprises at least one dynamic scene.
7. The method of claim 1, wherein generating static scene data from the high precision map data comprises:
analyzing the high-precision map data to generate a first CSV file and a second CSV file, wherein the first CSV file is used for describing the connection relation of each lane, and the second CSV file is used for describing the corresponding relation between the road boundary and the lane;
and generating the static scene data according to the first CSV file and the second CSV file.
8. The method of claim 7, wherein the generating the static scene data according to the first CSV file and the second CSV file comprises:
merging the first CSV file and the second CSV file to generate a TXT file containing a road connection relation, wherein the TXT file containing the road connection relation is used for describing the front-back connection relation of a road boundary;
and generating the static scene data according to the TXT file.
9. An automatic driving simulation test scene library generation device is characterized by comprising:
the acquisition module is used for acquiring high-precision map data and generating static scene data according to the high-precision map data;
the segmentation module is used for acquiring dynamic scene data and segmenting a dynamic scene according to the dynamic scene data to obtain segmented dynamic scene data;
the identification module is used for identifying the dynamic characteristics of the segmented dynamic scene data;
and the scene generation module is used for generating a dynamic scene library according to the static scene data and the dynamic characteristics.
10. An automated driving simulation test platform, wherein the automated driving simulation test platform uses a library of dynamic scenarios generated by the method of any one of claims 1-8.
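As a purely illustrative companion to claims 3 to 5, the sketch below classifies and locates dynamic targets per frame and links them into target tracks for behaviour analysis. detect_objects() is a stub standing in for the trained dynamic feature recognition model, and the greedy nearest-neighbour association is an assumption made for illustration, not the disclosed tracking method.

```python
# Illustrative sketch of claims 3-5: classify/locate dynamic targets per frame
# and track them across a segment. detect_objects() stands in for the trained
# dynamic feature recognition model and is an assumption, not the disclosed model.
from dataclasses import dataclass, field

@dataclass
class Track:
    track_id: int
    category: str
    positions: list = field(default_factory=list)  # (frame_idx, x, y)

def detect_objects(frame):
    """Stub for the dynamic feature recognition model: should return a list of
    (category, x, y) detections for one frame. Replace with model inference."""
    return []

def track_segment(frames, max_dist=2.0):
    tracks, next_id = [], 0
    for idx, frame in enumerate(frames):
        for category, x, y in detect_objects(frame):
            # Greedy nearest-neighbour association with same-category tracks.
            best = None
            for t in tracks:
                if t.category != category or not t.positions:
                    continue
                _, px, py = t.positions[-1]
                d = ((x - px) ** 2 + (y - py) ** 2) ** 0.5
                if d <= max_dist and (best is None or d < best[0]):
                    best = (d, t)
            if best is None:
                best_track = Track(next_id, category)
                next_id += 1
                tracks.append(best_track)
            else:
                best_track = best[1]
            best_track.positions.append((idx, x, y))
    return tracks  # target trajectories for dynamic target behaviour analysis
```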
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202110126331.XA | 2021-01-29 | 2021-01-29 | Method, device and platform for generating automatic driving simulation test scene library |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN114817600A | 2022-07-29 |
Family

ID=82526249

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202110126331.XA (status: Pending) | Method, device and platform for generating automatic driving simulation test scene library | 2021-01-29 | 2021-01-29 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN114817600A (en) |

2021-01-29: Application filed as CN202110126331.XA; published as CN114817600A; status: Pending
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |