CN114067062A - Method and system for simulating real driving scene, electronic equipment and storage medium - Google Patents
- Publication number
- CN114067062A (application number CN202210046583.6A)
- Authority
- CN
- China
- Prior art keywords
- scene
- simulation
- data
- real
- dimensional
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Graphics (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Hardware Design (AREA)
- General Engineering & Computer Science (AREA)
- Architecture (AREA)
- Geometry (AREA)
- Traffic Control Systems (AREA)
Abstract
The invention discloses a simulation method, simulation system, electronic device, and storage medium for real driving scenes. Based on a multi-sensor data acquisition platform, the method constructs a three-dimensional spatial map of the real scene, labels and separates the dynamic and static traffic elements in the real driving scene, outputs their local spatial positions and types, generates a simulation scene file that reproduces the real driving scene, and stores it as a simulation scene library file that can be imported into the Carla simulation software platform. The method draws on diverse data sources, which effectively ensures the test coverage and fidelity of the simulation scene library, and adopts tightly coupled data fusion to guarantee the accuracy of the overall scene mapping. A data labeling tool supports joint labeling of multi-sensor data and automatic interpolation labeling, ensuring the continuity of the labeled trajectories of dynamic traffic elements, so that the whole scene can ultimately be reconstructed with high fidelity.
Description
Technical Field
The present invention relates to simulation methods and systems, and more particularly to a method, system, electronic device, and storage medium for simulating real driving scenes.
Background
In recent years, with the rapid development of autonomous driving technology, the demand for testing autonomous driving functions has grown sharply. Conventional software testing and real-vehicle testing can no longer keep up with this demand, so simulation-based autonomous driving testing is gradually becoming the mainstream approach. Conventional simulation testing requires manually constructing the test environment and designing the various driving behaviors in advance; it is inefficient, insufficiently realistic, and cannot fully exercise the autonomous driving functions. Commercial simulation-scene construction tool chains can quickly build a test environment that is as realistic as possible, but because they require paid licenses, their cost is high and large-scale adoption is difficult; moreover, their functionality is fixed, user customization is not supported, and they adapt poorly to rapidly evolving autonomous driving test requirements.
Existing comparable solutions are built on commercial simulation-scene construction tools such as VTD or SCANeR. Their data processing flows are similar, executing three stages: separation of dynamic and static traffic elements, construction of the simulated three-dimensional scene, and generation of dynamic traffic flow. The main drawback of the prior art is that the scene construction tools are paid products with weak extensibility, and they do not allow users to customize functions according to actual requirements.
Disclosure of Invention
The invention aims to provide a method, system, electronic device, and storage medium for simulating real driving scenes that overcome the defects of the prior art. According to the type of output result, the method can be divided into multi-sensor data fusion, real-scene three-dimensional map construction and road network extraction, labeling of dynamic and static traffic elements, and generation and import of simulation scene files.
The invention provides the following scheme:
a real driving scene simulation method based on Carla is characterized in that the simulation injection method comprises the following steps:
calibrating and time synchronizing internal and external parameters among a plurality of sensors based on a multi-sensor data acquisition platform;
constructing a three-dimensional space map of a real scene, and extracting road network data of scene roads;
labeling and separating the dynamic and static traffic elements in the real driving scene, and outputting the local spatial position and type of the dynamic and static traffic elements in the real scene;
and generating a simulation scene file, reproducing the real driving scene, and storing the simulation scene file as a simulation scene library file which can be imported into a Carla simulation software platform.
Further, the sensors carried by the multi-sensor data acquisition platform specifically include: mechanical three-dimensional lidar, solid-state lidar, monocular/binocular industrial cameras, wide-angle cameras, inertial navigation, ultrasonic sensors, and differential GPS.
Further, in constructing the three-dimensional spatial map of the real scene, the map is built from three-dimensional lidar, inertial navigation, and GPS data. Real-time ego-pose estimation and keyframe computation are performed in a tightly coupled data fusion mode, accumulated error is eliminated using the global GPS pose, and the keyframes are stitched into a three-dimensional point cloud map model, yielding three-dimensional scene point cloud map data. An octree data structure is used to manage the point cloud map, which is then compressed to obtain a two-dimensional compressed plan of the scene.
Further, based on data from the monocular industrial camera, lane lines and road surface features are detected using an image processing or deep learning algorithm framework, yielding two-dimensional plane data of the lane lines and road surface features. These are combined with depth data to form three-dimensional lane line and road surface feature data, completing the preliminary extraction of the road network data; the preliminarily extracted road network data and road features are then imported into a three-dimensional scene editor.
Further, the static traffic elements include road traffic signs, road curbs, traffic lights, roadside green belts, and roadside buildings; the dynamic traffic elements include pedestrians, motor vehicles, and non-motor vehicles.
Further, the system comprises a data labeling tool for joint labeling and automatic interpolation labeling. Point cloud data is labeled manually through the main labeling interface, and the labeling result is automatically mapped onto the RGB data of the monocular camera. By observing the labeling result on the RGB data and the point cloud labeling result simultaneously, the selected labeling box is adjusted in real time so that the adjusted box completely encloses the labeled object.
Further, generating a simulation scene file, reproducing the real driving scene, and storing it as a simulation scene library file importable into the Carla simulation software platform comprises two sub-processes:
constructing a three-dimensional model of a simulation scene by using a two-dimensional scene compression map marked with static traffic element position information and types and road network data;
and inserting the dynamic traffic elements according to the marked types and motion tracks of the dynamic traffic elements, and adding the FBX model data path to complete the construction of a complete simulation scene library file.
A Carla-based real driving scene simulation system, characterized in that the scene simulation system comprises:
the multi-sensor fusion acquisition module is used for calibrating and time synchronizing internal and external parameters among the sensors based on the multi-sensor data acquisition platform;
the scene three-dimensional map building module is used for building a three-dimensional space map of a real scene and extracting road network data of scene roads;
the dynamic and static traffic element labeling module is used for performing labeling separation on the dynamic and static traffic elements in the real driving scene based on multi-sensor fusion data and three-dimensional scene map data, and outputting the local spatial positions and types of the dynamic and static traffic elements in the real scene;
and the simulation scene file generation module is used for generating a simulation scene file, reproducing the real driving scene and storing the simulation scene file as a simulation scene library file which can be imported into the Carla simulation software platform.
An electronic device, comprising: a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus; the memory stores a computer program which, when executed by the processor, causes the processor to perform the steps of the method.
A computer-readable storage medium storing a computer program executable by an electronic device, the computer program, when run on the electronic device, causing the electronic device to perform the steps of the method.
Compared with the prior art, the invention has the following advantages:
The sensor data sources are diverse, which effectively guarantees the test coverage and fidelity of the simulation scene library; multi-sensor data is exploited to the fullest to comprehensively sense the distribution of traffic elements across the whole scene, and tightly coupled data fusion guarantees the accuracy of the overall scene mapping. The invention also provides a data labeling tool that supports joint labeling and automatic interpolation labeling of multi-sensor data, ensuring the continuity of the labeled trajectories of dynamic traffic elements. To avoid repeatedly importing the three-dimensional scene model into the simulation software platform for verification, and to improve scene-building efficiency, the intermediate output data generated during data injection (the two-dimensional compressed scene map and the static traffic element labeling data) is imported into three-dimensional modeling software such as 3DMax as the base design drawing for scene construction. According to the type, size, and position information in the labeled data, the closest-matching three-dimensional model is selected from a model library or newly built, its size is adjusted, and it is added at the corresponding position. By continually adding models to the scene in this way, a high-fidelity reconstruction of the whole scene is finally achieved.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in their description are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention, and other drawings can be derived from them by those skilled in the art without creative effort.
Fig. 1 is a schematic diagram of a real scene three-dimensional map construction.
Fig. 2 is a schematic diagram of a real scene compression map.
Fig. 3 is a schematic diagram of road network data construction.
FIG. 4 is a labeling diagram of dynamic and static traffic elements.
Fig. 5 is a diagram illustrating simulation injection comparison of real scene data.
Fig. 6 is a flowchart of the Carla-based real driving scene simulation method.
Fig. 7 is an architecture diagram of the Carla-based real driving scene simulation system.
Fig. 8 is a system architecture diagram of an electronic device.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The main idea of the invention is to design, on top of the open-source software platform Carla, a highly extensible data injection software tool chain and production process, so that data collected from real driving scenes can conveniently be injected into the simulation software platform for reproduction testing. Autonomous driving simulation testing is realized through multi-sensor data fusion, real-scene three-dimensional map construction and road network extraction, labeling of dynamic and static traffic elements, and generation and import of simulation scene files. Guided by this main idea, the present embodiment discloses a Carla-based real driving scene simulation method as shown in fig. 1:
example 1:
calibrating and time synchronizing internal and external parameters among a plurality of sensors based on a multi-sensor data acquisition platform;
constructing a three-dimensional space map of a real scene, and extracting road network data of scene roads;
based on multi-sensor fusion data and three-dimensional scene map data, labeling and separating dynamic and static traffic elements in a real driving scene, and outputting local spatial positions and types of the dynamic and static traffic elements in the real scene;
and generating a simulation scene file, reproducing the real driving scene and storing the simulation scene file as a simulation scene library file which can be imported into a Carla simulation software platform.
The real driving scene simulation method disclosed in this embodiment is developed on Carla, an open-source simulation software platform. The platform is powerful and adopts a Server-Client architecture as the basic structure of the whole software platform, which facilitates secondary development: users need not concern themselves with Carla's internal implementation, and other simulation software platforms adopting a Server-Client architecture can also use the data injection process provided by the invention to construct simulation scenes. Although this embodiment is developed on a Server-Client architecture, this does not mean that the architecture or framework for real driving scene simulation is limited to Server-Client; a simulation method or system developed on other architectures or development frameworks, such as SOA (Service-Oriented Architecture) or a B/S (Browser/Server) architecture, also falls within the protection scope of the appended claims of this patent. That is, the chosen system architecture and development framework are merely illustrative, not limiting. Likewise, other embodiments of the present disclosure may adopt a C/S (Client/Server) architecture, a B/S (Browser/Server) architecture, an SOA architecture, and the like.
The following is a further detailed analysis of the simulation method for the real driving scene disclosed in embodiment 1:
the main work of the multi-sensor data fusion at the stage is based on the existing multi-sensor data acquisition platform, internal and external parameter calibration and time synchronization work among multiple sensors are completed, the minimization of space and time errors among the data of the multiple sensors is ensured, and the data fusion and subsequent use are facilitated; through the calculation processing of the process, a real-time multi-sensor data stream is output, the data of each sensor are synchronized through a time stamp, and fusion between the data can be realized based on external reference calibration parameters.
The stage of real-scene three-dimensional map construction and road network extraction works mainly on the multi-sensor data of the real driving scene: a three-dimensional spatial map of the scene is constructed and the road network data of the scene's roads is extracted. This processing outputs a three-dimensional scene map with a PCD or LAS file suffix, a two-dimensional compressed scene map with a PGM file suffix, and a corresponding road network data file conforming to the OpenDRIVE standard, usually with an XODR file suffix.
The main work of the dynamic and static traffic element labeling stage is, based on the multi-sensor fusion data and the three-dimensional scene map data, to label and separate the dynamic and static traffic elements in the real driving scene and store them as an offline file for subsequent use. This calculation requires automatic or manual labeling with the labeling tool designed by the invention. The processing outputs the local spatial positions and types of the dynamic and static traffic elements in the real scene. Note that the labeling data of the static traffic elements (such as road traffic signs, flowers, trees, and roadside buildings) must be marked on the two-dimensional compressed scene map output by the three-dimensional map construction stage, which then serves as the output data of static traffic element labeling.
The main work of the simulation scene file generation and import stage is to reproduce the real driving scene using the output data of the three preceding stages and store it as a simulation scene library file importable into the Carla simulation software platform. The first sub-process builds the three-dimensional model of the simulation scene from the two-dimensional compressed scene map, annotated with static traffic element positions and types, and the road network data: the compressed map and road network data are imported into the 3D Max three-dimensional modeling software, the corresponding three-dimensional models are placed at the correct positions according to the different traffic element types and adjusted appropriately, and finally an FBX file usable by Carla is exported. The second sub-process inserts the dynamic traffic elements one by one, according to their labeled types and motion trajectories, into the file in XML format following the element rules of the OpenSCENARIO standard, and stores the result as a simulation scene library file in XOSC format; finally, the path of the produced FBX model data is added to the file, completing the construction of a complete simulation scene library file. This is also the production flow for injecting real driving scene data into the simulation test software.
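The second sub-process, serializing labeled dynamic elements into an XML scenario file, can be sketched with the standard library's `xml.etree.ElementTree`. Note this is a deliberately simplified illustration: the element and attribute names below are our own placeholders, not the full OpenSCENARIO schema that an actual XOSC file must follow.

```python
import xml.etree.ElementTree as ET

def build_scenario(entities, fbx_path):
    """Build a simplified OpenSCENARIO-like XML tree from labeled
    dynamic traffic elements (type + trajectory waypoints)."""
    root = ET.Element("OpenSCENARIO")
    ET.SubElement(root, "ModelData", path=fbx_path)  # FBX model data path
    ents = ET.SubElement(root, "Entities")
    for ent in entities:
        obj = ET.SubElement(ents, "ScenarioObject",
                            name=ent["id"], type=ent["type"])
        traj = ET.SubElement(obj, "Trajectory")
        for x, y, t in ent["waypoints"]:
            ET.SubElement(traj, "Vertex", x=str(x), y=str(y), time=str(t))
    return root

entities = [{"id": "ped_01", "type": "pedestrian",
             "waypoints": [(0.0, 0.0, 0.0), (1.0, 0.5, 1.0)]}]
scenario = build_scenario(entities, "models/scene.fbx")
print(ET.tostring(scenario, encoding="unicode"))
```

A real implementation would validate the output against the OpenSCENARIO XSD before handing the XOSC file to Carla.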
Example 2: On the basis of embodiment 1, this embodiment expands the step of generating the simulation scene file into two sub-processes: constructing the three-dimensional model of the simulation scene from the two-dimensional compressed scene map annotated with static traffic element positions and types, together with the road network data; and inserting the dynamic traffic elements according to their labeled types and motion trajectories and adding the FBX model data path to complete a full simulation scene library file. The remaining parts of embodiment 2 are the same as embodiment 1. Embodiments 1 and 2 can be combined with other embodiments to form further embodiments; as long as there is no substantive conflict, the combinations are feasible, and for brevity they are not detailed here.
Example 3: This embodiment further expands and improves on embodiment 1, embodiment 2, and their combination:
in the multi-sensor data fusion process based on the multi-sensor data acquisition platform, sensors carried by the multi-sensor data acquisition platform include but are not limited to common automatic driving sensors such as a mechanical three-dimensional laser radar, a solid-state laser radar, a single/binocular industrial camera, a wide-angle camera, inertial navigation, ultrasonic waves, a differential GPS and the like; sources of multi-sensor data include, but are not limited to, self-driving collection, map quotient data purchase, roadside unit collection, and real vehicle road testing. The method takes the self-driving collection as an example to carry out the specific application description of the whole process; the time synchronization process of the data of the multiple sensors is realized by adopting a hardware pulse synchronization mode, namely, a pulse signal sent by hardware equipment (usually pulses using a GPS) capable of sending timing pulses is taken as a standard, and data acquisition output of other sensors is triggered through a data synchronization hardware interface, so that the time synchronization is always based on the sensor with the slowest data output frequency in the sensors; the multi-sensor data involved in the invention must complete time synchronization and external parameter joint calibration.
In the above real-scene three-dimensional map construction and road network extraction step, the map construction process mainly uses three-dimensional lidar and inertial navigation data, assisted by GPS data. Real-time ego-pose estimation and keyframe computation are performed in a tightly coupled data fusion mode, accumulated error is eliminated at a certain time frequency using the global GPS pose, and finally all keyframes are stitched together into a three-dimensional map model of the scene, usually represented as a point cloud, as shown in fig. 1.
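The periodic elimination of accumulated error by the global GPS pose can be illustrated in 2D: whenever a GPS fix arrives at a keyframe, the accumulated odometry drift is measured and the resulting offset is carried forward to later keyframes. This is a heavily simplified sketch under our own assumptions; a real tightly coupled estimator corrects full 6-DoF poses inside the fusion filter rather than applying a flat offset.

```python
def correct_drift(keyframes, gps_fixes):
    """keyframes: list of (t, x, y) odometry poses.
    gps_fixes: dict mapping t -> (x, y) global position at some
    keyframe times. When a fix is available, snap the pose to it
    and carry the measured offset forward to later keyframes."""
    dx = dy = 0.0
    corrected = []
    for t, x, y in keyframes:
        if t in gps_fixes:
            gx, gy = gps_fixes[t]
            dx, dy = gx - x, gy - y      # measured accumulated drift
        corrected.append((t, x + dx, y + dy))
    return corrected

kfs = [(0, 0.0, 0.0), (1, 1.0, 0.0), (2, 2.0, 0.1)]
fixes = {1: (1.0, 0.2)}   # GPS reports the pose had drifted by -0.2 in y
print(correct_drift(kfs, fixes))
```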
In the above real-scene three-dimensional map construction and road network extraction step, after the three-dimensional map of the real scene is constructed, a frame of three-dimensional scene point cloud map data is output and stored. An octree data structure is then used to manage the whole point cloud map, and the map is compressed over different height ranges to obtain a two-dimensional compressed plan of the scene. The main compression parameter is the point cloud height coordinate: all point cloud data within a given height range is selected, then the point coordinates are projected onto the plane of height 0, yielding a frame of compressed two-dimensional scene map data. The data is in PGM format and can subsequently be processed with conventional image processing techniques, as shown in fig. 2.
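The height-slice compression just described can be sketched without any point-cloud library: keep the points whose z coordinate lies in the chosen band, drop z, and rasterize x, y into an occupancy grid (which is the information a PGM file stores). The resolution and band limits below are illustrative values, not parameters from the patent.

```python
def compress_to_plan(points, z_min, z_max, resolution=0.5):
    """Project all points with z_min <= z <= z_max onto the z=0 plane
    and mark the occupied cells of a 2D grid, returned as a dict
    keyed by (col, row) cell indices."""
    grid = {}
    for x, y, z in points:
        if z_min <= z <= z_max:
            cell = (int(x // resolution), int(y // resolution))
            grid[cell] = grid.get(cell, 0) + 1   # hit count per cell
    return grid

cloud = [(0.1, 0.2, 1.0),   # within the height band -> kept
         (0.3, 0.4, 1.5),   # same grid cell as the first point
         (2.0, 2.0, 9.0)]   # above the band -> discarded
print(compress_to_plan(cloud, z_min=0.0, z_max=3.0))
# {(0, 0): 2}
```

Writing the grid out as a grayscale PGM image (occupied cells dark, free cells light) then gives exactly the kind of compressed plan shown in fig. 2.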
In the above real-scene three-dimensional map construction and road network extraction step, the road network extraction process mainly involves two sensors: the three-dimensional lidar and the monocular industrial camera. Based on the RGB data of the monocular industrial camera, lane lines and road surface features are detected using an image processing or deep learning algorithm framework, yielding two-dimensional plane data of the lane lines and road surface features. Through data fusion with the three-dimensional lidar depth information under the same timestamp, three-dimensional lane line and road surface feature data with depth is formed, completing the preliminary extraction of the road network data. The preliminarily extracted road network data and road features are then imported into a dedicated three-dimensional scene editor (such as RoadRunner) for adjustment, and the result is finally saved according to the OpenDRIVE standard in XODR format, as shown in fig. 3.
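Lifting a detected 2D lane pixel to 3D using lidar depth from the same timestamp amounts to pinhole back-projection. A minimal sketch, assuming a calibrated camera with known intrinsics (the fx, fy, cx, cy values below are illustrative, not from the patent):

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with a known depth (metres along the
    camera z axis, here taken from the fused lidar data) into
    camera-frame 3D coordinates using the pinhole model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A lane-line pixel at the image centre, 10 m ahead of the camera:
print(backproject(640, 360, 10.0, fx=800.0, fy=800.0, cx=640.0, cy=360.0))
# (0.0, 0.0, 10.0)
```

The resulting camera-frame points would then be transformed into the map frame using the extrinsic calibration established during multi-sensor fusion.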
In the above labeling step, the static traffic elements mainly refer to common fixed traffic elements, including but not limited to road traffic signs, road curbs, traffic lights, roadside green belts, and roadside buildings; the dynamic traffic elements mainly refer to moving objects with an obvious motion state or tendency, including but not limited to pedestrians, various motor vehicles, bicycles, motorcycles, and other common vehicles or moving people.
For labeling the dynamic and static traffic elements, this embodiment further provides a data labeling tool, designed and developed on a WebGL framework, which supports joint labeling and automatic interpolation labeling of multi-sensor data. At present the tool only supports joint labeling of three-dimensional lidar and a monocular camera. Before use, the data to be labeled must be prepared and placed in the specified reading directory: the keyframe point cloud data from the three-dimensional scene mapping process and the monocular camera RGB data with the same timestamp are named by timestamp, which facilitates data alignment and lookup. During labeling, point cloud data is labeled manually through the main labeling interface, and the labeling result is automatically mapped onto the monocular camera's RGB data; by observing the RGB labeling result and the point cloud labeling result simultaneously, the selected labeling box is adjusted in real time to ensure that the final box completely encloses the labeled object. Because manual labeling is time-consuming, labor-intensive, and costly, the tool uses an interpolation algorithm to complete the missing trajectory data of a labeled object between two frames. After all data is labeled, a series of object trajectory or pose data with ID numbers is obtained; as shown in fig. 4, the dynamic and static traffic elements on the road are all boxed by the system, and each selected labeling box completely encloses its labeled object.
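The interpolation labeling mentioned above can be sketched as linear interpolation of an object's pose between two manually labeled keyframes. The names and the linear model are our own simplification; the patent does not specify which interpolation algorithm the tool uses.

```python
def interpolate_track(t0, pose0, t1, pose1, stamps):
    """Linearly interpolate (x, y, yaw) poses between two labeled
    keyframes at times t0 and t1 for the intermediate frame
    timestamps in `stamps`."""
    out = []
    for t in stamps:
        a = (t - t0) / (t1 - t0)        # interpolation factor in [0, 1]
        out.append(tuple(p0 + a * (p1 - p0)
                         for p0, p1 in zip(pose0, pose1)))
    return out

# Object labeled at t=0 s and t=1 s; fill the two frames in between.
filled = interpolate_track(0.0, (0.0, 0.0, 0.0),
                           1.0, (4.0, 2.0, 2.0),
                           [0.25, 0.75])
print(filled)
# [(1.0, 0.5, 0.5), (3.0, 1.5, 1.5)]
```

This is what guarantees the continuity of the labeled trajectories: intermediate frames the annotator skipped still receive a plausible pose.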
For simulation scene file generation and import, three-dimensional modeling software such as 3D Max is needed to construct the scene's three-dimensional model, which is then exported in the FBX format supported by the Carla software platform for import and use. 3DMax is common, powerful three-dimensional modeling software; compared with the scene-building tools bundled in commercial simulation-scene construction software, the models 3DMax supports and its scene restoration capability are stronger. To restore the real scene more faithfully, the invention combines the two-dimensional compressed scene map with the static traffic element labeling data to build a high-fidelity three-dimensional scene model in the 3D Max scene editing interface. This avoids the process of repeatedly importing into the simulation software platform to check fidelity, improving both the efficiency and the fidelity of scene-model construction. As shown in fig. 5, with the simulation scene on the left and the real scene on the right, the fidelity of the reproduction is very high.
Example 4: As shown in fig. 6 to 8, the invention further provides a Carla-based real driving scene simulation system. The scene simulation system comprises a plurality of functional modules whose combination realizes the Carla-based real driving scene simulation method set forth in this patent. The specific architecture of the simulation system comprises:
the multi-sensor fusion acquisition module is used for calibrating and time synchronizing internal and external parameters among the sensors based on the multi-sensor data acquisition platform;
the scene three-dimensional map building module is used for building a three-dimensional space map of a real scene and extracting road network data of scene roads;
the dynamic and static traffic element labeling module is used for performing labeling separation on the dynamic and static traffic elements in the real driving scene based on multi-sensor fusion data and three-dimensional scene map data, and outputting the local spatial positions and types of the dynamic and static traffic elements in the real scene;
and the simulation scene file generation module is used for generating a simulation scene file, reproducing the real driving scene and storing the simulation scene file as a simulation scene library file which can be imported into the Carla simulation software platform.
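The four modules above can be sketched as a pipeline skeleton. Everything below (class names, method signatures, the intermediate data container) is our own illustrative structure, not an API defined by the patent:

```python
from dataclasses import dataclass, field

@dataclass
class SceneData:
    """Intermediate artifacts passed between the modules."""
    frames: list = field(default_factory=list)       # synchronized sensor frames
    point_map: list = field(default_factory=list)    # 3D scene point cloud map
    road_network: dict = field(default_factory=dict) # extracted road network
    labels: list = field(default_factory=list)       # dynamic/static element labels
    scene_files: list = field(default_factory=list)  # exported XODR/XOSC/FBX paths

class SimulationPipeline:
    """Runs the four stages in order, each consuming the previous output."""
    def run(self, raw_sensor_data):
        data = SceneData()
        data.frames = self.fuse(raw_sensor_data)                       # module 1
        data.point_map, data.road_network = self.build_map(data.frames)  # module 2
        data.labels = self.label_elements(data.frames, data.point_map)   # module 3
        data.scene_files = self.generate_scene(data)                   # module 4
        return data

    # Stage stubs; a real system would plug in the algorithms of
    # embodiments 1-3 here.
    def fuse(self, raw):            return list(raw)
    def build_map(self, frames):    return frames, {"roads": []}
    def label_elements(self, f, m): return []
    def generate_scene(self, data): return ["scene.xodr", "scene.xosc"]

result = SimulationPipeline().run([{"t": 0.0}])
print(result.scene_files)   # ['scene.xodr', 'scene.xosc']
```

Structuring the system this way keeps each module independently replaceable, which matches the patent's emphasis on extensibility.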
It should be apparent from the text of this embodiment and fig. 7 that, although the architecture diagram discloses only the multi-sensor fusion acquisition module, the scene three-dimensional map construction module, the dynamic and static traffic element labeling module, and the simulation scene file generation module, the simulation system is not limited to these four functional modules. On the contrary, in keeping with the meaning of this patent, a person skilled in the art can, on the basis of these four basic modules and in combination with the prior art, add one or more functional modules to form any number of embodiments or technical solutions. The simulation system is open rather than closed, and the protection scope of the claims should not be considered limited to the disclosed basic functional modules merely because only those modules are disclosed in this embodiment.
This embodiment discloses a Carla-based real driving scene simulation method and simulation system. On this basis, the invention also provides corresponding electronic equipment and a storage medium. The electronic equipment comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with one another through the communication bus. The storage medium stores a computer program which, when executed by the processor, causes the processor to perform the steps of the Carla-based real driving scene simulation method.
The electronic device includes a hardware layer, an operating system layer running on top of the hardware layer, and an application layer running on top of the operating system.
The hardware layer includes hardware such as a Central Processing Unit (CPU), a Memory Management Unit (MMU), and a Memory.
The operating system may be any one or more computer operating systems that control the electronic device through processes, such as a Linux, Unix, Android, iOS or Windows operating system.
In the embodiment of the present invention, the electronic device may be a handheld device such as a smart phone or a tablet computer, or an electronic device such as a desktop or portable computer. This is not particularly limited in the embodiment of the present invention, as long as the device can perform the method of the embodiment by running a program recording the code of the real driving scene simulation method.
The execution subject controlling the electronic device in the embodiment of the present invention may be the electronic device itself, or a functional module in the electronic device capable of calling and executing a program. The electronic device may obtain firmware corresponding to the storage medium; this firmware is provided by the vendor, and the firmware corresponding to different storage media may be the same or different, which is not limited herein.
After the electronic device acquires the firmware corresponding to the storage medium, the firmware corresponding to the storage medium may be written into the storage medium, specifically, the firmware corresponding to the storage medium is burned into the storage medium. The process of burning the firmware into the storage medium can be realized by adopting the prior art, and details are not described in the embodiment of the present invention.
The electronic device may further acquire a reset command corresponding to the storage medium, where the reset command corresponding to the storage medium is provided by a vendor, and the reset commands corresponding to different storage media may be the same or different, and are not limited herein.
At this time, the storage medium of the electronic device is a storage medium in which the corresponding firmware is written, and the electronic device may respond to the reset command corresponding to the storage medium in which the corresponding firmware is written, so that the electronic device resets the storage medium in which the corresponding firmware is written according to the reset command corresponding to the storage medium. The process of resetting the storage medium according to the reset command can be implemented by the prior art, and is not described in detail in the embodiment of the present invention.
In connection with the above embodiments it can be seen that the invention takes multi-sensor data as the data input of the whole process. The sensors include, but are not limited to, common automatic driving sensors such as mechanical three-dimensional lidar, solid-state lidar, monocular/binocular industrial cameras, wide-angle cameras, inertial navigation, ultrasonic sensors and differential GPS. The data sources likewise include, but are not limited to, channels such as self-operated driving acquisition, purchase of commercial map data, roadside unit acquisition and real-vehicle road tests, which effectively guarantees the test coverage and fidelity of the simulation scene library.
To ensure the fidelity of the three-dimensional model of the simulation scene, the overall distribution of traffic elements across the whole scene is comprehensively sensed from the multi-sensor data to the maximum extent. The method therefore adopts a tightly coupled data fusion mode: lidar and inertial navigation are used to estimate the vehicle's own pose changes in real time and to generate map key frames, and global GPS position information is introduced at a certain frequency for global error elimination, ensuring the accuracy of the global scene map. Based on the output of the three-dimensional scene mapping, an octree structure is used to simplify and compress the three-dimensional scene point cloud map, a two-dimensional compressed scene plan is output, and the three-dimensional scene model is then built according to this plan.
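A minimal sketch of the compression step just described, using a plain voxel grid as a stand-in for the octree leaf structure; the resolution and the toy three-point cloud are illustrative values only, not the patent's parameters.

```python
import numpy as np

# Quantize a 3-D point cloud onto a voxel grid (a simple stand-in for octree
# leaves), then flatten it to a 2-D occupancy plan by discarding the z cells.
def compress_to_plan(points, voxel=0.5):
    """Return the unique occupied (x, y) cells of the voxelized cloud."""
    cells = np.floor(points / voxel).astype(int)  # octree-like quantization
    return np.unique(cells[:, :2], axis=0)        # drop z to get a 2-D plan

cloud = np.array([[0.1, 0.2, 1.0],
                  [0.2, 0.3, 4.0],   # same ground cell, different height
                  [2.0, 2.1, 0.5]])
plan = compress_to_plan(cloud)       # two occupied ground cells remain
```

A production implementation would use a true octree (e.g. for level-of-detail queries), but the ground-plane projection is the same idea.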
The invention also provides a data labeling tool, designed and developed on a WebGL framework, which supports joint labeling and automatic interpolation labeling of multi-sensor data. The multi-sensor data used must first undergo two pre-processing steps: time synchronization and joint extrinsic calibration. Labeling multi-source data simultaneously while observing the data from multiple angles guarantees the labeling accuracy of dynamic and static traffic elements to the maximum extent, and the added automatic interpolation labeling guarantees the continuity of the labeled trajectories of dynamic traffic elements.
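The automatic interpolation labeling can be illustrated with a short sketch: labels for in-between frames are generated linearly from two hand-labeled keyframes, keeping the dynamic element's track continuous. The keyframe layout (box center plus yaw) and the helper name are assumptions, not the tool's actual data model.

```python
# Linearly interpolate a dynamic element's box center and yaw between two
# manually labeled keyframes; in-between frames are flagged as automatic.
def interpolate_track(kf_a, kf_b, n_between):
    """Return n_between evenly spaced labels between keyframes a and b."""
    out = []
    for i in range(1, n_between + 1):
        t = i / (n_between + 1)
        out.append({
            "center": [a + t * (b - a)
                       for a, b in zip(kf_a["center"], kf_b["center"])],
            "yaw": kf_a["yaw"] + t * (kf_b["yaw"] - kf_a["yaw"]),
            "auto": True,   # distinguishes interpolated from hand labels
        })
    return out

mid = interpolate_track({"center": [0.0, 0.0, 0.0], "yaw": 0.0},
                        {"center": [4.0, 2.0, 0.0], "yaw": 0.4}, 1)[0]
```

Interpolated frames would normally remain editable, so an annotator can correct any box where the object's motion was not actually linear.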
To avoid repeatedly importing the three-dimensional scene model into the simulation software platform for verification, and to improve scene-building efficiency, the intermediate outputs generated in the data injection process (the two-dimensional compressed scene map and the static traffic element labeling data) are imported into three-dimensional modeling software such as 3DMax to serve as the basic design drawing for scene construction. According to the type, size and position information of the labeled data, the three-dimensional model with the closest degree of fidelity is selected from a model library or newly built, its size is adjusted, and it is added at the corresponding position. By continuously adding three-dimensional models to the scene in this way, high-fidelity construction of the whole scene is finally achieved.
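The model-selection step above might look like the following sketch, which picks the library model of the right type whose footprint is closest to the labeled size and records the per-axis scale needed to match it. The tiny model library, its entries and their footprints are made-up examples.

```python
# Made-up model library: type -> [(model name, reference size [x, y, z])].
MODEL_LIBRARY = {
    "tree":    [("tree_small", [1.0, 1.0, 3.0]), ("tree_large", [2.0, 2.0, 8.0])],
    "vehicle": [("sedan", [4.5, 1.8, 1.5])],
}

def pick_model(elem_type, size):
    """Return (model name, per-axis scale) best matching the labeled size."""
    name, ref = min(MODEL_LIBRARY[elem_type],
                    key=lambda m: sum((a - b) ** 2 for a, b in zip(m[1], size)))
    scale = [s / r for s, r in zip(size, ref)]   # stretch model to labeled size
    return name, scale

name, scale = pick_model("tree", [1.2, 1.2, 3.5])
```

Iterating this over every labeled static element and placing each scaled model at its labeled position is what the text calls continuously adding models to the scene.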
For convenience of description, the above devices are described as being divided into various units and modules by functions, respectively. Of course, the functions of the units and modules may be implemented in one or more software and/or hardware when the present application is implemented.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus the necessary general hardware platform. Based on this understanding, the technical solutions of the present application may be embodied, in essence or in the parts contributing to the prior art, in the form of a software product, which may be stored in a storage medium such as ROM/RAM, a magnetic disk or an optical disk, and which includes several instructions enabling a computer device (a personal computer, a server, a network device, etc.) to execute the methods described in the embodiments or in parts of the embodiments of the present application.
The above-described embodiments of the apparatus are merely schematic, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
The application is operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (10)
1. A real driving scene simulation method based on Carla, characterized in that the simulation method comprises the following steps:
calibrating and time synchronizing internal and external parameters among a plurality of sensors based on a multi-sensor data acquisition platform;
constructing a three-dimensional space map of a real scene, and extracting road network data of scene roads;
labeling and separating the dynamic and static traffic elements in the real driving scene, and outputting the local spatial position and type of the dynamic and static traffic elements in the real scene;
and generating a simulation scene file, reproducing the real driving scene, and storing the simulation scene file as a simulation scene library file which can be imported into a Carla simulation software platform.
2. The Carla-based real driving scene simulation method according to claim 1, wherein the sensors carried by the multi-sensor data acquisition platform specifically comprise: mechanical three-dimensional lidar, solid-state lidar, monocular/binocular industrial cameras, wide-angle cameras, inertial navigation, ultrasonic sensors and differential GPS.
3. The Carla-based real driving scene simulation method according to claim 1, wherein, in the process of constructing the three-dimensional space map of the real scene, the three-dimensional map of the real scene is constructed from three-dimensional lidar, inertial navigation and GPS data; real-time self-pose estimation and key frame calculation are performed in a tightly coupled data fusion mode; accumulated errors are eliminated using the global pose from the GPS; the key frames are spliced into a three-dimensional map model to generate a point cloud map model, yielding the point cloud map data of the three-dimensional scene; and the point cloud map is managed through an octree data structure and compressed to obtain a two-dimensional compressed scene plan.
4. The Carla-based real driving scene simulation method according to claim 1, wherein lane line and road surface feature detection is performed on the data of the monocular industrial camera using an image processing or deep learning algorithm framework to obtain two-dimensional plane data of the lane lines and road surface features; the two-dimensional data are combined with depth data to form three-dimensional lane line and road surface feature data, completing the preliminary extraction of the road network data; and the preliminarily extracted road network data and road features are imported into a three-dimensional scene editor.
5. The Carla-based real driving scene simulation method according to claim 1, characterized in that the static traffic elements comprise: road traffic signs, road curbs, traffic lights, roadside green belts and roadside buildings; and the dynamic traffic elements comprise pedestrians, motor vehicles and non-motor vehicles.
6. The Carla-based real driving scene simulation method according to claim 5, further comprising a data annotation tool for joint annotation and automatic interpolation annotation, wherein point cloud data are manually annotated through the main data annotation interface and the annotation result is automatically mapped onto the RGB data of the monocular camera; and the selected annotation box is adjusted in real time by simultaneously observing the annotation result on the RGB data and the point cloud annotation result, so that the adjusted annotation box completely contains the annotated object.
7. The Carla-based simulation method of real driving scenes according to claim 1, wherein generating a simulation scene file, reproducing the real driving scenes and saving as a simulation scene library file that can be imported into a Carla simulation software platform comprises two sub-processes:
constructing a three-dimensional model of a simulation scene by using a two-dimensional scene compression map marked with static traffic element position information and types and road network data;
and inserting the dynamic traffic elements according to the marked types and motion tracks of the dynamic traffic elements, and adding the FBX model data path to complete the construction of a complete simulation scene library file.
8. A real driving scene simulation system based on Carla is characterized in that the scene simulation system comprises:
the multi-sensor fusion acquisition module is used for calibrating and time synchronizing internal and external parameters among the sensors based on the multi-sensor data acquisition platform;
the scene three-dimensional map building module is used for building a three-dimensional space map of a real scene and extracting road network data of scene roads;
the dynamic and static traffic element labeling module is used for performing labeling separation on the dynamic and static traffic elements in the real driving scene based on multi-sensor fusion data and three-dimensional scene map data, and outputting the local spatial positions and types of the dynamic and static traffic elements in the real scene;
and the simulation scene file generation module is used for generating a simulation scene file, reproducing the real driving scene and storing the simulation scene file as a simulation scene library file which can be imported into the Carla simulation software platform.
9. An electronic device, comprising: the system comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory complete mutual communication through the communication bus; the memory has stored therein a computer program which, when executed by the processor, causes the processor to perform the steps of the method of any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that it stores a computer program executable by an electronic device, which, when run on the electronic device, causes the electronic device to perform the steps of the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210046583.6A CN114067062A (en) | 2022-01-17 | 2022-01-17 | Method and system for simulating real driving scene, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210046583.6A CN114067062A (en) | 2022-01-17 | 2022-01-17 | Method and system for simulating real driving scene, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114067062A true CN114067062A (en) | 2022-02-18 |
Family
ID=80231051
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210046583.6A Pending CN114067062A (en) | 2022-01-17 | 2022-01-17 | Method and system for simulating real driving scene, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114067062A (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114608563A (en) * | 2022-05-11 | 2022-06-10 | 成都瑞讯物联科技有限公司 | Navigation map generation method and fusion positioning navigation method |
CN115131455A (en) * | 2022-05-23 | 2022-09-30 | 华为技术有限公司 | Map generation method and related product |
CN115187742A (en) * | 2022-09-07 | 2022-10-14 | 西安深信科创信息技术有限公司 | Method, system and related device for generating automatic driving simulation test scene |
CN115290104A (en) * | 2022-07-14 | 2022-11-04 | 襄阳达安汽车检测中心有限公司 | Simulation map generation method, device, equipment and readable storage medium |
CN115374016A (en) * | 2022-10-25 | 2022-11-22 | 苏州清研精准汽车科技有限公司 | Test scene simulation system and method, electronic device and storage medium |
CN115687163A (en) * | 2023-01-05 | 2023-02-03 | 中汽智联技术有限公司 | Scene library construction method, device, equipment and storage medium |
CN115688484A (en) * | 2022-11-30 | 2023-02-03 | 西部科学城智能网联汽车创新中心(重庆)有限公司 | WebGL-based V2X simulation method and system |
CN116030211A (en) * | 2023-02-20 | 2023-04-28 | 之江实验室 | Method and device for constructing simulation map, storage medium and electronic equipment |
CN116206068A (en) * | 2023-04-28 | 2023-06-02 | 北京科技大学 | Three-dimensional driving scene generation and construction method and device based on real data set |
CN117132178A (en) * | 2023-10-27 | 2023-11-28 | 南京国准数据有限责任公司 | Scene application model construction method based on smart city |
CN118115685A (en) * | 2024-03-14 | 2024-05-31 | 重庆赛力斯凤凰智创科技有限公司 | Simulation scene generation and test method, device, equipment and medium |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB1376197A (en) * | 1972-11-06 | 1974-12-04 | Gray J T W | Driving game |
CN101669106A (en) * | 2007-04-25 | 2010-03-10 | 微软公司 | Virtual machine migration |
CN109765803A (en) * | 2019-01-24 | 2019-05-17 | 同济大学 | A kind of the simulation hardware test macro and method of the synchronic sky of the more ICU of autonomous driving vehicle |
CN109856993A (en) * | 2019-01-29 | 2019-06-07 | 北京奥特贝睿科技有限公司 | A kind of autonomous driving emulation platform |
CN109992886A (en) * | 2019-04-01 | 2019-07-09 | 浙江大学 | A kind of mixed traffic emulation mode based on social force |
CN110059393A (en) * | 2019-04-11 | 2019-07-26 | 东软睿驰汽车技术(沈阳)有限公司 | A kind of emulation test method of vehicle, apparatus and system |
CN110647839A (en) * | 2019-09-18 | 2020-01-03 | 深圳信息职业技术学院 | Method and device for generating automatic driving strategy and computer readable storage medium |
CN110795819A (en) * | 2019-09-16 | 2020-02-14 | 腾讯科技(深圳)有限公司 | Method and device for generating automatic driving simulation scene and storage medium |
CN111679660A (en) * | 2020-06-16 | 2020-09-18 | 中国科学院深圳先进技术研究院 | Unmanned deep reinforcement learning method integrating human-like driving behaviors |
CN112198859A (en) * | 2020-09-07 | 2021-01-08 | 西安交通大学 | Method, system and device for testing automatic driving vehicle in vehicle ring under mixed scene |
CN112198794A (en) * | 2020-09-18 | 2021-01-08 | 哈尔滨理工大学 | Unmanned driving method based on human-like driving rule and improved depth certainty strategy gradient |
CN112862980A (en) * | 2021-02-06 | 2021-05-28 | 西藏宁算科技集团有限公司 | Car arhud system based on Carla simulation platform |
CN113095241A (en) * | 2021-04-16 | 2021-07-09 | 武汉理工大学 | Target detection method based on CARLA simulator |
CN113536548A (en) * | 2021-06-29 | 2021-10-22 | 的卢技术有限公司 | Carla-based prefabricated track simulation scene construction method |
Patent Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB1376197A (en) * | 1972-11-06 | 1974-12-04 | Gray J T W | Driving game |
CN101669106A (en) * | 2007-04-25 | 2010-03-10 | 微软公司 | Virtual machine migration |
CN109765803A (en) * | 2019-01-24 | 2019-05-17 | 同济大学 | A kind of the simulation hardware test macro and method of the synchronic sky of the more ICU of autonomous driving vehicle |
CN109856993A (en) * | 2019-01-29 | 2019-06-07 | 北京奥特贝睿科技有限公司 | A kind of autonomous driving emulation platform |
CN109992886A (en) * | 2019-04-01 | 2019-07-09 | 浙江大学 | A kind of mixed traffic emulation mode based on social force |
CN110059393A (en) * | 2019-04-11 | 2019-07-26 | 东软睿驰汽车技术(沈阳)有限公司 | A kind of emulation test method of vehicle, apparatus and system |
CN110795819A (en) * | 2019-09-16 | 2020-02-14 | 腾讯科技(深圳)有限公司 | Method and device for generating automatic driving simulation scene and storage medium |
CN110647839A (en) * | 2019-09-18 | 2020-01-03 | 深圳信息职业技术学院 | Method and device for generating automatic driving strategy and computer readable storage medium |
CN111679660A (en) * | 2020-06-16 | 2020-09-18 | 中国科学院深圳先进技术研究院 | Unmanned deep reinforcement learning method integrating human-like driving behaviors |
CN112198859A (en) * | 2020-09-07 | 2021-01-08 | 西安交通大学 | Method, system and device for testing automatic driving vehicle in vehicle ring under mixed scene |
CN112198794A (en) * | 2020-09-18 | 2021-01-08 | 哈尔滨理工大学 | Unmanned driving method based on human-like driving rule and improved depth certainty strategy gradient |
CN112862980A (en) * | 2021-02-06 | 2021-05-28 | 西藏宁算科技集团有限公司 | Car arhud system based on Carla simulation platform |
CN113095241A (en) * | 2021-04-16 | 2021-07-09 | 武汉理工大学 | Target detection method based on CARLA simulator |
CN113536548A (en) * | 2021-06-29 | 2021-10-22 | 的卢技术有限公司 | Carla-based prefabricated track simulation scene construction method |
Non-Patent Citations (2)
Title |
---|
是小林吖-2020: "Introduction to Carla Simulator (Part 1)", 《HTTPS://WWW.BILIBILI.COM/READ/CV8349176》 * |
Wang Chengkang et al.: "Construction of a Driving Simulation Platform Based on CARLA", Journal of Jiamusi University (Natural Science Edition) * |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114608563A (en) * | 2022-05-11 | 2022-06-10 | 成都瑞讯物联科技有限公司 | Navigation map generation method and fusion positioning navigation method |
CN114608563B (en) * | 2022-05-11 | 2022-07-26 | 成都瑞讯物联科技有限公司 | Navigation map generation method and fusion positioning navigation method |
CN115131455A (en) * | 2022-05-23 | 2022-09-30 | 华为技术有限公司 | Map generation method and related product |
CN115290104A (en) * | 2022-07-14 | 2022-11-04 | 襄阳达安汽车检测中心有限公司 | Simulation map generation method, device, equipment and readable storage medium |
CN115187742A (en) * | 2022-09-07 | 2022-10-14 | 西安深信科创信息技术有限公司 | Method, system and related device for generating automatic driving simulation test scene |
CN115374016A (en) * | 2022-10-25 | 2022-11-22 | 苏州清研精准汽车科技有限公司 | Test scene simulation system and method, electronic device and storage medium |
CN115688484B (en) * | 2022-11-30 | 2023-07-25 | 西部科学城智能网联汽车创新中心(重庆)有限公司 | V2X simulation method and system based on WebGL |
CN115688484A (en) * | 2022-11-30 | 2023-02-03 | 西部科学城智能网联汽车创新中心(重庆)有限公司 | WebGL-based V2X simulation method and system |
CN115687163B (en) * | 2023-01-05 | 2023-04-07 | 中汽智联技术有限公司 | Scene library construction method, device, equipment and storage medium |
CN115687163A (en) * | 2023-01-05 | 2023-02-03 | 中汽智联技术有限公司 | Scene library construction method, device, equipment and storage medium |
CN116030211A (en) * | 2023-02-20 | 2023-04-28 | 之江实验室 | Method and device for constructing simulation map, storage medium and electronic equipment |
CN116030211B (en) * | 2023-02-20 | 2023-06-20 | 之江实验室 | Method and device for constructing simulation map, storage medium and electronic equipment |
CN116206068A (en) * | 2023-04-28 | 2023-06-02 | 北京科技大学 | Three-dimensional driving scene generation and construction method and device based on real data set |
CN117132178A (en) * | 2023-10-27 | 2023-11-28 | 南京国准数据有限责任公司 | Scene application model construction method based on smart city |
CN117132178B (en) * | 2023-10-27 | 2023-12-29 | 南京国准数据有限责任公司 | Scene application model construction method based on smart city |
CN118115685A (en) * | 2024-03-14 | 2024-05-31 | 重庆赛力斯凤凰智创科技有限公司 | Simulation scene generation and test method, device, equipment and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114067062A (en) | Method and system for simulating real driving scene, electronic equipment and storage medium | |
US11763474B2 (en) | Method for generating simulated point cloud data, device, and storage medium | |
CN107678306B (en) | Dynamic scene information recording and simulation playback method, device, equipment and medium | |
US10297074B2 (en) | Three-dimensional modeling from optical capture | |
CN107844635B (en) | System for realizing BIM information and traffic simulation information integration and integration method thereof | |
Zollmann et al. | Augmented reality for construction site monitoring and documentation | |
US20190026400A1 (en) | Three-dimensional modeling from point cloud data migration | |
CN110796714B (en) | Map construction method, device, terminal and computer readable storage medium | |
WO2017020465A1 (en) | Modelling method and device for three-dimensional road model, and storage medium | |
CN110533768B (en) | Simulated traffic scene generation method and system | |
Azfar et al. | Efficient procedure of building university campus models for digital twin simulation | |
CN113009506A (en) | Virtual-real combined real-time laser radar data generation method, system and equipment | |
CN109685893B (en) | Space integrated modeling method and device | |
KR20200136723A (en) | Method and apparatus for generating learning data for object recognition using virtual city model | |
CN110793548A (en) | Navigation simulation test system based on virtual-real combination of GNSS receiver hardware in loop | |
Wang et al. | A synthetic dataset for Visual SLAM evaluation | |
CN116978010A (en) | Image labeling method and device, storage medium and electronic equipment | |
Lin et al. | 3D environmental perception modeling in the simulated autonomous-driving systems | |
CN113001985A (en) | 3D model, device, electronic equipment and storage medium based on oblique photography construction | |
Wahbeh et al. | Image-based reality-capturing and 3D modelling for the creation of VR cycling simulations | |
CN117036607A (en) | Automatic driving scene data generation method and system based on implicit neural rendering | |
Lu et al. | LiDAR-Forest Dataset: LiDAR Point Cloud Simulation Dataset for Forestry Application | |
EP4437497A1 (en) | Machine learning for vector map generation | |
Bai et al. | Cyber mobility mirror for enabling cooperative driving automation: A co-simulation platform | |
CN112906241B (en) | Mining area automatic driving simulation model construction method, mining area automatic driving simulation model construction device, mining area automatic driving simulation model construction medium and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20220218 |