CN116663329B - Automatic driving simulation test scene generation method, device, equipment and storage medium


Info

Publication number
CN116663329B
CN116663329B
Authority
CN
China
Prior art keywords
coordinate
data
traffic participant
inertial measurement
traffic
Prior art date
Legal status
Active
Application number
CN202310923570.7A
Other languages
Chinese (zh)
Other versions
CN116663329A (en)
Inventor
任鹏飞
潘余曦
杨子江
Current Assignee
Anhui Xinxin Science And Technology Innovation Information Technology Co ltd
Original Assignee
Anhui Xinxin Science And Technology Innovation Information Technology Co ltd
Application filed by Anhui Xinxin Science And Technology Innovation Information Technology Co ltd
Priority to CN202310923570.7A
Publication of CN116663329A
Application granted
Publication of CN116663329B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 - Computer-aided design [CAD]
    • G06F30/20 - Design optimisation, verification or simulation
    • G06F8/00 - Arrangements for software engineering
    • G06F8/40 - Transformation of program code
    • G06F8/41 - Compilation
    • G06F8/53 - Decompilation; Disassembly

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • Geometry (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Traffic Control Systems (AREA)

Abstract

The disclosure provides a method, a device, equipment and a storage medium for generating an automatic driving simulation test scene, and relates to the technical field of automatic driving simulation testing. The method comprises the following steps: acquiring road acquisition data, wherein the road acquisition data comprises perception data of at least one sensor; extracting coordinate data of each traffic participant from the perception data of the at least one sensor; determining the motion trail of each traffic participant according to the coordinate data of each traffic participant; and compiling the motion trail into a corresponding scene description language to generate a simulation test scene corresponding to the road acquisition data. Simulation test scenes can thus be generated conveniently and efficiently from road acquisition data, providing more diverse and realistic test scenes for simulation testing.

Description

Automatic driving simulation test scene generation method, device, equipment and storage medium
Technical Field
The disclosure relates to the technical field of automatic driving simulation test, in particular to a method, a device, equipment and a storage medium for generating an automatic driving simulation test scene.
Background
Autopilot technology needs to undergo rigorous and complete system testing to ensure its safety before being applied on a large scale. Currently, road testing is the primary means of safety testing for autopilot systems. However, road testing is costly, heavily constrained by laws and regulations, and ill-suited to realizing extreme test scenes, so attention has gradually turned to simulation testing. Scenario-based simulation testing has the advantages of low cost, high efficiency and high safety, and is widely regarded as a replacement for road testing.
Scenario-based simulation testing requires the user to accurately describe the traffic environment (scene) in which the automatic driving vehicle under test is located; the simulation system then generates a corresponding simulation test scene from that description. Test scenes are usually constructed manually according to test requirements, traffic accident data and the like. Simulation test scenes built this way are monotonous, and the building process is time-consuming and labor-intensive.
Disclosure of Invention
The present disclosure provides a method, an apparatus, a device, and a storage medium for generating an automatic driving simulation test scene, which can generate simulation test scenes from road acquisition data conveniently and efficiently, so as to provide more diverse and realistic test scenes for simulation testing.
According to a first aspect of the present disclosure, there is provided a method for generating an automatic driving simulation test scene, including: acquiring road acquisition data, wherein the road acquisition data comprises perception data of at least one sensor; extracting coordinate data of each traffic participant from the perception data of the at least one sensor; determining the motion trail of each traffic participant according to the coordinate data of each traffic participant; and compiling the motion trail into a corresponding scene description language to generate a simulation test scene corresponding to the road acquisition data.
In some possible implementations, the road acquisition data includes perception data of at least two sensors, and determining the motion trail of each traffic participant according to the coordinate data of each traffic participant includes: converting the coordinate data of the traffic participants extracted from the perception data of the at least two sensors into the same world coordinate system respectively; and determining the motion trail of each traffic participant according to the converted coordinate data of the traffic participants.
In some possible implementations, the road acquisition data further includes perception data of an inertial measurement unit, and converting the coordinate data of the traffic participants extracted from the perception data of the at least two sensors into the same world coordinate system respectively includes: converting the extracted coordinate data of the traffic participants respectively into the coordinate system of the corresponding frame of the inertial measurement unit, according to pre-calibrated coordinate conversion relations among the different sensors; and converting the coordinate data of the traffic participants respectively into the coordinate system of the first frame of the inertial measurement unit, according to pre-calculated coordinate conversion relations among the frames of the inertial measurement unit.
In some possible implementations, determining the movement trajectories of the respective traffic participants from the coordinate data of the respective traffic participants includes: and tracking the track of the traffic participants corresponding to the coordinate data of each traffic participant according to a preset algorithm and the coordinate data of each traffic participant, and acquiring the motion track of the corresponding traffic participants.
In some possible implementations, tracking the traffic participants corresponding to the coordinate data of each traffic participant according to a preset algorithm and the coordinate data, and acquiring the motion trail of the corresponding traffic participants, includes: tracking the traffic participants corresponding to the coordinate data of each traffic participant using Kalman filtering and the Hungarian algorithm, and acquiring the coordinates of the corresponding traffic participants in consecutive frames, with the traffic participant type and the three-dimensional Euclidean distance as the association conditions; and taking the coordinates of the corresponding traffic participants in the consecutive frames as the motion trail of each traffic participant.
In some possible implementations, compiling the motion trail into a corresponding scene description language to generate a simulation test scene corresponding to the road acquisition data includes: restoring the motion trail of the traffic participants to a corresponding scene description language according to a pre-selected scene description language, obtaining a scene description file, and generating the simulation test scene.
In some possible implementations, restoring the motion trail of the traffic participants to a corresponding scene description language according to a pre-selected scene description language, obtaining a scene description file, and generating the simulation test scene includes: representing, via the follow-trajectory action in the pre-selected scene description language, the coordinates of each point on the motion trail of a traffic participant and the time node corresponding to each point as vertices, so as to obtain the scene description file; and generating the simulation test scene according to the scene description file.
According to a second aspect of the present disclosure, there is provided an automatic driving simulation test scene generating apparatus, including: an acquisition module for acquiring road acquisition data, wherein the road acquisition data comprises perception data of at least one sensor; an extraction module for respectively extracting coordinate data of each traffic participant from the perception data of the at least one sensor; and a processing module for determining the motion trail of each traffic participant according to the coordinate data of each traffic participant, and compiling the motion trail into a corresponding scene description language to generate a simulation test scene corresponding to the road acquisition data.
In some possible implementations, the road acquisition data includes sensing data of at least two sensors, and the processing module is specifically configured to convert coordinate data of the traffic participant extracted from the sensing data of the at least two sensors into the same world coordinate system respectively; and determining the movement track of each traffic participant according to the converted coordinate data of the traffic participant.
In some possible implementations, the road acquisition data further includes perception data of the inertial measurement unit; the processing module is specifically used for respectively converting the extracted coordinate data of the traffic participants into a coordinate system of a frame corresponding to the inertial measurement unit according to coordinate conversion relations among different pre-calibrated sensors; according to the coordinate conversion relation among frames of the inertial measurement unit, coordinate data of the traffic participants are respectively converted into a coordinate system of a first frame of the inertial measurement unit.
In some possible implementations, the processing module is specifically configured to track a traffic participant corresponding to the coordinate data of each traffic participant according to a preset algorithm and the coordinate data of each traffic participant, and obtain a motion track of the corresponding traffic participant.
In some possible implementations, the processing module is specifically configured to track the traffic participants corresponding to the coordinate data of each traffic participant using Kalman filtering and the Hungarian algorithm, and to acquire the coordinates of the corresponding traffic participants in consecutive frames, with the traffic participant type and the three-dimensional Euclidean distance as the association conditions; and to take the coordinates of the corresponding traffic participants in the consecutive frames as the motion trail of each traffic participant.
In some possible implementations, the processing module is specifically configured to restore the motion trail of the traffic participant to a corresponding scene description language according to a pre-selected scene description language, obtain a scene description file, and generate a simulation test scene.
In some possible implementations, the processing module is specifically configured to obtain the scene description file by representing, via the follow-trajectory action in the pre-selected scene description language, the coordinates of each point on the motion trail of a traffic participant and the time node corresponding to each point as vertices; and to generate the simulation test scene according to the scene description file.
According to a third aspect of the present disclosure, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as provided in the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method provided according to the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method provided according to the first aspect.
Through the sensors for environment perception arranged on the host vehicle, the present disclosure can collect data on the host vehicle and its surroundings while the host vehicle is driving, thereby obtaining the road acquisition data. Data related to the traffic participants can then be extracted from the road acquisition data. From the extracted data, the motion trail of each traffic participant perceived by the host vehicle can further be obtained, the motion trails of the traffic participants can be restored into a simulation test scene, and a scene description file can be generated based on a scene description language. In this way, the traffic participants perceived while the host vehicle is driving are constructed into corresponding simulation test scenes according to their motion trails, which provides richer test scenes for simulation testing, replaces manual construction of test scenes, and improves the efficiency of generating simulation test scenes.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a first schematic flowchart of an automatic driving simulation test scene generation method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of detecting traffic participants in an image frame provided by an embodiment of the present disclosure;
fig. 3 is a schematic diagram of detecting traffic participants in a radar point cloud data frame provided by an embodiment of the present disclosure;
FIG. 4 is a second flow chart of an automatic driving simulation test scenario generation method according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a traffic participant after fusing coordinate data provided by an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of converting a coordinate system between two adjacent frames of an inertial measurement unit according to an embodiment of the disclosure;
fig. 7 is a schematic diagram of the composition of an autopilot simulation test scenario generating apparatus provided in an embodiment of the present disclosure;
fig. 8 illustrates a schematic block diagram of an example electronic device 800 that may be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The automatic driving simulation test scene generation method and apparatus provided by the present disclosure are suitable for generating simulation test scenes from road acquisition data. The method may be executed by an automatic driving simulation test scene generating apparatus, which may be implemented in software and/or hardware and specifically configured in an electronic device; the electronic device may be a computing device such as a server, a computer, a user terminal, a mobile device or a single-chip microcomputer, which is not limited herein.
The method for generating the automatic driving simulation test scene provided by the present disclosure is first described in detail below.
Generally, autopilot technology needs to undergo rigorous and complete system testing to ensure its safety before being applied on a large scale. Currently, road testing is the primary means of safety testing for autopilot systems. However, road testing is costly, heavily constrained by laws and regulations, and ill-suited to realizing extreme test scenes, so attention has gradually turned to simulation testing. Scenario-based simulation testing has the advantages of low cost, high efficiency and high safety, and is widely regarded as a replacement for road testing.
Scenario-based simulation testing requires the user to accurately describe the traffic environment (scene) in which the automatic driving vehicle under test is located; the simulation system then generates a corresponding simulation test scene from that description. Test scenes are usually constructed manually according to test requirements, traffic accident data and the like. Simulation test scenes built this way are monotonous, and the building process is time-consuming and labor-intensive.
In this regard, the present disclosure provides a method for generating an automatic driving simulation test scene, including: acquiring road acquisition data, wherein the road acquisition data comprises perception data of at least one sensor; extracting coordinate data of each traffic participant from the perception data of the at least one sensor; determining the motion trail of each traffic participant according to the coordinate data of each traffic participant; and compiling the motion trail into a corresponding scene description language to generate a simulation test scene corresponding to the road acquisition data.
Through the sensors for environment perception arranged on the host vehicle, the present disclosure can collect data on the host vehicle and its surroundings while the host vehicle is driving, thereby obtaining the road acquisition data. Data related to the traffic participants can then be extracted from the road acquisition data. From the extracted data, the motion trail of each traffic participant perceived by the host vehicle can further be obtained, the motion trails of the traffic participants can be restored into a simulation test scene, and a scene description file can be generated based on a scene description language. In this way, the traffic participants perceived while the host vehicle is driving are constructed into corresponding simulation test scenes according to their motion trails, which provides richer test scenes for simulation testing, replaces manual construction of test scenes, and improves the efficiency of generating simulation test scenes.
Fig. 1 is a flowchart of an automatic driving simulation test scenario generation method according to an embodiment of the present disclosure. As shown in fig. 1, the method may include the following S101-S104.
S101, acquiring road acquisition data, wherein the road acquisition data comprise perception data of at least one sensor.
For example, the road acquisition data may be sensor data (i.e., the perception data of the sensors) acquired, during driving of the host vehicle, by the sensors for environment perception provided on the host vehicle.
The sensor for sensing environment may include RGB camera, laser radar, millimeter wave radar, satellite positioning and inertial navigation system, wheel speed meter, etc. without limitation.
During driving of the host vehicle, the environment around the host vehicle can be perceived by means of these sensors, i.e. the environmental state that directly affects the perception-decision-control system of the autonomous vehicle, such as: time, weather, road (road network structure, road surface conditions, lane lines), traffic signs, and traffic participants (category, location, speed, etc.). This facilitates the subsequent extraction of the coordinate data of the traffic participants from the road acquisition data.
It should be noted that, since the road acquisition data is data acquired by the sensor, the road acquisition data is composed of data frames arranged in time series. When the plurality of sensors are included, different sensors respectively correspond to one data frame sequence in the road acquisition data.
S102, respectively extracting coordinate data of each traffic participant from the perception data of at least one sensor.
Wherein the sensor's sensory data may include target level data for each of the traffic participants. The target level data may include data collected by the sensor devices related to a particular target or task, such as image data of traffic participants, coordinate data of the traffic participants. The coordinate data of the traffic participants (i.e., the coordinate data under the coordinate system of the corresponding sensor) may be derived from the target level data of each traffic participant.
For example, the coordinate data of the corresponding traffic participants in each data frame may be marked in the road acquisition data (i.e. in the perception data of at least one sensor) by manual annotation. Of course, a neural network target detection algorithm can also be used to identify and detect the traffic participants in each data frame of the road acquisition data, so that the coordinate data of the corresponding traffic participants are extracted from the detection results.
Alternatively, in practical application, the road acquisition data may include multiple groups of sensor data acquired by multiple sensors respectively.
Accordingly, extracting the coordinate data of the traffic participants from the road acquisition data may include: extracting the coordinate data of the traffic participants from the multiple sets of sensor data respectively, i.e. extracting from the sensor data acquired by each sensor the coordinate data of the traffic participants perceived by that sensor. The coordinate data collected by multiple sensors improves the coverage of the traffic participants, so that their motion trails can be determined more accurately and completely, yielding more accurate and richer simulation test scenes.
By way of example, road acquisition data includes image data acquired by an RGB camera, and radar data acquired by a lidar.
The traffic participants in the image data acquired by the RGB camera can be detected by a deep-learning convolutional neural network target detection algorithm. For example, the traffic participants may include motor vehicles, non-motor vehicles and pedestrians. Target detection may be performed on the image data to detect and annotate the traffic participants in each image frame. For example, as shown in FIG. 2, a traffic participant 201 in an image frame may be detected and the detected traffic participant 201 annotated.
Likewise, the traffic participants in the radar point cloud data acquired by the lidar can be detected by a deep-learning convolutional neural network target detection algorithm. For example, the traffic participants may include motor vehicles, non-motor vehicles and pedestrians. Target detection may be performed on the radar point cloud data to detect and annotate the traffic participants in each radar point cloud data frame. For example, as shown in fig. 3, a traffic participant 301 in a radar point cloud data frame may be detected and the detected traffic participant 301 annotated.
The neural network target detection algorithm for detecting traffic participants in image data and the one for detecting traffic participants in radar point cloud data can each be obtained by training on a training set containing image data or radar point cloud data of traffic participants; of course, target detection algorithms from the related art can also be adopted directly, which is not limited herein.
Of course, in other possible embodiments of the present application, manual annotation may also be adopted to identify and label the traffic participants in the image data collected by the RGB camera and in the radar point cloud data collected by the lidar, which is not limited herein.
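To make the shape of this extraction step concrete, the following is a minimal Python sketch of collecting per-frame coordinate data from a detector. The `detect_objects` callable and the `timestamp` attribute are hypothetical placeholders standing in for whatever detection algorithm (neural network or manual annotation) and frame format are actually used:

```python
from dataclasses import dataclass
from typing import Iterable, List, Tuple

@dataclass
class Detection:
    frame_index: int       # position of the data frame in the time series
    timestamp: float       # acquisition time of the frame, in seconds
    category: str          # e.g. "motor_vehicle", "non_motor_vehicle", "pedestrian"
    xyz: Tuple[float, float, float]  # coordinates in the sensor's own coordinate system

def extract_coordinates(frames: Iterable, detect_objects) -> List[Detection]:
    """Run a (hypothetical) detector over one sensor's data-frame sequence
    and collect the coordinate data of every traffic participant it reports."""
    results: List[Detection] = []
    for i, frame in enumerate(frames):
        for category, xyz in detect_objects(frame):
            results.append(Detection(i, frame.timestamp, category, xyz))
    return results
```

The resulting list of per-frame records is the input to the trajectory determination in S103 below.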
S103, determining the movement track of each traffic participant according to the coordinate data of each traffic participant.
Since the road acquisition data is composed of data frames, the extracted coordinate data of the traffic participants is also organized by data frame. Thus, for example, the motion trail of each traffic participant in the coordinate data may be determined in the order of the data frames. In general, the motion trail of a traffic participant may include the world coordinates of the traffic participant in each corresponding data frame and the time point at which each frame was collected.
Optionally, when the road acquisition data is composed of sensor data collected by a plurality of sensors (i.e., the road acquisition data includes perception data of at least two sensors), the correspondingly extracted coordinate data of the traffic participants also includes a plurality of groups corresponding to the plurality of sensors.
At this time, determining the movement trajectories of the respective traffic participants based on the coordinate data of the respective traffic participants may include:
and respectively converting the extracted coordinate data of the traffic participants into the same world coordinate system.
And determining the movement track of each traffic participant according to the converted coordinate data of the traffic participant.
I.e. the coordinate data of the traffic participants corresponding to different sensors are respectively converted into the same world coordinate system (e.g. into the world coordinate system corresponding to the same sensor). And then determining the movement track of each traffic participant according to the coordinate data of the traffic participant corresponding to each traffic participant converted into the same world coordinate system.
In this way, the positions of the traffic participants can be expressed in one coordinate system, so that the motion trail of each traffic participant can be determined accurately and uniformly. Moreover, by converting the coordinate data of every traffic participant into the same world coordinate system, the coordinate data collected by different sensors detecting the same traffic participant can be combined (i.e. fused), removing redundant data from the coordinate data of the traffic participants.
S104, compiling the motion trail into a corresponding scene description language to generate a simulation test scene corresponding to the road acquisition data.
That is, the motion scenes of the traffic participants perceived by the host vehicle during driving are constructed into simulation test scenes according to the motion trails of the traffic participants, and compiled into a scene description language to generate the simulation test scene corresponding to the road acquisition data.
Wherein a scene description language is a formal language for describing a scene and its evolution over time. It is a kind of domain-specific language (Domain Specific Language, DSL). Like general-purpose programming languages, scene description languages have defined grammatical rules and structures, but their scope of application is limited to a specific problem domain (e.g., describing traffic environments only).
Scene description scripts written in a scene description language can be parsed and processed by a computer, in particular in a simulation environment, to enable simulation of the scene. Unlike natural language, a scene description language follows a strict grammatical structure and has a unique parse, which eliminates ambiguity.
The presence of a scene description language helps to systematically and accurately describe complex scenarios. By using domain-specific language elements and grammar rules, it can more efficiently represent and convey information about a particular domain. Due to its structuring and accuracy, scene description languages are widely used in a variety of fields, such as traffic simulation, virtual reality, game development, etc.
The use of a scene description language may help developers and researchers define and process scenes more easily in a particular field and interact with a computer system. By eliminating language ambiguity, the scene description language can improve the accuracy and efficiency of communication, and ensure consistent understanding of scenes and correct simulation results.
Through the sensors for environment perception arranged on the host vehicle, the present disclosure can collect data on the host vehicle and its surroundings while the host vehicle is driving, thereby obtaining the road acquisition data. Data related to the traffic participants can then be extracted from the road acquisition data. From the extracted data, the motion trail of each traffic participant perceived by the host vehicle can further be obtained, the motion trails of the traffic participants can be restored into a simulation test scene, and a scene description file can be generated based on a scene description language. In this way, the traffic participants perceived while the host vehicle is driving are constructed into corresponding simulation test scenes according to their motion trails, which provides richer test scenes for simulation testing, replaces manual construction of test scenes, and improves the efficiency of generating simulation test scenes.
Optionally, the road acquisition data may further include sensing data of the inertial measurement unit. I.e. the host vehicle is also provided with an inertial measurement unit.
Accordingly, the extracted coordinate data of the traffic participants are respectively converted into the same world coordinate system, as shown in fig. 4, may include:
s401, according to coordinate conversion relations among different pre-calibrated sensors, the extracted coordinate data of the traffic participants are respectively converted into a coordinate system of a corresponding frame of the inertial measurement unit.
The coordinate conversion relation among different sensors can be obtained by calculating coordinates of different characteristic points in different sensor coordinate systems in advance according to the internal parameters and the external parameters calibrated by the sensors and the different characteristic points in the calibration object.
For example, sensors that collect coordinate data of traffic participants include RGB cameras and lidars.
The data corresponding to the calibration object can be respectively acquired through the RGB camera and the laser radar. And then respectively extracting data information acquired for the same characteristic point of the calibration object from the data acquired by the RGB camera and the data acquired by the laser radar, and calculating and determining the coordinate conversion relation between the RGB camera and the laser radar according to the coordinates of the characteristic point in the coordinate system of the RGB camera, the coordinates in the pixel coordinate system of the image data acquired by the RGB camera and the coordinates in the coordinate system of the laser radar.
For example, when the camera and the lidar simultaneously observe a point P, its coordinates in the RGB camera coordinate system may be P_C = (x_C, y_C, z_C), its pixel coordinates in the image data acquired by the RGB camera may be (u, v, 1), and its coordinates in the lidar coordinate system may be P_L = (x_L, y_L, z_L). The conversion relation from the RGB camera coordinate system to the lidar coordinate system is [R_CL | T_CL], which satisfies the following formula:
R_CL × P_C + T_CL = P_L
where P_C = K^(-1) × (u, v, 1)^T, and K is the intrinsic parameter matrix of the RGB camera (typically obtained when the RGB camera's intrinsic parameters are calibrated).
Therefore, the coordinate conversion relation between the RGB camera and the lidar can be obtained by solving, with a least-squares method over at least 4 different feature points, from the pixel coordinates in the image data acquired by the RGB camera and the coordinates in the lidar coordinate system.
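As a concrete illustration of such a least-squares solution, here is a small numpy sketch under the assumption that the feature points are available as paired 3D coordinate arrays in both frames (the camera-frame points having been recovered via P_C = K^(-1) × (u, v, 1)^T as above). It uses the SVD-based Kabsch fit, one standard closed-form least-squares solver for [R | T], and checks itself on synthetic data:

```python
import numpy as np

def solve_rigid_transform(P_src, P_dst):
    """Least-squares fit of [R | T] such that R @ p_src + T ~ p_dst
    (Kabsch/SVD method) from paired 3D feature points."""
    c_src, c_dst = P_src.mean(axis=0), P_dst.mean(axis=0)
    H = (P_src - c_src).T @ (P_dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # guard against reflections
    T = c_dst - R @ c_src
    return R, T

# Self-check on synthetic calibration points with a known transform
# (illustrative values, not from the patent):
rng = np.random.default_rng(0)
P_C = rng.uniform(-5.0, 5.0, size=(8, 3))          # feature points, camera frame
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
T_true = np.array([0.8, 0.0, -1.2])
P_L = P_C @ R_true.T + T_true                      # same points, lidar frame
R_CL, T_CL = solve_rigid_transform(P_C, P_L)
assert np.allclose(R_CL, R_true) and np.allclose(T_CL, T_true)
```

The same routine applies unchanged to the lidar-to-inertial-measurement-unit calibration described next.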
Correspondingly, the coordinate conversion relation between the laser radar and the inertial measurement unit can also be calculated by measuring the included angles between the coordinate axes of the coordinate system of the inertial measurement unit and the coordinate axes of the coordinate system of the laser radar and the offset of the coordinate origin between the two coordinate systems.
For example, for a point P, its coordinates in the lidar coordinate system may be P_L = (x_L, y_L, z_L), and its coordinates in the inertial measurement unit coordinate system may be P_I = (x_I, y_I, z_I). The conversion relation from the lidar coordinate system to the inertial measurement unit coordinate system is [R_LI | T_LI], which satisfies the following formula:
R_LI × P_L + T_LI = P_I
therefore, the coordinate conversion relation between the laser radar and the inertial measurement unit can be obtained by solving a plurality of different characteristic points based on a least square method according to the corresponding coordinates in the coordinate system of the laser radar and the corresponding coordinates in the coordinate system of the inertial measurement unit.
Therefore, the coordinate data of the traffic participants extracted from each frame of image data acquired by the RGB camera can be converted into the coordinate system of the corresponding lidar frame. Then, in the lidar coordinate system, the coordinate data of the traffic participants extracted from the converted image data is combined with the coordinate data of the traffic participants extracted from the radar point cloud data acquired by the lidar (for example, the overlapping data is matched and combined using the Hungarian algorithm). For example, as shown in fig. 5, in the lidar coordinate system the two detection sets are merged where they overlap, yielding 5 traffic participants for the frame: a, b and c are merged traffic participants, d is a traffic participant whose coordinate data was extracted from the image data and converted into the lidar coordinate system, and e is a traffic participant whose coordinate data was extracted from the radar point cloud data.
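The overlap matching just described might be sketched as follows, assuming both detection sets are (N, 3) numpy arrays already expressed in the lidar frame; the merge-by-averaging rule and the 1 m gate are illustrative assumptions, not values taken from the patent:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def merge_detections(cam_pts, lidar_pts, max_dist=1.0):
    """Fuse camera-derived and lidar-derived detections, both given as
    (N, 3) coordinate arrays already expressed in the lidar frame.

    Overlapping detections within `max_dist` metres are paired by the
    Hungarian algorithm and merged by averaging; unmatched detections from
    either sensor are kept as they are (participants d and e in fig. 5).
    """
    cost = np.linalg.norm(cam_pts[:, None, :] - lidar_pts[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    merged, used_cam, used_lidar = [], set(), set()
    for i, j in zip(rows, cols):
        if cost[i, j] <= max_dist:
            merged.append((cam_pts[i] + lidar_pts[j]) / 2.0)
            used_cam.add(i)
            used_lidar.add(j)
    merged += [cam_pts[i] for i in range(len(cam_pts)) if i not in used_cam]
    merged += [lidar_pts[j] for j in range(len(lidar_pts)) if j not in used_lidar]
    return np.asarray(merged)
```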
Finally, according to the coordinate conversion relation between the laser radar and the inertial measurement unit, the coordinate data of each traffic participant in the coordinate system of each frame of the laser radar can be respectively transferred to the coordinate system of the corresponding frame of the inertial measurement unit.
S402, according to the coordinate conversion relation among frames of the inertial measurement unit, coordinate data of the traffic participants are respectively converted into a coordinate system of a first frame of the inertial measurement unit.
The coordinate conversion relation between frames of the inertial measurement unit can be obtained in advance from the sensor data collected by the inertial measurement unit (i.e. its perception data): the longitudinal and lateral displacement of the origin of the inertial measurement unit coordinate system between two adjacent frames is calculated from host-vehicle motion parameters such as longitudinal velocity, lateral velocity and yaw angle in the corresponding time frame, and the rotation angle of the inertial measurement unit coordinate system about the Z axis between the two adjacent frames is calculated from the yaw angle. The coordinate conversion relation between frames of the inertial measurement unit is then determined from the origin displacement and the rotation about the Z axis between adjacent frames.
For example, as shown in fig. 6, the coordinate systems (X_A, Y_A, Z_A) and (X_B, Y_B, Z_B) of two adjacent frames have an origin displacement T, and the rotation angle of the coordinate system between the two adjacent frames about the Z axis is θ. Then the coordinate conversion relation between the two frames is [R | T], where R is the rotation matrix and T is the translation vector. R may satisfy the following formula:
R = | cos θ  -sin θ  0 |
    | sin θ   cos θ  0 |
    |  0        0    1 |
Thus, the coordinate data of the traffic participants in the coordinate system of each frame of the inertial measurement unit can be converted into the coordinate system of one and the same frame according to the coordinate conversion relations between frames. For example, the coordinate data of the traffic participants in the coordinate system of each frame may be converted into the coordinate system corresponding to the first data frame of the inertial measurement unit. Of course, in other embodiments of the present application, the coordinate data may also be converted into the coordinate system corresponding to another data frame of the inertial measurement unit, which is not limited herein.
Therefore, the coordinate data of the traffic participants extracted from the data acquired by the sensors can be converted into the coordinate system corresponding to the same frame of data frame of the inertial measurement unit, so that the coordinate data of the traffic participants can be simply and conveniently converted into the same world coordinate system, and the conversion efficiency is improved.
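As an illustration of S401-S402 taken together, the following numpy sketch chains the pre-computed adjacent-frame transforms so that the coordinate data of every frame lands in the first inertial measurement unit frame; the (theta, t) inputs stand for the yaw increment and origin displacement between adjacent frames derived from the inertial measurement unit data as described above:

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix about the Z axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def to_first_frame(points_per_frame, adjacent_steps):
    """Express per-frame participant coordinates in the first IMU frame.

    points_per_frame: list of (N_k, 3) arrays, one per IMU data frame.
    adjacent_steps: list of (theta, t) pairs, one per frame k >= 1, where
        [rot_z(theta) | t] maps coordinates of frame k into frame k-1
        (derived from the yaw angle and origin displacement, as in the text).
    """
    R_acc, T_acc = np.eye(3), np.zeros(3)   # accumulated frame-k -> frame-0
    out = [np.asarray(points_per_frame[0], dtype=float)]
    for pts, (theta, t) in zip(points_per_frame[1:], adjacent_steps):
        # Compose the adjacent-frame transform onto the accumulator,
        # translation first (it uses the previous accumulated rotation).
        T_acc = R_acc @ np.asarray(t, dtype=float) + T_acc
        R_acc = R_acc @ rot_z(theta)
        out.append(np.asarray(pts, dtype=float) @ R_acc.T + T_acc)
    return out
```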
Alternatively, the sensory data of the sensor may include relative position information of each traffic participant with respect to the sensor. The relative position information of each traffic participant relative to the sensor may be the relative distance between each traffic participant and the sensor, or may be other forms of data such as images of the traffic participants collected by the sensor. When the relative position information is data in other forms such as images, the coordinate data of each traffic participant can be determined according to the relative position information when the motion trail of each traffic participant is determined according to the coordinate data of the traffic participant. And then determining the movement track of each traffic participant according to the coordinate data of each traffic participant.
Therefore, the movement track of each traffic participant can be determined simply, conveniently and quickly according to the coordinate data of each traffic participant, and the efficiency of determining the movement track of the traffic participant is improved.
Of course, in other possible embodiments of the present application, when the above-mentioned relative position information is other data such as an image, the movement track of the corresponding traffic participant may also be determined by means of object detection and identification, which is not limited herein.
Optionally, determining the movement track of each traffic participant according to the coordinate data of each traffic participant may include:
and tracking the track of the traffic participants corresponding to the coordinate data of each traffic participant according to a preset algorithm and the coordinate data of each traffic participant, and acquiring the motion track of the corresponding traffic participants.
For example, a Kalman filtering algorithm and the Hungarian algorithm may be used to track the traffic participants corresponding to the coordinate data of the traffic participants; then, based on the converted common world coordinate system, the world coordinates of the traffic participants in consecutive frames are obtained, using the traffic participant type and the three-dimensional Euclidean distance as the association conditions, so as to obtain the motion trail of each traffic participant.
Therefore, the motion trail of each traffic participant can be simply and quickly calculated, and the efficiency of determining the motion trail of the traffic participant is improved.
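A compact sketch of this tracking step is given below: a simplified constant-velocity Kalman filter with Hungarian association via scipy. The noise covariances and the 3 m distance gate are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

class Track:
    """Constant-velocity Kalman track over the 3D world coordinates of one participant."""
    def __init__(self, category, xyz, dt=0.1):
        self.category = category
        self.x = np.hstack([xyz, np.zeros(3)])              # [px, py, pz, vx, vy, vz]
        self.P = np.eye(6)                                  # state covariance
        self.F = np.eye(6); self.F[:3, 3:] = dt * np.eye(3) # constant-velocity model
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])   # positions are observed
        self.Q = 0.01 * np.eye(6)                           # process noise (illustrative)
        self.R = 0.1 * np.eye(3)                            # measurement noise (illustrative)
        self.trail = [np.asarray(xyz, dtype=float)]         # coordinates in consecutive frames

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3]

    def update(self, z):
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(6) - K @ self.H) @ self.P
        self.trail.append(self.x[:3].copy())

def associate_frame(tracks, detections, gate=3.0):
    """Match one frame of detections (category, xyz) to existing tracks.

    The cost is the 3D Euclidean distance between the Kalman prediction and
    the detection; pairs with mismatched category, or farther apart than
    `gate` metres, are rejected, mirroring the two association conditions
    (participant type and 3D Euclidean distance) used above.
    """
    preds = [t.predict() for t in tracks]
    cost = np.full((len(tracks), len(detections)), 1e6)
    for i, t in enumerate(tracks):
        for j, (cat, xyz) in enumerate(detections):
            if cat == t.category:
                cost[i, j] = np.linalg.norm(preds[i] - np.asarray(xyz))
    rows, cols = linear_sum_assignment(cost)    # Hungarian algorithm
    matched = set()
    for i, j in zip(rows, cols):
        if cost[i, j] <= gate:
            tracks[i].update(np.asarray(detections[j][1], dtype=float))
            matched.add(j)
    for j, (cat, xyz) in enumerate(detections): # unmatched detections open new tracks
        if j not in matched:
            tracks.append(Track(cat, xyz))
    return tracks
```

Calling `associate_frame` once per data frame, in time order, leaves each track's `trail` holding the coordinates of one traffic participant in consecutive frames, i.e. its motion trail.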
Optionally, compiling the motion trail into a corresponding scene description language to generate a simulation test scene corresponding to the road acquisition data includes:
and restoring the motion trail of the traffic participant into a corresponding scene description language according to the scene description language selected in advance, obtaining a scene description file, and generating a simulation test scene.
For example, taking OpenScenario 1.x as the scene description language, restoring the motion trail of each traffic participant to the corresponding scene description language may consist in representing the motion trail of each traffic participant by the follow-trajectory action (FollowTrajectoryAction) among the exclusive actions (PrivateAction) of an entity (Entity), with the coordinates of the traffic participant in the world coordinate system and the corresponding time nodes represented as vertices (Vertex); this completes the decompilation, generates the scene description language and yields the scene description file. The simulation test scene can then be generated from the scene description file; for example, the scene description file may be read by an interpreter corresponding to the scene description language (i.e., the pre-selected scene description language), thereby generating the corresponding simulation test scene.
Therefore, by decompiling the motion trails of the traffic participants, the corresponding scene description language can be generated automatically, conveniently and efficiently, improving the efficiency of generating the scene description language from the motion trails of the traffic participants.
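For instance, the vertex representation described above could be emitted along the following lines, using Python's standard xml library. Only the FollowTrajectoryAction subtree of an OpenScenario 1.x file is built here; element and attribute names follow the OpenScenario 1.0 schema as commonly published, and the Storyboard/Entities scaffolding a complete .xosc file needs is omitted:

```python
from xml.etree.ElementTree import Element, SubElement, tostring

def follow_trajectory_action(name, track):
    """Build a FollowTrajectoryAction whose Polyline vertices carry the
    world coordinates and the time node of each point on a motion trail.

    track: list of (t, x, y, z) samples of one traffic participant,
           expressed in the world coordinate system.
    """
    private = Element("PrivateAction")
    routing = SubElement(private, "RoutingAction")
    follow = SubElement(routing, "FollowTrajectoryAction")
    traj = SubElement(follow, "Trajectory", name=name, closed="false")
    poly = SubElement(SubElement(traj, "Shape"), "Polyline")
    for t, x, y, z in track:
        vertex = SubElement(poly, "Vertex", time=str(t))
        pos = SubElement(vertex, "Position")
        SubElement(pos, "WorldPosition", x=str(x), y=str(y), z=str(z))
    timing = SubElement(follow, "TimeReference")
    SubElement(timing, "Timing", domainAbsoluteRelative="absolute",
               scale="1.0", offset="0.0")
    SubElement(follow, "TrajectoryFollowingMode", followingMode="position")
    return private

# Example: two trajectory points of one participant (illustrative values).
print(tostring(follow_trajectory_action("npc_1_trail",
                                        [(0.0, 5.0, -1.5, 0.0),
                                         (0.1, 6.2, -1.5, 0.0)]),
               encoding="unicode"))
```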
In an exemplary embodiment, the embodiment of the disclosure further provides an automatic driving simulation test scene generating device, which may be used to implement the automatic driving simulation test scene generating method described in the foregoing embodiment.
Fig. 7 is a schematic diagram of the composition of the automatic driving simulation test scenario generating device according to the embodiment of the present disclosure.
As shown in fig. 7, the automatic driving simulation test scenario generating apparatus may include:
an acquisition module 701, configured to acquire road acquisition data, where the road acquisition data includes perception data of at least one sensor; an extraction module 702, configured to extract coordinate data of each traffic participant from the perception data of the at least one sensor; a processing module 703, configured to determine the motion trail of each traffic participant according to the coordinate data of each traffic participant, and to compile the motion trail into a corresponding scene description language to generate a simulation test scene corresponding to the road acquisition data.
In some possible implementations, the road acquisition data includes sensing data of at least two sensors, and the processing module 703 is specifically configured to convert coordinate data of the traffic participant extracted from the sensing data of the at least two sensors into the same world coordinate system respectively; and determining the movement track of each traffic participant according to the converted coordinate data of the traffic participant.
In some possible implementations, the road acquisition data further includes perception data of the inertial measurement unit; the processing module 703 is specifically configured to convert the extracted coordinate data of the traffic participant into coordinate systems of frames corresponding to the inertial measurement units according to coordinate conversion relationships among different pre-calibrated sensors; according to the coordinate conversion relation among frames of the inertial measurement unit, coordinate data of the traffic participants are respectively converted into a coordinate system of a first frame of the inertial measurement unit.
In some possible implementations, the processing module 703 is specifically configured to track a traffic participant corresponding to the coordinate data of each traffic participant according to a preset algorithm and the coordinate data of each traffic participant, and obtain a motion trail of the corresponding traffic participant.
In some possible implementations, the processing module 703 is specifically configured to track the traffic participants corresponding to the coordinate data of each traffic participant using Kalman filtering and the Hungarian algorithm, and to acquire the coordinates of the corresponding traffic participants in consecutive frames, with the traffic participant type and the three-dimensional Euclidean distance as the association conditions; and to take the coordinates of the corresponding traffic participants in the consecutive frames as the motion trail of each traffic participant.
In some possible implementations, the processing module 703 is specifically configured to restore the motion trail of the traffic participant to a corresponding scene description language according to a pre-selected scene description language, obtain a scene description file, and generate a simulation test scene.
In some possible implementations, the processing module 703 is specifically configured to obtain the scene description file by representing, via the follow-trajectory action in the pre-selected scene description language, the coordinates of each point on the motion trail of a traffic participant and the time node corresponding to each point as vertices; and to generate the simulation test scene according to the scene description file.
In the technical solution of the present disclosure, the collection, storage and application of the personal information of users involved all comply with the provisions of relevant laws and regulations, and do not violate public order and good morals.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
In an exemplary embodiment, an electronic device includes: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described in the above embodiments.
In an exemplary embodiment, the readable storage medium may be a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method according to the above embodiment.
In an exemplary embodiment, the computer program product comprises a computer program which, when executed by a processor, implements the method according to the above embodiments.
Fig. 8 illustrates a schematic block diagram of an example electronic device 800 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of user terminals, various forms of digital computers, such as laptops, desktops, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, and the like, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the apparatus 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Various components in device 800 are connected to I/O interface 805, including: an input unit 806 such as a keyboard, mouse, etc.; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, etc.; and a communication unit 809, such as a network card, modem, wireless communication transceiver, or the like. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The calculation unit 801 performs the respective methods and processes described above, for example, an automatic driving simulation test scenario generation method. For example, in some embodiments, the autopilot simulation test scenario generation method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 800 via ROM 802 and/or communication unit 809. When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the autopilot simulation test scenario generation method described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the autopilot simulation test scenario generation method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that steps may be reordered, added, or deleted using the various flows shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solutions of the present disclosure can be achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (8)

1. A method for generating an automatic driving simulation test scene, characterized by comprising the following steps:
acquiring road acquisition data;
extracting coordinate data of each traffic participant from the road acquisition data respectively;
determining the motion trail of each traffic participant according to the coordinate data of each traffic participant;
compiling a corresponding scene description language according to the motion trail to generate a simulation test scene corresponding to the road acquisition data;
wherein the road acquisition data comprises perception data of at least two sensors, and said determining the motion trail of each traffic participant according to the coordinate data of each traffic participant comprises:
converting the coordinate data of the traffic participants extracted from the perception data of the at least two sensors into the same world coordinate system respectively;
determining the motion trail of each traffic participant according to the converted coordinate data of the traffic participant;
wherein the road acquisition data further comprises perception data of an inertial measurement unit, and said converting the coordinate data of the traffic participants extracted from the perception data of the at least two sensors respectively into the same world coordinate system comprises:
converting the extracted coordinate data of the traffic participants respectively into coordinate systems of corresponding frames of the inertial measurement unit according to coordinate conversion relations among different sensors, wherein the coordinate conversion relations among the different sensors are calculated according to the internal parameters and external parameters calibrated for the sensors and the coordinates of different feature points on a calibration object in the coordinate systems of the different sensors;
converting the coordinate data of the traffic participants respectively into a coordinate system of a first frame of the inertial measurement unit according to pre-calculated coordinate conversion relations among frames of the inertial measurement unit, wherein the coordinate conversion relation between frames of the inertial measurement unit is determined according to the origin displacement of the coordinate system between two adjacent frames of the inertial measurement unit and the rotation angle of the coordinate system between the two adjacent frames about the Z axis; the origin displacement is calculated according to host vehicle motion parameters in the sensor data acquired by the inertial measurement unit at the corresponding time frames, the host vehicle motion parameters comprising a longitudinal speed, a lateral speed and a yaw angle; and the rotation angle about the Z axis is calculated according to the yaw angle;
The step of respectively converting the extracted coordinate data of the traffic participants into the coordinate system of the corresponding frame of the inertial measurement unit according to the coordinate conversion relations among different sensors comprises the following steps:
converting the extracted coordinate data of the traffic participants respectively into the coordinate systems of the corresponding frames of the inertial measurement unit according to the coordinate conversion relations among the different sensors and coordinate conversion relations between the sensors and the inertial measurement unit, wherein the coordinate conversion relation between a sensor and the inertial measurement unit is calculated by measuring the included angles between the coordinate axes of the inertial measurement unit coordinate system and those of the sensor coordinate system, and the offset between the coordinate origins of the two coordinate systems.
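For illustration only and not as part of the claimed subject matter: the chain of conversions recited in claim 1 can be sketched numerically as below. This is a minimal sketch assuming calibrated sensor-to-IMU extrinsics (R_si, t_si), a fixed frame interval dt, and per-adjacent-frame host vehicle motion given as longitudinal speed, lateral speed and yaw-angle change; every function and variable name is hypothetical.

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix about the Z axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def sensor_to_imu(p_sensor, R_si, t_si):
    """Map a 3-D point from a sensor frame into the IMU frame of the same
    time frame, using pre-calibrated extrinsics (rotation R_si, offset t_si)."""
    return R_si @ p_sensor + t_si

def imu_to_first_frame(p_imu_k, steps, dt):
    """Map a point expressed in IMU frame k back into IMU frame 0.

    steps holds one (v_lon, v_lat, d_yaw) triple per adjacent frame pair
    (frame 0->1, 1->2, ..., k-1->k): the host vehicle's longitudinal and
    lateral speeds and its yaw-angle change over that interval. Each pair
    yields an origin displacement and a rotation about the Z axis."""
    p = p_imu_k
    for v_lon, v_lat, d_yaw in reversed(steps):
        d = np.array([v_lon * dt, v_lat * dt, 0.0])  # origin displacement
        p = rot_z(d_yaw) @ p + d                     # undo one frame step
    return p

# Hypothetical usage: one detection observed by a sensor at frame 2.
R_si, t_si = np.eye(3), np.array([1.2, 0.0, 1.5])    # assumed extrinsics
p_imu = sensor_to_imu(np.array([10.0, 2.0, 0.0]), R_si, t_si)
p_first = imu_to_first_frame(p_imu, steps=[(8.0, 0.0, 0.02),
                                           (8.0, 0.1, 0.01)], dt=0.1)
```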
2. The method of claim 1, wherein said determining the motion trail of each traffic participant according to the coordinate data of each traffic participant comprises:
tracking the traffic participant corresponding to the coordinate data of each traffic participant according to a preset algorithm and the coordinate data of each traffic participant, to obtain the motion trail of the corresponding traffic participant.
3. The method according to claim 2, wherein said tracking the traffic participant corresponding to the coordinate data of each traffic participant according to the preset algorithm and the coordinate data, to obtain the motion trail of the corresponding traffic participant, comprises:
tracking the traffic participants corresponding to the coordinate data of each traffic participant by using Kalman filtering and the Hungarian algorithm, conditioned on the traffic participant type and the three-dimensional Euclidean distance, to acquire the coordinates of the corresponding traffic participants in consecutive frames;
taking the coordinates of the corresponding traffic participants in the consecutive frames as the motion trail of each traffic participant.
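Again purely as an illustration of the association step in claim 3: the sketch below gates candidate matches by participant type and three-dimensional Euclidean distance, and solves the assignment with the Hungarian algorithm (SciPy's linear_sum_assignment). The full Kalman filter update is elided; a crude constant-velocity predictor stands in for it, and all names are hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

class Track:
    """One tracked traffic participant (constant-velocity state, illustrative)."""
    def __init__(self, tid, cls, xyz):
        self.id, self.cls = tid, cls
        self.vel = np.zeros(3)
        self.trail = [np.asarray(xyz, dtype=float)]    # motion trail so far

    def predict(self, dt):
        return self.trail[-1] + self.vel * dt          # predicted position

    def update(self, xyz, dt):
        xyz = np.asarray(xyz, dtype=float)
        self.vel = (xyz - self.trail[-1]) / dt         # crude velocity estimate
        self.trail.append(xyz)

def associate(tracks, detections, dt, gate=3.0):
    """Match detections (cls, xyz) to tracks by Hungarian assignment on the
    3-D Euclidean distance, allowing only same-class pairs within the gate."""
    if not tracks or not detections:
        return [], list(range(len(detections)))
    cost = np.full((len(tracks), len(detections)), 1e6)
    for i, trk in enumerate(tracks):
        pred = trk.predict(dt)
        for j, (cls, xyz) in enumerate(detections):
            if cls == trk.cls:                         # same-type condition
                cost[i, j] = np.linalg.norm(pred - np.asarray(xyz))
    rows, cols = linear_sum_assignment(cost)
    matches = [(i, j) for i, j in zip(rows, cols) if cost[i, j] < gate]
    matched = {j for _, j in matches}
    unmatched = [j for j in range(len(detections)) if j not in matched]
    return matches, unmatched
```

Matched tracks would then be updated with their detections, and each track's per-frame coordinates read out as that participant's motion trail.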
4. The method according to claim 1, wherein said compiling a corresponding scene description language according to the motion trail to generate a simulation test scene corresponding to the road acquisition data comprises:
restoring the motion trail of the traffic participant into the corresponding scene description language according to a pre-selected scene description language to obtain a scene description file, and generating the simulation test scene.
5. The method of claim 4, wherein said restoring the motion trail of the traffic participant into the corresponding scene description language according to the pre-selected scene description language to obtain the scene description file and generate the simulation test scene comprises:
representing, by means of a trajectory motion in the pre-selected scene description language, the coordinates of each point on the motion trail of the traffic participant and the time node corresponding to each point as vertices, so as to obtain a scene description file;
and generating the simulation test scene according to the scene description file.
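As a sketch of claim 5 only: assuming an OpenSCENARIO-style trajectory element in which each trail point becomes a Vertex carrying its time node as an attribute, the serialization could look like the following; the element names mirror OpenSCENARIO's Trajectory/Polyline/Vertex, but the helper itself is hypothetical.

```python
import xml.etree.ElementTree as ET

def trajectory_to_scenario(name, points):
    """Serialise a motion trail as a Trajectory whose Polyline holds one
    Vertex per (time, x, y, z) point, with the time node as an attribute."""
    traj = ET.Element("Trajectory", name=name, closed="false")
    poly = ET.SubElement(ET.SubElement(traj, "Shape"), "Polyline")
    for t, x, y, z in points:
        vtx = ET.SubElement(poly, "Vertex", time=f"{t:.3f}")
        pos = ET.SubElement(vtx, "Position")
        ET.SubElement(pos, "WorldPosition",
                      x=f"{x:.3f}", y=f"{y:.3f}", z=f"{z:.3f}")
    return ET.tostring(traj, encoding="unicode")

# Hypothetical usage for a two-point pedestrian trail.
print(trajectory_to_scenario("pedestrian_01",
                             [(0.0, 1.0, 2.0, 0.0), (0.1, 1.1, 2.1, 0.0)]))
```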
6. An automatic driving simulation test scene generation device, characterized by comprising:
the acquisition module is used for acquiring road acquisition data;
the extraction module is used for respectively extracting the coordinate data of each traffic participant from the road acquisition data;
the processing module is used for determining the motion trail of each traffic participant according to the coordinate data of each traffic participant, and compiling a corresponding scene description language according to the motion trail to generate a simulation test scene corresponding to the road acquisition data;
wherein the road acquisition data comprises perception data of at least two sensors, and the processing module is specifically configured to:
converting the coordinate data of the traffic participants extracted from the perception data of the at least two sensors into the same world coordinate system respectively;
determining the motion trail of each traffic participant according to the converted coordinate data of the traffic participant;
wherein the road acquisition data further comprises perception data of an inertial measurement unit, and the processing module is specifically configured to:
converting the extracted coordinate data of the traffic participants respectively into coordinate systems of corresponding frames of the inertial measurement unit according to coordinate conversion relations among different sensors, wherein the coordinate conversion relations among the different sensors are calculated according to the internal parameters and external parameters calibrated for the sensors and the coordinates of different feature points on a calibration object in the coordinate systems of the different sensors;
converting the coordinate data of the traffic participants respectively into a coordinate system of a first frame of the inertial measurement unit according to pre-calculated coordinate conversion relations among frames of the inertial measurement unit, wherein the coordinate conversion relation between frames of the inertial measurement unit is determined according to the origin displacement of the coordinate system between two adjacent frames of the inertial measurement unit and the rotation angle of the coordinate system between the two adjacent frames about the Z axis; the origin displacement is calculated according to host vehicle motion parameters in the sensor data acquired by the inertial measurement unit at the corresponding time frames, the host vehicle motion parameters comprising a longitudinal speed, a lateral speed and a yaw angle; and the rotation angle about the Z axis is calculated according to the yaw angle;
The processing module is specifically configured to:
converting the extracted coordinate data of the traffic participants respectively into the coordinate systems of the corresponding frames of the inertial measurement unit according to the coordinate conversion relations among the different sensors and coordinate conversion relations between the sensors and the inertial measurement unit, wherein the coordinate conversion relation between a sensor and the inertial measurement unit is calculated by measuring the included angles between the coordinate axes of the inertial measurement unit coordinate system and those of the sensor coordinate system, and the offset between the coordinate origins of the two coordinate systems.
7. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5.
8. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-5.
CN202310923570.7A 2023-07-26 2023-07-26 Automatic driving simulation test scene generation method, device, equipment and storage medium Active CN116663329B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310923570.7A CN116663329B (en) 2023-07-26 2023-07-26 Automatic driving simulation test scene generation method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116663329A CN116663329A (en) 2023-08-29
CN116663329B true CN116663329B (en) 2024-03-29

Family

ID=87717337

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310923570.7A Active CN116663329B (en) 2023-07-26 2023-07-26 Automatic driving simulation test scene generation method, device, equipment and storage medium

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112069643A (en) * 2019-05-24 2020-12-11 北京车和家信息技术有限公司 Automatic driving simulation scene generation method and device
CN110222667A (en) * 2019-06-17 2019-09-10 南京大学 A kind of open route traffic participant collecting method based on computer vision
WO2022095468A1 (en) * 2020-11-06 2022-05-12 北京市商汤科技开发有限公司 Display method and apparatus in augmented reality scene, device, medium, and program
CN112732671A (en) * 2020-12-31 2021-04-30 华东师范大学 Automatic driving safety scene element modeling method driven by space-time trajectory data
CN113687600A (en) * 2021-10-21 2021-11-23 中智行科技有限公司 Simulation test method, simulation test device, electronic equipment and storage medium
CN114312840A (en) * 2021-12-30 2022-04-12 重庆长安汽车股份有限公司 Automatic driving obstacle target track fitting method, system, vehicle and storage medium
CN114970321A (en) * 2022-04-28 2022-08-30 长安大学 Scene flow digital twinning method and system based on dynamic trajectory flow
CN115187742A (en) * 2022-09-07 2022-10-14 西安深信科创信息技术有限公司 Method, system and related device for generating automatic driving simulation test scene
CN115203062A (en) * 2022-09-15 2022-10-18 清华大学苏州汽车研究院(吴江) Automatic driving test system, method, electronic device and storage medium
CN115616937A (en) * 2022-12-02 2023-01-17 广汽埃安新能源汽车股份有限公司 Automatic driving simulation test method, device, equipment and computer readable medium
CN116310679A (en) * 2023-03-04 2023-06-23 西安电子科技大学青岛计算技术研究院 Multi-sensor fusion target detection method, system, medium, equipment and terminal
CN116448146A (en) * 2023-03-22 2023-07-18 上海人工智能创新中心 Inertial navigation system self-calibration method, device, equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
An Automatic Driving Algorithm for Outdoor Wheeled Unmanned Vehicle; Ping Wu et al.; IEEE; 2020-04-23; full text *
Vehicle-mounted dual-radar target position estimation method based on unscented Kalman filtering; Xiang Yi et al.; Opto-Electronic Engineering; Vol. 46, No. 7; see Section 2 *
Research on scene data extraction technology based on fusion perception; Li Yingbo; Yu Bo; Modern Computer (Professional Edition); 2019-03-25 (No. 9); full text *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Country or region after: China

Address after: Room 533, 5th Floor, Building A3A4, Phase I, Anchenggu Science and Technology Park, No. 900 Wangjiang West Road, High tech Zone, Hefei City, Anhui Province, 230000

Applicant after: Anhui Xinxin Science and Technology Innovation Information Technology Co.,Ltd.

Address before: 2nd Floor, Building B2, Yunhui Valley, No. 156, Tiangu 8th Road, Software New Town, Yuhua Street Office, High-tech Zone, Xi'an City, Shaanxi Province 710000

Applicant before: Xi'an Xinxin Information Technology Co.,Ltd.

Country or region before: China

GR01 Patent grant