Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The automatic driving simulation test scene generation method and the automatic driving simulation test scene generation device provided by the present disclosure are suitable for situations in which simulation test scenes are generated from road acquisition data. The method can be executed by an automatic driving simulation test scene generation device, which can be implemented in software and/or hardware and is specifically configured in an electronic device. The electronic device may be a server, a computer, a user terminal, a mobile device, a single-chip microcomputer, or other computing device, and is not limited herein.
The method for generating the automatic driving simulation test scene provided by the present disclosure is first described in detail below.
Autonomous driving technology generally needs to undergo rigorous and complete system testing to ensure its safety before it can be applied on a large scale. Currently, road testing is the primary means of carrying out safety tests for autonomous driving systems. However, road testing is costly, heavily constrained by laws and regulations, and makes extreme test scenes difficult to realize, so attention has gradually turned to simulation testing. Scenario-based simulation testing has the advantages of low cost, high efficiency and high safety, and is widely regarded as a replacement for road testing.
Scenario-based simulation testing requires the user to accurately describe the traffic environment (scene) in which the autonomous vehicle under test is located, and the simulation system generates the corresponding simulation test scene from that description. Test scenes are usually constructed manually from test requirements, traffic-accident-related data and the like. Simulation test scenes built in this way are limited in variety, and the construction process is time-consuming and labor-intensive.
In this regard, the present disclosure provides a method for generating an automatic driving simulation test scene, including: acquiring road acquisition data, where the road acquisition data include perception data of at least one sensor; extracting coordinate data of each traffic participant from the perception data of the at least one sensor; determining the motion trail of each traffic participant according to the coordinate data of each traffic participant; and compiling a corresponding scene description language according to the motion trails to generate a simulation test scene corresponding to the road acquisition data.
With the sensors for environment perception arranged on the host vehicle, data of the host vehicle and its surrounding environment can be acquired during driving, so as to obtain the road acquisition data, from which the data related to the traffic participants can be extracted. The motion trail of each traffic participant perceived by the host vehicle can then be obtained from the extracted data, the motion trails can be restored into a simulation test scene, and a scene description file can be generated based on a scene description language. In this way, the traffic participants perceived during the driving of the host vehicle are constructed into corresponding simulation test scenes according to their motion trails, richer test scenes are provided for simulation testing, the test scenes can be constructed without manual effort, and the efficiency of generating simulation test scenes is improved.
Fig. 1 is a flowchart of an automatic driving simulation test scenario generation method according to an embodiment of the present disclosure. As shown in fig. 1, the method may include the following S101-S104.
S101, acquiring road acquisition data, wherein the road acquisition data comprise perception data of at least one sensor.
For example, the road acquisition data may be sensor data (i.e., perception data of the sensors) acquired, during the driving of the host vehicle, by the sensors for environment perception arranged on the host vehicle.
The sensors for environment perception may include an RGB camera, a lidar, a millimeter-wave radar, a satellite positioning and inertial navigation system, a wheel speed meter, and the like, without limitation.
During the driving of the host vehicle, the environment around the host vehicle can be perceived by means of these sensors, i.e., the environmental states that have a direct impact on the perception-decision-control system of the autonomous vehicle, such as: time, weather, road (road network structure, road surface conditions, lane lines), traffic signs, and traffic participants (category, location, speed, etc.). This facilitates the subsequent extraction of the coordinate data of the traffic participants from the road acquisition data.
It should be noted that, since the road acquisition data are acquired by sensors, the road acquisition data are composed of data frames arranged in time series. When a plurality of sensors are included, each sensor corresponds to its own data frame sequence in the road acquisition data.
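By way of illustration only, the time-series layout described above could be modeled as follows; the class and field names in this sketch are assumptions introduced here and are not prescribed by the disclosure:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class SensorFrame:
    timestamp: float          # acquisition time of this data frame
    sensor_id: str            # e.g. "rgb_camera_front", "lidar_top", "imu"
    payload: Any              # image array, point cloud, or IMU reading

@dataclass
class RoadAcquisitionData:
    # one time-ordered data frame sequence per sensor
    frames_by_sensor: Dict[str, List[SensorFrame]] = field(default_factory=dict)

    def add_frame(self, frame: SensorFrame) -> None:
        seq = self.frames_by_sensor.setdefault(frame.sensor_id, [])
        seq.append(frame)
        seq.sort(key=lambda f: f.timestamp)   # keep each sequence in time order
```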
S102, respectively extracting coordinate data of each traffic participant from the perception data of at least one sensor.
The perception data of a sensor may include target-level data of each traffic participant. The target-level data may include data collected by the sensor that relate to a particular target or task, such as image data of the traffic participants and coordinate data of the traffic participants. The coordinate data of the traffic participants (i.e., coordinate data in the coordinate system of the corresponding sensor) may be derived from the target-level data of each traffic participant.
For example, the coordinate data of the corresponding traffic participants in each data frame may be marked in the road acquisition data (i.e., in the perception data of the at least one sensor) by manual annotation. Of course, a neural network target detection algorithm can also be used to identify and detect the traffic participants in each data frame of the road acquisition data, so that the coordinate data of the corresponding traffic participants are extracted according to the detection results.
Alternatively, in practical application, the road acquisition data may include multiple groups of sensor data acquired by multiple sensors respectively.
Accordingly, extracting the coordinate data of the traffic participants from the road acquisition data may include: extracting coordinate data of the traffic participants from the plurality of groups of sensor data respectively. That is, the sensor data acquired by each sensor are processed separately to obtain the coordinate data of the traffic participants collected by that sensor. In this way, the coverage of the traffic participants can be improved through the coordinate data collected by the plurality of sensors, so that the motion trails of the traffic participants can be determined more accurately and richly from the coordinate data, and more accurate and richer simulation test scenes can be obtained.
By way of example, the road acquisition data include image data acquired by an RGB camera and radar point cloud data acquired by a lidar.
The traffic participants in the image data acquired by the RGB camera can be detected by a deep-learning convolutional neural network target detection algorithm. For example, the traffic participants may include motor vehicles, non-motor vehicles and pedestrians. Target detection may be performed on the image data to detect and annotate the traffic participants in each image frame. For example, as shown in Fig. 2, a traffic participant 201 in an image frame may be detected and annotated.
Likewise, the traffic participants in the radar point cloud data acquired by the lidar can be detected by a deep-learning convolutional neural network target detection algorithm. For example, the traffic participants may include motor vehicles, non-motor vehicles and pedestrians. Target detection may be performed on the radar point cloud data to detect and annotate the traffic participants in each radar point cloud data frame. For example, as shown in Fig. 3, a traffic participant 301 in a radar point cloud data frame may be detected and annotated.
The neural network target detection algorithm for detecting traffic participants in the image data and the neural network target detection algorithm for detecting traffic participants in the radar point cloud data can be obtained by training on training sets containing image data or radar point cloud data of traffic participants; of course, target detection algorithms in the related art can also be adopted directly, which is not limited herein.
Of course, in other possible embodiments of the present application, manual annotation may also be adopted to identify and label the traffic participants in the image data collected by the RGB camera and in the radar point cloud data collected by the lidar, which is not limited herein.
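As a hedged sketch of this extraction step, the per-frame detector below is treated as an opaque callable, standing in for either a trained CNN detector or manual annotation; the callable interface and the Detection tuple layout are assumptions of this example, not the API of any particular library:

```python
from typing import Callable, Dict, List, Sequence, Tuple

# A detection is assumed to be (class_label, x, y, z) in the sensor's own coordinate system.
Detection = Tuple[str, float, float, float]

def extract_participant_coordinates(
    frames: Sequence[object],
    detect_fn: Callable[[object], List[Detection]],
) -> Dict[int, List[Detection]]:
    """Run a per-frame detector (or manual labels) and collect the coordinate data
    of the traffic participants, keyed by frame index."""
    coordinates_by_frame: Dict[int, List[Detection]] = {}
    for frame_index, frame in enumerate(frames):
        # detect_fn stands in for a CNN detector on images / point clouds,
        # or for manually annotated labels of the same frame.
        coordinates_by_frame[frame_index] = detect_fn(frame)
    return coordinates_by_frame

# usage sketch with a dummy detector that "finds" one pedestrian per frame
if __name__ == "__main__":
    dummy_frames = [object(), object()]
    dummy_detector = lambda frame: [("pedestrian", 1.0, 2.0, 0.0)]
    print(extract_participant_coordinates(dummy_frames, dummy_detector))
```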
S103, determining the movement track of each traffic participant according to the coordinate data of each traffic participant.
Since the road acquisition data are composed of data frames, the extracted coordinate data of the traffic participants are also organized by data frames. Thus, for example, the motion trail of each traffic participant can be determined from the coordinate data in the order of the data frames. In general, the motion trail of a traffic participant may include the collection time point of each corresponding data frame and the world coordinates of the traffic participant at that time point.
Optionally, when the road acquisition data are composed of sensor data collected by a plurality of sensors (i.e., the road acquisition data include perception data of at least two sensors), the extracted coordinate data of the traffic participants correspondingly include a plurality of groups corresponding to the plurality of sensors.
At this time, determining the movement trajectories of the respective traffic participants based on the coordinate data of the respective traffic participants may include:
converting the extracted coordinate data of the traffic participants into the same world coordinate system respectively; and
determining the motion trail of each traffic participant according to the converted coordinate data of the traffic participants.
That is, the coordinate data of the traffic participants corresponding to different sensors are first converted into the same world coordinate system (for example, into the world coordinate system corresponding to one of the sensors), and the motion trail of each traffic participant is then determined from the coordinate data that have been converted into that common world coordinate system.
In this way, the positions of the traffic participants described by the coordinate data are expressed in the same coordinate system, so that the motion trail of each traffic participant can be determined accurately and uniformly within that coordinate system. Moreover, by converting the coordinate data of the traffic participants into the same world coordinate system, the coordinate data obtained by different sensors detecting the same traffic participant can be combined (or fused), so that redundant data among the coordinate data of the traffic participants can be removed.
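As a minimal sketch of this conversion (assuming the rigid transform from a sensor's coordinate system to the chosen common world coordinate system is already known), the per-sensor coordinate data can be mapped as follows:

```python
import numpy as np

def to_common_frame(points_sensor: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Map Nx3 points from a sensor coordinate system into the common world
    coordinate system using a known rotation R (3x3) and translation t (3,)."""
    return points_sensor @ R.T + t

# usage sketch: one camera-frame detection mapped into the common frame
if __name__ == "__main__":
    R_cam_to_world = np.eye(3)                      # assumed known extrinsic rotation
    t_cam_to_world = np.array([0.0, 0.0, 1.5])      # assumed known extrinsic translation
    p_cam = np.array([[2.0, 0.5, 10.0]])
    print(to_common_frame(p_cam, R_cam_to_world, t_cam_to_world))
```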
S104, compiling a corresponding scene description language according to the motion trails, and generating a simulation test scene corresponding to the road acquisition data.
That is, the motion scenes of the traffic participants perceived by the host vehicle during driving are reconstructed according to the motion trails of the traffic participants and compiled into the scene description language, so as to generate the simulation test scene corresponding to the road acquisition data.
Wherein the scene description language is a formal language for describing a scene and its evolution over time. It belongs to one of the domain specific languages (Domain Specific Language, DSL). Similar to general-purpose programming languages, scene description languages have certain grammatical rules and structures, but their scope of application is limited to specific problem areas (e.g., only for describing traffic environments).
Scene description scripts written using scene description languages can be parsed and processed by a computer, particularly in a simulation environment to enable simulation of the scene. Unlike natural language, scene description language follows a strict grammatical structure and unique parsing to disambiguate.
The presence of a scene description language helps to systematically and accurately describe complex scenarios. By using domain-specific language elements and grammar rules, it can more efficiently represent and convey information about a particular domain. Due to its structuring and accuracy, scene description languages are widely used in a variety of fields, such as traffic simulation, virtual reality, game development, etc.
The use of a scene description language may help developers and researchers define and process scenes more easily in a particular field and interact with a computer system. By eliminating language ambiguity, the scene description language can improve the accuracy and efficiency of communication, and ensure consistent understanding of scenes and correct simulation results.
As described above, with the sensors for environment perception arranged on the host vehicle, data of the host vehicle and its surrounding environment can be acquired during driving to obtain the road acquisition data, the data related to the traffic participants can be extracted from the road acquisition data, the motion trail of each traffic participant perceived by the host vehicle can be obtained from the extracted data, the motion trails can be restored into a simulation test scene, and a scene description file can be generated based on a scene description language. The traffic participants perceived during the driving of the host vehicle are thus constructed into corresponding simulation test scenes according to their motion trails, richer test scenes are provided for simulation testing, the test scenes can be constructed without manual effort, and the efficiency of generating simulation test scenes is improved.
Optionally, the road acquisition data may further include perception data of an inertial measurement unit; that is, the host vehicle is also provided with an inertial measurement unit.
Accordingly, converting the extracted coordinate data of the traffic participants into the same world coordinate system may include, as shown in Fig. 4:
S401, converting the extracted coordinate data of the traffic participants into the coordinate system of the corresponding frame of the inertial measurement unit according to pre-calibrated coordinate conversion relations among the different sensors.
The coordinate conversion relations among the different sensors can be calculated in advance from the intrinsic and extrinsic parameters calibrated for each sensor and from the coordinates of different feature points on a calibration object in the coordinate systems of the different sensors.
For example, sensors that collect coordinate data of traffic participants include RGB cameras and lidars.
Data corresponding to a calibration object can be acquired by the RGB camera and by the lidar respectively. Then, the data information acquired for the same feature point of the calibration object is extracted from the data acquired by the RGB camera and from the data acquired by the lidar, and the coordinate conversion relation between the RGB camera and the lidar is calculated from the coordinates of the feature point in the coordinate system of the RGB camera, its coordinates in the pixel coordinate system of the image data acquired by the RGB camera, and its coordinates in the coordinate system of the lidar.
For example, when the camera and the lidar simultaneously observe a point P, its coordinates in the coordinate system of the RGB camera may be P_C = (x_C, y_C, z_C), its pixel coordinates in the image data acquired by the RGB camera may be (u, v, 1), and its coordinates in the coordinate system of the lidar may be P_L = (x_L, y_L, z_L). Let the conversion relation from the coordinate system of the RGB camera to the coordinate system of the lidar be [R_CL | T_CL]. It may specifically satisfy the following formula:
R_CL × P_C + T_CL = P_L
where P_C = K^(-1) × (u, v, 1)^T, and K is the intrinsic parameter matrix of the RGB camera (typically obtained from the intrinsic calibration of the RGB camera).
Therefore, with at least four different feature points, the coordinate conversion relation between the RGB camera and the lidar can be solved by a least-squares method from their pixel coordinates in the image data acquired by the RGB camera and their coordinates in the coordinate system of the lidar.
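The least-squares solution referred to here can be carried out in several standard ways; the sketch below uses the SVD-based (Kabsch) closed form to fit R_CL and T_CL from paired feature points, assuming the camera-frame coordinates of the calibration points have already been recovered as described above:

```python
import numpy as np

def fit_rigid_transform(P_C: np.ndarray, P_L: np.ndarray):
    """Least-squares fit of R, T such that R @ p_c + T ≈ p_l for paired Nx3 points
    (at least 4 non-degenerate correspondences), via the SVD / Kabsch method."""
    c_C, c_L = P_C.mean(axis=0), P_L.mean(axis=0)
    H = (P_C - c_C).T @ (P_L - c_L)                       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                    # enforce a proper rotation
    T = c_L - R @ c_C
    return R, T

# usage sketch with synthetic calibration points
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    P_C = rng.uniform(-1, 1, size=(6, 3))
    R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
    T_true = np.array([0.2, -0.1, 0.5])
    P_L = P_C @ R_true.T + T_true
    R_est, T_est = fit_rigid_transform(P_C, P_L)
    print(np.allclose(R_est, R_true), np.allclose(T_est, T_true))
```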
Correspondingly, the coordinate conversion relation between the lidar and the inertial measurement unit can be calculated by measuring the included angles between the coordinate axes of the coordinate system of the inertial measurement unit and the coordinate axes of the coordinate system of the lidar, together with the offset of the coordinate origins between the two coordinate systems.
For example, for the point P, its coordinates in the lidar coordinate system may be P_L = (x_L, y_L, z_L) and its coordinates in the inertial measurement unit coordinate system may be P_I = (x_I, y_I, z_I). The conversion relation from the lidar coordinate system to the inertial measurement unit coordinate system, [R_LI | T_LI], may then satisfy the following formula:
R_LI × P_L + T_LI = P_I
Therefore, the coordinate conversion relation between the lidar and the inertial measurement unit can also be solved by a least-squares method from the coordinates of a plurality of different feature points in the coordinate system of the lidar and their corresponding coordinates in the coordinate system of the inertial measurement unit.
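As a hedged example of the measurement-based approach (measured axis angles plus a measured origin offset), the lidar-to-IMU transform can be assembled directly from three Euler angles; the Z-Y-X (yaw-pitch-roll) angle convention and the numeric values below are assumptions of this sketch:

```python
import numpy as np

def euler_zyx_to_matrix(yaw: float, pitch: float, roll: float) -> np.ndarray:
    """Rotation matrix from Z-Y-X (yaw-pitch-roll) Euler angles in radians."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

# lidar -> IMU: P_I = R_LI @ P_L + T_LI (example, illustrative values only)
R_LI = euler_zyx_to_matrix(yaw=0.02, pitch=0.0, roll=0.0)   # measured axis angles
T_LI = np.array([1.10, 0.0, -0.35])                         # measured origin offset
P_L = np.array([5.0, 1.0, 0.2])                             # a point in the lidar frame
print(R_LI @ P_L + T_LI)
```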
Therefore, the coordinate data of the traffic participants extracted from each frame of the image data acquired by the RGB camera can be converted into the coordinate system of the corresponding frame of the lidar. Then, in the coordinate system of the lidar, the coordinate data of the traffic participants extracted from the converted image data are combined with the coordinate data of the traffic participants extracted from the radar point cloud data acquired by the lidar (for example, overlapping data are matched and merged using the Hungarian algorithm). For example, as shown in Fig. 5, in the coordinate system of the lidar, the overlapping portions of the coordinate data of the traffic participants extracted from the converted image data and from the radar point cloud data acquired by the lidar are merged, so that 5 traffic participants corresponding to the frame are obtained, where a, b and c are the merged traffic participants, d is the traffic participant whose coordinate data were extracted from the image data and converted into the coordinate system of the lidar, and e is the traffic participant whose coordinate data were extracted from the radar point cloud data.
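The matching-and-merging step described above can be sketched with the Hungarian assignment from SciPy; the 2 m distance gate and the averaging of matched pairs are illustrative assumptions, not values or rules given by the disclosure:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def merge_detections(cam_pts: np.ndarray, lidar_pts: np.ndarray, gate: float = 2.0):
    """Merge camera-derived and lidar-derived participant positions (both Nx3,
    already in the lidar coordinate system). Pairs within `gate` metres are
    fused by averaging; the rest are kept as separate participants."""
    if len(cam_pts) == 0 or len(lidar_pts) == 0:
        return list(cam_pts) + list(lidar_pts)
    cost = np.linalg.norm(cam_pts[:, None, :] - lidar_pts[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)              # Hungarian assignment
    merged, used_cam, used_lidar = [], set(), set()
    for i, j in zip(rows, cols):
        if cost[i, j] <= gate:                            # same participant seen by both sensors
            merged.append((cam_pts[i] + lidar_pts[j]) / 2.0)
            used_cam.add(i)
            used_lidar.add(j)
    merged += [p for k, p in enumerate(cam_pts) if k not in used_cam]
    merged += [p for k, p in enumerate(lidar_pts) if k not in used_lidar]
    return merged

# usage sketch: three overlapping detections merge, two stay separate -> 5 participants (cf. Fig. 5)
if __name__ == "__main__":
    cam = np.array([[10, 0, 0], [20, 1, 0], [30, -1, 0], [5, 5, 0]], float)
    lidar = np.array([[10.2, 0.1, 0], [19.9, 1.1, 0], [30.1, -0.9, 0], [40, 3, 0]], float)
    print(len(merge_detections(cam, lidar)))
```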
Finally, according to the coordinate conversion relation between the lidar and the inertial measurement unit, the coordinate data of each traffic participant in the coordinate system of each frame of the lidar can be converted into the coordinate system of the corresponding frame of the inertial measurement unit.
S402, according to the coordinate conversion relation among frames of the inertial measurement unit, coordinate data of the traffic participants are respectively converted into a coordinate system of a first frame of the inertial measurement unit.
The coordinate conversion relation between frames of the inertial measurement unit can be calculated in advance from the sensor data acquired by the inertial measurement unit (i.e., the perception data of the inertial measurement unit). Specifically, the origin displacement (longitudinal displacement and lateral displacement) of the coordinate system of the inertial measurement unit between two adjacent frames is calculated from host-vehicle motion parameters in the corresponding time frames, such as the longitudinal speed, lateral speed and yaw angle, and the rotation angle of the coordinate system between the two adjacent frames around the Z axis is calculated from the yaw angle. The coordinate conversion relation between frames of the inertial measurement unit is then determined from the origin displacement of the coordinate system between two adjacent frames and the rotation angle of the coordinate system around the Z axis.
For example, as shown in Fig. 6, the origin displacement between the coordinate systems (X_A, Y_A, Z_A) and (X_B, Y_B, Z_B) of two adjacent frames is T, and the rotation angle of the coordinate system between the two adjacent frames around the Z axis is θ. The coordinate conversion relation between the two corresponding frames is then [R | T], where R is a rotation matrix and T is a translation matrix. R may satisfy the following formula:
R = [[cos θ, -sin θ, 0], [sin θ, cos θ, 0], [0, 0, 1]]
Thus, the coordinate data of the traffic participants in the coordinate system of each frame of the inertial measurement unit can be converted into the coordinate system of a single frame of the inertial measurement unit according to the inter-frame coordinate conversion relations. For example, the coordinate data of the traffic participants in the coordinate system of each frame may be converted into the coordinate system corresponding to the first data frame of the inertial measurement unit. Of course, in other embodiments of the present application, the coordinate data of the traffic participants may also be converted into the coordinate system corresponding to another data frame of the inertial measurement unit, which is not limited herein.
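A hedged sketch of this frame-to-frame chaining is given below; integrating the longitudinal and lateral velocities over the frame interval to obtain the origin displacement, and the yaw rate times the interval to obtain the rotation angle, are simplifying assumptions of this example:

```python
import numpy as np

def rot_z(theta: float) -> np.ndarray:
    """Rotation about the Z axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def chain_to_first_frame(frame_motions):
    """Given per-interval host motion (vx, vy, yaw_rate, dt) from the IMU, build the
    transform [R|T] mapping every frame's coordinate system into the first frame."""
    R_acc, T_acc = np.eye(3), np.zeros(3)
    transforms = [(R_acc.copy(), T_acc.copy())]        # frame 0 -> frame 0
    for vx, vy, yaw_rate, dt in frame_motions:
        dT = np.array([vx * dt, vy * dt, 0.0])         # origin displacement between adjacent frames
        dR = rot_z(yaw_rate * dt)                      # rotation about Z between adjacent frames
        # compose: point in frame k+1 -> frame k -> ... -> frame 0
        T_acc = R_acc @ dT + T_acc
        R_acc = R_acc @ dR
        transforms.append((R_acc.copy(), T_acc.copy()))
    return transforms

# usage sketch: convert a participant seen in frame 2 into the first frame's coordinates
if __name__ == "__main__":
    motions = [(10.0, 0.0, 0.0, 0.1), (10.0, 0.0, 0.1, 0.1)]   # two frame intervals
    R02, T02 = chain_to_first_frame(motions)[2]
    p_frame2 = np.array([5.0, 1.0, 0.0])
    print(R02 @ p_frame2 + T02)
```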
Therefore, the coordinate data of the traffic participants extracted from the data acquired by the various sensors can all be converted into the coordinate system corresponding to the same data frame of the inertial measurement unit, so that the coordinate data of the traffic participants can be converted into the same world coordinate system simply and conveniently, and the conversion efficiency is improved.
Alternatively, the perception data of a sensor may include relative position information of each traffic participant with respect to the sensor. The relative position information may be the relative distance between the traffic participant and the sensor, or it may be data in other forms, such as images of the traffic participants collected by the sensor. When the relative position information is data in another form such as an image, the coordinate data of each traffic participant can first be determined from the relative position information when determining the motion trail of each traffic participant, and the motion trail of each traffic participant is then determined from the coordinate data of each traffic participant.
Therefore, the movement track of each traffic participant can be determined simply, conveniently and quickly according to the coordinate data of each traffic participant, and the efficiency of determining the movement track of the traffic participant is improved.
Of course, in other possible embodiments of the present application, when the above-mentioned relative position information is other data such as an image, the movement track of the corresponding traffic participant may also be determined by means of object detection and identification, which is not limited herein.
Optionally, determining the movement track of each traffic participant according to the coordinate data of each traffic participant may include:
tracking the traffic participants corresponding to the coordinate data of each traffic participant according to a preset algorithm and the coordinate data of each traffic participant, and acquiring the motion trail of the corresponding traffic participants.
For example, a Kalman filtering algorithm and the Hungarian algorithm may be used to track the trajectories of the traffic participants corresponding to the coordinate data. Then, in the converted common world coordinate system, the world coordinates of each traffic participant in consecutive frames are obtained using the participant's type and the three-dimensional Euclidean distance as the association conditions, so as to obtain the motion trail of each traffic participant.
Therefore, the motion trail of each traffic participant can be simply and quickly calculated, and the efficiency of determining the motion trail of the traffic participant is improved.
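As a minimal sketch of such trajectory tracking, the example below uses only the class label and the three-dimensional Euclidean distance (with an assumed 3 m gate) for the Hungarian association and omits the Kalman prediction step; it is a simplified stand-in rather than the exact algorithm of the disclosure:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def track_participants(frames, gate: float = 3.0):
    """frames: list of per-frame detections [(class_label, np.array([x, y, z])), ...]
    already expressed in one world coordinate system. Returns trajectories as
    lists of (frame_index, position), one list per tracked traffic participant."""
    tracks = []                                        # each: {"cls", "last", "points"}
    for k, dets in enumerate(frames):
        unmatched = set(range(len(dets)))
        if tracks and dets:
            cost = np.array([[np.linalg.norm(t["last"] - p) if t["cls"] == c else 1e9
                              for (c, p) in dets] for t in tracks])
            for ti, di in zip(*linear_sum_assignment(cost)):
                if cost[ti, di] <= gate:               # same class and close enough in 3D
                    _, p = dets[di]
                    tracks[ti]["last"] = p
                    tracks[ti]["points"].append((k, p))
                    unmatched.discard(di)
        for di in unmatched:                           # start a new track for unmatched detections
            c, p = dets[di]
            tracks.append({"cls": c, "last": p, "points": [(k, p)]})
    return [t["points"] for t in tracks]

# usage sketch: one car moving forward over three frames -> one trajectory
if __name__ == "__main__":
    car = lambda x: ("car", np.array([x, 0.0, 0.0]))
    print(len(track_participants([[car(0.0)], [car(1.0)], [car(2.0)]])))
```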
Optionally, compiling a corresponding scene description language according to the motion trails to generate a simulation test scene corresponding to the road acquisition data may include:
restoring the motion trails of the traffic participants into the corresponding scene description language according to a pre-selected scene description language, obtaining a scene description file, and generating a simulation test scene.
For example, taking the scene description language OpenScenario 1.X as an example, restoring the motion trail of each traffic participant into the corresponding scene description language may be done by representing the motion trail of each traffic participant with a trajectory-following action (FollowTrajectory Action) in the exclusive action (Private Action) of the corresponding entity (Entity), and representing the coordinates of the traffic participant in the world coordinate system and the corresponding time nodes with vertices (Vertex), thereby completing the compilation into the scene description language and obtaining the scene description file. This facilitates generating a simulation test scene from the scene description file. For example, the scene description file may be read by an interpreter corresponding to the scene description language (i.e., the pre-selected scene description language described above), thereby generating the corresponding simulation test scene.
In this way, the corresponding scene description language can be generated automatically, conveniently and efficiently by compiling the motion trails of the traffic participants, so that the efficiency of generating the scene description language from the motion trails of the traffic participants is improved.
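A hedged sketch of this last step is given below; it emits only a simplified FollowTrajectoryAction fragment in the style of OpenScenario 1.X, omitting several elements a fully schema-valid scenario file would require:

```python
import xml.etree.ElementTree as ET

def trajectory_to_follow_trajectory_action(points):
    """points: list of (t, x, y, z) world-coordinate samples of one traffic participant.
    Returns a simplified OpenScenario-1.X-style FollowTrajectoryAction fragment in
    which each sample becomes a Vertex carrying a WorldPosition and a time node."""
    action = ET.Element("FollowTrajectoryAction")
    traj = ET.SubElement(action, "Trajectory", name="participant_trajectory", closed="false")
    shape = ET.SubElement(traj, "Shape")
    polyline = ET.SubElement(shape, "Polyline")
    for t, x, y, z in points:
        vertex = ET.SubElement(polyline, "Vertex", time=str(t))
        pos = ET.SubElement(vertex, "Position")
        ET.SubElement(pos, "WorldPosition", x=str(x), y=str(y), z=str(z), h="0")
    return ET.tostring(action, encoding="unicode")

# usage sketch: two trajectory samples of one participant
if __name__ == "__main__":
    print(trajectory_to_follow_trajectory_action([(0.0, 0.0, 0.0, 0.0), (0.1, 1.2, 0.0, 0.0)]))
```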
In an exemplary embodiment, the embodiment of the disclosure further provides an automatic driving simulation test scene generating device, which may be used to implement the automatic driving simulation test scene generating method described in the foregoing embodiment.
Fig. 7 is a schematic diagram of the composition of the automatic driving simulation test scenario generating device according to the embodiment of the present disclosure.
As shown in fig. 7, the automatic driving simulation test scenario generating apparatus may include:
an acquisition module 701, configured to acquire road acquisition data, where the road acquisition data include perception data of at least one sensor; an extraction module 702, configured to extract coordinate data of each traffic participant from the perception data of the at least one sensor; and a processing module 703, configured to determine the motion trail of each traffic participant according to the coordinate data of each traffic participant, compile a corresponding scene description language according to the motion trails, and generate a simulation test scene corresponding to the road acquisition data.
In some possible implementations, the road acquisition data includes sensing data of at least two sensors, and the processing module 703 is specifically configured to convert coordinate data of the traffic participant extracted from the sensing data of the at least two sensors into the same world coordinate system respectively; and determining the movement track of each traffic participant according to the converted coordinate data of the traffic participant.
In some possible implementations, the road acquisition data further include perception data of an inertial measurement unit; the processing module 703 is specifically configured to convert the extracted coordinate data of the traffic participants into the coordinate system of the corresponding frame of the inertial measurement unit according to pre-calibrated coordinate conversion relations among the different sensors, and to convert the coordinate data of the traffic participants into the coordinate system of the first frame of the inertial measurement unit according to the coordinate conversion relations among the frames of the inertial measurement unit.
In some possible implementations, the processing module 703 is specifically configured to track a traffic participant corresponding to the coordinate data of each traffic participant according to a preset algorithm and the coordinate data of each traffic participant, and obtain a motion trail of the corresponding traffic participant.
In some possible implementations, the processing module 703 is specifically configured to track the trajectories of the traffic participants corresponding to the coordinate data of each traffic participant by using a Kalman filter and the Hungarian algorithm, obtain the coordinates of the corresponding traffic participants in consecutive frames using the type of the traffic participant and the three-dimensional Euclidean distance as conditions, and take the coordinates of the corresponding traffic participants in the consecutive frames as the motion trail of each traffic participant.
In some possible implementations, the processing module 703 is specifically configured to restore the motion trail of the traffic participant to a corresponding scene description language according to a pre-selected scene description language, obtain a scene description file, and generate a simulation test scene.
In some possible implementations, the processing module 703 is specifically configured to obtain the scene description file by representing, through a trajectory action in the pre-selected scene description language, the coordinates of the points on the motion trail of a traffic participant and the time nodes corresponding to those points by vertices, and to generate a simulation test scene according to the scene description file.
In the technical solution of the present disclosure, the acquisition, storage and application of the user personal information involved all comply with the provisions of relevant laws and regulations and do not violate public order and good customs.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
In an exemplary embodiment, an electronic device includes: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described in the above embodiments.
In an exemplary embodiment, the readable storage medium may be a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method according to the above embodiment.
In an exemplary embodiment, the computer program product comprises a computer program which, when executed by a processor, implements the method according to the above embodiments.
Fig. 8 illustrates a schematic block diagram of an example electronic device 800 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of user terminals, various forms of digital computers, such as laptops, desktops, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, and the like, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the apparatus 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Various components in device 800 are connected to I/O interface 805, including: an input unit 806 such as a keyboard, mouse, etc.; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, etc.; and a communication unit 809, such as a network card, modem, wireless communication transceiver, or the like. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The calculation unit 801 performs the respective methods and processes described above, for example, an automatic driving simulation test scenario generation method. For example, in some embodiments, the autopilot simulation test scenario generation method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto device 800 via ROM 802 and/or communication unit 809. When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the autopilot simulation test scenario generation method described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the autopilot simulation test scenario generation method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, and that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the disclosed aspects are achieved, and are not limited herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.