CN115468778A - Vehicle testing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN115468778A
CN115468778A (application CN202211113822.1A)
Authority
CN
China
Prior art keywords
traffic
simulated
images
information
static element
Prior art date
Legal status
Granted
Application number
CN202211113822.1A
Other languages
Chinese (zh)
Other versions
CN115468778B (en)
Inventor
徐力
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202211113822.1A priority Critical patent/CN115468778B/en
Publication of CN115468778A publication Critical patent/CN115468778A/en
Application granted granted Critical
Publication of CN115468778B publication Critical patent/CN115468778B/en
Legal status: Active

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01MTESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M17/00Testing of vehicles
    • G01M17/007Wheeled or endless-tracked vehicles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

The disclosure provides a vehicle testing method, a vehicle testing apparatus, an electronic device, and a storage medium, relating to the technical field of artificial intelligence and, in particular, to the fields of automatic driving, computer vision, and deep learning. The scheme is as follows: driving simulation is performed on a plurality of vehicles according to set historical traffic flow information to obtain simulated traffic flow information corresponding to the vehicles; simulation parameter information of an on-board sensor of a target vehicle among the plurality of vehicles is determined according to the parameter information of the on-board sensor; image fusion is then performed on a plurality of traffic static element images and a plurality of simulated traffic dynamic element images, determined according to the simulated traffic flow information and/or the simulation parameter information, to obtain a plurality of target fusion images; and the target vehicle is tested according to the plurality of target fusion images to obtain a test result. Interaction between the simulated traffic dynamic elements and the simulated traffic static elements is thereby realized, and the accuracy of vehicle testing is improved.

Description

Vehicle testing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence, in particular to the fields of automatic driving, computer vision, and deep learning, and more particularly to a vehicle testing method and apparatus, an electronic device, and a storage medium.
Background
As vehicle technology matures and the vehicle industry develops, vehicles need to be tested before they are put on the market, in order to ensure their performance and improve driving safety. How to test vehicles is therefore very important.
Disclosure of Invention
The disclosure provides a vehicle testing method, a vehicle testing device, an electronic device and a storage medium.
According to an aspect of the present disclosure, there is provided a vehicle testing method including: according to set historical traffic flow information, driving simulation is carried out on a plurality of vehicles so as to obtain simulated traffic flow information corresponding to the vehicles; determining simulation parameter information of an on-board sensor of a target vehicle in the plurality of vehicles according to the parameter information of the on-board sensor; determining a plurality of traffic static element images and a plurality of simulated traffic dynamic element images according to simulated traffic flow information and/or the simulated parameter information; performing image fusion on the plurality of traffic static element images and the plurality of simulated traffic dynamic element images to obtain a plurality of target fusion images; and testing the target vehicle according to the plurality of target fusion images to obtain a test result of the target vehicle.
According to another aspect of the present disclosure, there is provided a vehicle testing apparatus including: the simulation module is used for carrying out driving simulation on a plurality of vehicles according to set historical traffic flow information so as to obtain simulated traffic flow information corresponding to the vehicles; the first determination module is used for determining simulation parameter information of an on-board sensor of a target vehicle in the plurality of vehicles according to the parameter information of the on-board sensor; the second determination module is used for determining a plurality of traffic static element images and a plurality of simulated traffic dynamic element images according to simulated traffic flow information and/or the simulated parameter information; the fusion module is used for carrying out image fusion on the plurality of traffic static element images and the plurality of simulated traffic dynamic element images to obtain a plurality of target fusion images; and the test module is used for testing the target vehicle according to the plurality of target fusion images to obtain a test result of the target vehicle.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the vehicle testing method of the first aspect of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to execute the vehicle testing method of the first aspect of the present disclosure.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the vehicle testing method of the embodiments of the first aspect of the present disclosure.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic illustration according to a first embodiment of the present disclosure;
FIG. 2 is a schematic diagram according to a second embodiment of the present disclosure;
FIG. 3 is a schematic diagram according to a third embodiment of the present disclosure;
FIG. 4 is a schematic diagram according to a fourth embodiment of the present disclosure;
FIG. 5 is a schematic diagram according to a fifth embodiment of the present disclosure;
FIG. 6 is a schematic diagram according to a sixth embodiment of the present disclosure;
FIG. 7 is a schematic diagram illustrating a manner of obtaining data for vehicle testing according to an embodiment of the present disclosure;
FIG. 8 is a schematic flow chart diagram of a vehicle testing method provided by an embodiment of the present disclosure;
FIG. 9 is a schematic diagram according to a seventh embodiment of the present disclosure;
FIG. 10 is a block diagram of an electronic device for implementing a vehicle testing method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the related art, vehicle tests are performed in simulation systems based on graphics rendering engines or game engines, in which the driving environment is rendered from 3D models. Such 3D modeling, however, generally suffers from high texture repetitiveness and poor realism. Neural rendering technologies, such as those based on Neural Radiance Fields (NeRF), solve the texture-realism problem well, but NeRF is currently applied mainly to static scenes and is not well suited to representing dynamic scene elements such as vehicles, pedestrians, and traffic lights, or to handling physical collisions.
Therefore, in view of the above problems, the present disclosure provides a vehicle testing method, apparatus, electronic device, and storage medium.
A vehicle testing method, a device, an electronic apparatus, and a storage medium according to embodiments of the present disclosure are described below with reference to the drawings.
Fig. 1 is a schematic diagram according to a first embodiment of the present disclosure. It should be noted that the vehicle testing method of the present disclosure is described, by way of example, as being performed by a vehicle testing apparatus, which may be deployed in any electronic device so that the electronic device can perform the vehicle testing function.
The electronic device may be any device having computing capability, for example a personal computer (PC) or a mobile terminal; the mobile terminal may be a hardware device having an operating system, a touch screen, and/or a display screen, such as a mobile phone, a tablet computer, a personal digital assistant, or a wearable device.
As shown in fig. 1, the vehicle testing method may include the steps of:
and step 101, performing running simulation on a plurality of vehicles according to the set historical traffic flow information to obtain simulated traffic flow information corresponding to the plurality of vehicles.
As one possible implementation of the embodiments of the present disclosure, the historical traffic flow information may be historical traffic flow information of an actual road; for example, it may include position information, speed information, driving direction information, and driving lane information of a plurality of vehicles, traffic signal information on the road on which the vehicles travel, and the like. Driving simulation may then be performed on vehicle models corresponding to the plurality of vehicles according to the historical traffic flow information, to obtain simulated traffic flow information corresponding to the plurality of vehicles.
As another possible implementation, historical traffic video may be played back to collect the historical traffic flow information of a plurality of vehicles on an actual road, and driving simulation may then be performed on the corresponding vehicle models according to that information to obtain the simulated traffic flow information. Here too, the historical traffic flow information may include position, speed, driving direction, and driving lane information of the vehicles, traffic signal information on the roads on which they travel, and the like.
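The driving simulation described above can be sketched minimally as a replay-and-advance loop over recorded traffic-flow samples. This is an illustrative sketch only, not the patent's implementation; the record layout (`id`, `x`, `y`, `vx`, `vy`) and the constant-velocity update are assumptions:

```python
# Hypothetical sketch: advance each vehicle one time step using its recorded
# position and speed (constant-velocity model). The record layout is an
# assumption, not taken from the patent.

def simulate_step(traffic_flow, dt):
    """Return simulated traffic-flow records one time step dt later."""
    simulated = []
    for rec in traffic_flow:
        simulated.append({
            "id": rec["id"],
            "x": rec["x"] + rec["vx"] * dt,   # new position along x
            "y": rec["y"] + rec["vy"] * dt,   # new position along y
            "vx": rec["vx"],                  # speed kept constant
            "vy": rec["vy"],
        })
    return simulated

# Replay two hypothetical historical records and advance them by 0.1 s.
historical = [
    {"id": "car_1", "x": 0.0, "y": 0.0, "vx": 10.0, "vy": 0.0},
    {"id": "car_2", "x": 5.0, "y": 3.5, "vx": 8.0, "vy": 0.5},
]
step = simulate_step(historical, dt=0.1)
```

Iterating this update over the full recording would yield a simulated traffic-flow trajectory for each vehicle.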
Step 102, determining simulation parameter information of the on-board sensor of the target vehicle among the plurality of vehicles according to the parameter information of the on-board sensor.
In the embodiments of the present disclosure, the simulation parameter information of the on-board sensor of the target vehicle may be set according to the parameter information of the on-board sensor of an actual vehicle. The simulation parameter information may include intrinsic parameters of the on-board sensor and a plurality of pieces of simulated pose information (extrinsic parameters). The on-board sensor may include an on-board camera, a millimeter-wave radar, an ultrasonic radar, and the like, and the target vehicle may be an autonomous vehicle among the plurality of vehicles or any vehicle that needs to be tested; this is not specifically limited by the present disclosure.
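As a data-layout sketch of the simulation parameter information just described — fixed intrinsics plus a list of simulated poses — one could use a structure like the following. All names, fields, and values here are assumptions for illustration:

```python
# Hypothetical container for on-board camera simulation parameters:
# intrinsics (fx, fy, cx, cy) plus a list of simulated poses (extrinsics).
from dataclasses import dataclass, field

@dataclass
class SensorSimParams:
    fx: float                 # focal length, x (intrinsic)
    fy: float                 # focal length, y (intrinsic)
    cx: float                 # principal point, x
    cy: float                 # principal point, y
    poses: list = field(default_factory=list)  # simulated (x, y, z, yaw) tuples

cam = SensorSimParams(fx=1000.0, fy=1000.0, cx=640.0, cy=360.0)
cam.poses.append((0.0, 0.0, 1.5, 0.0))  # one simulated pose along the route
```

Each pose in `poses` would later drive the generation of one static-element image and one dynamic-element image.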
Step 103, determining a plurality of traffic static element images and a plurality of simulated traffic dynamic element images according to the simulated traffic flow information and/or the simulation parameter information.
As a possible implementation of the embodiments of the present disclosure, the plurality of traffic static element images may be generated according to the pieces of simulated pose information in the simulation parameter information, and the plurality of simulated traffic dynamic element images may be generated according to the simulated traffic flow information and the simulation parameter information.
Step 104, performing image fusion on the plurality of traffic static element images and the plurality of simulated traffic dynamic element images to obtain a plurality of target fusion images.
To improve the realism and accuracy of the vehicle test, the traffic static elements and the traffic dynamic elements can be fused, realizing a highly realistic vehicle test.
Step 105, testing the target vehicle according to the plurality of target fusion images to obtain a test result of the target vehicle.
Furthermore, the target vehicle can be tested with the plurality of target fusion images to obtain its test result. For example, when the target vehicle is an autonomous vehicle, a perception test and a trajectory planning test can be performed on it to obtain the perception test result and the trajectory planning test result of the autonomous vehicle.
In conclusion, driving simulation is performed on a plurality of vehicles according to the set historical traffic flow information to obtain simulated traffic flow information corresponding to the plurality of vehicles; simulation parameter information of the on-board sensor of a target vehicle among the plurality of vehicles is determined according to the parameter information of the on-board sensor; a plurality of traffic static element images and a plurality of simulated traffic dynamic element images are determined according to the simulated traffic flow information and/or the simulation parameter information; image fusion is performed on the two sets of images to obtain a plurality of target fusion images; and the target vehicle is tested according to the plurality of target fusion images to obtain its test result. Because the traffic static element images and the simulated traffic dynamic element images are fused before testing, interaction between the simulated traffic dynamic elements and the traffic static elements is realized, and the accuracy of the vehicle test is improved.
In order to clearly illustrate how the above-described embodiments determine the plurality of traffic static element images and the plurality of simulated traffic dynamic element images based on the simulated traffic flow information and the simulated parameter information, the present disclosure proposes another vehicle testing method.
Fig. 2 is a schematic diagram according to a second embodiment of the present disclosure.
As shown in fig. 2, the vehicle testing method may include the steps of:
Step 201, according to the set historical traffic flow information, driving simulation is carried out on a plurality of vehicles, and simulated traffic flow information corresponding to the plurality of vehicles is obtained.
To perform driving simulation on the plurality of vehicles, in the embodiments of the present disclosure, the driving parameter information of the plurality of vehicles may be extracted from the historical traffic flow information, and driving simulation may then be performed on the vehicles according to the driving parameter information to obtain the simulated traffic flow information corresponding to the plurality of vehicles.
In order to improve the accuracy of the vehicle driving simulation, the driving parameter information may include: position information, direction information, speed information, acceleration information, travel lane information, and the like.
Step 202, determining simulation parameter information of the on-board sensor of the target vehicle among the plurality of vehicles according to the parameter information of the on-board sensor.
Step 203, for each piece of simulated pose information among the plurality of pieces, determining a traffic static element image matched with that simulated pose information.
To improve the realism of the traffic static elements in the vehicle test, and thereby the confidence of the test, as a possible implementation of the embodiments of the present disclosure, the pieces of simulated pose information may each be input into a trained traffic static element image generation model to obtain the traffic static element images output by the model. The trained model has learned the correspondence between pose information and traffic static element images; it may be, for example, a NeRF model.
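The pose-to-image correspondence that the trained model learns can be illustrated with a toy stand-in. Everything below is a hypothetical placeholder — a real NeRF-style generator would render photorealistic images, which is far beyond this sketch:

```python
# Toy stand-in for a trained pose-conditioned generator: the same simulated
# pose always yields the same static-element "image" (a 2-D grid of values),
# mirroring the learned pose-to-image correspondence described above.

def generate_static_image(pose, size=(2, 3)):
    """Map a simulated pose (x, y, z, yaw) to a deterministic toy image."""
    h, w = size
    val = int(pose[0] * 10 + pose[1]) % 256   # pose-dependent pixel value
    return [[val] * w for _ in range(h)]

img = generate_static_image((1.2, 3.0, 1.5, 0.0))
```

The essential property the sketch preserves is determinism: querying the same pose twice yields the same static-element image, while a different pose yields a different one.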
Step 204, performing image rendering with the simulated traffic flow information and the plurality of pieces of simulation parameter information to obtain a plurality of simulated traffic dynamic element images.
In the embodiments of the present disclosure, in order for the simulated traffic dynamic element images to include a plurality of traffic dynamic elements, three-dimensional rendering may be performed using the simulated traffic flow information and the simulation parameter information, rendering a plurality of simulated traffic dynamic element images that contain dynamic elements such as vehicles, pedestrians, and traffic lights.
As an example, the simulated traffic flow information and the simulation parameter information are input into a three-dimensional rendering model, which renders the simulated traffic flow information based on the pieces of simulated pose information in the simulation parameter information, outputting a plurality of simulated traffic dynamic element images matched with those poses.
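One ingredient of such pose-based rendering — placing a simulated vehicle at the correct pixel for a given sensor pose — can be sketched with a pinhole projection. The intrinsic values below are assumptions; full mesh rendering of vehicles and pedestrians is out of scope for this sketch:

```python
# Hypothetical sketch: project a simulated vehicle's camera-frame 3-D
# position into the sensor image with a pinhole model.

def project_point(point, fx, fy, cx, cy):
    """Pinhole projection of a camera-frame point (x, y, z), z > 0."""
    x, y, z = point
    u = fx * x / z + cx   # image column
    v = fy * y / z + cy   # image row
    return u, v

# A vehicle 10 m ahead, 2 m right, 0.5 m below the optical axis,
# with assumed intrinsics fx = fy = 1000 px, principal point (640, 360).
u, v = project_point((2.0, 0.5, 10.0), fx=1000.0, fy=1000.0, cx=640.0, cy=360.0)
```

A renderer would repeat this projection for every dynamic element at every simulated pose to compose each dynamic-element image.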
Step 205, performing image fusion on the plurality of traffic static element images and the plurality of simulated traffic dynamic element images to obtain a plurality of target fusion images.
Step 206, testing the target vehicle according to the plurality of target fusion images to obtain a test result of the target vehicle.
It should be noted that the execution processes of steps 201 to 202 and steps 205 to 206 may be implemented by any one of the embodiments of the present disclosure, and the embodiments of the present disclosure do not limit this and are not described again.
In summary, for each piece of simulated pose information among the plurality of pieces, a traffic static element image matched with that pose is determined, and image rendering is performed with the simulated traffic flow information and the simulation parameter information to obtain a plurality of simulated traffic dynamic element images. In this way, a plurality of simulated traffic dynamic element images containing multiple traffic dynamic elements, together with traffic static element images of high realism, can be generated according to the simulated traffic flow information and/or the simulation parameter information.
To clearly illustrate how the above embodiment trains the traffic static element image generation model so that it learns the correspondence between pose information and traffic static element images, the present disclosure proposes another vehicle testing method.
Fig. 3 is a schematic diagram according to a third embodiment of the present disclosure.
As shown in fig. 3, the vehicle testing method may include the steps of:
Step 301, according to the set historical traffic flow information, driving simulation is carried out on a plurality of vehicles to obtain simulated traffic flow information corresponding to the plurality of vehicles.
Step 302, determining simulation parameter information of the on-board sensor of the target vehicle among the plurality of vehicles according to the parameter information of the on-board sensor.
Step 303, obtaining a sample traffic static element image, which is labeled with the pose information of the corresponding on-board sensor.
In the present disclosure, the sample traffic static element image may be acquired online; for example, images containing a plurality of static elements on real roads may be collected via web crawling and used as sample traffic static element images. Alternatively, the sample traffic static element image may be an image containing a plurality of static elements on a real road captured by an on-board sensor, and so on; this is not limited by the present disclosure.
It should be noted that, in order for the sample traffic static element image to carry pose information, the pose information of the corresponding on-board sensor may be labeled on the sample traffic static element image.
Step 304, inputting the pose information of the on-board sensor labeled on the sample traffic static element image into an initial static element image generation model to obtain a traffic static element prediction image output by the model.
In order for the static element image generation model to learn the correspondence between pose information and traffic static element images, as one example, the image information of the sample traffic static element image and the labeled pose information of the on-board sensor may be input into the initial static element image generation model to obtain the traffic static element prediction image it outputs.
As another example, a plurality of pieces of sample traffic static element image information are preset in the initial static element image generation model, and the pose information of the on-board sensor labeled on the sample traffic static element image is then input into the model to obtain the traffic static element prediction image it outputs.
Step 305, training the initial traffic static element image generation model according to the difference between the traffic static element prediction image and the sample traffic static element image.
Further, the coefficients of the initial traffic static element image generation model are adjusted according to the difference between the traffic static element prediction image and the sample traffic static element image, so as to minimize that difference.
It should be noted that the above example takes minimization of the difference between the traffic static element prediction image and the sample image as the termination condition of model training. In practice, other termination conditions may be set: for example, the number of training iterations reaching a set count, or the training duration reaching a set length. The present disclosure does not limit this.
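A minimal sketch of this training loop — a single-parameter toy model, a pixel-wise difference loss, and the two kinds of termination condition mentioned above (difference below a tolerance, or a maximum iteration count) — might look as follows. The optimiser and model are illustrative stand-ins, not the patent's method:

```python
# Toy training loop: one brightness parameter, MSE loss, finite-difference
# gradient. Stops when the prediction/sample difference falls below `tol`
# or after `max_iters` iterations (the termination conditions noted above).

def mse(pred, target):
    """Mean squared error between two equal-sized grayscale images."""
    n = len(pred) * len(pred[0])
    return sum((p - t) ** 2
               for prow, trow in zip(pred, target)
               for p, t in zip(prow, trow)) / n

def train(predict, param, sample, lr=0.1, max_iters=100, tol=1e-6):
    loss = mse(predict(param), sample)
    for _ in range(max_iters):
        if loss < tol:                      # termination: difference minimised
            break
        eps = 1e-3                          # finite-difference step size
        grad = (mse(predict(param + eps), sample) - loss) / eps
        param -= lr * grad                  # "coefficient adjustment"
        loss = mse(predict(param), sample)
    return param, loss

# Hypothetical usage: fit a uniform image's brightness to a sample image.
predict = lambda b: [[b, b], [b, b]]
sample = [[5.0, 5.0], [5.0, 5.0]]
param, loss = train(predict, 0.0, sample)
```

A real model has millions of coefficients and would use backpropagation, but the structure — compute difference, adjust coefficients, check termination — is the same loop.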
Step 306, inputting each piece of simulated pose information into the trained traffic static element image generation model to obtain the traffic static element image it outputs.
Step 307, performing image rendering with the simulated traffic flow information and the plurality of pieces of simulation parameter information to obtain a plurality of simulated traffic dynamic element images.
Step 308, performing image fusion on the plurality of traffic static element images and the plurality of simulated traffic dynamic element images to obtain a plurality of target fusion images.
Step 309, testing the target vehicle according to the plurality of target fusion images to obtain a test result of the target vehicle.
It should be noted that the execution processes of steps 301 to 302 and steps 307 to 309 may be implemented by any one of the embodiments of the present disclosure, and the embodiments of the present disclosure do not limit this and are not described again.
In conclusion, a sample traffic static element image is acquired; the pose information of the on-board sensor labeled on the sample traffic static element image is input into an initial static element image generation model to obtain a traffic static element prediction image output by the model; and the initial model is trained according to the difference between the prediction image and the sample image. The traffic static element image generation model can thus be trained to learn the correspondence between pose information and traffic static element images.
In order to clearly illustrate how the above embodiments perform image fusion on the multiple traffic static element images and the multiple simulated traffic dynamic element images to obtain multiple target fusion images, the present disclosure proposes another vehicle testing method.
Fig. 4 is a schematic diagram according to a fourth embodiment of the present disclosure.
As shown in fig. 4, the vehicle testing method may include the steps of:
Step 401, according to the set historical traffic flow information, performing driving simulation on a plurality of vehicles to obtain simulated traffic flow information corresponding to the plurality of vehicles.
Step 402, determining simulation parameter information of an on-board sensor of a target vehicle from the plurality of vehicles according to the parameter information of the on-board sensor.
Step 403, determining a plurality of traffic static element images and a plurality of simulated traffic dynamic element images according to the simulated traffic flow information and/or the simulated parameter information.
Step 404, for each of the plurality of traffic static element images, determining the simulated traffic dynamic element image matched with it according to the simulated pose information corresponding to that traffic static element image.
In the embodiments of the present disclosure, each traffic static element image corresponds to one piece of simulated pose information, and so does each simulated traffic dynamic element image. Accordingly, for any traffic static element image, the simulated traffic dynamic element image corresponding to the same simulated pose information can be found and taken as the image matched with it.
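When both image sets are indexed by their simulated pose, the matching just described reduces to a keyed lookup. The keys and image names below are hypothetical placeholders:

```python
# Hypothetical sketch: pair each static-element image with the
# dynamic-element image generated for the same simulated pose.

def match_images(static_by_pose, dynamic_by_pose):
    """Return {pose: (static_image, dynamic_image)} for shared poses."""
    return {pose: (static_by_pose[pose], dynamic_by_pose[pose])
            for pose in static_by_pose
            if pose in dynamic_by_pose}

static = {(0, 0): "static_img_a", (1, 0): "static_img_b"}
dynamic = {(0, 0): "dyn_img_a", (1, 0): "dyn_img_b", (2, 0): "dyn_img_c"}
pairs = match_images(static, dynamic)
```

Each matched pair then feeds the augmented-reality synthesis of the next step.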
Step 405, performing augmented-reality synthesis on each traffic static element image and the simulated traffic dynamic element image matched with it to obtain a synthesized image.
To improve the realism of the vehicle driving environment in the test, the virtual traffic dynamic elements can be fused with the real traffic static elements, realizing interaction between the simulated traffic dynamic elements and the real traffic static elements and improving the accuracy of the vehicle test.
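A minimal compositing sketch for this synthesis step is a binary-mask overlay: dynamic-element pixels overwrite the static background wherever the dynamic layer is non-transparent. A real system would alpha-blend and handle occlusion; all names and values here are illustrative:

```python
# Hypothetical sketch: overlay rendered dynamic-element pixels onto the
# static background image where the mask is 1 (binary-mask compositing).

def fuse(static_img, dynamic_img, mask):
    """Return the fused image: dynamic pixels where mask == 1, else static."""
    return [[d if m else s
             for s, d, m in zip(srow, drow, mrow)]
            for srow, drow, mrow in zip(static_img, dynamic_img, mask)]

background = [[10, 10, 10],
              [10, 10, 10]]      # static-element image (toy grayscale)
vehicle_layer = [[0, 200, 0],
                 [0, 200, 0]]    # rendered dynamic elements
vehicle_mask = [[0, 1, 0],
                [0, 1, 0]]       # 1 where a dynamic element is present
fused = fuse(background, vehicle_layer, vehicle_mask)
```

Applying this to every matched static/dynamic pair yields the set of target fusion images used for testing.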
Step 406, determining a plurality of target fusion images according to the synthesized images.
Further, the plurality of synthesized images may be used as the plurality of target fusion images.
Step 407, testing the target vehicle according to the plurality of target fusion images to obtain a test result of the target vehicle.
It should be noted that the execution processes of steps 401 to 403 and step 407 may be implemented by any one of the embodiments of the present disclosure, and the embodiments of the present disclosure do not limit this and are not described again.
In summary, for any one of the plurality of traffic static element images, a simulated traffic dynamic element image matched with that image is determined according to its corresponding simulated pose information; augmented reality synthesis is performed on the traffic static element image and the matched simulated traffic dynamic element image to obtain a synthesized image; and a plurality of target fusion images are determined from the synthesized images. By synthesizing virtual traffic dynamic elements with real traffic static elements, interaction between the simulated traffic dynamic elements and the real traffic static elements is realized, the sense of reality of the vehicle running environment in the vehicle test is improved, and the accuracy of the vehicle test is improved.
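The pose-based matching and augmented reality synthesis of steps 404-405 can be sketched as follows. The Euclidean nearest-pose rule and the flat pixel-list image representation are illustrative assumptions; the patent only requires that images sharing simulated pose information be matched and composited.

```python
import math

def match_by_pose(static_images, dynamic_images):
    """For each (image, pose) pair in static_images, pick the simulated
    traffic dynamic element image whose simulated pose is nearest
    (Euclidean distance -- an assumed concrete matching rule)."""
    pairs = []
    for static_img, static_pose in static_images:
        dynamic_img, _ = min(
            dynamic_images,
            key=lambda d: math.dist(d[1], static_pose))
        pairs.append((static_img, dynamic_img))
    return pairs

def ar_composite(static_img, dynamic_img, mask):
    """Pixel-wise AR synthesis: keep the rendered dynamic element where
    the render mask is set, and the static background elsewhere.
    Images are flat pixel lists purely for illustration."""
    return [d if m else s
            for s, d, m in zip(static_img, dynamic_img, mask)]
```

A real implementation would operate on rendered image tensors and use depth or alpha information from the renderer as the mask.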
In order to clearly illustrate how the above embodiment tests the target vehicle according to the multiple target fusion images to obtain the test result of the target vehicle, the present disclosure proposes another vehicle testing method.
Fig. 5 is a schematic diagram according to a fifth embodiment of the present disclosure.
As shown in fig. 5, the vehicle testing method may include the steps of:
Step 501, performing driving simulation on a plurality of vehicles according to the set historical traffic flow information, to obtain simulated traffic flow information corresponding to the plurality of vehicles.
Step 502, determining simulation parameter information of an on-board sensor of a target vehicle from a plurality of vehicles according to the parameter information of the on-board sensor.
Step 503, determining a plurality of traffic static element images and a plurality of simulated traffic dynamic element images according to the simulated traffic flow information and/or the simulated parameter information.
Step 504, performing image fusion on the plurality of traffic static element images and the plurality of simulated traffic dynamic element images to obtain a plurality of target fusion images.
Step 505, performing an obstacle vehicle perception test on the target vehicle according to the plurality of target fusion images, to obtain a perception test result of the target vehicle.
In order to improve the driving safety of the vehicle, the perception of obstacles by the vehicle may be tested. As an example, the target vehicle may be an autonomous vehicle, and an obstacle perception test may be performed on the plurality of target fusion images by using a vehicle perception algorithm (e.g., a target detection algorithm) to obtain a perception test result of the autonomous vehicle.
Step 506, performing a trajectory planning test on the target vehicle according to the plurality of target fusion images, to obtain a trajectory planning test result of the target vehicle.
Meanwhile, a Planning and Control (PNC) algorithm can be adopted to perform a trajectory planning test on the multiple target fusion images, so as to obtain a trajectory planning test result of the autonomous vehicle.
It should be noted that, in the present disclosure, the execution sequence of step 505 and step 506 is not specifically limited, and step 505 and step 506 may be executed in parallel or sequentially.
And 507, generating a test result according to the perception test result and the track planning test result.
And further splicing the perception test result and the track planning test result to obtain the test result of the target vehicle.
It should be noted that the execution processes of steps 501 to 504 may be implemented by any one of the embodiments of the present disclosure, and the embodiments of the present disclosure do not limit this and are not described again.
In summary, an obstacle vehicle perception test is performed on the target vehicle according to the plurality of target fusion images to obtain a perception test result of the target vehicle; a trajectory planning test is performed on the target vehicle according to the plurality of target fusion images to obtain a trajectory planning test result of the target vehicle; and the test result is generated according to the perception test result and the trajectory planning test result, so that both the perception capability and the trajectory planning capability of the target vehicle are covered by the test.
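A minimal harness for steps 505-507 might look like the following, where `perceive` and `plan` stand in for the perception algorithm (e.g., an object detector) and the PNC planner; both are placeholders, since the patent names no concrete implementations.

```python
def run_vehicle_tests(fused_images, perceive, plan):
    """Steps 505-507: run the obstacle perception test and the
    trajectory planning test over the target fusion images (the two
    tests are independent, so they may run in parallel or in sequence),
    then combine both partial results into the overall test result."""
    perception_results = [perceive(img) for img in fused_images]
    planning_results = [plan(img) for img in fused_images]
    return {"perception": perception_results,
            "planning": planning_results}
```

The returned dictionary is one possible shape for the combined test result of step 507.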
In order to further improve the driving safety of the vehicle, the present disclosure proposes another vehicle testing method.
Fig. 6 is a schematic diagram according to a sixth embodiment of the present disclosure. In the embodiment of the present disclosure, the test result may be evaluated to generate test evaluation indexes, and a test report may be generated according to the test evaluation indexes, so that relevant personnel can improve the vehicle according to the test report. The embodiment shown in fig. 6 may include the following steps:
Step 601, performing driving simulation on a plurality of vehicles according to the set historical traffic flow information, to obtain simulated traffic flow information corresponding to the plurality of vehicles.
Step 602, determining simulation parameter information of an on-board sensor of a target vehicle among the plurality of vehicles according to the parameter information of the on-board sensor.
Step 603, determining a plurality of traffic static element images and a plurality of simulated traffic dynamic element images according to the simulated traffic flow information and/or the simulated parameter information.
Step 604, performing image fusion on the plurality of traffic static element images and the plurality of simulated traffic dynamic element images to obtain a plurality of target fusion images.
Step 605, testing the target vehicle according to the plurality of target fusion images to obtain a test result of the target vehicle.
Step 606, comparing the test result with the labeling result to obtain a first test evaluation index and a second test evaluation index corresponding to the test result.
The first test evaluation index is used for representing the perception accuracy of the target vehicle for obstacles, and the second test evaluation index is used for representing the trajectory planning accuracy of the target vehicle.
In the embodiment of the disclosure, the perception test result in the test result may be compared with the perception labeling result in the labeling result to determine the difference between them, and the first test evaluation index may be determined according to this difference. The first test evaluation index is used for representing the perception accuracy of the target vehicle for obstacles, and the difference and the index are negatively correlated; that is, the smaller the difference between the perception test result and the perception labeling result, the higher the first test evaluation index.
Similarly, the trajectory planning test result in the test result may be compared with the trajectory planning labeling result in the labeling result to determine the difference between them, and the second test evaluation index may be determined according to this difference. The second test evaluation index is used for representing the trajectory planning accuracy of the target vehicle, and the difference and the index are negatively correlated; that is, the smaller the difference between the trajectory planning test result and the trajectory planning labeling result, the higher the second test evaluation index.
Step 607, generating a test report according to the first test evaluation index and the second test evaluation index.
Furthermore, a test report can be generated according to the first test evaluation index and the second test evaluation index. Relevant personnel can determine the obstacle perception accuracy and the trajectory planning accuracy of the vehicle according to the test report, and improve the vehicle accordingly, so that the driving safety of the vehicle can be improved.
It should be noted that the execution processes of steps 601 to 605 may be implemented by any one of the embodiments of the present disclosure, and the embodiments of the present disclosure do not limit this and are not described again.
In summary, the first test evaluation index and the second test evaluation index corresponding to the test result are obtained by comparing the test result with the labeling result, and a test report is generated according to the two indexes. The test result is thus condensed into test evaluation indexes and a test report, so that relevant personnel can improve the vehicle according to the test report, further improving the driving safety of the vehicle.
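One way to realise the negative correlation between the result/annotation difference and the evaluation index described above is the mapping below. The 1/(1+d) form and the injected per-frame difference function are assumptions; the patent fixes only the monotonicity (smaller difference, higher index).

```python
def evaluation_index(test_results, labeled_results, difference):
    """Aggregate the per-frame difference between test results and
    labeling results and map it into (0, 1]: the smaller the total
    difference, the higher the index (negative correlation)."""
    total = sum(difference(t, a)
                for t, a in zip(test_results, labeled_results))
    return 1.0 / (1.0 + total)

def build_test_report(first_index, second_index):
    """Step 607: assemble a report from the perception index (first)
    and the trajectory planning index (second); key names are
    illustrative."""
    return {"obstacle_perception_accuracy_index": first_index,
            "trajectory_planning_accuracy_index": second_index}
```

With a suitable `difference` (e.g., IoU-based for perception, displacement error for trajectories), the same mapping serves both indexes.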
In order to clearly illustrate the above embodiments, the description will now be made by way of example.
For example, a vehicle testing method of an embodiment of the present disclosure may include the steps of:
1. As shown in fig. 7, on the basis of the NeRF (Neural Radiance Fields) technology, road environment image information of a certain area (historical image information carrying pose information, accumulated in actual road tests) is taken as input, and a NeRF model corresponding to the scene is trained;
2. Based on historical traffic flow data of large-scale actual roads, highly realistic traffic flow information such as the positions and motions of the host vehicle and the obstacle vehicles is generated, and the host vehicle and the obstacle vehicles are driven to move in the scene to simulate a real traffic scene;
3. The parameter information of the virtual on-board sensor in the vehicle is set according to the intrinsic and extrinsic parameters of the real on-board sensor, in combination with the physical characteristics of the on-board sensor;
4. As shown in fig. 8, the pose data of the sensor is used as input to the NeRF model, and a highly photorealistic environment rendering of the corresponding view angle is generated;
5. 3D rendering is performed according to the traffic flow information generated in step 2, the pose information of the sensor, and the like, producing sensor data containing relevant dynamic elements such as vehicles, pedestrians, and traffic lights;
6. AR synthesis is performed on the results generated in steps 4 and 5 to obtain an image containing both the static environment and the dynamic vehicle elements;
7. The result of step 6 is used as input to vehicle control algorithms such as automatic driving perception and PNC (Planning and Control), and the obstacle perception and trajectory planning of the vehicle are tested;
8. The test result is evaluated to generate a test report.
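The eight steps above can be sketched end to end as one function. Every callable argument is a placeholder for a component the example leaves unspecified (NeRF trainer, traffic flow simulator, renderers, perception/PNC algorithms, evaluator), so none of these names come from the disclosure itself.

```python
def vehicle_test_pipeline(road_images, historical_flow, sensor_params,
                          train_nerf, simulate_flow, sensor_poses,
                          render_static, render_dynamic, ar_compose,
                          perceive, plan, evaluate):
    nerf = train_nerf(road_images)                 # step 1: scene NeRF from road images
    flow = simulate_flow(historical_flow)          # step 2: host + obstacle vehicle motion
    poses = sensor_poses(flow, sensor_params)      # step 3: virtual on-board sensor poses
    fused = []
    for pose in poses:
        static = render_static(nerf, pose)         # step 4: photorealistic background
        dynamic = render_dynamic(flow, pose)       # step 5: vehicles, pedestrians, lights
        fused.append(ar_compose(static, dynamic))  # step 6: AR synthesis
    results = {"perception": [perceive(f) for f in fused],   # step 7: perception test
               "planning": [plan(f) for f in fused]}         #         and PNC test
    return evaluate(results)                       # step 8: evaluate into a report
```

Injecting the components as callables keeps the pipeline testable with trivial stand-ins, as the test below does.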
According to the vehicle testing method of the embodiment of the disclosure, driving simulation is performed on a plurality of vehicles according to the set historical traffic flow information to obtain simulated traffic flow information corresponding to the plurality of vehicles; simulation parameter information of an on-board sensor of a target vehicle among the plurality of vehicles is determined according to the parameter information of the on-board sensor; a plurality of traffic static element images and a plurality of simulated traffic dynamic element images are determined according to the simulated traffic flow information and the simulated parameter information; image fusion is performed on the plurality of traffic static element images and the plurality of simulated traffic dynamic element images to obtain a plurality of target fusion images; and the target vehicle is tested according to the plurality of target fusion images to obtain a test result of the target vehicle. By fusing the traffic static element images with the simulated traffic dynamic element images and testing the target vehicle on the resulting target fusion images, interaction between the simulated traffic dynamic elements and the real traffic static elements is realized, the sense of reality of the vehicle running environment in the vehicle test is improved, and the fidelity and accuracy of the vehicle test are improved.
In order to implement the above embodiments, the present disclosure proposes a vehicle testing device.
Fig. 9 is a schematic diagram according to a seventh embodiment of the present disclosure. As shown in fig. 9, the vehicle testing apparatus 900 includes: a simulation module 910, a first determination module 920, a second determination module 930, a fusion module 940, and a test module 950.
The simulation module 910 is configured to perform driving simulation on a plurality of vehicles according to set historical traffic flow information, to obtain simulated traffic flow information corresponding to the plurality of vehicles; the first determining module 920 is configured to determine, according to parameter information of an on-board sensor, simulation parameter information of the on-board sensor of a target vehicle among the plurality of vehicles; the second determining module 930 is configured to determine a plurality of traffic static element images and a plurality of simulated traffic dynamic element images according to the simulated traffic flow information and/or the simulated parameter information; the fusion module 940 is configured to perform image fusion on the plurality of traffic static element images and the plurality of simulated traffic dynamic element images to obtain a plurality of target fusion images; and the testing module 950 is configured to test the target vehicle according to the plurality of target fusion images to obtain a test result of the target vehicle.
As a possible implementation manner of the embodiment of the present disclosure, the second determining module 930 is configured to: determining a traffic static element image matched with any simulated pose information according to any simulated pose information in the plurality of simulated pose information; and rendering the image by adopting the simulated traffic flow information and the plurality of simulated parameter information to obtain a plurality of simulated traffic dynamic element images.
As a possible implementation manner of the embodiment of the present disclosure, the second determining module 930 is further configured to: and inputting any simulated pose information into the trained traffic static element image generation model to obtain a traffic static element image output by the trained traffic static element image generation model.
As a possible implementation manner of the embodiment of the present disclosure, the traffic static element image generation model is obtained through the following module training: the device comprises an acquisition module, an input module and a training module.
The acquisition module is used for acquiring a sample traffic static element image, wherein the sample traffic static element image is labeled with corresponding pose information of the vehicle-mounted sensor; the input module is used for inputting the pose information of the vehicle-mounted sensor labeled on the sample traffic static element image into an initial traffic static element image generation model, to obtain a traffic static element prediction image output by the initial model; and the training module is used for training the initial traffic static element image generation model according to the difference between the traffic static element prediction image and the sample traffic static element image.
As a possible implementation manner of the embodiment of the present disclosure, the second determining module 930 is further configured to: input the simulated traffic flow information and the plurality of pieces of simulation parameter information into a three-dimensional rendering model, so that the three-dimensional rendering model performs three-dimensional rendering on the simulated traffic flow information based on the plurality of pieces of simulated pose information in the simulation parameter information, to obtain a plurality of simulated traffic dynamic element images output by the three-dimensional rendering model and matched with the plurality of pieces of simulated pose information.
As a possible implementation manner of the embodiment of the present disclosure, the fusion module 940 is configured to: for any one of the plurality of traffic static element images, determine a simulated traffic dynamic element image matched with that traffic static element image according to the simulated pose information corresponding to it; perform augmented reality synthesis on the traffic static element image and the matched simulated traffic dynamic element image to obtain a synthesized image; and determine a plurality of target fusion images from the synthesized images.
As a possible implementation manner of the embodiment of the present disclosure, the simulation module 910 is configured to: extract driving parameter information of the plurality of vehicles from the historical traffic flow information; and perform driving simulation on the plurality of vehicles according to the driving parameter information, to obtain the simulated traffic flow information corresponding to the plurality of vehicles.
As a possible implementation manner of the embodiment of the present disclosure, the driving parameter information includes at least one of the following parameter information: position information, direction information, speed information, acceleration information, and lane information.
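The driving parameter information listed above can be sketched as a small record plus one update step. Both the field names and the constant-acceleration motion model are illustrative assumptions; the patent only names the parameter categories.

```python
import math
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class DrivingParams:
    """Per-vehicle driving parameters named by the patent: position,
    direction (heading), speed, acceleration, and lane information."""
    x: float
    y: float
    heading: float       # direction, in radians
    speed: float
    acceleration: float
    lane_id: int

def advance(p: DrivingParams, dt: float) -> DrivingParams:
    """One driving-simulation step under constant acceleration along
    the current heading (an assumed motion model)."""
    dist = p.speed * dt + 0.5 * p.acceleration * dt * dt
    return replace(p,
                   x=p.x + dist * math.cos(p.heading),
                   y=p.y + dist * math.sin(p.heading),
                   speed=p.speed + p.acceleration * dt)
```

Repeatedly applying `advance` to the parameters extracted for each vehicle yields simulated traffic flow information as a sequence of states.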
As a possible implementation manner of the embodiment of the present disclosure, the testing module 950 is configured to: perform an obstacle perception test on the target vehicle according to the plurality of target fusion images, to obtain a perception test result of the target vehicle; perform a trajectory planning test on the target vehicle according to the plurality of target fusion images, to obtain a trajectory planning test result of the target vehicle; and generate the test result according to the perception test result and the trajectory planning test result.
As a possible implementation manner of the embodiment of the present disclosure, the vehicle testing apparatus 900 further includes: the device comprises a comparison module and a generation module.
The comparison module is used for comparing the test result with the labeling result to obtain a first test evaluation index and a second test evaluation index corresponding to the test result, wherein the first test evaluation index is used for representing the perception accuracy of the target vehicle for obstacles, and the second test evaluation index is used for representing the trajectory planning accuracy of the target vehicle; and the generating module is used for generating a test report according to the first test evaluation index and the second test evaluation index.
The vehicle testing device of the embodiment of the disclosure performs driving simulation on a plurality of vehicles according to the set historical traffic flow information to obtain simulated traffic flow information corresponding to the plurality of vehicles; determines simulation parameter information of an on-board sensor of a target vehicle among the plurality of vehicles according to the parameter information of the on-board sensor; determines a plurality of traffic static element images and a plurality of simulated traffic dynamic element images according to the simulated traffic flow information and/or the simulated parameter information; performs image fusion on the plurality of traffic static element images and the plurality of simulated traffic dynamic element images to obtain a plurality of target fusion images; and tests the target vehicle according to the plurality of target fusion images to obtain a test result of the target vehicle. The device thereby realizes interaction between the simulated traffic dynamic elements and the real traffic static elements, improves the sense of reality of the vehicle running environment in the vehicle test, and improves the fidelity and accuracy of the vehicle test.
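The division of apparatus 900 into five modules can be mirrored as a small composition of injected callables. The class and method names below are illustrative only and are not part of the disclosure.

```python
class VehicleTestingApparatus:
    """Wires together the five modules of apparatus 900: simulation
    (910), first determining (920), second determining (930),
    fusion (940), and testing (950)."""

    def __init__(self, simulate, determine_sensor_params,
                 determine_element_images, fuse_images, run_test):
        self.simulate = simulate
        self.determine_sensor_params = determine_sensor_params
        self.determine_element_images = determine_element_images
        self.fuse_images = fuse_images
        self.run_test = run_test

    def test_vehicle(self, historical_flow, sensor_params):
        flow = self.simulate(historical_flow)
        sim_params = self.determine_sensor_params(sensor_params)
        statics, dynamics = self.determine_element_images(flow, sim_params)
        fused = self.fuse_images(statics, dynamics)
        return self.run_test(fused)
```

Dependency injection keeps each module independently replaceable, matching the "possible implementation manner" variants the description lists for modules 910, 930, 940, and 950.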
In order to implement the above embodiments, the present disclosure also provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the vehicle testing method of the above embodiments.
In order to achieve the above embodiments, the present disclosure also proposes a non-transitory computer-readable storage medium storing computer instructions for causing the computer to execute the vehicle testing method of the above embodiments.
In order to implement the above embodiments, the present disclosure also proposes a computer program product comprising a computer program which, when executed by a processor, implements the vehicle testing method of the above embodiments.
In the technical scheme of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of the personal information of related users are all performed on the premise of obtaining the users' consent, comply with the provisions of relevant laws and regulations, and do not violate public order and good customs.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 10 shows a schematic block diagram of an example electronic device 1000 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital processors, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not intended to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 10, the device 1000 includes a computing unit 1001 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1002 or a computer program loaded from a storage unit 1008 into a Random Access Memory (RAM) 1003. In the RAM 1003, various programs and data necessary for the operation of the device 1000 can be stored. The computing unit 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to the bus 1004.
A number of components in device 1000 are connected to I/O interface 1005, including: an input unit 1006 such as a keyboard, a mouse, and the like; an output unit 1007 such as various types of displays, speakers, and the like; a storage unit 1008 such as a magnetic disk, an optical disk, or the like; and a communication unit 1009 such as a network card, a modem, a wireless communication transceiver, or the like. The communication unit 1009 allows the device 1000 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 1001 may be any of various general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 1001 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 1001 executes the respective methods and processes described above, such as the vehicle testing method. For example, in some embodiments, the vehicle testing method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 1008. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 1000 via the ROM 1002 and/or the communication unit 1009. When the computer program is loaded into the RAM 1003 and executed by the computing unit 1001, one or more steps of the vehicle testing method described above may be performed. Alternatively, in other embodiments, the computing unit 1001 may be configured to perform the vehicle testing method in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user may provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), the Internet, and blockchain networks.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may also be a server of a distributed system, or a server incorporating a blockchain.
It should be noted that artificial intelligence is the discipline of studying how to make computers simulate certain human thinking processes and intelligent behaviors (such as learning, reasoning, thinking, and planning), and it spans both hardware-level and software-level technologies. Artificial intelligence hardware technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, and big data processing; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, and knowledge graph technologies.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (23)

1. A vehicle testing method, comprising:
performing driving simulation on a plurality of vehicles according to set historical traffic flow information to obtain simulated traffic flow information corresponding to the plurality of vehicles;
determining simulation parameter information of an on-board sensor of a target vehicle in the plurality of vehicles according to the parameter information of the on-board sensor;
determining a plurality of traffic static element images and a plurality of simulated traffic dynamic element images according to simulated traffic flow information and/or the simulated parameter information;
performing image fusion on the plurality of traffic static element images and the plurality of simulated traffic dynamic element images to obtain a plurality of target fusion images;
and testing the target vehicle according to the plurality of target fusion images to obtain a test result of the target vehicle.
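An illustrative, non-limiting sketch of the pipeline in claim 1 follows. All function names, the one-dimensional "image" representation, and the constant-speed motion model are assumptions introduced for illustration only; they are not the patented implementation.

```python
W = 10  # width of the toy 1-D "image" (cells along the road); an illustrative assumption


def render_static(pose):
    # traffic static element image for a given sensor pose: uniform road cells
    return ["road"] * W


def render_dynamic(simulated_flow, ego, step):
    # simulated traffic dynamic element layer for one simulation step:
    # None = transparent, "car" marked at every other vehicle's cell
    layer = [None] * W
    for vid, track in simulated_flow.items():
        if vid == ego:
            continue
        cell = int(track[step])
        if 0 <= cell < W:
            layer[cell] = "car"
    return layer


def run_vehicle_test(historical_flow, sensor_params, steps=3, dt=1.0):
    # Step 1: driving simulation from historical traffic flow (constant speed)
    simulated_flow = {
        vid: [s["position"] + s["speed"] * dt * (k + 1) for k in range(steps)]
        for vid, s in historical_flow.items()
    }
    # Step 2: simulated poses of the target vehicle's on-board sensor
    ego = sensor_params["mounted_on"]
    poses = [{"x": x, "height": sensor_params["height"]} for x in simulated_flow[ego]]
    # Step 3: static element images and dynamic element layers, one per pose
    static_imgs = [render_static(p) for p in poses]
    dynamic_imgs = [render_dynamic(simulated_flow, ego, k) for k in range(steps)]
    # Step 4: image fusion: dynamic pixels overwrite the static background
    fused = [
        [d if d is not None else s for s, d in zip(si, di)]
        for si, di in zip(static_imgs, dynamic_imgs)
    ]
    # Step 5: toy perception test on the fused frames
    detections = [sum(1 for px in img if px == "car") for img in fused]
    return {"fused": fused, "detected_vehicles_per_frame": detections}
```

A target vehicle with one other simulated vehicle ahead would then "perceive" exactly one obstacle per fused frame.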
2. The method according to claim 1, wherein the simulation parameter information includes a plurality of simulation pose information of the vehicle-mounted sensor, and the determining a plurality of traffic static element images and a plurality of simulation traffic dynamic element images according to simulation traffic flow information and/or the simulation parameter information includes:
determining a traffic static element image matched with any simulated pose information according to any simulated pose information in the plurality of simulated pose information;
and rendering images by adopting the simulated traffic flow information and the plurality of simulated parameter information to obtain the plurality of simulated traffic dynamic element images.
3. The method of claim 2, wherein the determining, from any of the plurality of simulated pose information, a traffic static element image that matches the any of the simulated pose information comprises:
and inputting any simulation pose information into a trained traffic static element image generation model to obtain a traffic static element image output by the trained traffic static element image generation model.
4. The method of claim 3, wherein the traffic static element image generation model is trained by:
acquiring a sample traffic static element image, wherein the sample traffic static element image is marked with pose information corresponding to a vehicle-mounted sensor;
inputting the pose information of the vehicle-mounted sensor carried on the sample traffic static element image into an initial static element image generation model to obtain a traffic static element prediction image output by the initial static element image generation model;
and training the initial traffic static element image generation model according to the difference between the traffic static element prediction image and the sample traffic static element image.
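The training procedure of claim 4 (predict from the annotated pose, then train on the difference between the predicted and sample images) could be sketched as below. The linear "generator" and the mean-squared-error objective are stand-in assumptions; the patent does not fix a model architecture or loss.

```python
import numpy as np


def train_static_image_generator(samples, lr=0.1, epochs=200, seed=0):
    """samples: list of (pose_vector, image_vector) pairs, i.e. sample traffic
    static element images annotated with on-board sensor pose information."""
    rng = np.random.default_rng(seed)
    d_pose = samples[0][0].shape[0]
    d_img = samples[0][1].shape[0]
    # initial static element image generation "model": one weight matrix
    W = rng.normal(scale=0.1, size=(d_img, d_pose))
    for _ in range(epochs):
        for pose, image in samples:
            pred = W @ pose                      # predicted static element image
            grad = np.outer(pred - image, pose)  # dMSE/dW (up to a factor of 2)
            W -= lr * grad                       # update from prediction/sample difference
    return W
```

With two pose-annotated samples generated by a known linear map, the trained weights recover that map.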
5. The method of claim 2, wherein the image rendering using the simulated traffic flow information and the plurality of simulated parameter information to obtain the plurality of simulated traffic dynamic element images comprises:
and inputting the simulated traffic flow information and the plurality of pieces of simulated parameter information into a three-dimensional rendering model, so that the three-dimensional rendering model performs three-dimensional rendering on the simulated traffic flow information based on a plurality of pieces of simulated pose information in a plurality of simulated parameters, and a plurality of simulated traffic dynamic element images which are output by the three-dimensional rendering model and are matched with the plurality of pieces of simulated pose information are obtained.
6. The method of claim 2, wherein said image fusing said plurality of traffic static elemental images and said plurality of simulated traffic dynamic elemental images to obtain a plurality of target fused images comprises:
aiming at any one of the plurality of traffic static element images, determining a simulated traffic dynamic element image matched with any one of the traffic static element images according to the simulated pose information corresponding to any one of the traffic static element images;
carrying out augmented reality synthesis on any traffic static element image and the simulated traffic dynamic element image matched with the any traffic static element image to obtain a synthesized image;
and determining the plurality of target fusion images according to the synthesized images.
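The fusion step of claim 6 — matching each static element image to the dynamic layer rendered for the same simulated pose, then compositing the two — could be sketched as follows. The pose-id keys and the "None means transparent" convention are illustrative assumptions.

```python
def fuse_by_pose(static_images, dynamic_images):
    """static_images / dynamic_images: dicts keyed by simulated pose id."""
    fused = {}
    for pose_id, static in static_images.items():
        # select the dynamic element image matched to this pose
        dynamic = dynamic_images[pose_id]
        # augmented-reality style synthesis: dynamic pixels overwrite static ones
        fused[pose_id] = [d if d is not None else s for s, d in zip(static, dynamic)]
    return fused
```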
7. The method according to claim 1, wherein the driving simulation of a plurality of vehicles according to the set historical traffic flow information to obtain the simulated traffic flow information corresponding to the plurality of vehicles comprises:
extracting driving parameter information of the plurality of vehicles from the historical traffic flow information to obtain the driving parameter information of the plurality of vehicles;
and performing driving simulation on the plurality of vehicles according to the driving parameter information to obtain the simulated traffic flow information corresponding to the plurality of vehicles.
8. The method of claim 7, wherein the driving parameter information comprises at least one of:
position information, direction information, speed information, acceleration information, and lane information.
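Claims 7–8 can be illustrated by extracting per-vehicle driving parameters from raw traffic-flow records and propagating each vehicle with simple kinematics. The record field names and the constant-acceleration model are illustrative assumptions.

```python
# driving parameter fields named in claim 8 (field names are assumed)
DRIVING_FIELDS = ("position", "direction", "speed", "acceleration", "lane")


def extract_driving_parameters(traffic_flow_records):
    # keep only the driving-parameter fields, keyed by vehicle id
    params = {}
    for rec in traffic_flow_records:
        params[rec["vehicle_id"]] = {k: rec[k] for k in DRIVING_FIELDS if k in rec}
    return params


def simulate_step(params, dt=1.0):
    # one driving-simulation step with constant-acceleration kinematics
    out = {}
    for vid, p in params.items():
        a = p.get("acceleration", 0.0)
        v = p["speed"] + a * dt
        x = p["position"] + p["speed"] * dt + 0.5 * a * dt * dt
        out[vid] = {**p, "position": x, "speed": v}
    return out
```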
9. The method of claim 1, wherein the testing the target vehicle according to the plurality of target fusion images to obtain a test result of the target vehicle comprises:
according to the target fusion images, performing obstacle perception test on the target vehicle to obtain a perception test result of the target vehicle;
according to the plurality of target fusion images, performing track planning test on the target vehicle to obtain a track planning test result of the target vehicle;
and generating the test result according to the perception test result and the track planning test result.
10. The method according to any one of claims 1-9, wherein the method further comprises:
comparing the test result with the labeling result to obtain a first test evaluation index and a second test evaluation index corresponding to the test result, wherein the first test evaluation index is used for representing the obstacle perception accuracy of the target vehicle, and the second test evaluation index is used for representing the track planning accuracy of the target vehicle;
and generating a test report according to the first test evaluation index and the second test evaluation index.
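The evaluation in claim 10 — comparing the test result against the labeled ground truth to obtain a perception index and a track-planning index, then generating a report — could be sketched as below. The concrete metrics (detection hit-rate and mean trajectory deviation) are illustrative choices, not specified by the claims.

```python
def evaluate(test_result, annotation):
    # first index: fraction of labeled obstacles the target vehicle perceived
    detected = set(test_result["perceived_obstacles"])
    truth = set(annotation["obstacles"])
    perception_index = len(detected & truth) / len(truth) if truth else 1.0
    # second index: accuracy derived from mean deviation of the planned track
    errors = [
        abs(p - g)
        for p, g in zip(test_result["trajectory"], annotation["trajectory"])
    ]
    planning_index = 1.0 / (1.0 + sum(errors) / len(errors))
    return {"perception_index": perception_index, "planning_index": planning_index}


def make_report(indices):
    # test report generated from the two evaluation indices
    return (
        f"perception accuracy: {indices['perception_index']:.2f}; "
        f"track planning accuracy: {indices['planning_index']:.2f}"
    )
```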
11. A vehicle testing apparatus comprising:
the simulation module is used for performing driving simulation on a plurality of vehicles according to set historical traffic flow information to obtain simulated traffic flow information corresponding to the plurality of vehicles;
the first determining module is used for determining simulation parameter information of an on-board sensor of a target vehicle in the vehicles according to the parameter information of the on-board sensor;
the second determination module is used for determining a plurality of traffic static element images and a plurality of simulated traffic dynamic element images according to simulated traffic flow information and/or the simulated parameter information;
the fusion module is used for carrying out image fusion on the plurality of traffic static element images and the plurality of simulated traffic dynamic element images to obtain a plurality of target fusion images;
and the testing module is used for testing the target vehicle according to the plurality of target fusion images to obtain a testing result of the target vehicle.
12. The apparatus of claim 11, wherein the second determining means is configured to:
determining a traffic static element image matched with any simulated pose information according to any simulated pose information in the plurality of simulated pose information;
and rendering images by adopting the simulated traffic flow information and the plurality of simulated parameter information to obtain the plurality of simulated traffic dynamic element images.
13. The apparatus of claim 12, wherein the second determining means is further configured to:
and inputting any simulation pose information into a trained traffic static element image generation model to obtain a traffic static element image output by the trained traffic static element image generation model.
14. The apparatus of claim 13, wherein the traffic static element image generation model is trained by the following modules:
the acquisition module is used for acquiring a sample traffic static element image, wherein the sample traffic static element image carries pose information of an on-vehicle sensor;
the input module is used for inputting the pose information of the vehicle-mounted sensor carried on the sample traffic static element image into an initial static element image generation model so as to obtain a traffic static element prediction image output by the initial static element image generation model;
and the training module is used for training the initial traffic static element image generation model according to the difference between the traffic static element prediction image and the sample traffic static element image.
15. The apparatus of claim 12, wherein the second determining means is further configured to:
and inputting the simulated traffic flow information and the plurality of pieces of simulated parameter information into a three-dimensional rendering model, so that the three-dimensional rendering model performs three-dimensional rendering on the simulated traffic flow information based on a plurality of pieces of simulated pose information in a plurality of simulated parameters, and a plurality of simulated traffic dynamic element images which are output by the three-dimensional rendering model and are matched with the plurality of pieces of simulated pose information are obtained.
16. The apparatus of claim 12, wherein the fusion module is configured to:
aiming at any one of the plurality of traffic static element images, determining a simulated traffic dynamic element image matched with any one of the traffic static element images according to the simulated pose information corresponding to any one of the traffic static element images;
carrying out augmented reality synthesis on any one of the traffic static element images and the simulated traffic dynamic element image matched with the any traffic static element image to obtain a synthesized image;
and determining the plurality of target fusion images according to the synthesized images.
17. The apparatus of claim 11, wherein the simulation module is to:
extracting driving parameter information of the plurality of vehicles from the historical traffic flow information to obtain the driving parameter information of the plurality of vehicles;
and performing driving simulation on the plurality of vehicles according to the driving parameter information to obtain the simulated traffic flow information corresponding to the plurality of vehicles.
18. The apparatus of claim 17, wherein the driving parameter information includes at least one of the following parameter information:
position information, direction information, speed information, acceleration information, and travel lane information.
19. The apparatus of claim 11, wherein the testing module is to:
according to the target fusion images, performing obstacle perception test on the target vehicle to obtain a perception test result of the target vehicle;
according to the target fusion images, carrying out track planning test on the target vehicle to obtain a track planning test result of the target vehicle;
and generating the test result according to the perception test result and the track planning test result.
20. The apparatus of any of claims 11-19, wherein the apparatus further comprises:
the comparison module is used for comparing the test result with the labeling result to obtain a first test evaluation index and a second test evaluation index corresponding to the test result, wherein the first test evaluation index is used for representing the obstacle perception accuracy of the target vehicle, and the second test evaluation index is used for representing the track planning accuracy of the target vehicle;
and the generating module is used for generating a test report according to the first test evaluation index and the second test evaluation index.
21. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-10.
22. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-10.
23. A computer program product comprising a computer program, wherein the computer program realizes the method according to any one of claims 1-10 when executed by a processor.
CN202211113822.1A 2022-09-14 2022-09-14 Vehicle testing method and device, electronic equipment and storage medium Active CN115468778B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211113822.1A CN115468778B (en) 2022-09-14 2022-09-14 Vehicle testing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115468778A true CN115468778A (en) 2022-12-13
CN115468778B CN115468778B (en) 2023-08-15

Family

ID=84333890

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211113822.1A Active CN115468778B (en) 2022-09-14 2022-09-14 Vehicle testing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115468778B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109781431A (en) * 2018-12-07 2019-05-21 山东省科学院自动化研究所 Automatic Pilot test method and system based on mixed reality
CN110160804A (en) * 2019-05-31 2019-08-23 中国科学院深圳先进技术研究院 A kind of test method of automatic driving vehicle, apparatus and system
CN110263381A (en) * 2019-05-27 2019-09-20 南京航空航天大学 A kind of automatic driving vehicle test emulation scene generating method
CN112198859A (en) * 2020-09-07 2021-01-08 西安交通大学 Method, system and device for testing automatic driving vehicle in vehicle ring under mixed scene
WO2022033810A1 (en) * 2020-08-14 2022-02-17 Zf Friedrichshafen Ag Computer-implemented method and computer programme product for obtaining an environment scene representation for an automated driving system, computer-implemented method for learning an environment scene prediction for an automated driving system, and control device for an automated driving system
WO2022095023A1 (en) * 2020-11-09 2022-05-12 驭势(上海)汽车科技有限公司 Traffic stream information determination method and apparatus, electronic device and storage medium
CN114817072A (en) * 2022-05-31 2022-07-29 国汽智控(北京)科技有限公司 Vehicle testing method, device, equipment and storage medium based on virtual scene


Also Published As

Publication number Publication date
CN115468778B (en) 2023-08-15

Similar Documents

Publication Publication Date Title
US11783590B2 (en) Method, apparatus, device and medium for classifying driving scenario data
CN112965466B (en) Reduction test method, device, equipment and program product of automatic driving system
CN111079619B (en) Method and apparatus for detecting target object in image
JP2023055697A (en) Automatic driving test method and apparatus, electronic apparatus and storage medium
CN112001287A (en) Method and device for generating point cloud information of obstacle, electronic device and medium
CN113467875A (en) Training method, prediction method, device, electronic equipment and automatic driving vehicle
CN114186007A (en) High-precision map generation method and device, electronic equipment and storage medium
CN112699765A (en) Method and device for evaluating visual positioning algorithm, electronic equipment and storage medium
US20240262385A1 (en) Spatio-temporal pose/object database
CN114111813B (en) High-precision map element updating method and device, electronic equipment and storage medium
CN115575931A (en) Calibration method, calibration device, electronic equipment and storage medium
CN115082690B (en) Target recognition method, target recognition model training method and device
CN115357500A (en) Test method, device, equipment and medium for automatic driving system
CN115468778B (en) Vehicle testing method and device, electronic equipment and storage medium
CN115657494A (en) Virtual object simulation method, device, equipment and storage medium
CN116663329B (en) Automatic driving simulation test scene generation method, device, equipment and storage medium
CN113361379B (en) Method and device for generating target detection system and detecting target
CN116168366B (en) Point cloud data generation method, model training method, target detection method and device
CN116449807B (en) Simulation test method and system for automobile control system of Internet of things
CN113538516B (en) Target object tracking method and device based on memory information and electronic equipment
CN117668761A (en) Training method, device, equipment and storage medium for automatic driving model
CN117710456A (en) Training method and device for positioning and mapping model, electronic equipment and storage medium
CN117826631A (en) Automatic driving simulation test scene data generation method and device
CN115903542A (en) Interactive driving simulation system, method, equipment and storage medium
CN116778447A (en) Training method of target detection model, target detection method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant