CN115774680B - Version testing method, device and equipment of automatic driving software and storage medium - Google Patents


Info

Publication number
CN115774680B
Authority
CN
China
Prior art keywords
perception
automatic driving
data
driving software
index
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310059515.8A
Other languages
Chinese (zh)
Other versions
CN115774680A (en)
Inventor
张琼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiaomi Automobile Technology Co Ltd
Original Assignee
Xiaomi Automobile Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaomi Automobile Technology Co Ltd filed Critical Xiaomi Automobile Technology Co Ltd
Priority to CN202310059515.8A
Publication of CN115774680A
Application granted
Publication of CN115774680B


Abstract

The disclosure relates to a version testing method, apparatus, device, and storage medium for automatic driving software, wherein the method comprises the following steps: acquiring first perception data of a first scene and a first truth value of the first perception data; acquiring second perception data of the first scene and a second truth value of the second perception data; evaluating the perception model of the to-be-released version according to the first perception data and the first truth value to obtain a first evaluation index of the perception model; evaluating the automatic driving software according to the second perception data and the second truth value to obtain a second evaluation index of the automatic driving software; and acquiring a version test result of the automatic driving software according to the first evaluation index and the second evaluation index. The technical scheme of the disclosure combines evaluation and quality assurance at multiple levels, including the perception model and the automatic driving software, to comprehensively evaluate the overall performance of the automatic driving software, thereby improving the efficiency of real-vehicle road tests and simulation tests.

Description

Version testing method, device, equipment and storage medium of automatic driving software
Technical Field
The present disclosure relates to the field of automatic driving technologies, and in particular, to a method, an apparatus, a device, and a storage medium for testing a version of automatic driving software.
Background
To ensure the experience and safety of automatic driving, the automatic driving software of a vehicle needs to be fully tested. Such testing falls into three types: software testing, real-vehicle road testing, and simulation testing. Fully testing the automatic driving software before the real-vehicle test is a key factor in improving the efficiency of real-vehicle road testing, and a version of the automatic driving software that has undergone full software testing can also improve the efficiency of real-vehicle road tests and simulation tests.
In the related art, software testing of automatic driving software usually adopts a method that decouples the model from the software. For example, the precision rate is tested at the model end, while post-processing only evaluates indices such as the speed-measurement error and the distance-measurement error. Such a scheme cannot guarantee the delivery quality of the output of the automatic driving software to downstream modules.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides a version testing method and apparatus for automatic driving software, an electronic device, and a storage medium.
According to a first aspect of the embodiments of the present disclosure, a version testing method for automatic driving software is provided, including: acquiring first perception data of a first scene and a first truth value of the first perception data, wherein the first perception data is two-dimensional data; acquiring second perception data of the first scene and a second truth value of the second perception data, wherein the second perception data is three-dimensional data carrying time sequence information; evaluating the perception model of the to-be-released version according to the first perception data and the first truth value to obtain a first evaluation index of the perception model; evaluating the automatic driving software according to the second perception data and the second truth value to obtain a second evaluation index of the automatic driving software, wherein the automatic driving software is engineering software obtained by converting the perception model via a deployment script and adding post-processing; and acquiring a version test result of the automatic driving software according to the first evaluation index and the second evaluation index.
In one implementation, evaluating the perception model of the to-be-released version according to the first perception data and the first truth value to obtain a first evaluation index of the perception model includes: inputting the first perception data into the perception model of the to-be-released version to obtain a first perception prediction result output by the perception model; calculating a first recall rate and/or a first accuracy rate according to the first perception prediction result and the first truth value; and determining the first recall rate and/or the first accuracy rate as the first evaluation index of the perception model.
In one implementation, evaluating the automatic driving software according to the second perception data and the second truth value to obtain a second evaluation index of the automatic driving software includes: inputting the second perception data into the automatic driving software to obtain a second perception prediction result output by the automatic driving software; calculating a second recall rate and/or a second accuracy rate according to the second perception prediction result and the second truth value; obtaining a distance-measurement error index and/or a speed-measurement error index according to the second perception prediction result, the second truth value, and the time sequence information in the second perception data; and determining the second recall rate and/or the second accuracy rate, together with the distance-measurement error index and/or the speed-measurement error index, as the second evaluation index of the automatic driving software.
In one implementation, the method further comprises: determining whether the first evaluation index meets the release standard of the perception model; and, when the first evaluation index meets the release standard, converting the perception model that meets the release standard via the deployment script and adding post-processing to obtain the engineering software.
In one implementation, the method further comprises: determining the failing scenes and index items according to the version test result; and optimizing the automatic driving software according to the failing scenes and index items.
According to a second aspect of the embodiments of the present disclosure, a version testing apparatus for automatic driving software is provided, including: a first acquisition module, configured to acquire first perception data of a first scene and a first truth value of the first perception data, wherein the first perception data is two-dimensional data; a second acquisition module, configured to acquire second perception data of the first scene and a second truth value of the second perception data, wherein the second perception data is three-dimensional data carrying time sequence information; a first evaluation module, configured to evaluate the perception model of the to-be-released version according to the first perception data and the first truth value to obtain a first evaluation index of the perception model; a second evaluation module, configured to evaluate the automatic driving software according to the second perception data and the second truth value to obtain a second evaluation index of the automatic driving software, wherein the automatic driving software is engineering software obtained by converting the perception model via a deployment script and adding post-processing; and a third acquisition module, configured to acquire a version test result of the automatic driving software according to the first evaluation index and the second evaluation index.
In one implementation, the first evaluation module is specifically configured to: input the first perception data into the perception model of the to-be-released version to obtain a first perception prediction result output by the perception model; calculate a first recall rate and/or a first accuracy rate according to the first perception prediction result and the first truth value; and determine the first recall rate and/or the first accuracy rate as the first evaluation index of the perception model.
In one implementation, the second evaluation module is specifically configured to: input the second perception data into the automatic driving software to obtain a second perception prediction result output by the automatic driving software; calculate a second recall rate and/or a second accuracy rate according to the second perception prediction result and the second truth value; obtain a distance-measurement error index and/or a speed-measurement error index according to the second perception prediction result, the second truth value, and the time sequence information in the second perception data; and determine the second recall rate and/or the second accuracy rate, together with the distance-measurement error index and/or the speed-measurement error index, as the second evaluation index of the automatic driving software.
In one implementation, the apparatus further comprises: a first determination module, configured to determine whether the first evaluation index meets the release standard of the perception model; and a processing module, configured to, when the first evaluation index meets the release standard, convert the perception model that meets the release standard via the deployment script and add post-processing to obtain the engineering software.
In one implementation, the apparatus further comprises: a second determination module, configured to determine the failing scenes and index items according to the version test result; and an optimization module, configured to optimize the automatic driving software according to the failing scenes and index items.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium storing instructions that, when executed, cause the method according to the first aspect to be implemented.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, performs the steps of the method of the first aspect.
The technical scheme provided by the embodiments of the present disclosure can have the following beneficial effects: the first evaluation index of the perception model can be obtained based on the first perception data of the first scene and the first truth value of the first perception data, and the second evaluation index of the automatic driving software can be obtained based on the second perception data of the first scene and the second truth value of the second perception data, so that the version test result of the automatic driving software can be obtained according to the first evaluation index and the second evaluation index. By combining evaluation and quality assurance at multiple levels, including the perception model and the automatic driving software, the overall performance of the automatic driving software can be comprehensively evaluated, which improves the efficiency of real-vehicle road tests and simulation tests and supports the version release of the automatic driving software.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flowchart illustrating a version testing method for automatic driving software according to an exemplary embodiment.
FIG. 2 is a flowchart illustrating another version testing method for automatic driving software according to an exemplary embodiment.
FIG. 3 is a flowchart illustrating yet another version testing method for automatic driving software according to an exemplary embodiment.
FIG. 4 is a schematic diagram illustrating a version testing scheme for automatic driving software according to an exemplary embodiment.
FIG. 5 is a block diagram illustrating a version testing apparatus for automatic driving software according to an exemplary embodiment.
FIG. 6 is a block diagram illustrating another version testing apparatus for automatic driving software according to an exemplary embodiment.
FIG. 7 is a block diagram illustrating yet another version testing apparatus for automatic driving software according to an exemplary embodiment.
FIG. 8 is a schematic diagram illustrating an electronic device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings, in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention; rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
In the description of the present disclosure, "/" indicates an "or" relationship; for example, A/B may indicate A or B. "And/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The ordinal terms "first", "second", and so on in this disclosure are merely for convenience of description and are not intended to limit the scope of the embodiments of the disclosure, nor to indicate a sequential order.
FIG. 1 is a flow diagram illustrating a method for version testing of autopilot software according to an exemplary embodiment. The method may be performed by an electronic device, which may be a server, as one example. As shown in fig. 1, the method may include, but is not limited to, the following steps.
Step S101: first perception data of a first scene and a first true value of the first perception data are obtained.
In an embodiment of the present disclosure, the first perception data is two-dimensional data.
For example, when the vehicle is in the first scene, the vehicle's environment perception sensor collects environment perception data, and a truth value corresponding to that data is obtained. The electronic device acquires, from the vehicle side, the real-time environment perception data collected by the environment perception sensor as the first perception data, and acquires the corresponding truth value, which represents the real situation, as the first truth value of the first perception data.
It should be noted that, in the embodiment of the present disclosure, the first scene may be a road scene and/or a weather scene.
As an example, when the first scene is a road scene, it may be any road environment such as a tunnel, an expressway, a curve, or an intersection.
As another example, when the first scene is a weather scene, it may be any weather condition such as a rainy, snowy, or cloudy day.
As yet another example, the first scene may be a scene that is a combination of a road scene and a weather scene. For example, the first scenario may be a rainy highway.
Step S102: second perception data of the first scene and a second true value of the second perception data are obtained.
In the embodiment of the disclosure, the second perception data is three-dimensional data carrying time sequence information.
For example, when the vehicle is in the first scene, the vehicle's environment perception sensor collects environment perception data together with time sequence information, and a truth value corresponding to that data is obtained. The electronic device acquires, from the vehicle side, the real-time environment perception data and time sequence information collected by the environment perception sensor as the second perception data, and acquires the corresponding truth value, which represents the real situation, as the second truth value of the second perception data.
Step S103: evaluating the perception model of the to-be-released version according to the first perception data and the first truth value to obtain a first evaluation index of the perception model.
For example, the to-be-released version of the perception model is deployed on the electronic device, and the electronic device can obtain, according to the perception model, the first perception data, and the first truth value, the accuracy of the model's prediction relative to the first truth value as the first evaluation index of the perception model.
It should be noted that, in the embodiment of the present disclosure, the specific model category of the perception model corresponds to the specific type of the first perception data.
As an example, taking the first perception data as the image data acquired by the camera as an example, the perception model may be an image recognition model.
As another example, taking the first perception data as laser point cloud data acquired by a laser radar as an example, the perception model may be a laser point cloud model.
As yet another example, taking the first perception data as point cloud data acquired by a millimeter-wave radar as an example, the perception model may be a point cloud model.
As yet another example, taking the first perception data as ultrasonic data acquired by an ultrasonic radar as an example, the perception model may be an ultrasound-based distance perception model.
Step S104: and evaluating the automatic driving software according to the second perception data and the second truth value to obtain a second evaluation index of the automatic driving software.
In the embodiment of the disclosure, the automatic driving software is engineering software obtained by converting the perception model via a deployment script and adding post-processing.
For example, the automatic driving software is deployed on the electronic device, and the electronic device can obtain, according to the automatic driving software, the second perception data, and the second truth value, the accuracy of the software's prediction relative to the second truth value as the second evaluation index of the automatic driving software.
Step S105: and acquiring a version test result of the automatic driving software according to the first evaluation index and the second evaluation index.
For example, the electronic device determines that the version test result of the automatic driving software passes in response to the first evaluation index being greater than or equal to a first evaluation threshold and the second evaluation index being greater than a second evaluation threshold; otherwise, determining that the version test result of the automatic driving software is failed.
In the embodiment of the disclosure, the first evaluation threshold and the second evaluation threshold are preset thresholds used for judging the version test result of the automatic driving software.
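The pass/fail decision described above can be sketched as follows. The concrete threshold values and function name are illustrative assumptions, since the disclosure only states that both thresholds are preset:

```python
def version_test_result(first_index: float, second_index: float,
                        first_threshold: float = 0.90,
                        second_threshold: float = 0.85) -> bool:
    """Combine the two evaluation indices into a version test result.

    Mirrors the rule above: the test passes only when the first evaluation
    index is greater than or equal to the first evaluation threshold AND the
    second evaluation index is strictly greater than the second threshold.
    """
    return first_index >= first_threshold and second_index > second_threshold
```

Note the asymmetry taken directly from the text: the first comparison is inclusive while the second is strict.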
By implementing the embodiment of the disclosure, a first evaluation index of the perception model can be obtained based on the first perception data of the first scene and the first truth value of the first perception data, and a second evaluation index of the automatic driving software can be obtained based on the second perception data of the first scene and the second truth value of the second perception data, so that a version test result of the automatic driving software can be obtained according to the first evaluation index and the second evaluation index. By combining evaluation and quality assurance at multiple levels, including the perception model and the automatic driving software, the overall performance of the automatic driving software can be comprehensively evaluated, which improves the efficiency of real-vehicle road tests and simulation tests.
In one implementation, a first recall rate and/or a first accuracy rate of the to-be-released version of the perception model may be obtained according to the first perception data and the first truth value and used as the first evaluation index of the perception model. As an example, referring to FIG. 2, FIG. 2 is a flowchart illustrating another version testing method for automatic driving software according to an exemplary embodiment. The method may be performed by an electronic device, which may be, as one example, a server. As shown in FIG. 2, the method may include, but is not limited to, the following steps.
Step S201: first perception data of a first scene and a first true value of the first perception data are obtained.
In the embodiment of the present disclosure, step S201 may be implemented by adopting any one of the embodiments of the present disclosure, and this is not limited in the embodiment of the present disclosure and is not described again.
Step S202: second perception data of the first scene and a second truth value of the second perception data are obtained.
In the embodiment of the present disclosure, step S202 may be implemented by any one of the embodiments of the present disclosure, which is not limited in this disclosure and is not described again.
Step S203: inputting the first perception data into the perception model of the to-be-released version to obtain a first perception prediction result output by the perception model.
For example, the to-be-released version of the perception model is deployed on the electronic device; the electronic device inputs the first perception data into the perception model, and the perception model performs perception prediction on the targets actually present in the first scene (for example, vehicles or pedestrians) based on the first perception data, thereby obtaining the first perception prediction result output by the perception model.
Step S204: and calculating a first recall rate and/or a first accuracy rate according to the first perception prediction result and the first truth value.
As an example, the electronic device obtains the number of predicted targets from the first perception prediction result, obtains the number of real targets actually present in the first scene from the first truth value, and calculates the percentage of the predicted targets relative to the real targets as the first recall rate.
As another example, the electronic device obtains the number of predicted targets from the first perception prediction result, obtains, from the first perception prediction result and the first truth value, the number of predicted targets that actually exist, and takes the percentage of those truly existing predicted targets relative to all predicted targets as the first accuracy rate.
As yet another example, the electronic device computes both: the percentage of predicted targets relative to the real targets in the first scene as the first recall rate, and the percentage of truly existing predicted targets relative to all predicted targets as the first accuracy rate.
Step S205: and determining the first recall rate and/or the first accuracy rate as a first evaluation index of the perception model.
As an example, the electronic device determines a first recall rate as a first evaluation index of the perception model.
As another example, the electronic device determines the first accuracy as a first evaluation indicator of the perceptual model.
As yet another example, the electronic device determines the first recall rate and the first accuracy rate as a first evaluation indicator of the perceptual model.
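A minimal sketch of the computation described above, using the patent's own definitions (recall rate as the percentage of predicted targets relative to the real targets in the scene; accuracy rate as the percentage of predicted targets that truly exist). The function name and the example counts are illustrative assumptions:

```python
def first_evaluation_index(num_real_targets: int,
                           num_predicted_targets: int,
                           num_true_predicted_targets: int) -> tuple:
    """Compute the first recall rate and first accuracy rate, as percentages.

    num_real_targets: targets actually present in the first scene (truth value)
    num_predicted_targets: targets output by the perception model
    num_true_predicted_targets: predicted targets that actually exist
    """
    # Recall rate as defined above: predicted targets as a percentage
    # of the real targets in the first scene.
    recall = 100.0 * num_predicted_targets / num_real_targets
    # Accuracy rate: truly existing predicted targets as a percentage
    # of all predicted targets.
    accuracy = 100.0 * num_true_predicted_targets / num_predicted_targets
    return recall, accuracy
```

The same formulas apply to the second recall rate and second accuracy rate computed from the second perception prediction result and the second truth value.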
Step S206: and evaluating the automatic driving software according to the second perception data and the second truth value to obtain a second evaluation index of the automatic driving software.
In the embodiment of the present disclosure, step S206 may be implemented by using any one of the embodiments of the present disclosure, and this is not limited in the embodiment of the present disclosure and is not described again.
Step S207: and acquiring a version test result of the automatic driving software according to the first evaluation index and the second evaluation index.
In the embodiment of the present disclosure, step S207 may be implemented by adopting any one of the embodiments of the present disclosure, and this is not limited in the embodiment of the present disclosure and is not described again.
By implementing the embodiment of the disclosure, a first recall rate and/or a first accuracy rate of the to-be-released version of the perception model can be obtained according to the first perception data and the first truth value and used as the first evaluation index of the perception model; a second evaluation index of the automatic driving software is obtained based on the second perception data of the first scene and the second truth value of the second perception data; and the version test result of the automatic driving software is obtained according to the first evaluation index and the second evaluation index. By combining evaluation and quality assurance at multiple levels, including the perception model and the automatic driving software, the overall performance of the automatic driving software can be comprehensively evaluated, which improves the efficiency of real-vehicle road tests and simulation tests.
In one implementation, a second recall rate and/or a second accuracy rate of the automatic driving software, together with a distance-measurement error index and/or a speed-measurement error index, may be obtained according to the second perception data and the second truth value and used as the second evaluation index of the automatic driving software. Referring to FIG. 3 as an example, FIG. 3 is a flowchart illustrating yet another version testing method for automatic driving software according to an exemplary embodiment. The method may be performed by an electronic device, which may be, as one example, a server. As shown in FIG. 3, the method may include, but is not limited to, the following steps.
Step S301: first perception data of a first scene and a first true value of the first perception data are acquired.
In the embodiment of the present disclosure, step S301 may be implemented by adopting any one of the embodiments of the present disclosure, and this is not limited in the embodiment of the present disclosure and is not described again.
Step S302: second perception data of the first scene and a second truth value of the second perception data are obtained.
In the embodiment of the present disclosure, step S302 may be implemented by using any one of the embodiments of the present disclosure, and this is not limited in the embodiment of the present disclosure and is not described again.
Step S303: evaluating the perception model of the to-be-released version according to the first perception data and the first truth value to obtain a first evaluation index of the perception model.
In the embodiment of the present disclosure, step S303 may be implemented by using any one of the embodiments of the present disclosure, and this is not limited in the embodiment of the present disclosure and is not described again.
Step S304: inputting the second perception data into the automatic driving software to obtain a second perception prediction result output by the automatic driving software.
For example, the automatic driving software is deployed on the electronic device; the electronic device inputs the second perception data into the automatic driving software, and the automatic driving software performs perception prediction based on the second perception data on the targets actually present in the first scene and on target-related information (for example, the distance to or speed of each target relative to the ego vehicle), thereby obtaining the second perception prediction result output by the automatic driving software.
Step S305: and calculating a second recall rate and/or a second accuracy rate according to the second perception prediction result and the second true value.
As an example, the electronic device obtains the number of predicted targets from the second perception prediction result, obtains the number of real targets actually present in the first scene from the second truth value, and calculates the percentage of the predicted targets relative to the real targets as the second recall rate.
As another example, the electronic device obtains the number of predicted targets from the second perception prediction result, obtains, from the second perception prediction result and the second truth value, the number of predicted targets that actually exist, and takes the percentage of those truly existing predicted targets relative to all predicted targets as the second accuracy rate.
As yet another example, the electronic device computes both: the percentage of predicted targets relative to the real targets in the first scene as the second recall rate, and the percentage of truly existing predicted targets relative to all predicted targets as the second accuracy rate.
Step S306: and acquiring a distance measurement error index and/or a speed measurement error index according to the second perception prediction result, the second true value and the time sequence information in the perception data.
As an example, the electronic device obtains, according to the second perception prediction result, the second true value, and the time sequence information in the perception data, a predicted distance value and a true distance value of a target that actually exists at the same moment, and uses the difference between the predicted distance value and the true distance value as the distance measurement error index.
As another example, the electronic device obtains, according to the second perception prediction result, the second true value, and the time sequence information in the perception data, a predicted speed value and a true speed value of a target that actually exists at the same moment, and uses the difference between the predicted speed value and the true speed value as the speed measurement error index.
As another example, the electronic device obtains, according to the second perception prediction result, the second true value, and the time sequence information in the perception data, a predicted distance value and a true distance value of a target that actually exists at the same moment, and uses the difference between them as the distance measurement error index; it also obtains a predicted speed value and a true speed value of the target at the same moment, and uses the difference between them as the speed measurement error index.
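The error indexes above pair predicted and true values at the same moment using the time sequence information. The following is a hedged sketch: the timestamp-keyed dictionaries and the averaging of absolute differences over moments are assumptions for illustration, since the patent only specifies taking the difference at the same moment.

```python
def mean_abs_error(predicted: dict, truth: dict) -> float:
    # align predicted and true values at the same moments via the timing information,
    # then average the absolute per-moment differences into one error index
    common_times = sorted(set(predicted) & set(truth))
    return sum(abs(predicted[t] - truth[t]) for t in common_times) / len(common_times)

# predicted vs. true distance (metres) to the same target, keyed by timestamp (seconds)
pred_dist = {0.1: 25.3, 0.2: 24.8, 0.3: 24.2}
true_dist = {0.1: 25.0, 0.2: 25.1, 0.3: 24.2}
ranging_error_index = mean_abs_error(pred_dist, true_dist)
```

The same helper applies unchanged to speed values, yielding the speed measurement error index.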
Step S307: and determining the second recall rate and/or the second accuracy rate, the distance measurement error index and/or the speed measurement error index as a second evaluation index of the automatic driving software.
As an example, the electronic device determines the second recall rate and the distance measurement error index as the second evaluation index of the automatic driving software.
As another example, the electronic device determines the second recall rate and the speed measurement error index as the second evaluation index of the automatic driving software.
As yet another example, the electronic device determines the second recall rate, the distance measurement error index, and the speed measurement error index as the second evaluation index of the automatic driving software.
As yet another example, the electronic device determines the second accuracy rate and the distance measurement error index as the second evaluation index of the automatic driving software.
As another example, the electronic device determines the second accuracy rate and the speed measurement error index as the second evaluation index of the automatic driving software.
As yet another example, the electronic device determines the second accuracy rate, the distance measurement error index, and the speed measurement error index as the second evaluation index of the automatic driving software.
As yet another example, the electronic device determines the second recall rate, the second accuracy rate, and the distance measurement error index as the second evaluation index of the automatic driving software.
As another example, the electronic device determines the second recall rate, the second accuracy rate, and the speed measurement error index as the second evaluation index of the automatic driving software.
As another example, the electronic device determines the second recall rate, the second accuracy rate, the distance measurement error index, and the speed measurement error index as the second evaluation index of the automatic driving software.
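The combinations enumerated above amount to selecting a non-empty subset of the four indicators. A sketch of assembling the second evaluation index from whichever indicators were computed (the function and key names are hypothetical):

```python
def build_second_evaluation_index(recall=None, accuracy=None,
                                  ranging_error=None, speed_error=None) -> dict:
    # keep whichever of the four indicators were computed for this evaluation run
    candidates = {
        "second_recall_rate": recall,
        "second_accuracy_rate": accuracy,
        "ranging_error_index": ranging_error,
        "speed_error_index": speed_error,
    }
    index = {name: value for name, value in candidates.items() if value is not None}
    if not index:
        raise ValueError("at least one indicator must be provided")
    return index

# e.g. the combination of the second recall rate and the distance measurement error index
idx = build_second_evaluation_index(recall=90.0, ranging_error=0.2)
```

Keeping the index as a named mapping lets the version test result report each indicator separately rather than as a single opaque score.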
Step S308: and acquiring a version test result of the automatic driving software according to the first evaluation index and the second evaluation index.
In the embodiment of the present disclosure, step S308 may be implemented by adopting any one of the embodiments of the present disclosure, and this is not limited in the embodiment of the present disclosure and is not described again.
By implementing the embodiment of the disclosure, a first evaluation index of the perception model can be obtained based on the first perception data of the first scene and the first true value of the first perception data. According to the second perception data and the second true value, a second recall rate and/or a second accuracy rate of the automatic driving software can be calculated, and a distance measurement error index and/or a speed measurement error index of the automatic driving software can be obtained; these serve as the second evaluation index of the automatic driving software. A version test result of the automatic driving software is then obtained according to the first evaluation index and the second evaluation index. By combining evaluation and quality assurance of multiple aspects, such as the perception model and the automatic driving software, the comprehensive performance of the automatic driving software can be evaluated comprehensively, thereby improving the efficiency of real-vehicle road tests and simulation tests.
In some embodiments of the present disclosure, the version testing method of the automatic driving software may further include: determining whether the first evaluation index meets the release standard of the perception model; and, under the condition that the first evaluation index meets the release standard, performing deployment script conversion on the perception model meeting the release standard and adding post-processing to obtain engineering software.
For example, in response to the first evaluation index being larger than the corresponding evaluation index of the previous version of the automatic driving software, the electronic device determines that the first evaluation index meets the release standard of the perception model; the electronic device then performs deployment script conversion on the perception model meeting the release standard and adds post-processing to obtain engineering software.
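Following the example above, the release-standard check can be sketched as a per-indicator comparison of the candidate perception model against the previous software version (the indicator names and the strict-improvement rule are assumptions for illustration):

```python
def meets_release_standard(current: dict, previous: dict) -> bool:
    # per the example above, each indicator of the candidate perception model must
    # exceed the corresponding indicator of the previous software version
    return all(current[name] > previous.get(name, float("-inf")) for name in current)

previous_version = {"first_recall_rate": 88.0, "first_accuracy_rate": 91.0}
candidate_model = {"first_recall_rate": 90.0, "first_accuracy_rate": 93.5}
ready_to_release = meets_release_standard(candidate_model, previous_version)
```

Only when this gate passes would the deployment script conversion and post-processing step be triggered.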
By implementing the embodiment of the disclosure, whether the perception model of the to-be-released version meets the release standard can be judged based on the first evaluation index, thereby providing support for releasing the version of the automatic driving software.
In some embodiments of the present disclosure, the version testing method of the automatic driving software may further include: determining, according to the version test result, scenes and index items whose performance fails; and optimizing the automatic driving software according to the failing scenes and index items.
As an example, according to the version test result, the electronic device determines that the first scene with failing performance is an intersection in rainy weather and that the failing index item is the distance measurement error index, so that the automatic driving software is algorithmically optimized for that scene and index item.
By implementing the embodiment of the disclosure, scenes and index items whose performance fails can be determined according to the version test result, so that the automatic driving software can be optimized in a targeted manner based on those scenes and index items to improve the performance of the algorithm.
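A sketch of how failing scenes and index items might be extracted from a version test result (the data layout and thresholds are hypothetical; higher-is-better scores are assumed for illustration, whereas an error-type index would invert the comparison):

```python
def failing_items(test_result: dict, pass_thresholds: dict) -> list:
    # collect (scene, indicator) pairs whose score misses the passing threshold
    failures = []
    for scene, indicators in test_result.items():
        for name, score in indicators.items():
            if score < pass_thresholds.get(name, 0.0):
                failures.append((scene, name))
    return failures

result = {
    "rainy intersection": {"second_recall_rate": 72.0, "second_accuracy_rate": 95.0},
    "sunny highway":      {"second_recall_rate": 96.0, "second_accuracy_rate": 97.0},
}
thresholds = {"second_recall_rate": 90.0, "second_accuracy_rate": 90.0}
to_optimize = failing_items(result, thresholds)  # failing (scene, indicator) pairs
```

The returned pairs point the optimization effort directly at the weak scene/indicator combinations.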
Referring to fig. 4, fig. 4 is a schematic diagram illustrating a version testing scheme for automatic driving software according to an exemplary embodiment. As shown in fig. 4, in this scheme, a 2D (two-dimensional) scene library is first constructed based on a test image database covering a plurality of different scenes together with its truth values, and a record scene library is constructed based on a record database containing time sequences under the various scenes together with its truth values. The 2D scene library is then input into each perception model to be evaluated, obtaining per-task, per-scene indexes for the different tasks (i.e., perception based on data obtained by different sensors, such as a camera, a laser radar, a millimeter-wave radar, or an ultrasonic radar) under the different scenes. Deployment script conversion is performed on the perception model under evaluation and post-processing is added to obtain engineering software; the record scene library is input into this engineering software to obtain per-task, per-scene accuracy indexes and per-task, per-scene distance-measurement and speed-measurement indexes. Finally, the version test result of the automatic driving software is obtained based on the per-task scene indexes, the per-task scene accuracy indexes, and the per-task scene distance-measurement and speed-measurement indexes.
Referring to fig. 5, fig. 5 is a block diagram of a version testing apparatus of automatic driving software according to an exemplary embodiment. As shown in fig. 5, the apparatus 500 includes: a first obtaining module 501, configured to obtain first perception data of a first scene and a first true value of the first perception data, wherein the first perception data is two-dimensional data; a second obtaining module 502, configured to obtain second perception data of the first scene and a second true value of the second perception data, wherein the second perception data is three-dimensional data carrying time sequence information; a first evaluating module 503, configured to evaluate the perception model of the to-be-released version according to the first perception data and the first true value, so as to obtain a first evaluation index of the perception model; a second evaluating module 504, configured to evaluate the automatic driving software according to the second perception data and the second true value to obtain a second evaluation index of the automatic driving software, wherein the automatic driving software is engineering software obtained by performing deployment script conversion on the perception model and adding post-processing; and a third obtaining module 505, configured to obtain a version test result of the automatic driving software according to the first evaluation index and the second evaluation index.
In one implementation, the first evaluating module 503 is specifically configured to: input the first perception data into the perception model of the to-be-released version to obtain a first perception prediction result output by the perception model; calculate a first recall rate and/or a first accuracy rate according to the first perception prediction result and the first true value; and determine the first recall rate and/or the first accuracy rate as the first evaluation index of the perception model.
In one implementation, the second evaluating module 504 is specifically configured to: input the second perception data into the automatic driving software to obtain a second perception prediction result output by the automatic driving software; calculate a second recall rate and/or a second accuracy rate according to the second perception prediction result and the second true value; obtain a distance measurement error index and/or a speed measurement error index according to the second perception prediction result, the second true value, and the time sequence information in the perception data; and determine the second recall rate and/or the second accuracy rate, together with the distance measurement error index and/or the speed measurement error index, as the second evaluation index of the automatic driving software.
In one implementation, the apparatus further includes a first determining module and a processing module. As an example, referring to fig. 6, fig. 6 is a block diagram of another version testing apparatus of automatic driving software according to an exemplary embodiment. As shown in fig. 6, the apparatus 600 further comprises: a first determining module 606, configured to determine whether the first evaluation index meets the release standard of the perception model; and a processing module 607, configured to, under the condition that the first evaluation index meets the release standard, perform deployment script conversion on the perception model meeting the release standard and add post-processing to obtain engineering software. The modules 601 to 605 in fig. 6 have the same structures and functions as the modules 501 to 505 in fig. 5.
In one implementation, the apparatus further includes a second determining module and an optimizing module. As an example, referring to fig. 7, fig. 7 is a block diagram of yet another version testing apparatus of automatic driving software according to an exemplary embodiment. As shown in fig. 7, the apparatus 700 further includes: a second determining module 706, configured to determine, according to the version test result, scenes and index items whose performance fails; and an optimizing module 707, configured to optimize the automatic driving software according to the failing scenes and index items. The modules 701 to 705 in fig. 7 have the same structures and functions as the modules 501 to 505 in fig. 5.
Through the apparatus of the embodiment of the disclosure, a first evaluation index of the perception model can be obtained based on the first perception data of the first scene and the first true value of the first perception data, and a second evaluation index of the automatic driving software can be obtained based on the second perception data of the first scene and the second true value of the second perception data, so that a version test result of the automatic driving software is obtained according to the first evaluation index and the second evaluation index. By combining evaluation and quality assurance of multiple aspects, such as the perception model and the automatic driving software, the comprehensive performance of the automatic driving software can be evaluated comprehensively, thereby improving the efficiency of real-vehicle road tests and simulation tests.
With regard to the apparatus in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be described in detail here.
Referring to fig. 8, fig. 8 is a schematic diagram of an electronic device according to an exemplary embodiment. For example, the electronic device 800 can be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a gaming console, a tablet device, a medical device, an exercise device, a personal digital assistant, a wearable device, and the like.
Referring to fig. 8, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communications component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a touch-sensitive display screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the touch display screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800, the relative positioning of components, such as a display and keypad of the electronic device 800, the sensor assembly 814 may also detect a change in the position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. Sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object in the absence of any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the methods described in any of the above embodiments.
The present disclosure also provides a readable storage medium having stored thereon instructions which, when executed by a computer, implement the functionality of any of the above-described method embodiments.
The present disclosure also provides a computer program product which, when executed by a computer, implements the functionality of any of the above-described method embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer programs. The procedures or functions according to the embodiments of the present disclosure are wholly or partially generated when the computer program is loaded and executed on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer program may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another computer readable storage medium, for example, the computer program may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center through a wired (e.g., coaxial cable, fiber optic, digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.) manner. The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a Digital Versatile Disk (DVD)), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
Those of ordinary skill in the art will understand that the ordinal terms "first", "second", and the like referred to in this disclosure are used only for convenience of description and distinction; they neither limit the scope of the embodiments of the disclosure nor indicate a sequential order.
In the present disclosure, "at least one" may also be described as "one or more", and "a plurality" may be two, three, four, or more, which is not limited by the present disclosure. In the embodiments of the present disclosure, technical features distinguished by "first", "second", "third", "A", "B", "C", "D", and the like carry no sequential or magnitude order.
Predefinition in this disclosure may be understood as defining, predefining, storing, pre-negotiating, pre-configuring, curing, or pre-firing.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The above description is only for the specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily think of the changes or substitutions within the technical scope of the present disclosure, and shall cover the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (12)

1. A version test method of automatic driving software is characterized by comprising the following steps:
acquiring first sensing data of a first scene and a first true value of the first sensing data; wherein the first perception data is two-dimensional data;
acquiring second sensing data of the first scene and a second true value of the second sensing data; the second sensing data is three-dimensional data carrying time sequence information;
evaluating the perception model of the to-be-released version according to the first perception data and the first true value to obtain a first evaluation index of the perception model;
evaluating the automatic driving software according to the second perception data and the second truth value to obtain a second evaluation index of the automatic driving software; the automatic driving software is engineering software which is added with post-processing after the deployment script of the perception model is converted;
and acquiring a version test result of the automatic driving software according to the first evaluation index and the second evaluation index.
2. The method according to claim 1, wherein evaluating the perception model of the to-be-released version according to the first perception data and the first true value to obtain a first evaluation index of the perception model comprises:
inputting the first perception data into the perception model of the to-be-released version to obtain a first perception prediction result output by the perception model;
calculating a first recall rate and/or a first accuracy rate according to the first perception prediction result and the first true value;
and determining the first recall rate and/or the first accuracy rate as a first evaluation index of the perception model.
3. The method according to claim 1, wherein evaluating the automatic driving software according to the second perception data and the second true value to obtain a second evaluation index of the automatic driving software comprises:
inputting the second perception data into the automatic driving software to obtain a second perception prediction result output by the automatic driving software;
calculating a second recall rate and/or a second accuracy rate according to the second perception prediction result and the second true value;
obtaining a distance measurement error index and/or a speed measurement error index according to the second perception prediction result, the second true value and the time sequence information in the perception data;
and determining the second recall rate and/or the second accuracy rate, the distance measurement error index and/or the speed measurement error index as a second evaluation index of the automatic driving software.
4. The method of claim 1, further comprising:
determining whether the first evaluation index meets a release standard of the perception model;
and under the condition that the first evaluation index meets the release standard, performing deployment script conversion on the perception model meeting the release standard and adding post-processing to obtain engineering software.
5. The method of any of claims 1 to 4, further comprising:
determining, according to the version test result, scenes and index items whose performance fails;
and optimizing the automatic driving software according to the failing scenes and index items.
6. A version test device of automatic driving software is characterized by comprising:
the device comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring first perception data of a first scene and a first true value of the first perception data; wherein the first perception data is two-dimensional data;
a second obtaining module, configured to obtain second sensing data of the first scene and a second true value of the second sensing data; the second sensing data is three-dimensional data carrying time sequence information;
the first evaluating module is used for evaluating the perception model of the to-be-released version according to the first perception data and the first true value to obtain a first evaluation index of the perception model;
the second evaluation module is used for evaluating the automatic driving software according to the second perception data and the second truth value to obtain a second evaluation index of the automatic driving software; the automatic driving software is engineering software which is added with post-processing after the deployment script of the perception model is converted;
and the third obtaining module is used for obtaining a version test result of the automatic driving software according to the first evaluating index and the second evaluating index.
7. The apparatus according to claim 6, wherein the first evaluation module is specifically configured to:
inputting the first perception data into the perception model of the to-be-released version to obtain a first perception prediction result output by the perception model;
calculating a first recall rate and/or a first accuracy rate according to the first perception prediction result and the first true value;
and determining the first recall rate and/or the first accuracy rate as a first evaluation index of the perception model.
8. The apparatus according to claim 6, wherein the second evaluation module is specifically configured to:
inputting the second perception data into the automatic driving software to obtain a second perception prediction result output by the automatic driving software;
calculating a second recall rate and/or a second accuracy rate according to the second perception prediction result and the second true value;
obtaining a distance measurement error index and/or a speed measurement error index according to the second perception prediction result, the second true value and the time sequence information in the perception data;
and determining the second recall rate and/or the second accuracy rate, the distance measurement error index and/or the speed measurement error index as a second evaluation index of the automatic driving software.
9. The apparatus of claim 6, further comprising:
the first determining module is used for determining whether the first evaluation index meets a release standard of the perception model;
and the processing module is used for, under the condition that the first evaluation index meets the release standard, performing deployment script conversion on the perception model meeting the release standard and adding post-processing to obtain engineering software.
10. The apparatus of any of claims 6 to 9, further comprising:
the second determining module is used for determining, according to the version test result, scenes and index items whose performance fails;
and the optimizing module is used for optimizing the automatic driving software according to the failing scenes and index items.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 5.
12. A computer-readable storage medium storing instructions that, when executed, cause the method of any of claims 1-5 to be implemented.
CN202310059515.8A 2023-01-16 2023-01-16 Version testing method, device and equipment of automatic driving software and storage medium Active CN115774680B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310059515.8A CN115774680B (en) 2023-01-16 2023-01-16 Version testing method, device and equipment of automatic driving software and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310059515.8A CN115774680B (en) 2023-01-16 2023-01-16 Version testing method, device and equipment of automatic driving software and storage medium

Publications (2)

Publication Number Publication Date
CN115774680A CN115774680A (en) 2023-03-10
CN115774680B true CN115774680B (en) 2023-04-11

Family

ID=85393436

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310059515.8A Active CN115774680B (en) 2023-01-16 2023-01-16 Version testing method, device and equipment of automatic driving software and storage medium

Country Status (1)

Country Link
CN (1) CN115774680B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116030551B (en) * 2023-03-29 2023-06-20 小米汽车科技有限公司 Method, device, equipment and storage medium for testing vehicle autopilot software
CN116303103B (en) * 2023-05-19 2023-08-15 小米汽车科技有限公司 Evaluation set generation method, device and equipment of automatic driving scene library

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110287832A (en) * 2019-06-13 2019-09-27 北京百度网讯科技有限公司 High-Speed Automatic Driving Scene barrier perception evaluating method and device
CN111983935A (en) * 2020-08-19 2020-11-24 北京京东叁佰陆拾度电子商务有限公司 Performance evaluation method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112525551B (en) * 2020-12-10 2023-08-29 北京百度网讯科技有限公司 Drive test method, device, equipment and storage medium for automatic driving vehicle

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110287832A (en) * 2019-06-13 2019-09-27 北京百度网讯科技有限公司 High-Speed Automatic Driving Scene barrier perception evaluating method and device
CN111983935A (en) * 2020-08-19 2020-11-24 北京京东叁佰陆拾度电子商务有限公司 Performance evaluation method and device

Also Published As

Publication number Publication date
CN115774680A (en) 2023-03-10

Similar Documents

Publication Publication Date Title
CN115774680B (en) Version testing method, device and equipment of automatic driving software and storage medium
CN108632081B (en) Network situation evaluation method, device and storage medium
CN109668742B (en) Laser radar-based unmanned vehicle testing method and device
CN110782034A (en) Neural network training method, device and storage medium
EP3901827B1 (en) Image processing method and apparatus based on super network, intelligent device and computer storage medium
CN110941942A (en) Method, device and system for checking circuit schematic diagram
WO2022052451A1 (en) Image information processing method and apparatus, electronic device and storage medium
CN115907566B (en) Evaluation method and device for automatic driving perception detection capability and electronic equipment
CN111125388B (en) Method, device and equipment for detecting multimedia resources and storage medium
CN107508821B (en) Security level generation method, device and storage medium
CN112815962A (en) Calibration method and device for parameters of combined application sensor
CN112383661B (en) Mobile terminal automatic test method and device, electronic equipment and storage medium
CN115009301A (en) Trajectory prediction method, trajectory prediction device, electronic equipment and storage medium
CN110149310B (en) Flow intrusion detection method, device and storage medium
CN114245915A (en) Traffic information processing method, traffic information processing device, electronic equipment, server and storage medium
CN116030551B (en) Method, device, equipment and storage medium for testing vehicle autopilot software
CN113053454A (en) Classification method and device, electronic equipment and storage medium
CN114791728B (en) Electromagnetic compatibility testing method, device, equipment and medium based on vehicle-mounted controller
CN116500565B (en) Method, device and equipment for evaluating automatic driving perception detection capability
CN112733141B (en) Information processing method and device
CN116883496B (en) Coordinate reconstruction method and device for traffic element, electronic equipment and storage medium
CN116546502B (en) Relay attack detection method, device and storage medium
CN116737592A (en) Program testing method and device, electronic equipment and storage medium
CN115469292B (en) Environment sensing method and device, electronic equipment and storage medium
CN116310663A (en) Method, device, storage medium and program product for testing point cloud registration effect

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant