CN115907566A — Evaluation method and device for automatic driving perception detection capability, and electronic equipment

Publication number: CN115907566A (application CN202310127418.8A); granted as CN115907566B
Authority: CN (China)
Applicant and current assignee: Xiaomi Automobile Technology Co., Ltd.
Inventor: 张琼 (Zhang Qiong)
Original language: Chinese (zh)
Legal status: Active (granted)

Classifications

    • Y02T 10/40 — Engine management systems (under Y02T 10/10 internal combustion engine [ICE] based vehicles, Y02T 10/00 road transport of goods or passengers, Y02T climate change mitigation technologies related to transportation)

Landscapes

  • Traffic Control Systems (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The disclosure relates to an evaluation method and device for automatic driving perception detection capability, and to electronic equipment. The method comprises the following steps: acquiring real-time environment perception data collected by a vehicle environment perception sensor; obtaining a perception prediction result according to the real-time environment perception data and an automatic driving perception engineering simulation platform; acquiring real-time laser point cloud data collected by the vehicle environment laser radar, inputting the real-time laser point cloud data into a pre-trained laser point cloud model, and obtaining the detection result output by the laser point cloud model; obtaining an evaluation index of the automatic driving perception detection capability according to the perception prediction result, the detection result, the tracking prediction information, and the tracking information; and evaluating the automatic driving perception detection capability according to the evaluation index. The disclosed technical scheme evaluates the driving perception detection capability quickly and automatically, and the evaluation result objectively reflects the detection performance of the automatic driving model on an actual road.

Description

Evaluation method and device for automatic driving perception detection capability and electronic equipment
Technical Field
The present disclosure relates to the field of automatic driving technologies, and in particular, to a method and an apparatus for evaluating an automatic driving perception detection capability, and an electronic device.
Background
In the related art, the perception detection performance of an automatic driving perception model on key targets, such as vehicles and pedestrians, is generally evaluated based on precision and recall computed on image data. In practical applications, however, the downstream functions of automatic driving consume post-processed data, so the depth of a target affects the precision/recall judgment, a characteristic that model-level evaluation cannot capture; evaluating recall after post-processing is therefore very important for downstream data delivery. Moreover, as the distance to a target increases, its range error grows, so a target that is in fact detected may be counted as missed merely because of a large longitudinal error. These shortcomings cause the overall evaluation of the automatic driving perception detection capability to deviate substantially from the actual situation.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides a method and an apparatus for evaluating an automatic driving perception detection capability, an electronic device, and a storage medium.
According to a first aspect of the embodiments of the present disclosure, there is provided a method for evaluating an automatic driving perception detection capability, including: acquiring real-time environment perception data acquired by a vehicle environment perception sensor; obtaining a perception prediction result according to the real-time environment perception data and an automatic driving perception engineering simulation platform; the perceptual prediction result comprises tracking prediction information; acquiring real-time laser point cloud data acquired by the vehicle environment laser radar, inputting the real-time laser point cloud data into a pre-trained laser point cloud model, and acquiring a detection result output by the laser point cloud model; the detection result comprises tracking information; obtaining an evaluation index of the automatic driving perception detection capability according to the perception prediction result, the detection result, the tracking prediction information and the tracking information; and evaluating the automatic driving perception detection capability according to the evaluation index.
In one implementation, the obtaining a perception prediction result according to the real-time environment perception data and an automatic driving perception engineering simulation platform includes: inputting the real-time environment perception data into the automatic driving perception engineering simulation platform frame by frame to obtain a detection result of each frame of environment perception data; and tracking according to the detection result of each frame of environmental perception data to obtain a perception prediction result of the three-dimensional target.
In one implementation, the obtaining an evaluation index of the automatic driving perception detection capability according to the perception prediction result, the detection result, the tracking prediction information, and the tracking information includes: taking the detection result as a true value result; matching the three-dimensional target in the truth value result and the perception prediction result based on a track association algorithm to obtain an associated three-dimensional target; counting the associated three-dimensional target according to the perception prediction result, the tracking prediction information, the truth value result and the tracking information so as to calculate the recall rate and/or the accuracy rate of the three-dimensional target; and determining the recall rate and/or the accuracy rate of the three-dimensional target as an evaluation index of the automatic driving perception detection capability.
In an optional implementation manner, the counting the associated three-dimensional target according to the perception prediction result, the tracking prediction information, the truth result, and the tracking information to calculate a recall rate of the three-dimensional target includes: counting the number of detected associated three-dimensional targets according to the perception prediction result and the tracking prediction information; counting the number of all associated three-dimensional targets according to the truth value result and the tracking information; and calculating the recall rate of the three-dimensional targets according to the detected number of the associated three-dimensional targets and the number of all the associated three-dimensional targets.
Optionally, the performing statistics on the associated three-dimensional target according to the perception prediction result, the tracking prediction information, the truth result, and the tracking information to calculate the accuracy of the three-dimensional target includes: according to the perception prediction result and the tracking prediction information, counting a first number of the associated three-dimensional targets detected as true; according to the truth value result and the tracking information, counting a second number of the associated three-dimensional targets detected as true; and calculating the accuracy of the three-dimensional target according to the first quantity and the second quantity.
According to a second aspect of the embodiments of the present disclosure, there is provided an evaluation device for an automatic driving perception detection capability, including: the acquisition module is used for acquiring real-time environment perception data acquired by the vehicle environment perception sensor; the first processing module is used for obtaining a perception prediction result according to the real-time environment perception data and the automatic driving perception engineering simulation platform; the perceptual prediction result comprises tracking prediction information; the second processing module is used for acquiring real-time laser point cloud data acquired by the vehicle environment laser radar, inputting the real-time laser point cloud data into a pre-trained laser point cloud model and acquiring a detection result output by the laser point cloud model; the detection result comprises tracking information; the third processing module is used for acquiring an evaluation index of the automatic driving perception detection capability according to the perception prediction result, the detection result, the tracking prediction information and the tracking information; and the evaluation module is used for evaluating the automatic driving perception detection capability according to the evaluation index.
In one implementation, the first processing module is specifically configured to: inputting the real-time environmental perception data into the automatic driving perception engineering simulation platform frame by frame to obtain a detection result of each frame of environmental perception data; and tracking according to the detection result of each frame of environmental perception data to obtain a perception prediction result of the three-dimensional target.
In an implementation manner, the third processing module is specifically configured to: taking the detection result as a true value result; matching the three-dimensional target in the truth value result and the perception prediction result based on a track association algorithm to obtain an associated three-dimensional target; counting the associated three-dimensional target according to the perception prediction result, the tracking prediction information, the truth value result and the tracking information so as to calculate the recall rate and/or the accuracy rate of the three-dimensional target; and determining the recall rate and/or the accuracy rate of the three-dimensional target as an evaluation index of the automatic driving perception detection capability.
In an optional implementation manner, the third processing module is specifically configured to: counting the number of detected associated three-dimensional targets according to the perception prediction result and the tracking prediction information; counting the number of all associated three-dimensional targets according to the truth value result and the tracking information; and calculating the recall rate of the three-dimensional targets according to the detected number of the associated three-dimensional targets and the number of all associated three-dimensional targets.
Optionally, the third processing module is specifically configured to: according to the perception prediction result and the tracking prediction information, counting a first number of the associated three-dimensional targets detected as true; according to the truth value result and the tracking information, counting a second number of the associated three-dimensional targets detected as true; and calculating the accuracy of the three-dimensional target according to the first quantity and the second quantity.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium storing instructions that, when executed, cause the method according to the first aspect to be implemented.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, performs the steps of the method of the first aspect.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects: the method can obtain a perception prediction result according to real-time environment perception data acquired by a vehicle environment perception sensor and an automatic driving perception engineering simulation platform, and evaluate the automatic driving perception detection capability by combining a detection result of a laser point cloud model. The driving perception detection capability can be evaluated quickly and automatically, and the evaluation result can objectively reflect the detection performance of the automatic driving model on the actual road.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow chart illustrating a method for evaluating an automatic driving perception detection capability according to an exemplary embodiment.
FIG. 2 is a flow chart illustrating another method for evaluating an autodrive perception detection capability according to an exemplary embodiment.
FIG. 3 is a flow chart illustrating yet another method for evaluating an automatic driving perception detection capability according to an exemplary embodiment.
FIG. 4 is a flow chart illustrating evaluation of an automatic driving perception detection capability according to an exemplary embodiment.
FIG. 5 is a block diagram of an automatic driving perception detection capability evaluation device according to an exemplary embodiment.
FIG. 6 is a schematic diagram of an electronic device shown in accordance with an example embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention; rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
In the description of the present disclosure, "/" indicates an alternative; for example, A/B may mean A or B. "And/or" merely describes an association between objects and indicates that three relationships are possible; for example, "A and/or B" may mean: A alone, both A and B, or B alone. Ordinal terms such as "first" and "second" in this disclosure are used merely for convenience of description and neither limit the scope of the embodiments of the disclosure nor indicate a sequential order.
FIG. 1 is a flow chart illustrating a method for evaluating an automatic driving perception detection capability according to an exemplary embodiment. The method may be performed by an electronic device, which may be a server, as one example. As shown in fig. 1, the method may include, but is not limited to, the following steps.
Step S101: and acquiring real-time environment perception data acquired by the vehicle environment perception sensor.
For example, the electronic device obtains, from the vehicle side, the real-time environment perception data collected by the vehicle environment perception sensor.
As an example, suppose the perception detection capability of a camera is to be evaluated. The vehicle collects real-time environment perception data through its camera and sends the data to the electronic device, so that the electronic device obtains the real-time environment perception data collected by the vehicle environment perception sensor.
As another example, suppose the perception detection capability of a millimeter-wave radar is to be evaluated. The vehicle collects real-time environment perception data through its millimeter-wave radar and sends the data to the electronic device, so that the electronic device obtains the real-time environment perception data collected by the vehicle environment perception sensor.
Step S102: and obtaining a perception prediction result according to the real-time environment perception data and the automatic driving perception engineering simulation platform.
Wherein, in an embodiment of the present disclosure, the perceptual prediction result includes tracking prediction information.
For example, an automatic driving perception engineering simulation platform is deployed on the electronic device. The electronic device may input the real-time environment perception data into the platform for simulation and obtain the perception prediction result that the platform outputs for targets that may influence automatic driving (for example, vehicles and pedestrians in the same lane or adjacent lanes); the perception prediction result includes the tracking prediction information of each target (for example, its predicted position and predicted size).
Step S103: and acquiring real-time laser point cloud data acquired by a vehicle environment laser radar, inputting the real-time laser point cloud data into a pre-trained laser point cloud model, and acquiring a detection result output by the laser point cloud model.
In an embodiment of the present disclosure, the detection result includes tracking information.
For example, the electronic device obtains, from the vehicle side, the real-time laser point cloud data collected by the vehicle environment laser radar, inputs the data into a pre-trained laser point cloud model, and obtains the detection result that the model outputs for real targets that may affect automatic driving; the detection result includes the tracking information of each target (for example, its actual position and actual size).
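The two streams of results above can be pictured as parallel track records. Below is a minimal sketch of the data involved, assuming simple Python dataclasses; the type and field names are illustrative stand-ins, not structures defined by the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Track3D:
    """One tracked three-dimensional target across consecutive frames."""
    track_id: int
    # (timestamp, x, y, z) trajectory points in the vehicle frame
    trajectory: list[tuple[float, float, float, float]] = field(default_factory=list)
    size: tuple[float, float, float] = (0.0, 0.0, 0.0)  # length, width, height

# Perception prediction result: tracks output by the simulation platform,
# carrying tracking prediction information (predicted position, size, ...).
predicted_tracks: list[Track3D] = []

# Detection result: tracks output by the pre-trained laser point cloud model,
# carrying tracking information (actual position, size, ...); used as truth later.
lidar_tracks: list[Track3D] = []
```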
Step S104: and obtaining an evaluation index of the automatic driving perception detection capability according to the perception prediction result, the detection result, the tracking prediction information and the tracking information.
For example, the electronic device obtains the number of detected targets from the perception prediction result and the tracking prediction information, and obtains the number of actually existing targets from the detection result and the tracking information; from these two numbers it computes the detection rate of the automatic driving perception on the actually existing targets, which serves as the evaluation index of the automatic driving perception detection capability.
Step S105: and evaluating the automatic driving perception detection capability according to the evaluation index.
For example, the electronic equipment compares the evaluation index with an evaluation index threshold, and determines that the automatic driving perception detection capability is qualified in response to the evaluation index being greater than or equal to the evaluation index threshold; or determining that the automatic driving perception detection capability is unqualified in response to the evaluation index being smaller than the evaluation index threshold.
In the embodiment of the disclosure, the evaluation index threshold is a preset threshold for judging whether the automatic driving perception detection capability is qualified or not.
By implementing the embodiment of the disclosure, a perception prediction result can be obtained according to real-time environment perception data acquired by a vehicle environment perception sensor and an automatic driving perception engineering simulation platform, and the detection result of a laser point cloud model is combined to evaluate the automatic driving perception detection capability. The driving perception detection capability can be evaluated quickly and automatically, and the evaluation result can objectively reflect the detection performance of the automatic driving model on the actual road.
In one implementation, the real-time environment perception data can be processed frame by frame based on an automatic driving perception engineering simulation platform, and a perception prediction result of the three-dimensional target is obtained based on the processing result. Referring to fig. 2 as an example, fig. 2 is a flow chart illustrating another method for evaluating an automatic driving perception detection capability according to an exemplary embodiment. The method may be performed by an electronic device, which may be a server, as one example. As shown in fig. 2, the method may include, but is not limited to, the following steps.
Step S201: and acquiring real-time environment perception data acquired by the vehicle environment perception sensor.
In the embodiment of the present disclosure, step S201 may be implemented by adopting any one of the embodiments of the present disclosure, and this is not limited in the embodiment of the present disclosure and is not described again.
Step S202: and inputting the real-time environment perception data into the automatic driving perception engineering simulation platform frame by frame to obtain the detection result of each frame of environment perception data.
For example, an autopilot-aware engineering simulation platform is deployed on the electronic device, and the electronic device may input the real-time environment awareness data to the autopilot-aware engineering simulation platform frame by frame for simulation to obtain a detection result of each frame of environment awareness data output by the autopilot-aware engineering simulation platform.
Step S203: and tracking according to the detection result of each frame of environmental perception data to obtain a perception prediction result of the three-dimensional target.
For example, the electronic device processes each frame of environment perception data with a target perception algorithm to obtain the corresponding two-dimensional target detections, and then lifts those two-dimensional detections into three dimensions using the mapping between pixel positions in each frame and the obtained depth information, yielding the perception prediction result of the three-dimensional target in each frame of environment perception data.
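To make the lifting step concrete, here is one plausible way to raise a two-dimensional detection into three dimensions from per-pixel depth, assuming a pinhole camera model; the function name and intrinsics parameters are illustrative assumptions, since the patent does not specify the mapping.

```python
import numpy as np

def lift_box_to_3d(box_2d, depth_map, fx, fy, cx, cy):
    """Lift a 2-D detection box (u1, v1, u2, v2) to a 3-D centre point,
    using the median depth inside the box and pinhole intrinsics."""
    u1, v1, u2, v2 = (int(v) for v in box_2d)
    z = float(np.median(depth_map[v1:v2, u1:u2]))  # robust depth inside the box
    u, v = (u1 + u2) / 2.0, (v1 + v2) / 2.0        # box centre in pixels
    x = (u - cx) * z / fx                           # back-project to camera frame
    y = (v - cy) * z / fy
    return np.array([x, y, z])
```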
Step S204: and acquiring real-time laser point cloud data acquired by a vehicle environment laser radar, inputting the real-time laser point cloud data into a pre-trained laser point cloud model, and acquiring a detection result output by the laser point cloud model.
In the embodiment of the present disclosure, step S204 may be implemented by using any one of the embodiments of the present disclosure, and this is not limited in the embodiment of the present disclosure and is not described again.
Step S205: and obtaining an evaluation index of the automatic driving perception detection capability according to the perception prediction result, the detection result, the tracking prediction information and the tracking information.
In the embodiment of the present disclosure, step S205 may be implemented by adopting any one of the embodiments of the present disclosure, and this is not limited in the embodiment of the present disclosure and is not described again.
Step S206: and evaluating the automatic driving perception detection capability according to the evaluation index.
In the embodiment of the present disclosure, step S206 may be implemented by any one of the embodiments of the present disclosure, which is not limited in this disclosure and is not described again.
By implementing the embodiment of the disclosure, the acquired real-time environmental perception data can be processed frame by frame based on the automatic driving perception engineering simulation platform to obtain the perception prediction result of the three-dimensional target, and the detection result of the laser point cloud model is combined to evaluate the automatic driving perception detection capability. The driving perception detection capability can be evaluated quickly and automatically, and the evaluation result can objectively reflect the detection performance of the automatic driving model on the actual road.
In one implementation, the recall rate and/or accuracy rate of the three-dimensional target can be obtained and calculated based on the perception prediction result, the detection result, the tracking prediction information and the tracking information, so that the evaluation index of the automatic driving perception detection capability can be determined. Referring to fig. 3 as an example, fig. 3 is a flow chart illustrating yet another method for evaluating an automatic driving perception detection capability according to an exemplary embodiment. The method may be performed by an electronic device, which may be a server, as one example. As shown in fig. 3, the method may include, but is not limited to, the following steps.
Step S301: and acquiring real-time environment perception data acquired by the vehicle environment perception sensor.
In the embodiment of the present disclosure, step S301 may be implemented by adopting any one of the embodiments of the present disclosure, and this is not limited in the embodiment of the present disclosure and is not described again.
Step S302: and obtaining a perception prediction result according to the real-time environment perception data and the automatic driving perception engineering simulation platform.
In the embodiment of the present disclosure, step S302 may be implemented by any one of the embodiments of the present disclosure, and this is not limited in the embodiment of the present disclosure and is not described again.
Step S303: and acquiring real-time laser point cloud data acquired by a vehicle environment laser radar, inputting the real-time laser point cloud data into a pre-trained laser point cloud model, and acquiring a detection result output by the laser point cloud model.
In the embodiment of the present disclosure, step S303 may be implemented by respectively adopting any one of the embodiments of the present disclosure, and this is not limited in the embodiment of the present disclosure and is not described again.
Step S304: and taking the detection result as a true value result.
For example, the electronic device uses the detection result output by the laser point cloud model as a true value result representing the real situation.
Step S305: and matching the three-dimensional target in the truth value result and the perception prediction result based on a track association algorithm to obtain an associated three-dimensional target.
For example, the electronic device computes the motion similarity between trajectories of three-dimensional targets in the truth result and in the perception prediction result based on a motion model, and computes their apparent similarity based on an incremental linear appearance model. It obtains the temporal similarity between trajectories by checking whether, at the same moment, the two trajectories contain track points at different positions. The product of the motion, apparent, and temporal similarities is taken as the overall similarity between two three-dimensional targets; when this similarity exceeds a preset similarity threshold, the corresponding targets in the truth result and the perception prediction result are determined to be associated three-dimensional targets.
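A minimal sketch of this association rule follows, assuming the Track3D records sketched earlier; the three similarity functions are stand-ins for the motion model, the incremental linear appearance model, and the temporal-overlap check, whose concrete forms the patent does not spell out.

```python
def associate(gt_tracks, pred_tracks, motion_sim, appearance_sim,
              temporal_sim, threshold=0.5):
    """Pair truth tracks with predicted tracks: two tracks are associated
    when the product of their three similarities exceeds the threshold."""
    matches = []
    for gt in gt_tracks:
        for pred in pred_tracks:
            sim = (motion_sim(gt, pred)
                   * appearance_sim(gt, pred)
                   * temporal_sim(gt, pred))
            if sim > threshold:
                matches.append((gt.track_id, pred.track_id, sim))
                break  # greedy one-to-one matching for simplicity
    return matches
```

The 0.5 threshold and the greedy matching are simplifications; a production system would more likely solve the assignment globally (e.g., Hungarian matching on the similarity matrix).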
Step S306: and counting the associated three-dimensional target according to the perception prediction result, the tracking prediction information, the truth value result and the tracking information so as to calculate the recall rate and/or the accuracy rate of the three-dimensional target.
As an example, the electronic device computes, from the perception prediction result, the tracking prediction information, the truth result, and the tracking information, the percentage of actually existing three-dimensional targets that were detected, which is the recall of the three-dimensional targets.
As another example, the electronic device computes, from the same four inputs, the percentage of detected three-dimensional targets that actually exist, which is the accuracy of the three-dimensional targets.
As yet another example, the electronic device computes both percentages from the same four inputs, obtaining both the recall and the accuracy of the three-dimensional targets.
In an implementation manner, the calculating the recall rate of the three-dimensional target by performing statistics on the associated three-dimensional target according to the perceptual prediction result, the tracking prediction information, the truth result, and the tracking information may include: counting the number of detected associated three-dimensional targets according to the perception prediction result and the tracking prediction information; counting the number of all associated three-dimensional targets according to a true value result and tracking information; and calculating the recall rate of the three-dimensional targets according to the detected number of the associated three-dimensional targets and the number of all associated three-dimensional targets.
For example, the electronic device counts the number of detected associated three-dimensional targets according to the perception prediction result and the tracking prediction information; counts the number of all actually existing associated three-dimensional targets according to the truth result and the tracking information; and calculates the percentage of detected associated three-dimensional targets among all associated three-dimensional targets, which is taken as the recall of the three-dimensional targets.
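A sketch of that recall computation, reusing the match list produced by the association sketch above (the names are illustrative):

```python
def compute_recall(matched_gt_ids, gt_tracks):
    """Recall: detected associated 3-D targets divided by all associated
    3-D targets present in the truth result and tracking information."""
    total = len(gt_tracks)
    return len(matched_gt_ids) / total if total else 0.0
```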
In an implementation manner, the calculating the accuracy of the three-dimensional target by performing statistics on the associated three-dimensional target according to the perceptual prediction result, the tracking prediction information, the truth result, and the tracking information may include: according to the perception prediction result and the tracking prediction information, counting a first number of the associated three-dimensional targets detected as true; according to the truth value result and the tracking information, counting a second number of the associated three-dimensional targets detected as true; and calculating the accuracy of the three-dimensional target according to the first quantity and the second quantity.
For example, the electronic device counts a first number of associated three-dimensional targets detected as true according to the perception prediction result and the tracking prediction information; counts a second number of associated three-dimensional targets detected as true according to the truth result and the tracking information; and calculates the percentage of the first number in the second number, which is taken as the accuracy of detecting the three-dimensional target.
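The corresponding accuracy computation, sketched with the same conventions; the two arguments mirror the "first number" and "second number" above:

```python
def compute_precision(first_count, second_count):
    """Accuracy: associated 3-D targets detected as true per the perception
    prediction (first_count) over those detected as true per the truth
    result (second_count)."""
    return first_count / second_count if second_count else 0.0
```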
Step S307: and determining the recall rate and/or the accuracy rate of the three-dimensional target as an evaluation index of the automatic driving perception detection capability.
As an example, the electronic apparatus determines a recall rate of the three-dimensional target as an evaluation index of the automatic driving perception detecting capability.
As another example, the electronic device determines the accuracy of the three-dimensional target as an evaluation index of the automatic driving perception detection capability.
As yet another example, the electronic device determines recall and accuracy of the three-dimensional target as an evaluation index of the automatic driving perception detection capability.
Step S308: and evaluating the automatic driving perception detection capability according to the evaluation index.
As an example, the electronic device determines the recall rate of the three-dimensional target as an evaluation index of the automatic driving perception detection capability. The electronic equipment determines that the automatic driving perception detection capability is qualified in response to the recall rate being greater than or equal to a recall rate threshold; alternatively, in response to the recall rate being less than the recall rate threshold, determining that the autonomous driving perception detection capability is not acceptable.
As another example, the electronic device determines the accuracy of the three-dimensional target as an evaluation index of the automatic driving perception detection capability. The electronic equipment responds to the fact that the accuracy is larger than or equal to the accuracy threshold value, and the automatic driving perception detection capability is determined to be qualified; alternatively, in response to the accuracy rate being less than the accuracy rate threshold, determining that the autonomous driving perception detection capability is not qualified.
As still another example, the electronic device determines the recall rate and accuracy of the three-dimensional target as the evaluation index of the automatic driving perception detection capability. The electronic device determines that the autonomous driving perception detection capability is qualified in response to the recall rate being greater than or equal to the recall rate threshold and the accuracy rate being greater than or equal to the accuracy rate threshold, and otherwise determines that the autonomous driving perception detection capability is unqualified.
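A minimal sketch of this qualification check; the 0.9 thresholds are placeholders, not values given in the patent:

```python
def evaluate_capability(recall_value, precision_value,
                        recall_threshold=0.9, precision_threshold=0.9):
    """Qualified only when both evaluation indices reach their thresholds."""
    return (recall_value >= recall_threshold
            and precision_value >= precision_threshold)
```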
By implementing the embodiment of the disclosure, the perception prediction result can be obtained according to the real-time environment perception data acquired by the vehicle environment perception sensor and the automatic driving perception engineering simulation platform, so that the recall rate and/or the accuracy rate of the three-dimensional target can be obtained and calculated based on the perception prediction result, the detection result, the tracking prediction information and the tracking information, and the evaluation index of the automatic driving perception detection capability can be determined. The driving perception detection capability can be evaluated quickly and automatically, and the evaluation result can objectively reflect the detection performance of the automatic driving model on the actual road.
Referring to fig. 4, fig. 4 is a flow chart illustrating evaluation of an automatic driving perception detection capability according to an exemplary embodiment. As shown in fig. 4, the evaluation process first acquires environment perception data, such as video data collected by a vehicle environment perception sensor, and inputs it into the simulation platform for continuous-frame detection to obtain a continuous-frame detection result. A tracked (track) three-dimensional result is then obtained for each frame through trajectory tracking, and correlation matching is performed against the ground-truth tracking (track) result to obtain the number of associated three-dimensional targets in the tracked three-dimensional result and in the truth tracking result. Combined with the number of three-dimensional targets actually present in the truth tracking result, the precision and recall of the automatic driving perception detection are calculated, and the automatic driving perception detection capability is evaluated on the basis of these indices.
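This end-to-end flow can be sketched as below, reusing the helpers from the earlier sketches; the simulation-platform and point-cloud-model interfaces (detect, track, detect_and_track) are hypothetical stand-ins, and precision here is taken in the conventional sense of matched predictions over all predictions.

```python
def run_evaluation(frames, lidar_clouds, sim_platform, pointcloud_model,
                   motion_sim, appearance_sim, temporal_sim):
    """End-to-end evaluation flow of fig. 4 (sketch)."""
    # Continuous-frame detection on the simulation platform, then tracking.
    per_frame_dets = [sim_platform.detect(frame) for frame in frames]
    pred_tracks = sim_platform.track(per_frame_dets)             # tracked 3-D result
    gt_tracks = pointcloud_model.detect_and_track(lidar_clouds)  # truth tracking result

    # Correlation matching between truth tracks and predicted tracks.
    matches = associate(gt_tracks, pred_tracks,
                        motion_sim, appearance_sim, temporal_sim)
    matched_gt_ids = {gt_id for gt_id, _, _ in matches}
    matched_pred_ids = {pred_id for _, pred_id, _ in matches}

    r = compute_recall(matched_gt_ids, gt_tracks)
    p = compute_precision(len(matched_pred_ids), len(pred_tracks))
    return evaluate_capability(r, p)
```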
Referring to fig. 5, fig. 5 is a block diagram of an evaluation device for automatic driving perception detection capability according to an exemplary embodiment. As shown in fig. 5, the apparatus 500 includes: an obtaining module 501, configured to obtain real-time environment sensing data acquired by a vehicle environment sensing sensor; the first processing module 502 is used for obtaining a perception prediction result according to the real-time environment perception data and the automatic driving perception engineering simulation platform; the perception prediction result comprises tracking prediction information; the second processing module 503 is configured to acquire real-time laser point cloud data acquired by the vehicle environment laser radar, input the real-time laser point cloud data to a pre-trained laser point cloud model, and acquire a detection result output by the laser point cloud model; the detection result comprises tracking information; a third processing module 504, configured to obtain an evaluation index of the automatic driving perception detection capability according to the perception prediction result, the detection result, the tracking prediction information, and the tracking information; and the evaluation module 505 is used for evaluating the automatic driving perception detection capability according to the evaluation index.
In one implementation, the first processing module 502 is specifically configured to: inputting the real-time environmental perception data into an automatic driving perception engineering simulation platform frame by frame to obtain a detection result of each frame of environmental perception data; and tracking according to the detection result of each frame of environmental perception data to obtain a perception prediction result of the three-dimensional target.
In an implementation manner, the third processing module 504 is specifically configured to: taking the detection result as a true value result; matching the three-dimensional target in the truth value result and the perception prediction result based on a track association algorithm to obtain an associated three-dimensional target; counting the associated three-dimensional target according to the perception prediction result, the tracking prediction information, the truth value result and the tracking information so as to calculate the recall rate and/or the accuracy rate of the three-dimensional target; and determining the recall rate and/or the accuracy rate of the three-dimensional target as an evaluation index of the automatic driving perception detection capability.
In an optional implementation manner, the third processing module 504 is specifically configured to: counting the number of detected associated three-dimensional targets according to a perception prediction result and tracking prediction information; counting the number of all associated three-dimensional targets according to the truth value result and the tracking information; and calculating the recall rate of the three-dimensional targets according to the detected number of the associated three-dimensional targets and the number of all the associated three-dimensional targets.
Optionally, the third processing module 504 is specifically configured to: according to the perception prediction result and the tracking prediction information, counting a first number of the associated three-dimensional targets detected as true; according to the true value result and the tracking information, counting a second number of the associated three-dimensional targets detected as true; and calculating the accuracy of the three-dimensional target according to the first quantity and the second quantity.
By the device, a perception prediction result can be obtained according to real-time environment perception data acquired by the vehicle environment perception sensor and the automatic driving perception engineering simulation platform, and the automatic driving perception detection capability can be evaluated by combining the detection result of the laser point cloud model. The driving perception detection capability can be evaluated quickly and automatically, and the evaluation result can objectively reflect the detection performance of the automatic driving model on the actual road.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Referring to fig. 6, fig. 6 is a schematic diagram of an electronic device according to an exemplary embodiment. For example, the electronic device 600 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, a wearable device, and the like.
Referring to fig. 6, electronic device 600 may include one or more of the following components: processing component 602, memory 604, power component 606, multimedia component 608, audio component 610, input/output (I/O) interface 612, sensor component 614, and communication component 616.
The processing component 602 generally controls overall operation of the electronic device 600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 602 may include one or more processors 620 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 602 can include one or more modules that facilitate interaction between the processing component 602 and other components. For example, the processing component 602 can include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operations at the electronic device 600. Examples of such data include instructions for any application or method operating on the electronic device 600, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 604 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 606 provides power to the various components of the electronic device 600. The power components 606 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 600.
The multimedia component 608 includes a touch sensitive display screen that provides an output interface between the electronic device 600 and a user. In some embodiments, the touch display screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 608 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 600 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 610 is configured to output and/or input audio signals. For example, the audio component 610 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 600 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 604 or transmitted via the communication component 616. In some embodiments, audio component 610 further includes a speaker for outputting audio signals.
The input/output interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 614 includes one or more sensors for providing various aspects of status assessment for the electronic device 600. For example, the sensor component 614 may detect an open/closed state of the electronic device 600 and the relative positioning of components, such as the display and keypad of the electronic device 600; it may also detect a change in position of the electronic device 600 or of one of its components, the presence or absence of user contact with the electronic device 600, the orientation or acceleration/deceleration of the electronic device 600, and a change in its temperature. The sensor component 614 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, and may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 614 may further include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 616 is configured to facilitate communications between the electronic device 600 and other devices in a wired or wireless manner. The electronic device 600 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 616 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 616 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 600 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the methods described in any of the above embodiments.
The present disclosure also provides a readable storage medium having stored thereon instructions which, when executed by a computer, implement the functionality of any of the above-described method embodiments.
The present disclosure also provides a computer program product which, when executed by a computer, implements the functionality of any of the above-described method embodiments.
In the above embodiments, all or part of the implementation may be realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer programs. The procedures or functions according to the embodiments of the present disclosure are wholly or partially generated when the computer program is loaded and executed on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer program may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another computer readable storage medium, for example, the computer program may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center through a wired (e.g., coaxial cable, fiber optic, digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.) manner. The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a Digital Versatile Disk (DVD)), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
Those of ordinary skill in the art will understand that the various numbers such as "first" and "second" involved in this disclosure are merely for convenience of description and distinction; they neither limit the scope of the embodiments of the disclosure nor indicate an order of precedence.
In the present disclosure, "at least one" may also be described as one or more, and "a plurality" may be two, three, four, or more, which the present disclosure does not limit. In the embodiments of the present disclosure, where technical features are distinguished by "first", "second", "third", "A", "B", "C", "D", and the like, these designations carry no order of priority or magnitude.
Predefinition in this disclosure may be understood as defining, predefining, storing, pre-negotiating, pre-configuring, curing, or pre-firing.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The above description is only for the specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present disclosure, and all the changes or substitutions should be covered within the scope of the present disclosure. Therefore, the protection scope of the present disclosure should be subject to the protection scope of the claims.

Claims (12)

1. An evaluation method for automatic driving perception detection capability is characterized by comprising the following steps:
acquiring real-time environment perception data acquired by a vehicle environment perception sensor;
obtaining a perception prediction result according to the real-time environment perception data and an automatic driving perception engineering simulation platform; the perceptual prediction result comprises tracking prediction information;
acquiring real-time laser point cloud data acquired by the vehicle environment laser radar, inputting the real-time laser point cloud data into a pre-trained laser point cloud model, and acquiring a detection result output by the laser point cloud model; the detection result comprises tracking information;
obtaining an evaluation index of the automatic driving perception detection capability according to the perception prediction result, the detection result, the tracking prediction information and the tracking information;
and evaluating the automatic driving perception detection capability according to the evaluation index.
2. The method of claim 1, wherein obtaining a perception prediction result based on the real-time environmental perception data and an automated driving perception engineering simulation platform comprises:
inputting the real-time environment perception data into the automatic driving perception engineering simulation platform frame by frame to obtain a detection result of each frame of environment perception data;
and tracking according to the detection result of each frame of environmental perception data to obtain a perception prediction result of the three-dimensional target.
3. The method according to claim 1, wherein the obtaining an evaluation indicator of the automatic driving perception detection capability according to the perception prediction result, the detection result, the tracking prediction information, and the tracking information comprises:
taking the detection result as a truth value result;
matching the three-dimensional target in the truth value result and the perception prediction result based on a track association algorithm to obtain an associated three-dimensional target;
counting the associated three-dimensional target according to the perception prediction result, the tracking prediction information, the truth value result and the tracking information so as to calculate the recall rate and/or accuracy rate of the three-dimensional target;
and determining the recall rate and/or the accuracy rate of the three-dimensional target as an evaluation index of the automatic driving perception detection capability.
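Claim 3 leaves the track association algorithm open; the greedy matcher below is one possible stand-in. It pairs each truth track with the closest unmatched predicted track by mean center distance over shared frames, reusing the track representation from the sketch after claim 2.

```python
import math

def associate_tracks(truth_tracks, pred_tracks, gate=2.0):
    """Greedy illustration of the claim-3 association step. Tracks are
    dicts of track id -> [(frame_idx, (x, y)), ...]."""
    def mean_dist(a, b):
        da, db = dict(a), dict(b)
        shared = da.keys() & db.keys()
        if not shared:
            return math.inf
        return sum(math.hypot(da[f][0] - db[f][0], da[f][1] - db[f][1])
                   for f in shared) / len(shared)

    pairs, used = [], set()
    for tid, t_hist in truth_tracks.items():
        best_pid, best_d = None, gate
        for pid, p_hist in pred_tracks.items():
            if pid in used:
                continue
            d = mean_dist(t_hist, p_hist)
            if d < best_d:
                best_pid, best_d = pid, d
        if best_pid is not None:
            used.add(best_pid)
            pairs.append((tid, best_pid))   # one associated 3D target
    return pairs
```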
4. The method of claim 3, wherein the counting the associated three-dimensional target according to the perception prediction result, the tracking prediction information, the truth value result, and the tracking information to calculate the recall rate of the three-dimensional target comprises:
counting the number of detected associated three-dimensional targets according to the perception prediction result and the tracking prediction information;
counting the number of all associated three-dimensional targets according to the truth value result and the tracking information;
and calculating the recall rate of the three-dimensional targets according to the detected number of the associated three-dimensional targets and the number of all the associated three-dimensional targets.
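Under one simplified, frame-level reading of claim 4 (the claim does not fix the counting granularity), the recall rate could be computed as follows, using the association pairs from the previous sketch:

```python
def recall_rate(pairs, truth_tracks, pred_tracks):
    """Claim-4 sketch: detected associated targets over all associated
    targets, counted per frame of each associated truth track."""
    detected = total = 0
    for tid, pid in pairs:
        truth_frames = {f for f, _ in truth_tracks[tid]}
        pred_frames = {f for f, _ in pred_tracks[pid]}
        detected += len(truth_frames & pred_frames)  # also predicted
        total += len(truth_frames)                   # all truth instances
    return detected / total if total else 0.0
```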
5. The method of claim 4, wherein the counting the associated three-dimensional target according to the perception prediction result, the tracking prediction information, the truth value result, and the tracking information to calculate the accuracy rate of the three-dimensional target comprises:
according to the perception prediction result and the tracking prediction information, counting a first number of the associated three-dimensional targets detected as true;
according to the truth value result and the tracking information, counting a second number of the associated three-dimensional targets detected as true;
and calculating the accuracy of the three-dimensional target according to the first quantity and the second quantity.
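A matching frame-level reading of claim 5 treats the first number as predicted instances of associated targets that the truth result confirms as true, and the second number as all instances the prediction reports; the ratio below is one plausible accuracy formula, not the patent's fixed definition.

```python
def accuracy_rate(pairs, truth_tracks, pred_tracks):
    """Claim-5 sketch: first number = predicted instances of associated
    targets confirmed true by the truth result; second number = all
    predicted instances; accuracy = first / second."""
    first = second = 0
    for tid, pid in pairs:
        truth_frames = {f for f, _ in truth_tracks[tid]}
        pred_frames = {f for f, _ in pred_tracks[pid]}
        first += len(pred_frames & truth_frames)   # confirmed true
        second += len(pred_frames)                 # all predicted
    return first / second if second else 0.0
```

Chaining the sketches, `pairs = associate_tracks(truth_tracks, pred_tracks)` followed by `recall_rate(pairs, ...)` and/or `accuracy_rate(pairs, ...)` reproduces the claim-3 evaluation index end to end, under the same illustrative assumptions.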
6. An evaluation device for automatic driving perception detection capability, characterized by comprising:
the acquisition module is used for acquiring real-time environment perception data collected by a vehicle environment perception sensor;
the first processing module is used for obtaining a perception prediction result according to the real-time environment perception data and an automatic driving perception engineering simulation platform, wherein the perception prediction result comprises tracking prediction information;
the second processing module is used for acquiring real-time laser point cloud data collected by a vehicle environment laser radar, inputting the real-time laser point cloud data into a pre-trained laser point cloud model, and acquiring a detection result output by the laser point cloud model, wherein the detection result comprises tracking information;
the third processing module is used for acquiring an evaluation index of the automatic driving perception detection capability according to the perception prediction result, the detection result, the tracking prediction information and the tracking information;
and the evaluation module is used for evaluating the automatic driving perception detection capability according to the evaluation index.
7. The apparatus of claim 6, wherein the first processing module is specifically configured to:
inputting the real-time environment perception data into the automatic driving perception engineering simulation platform frame by frame to obtain a detection result of each frame of environment perception data;
and tracking according to the detection result of each frame of environment perception data to obtain a perception prediction result of a three-dimensional target.
8. The apparatus of claim 6, wherein the third processing module is specifically configured to:
taking the detection result as a truth value result;
matching the three-dimensional target in the truth value result and the perception prediction result based on a track association algorithm to obtain an associated three-dimensional target;
counting the associated three-dimensional target according to the perception prediction result, the tracking prediction information, the truth value result and the tracking information so as to calculate the recall rate and/or the accuracy rate of the three-dimensional target;
and determining the recall rate and/or the accuracy rate of the three-dimensional target as an evaluation index of the automatic driving perception detection capability.
9. The apparatus of claim 8, wherein the third processing module is specifically configured to:
counting the number of detected associated three-dimensional targets according to the perception prediction result and the tracking prediction information;
counting the number of all associated three-dimensional targets according to the truth value result and the tracking information;
and calculating the recall rate of the three-dimensional targets according to the detected number of the associated three-dimensional targets and the number of all associated three-dimensional targets.
10. The apparatus of claim 9, wherein the third processing module is specifically configured to:
according to the perception prediction result and the tracking prediction information, counting a first number of the associated three-dimensional targets detected as true;
according to the truth value result and the tracking information, counting a second number of the associated three-dimensional targets detected as true;
and calculating the accuracy of the three-dimensional target according to the first quantity and the second quantity.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 5.
12. A computer-readable storage medium storing instructions that, when executed, cause the method of any one of claims 1 to 5 to be implemented.
CN202310127418.8A 2023-02-17 2023-02-17 Evaluation method and device for automatic driving perception detection capability and electronic equipment Active CN115907566B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310127418.8A CN115907566B (en) 2023-02-17 2023-02-17 Evaluation method and device for automatic driving perception detection capability and electronic equipment

Publications (2)

Publication Number Publication Date
CN115907566A (en) 2023-04-04
CN115907566B CN115907566B (en) 2023-05-30

Family

ID=85737492

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310127418.8A Active CN115907566B (en) 2023-02-17 2023-02-17 Evaluation method and device for automatic driving perception detection capability and electronic equipment

Country Status (1)

Country Link
CN (1) CN115907566B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180313724A1 (en) * 2017-04-27 2018-11-01 Baidu Online Network Technology (Beijing) Co., Ltd. Testing method and apparatus applicable to driverless vehicle
CN111983935A (en) * 2020-08-19 2020-11-24 北京京东叁佰陆拾度电子商务有限公司 Performance evaluation method and device
CN114384547A (en) * 2022-01-10 2022-04-22 清华珠三角研究院 Radar sensor model-based fidelity detection evaluation method and system
CN115147796A (en) * 2022-07-14 2022-10-04 小米汽车科技有限公司 Method and device for evaluating target recognition algorithm, storage medium and vehicle

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117591847A (en) * 2024-01-19 2024-02-23 福思(杭州)智能科技有限公司 Model pointing evaluating method and device based on vehicle condition data
CN117591847B (en) * 2024-01-19 2024-05-07 福思(杭州)智能科技有限公司 Model pointing evaluating method and device based on vehicle condition data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant