CN115907566B - Evaluation method and device for automatic driving perception detection capability and electronic equipment

Evaluation method and device for automatic driving perception detection capability and electronic equipment

Info

Publication number
CN115907566B
Authority
CN
China
Prior art keywords
perception
result
dimensional
tracking
automatic driving
Prior art date
Legal status
Active
Application number
CN202310127418.8A
Other languages
Chinese (zh)
Other versions
CN115907566A (en)
Inventor
张琼
Current Assignee
Xiaomi Automobile Technology Co Ltd
Original Assignee
Xiaomi Automobile Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Xiaomi Automobile Technology Co Ltd filed Critical Xiaomi Automobile Technology Co Ltd
Priority to CN202310127418.8A priority Critical patent/CN115907566B/en
Publication of CN115907566A publication Critical patent/CN115907566A/en
Application granted granted Critical
Publication of CN115907566B publication Critical patent/CN115907566B/en

Classifications

    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Traffic Control Systems (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The disclosure relates to an evaluation method and device for automatic driving perception detection capability and electronic equipment, wherein the method comprises the following steps: acquiring real-time environment sensing data acquired by a vehicle environment sensing sensor; obtaining a perception prediction result according to the real-time environment perception data and the automatic driving perception engineering simulation platform, wherein the perception prediction result comprises tracking prediction information; acquiring real-time laser point cloud data acquired by a vehicle environment laser radar, inputting the real-time laser point cloud data into a pre-trained laser point cloud model, and acquiring a detection result output by the laser point cloud model, wherein the detection result comprises tracking information; acquiring an evaluation index of the automatic driving perception detection capability according to the perception prediction result, the detection result, the tracking prediction information and the tracking information; and evaluating the automatic driving perception detection capability according to the evaluation index. According to the technical scheme, the automatic driving perception detection capability can be rapidly and automatically evaluated, and the evaluation result can objectively reflect the detection performance of the automatic driving model on an actual road.

Description

Evaluation method and device for automatic driving perception detection capability and electronic equipment
Technical Field
The disclosure relates to the technical field of automatic driving, and in particular relates to an evaluation method and device for automatic driving perception detection capability and electronic equipment.
Background
In the related art, the perception detection effect of an automatic driving perception model on key targets, such as vehicles and pedestrians, is generally evaluated based on the accuracy and recall computed on image data. In practical application, however, the downstream automatic driving functions consume post-processed data, so the depth of a target affects the judgment of accuracy and recall, a characteristic that a model-level evaluation cannot capture. Evaluating accuracy and recall on the post-processed data delivered downstream is therefore very important. On the other hand, as the distance of a target increases, its distance error grows larger and larger, so a target that is actually detected may be counted as missed because its longitudinal error is too large. These drawbacks cause the overall evaluation of the automatic driving perception detection capability to deviate considerably from reality.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides an evaluation method, an apparatus, an electronic device, and a storage medium for automatic driving perception detection capability.
According to a first aspect of an embodiment of the present disclosure, there is provided a method for evaluating an autopilot perception detection capability, including: acquiring real-time environment sensing data acquired by a vehicle environment sensing sensor; obtaining a perception prediction result according to the real-time environment perception data and the automatic driving perception engineering simulation platform; the perception prediction result comprises tracking prediction information; acquiring real-time laser point cloud data acquired by the vehicle environment laser radar, inputting the real-time laser point cloud data into a pre-trained laser point cloud model, and acquiring a detection result output by the laser point cloud model; the detection result comprises tracking information; acquiring an evaluation index of the automatic driving perception detection capability according to the perception prediction result, the detection result, the tracking prediction information and the tracking information; and evaluating the automatic driving perception detection capability according to the evaluation index.
In one implementation manner, the obtaining a perception prediction result according to the real-time environment perception data and the autopilot perception engineering simulation platform includes: inputting the real-time environment sensing data into the automatic driving sensing engineering simulation platform frame by frame to obtain a detection result of each frame of environment sensing data; tracking according to the detection result of the environmental perception data of each frame to obtain a perception prediction result of the three-dimensional target.
In one implementation manner, the obtaining the evaluation index of the autopilot perception detection capability according to the perception prediction result, the detection result, the tracking prediction information and the tracking information includes: taking the detection result as a true value result; based on a track association algorithm, matching the three-dimensional targets in the true result and the perception prediction result to obtain an associated three-dimensional target; counting the associated three-dimensional targets according to the perception prediction result, the tracking prediction information, the truth result and the tracking information to calculate recall rate and/or accuracy rate of the three-dimensional targets; and determining the recall rate and/or the accuracy rate of the three-dimensional target as an evaluation index of the automatic driving perception detection capability.
In an alternative implementation, the counting the associated three-dimensional targets according to the perception prediction result, the tracking prediction information, the truth result and the tracking information to calculate the recall of the three-dimensional targets includes: counting the number of the detected associated three-dimensional targets according to the perception prediction result and the tracking prediction information; counting the number of all the associated three-dimensional targets according to the truth result and the tracking information; and calculating the recall rate of the three-dimensional targets according to the detected number of the associated three-dimensional targets and the number of all the associated three-dimensional targets.
Optionally, the calculating the accuracy of the three-dimensional target by counting the associated three-dimensional target according to the perception prediction result, the tracking prediction information, the truth result and the tracking information includes: according to the perception prediction result and the tracking prediction information, counting a first number of associated three-dimensional targets which are detected as true; based on the truth result and the tracking information, counting a second number of associated three-dimensional targets detected as true; and calculating the accuracy of the three-dimensional target according to the first quantity and the second quantity.
According to a second aspect of the embodiments of the present disclosure, there is provided an apparatus for evaluating an autopilot perception detection capability, including: the acquisition module is used for acquiring real-time environment sensing data acquired by the vehicle environment sensing sensor; the first processing module is used for obtaining a perception prediction result according to the real-time environment perception data and the automatic driving perception engineering simulation platform; the perception prediction result comprises tracking prediction information; the second processing module is used for acquiring real-time laser point cloud data acquired by the vehicle environment laser radar, inputting the real-time laser point cloud data into a pre-trained laser point cloud model and acquiring a detection result output by the laser point cloud model; the detection result comprises tracking information; the third processing module is used for acquiring an evaluation index of the automatic driving perception detection capability according to the perception prediction result, the detection result, the tracking prediction information and the tracking information; and the evaluation module is used for evaluating the automatic driving perception detection capability according to the evaluation index.
In one implementation, the first processing module is specifically configured to: inputting the real-time environment sensing data into the automatic driving sensing engineering simulation platform frame by frame to obtain a detection result of each frame of environment sensing data; tracking according to the detection result of the environmental perception data of each frame to obtain a perception prediction result of the three-dimensional target.
In one implementation, the third processing module is specifically configured to: taking the detection result as a true value result; based on a track association algorithm, matching the three-dimensional targets in the true result and the perception prediction result to obtain an associated three-dimensional target; counting the associated three-dimensional targets according to the perception prediction result, the tracking prediction information, the truth result and the tracking information to calculate recall rate and/or accuracy rate of the three-dimensional targets; and determining the recall rate and/or the accuracy rate of the three-dimensional target as an evaluation index of the automatic driving perception detection capability.
In an alternative implementation, the third processing module is specifically configured to: counting the number of the detected associated three-dimensional targets according to the perception prediction result and the tracking prediction information; counting the number of all the associated three-dimensional targets according to the truth result and the tracking information; and calculating the recall rate of the three-dimensional targets according to the detected number of the associated three-dimensional targets and the number of all the associated three-dimensional targets.
Optionally, the third processing module is specifically configured to: according to the perception prediction result and the tracking prediction information, counting a first number of associated three-dimensional targets which are detected as true; based on the truth result and the tracking information, counting a second number of associated three-dimensional targets detected as true; and calculating the accuracy of the three-dimensional target according to the first quantity and the second quantity.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the preceding first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer readable storage medium storing instructions that, when executed, cause the method according to the first aspect to be implemented.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the steps of the method of the first aspect.
The technical scheme provided by the embodiments of the disclosure can have the following beneficial effects: a perception prediction result can be obtained according to the real-time environment perception data acquired by the vehicle environment perception sensor and the automatic driving perception engineering simulation platform, and the automatic driving perception detection capability can be evaluated by combining the detection result of the laser point cloud model. The automatic driving perception detection capability can thus be rapidly and automatically evaluated, and the evaluation result can objectively reflect the detection performance of the automatic driving model on an actual road.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow chart illustrating a method of evaluating autopilot awareness detection capability according to one exemplary embodiment.
FIG. 2 is a flow chart illustrating another method of evaluating autopilot awareness detection capability according to one exemplary embodiment.
FIG. 3 is a flowchart illustrating yet another method of evaluating autopilot awareness detection capability, according to one exemplary embodiment.
FIG. 4 is a flowchart illustrating an evaluation of autopilot awareness detection capability, according to one exemplary embodiment.
FIG. 5 is a block diagram of an apparatus for evaluating autopilot awareness detection capability, according to one exemplary embodiment.
Fig. 6 is a schematic diagram of an electronic device, according to an example embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the invention. Rather, they are merely examples of apparatus and methods consistent with aspects of the invention as detailed in the accompanying claims.
In the description of the present disclosure, "/" means "or" unless otherwise indicated; for example, A/B may represent A or B. "And/or" herein merely describes an association relationship between associated objects and means that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist together, or B exists alone. The various numbers of "first", "second", etc. referred to in this disclosure are merely for ease of description and are not intended to limit the scope of embodiments of this disclosure nor to indicate ordering.
FIG. 1 is a flow chart illustrating a method of evaluating autopilot awareness detection capability according to one exemplary embodiment. The method may be performed by an electronic device, which may be a server, as an example. As shown in fig. 1, the method may include, but is not limited to, the following steps.
Step S101: and acquiring real-time environment sensing data acquired by the vehicle environment sensing sensor.
For example, the electronic device acquires, from the vehicle side, the real-time environment sensing data collected by the vehicle environment sensing sensor.
As an example, suppose the perception detection capability of a camera needs to be evaluated. The vehicle collects real-time environment sensing data through the vehicle camera and sends the data to the electronic device, so that the electronic device obtains the real-time environment sensing data acquired by the vehicle environment sensing sensor.
As another example, suppose the perception detection capability of a millimeter-wave radar needs to be evaluated. The vehicle collects real-time environment sensing data through the vehicle millimeter-wave radar and sends the data to the electronic device, so that the electronic device obtains the real-time environment sensing data acquired by the vehicle environment sensing sensor.
Step S102: and obtaining a perception prediction result according to the real-time environment perception data and the automatic driving perception engineering simulation platform.
Wherein, in embodiments of the present disclosure, the perceptual prediction result comprises tracking prediction information.
For example, the electronic device is provided with an autopilot perception engineering simulation platform, and the electronic device can input real-time environment perception data to the autopilot perception engineering simulation platform to perform simulation to obtain a perception prediction result of an object (for example, a vehicle and a pedestrian in the same lane or an adjacent lane) which may affect autopilot and is output by the autopilot perception engineering simulation platform, wherein the perception prediction result includes tracking prediction information (for example, a prediction position, a prediction size and the like) of the object.
Step S103: real-time laser point cloud data acquired by a laser radar in a vehicle environment are acquired, the real-time laser point cloud data are input into a pre-trained laser point cloud model, and a detection result output by the laser point cloud model is acquired.
Wherein, in the embodiment of the present disclosure, the detection result includes tracking information.
For example, the electronic device acquires real-time laser point cloud data acquired by the vehicle environment laser radar from the vehicle side, inputs the real-time laser point cloud data into a pre-trained laser point cloud model, and acquires a detection result of a real existing target which is output by the laser point cloud model and may affect automatic driving, wherein the detection result includes tracking information (for example, an actual position, an actual size, and the like) of the target.
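To make the two result streams concrete, the following is a minimal sketch of the kind of record both steps produce; the class name, field names and types (TrackedObject, track_id, position, size) are illustrative assumptions and are not specified by the patent.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TrackedObject:
    """One three-dimensional target together with its tracking information.

    The same structure can hold a perception prediction result (tracking
    prediction information: predicted position and size) or a laser point
    cloud detection result (tracking information: actual position and size).
    """
    track_id: int                          # identity kept across frames
    timestamp: float                       # frame time in seconds
    position: Tuple[float, float, float]   # x, y, z in the vehicle frame, meters
    size: Tuple[float, float, float]       # length, width, height, meters
    category: str                          # e.g. "vehicle" or "pedestrian"

# The simulation platform yields predictions; the laser point cloud model
# yields the detections used later as the truth result.
perception_predictions: List[TrackedObject] = []
lidar_detections: List[TrackedObject] = []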
Step S104: and acquiring an evaluation index of the automatic driving perception detection capability according to the perception prediction result, the detection result, the tracking prediction information and the tracking information.
For example, the electronic device obtains the detected number of targets according to the sensing prediction result and the tracking prediction information, and obtains the actually existing number of targets according to the detection result and the tracking information, so as to obtain the detection rate of the automatic driving sensing detection on the actually existing targets according to the detected number of targets and the actually existing number of targets, and the detection rate is used as an evaluation index of the automatic driving sensing detection capability.
Step S105: and evaluating the automatic driving perception detection capability according to the evaluation index.
For example, the electronic device compares the evaluation index to an evaluation index threshold, and determines that the autopilot sensing capability is acceptable in response to the evaluation index being greater than or equal to the evaluation index threshold; or, determining that the autopilot sensing capability is not acceptable in response to the evaluation indicator being less than the evaluation indicator threshold.
In the embodiment of the disclosure, the evaluation index threshold is a preset threshold for judging whether the automatic driving perception detection capability is qualified or not.
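As a hedged sketch of this pass/fail decision, assuming a single scalar evaluation index and a preset threshold (the function name and the example values are illustrative only):

def evaluate_capability(evaluation_index: float, index_threshold: float) -> bool:
    """Return True (qualified) when the evaluation index reaches the threshold,
    False (not qualified) otherwise."""
    return evaluation_index >= index_threshold

# Example: a detection rate of 0.92 against an assumed threshold of 0.90 passes.
print(evaluate_capability(0.92, 0.90))  # True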
By implementing the embodiment of the disclosure, the sensing prediction result can be obtained according to the real-time environment sensing data acquired by the vehicle environment sensing sensor and the automatic driving sensing engineering simulation platform, and the automatic driving sensing detection capability is evaluated by combining the detection result of the laser point cloud model. The driving perception detection capability can be rapidly and automatically evaluated, and the evaluation result can objectively reflect the detection performance of the automatic driving model on an actual road.
In one implementation, the real-time environment perception data can be processed frame by frame based on the automatic driving perception engineering simulation platform, and the perception prediction result of the three-dimensional target can be obtained based on the processing result. As an example, referring to fig. 2, fig. 2 is a flowchart illustrating another method of evaluating the automatic driving perception detection capability according to an exemplary embodiment. The method may be performed by an electronic device, which may be a server, as an example. As shown in fig. 2, the method may include, but is not limited to, the following steps.
Step S201: and acquiring real-time environment sensing data acquired by the vehicle environment sensing sensor.
In the embodiment of the present disclosure, step S201 may be implemented in any manner of each embodiment of the present disclosure, which is not limited to this embodiment, and is not described in detail.
Step S202: and inputting the real-time environment sensing data into an automatic driving sensing engineering simulation platform frame by frame to obtain a detection result of each frame of environment sensing data.
For example, an autopilot perception engineering simulation platform is deployed on the electronic device, and the electronic device can input real-time environment perception data into the autopilot perception engineering simulation platform frame by frame to simulate, so as to obtain a detection result of each frame of environment perception data output by the autopilot perception engineering simulation platform.
Step S203: tracking according to the detection result of the environmental perception data of each frame to obtain the perception prediction result of the three-dimensional target.
For example, the electronic device processes the environmental awareness data of each frame by using an object awareness algorithm to obtain a corresponding two-dimensional object detection result, and processes the two-dimensional object detection result based on a mapping relationship between a position of a pixel point in the environmental awareness data of each frame and the acquired depth information to obtain a perception prediction result of the three-dimensional object in the environmental awareness data of each frame.
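A minimal sketch of lifting a two-dimensional detection to a three-dimensional position using the pixel-to-depth mapping mentioned above, assuming a pinhole camera model; the intrinsics and the helper name are assumptions for illustration, not details given in the patent.

import numpy as np

def lift_box_center_to_3d(box_2d, depth_map, fx, fy, cx, cy):
    """Back-project the center of a 2D detection box to a 3D point.

    box_2d:    (u_min, v_min, u_max, v_max) in pixels.
    depth_map: H x W array of depth values (meters) aligned with the image.
    fx, fy, cx, cy: pinhole intrinsics assumed known from calibration.
    """
    u = int((box_2d[0] + box_2d[2]) / 2)
    v = int((box_2d[1] + box_2d[3]) / 2)
    z = float(depth_map[v, u])        # depth sampled at the box center
    x = (u - cx) * z / fx             # lateral offset in the camera frame
    y = (v - cy) * z / fy             # vertical offset in the camera frame
    return np.array([x, y, z])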
Step S204: real-time laser point cloud data acquired by a laser radar in a vehicle environment are acquired, the real-time laser point cloud data are input into a pre-trained laser point cloud model, and a detection result output by the laser point cloud model is acquired.
In the embodiment of the present disclosure, step S204 may be implemented in any manner in each embodiment of the present disclosure, which is not limited to this embodiment, and is not described in detail.
Step S205: and acquiring an evaluation index of the automatic driving perception detection capability according to the perception prediction result, the detection result, the tracking prediction information and the tracking information.
In the embodiment of the present disclosure, step S205 may be implemented in any manner in each embodiment of the present disclosure, which is not limited to this embodiment, and is not described in detail.
Step S206: and evaluating the automatic driving perception detection capability according to the evaluation index.
In the embodiment of the present disclosure, step S206 may be implemented in any manner of each embodiment of the present disclosure, which is not limited to this embodiment, and is not described in detail.
By implementing the embodiment of the disclosure, the obtained real-time environment perception data can be processed frame by frame based on the automatic driving perception engineering simulation platform to obtain the perception prediction result of the three-dimensional target, and the automatic driving perception detection capability is evaluated by combining the detection result of the laser point cloud model. The driving perception detection capability can be rapidly and automatically evaluated, and the evaluation result can objectively reflect the detection performance of the automatic driving model on an actual road.
In one implementation, recall and/or accuracy of the three-dimensional target may be calculated based on the perception prediction result, the detection result, the tracking prediction information, and the tracking information, thereby determining an evaluation indicator of the autopilot perception detection capability. As an example, referring to fig. 3, fig. 3 is a flowchart illustrating yet another method of evaluating the autopilot awareness sensing capability according to one exemplary embodiment. The method may be performed by an electronic device, which may be a server, as an example. As shown in fig. 3, the method may include, but is not limited to, the following steps.
Step S301: and acquiring real-time environment sensing data acquired by the vehicle environment sensing sensor.
In the embodiment of the present disclosure, step S301 may be implemented in any manner in each embodiment of the present disclosure, which is not limited to this embodiment, and is not described in detail.
Step S302: and obtaining a perception prediction result according to the real-time environment perception data and the automatic driving perception engineering simulation platform.
In the embodiment of the present disclosure, step S302 may be implemented in any manner of each embodiment of the present disclosure, which is not limited to this embodiment, and is not described in detail.
Step S303: real-time laser point cloud data acquired by a laser radar in a vehicle environment are acquired, the real-time laser point cloud data are input into a pre-trained laser point cloud model, and a detection result output by the laser point cloud model is acquired.
In the embodiment of the present disclosure, step S303 may be implemented in any one of the embodiments of the present disclosure, which is not limited to this embodiment, and is not described in detail.
Step S304: and taking the detection result as a true value result.
For example, the electronic device uses the detection result output by the laser point cloud model as a true value result representing the real situation.
Step S305: and matching the three-dimensional targets in the true result and the perception prediction result based on a track association algorithm to obtain an association three-dimensional target.
For example, the electronic device calculates a motion similarity between the trajectories of three-dimensional targets in the truth result and in the perception prediction result based on a motion model, and calculates an appearance similarity between those trajectories based on an incremental linear appearance model; it also judges whether the trajectories contain track points at different positions at the same time, so as to obtain a temporal similarity between the trajectories of the different three-dimensional targets. The product of the motion similarity, the appearance similarity and the temporal similarity is determined as the similarity between two three-dimensional targets, and when the similarity is greater than a preset similarity threshold, the corresponding three-dimensional targets in the truth result and the perception prediction result are determined to be associated three-dimensional targets.
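A sketch of the association decision described above, assuming the motion, appearance and temporal similarities have already been computed as values in [0, 1]; the product-and-threshold rule follows the text, while the function name and default threshold are assumptions.

def are_associated(motion_sim: float,
                   appearance_sim: float,
                   temporal_sim: float,
                   sim_threshold: float = 0.5) -> bool:
    """Two trajectories (one from the truth result, one from the perception
    prediction result) are treated as the same associated three-dimensional
    target when the product of their similarities exceeds the threshold."""
    similarity = motion_sim * appearance_sim * temporal_sim
    return similarity > sim_threshold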
Step S306: and counting the related three-dimensional targets according to the perception prediction result, the tracking prediction information, the true value result and the tracking information so as to calculate the recall rate and/or the accuracy rate of the three-dimensional targets.
As one example, the electronic device obtains a percentage of the number of detected three-dimensional targets to the number of three-dimensional targets actually present based on the perception prediction result, the tracking prediction information, the truth result, and the tracking information to calculate a recall of the three-dimensional targets.
As another example, the electronic device obtains a percentage of the number of three-dimensional objects actually present among the number of detected three-dimensional objects based on the perception prediction result, the tracking prediction information, the truth result, and the tracking information to calculate the accuracy of the three-dimensional objects.
As yet another example, the electronic device obtains a percentage of the number of detected three-dimensional targets to the number of three-dimensional targets actually present based on the perception prediction result, the tracking prediction information, the truth result, and the tracking information to calculate a recall of the three-dimensional targets; and acquiring the percentage of the number of the three-dimensional targets actually existing in the number of the detected three-dimensional targets to calculate the accuracy of the three-dimensional targets.
In one implementation manner, the calculating the recall rate of the three-dimensional target by counting the associated three-dimensional target according to the perception prediction result, the tracking prediction information, the truth result and the tracking information may include: counting the number of the detected associated three-dimensional targets according to the perception prediction result and the tracking prediction information; counting the number of all the associated three-dimensional targets according to the true value result and tracking information; and calculating the recall rate of the three-dimensional targets according to the detected number of the associated three-dimensional targets and the number of all the associated three-dimensional targets.
For example, the electronic device counts the number of the detected associated three-dimensional targets according to the perception prediction result and the tracking prediction information; counting the number of all the associated three-dimensional targets actually existing according to the true value result and the tracking information; and calculating the percentage of the number of the detected associated three-dimensional targets to the number of all the associated three-dimensional targets to be used as the recall rate of the three-dimensional targets.
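The recall computation described here reduces to a ratio of two counts; a minimal sketch with assumed argument names:

def recall_rate(num_detected_associated: int, num_all_associated: int) -> float:
    """Recall = detected associated 3D targets / all associated 3D targets
    that actually exist according to the truth result and tracking information."""
    if num_all_associated == 0:
        return 0.0
    return num_detected_associated / num_all_associated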
In one implementation manner, the calculating the accuracy of the three-dimensional target by counting the associated three-dimensional target according to the perception prediction result, the tracking prediction information, the true value result and the tracking information may include: counting a first number of associated three-dimensional targets detected as true according to the perception prediction result and the tracking prediction information; based on the truth result and the tracking information, counting a second number of associated three-dimensional targets detected as true; and calculating the accuracy of the three-dimensional target according to the first quantity and the second quantity.
For example, the electronic device counts a first number of associated three-dimensional objects detected as true based on the perceived prediction result and the tracked prediction information; based on the truth result and the tracking information, counting a second number of associated three-dimensional targets detected as true; and calculating the percentage of the first quantity to the second quantity to be used as the accuracy rate for detecting the three-dimensional target.
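Likewise, the accuracy rate described here is the ratio of the first count to the second count; a minimal sketch with assumed argument names:

def accuracy_rate(first_number: int, second_number: int) -> float:
    """Accuracy rate = (associated 3D targets detected as true, counted from the
    perception prediction result and tracking prediction information) /
    (associated 3D targets detected as true, counted from the truth result and
    tracking information)."""
    if second_number == 0:
        return 0.0
    return first_number / second_number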
Step S307: and determining the recall rate and/or the accuracy rate of the three-dimensional target as an evaluation index of the automatic driving perception detection capability.
As one example, the electronic device determines a recall of the three-dimensional object as an evaluation indicator of the autopilot awareness detection capability.
As another example, the electronic device determines the accuracy of the three-dimensional target as an evaluation index of the autopilot perception detection capability.
As yet another example, the electronic device determines recall and accuracy of the three-dimensional target as an evaluation indicator of the autopilot awareness detection capability.
Step S308: and evaluating the automatic driving perception detection capability according to the evaluation index.
As one example, taking an electronic device to determine the recall of a three-dimensional object as an evaluation indicator of autopilot awareness detection capability as an example. The electronic device determines that the automatic driving perception detection capability is qualified in response to the recall being greater than or equal to the recall threshold; alternatively, in response to the recall being less than the recall threshold, determining that the autopilot sensory detection capability is not acceptable.
As another example, taking as an example the electronic device determines the accuracy of the three-dimensional object as an evaluation index of the autopilot perception detection capability. The electronic equipment responds to the fact that the accuracy is larger than or equal to an accuracy threshold value, and the fact that the automatic driving perception detection capability is qualified is determined; or, in response to the accuracy rate being less than the accuracy rate threshold, determining that the autopilot sensing capability is not acceptable.
As yet another example, taking as an example the electronic device determines recall and accuracy of the three-dimensional object as an evaluation indicator of the autopilot awareness detection capability. And the electronic equipment responds to the fact that the recall rate is larger than or equal to the recall rate threshold value and the accuracy rate is larger than or equal to the accuracy rate threshold value, and determines that the automatic driving perception detection capability is qualified, otherwise, determines that the automatic driving perception detection capability is not qualified.
By implementing the embodiment of the disclosure, the perception prediction result can be obtained according to the real-time environment perception data acquired by the vehicle environment perception sensor and the automatic driving perception engineering simulation platform, so that the recall rate and/or the accuracy rate of the three-dimensional target are calculated based on the perception prediction result, the detection result, the tracking prediction information and the tracking information, and the evaluation index of the automatic driving perception detection capability is determined. The driving perception detection capability can be rapidly and automatically evaluated, and the evaluation result can objectively reflect the detection performance of the automatic driving model on an actual road.
Referring to fig. 4, fig. 4 is a flowchart illustrating an evaluation of autopilot awareness detection capability in accordance with one exemplary embodiment. As shown in fig. 4, the evaluation flow first obtains environment sensing data, such as video data collected by an environment sensing sensor of a vehicle, and inputs the data into the simulation platform for continuous-frame detection to obtain a continuous-frame detection result. A tracked (track) three-dimensional result is then obtained for each frame based on track tracking, and association matching is performed against the true value tracking (track) result to obtain the number of three-dimensional targets associated between the tracked three-dimensional result and the true value tracking result. Combined with the number of three-dimensional targets actually existing in the true value tracking result, the recall rate and/or accuracy rate of the automatic driving perception detection are calculated, and the automatic driving perception detection capability is evaluated based on them.
Referring to fig. 5, fig. 5 is a block diagram of an apparatus for evaluating an autopilot awareness sensing capability according to one exemplary embodiment. As shown in fig. 5, the apparatus 500 includes: an acquisition module 501, configured to acquire real-time environmental awareness data acquired by a vehicle environmental awareness sensor; the first processing module 502 is configured to obtain a perception prediction result according to the real-time environment perception data and the autopilot perception engineering simulation platform; the perception prediction result comprises tracking prediction information; the second processing module 503 is configured to obtain real-time laser point cloud data collected by the vehicle environmental lidar, input the real-time laser point cloud data to a pre-trained laser point cloud model, and obtain a detection result output by the laser point cloud model; the detection result comprises tracking information; a third processing module 504, configured to obtain an evaluation index of the autopilot sensing capability according to the sensing prediction result, the detection result, the tracking prediction information, and the tracking information; and the evaluation module 505 is configured to evaluate the autopilot perception detection capability according to the evaluation index.
In one implementation, the first processing module 502 is specifically configured to: inputting the real-time environment sensing data into an automatic driving sensing engineering simulation platform frame by frame to obtain a detection result of each frame of environment sensing data; tracking according to the detection result of the environmental perception data of each frame to obtain the perception prediction result of the three-dimensional target.
In one implementation, the third processing module 504 is specifically configured to: taking the detection result as a true value result; based on a track association algorithm, matching three-dimensional targets in the true result and the perception prediction result to obtain an associated three-dimensional target; counting the related three-dimensional targets according to the perception prediction result, the tracking prediction information, the truth result and the tracking information so as to calculate recall rate and/or accuracy rate of the three-dimensional targets; and determining the recall rate and/or the accuracy rate of the three-dimensional target as an evaluation index of the automatic driving perception detection capability.
In an alternative implementation, the third processing module 504 is specifically configured to: counting the number of the detected associated three-dimensional targets according to the perception prediction result and the tracking prediction information; counting the number of all the associated three-dimensional targets according to the true value result and tracking information; and calculating the recall rate of the three-dimensional targets according to the detected number of the associated three-dimensional targets and the number of all the associated three-dimensional targets.
Optionally, the third processing module 504 is specifically configured to: counting a first number of associated three-dimensional targets detected as true according to the perception prediction result and the tracking prediction information; based on the truth result and the tracking information, counting a second number of associated three-dimensional targets detected as true; and calculating the accuracy of the three-dimensional target according to the first quantity and the second quantity.
Through the device of the embodiment of the disclosure, the perception prediction result can be obtained according to the real-time environment perception data acquired by the vehicle environment perception sensor and the automatic driving perception engineering simulation platform, and the automatic driving perception detection capability is evaluated by combining the detection result of the laser point cloud model. The driving perception detection capability can be rapidly and automatically evaluated, and the evaluation result can objectively reflect the detection performance of the automatic driving model on an actual road.
The specific manner in which the various modules of the apparatus in the above embodiments perform their operations has been described in detail in the embodiments of the method, and will not be elaborated here.
Referring to fig. 6, fig. 6 is a schematic diagram of an electronic device according to an exemplary embodiment. For example, the electronic device 600 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, a wearable device, and the like.
Referring to fig. 6, an electronic device 600 may include one or more of the following components: a processing component 602, a memory 604, a power component 606, a multimedia component 608, an audio component 610, an input/output (I/O) interface 612, a sensor component 614, and a communication component 616.
The processing component 602 generally controls overall operation of the electronic device 600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 602 may include one or more processors 620 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 602 can include one or more modules that facilitate interaction between the processing component 602 and other components. For example, the processing component 602 may include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operations at the electronic device 600. Examples of such data include instructions for any application or method operating on the electronic device 600, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 604 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 606 provides power to the various components of the electronic device 600. The power supply components 606 can include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 600.
The multimedia component 608 comprises a touch-sensitive display screen providing an output interface between the electronic device 600 and a user. In some embodiments, the touch display screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 608 includes a front camera and/or a rear camera. When the electronic device 600 is in an operational mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 610 is configured to output and/or input audio signals. For example, the audio component 610 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 600 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 604 or transmitted via the communication component 616. In some embodiments, audio component 610 further includes a speaker for outputting audio signals.
The input/output interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 614 includes one or more sensors for providing status assessment of various aspects of the electronic device 600. For example, the sensor assembly 614 may detect an on/off state of the electronic device 600 and a relative positioning of components, such as a display and a keypad of the electronic device 600; the sensor assembly 614 may also detect a change in position of the electronic device 600 or of a component of the electronic device 600, the presence or absence of a user's contact with the electronic device 600, an orientation or acceleration/deceleration of the electronic device 600, and a change in temperature of the electronic device 600. The sensor assembly 614 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact. The sensor assembly 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 614 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 616 is configured to facilitate wired or wireless communication between the electronic device 600 and other devices. The electronic device 600 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 616 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 616 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 600 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described in any one of the embodiments above.
The present disclosure also provides a readable storage medium having instructions stored thereon which, when executed by a computer, perform the functions of any of the method embodiments described above.
The present disclosure also provides a computer program product which, when executed by a computer, performs the functions of any of the method embodiments described above.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be realized in whole or in part in the form of a computer program product. The computer program product comprises one or more computer programs. When the computer program is loaded and executed on a computer, the flows or functions described in accordance with the embodiments of the present disclosure are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer program may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a high-density digital video disc (DVD)), or a semiconductor medium (e.g., a solid state disk (SSD)), or the like.
Those of ordinary skill in the art will appreciate that: the various numbers of first, second, etc. referred to in this disclosure are merely for ease of description and are not intended to limit the scope of embodiments of this disclosure, nor to indicate sequencing.
"At least one" in the present disclosure may also be described as one or more, and "a plurality" may be two, three, four or more, which is not limited by the present disclosure. In the embodiments of the present disclosure, for a class of technical features, the individual features are distinguished by "first", "second", "third", "A", "B", "C", and "D", and the technical features so described carry no implication of sequence or order of magnitude.
"Predefined" in this disclosure may be understood as defined, defined in advance, stored, pre-negotiated, pre-configured, solidified, or burned in advance.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
The foregoing is merely specific embodiments of the disclosure, but the protection scope of the disclosure is not limited thereto; any change or substitution that can be readily conceived by a person skilled in the art within the technical scope of the disclosure shall be covered by the protection scope of the disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. An automatic driving perception detection capability evaluating method is characterized by comprising the following steps:
acquiring real-time environment sensing data acquired by a vehicle environment sensing sensor;
obtaining a perception prediction result of a target which is output by the automatic driving perception engineering simulation platform and affects automatic driving according to the real-time environment perception data and the automatic driving perception engineering simulation platform; the perception prediction result comprises tracking prediction information of a target;
acquiring real-time laser point cloud data acquired by the vehicle environment laser radar, inputting the real-time laser point cloud data into a pre-trained laser point cloud model, and acquiring a detection result of a real existing target which is output by the laser point cloud model and affects automatic driving; the detection result comprises tracking information of the target;
acquiring an evaluation index of the automatic driving perception detection capability according to the perception prediction result, the detection result, the tracking prediction information and the tracking information;
evaluating the automatic driving perception detection capability according to the evaluation index;
the obtaining the evaluation index of the autopilot perception detection capability according to the perception prediction result, the detection result, the tracking prediction information and the tracking information comprises the following steps:
taking the detection result as a true value result;
based on a track association algorithm, matching the three-dimensional targets in the true result and the perception prediction result to obtain an associated three-dimensional target;
counting the associated three-dimensional targets according to the perception prediction result, the tracking prediction information, the truth result and the tracking information to calculate recall rate and/or accuracy rate of the three-dimensional targets;
and determining the recall rate and/or the accuracy rate of the three-dimensional target as an evaluation index of the automatic driving perception detection capability.
2. The method of claim 1, wherein the obtaining a perception prediction result according to the real-time environment perception data and an autopilot perception engineering simulation platform comprises:
inputting the real-time environment sensing data into the automatic driving sensing engineering simulation platform frame by frame to obtain a detection result of each frame of environment sensing data;
tracking according to the detection result of the environmental perception data of each frame to obtain a perception prediction result of the three-dimensional target.
3. The method of claim 1, wherein said counting the associated three-dimensional objects based on the perceptual prediction result, the tracking prediction information, the truth result, and the tracking information to calculate a recall of three-dimensional objects comprises:
counting the number of the detected associated three-dimensional targets according to the perception prediction result and the tracking prediction information;
counting the number of all the associated three-dimensional targets according to the truth result and the tracking information;
and calculating the recall rate of the three-dimensional targets according to the detected number of the associated three-dimensional targets and the number of all the associated three-dimensional targets.
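Read conventionally, the recall rate of claim 3 is the fraction of truth targets (from the laser point cloud detection result) that the perception prediction also found. A minimal sketch, reusing the hypothetical associate_tracks helper from above; the claim does not spell out its exact counting rules, so this is only one plausible reading.

def recall_of_targets(truth, pairs):
    """Recall = number of associated (i.e. detected) 3D targets
              / number of 3D targets in the truth result.

    truth: list of Box3D from the laser point cloud model (ground truth).
    pairs: (truth_index, predicted_index) matches from track association.
    """
    return len(pairs) / len(truth) if truth else 0.0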
4. The method of claim 3, wherein the counting of the associated three-dimensional targets according to the perception prediction result, the tracking prediction information, the truth result and the tracking information to calculate the accuracy rate of the three-dimensional targets comprises:
according to the perception prediction result and the tracking prediction information, counting a first number of associated three-dimensional targets which are detected as true;
based on the truth result and the tracking information, counting a second number of associated three-dimensional targets detected as true;
and calculating the accuracy of the three-dimensional target according to the first quantity and the second quantity.
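The "accuracy rate" of claim 4 is most naturally read as precision: of all targets reported in the perception prediction result, how many correspond to a truth target. The sketch and the tiny worked example below reuse the hypothetical helpers defined earlier and use made-up coordinates; none of the numbers come from the patent.

def accuracy_of_targets(predicted, pairs):
    """Accuracy (precision) = associated predicted targets / all predicted targets."""
    return len(pairs) / len(predicted) if predicted else 0.0

# Made-up example: two truth targets, two predictions, one of which matches.
truth = [Box3D(1, 0.0, 0.0, 0.0), Box3D(2, 10.0, 0.0, 0.0)]
pred = [Box3D(7, 0.3, 0.1, 0.0), Box3D(8, 30.0, 0.0, 0.0)]
pairs = associate_tracks(truth, pred)
print(recall_of_targets(truth, pairs))      # 0.5: one of two truth targets was found
print(accuracy_of_targets(pred, pairs))     # 0.5: one of two predictions matched truth

Either rate, or both together, can then serve as the evaluation index used in claim 1 to grade the automatic driving perception detection capability.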
5. An apparatus for evaluating automatic driving perception detection capability, comprising:
the acquisition module is used for acquiring real-time environment sensing data acquired by the vehicle environment sensing sensor;
the first processing module is used for obtaining, according to the real-time environment perception data and an automatic driving perception engineering simulation platform, a perception prediction result, output by the automatic driving perception engineering simulation platform, for a target affecting automatic driving; the perception prediction result comprises tracking prediction information of the target;
the second processing module is used for acquiring real-time laser point cloud data collected by a vehicle environment laser radar, inputting the real-time laser point cloud data into a pre-trained laser point cloud model, and acquiring a detection result, output by the laser point cloud model, for a truly existing target affecting automatic driving; the detection result comprises tracking information of the target;
the third processing module is used for acquiring an evaluation index of the automatic driving perception detection capability according to the perception prediction result, the detection result, the tracking prediction information and the tracking information;
the evaluation module is used for evaluating the automatic driving perception detection capability according to the evaluation index;
the third processing module is specifically configured to:
taking the detection result as a truth result;
matching, based on a track association algorithm, three-dimensional targets in the truth result and in the perception prediction result to obtain associated three-dimensional targets;
counting the associated three-dimensional targets according to the perception prediction result, the tracking prediction information, the truth result and the tracking information, so as to calculate a recall rate and/or an accuracy rate of the three-dimensional targets;
and determining the recall rate and/or the accuracy rate of the three-dimensional targets as the evaluation index of the automatic driving perception detection capability.
6. The apparatus of claim 5, wherein the first processing module is specifically configured to:
inputting the real-time environment sensing data into the automatic driving sensing engineering simulation platform frame by frame to obtain a detection result of each frame of environment sensing data;
tracking according to the detection result of the environmental perception data of each frame to obtain a perception prediction result of the three-dimensional target.
7. The apparatus of claim 5, wherein the third processing module is specifically configured to:
counting the number of the detected associated three-dimensional targets according to the perception prediction result and the tracking prediction information;
counting the number of all the associated three-dimensional targets according to the truth result and the tracking information;
and calculating the recall rate of the three-dimensional targets according to the detected number of the associated three-dimensional targets and the number of all the associated three-dimensional targets.
8. The apparatus of claim 7, wherein the third processing module is specifically configured to:
according to the perception prediction result and the tracking prediction information, counting a first number of associated three-dimensional targets which are detected as true;
based on the truth result and the tracking information, counting a second number of associated three-dimensional targets detected as true;
and calculating the accuracy of the three-dimensional target according to the first quantity and the second quantity.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 4.
10. A computer readable storage medium storing instructions which, when executed, cause a method as claimed in any one of claims 1 to 4 to be implemented.
CN202310127418.8A 2023-02-17 2023-02-17 Evaluation method and device for automatic driving perception detection capability and electronic equipment Active CN115907566B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310127418.8A CN115907566B (en) 2023-02-17 2023-02-17 Evaluation method and device for automatic driving perception detection capability and electronic equipment

Publications (2)

Publication Number Publication Date
CN115907566A (en) 2023-04-04
CN115907566B (en) 2023-05-30

Family

ID=85737492

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310127418.8A Active CN115907566B (en) 2023-02-17 2023-02-17 Evaluation method and device for automatic driving perception detection capability and electronic equipment

Country Status (1)

Country Link
CN (1) CN115907566B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117591847B (en) * 2024-01-19 2024-05-07 福思(杭州)智能科技有限公司 Model pointing evaluating method and device based on vehicle condition data

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111983935A (en) * 2020-08-19 2020-11-24 北京京东叁佰陆拾度电子商务有限公司 Performance evaluation method and device
CN114384547A (en) * 2022-01-10 2022-04-22 清华珠三角研究院 Radar sensor model-based fidelity detection evaluation method and system
CN115147796A (en) * 2022-07-14 2022-10-04 小米汽车科技有限公司 Method and device for evaluating target recognition algorithm, storage medium and vehicle

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107063713B (en) * 2017-04-27 2020-03-10 百度在线网络技术(北京)有限公司 Test method and device applied to unmanned automobile

Also Published As

Publication number Publication date
CN115907566A (en) 2023-04-04

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant