CN113155173A - Perception performance evaluation method and device, electronic device and storage medium - Google Patents


Info

Publication number
CN113155173A
Authority
CN
China
Prior art keywords
standard
test
track attribute
data
perception
Prior art date
Legal status
Granted
Application number
CN202110614827.1A
Other languages
Chinese (zh)
Other versions
CN113155173B (en)
Inventor
吴孟
王怀静
孙雅鸽
李廷飞
严达桂
段帆利
Current Assignee
Freetech Intelligent Systems Co Ltd
Original Assignee
Freetech Intelligent Systems Co Ltd
Priority date
Filing date
Publication date
Application filed by Freetech Intelligent Systems Co Ltd
Priority to CN202110614827.1A
Publication of CN113155173A
Application granted
Publication of CN113155173B
Legal status: Active

Classifications

    • G01D18/00: Testing or calibrating apparatus or arrangements provided for in groups G01D1/00 to G01D15/00
    • G01S7/40: Means for monitoring or calibrating, in radar systems (G01S13/00)
    • G01S7/497: Means for monitoring or calibrating, in lidar systems (G01S17/00)
    • G01S7/52004: Means for monitoring or calibrating, in sonar systems (G01S15/00)
    • H04N17/002: Diagnosis, testing or measuring for television cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Traffic Control Systems (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The application relates to a perception performance evaluation method and device, an electronic device, and a storage medium. The perception performance evaluation method comprises the following steps: acquiring standard sensor data and test sensor data; acquiring standard track attribute parameters and test track attribute parameters corresponding to perception targets according to the standard sensor data and the test sensor data; comparing the standard track attribute parameters and the test track attribute parameters corresponding to each perception target to obtain track deviation data; and classifying the track deviation data according to preset classification information and acquiring a performance evaluation result based on the classification result. The method and device solve the problem that performance defects of a perception system in a specific detection scenario are difficult to obtain, and achieve the technical effect of accurately obtaining the performance defects of the perception system in a specific detection scenario.

Description

Perception performance evaluation method and device, electronic device and storage medium
Technical Field
The application relates to the field of intelligent driving, and in particular to a sensor perception performance evaluation method and device, an electronic device, and a storage medium.
Background
Environmental perception plays a crucial role at every level of intelligent driving, and perception performance directly affects the safety and comfort of automatic driving. An accurate environment perception result can greatly reduce the complexity of the planning and control algorithms and improve the overall performance of intelligent driving. With the development of intelligent driving technology, cameras, laser radars, millimeter-wave radars, ultrasonic radars, multi-sensor fusion, and other schemes have been widely applied in environment sensing systems.
However, most existing performance analysis methods for driving assistance systems perform an overall evaluation between the full set of results output by the sensing system and the full set of annotated results, and score or grade the sensing capability of the system as a whole, which makes it difficult to identify performance defects of the sensing system in a specific detection scenario.
No effective solution has yet been proposed for the technical problem in the related art that performance defects of a perception system in a specific detection scenario are difficult to obtain.
Disclosure of Invention
The embodiments provide a perception performance evaluation method and device, an electronic device, and a storage medium, to solve the problem in the related art that performance defects of a perception system in a specific detection scenario are difficult to obtain.
In a first aspect, in this embodiment, a method for evaluating perceptual performance is provided, including:
acquiring standard sensor data and test sensor data;
acquiring a standard track attribute parameter and a test track attribute parameter corresponding to a perception target according to the standard sensor data and the test sensor data;
comparing the standard track attribute parameters and the test track attribute parameters corresponding to each perception target to obtain track deviation data;
and classifying the track deviation data according to preset classification information, and acquiring a performance evaluation result based on the classification result, wherein the preset classification information comprises one or more of target category information, target position information and scene information.
In one embodiment, the obtaining a standard track attribute parameter and a test track attribute parameter corresponding to a sensing target according to the standard sensor data and the test sensor data includes: acquiring a preprocessing sensing result according to the standard sensor data; acquiring a standard perception algorithm, wherein the standard perception algorithm comprises an AI perception algorithm, an offline tracking algorithm and a multi-sensor fusion algorithm; acquiring the standard track attribute parameters of the perception target according to the standard perception algorithm, the preprocessing sensing result and the standard sensor data; and acquiring the test track attribute parameters of the perception target according to the test sensor data.
In one embodiment, the obtaining a standard track attribute parameter and a test track attribute parameter corresponding to a sensing target according to the standard sensor data and the test sensor data includes: acquiring standard track attribute parameters according to the standard sensor data, and acquiring test track attribute parameters according to the test sensor data; and matching the standard track attribute parameters with the test track attribute parameters according to an association algorithm to obtain the standard track attribute parameters and the test track attribute parameters corresponding to each perception target.
In one embodiment, the matching the standard track attribute parameters and the test track attribute parameters according to an association algorithm to obtain the standard track attribute parameters and the test track attribute parameters corresponding to each sensing target includes: acquiring a standard target identification frame and a test target identification frame corresponding to the standard track attribute parameters and the test track attribute parameters; acquiring a distance intersection ratio according to the ratio of the Euclidean distance between the standard target identification frame and the central point of the test target identification frame to the diagonal distance of a minimum closure area, wherein the minimum closure area represents a minimum circumscribed rectangular area comprising the standard target identification frame and the test target identification frame; and comparing the distance intersection ratio with a distance threshold, and if the distance intersection ratio is less than or equal to the distance threshold, obtaining the standard track attribute parameter and the test track attribute parameter corresponding to the perception target.
In one embodiment, the obtaining a standard track attribute parameter and a test track attribute parameter corresponding to a sensing target according to the standard sensor data and the test sensor data includes: generating a corresponding standard track characteristic matrix and a corresponding test track characteristic matrix according to the standard track attribute parameters and the test track attribute parameters; and generating a corresponding visual track image of the perception target according to the standard track characteristic matrix and the test track characteristic matrix.
In one embodiment, the comparing the standard track attribute parameter and the test track attribute parameter corresponding to each sensing target to obtain the track deviation data includes: calculating the difference value of the standard track attribute parameter and the test track attribute parameter corresponding to the perception target to obtain first deviation data; and acquiring a deviation threshold, comparing the first deviation data with the deviation threshold, and taking the first deviation data as track deviation data if the first deviation data is greater than or equal to the deviation threshold.
In one embodiment, the classifying the track deviation data according to the preset classification information further includes: acquiring a scene division rule, wherein the scene division rule comprises a position change rule and a speed change rule; acquiring current scene information according to the scene division rule and the standard track attribute parameters and the test track attribute parameters corresponding to the perception target; and classifying the track deviation data corresponding to the perception target according to the current scene information.
In a second aspect, in this embodiment, there is provided a perceptual performance evaluation apparatus, including:
the data acquisition module is used for acquiring standard sensor data and test sensor data;
the data processing module is used for acquiring a standard track attribute parameter and a test track attribute parameter corresponding to a perception target according to the standard sensor data and the test sensor data;
the evaluation module is used for comparing the standard track attribute parameters and the test track attribute parameters corresponding to each perception target to obtain track deviation data;
and the statistical module is used for classifying the track deviation data according to preset classification information and acquiring a performance evaluation result based on the classification result, wherein the preset classification information comprises one or more of target category information, target position information and scene information.
In a third aspect, in this embodiment, an electronic device is provided, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the method for evaluating perceptual performance according to the first aspect is implemented.
In a fourth aspect, in this embodiment, a storage medium is provided, on which a computer program is stored, and the computer program, when executed by a processor, implements the perceptual performance evaluation method described in the first aspect.
Compared with the related art, the perception performance evaluation method provided by this embodiment acquires standard sensor data and test sensor data; acquires standard track attribute parameters and test track attribute parameters corresponding to perception targets according to the standard sensor data and the test sensor data; compares the standard track attribute parameters and the test track attribute parameters corresponding to each perception target to obtain track deviation data; and classifies the track deviation data according to preset classification information and acquires a performance evaluation result based on the classification result, wherein the preset classification information comprises one or more of target category information, target position information and scene information. This solves the problem that performance defects of the sensing system in a specific detection scenario are difficult to obtain, achieves the technical effect of accurately obtaining such performance defects, and provides data support for later improvement of the perception algorithm.
The details of one or more embodiments of the application are set forth in the accompanying drawings and the description below to provide a more thorough understanding of the application.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a block diagram of a hardware structure of a terminal of the perceptual performance evaluation method according to the embodiment;
FIG. 2 is a flowchart of a perceptual performance evaluation method of the present embodiment;
FIG. 3 is a schematic structural diagram of a perception evaluation system according to an embodiment of the application;
FIG. 4 is a schematic diagram of distributed computing of stored data according to an embodiment of the present application;
FIG. 5 is a graph illustrating statistical results according to an embodiment of the present application;
FIG. 6 is a schematic diagram of track deviation data capture according to an embodiment of the present application;
FIG. 7 is a graph of track information generated from a target vehicle;
FIG. 8 is a graph of track information generated from another target vehicle;
FIG. 9 is a schematic diagram of a data processing structure of a perception evaluation method according to an embodiment of the present application;
fig. 10 is a block diagram of the perceptual performance evaluation apparatus of the present embodiment.
Detailed Description
For a clearer understanding of the objects, aspects and advantages of the present application, reference is made to the following description and accompanying drawings.
Unless defined otherwise, technical or scientific terms used herein shall have the same general meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The use of the terms "a", "an", "the", and similar referents in this application does not denote a limitation of quantity, whether singular or plural. The terms "comprises," "comprising," "has," "having," and any variations thereof, as used in this application, are intended to cover non-exclusive inclusions; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or modules, but may include other steps or modules (elements) not listed or inherent to such process, method, article, or apparatus. References in this application to "connected," "coupled," and the like are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. Reference to "a plurality" in this application means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In general, the character "/" indicates an "or" relationship between the associated objects. The terms "first," "second," "third," and the like in this application are used to distinguish between similar items and do not necessarily describe a particular sequential or chronological order.
The method embodiments provided in the present embodiment may be executed in a terminal, a computer, or a similar computing device. For example, the method is executed on a terminal, and fig. 1 is a block diagram of a hardware structure of the terminal according to the perceptual performance evaluation method of this embodiment. As shown in fig. 1, the terminal may include one or more processors 102 (only one shown in fig. 1) and a memory 104 for storing data, wherein the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA. The terminal may also include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those of ordinary skill in the art that the structure shown in fig. 1 is merely an illustration and is not intended to limit the structure of the terminal described above. For example, the terminal may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 may be used to store a computer program, for example, a software program and a module of application software, such as a computer program corresponding to the perceptual performance evaluation method in the embodiment, and the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, so as to implement the method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. The network described above includes a wireless network provided by a communication provider of the terminal. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
The automatic driving technology senses the driving environment through sensors mounted on a vehicle, and recognizes surrounding vehicles, pedestrians, obstacles, lane lines, traffic signs, and the like. Because automatic driving places high requirements on environment perception capability and a single sensor can hardly meet them, an automatic driving vehicle needs to be equipped with multiple sensors, such as cameras, laser radars, millimeter-wave radars, and ultrasonic radars, for environment perception. The performance of the sensing system profoundly affects the safety and comfort of automatic driving. Existing evaluation approaches for sensing systems generally test only the basic performance of the sensors and assign a score or rating based on the test results; such scores or ratings cannot accurately mine the defects of the sensing system, and therefore cannot provide precise improvement directions and data support for subsequent improvement of the sensing system.
In this embodiment, a method for evaluating perceptual performance is provided, and fig. 2 is a flowchart of the method for evaluating perceptual performance of this embodiment, as shown in fig. 2, the flowchart includes the following steps:
step S201, standard sensor data and test sensor data are acquired.
Specifically, the standard sensor data and the test sensor data may be vehicle-mounted sensor data acquired in real time, or sensor data stored in a database. The sensors include one or more of a camera, a millimeter-wave radar, a laser radar, and an ultrasonic radar, and may also be other sensors applied in the intelligent driving field, which is not specifically limited in this application. Preferably, the sensors providing the true values and the sensors of the system under test are deployed simultaneously on the vehicle. A true value is the real value of a measured quantity under given temporal and spatial conditions. The data collected by the sensors providing the true values is the standard sensor data, and the data collected by the sensors of the system under test is the test sensor data.
And S202, acquiring a standard track attribute parameter and a test track attribute parameter corresponding to the perception target according to the standard sensor data and the test sensor data.
Specifically, by analyzing and processing the standard sensor data and the test sensor data, the perception targets sensed during the vehicle test and their corresponding track attribute parameters can be obtained. Track attribute parameters are parameters related to the spatio-temporal position of a perception target, such as time parameters, position parameters, speed parameters, and acceleration parameters. Further, the position parameter may be a three-dimensional parameter comprising lateral position, longitudinal position, and height; likewise, the speed and acceleration parameters may be three-dimensional. In practical application scenarios, to limit the computational load, the track attribute parameters may be restricted to the two-dimensional plane: the position parameters are the lateral and longitudinal positions of the perception target, the speed parameters are its lateral and longitudinal speeds, and the acceleration parameters are its lateral and longitudinal accelerations. A perception target may be a vehicle, a pedestrian, an obstacle, a lane line, a traffic sign, and the like; further, a vehicle may be a car, a truck, a two-wheeler, and the like, all of which can be sensed by the perception system. Through an association algorithm, the standard track parameters and test track parameters corresponding to similar or identical perception targets can be matched or fused, so that the standard track parameters and the test track parameters corresponding to the same perception target are obtained.
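As an illustrative sketch, and not a structure defined by the patent, the two-dimensional track attribute parameters described above might be organized as follows; all field names and units are assumptions:

```python
from dataclasses import dataclass

@dataclass
class TrackPoint:
    """One time sample of a perception target's track in the ego-vehicle frame."""
    t: float   # time [s]
    px: float  # longitudinal position [m]
    py: float  # lateral position [m]
    vx: float  # longitudinal speed [m/s]
    vy: float  # lateral speed [m/s]
    ax: float  # longitudinal acceleration [m/s^2]
    ay: float  # lateral acceleration [m/s^2]

@dataclass
class Track:
    target_id: int            # identifier assigned by the perception system
    target_class: str         # e.g. "car", "truck", "two-wheeler", "pedestrian"
    points: list[TrackPoint]  # samples ordered by time
```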
Step S203, comparing the standard track attribute parameters and the test track attribute parameters corresponding to each perception target to obtain track deviation data.
Specifically, the precision deviation of the standard track attribute parameters and the test track attribute parameters corresponding to each perception target is calculated through a comparison algorithm, and track deviation data is obtained.
And S204, classifying the track deviation data according to preset classification information, and acquiring a performance evaluation result based on a classification result.
Specifically, preset classification information is obtained, where the preset classification information includes one or more of target category information, target position information, and scene information. The target category refers to the class of the perception target, such as pedestrian, car, truck, or motorcycle. The position information refers to preset position intervals in the current coordinate system of the ego vehicle; for example, when only the longitudinal distance is considered, the track deviation data can be counted over five intervals of longitudinal distance between the perception target and the ego vehicle (0-10 meters, 10-30 meters, 30-60 meters, 60-90 meters, and more than 90 meters), thereby realizing classified statistics of the track deviation data. The scene information may be a semantic label predefined in the configuration information, such as a daytime highway scene: when sensor data is acquired at the vehicle end, a semantic label is added to the data to indicate the current acquisition scene, and the classification can then be performed directly on this label when the track deviation data is classified. In another embodiment, a scene recognition rule may be predefined; for example, when the track of a perception target shows a large change in lateral position, indicating that the target is cutting into the current lane, the track deviation data of that target can be assigned to the lane cut-in scene according to the data characteristics of the track attribute parameters and the predefined rule.
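A minimal sketch of this classification step, assuming hypothetical record fields, bin edges, and scene labels:

```python
import bisect
from collections import defaultdict

# Bin edges for the five longitudinal-distance intervals named above.
DISTANCE_EDGES = [10.0, 30.0, 60.0, 90.0]  # meters
DISTANCE_LABELS = ["0-10m", "10-30m", "30-60m", "60-90m", ">90m"]

def distance_bin(longitudinal_distance: float) -> str:
    """Map a target's longitudinal distance to its preset interval."""
    return DISTANCE_LABELS[bisect.bisect_right(DISTANCE_EDGES, longitudinal_distance)]

def group_deviations(records: list[dict]) -> dict:
    """Group track-deviation records by (target class, distance bin, scene label)."""
    groups = defaultdict(list)
    for r in records:
        key = (r["target_class"],
               distance_bin(r["longitudinal_distance"]),
               r.get("scene_label", "unknown"))
        groups[key].append(r["deviation"])
    return groups

records = [
    {"target_class": "car", "longitudinal_distance": 25.0,
     "scene_label": "daytime-highway", "deviation": 0.8},
    {"target_class": "pedestrian", "longitudinal_distance": 8.0,
     "scene_label": "daytime-highway", "deviation": 0.3},
]
print(group_deviations(records))
# {('car', '10-30m', 'daytime-highway'): [0.8],
#  ('pedestrian', '0-10m', 'daytime-highway'): [0.3]}
```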
Through the above steps, the perception performance evaluation method provided by this application obtains perception targets and their corresponding standard track attribute parameters and test track attribute parameters based on the sensor data acquired at the vehicle end, where the standard track attribute parameters are the true-value track attribute parameters. The deviation between the true-value track and the test track is calculated from the true-value and test track attribute parameters to obtain deviation data; the deviation data is then classified according to the corresponding target category, target position, and scene information, and the track deviation data between the true-value track and the test track under each preset classification is output, thereby realizing automatic classified output of track deviation data. The method can accurately output deviation data together with the category it belongs to and automatically mines perception performance defects, so testers no longer need to manually analyze complex data based on professional knowledge, which improves the data utilization rate of the evaluation results.
In one embodiment, the obtaining a standard track attribute parameter and a test track attribute parameter corresponding to a sensing target according to the standard sensor data and the test sensor data includes: acquiring a preprocessing sensing result according to the standard sensor data; acquiring a standard perception algorithm, wherein the standard perception algorithm comprises an AI perception algorithm, an offline tracking algorithm and a multi-sensor fusion algorithm; acquiring the standard track attribute parameters of the perception target according to the standard perception algorithm, the preprocessing perception result and the standard sensor data; and acquiring the test track attribute parameters of the perception target according to the test sensor data.
Specifically, fig. 3 is a schematic structural diagram of a perception evaluation system according to an embodiment of the present application. As shown in fig. 3, the perception evaluation system includes a vehicle end 310, a data center 320, and a server end 330, where the vehicle end 310 is a vehicle carrying the true-value system sensors and the sensors of the system under test, referred to as the true-value vehicle. The sensors mounted on the true-value vehicle include a laser radar, a millimeter-wave radar, a camera, an ultrasonic radar, and the like. The true-value vehicle can collect data on a closed road or a public road. All true-value system sensors are connected to an industrial personal computer at the vehicle end 310. The raw true-value sensor data collected by the true-value system sensors is sent to the industrial personal computer, which does not directly generate a true-value sensing result but runs part of the perception algorithms to process the raw true-value sensor data into a preprocessed sensing result. The sensing result includes a fusion sensing result of the sensor data, that is, the track attribute parameters of a perception target obtained from the data of multiple sensors. For the test sensor data acquired by the sensors of the system under test, since practical intelligent driving applications place high real-time requirements on the sensing system, the test sensing result can be produced directly by the industrial personal computer. The industrial personal computer uploads the preprocessed sensing result, the raw true-value sensor data, the test sensing result of the system under test, and the test sensor data to the data center 320. The data center may be deployed in the cloud, using cloud storage and cloud computing services, or on a local server; local storage may be a local data server or a removable storage medium such as a removable hard disk. The server end 330 can retrieve the data stored in the data center 320 and perform further processing. For example, the final true-value sensing result, i.e., the standard sensing result, may be generated at the server end 330 based on the preprocessed sensing result, the true-value sensor data, and a standard processing algorithm. The true-value sensing result is thus obtained not by real-time computation on the industrial personal computer but offline: the real-time data acquired by the vehicle-mounted true-value sensor system is stored in the data center 320, and the sensing result is obtained by extracting that stored data and combining multiple perception algorithms, which greatly improves the reliability of the standard sensing result. In one embodiment, when the standard sensing result is computed, identification and tracking are performed based on both historical data and future data relative to the current frame, which yields a higher recognition rate and recognition accuracy and further strengthens the reliability of the standard sensing result. Moreover, because the data center stores the true-value sensor data and the test sensor data, after the algorithm of the sensing system is upgraded, the sensor data collected by the actual vehicle can be reused many times, greatly saving the cost of recording sensor data.
In one embodiment, the data stored in the data center 320 can be accessed and adapted after being replayed to the server end 330; the replayed data includes the standard sensor data, the test sensor data, the standard sensing result, and the test sensing result. In the prior art, a sensing system generally processes data with serial computation, in which tasks are not split and one task occupies one processing resource; when the data set is large, serial computation requires a long time to generate the final data analysis report. In this application, a time tag or other semantic tag is added to both the standard sensor data and the test sensor data at acquisition time. A semantic tag is a label set manually for the real-vehicle test environment, for example labels matching the actual road conditions set by the testers, such as daytime highway, rainy nighttime highway, or clear nighttime urban road. The server end 330 can divide the replayed data acquired from the data center 320 into batches to obtain multiple groups of shard data, where "batch" refers to batch processing, that is, processing objects in batches. The divided data shards are then processed with multi-CPU parallel computation or multi-server distributed computation, and finally the results of all shard data are counted and the final statistical result is output. In parallel computation, the processing resources occupied by different subtasks come from the same large processing resource; distributed computation is a special kind of parallel computation in which a large computing task is divided into subtasks that occupy different processing resources. Adopting distributed and parallel computation improves data processing efficiency and shortens the evaluation time of the perception performance.
In one embodiment, the mean and variance of the shard data may be computed with a parallel merging algorithm. Fig. 4 is a schematic diagram of distributed computation over the stored data according to an embodiment of the present application. As shown in fig. 4, taking the true-value system data as an example, the true-value data in the cloud is divided into 3 groups by a preset time tag to obtain 3 groups of shard data, and each group of shard data is handed to a corresponding cloud computing service for processing to obtain the data processing result of that group. The computation results of all shard data are then aggregated to obtain the statistical result of the complete data set. Specifically, the mean and variance of two groups of shard data may be merged as follows:
$$T_{AB} = T_A + T_B, \qquad n_{AB} = n_A + n_B,$$
$$\delta = \bar{x}_B - \bar{x}_A, \qquad \bar{x}_{AB} = \bar{x}_A + \delta \cdot \frac{n_B}{n_{AB}},$$
$$\sigma_{AB}^2 = \frac{n_A \sigma_A^2 + n_B \sigma_B^2}{n_{AB}} + \frac{n_A n_B}{n_{AB}^2}\,\delta^2,$$

where \(n_A\) and \(n_B\) denote the numbers of samples in shard groups A and B; \(T_A\) denotes the data sum of the group A shard data, \(T_B\) the data sum of the group B shard data, and \(T_{AB}\) the sum of the two; \(\bar{x}_A\) denotes the data mean of the group A shard data, \(\bar{x}_B\) the data mean of the group B shard data, and \(\bar{x}_{AB}\) the data mean of the combined data set; \(\delta\) denotes the mean difference; \(\sigma_A^2\) denotes the data variance of the group A data, \(\sigma_B^2\) the data variance of the group B data, and \(\sigma_{AB}^2\) the data variance of the combined data set of the group A and group B shard data.
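A minimal sketch of this pairwise merging, assuming each shard reports its sample count, mean, and sum of squared deviations; the shard contents are illustrative:

```python
from dataclasses import dataclass
from functools import reduce

@dataclass
class ShardStats:
    n: int       # number of samples in the shard
    mean: float  # shard mean
    m2: float    # sum of squared deviations from the shard mean (n * variance)

def stats_of(shard: list) -> ShardStats:
    n = len(shard)
    mean = sum(shard) / n
    return ShardStats(n, mean, sum((x - mean) ** 2 for x in shard))

def merge(a: ShardStats, b: ShardStats) -> ShardStats:
    """Combine two shards' statistics without revisiting the raw samples,
    following the pairwise update formulas above."""
    n = a.n + b.n
    delta = b.mean - a.mean
    mean = a.mean + delta * b.n / n
    m2 = a.m2 + b.m2 + delta * delta * a.n * b.n / n
    return ShardStats(n, mean, m2)

# Per-shard statistics can be computed in parallel (one worker per shard)
# and then folded together.
shards = [[1.0, 2.0, 3.0], [4.0, 5.0], [6.0, 7.0, 8.0, 9.0]]
total = reduce(merge, (stats_of(s) for s in shards))
print(total.mean, total.m2 / total.n)  # 5.0 and the population variance 60/9
```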
In one embodiment, the obtaining a standard track attribute parameter and a test track attribute parameter corresponding to a sensing target according to the standard sensor data and the test sensor data includes: acquiring standard track attribute parameters according to the standard sensor data, and acquiring test track attribute parameters according to the test sensor data; and matching the standard track attribute parameters with the test track attribute parameters according to an association algorithm to obtain the standard track attribute parameters and the test track attribute parameters corresponding to each perception target. Specifically, the true-value system performs perception fusion computation on the standard sensor data to obtain the true-value sensing result, i.e., the standard track attribute parameters; the system under test performs perception fusion computation on the test sensor data to obtain the test sensing result, i.e., the test track attribute parameters. In an actual test, even though the standard perception target identified in the true-value sensing result and the test perception target identified in the test sensing result are the same object, their track attribute parameters deviate from each other; therefore, the standard and test track attribute parameters belonging to the same perception target must be matched and put into correspondence by an association algorithm before they can be compared by a comparison algorithm. For example, for the same target vehicle, the standard track attribute parameters obtained by the true-value system differ from the test track attribute parameters obtained by the system under test: the standard track attribute parameters may indicate that the longitudinal distance between the target vehicle and the ego vehicle is 20 meters, while the test track attribute parameters indicate 30 meters. Since the target vehicle perceived by the true-value system is in fact the same as the one perceived by the system under test, the standard and test track attribute parameters need to be matched to obtain the parameters of the same perception target, and the perception performance evaluation result is obtained by comparing the standard and test track attribute parameters of that target.
In one embodiment, the matching the standard track attribute parameters and the test track attribute parameters according to an association algorithm to obtain the standard track attribute parameters and the test track attribute parameters corresponding to each sensing target includes: acquiring the similarity between the standard track attribute parameters and the test track attribute parameters; and comparing the similarity with a similarity threshold, and if the similarity is greater than or equal to the similarity threshold, obtaining the standard track attribute parameter and the test track attribute parameter corresponding to the perception target.
In one embodiment, the matching the standard track attribute parameters and the test track attribute parameters according to an association algorithm to obtain the standard track attribute parameters and the test track attribute parameters corresponding to each sensing target includes: acquiring a standard target identification frame and a test target identification frame corresponding to the standard track attribute parameters and the test track attribute parameters; acquiring a distance intersection ratio according to the ratio of the Euclidean distance between the standard target identification frame and the central point of the test target identification frame to the diagonal distance of a minimum closure area, wherein the minimum closure area represents a minimum circumscribed rectangular area comprising the standard target identification frame and the test target identification frame; and comparing the distance intersection ratio with a distance threshold, and if the distance intersection ratio is less than or equal to the distance threshold, obtaining the standard track attribute parameter and the test track attribute parameter corresponding to the perception target.
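A minimal sketch of the distance intersection ratio described above, assuming axis-aligned identification frames given as center and size; the box representation and the threshold value are illustrative assumptions, not specifics from the patent:

```python
import math

def distance_intersection_ratio(box_a, box_b):
    """Ratio of the Euclidean distance between the two boxes' center points
    to the diagonal of the minimum closure area (the smallest axis-aligned
    rectangle enclosing both boxes). Boxes are (cx, cy, w, h)."""
    (ax, ay, aw, ah), (bx, by, bw, bh) = box_a, box_b
    center_dist = math.hypot(ax - bx, ay - by)
    # Extent of the minimum closure area enclosing both boxes.
    left = min(ax - aw / 2, bx - bw / 2)
    right = max(ax + aw / 2, bx + bw / 2)
    bottom = min(ay - ah / 2, by - bh / 2)
    top = max(ay + ah / 2, by + bh / 2)
    diagonal = math.hypot(right - left, top - bottom)
    return center_dist / diagonal

def is_same_target(std_box, test_box, distance_threshold=0.3):
    """Associate a standard track with a test track when the ratio is small
    enough; the 0.3 threshold is a placeholder value."""
    return distance_intersection_ratio(std_box, test_box) <= distance_threshold

# Two frames of the same vehicle, 1 m apart, associate under this threshold.
print(is_same_target((20.0, 0.5, 4.0, 2.0), (21.0, 0.5, 4.0, 2.0)))  # True
```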
In one embodiment, the obtaining a standard track attribute parameter and a test track attribute parameter corresponding to a sensing target according to the standard sensor data and the test sensor data includes: generating a corresponding standard track characteristic matrix and a corresponding test track characteristic matrix according to the standard track attribute parameters and the test track attribute parameters; and generating a corresponding visual track image of the perception target according to the standard track characteristic matrix and the test track characteristic matrix.
Specifically, the data structure is reorganized according to the characteristics of the standard sensor data and the standard sensing result, and of the test sensor data and the test sensing result. In one embodiment, taking track information of a perception target comprising time information, position information, speed information and acceleration information as an example, the track characteristic matrix corresponding to the target is as follows:
$$F = \begin{bmatrix} t_1 & p_{x,1} & p_{y,1} & v_{x,1} & v_{y,1} & a_{x,1} & a_{y,1} \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ t_n & p_{x,n} & p_{y,n} & v_{x,n} & v_{y,n} & a_{x,n} & a_{y,n} \end{bmatrix},$$

where \(t\) denotes time, \(p\) position, \(v\) speed, and \(a\) acceleration, and the subscripts \(x\) and \(y\) denote the longitudinal and lateral directions respectively: \(p_x\) is the longitudinal position, \(p_y\) the lateral position, \(v_x\) the longitudinal speed, \(v_y\) the lateral speed, \(a_x\) the longitudinal acceleration, and \(a_y\) the lateral acceleration. Based on the track characteristic matrix, the corresponding target track can be obtained and displayed visually. Preferably, the target track can be represented by a track curve; compared with textual data, a track curve is more intuitive in form and more easily accepted by testers.
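As an illustrative sketch, a track characteristic matrix of this form and the corresponding track curve might be produced as follows; the column order and the sample values are assumptions:

```python
import numpy as np
import matplotlib.pyplot as plt

# Each row is one time sample: [t, px, py, vx, vy, ax, ay], with px the
# longitudinal axis and py the lateral axis.
standard_track = np.array([
    [0.0, 20.0, 0.2, 5.0, 0.0, 0.1, 0.0],
    [0.1, 20.5, 0.2, 5.1, 0.0, 0.1, 0.0],
    [0.2, 21.0, 0.3, 5.2, 0.0, 0.1, 0.0],
])
test_track = np.array([
    [0.0, 20.4, 0.3, 5.0, 0.0, 0.1, 0.0],
    [0.1, 21.1, 0.3, 5.2, 0.0, 0.1, 0.0],
    [0.2, 21.6, 0.4, 5.3, 0.0, 0.1, 0.0],
])

# Plot one attribute over time for both tracks to obtain the visual track
# curves described above (lateral position is column 2 here).
plt.plot(standard_track[:, 0], standard_track[:, 2], label="standard (true value)")
plt.plot(test_track[:, 0], test_track[:, 2], label="test")
plt.xlabel("time [s]")
plt.ylabel("lateral position [m]")
plt.legend()
plt.show()
```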
In one embodiment, the comparing the standard track attribute parameter and the test track attribute parameter corresponding to each sensing target to obtain the track deviation data includes: calculating the difference value of the standard track attribute parameter and the test track attribute parameter corresponding to the perception target to obtain first deviation data; and acquiring a deviation threshold, comparing the first deviation data with the deviation threshold, and taking the first deviation data as track deviation data if the first deviation data is greater than or equal to the deviation threshold. Specifically, a KPI threshold, that is, a deviation threshold, may be set for a difference between the standard track attribute parameter and the test track attribute parameter, and the difference greater than the deviation threshold is used as the track deviation data. The smaller the value of the deviation threshold, the higher the accuracy of the track deviation data, but the larger the calculation amount and the larger the influence of noise. By reasonably setting the deviation threshold, the precision and the calculation speed of the track deviation data can be effectively improved.
In one embodiment, the comparison of the standard track attribute parameters and the test track attribute parameters may also use a ratio calculation to obtain the degree of fit between the test track attribute parameters and the standard track attribute parameters, and the track deviation data is determined by setting a fit threshold. For example, taking the degree of fit of the lateral position: if at a certain time point the test track gives the perception target's lateral position as 9 while the standard track gives 10, the ratio of the two values yields a degree of fit of 90%; with the fit threshold set to 95%, the two lateral position values and their degree of fit are recorded as the track deviation data for that point.
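A minimal sketch combining the difference method of the previous embodiment with the degree-of-fit method described here; both threshold values are placeholders, and the ratio form assumes the two values have the same sign:

```python
def track_deviation(standard, test, deviation_threshold=0.5, fit_threshold=0.95):
    """Compare one attribute of a standard track and a test track sample by
    sample and return the points recorded as track deviation data."""
    deviations = []
    for i, (s, t) in enumerate(zip(standard, test)):
        diff = abs(s - t)
        # Difference method: record values at or above the KPI deviation threshold.
        if diff >= deviation_threshold:
            deviations.append({"index": i, "standard": s, "test": t, "diff": diff})
            continue
        # Ratio method: a degree of fit below the fit threshold is also recorded.
        fit = min(s, t) / max(s, t) if max(s, t) != 0 else 1.0
        if fit < fit_threshold:
            deviations.append({"index": i, "standard": s, "test": t, "fit": fit})
    return deviations

# The example from the text: a test lateral position of 9 against a standard
# value of 10 gives a degree of fit of 90%, below the 95% fit threshold.
print(track_deviation([10.0], [9.0], deviation_threshold=2.0))
# [{'index': 0, 'standard': 10.0, 'test': 9.0, 'fit': 0.9}]
```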
In one embodiment, the standard track attribute parameters and the test track attribute parameters corresponding to a perception target can be used to generate visual track images, such as a standard track curve and a test track curve; the portions where the standard track curve and the test track curve deviate strongly are extracted by an image processing algorithm, and the corresponding standard track attribute parameters, test track attribute parameters, and track deviation data are displayed. In one embodiment, the image processing algorithm may be deployed in a trigger-type data capture module: when a large deviation appears between the standard track curve and the test track curve, that part of the image is captured automatically and the corresponding track deviation data is output.
In one embodiment, fig. 5 is a diagram illustrating statistical results according to an embodiment of the present application. As shown in fig. 5, day-goodweather and night-goodweather are the classification scenes of the data, representing clear daytime weather and clear nighttime weather, respectively; the numerical values under each classification scene represent the degree of fit between the test track attributes and the standard track attributes. car denotes a vehicle target: car-x denotes the longitudinal position of the target vehicle, car-y its lateral position, car-vx its longitudinal speed, and car-vy its lateral speed. ped denotes a pedestrian target: ped-x denotes the longitudinal position of the target pedestrian, ped-y its lateral position, ped-vx its longitudinal speed, and ped-vy its lateral speed. By comparing the test data and the true-value data of vehicle target detection in the two scenes, the performance defects of the perception system can be identified. For example, comparing the clear-daytime and clear-nighttime sensing results in fig. 5, in the clear daytime scene all indicators except the target's longitudinal position are better than in the clear nighttime scene; by extracting this result, testers can improve the perception algorithm in a targeted way.
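Per-scene degree-of-fit statistics of the kind shown in fig. 5 could be aggregated as sketched below; the record fields and numbers are illustrative, not data from the figure:

```python
from collections import defaultdict

def fit_table(records: list[dict]) -> dict:
    """Average degree of fit per (scene, target-attribute) pair,
    e.g. ('day-goodweather', 'car-x')."""
    sums = defaultdict(lambda: [0.0, 0])
    for r in records:
        key = (r["scene"], f'{r["target_class"]}-{r["attribute"]}')
        sums[key][0] += r["fit"]
        sums[key][1] += 1
    return {k: s / n for k, (s, n) in sums.items()}

records = [
    {"scene": "day-goodweather", "target_class": "car", "attribute": "x", "fit": 0.92},
    {"scene": "day-goodweather", "target_class": "car", "attribute": "x", "fit": 0.96},
    {"scene": "night-goodweather", "target_class": "car", "attribute": "x", "fit": 0.97},
]
print(fit_table(records))
# {('day-goodweather', 'car-x'): 0.94, ('night-goodweather', 'car-x'): 0.97}
```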
In one embodiment, fig. 6 is a schematic diagram of track deviation data capture according to an embodiment of the present application. As shown in fig. 6, fusion denotes the fusion sensing result of multiple sensors, and lidar denotes the laser radar sensing result. Fig. 6 compares the lateral position of a perception target; it can be seen from the image that the lidar sensing result deviates considerably in the target's lateral position, and this deviation can be correctly extracted and displayed by the image processing algorithm. Preferably, the deviation detection methods corresponding to fig. 5 and fig. 6 can be used in combination: for example, when an extracted lateral position deviation of the target is large, in addition to marking it on the track curve image, the track attribute parameters corresponding to the marked part of the track curve and the corresponding degree-of-fit comparison result are extracted from the statistics, and the textual data processing result is displayed together with the visual graphic result. The deviation data is thus presented to testers in a more intuitive form while still meeting their data requirements.
It should be emphasized that the perception evaluation method of the present application acquires sensor data from multiple sensors. Therefore, the fusion sensing result of the multi-sensor true-value system can serve as the standard track of a perception target and be compared with the test track corresponding to the fusion sensing result of the system under test. Alternatively, the data of a single test sensor, such as the point cloud data of a laser radar, can be compared with the standard track corresponding to the true-value fusion sensing result of the perception target, thereby evaluating the perception performance of a single sensor.
In one embodiment, the classifying the track deviation data according to the preset classification information further includes: acquiring a scene division rule, wherein the scene division rule comprises a position change rule and a speed change rule; acquiring current scene information according to the scene division rule and the standard track attribute parameters and the test track attribute parameters corresponding to the perception target; and classifying the track deviation data corresponding to the perception target according to the current scene information.
Specifically, besides directly entering fixed classification information, the preset classification information can also be generated by the perception evaluation system from predefined scene division rules; the corresponding deviation data is then classified and counted according to the classification information generated in real time to obtain the statistical result. In one specific embodiment, fig. 7 is a graph of track information generated from a target vehicle; as shown in fig. 7, the abscissa represents time and the ordinate the lateral position of the target vehicle. The preset scene division rule defines a detection interval over the lateral position: if, according to the standard track information of the perception target, the change of its lateral position within the detection interval exceeds a lane-change threshold, the target is considered to be in a lane cut-in scene in that interval, and accordingly the track deviation data detected in the interval is assigned to the lane cut-in scene. In another specific embodiment, fig. 8 is a graph of track information generated from another target vehicle; as shown in fig. 8, the abscissa represents time and the ordinate the longitudinal speed of the target vehicle. Here the preset scene division rule defines a detection interval over the longitudinal speed: if, according to the standard track information of the perception target, its longitudinal speed curve decreases monotonically within the detection interval, the target is judged to be continuously decelerating, so the target is in a target deceleration scene in that interval, and the track deviation data detected there is assigned to the target deceleration scene. It should be emphasized that the predefined rules may be determined from the track parameters of the target or from the motion state of the true-value vehicle. In addition, the data used for scene division can be the standard track attribute parameters of the target or the test track attribute parameters, and can come from the fusion sensing result of multiple sensors or from the sensing result of a single sensor.
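A minimal sketch of such rule-based scene division, assuming a simple sample layout and a hypothetical lane-change threshold:

```python
def detect_scene(track, lane_change_threshold=1.5):
    """Classify a detection interval from standard track data using the two
    predefined rules described above. `track` is a list of (t, py, vx)
    samples (time, lateral position, longitudinal speed); the field layout
    and the 1.5 m threshold are illustrative assumptions."""
    ys = [p[1] for p in track]
    vxs = [p[2] for p in track]
    scenes = []
    # Position-change rule: a large lateral shift inside the interval
    # indicates the target is cutting into the ego lane.
    if max(ys) - min(ys) > lane_change_threshold:
        scenes.append("lane-cut-in")
    # Speed-change rule: a monotonically decreasing longitudinal speed
    # indicates a continuously decelerating target.
    if all(a > b for a, b in zip(vxs, vxs[1:])):
        scenes.append("target-deceleration")
    return scenes or ["default"]

# A target drifting 2.2 m laterally while slowing down falls into both scenes.
interval = [(0.0, 0.0, 10.0), (0.5, 1.0, 9.0), (1.0, 2.2, 8.0)]
print(detect_scene(interval))  # ['lane-cut-in', 'target-deceleration']
```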
In one embodiment, fig. 9 is a schematic diagram of the data processing structure of a perception evaluation method according to an embodiment of the present application. The data processing structure comprises a data access layer, a data adaptation layer, a core algorithm layer, a function definition layer, and a report output layer. As shown in fig. 9, after the data collected at the vehicle end 310 is replayed from the data center 320 to the server end 330, the data access layer receives the replayed data, which includes standard sensor data, true-value fusion data, test sensor data, and test fusion data. The standard sensor data is the true-value sensor data; the true-value fusion data is the sensing result of the true-value system; the test fusion data is the sensing result of the system under test. Ego-vehicle state data and configuration information are also fed into the data access layer. The ego-vehicle state data is the state data of the true-value vehicle collected at the vehicle end 310, including the running speed of the vehicle, its yaw rate, and the like. The configuration information includes the data adaptation rules, the preset classification rules, and the function definition rules. The data adaptation layer obtains the data adaptation rules and completes data adaptation, which covers true-value sensor data adaptation and test sensor data adaptation; adaptation here means adjusting the relevant parameters of the sensors so that the perception evaluation system can process the sensor parameters acquired by different sensors. The core algorithm layer processes and computes the data according to a track association algorithm, a true-value enhancement algorithm, a performance comparison algorithm, and a feature extraction algorithm to obtain track deviation data, which is classified based on the preset rules. The KPI pass rate refers to the proportion of the deviation data that exceeds the deviation threshold; by counting the KPI pass rate, the track attribute parameters with larger deviations can be found and modified with priority, making subsequent algorithm improvement more targeted. In the function definition layer, the raw replayed data and the track deviation data can be analyzed according to the function definition rules preset in the configuration information, and the track deviation data can be further classified by predefined functions, such as target accuracy, target recognition rate, lane line accuracy, and lane line recognition rate. Finally, the data processing results are counted and summarized, and a statistical report is output based on the classified perception performance evaluation results. The report comprises multi-dimensional analysis data of the perception performance evaluation results, such as KPI pass rate data, deviation statistics, and extracted deviation items, and testers can configure customized output of the report according to their test requirements. The perception evaluation method can accurately provide track deviation data for various scenes and different performance indicators, improving the pertinence and usability of the perception evaluation results.
Through the above steps, the perception performance evaluation method takes the offline calculation result of the sensor information as the true value, which is more accurate and stable than a true value calculated in real time as in the prior art. Evaluating the perception performance of a sensor against a more accurate true value yields a more objective and reliable performance evaluation result. The data are processed in groups, and parallel and distributed computing methods are adopted, which shortens the performance evaluation time. During data comparison and analysis, a track feature matrix is constructed with the target track as the unit, statistical operations are carried out on the track feature matrix, and a trigger-type data capture module is added, so that perception information deviating significantly from the target's true track can be extracted automatically and displayed to testers in a visual form. Functional scenes can be classified according to the perception targets or the ego-vehicle information, and the perception performance under each scene can be counted. The perception system is evaluated in multiple dimensions in combination with the preset classification information; the final evaluation result can accurately reflect the performance of the system under test in each dimension, and provides data support for the improvement of subsequent perception algorithms and driving control algorithms.
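As an aside, the grouping and parallel computation mentioned above might look like the following sketch, under the assumption that groups of target tracks can be evaluated independently; `compare_tracks` is a placeholder, not the application's actual comparison.

```python
from concurrent.futures import ProcessPoolExecutor

def compare_tracks(std, test):
    # Placeholder for the standard-vs-test comparison described above;
    # here simply the per-sample absolute deviation.
    return [abs(s - t) for s, t in zip(std, test)]

def evaluate_group(track_pairs):
    # One group of (standard, test) track pairs, evaluated sequentially.
    return [compare_tracks(std, test) for std, test in track_pairs]

def parallel_evaluate(groups, max_workers=4):
    """Groups of target tracks are independent, so they can be spread
    across worker processes (or machines in a distributed setting) to
    shorten the overall evaluation time. Run under an
    `if __name__ == "__main__":` guard on platforms that spawn processes.
    """
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return [dev for result in pool.map(evaluate_group, groups)
                for dev in result]
```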
It should be noted that the steps illustrated in the above flowcharts or in the flowcharts of the drawings may be executed in a computer system as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps illustrated or described may be performed in an order different from the one described herein.
This embodiment also provides a perception performance evaluation device, which is used to implement the above embodiments and preferred implementations; what has already been described is not repeated. As used below, the terms "module," "unit," "subunit," and the like may be implemented as a combination of software and/or hardware realizing a predetermined function. Although the devices described in the following embodiments are preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 10 is a block diagram of the structure of the perception performance evaluation device of this embodiment. As shown in fig. 10, the device includes:
the data acquisition module 10 is used for acquiring standard sensor data and test sensor data;
the data processing module 20 is configured to obtain a standard track attribute parameter and a test track attribute parameter corresponding to the sensing target according to the standard sensor data and the test sensor data;
the evaluation module 30 is configured to compare the standard track attribute parameter and the test track attribute parameter corresponding to each sensing target to obtain track deviation data;
and the statistical module 40 is configured to classify the track deviation data according to preset classification information, and obtain a performance evaluation result based on the classification result, where the preset classification information includes one or more of target category information, target position information, and scene information.
The data acquisition module 10 is further configured to acquire a preprocessing sensing result according to the standard sensor data; acquiring a standard perception algorithm, wherein the standard perception algorithm comprises an AI perception algorithm, an offline tracking algorithm and a multi-sensor fusion algorithm; acquiring the standard track attribute parameters of the perception target according to the standard perception algorithm, the preprocessing perception result and the standard sensor data; and acquiring the test track attribute parameters of the perception target according to the test sensor data.
The data processing module 20 is further configured to obtain a standard track attribute parameter according to the standard sensor data, and obtain a test track attribute parameter according to the test sensor data; and matching the standard track attribute parameters and the test track attribute parameters according to an association algorithm to obtain the standard track attribute parameters and the test track attribute parameters corresponding to each perception target.
The data processing module 20 is further configured to acquire the standard target identification frame and the test target identification frame corresponding to the standard track attribute parameters and the test track attribute parameters, respectively; acquire a distance intersection ratio as the ratio of the Euclidean distance between the center points of the standard target identification frame and the test target identification frame to the diagonal length of a minimum closure area, where the minimum closure area is the minimum circumscribed rectangular area containing both the standard target identification frame and the test target identification frame; and compare the distance intersection ratio with a distance threshold, and if the distance intersection ratio is less than or equal to the distance threshold, obtain the standard track attribute parameters and the test track attribute parameters corresponding to the same perception target.
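The distance intersection ratio described above can be sketched as follows, assuming axis-aligned identification frames given as (x1, y1, x2, y2) corner coordinates; the 0.5 threshold is illustrative only.

```python
import math

def distance_intersection_ratio(std_box, test_box):
    """Ratio of the Euclidean distance between the two frames' center
    points to the diagonal of their minimum closure area (the minimum
    circumscribed rectangle containing both frames)."""
    def center(b):
        return ((b[0] + b[2]) / 2.0, (b[1] + b[3]) / 2.0)

    (cx1, cy1), (cx2, cy2) = center(std_box), center(test_box)
    center_dist = math.hypot(cx2 - cx1, cy2 - cy1)

    # Minimum circumscribed rectangle enclosing both identification frames.
    ex1, ey1 = min(std_box[0], test_box[0]), min(std_box[1], test_box[1])
    ex2, ey2 = max(std_box[2], test_box[2]), max(std_box[3], test_box[3])
    diagonal = math.hypot(ex2 - ex1, ey2 - ey1)
    return center_dist / diagonal

def same_target(std_box, test_box, distance_threshold=0.5):
    # Frames whose ratio is at or below the threshold are treated as
    # belonging to the same perception target.
    return distance_intersection_ratio(std_box, test_box) <= distance_threshold
```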
The data processing module 20 is further configured to generate a corresponding standard track feature matrix and a corresponding test track feature matrix according to the standard track attribute parameter and the test track attribute parameter; and generating a corresponding visual track image of the perception target according to the standard track characteristic matrix and the test track characteristic matrix.
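One plausible layout for the track characteristic (feature) matrices described above is one row per timestamp and one column per track attribute parameter; the application does not fix the layout, so the sketch below is an assumption, together with a simple visual track image.

```python
import numpy as np
import matplotlib.pyplot as plt

def build_track_feature_matrix(samples, attributes):
    """Stack per-frame track attribute parameters into a feature matrix:
    one row per timestamp, one column per attribute (e.g. lateral
    position, longitudinal speed)."""
    return np.array([[s[a] for a in attributes] for s in samples])

def plot_track(t, std_matrix, test_matrix, attributes):
    """Visual track image: standard vs. test curve for each attribute."""
    fig, axes = plt.subplots(len(attributes), 1, sharex=True)
    for ax, name, col_std, col_test in zip(
            np.atleast_1d(axes), attributes, std_matrix.T, test_matrix.T):
        ax.plot(t, col_std, label="standard track")
        ax.plot(t, col_test, label="test track")
        ax.set_ylabel(name)
        ax.legend()
    plt.xlabel("time (s)")
    plt.show()
```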
The evaluation module 30 is further configured to perform difference calculation on the standard track attribute parameter and the test track attribute parameter corresponding to the sensing target to obtain first deviation data; and acquiring a deviation threshold, comparing the first deviation data with the deviation threshold, and taking the first deviation data as track deviation data if the first deviation data is greater than or equal to the deviation threshold.
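A minimal sketch of the deviation calculation and threshold filtering described above, assuming the time-aligned feature-matrix layout from the previous sketch and a scalar threshold for brevity (per-attribute thresholds are equally plausible):

```python
import numpy as np

def track_deviation(std_matrix, test_matrix, deviation_threshold):
    """Element-wise difference between the standard and test track
    feature matrices; only first deviation data at or above the
    threshold are retained as track deviation data, which also realizes
    the trigger-type capture of strongly deviating perception info."""
    first_deviation = np.abs(std_matrix - test_matrix)
    mask = first_deviation >= deviation_threshold
    frames, attrs = np.nonzero(mask)
    # Each retained entry records where and by how much the test track
    # departs from the true track, for later classification.
    return [(f, a, first_deviation[f, a]) for f, a in zip(frames, attrs)]
```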
The statistical module 40 is further configured to obtain a scene division rule, where the scene division rule includes a position change rule and a speed change rule; acquiring current scene information according to the scene division rule and the standard track attribute parameters and the test track attribute parameters corresponding to the perception target; and classifying the track deviation data corresponding to the perception target according to the current scene information.
Each of the above modules may be a functional module or a program module, and may be implemented by software or by hardware. For modules implemented by hardware, the modules may all be located in the same processor, or may be distributed across different processors in any combination.
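By way of illustration only, the four modules might be composed as in the following sketch; the class and method names are not taken from this application.

```python
class PerceptionEvaluator:
    """Sketch of how the modules above might be wired together."""

    def __init__(self, acquisition, processing, evaluation, statistics):
        self.acquisition = acquisition  # data acquisition module 10
        self.processing = processing    # data processing module 20
        self.evaluation = evaluation    # evaluation module 30
        self.statistics = statistics    # statistical module 40

    def run(self, classification_info):
        # S1/S2: acquire sensor data and derive per-target track pairs.
        std_data, test_data = self.acquisition.acquire()
        pairs = self.processing.match_tracks(std_data, test_data)
        # S3: compare standard and test tracks to get deviation data.
        deviations = [self.evaluation.compare(s, t) for s, t in pairs]
        # S4: classify deviations and produce the evaluation result.
        return self.statistics.classify(deviations, classification_info)
```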
There is also provided in this embodiment an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the above processor may be configured to execute the following steps via a computer program:
s1, standard sensor data and test sensor data are acquired.
And S2, acquiring standard track attribute parameters and test track attribute parameters corresponding to the perception target according to the standard sensor data and the test sensor data.
And S3, comparing the standard track attribute parameters and the test track attribute parameters corresponding to each perception target to obtain track deviation data.
And S4, classifying the track deviation data according to preset classification information, and acquiring a performance evaluation result based on the classification result, wherein the preset classification information comprises one or more of target category information, target position information and scene information.
It should be noted that, for specific examples in this embodiment, reference may be made to the examples described in the foregoing embodiments and optional implementations, and details are not described again in this embodiment.
In addition, in combination with the perception performance evaluation method provided in the above embodiments, this embodiment may also provide a storage medium for its implementation. The storage medium has a computer program stored thereon; when the computer program is executed by a processor, it implements any of the perception performance evaluation methods of the above embodiments.
It should be understood that the specific embodiments described herein are merely illustrative of this application and are not intended to be limiting. All other embodiments, which can be derived by a person skilled in the art from the examples provided herein without any inventive step, shall fall within the scope of protection of the present application.
It is obvious that the drawings are only examples or embodiments of the present application, from which a person of ordinary skill in the art can apply the present application to other similar situations without creative effort. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
The term "embodiment" is used herein to mean that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is to be expressly or implicitly understood by one of ordinary skill in the art that the embodiments described in this application may be combined with other embodiments without conflict.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of patent protection. It should be noted that a person of ordinary skill in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within its scope of protection. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A perception performance evaluating method is characterized by comprising the following steps:
acquiring standard sensor data and test sensor data;
acquiring a standard track attribute parameter and a test track attribute parameter corresponding to a perception target according to the standard sensor data and the test sensor data;
comparing the standard track attribute parameters and the test track attribute parameters corresponding to each perception target to obtain track deviation data;
and classifying the track deviation data according to preset classification information, and acquiring a performance evaluation result based on the classification result, wherein the preset classification information comprises one or more of target category information, target position information and scene information.
2. The perception performance evaluating method according to claim 1, wherein the obtaining of the standard track attribute parameter and the test track attribute parameter corresponding to the perception target according to the standard sensor data and the test sensor data includes:
acquiring a preprocessing sensing result according to the standard sensor data;
acquiring a standard perception algorithm, wherein the standard perception algorithm comprises an AI perception algorithm, an offline tracking algorithm and a multi-sensor fusion algorithm;
acquiring the standard track attribute parameters of the perception target according to the standard perception algorithm, the preprocessing perception result and the standard sensor data;
and acquiring the test track attribute parameters of the perception target according to the test sensor data.
3. The perception performance evaluating method according to claim 1, wherein the obtaining of the standard track attribute parameter and the test track attribute parameter corresponding to the perception target according to the standard sensor data and the test sensor data includes:
acquiring a standard track attribute parameter according to the standard sensor data, and acquiring a test track attribute parameter according to the test sensor data;
and matching the standard track attribute parameters and the test track attribute parameters according to an association algorithm to obtain the standard track attribute parameters and the test track attribute parameters corresponding to each perception target.
4. The perceptual performance evaluation method of claim 3, wherein the matching the standard track attribute parameters and the test track attribute parameters according to an association algorithm to obtain the standard track attribute parameters and the test track attribute parameters corresponding to each of the perceptual targets comprises:
acquiring a standard target identification frame and a test target identification frame corresponding to the standard track attribute parameters and the test track attribute parameters;
acquiring a distance intersection ratio according to the ratio of the Euclidean distance between the standard target identification frame and the central point of the test target identification frame to the diagonal distance of a minimum closure area, wherein the minimum closure area represents a minimum circumscribed rectangular area comprising the standard target identification frame and the test target identification frame;
and comparing the distance intersection ratio with a distance threshold, and if the distance intersection ratio is less than or equal to the distance threshold, obtaining the standard track attribute parameter and the test track attribute parameter corresponding to the perception target.
5. The perception performance evaluating method according to claim 1, wherein the obtaining of the standard track attribute parameter and the test track attribute parameter corresponding to the perception target according to the standard sensor data and the test sensor data comprises:
generating a corresponding standard track characteristic matrix and a corresponding test track characteristic matrix according to the standard track attribute parameters and the test track attribute parameters;
and generating a corresponding visual track image of the perception target according to the standard track characteristic matrix and the test track characteristic matrix.
6. The perceptual performance evaluation method of claim 1, wherein the comparing the standard track attribute parameter and the test track attribute parameter corresponding to each of the perceptual targets to obtain track deviation data comprises:
calculating the difference value of the standard track attribute parameter and the test track attribute parameter corresponding to the perception target to obtain first deviation data;
and acquiring a deviation threshold, comparing the first deviation data with the deviation threshold, and taking the first deviation data as track deviation data if the first deviation data is greater than or equal to the deviation threshold.
7. The perceptual performance evaluation method of claim 1, wherein the classifying the track deviation data according to preset classification information further comprises:
acquiring a scene division rule, wherein the scene division rule comprises a position change rule and a speed change rule;
acquiring current scene information according to the scene division rule and the standard track attribute parameters and the test track attribute parameters corresponding to the perception target;
and classifying the track deviation data corresponding to the perception target according to the current scene information.
8. A perceptual performance evaluation apparatus, comprising:
the data acquisition module is used for acquiring standard sensor data and test sensor data;
the data processing module is used for acquiring a standard track attribute parameter and a test track attribute parameter corresponding to a perception target according to the standard sensor data and the test sensor data;
the evaluation module is used for comparing the standard track attribute parameters and the test track attribute parameters corresponding to each perception target to obtain track deviation data;
and the statistical module is used for classifying the track deviation data according to preset classification information, and acquiring a performance evaluation result based on the classification result, wherein the preset classification information comprises one or more of target category information, target position information and scene information.
9. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and the processor is configured to execute the computer program to perform the perceptual performance evaluation method of any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method for perceptual performance evaluation according to any one of claims 1 to 7.
CN202110614827.1A 2021-06-02 2021-06-02 Perception performance evaluation method and device, electronic device and storage medium Active CN113155173B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110614827.1A CN113155173B (en) 2021-06-02 2021-06-02 Perception performance evaluation method and device, electronic device and storage medium

Publications (2)

Publication Number Publication Date
CN113155173A true CN113155173A (en) 2021-07-23
CN113155173B CN113155173B (en) 2022-08-30

Family

ID=76875458

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110614827.1A Active CN113155173B (en) 2021-06-02 2021-06-02 Perception performance evaluation method and device, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN113155173B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180374359A1 (en) * 2017-06-22 2018-12-27 Bakhi.com Times Technology (Beijing) Co., Ltd. Evaluation framework for predicted trajectories in autonomous driving vehicle traffic prediction
WO2020079698A1 (en) * 2018-10-19 2020-04-23 A.D Knight Ltd. Adas systems functionality testing
CN111983935A (en) * 2020-08-19 2020-11-24 北京京东叁佰陆拾度电子商务有限公司 Performance evaluation method and device
CN112147632A (en) * 2020-09-23 2020-12-29 中国第一汽车股份有限公司 Method, device, equipment and medium for testing vehicle-mounted laser radar perception algorithm
CN112541261A (en) * 2020-12-09 2021-03-23 中国航空工业集团公司沈阳飞机设计研究所 Target track fusion assessment method based on data recharging function
CN112693466A (en) * 2021-01-29 2021-04-23 重庆长安汽车股份有限公司 System and method for evaluating performance of vehicle environment perception sensor
CN112816954A (en) * 2021-02-09 2021-05-18 中国信息通信研究院 Road side perception system evaluation method and system based on truth value

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113885532A (en) * 2021-11-11 2022-01-04 江苏昱博自动化设备有限公司 Unmanned floor truck control system of barrier is kept away to intelligence
CN114136356A (en) * 2021-11-30 2022-03-04 上汽通用五菱汽车股份有限公司 Parameter acquisition test system, method, device and computer readable storage medium
CN114543830A (en) * 2022-02-28 2022-05-27 重庆长安汽车股份有限公司 Vehicle-mounted sensor noise estimation system and method based on truth value system
CN114543842A (en) * 2022-02-28 2022-05-27 重庆长安汽车股份有限公司 Positioning precision evaluation system and method of multi-sensor fusion positioning system
CN114543842B (en) * 2022-02-28 2023-07-28 重庆长安汽车股份有限公司 Positioning accuracy evaluation system and method for multi-sensor fusion positioning system
CN114792469A (en) * 2022-04-06 2022-07-26 大唐高鸿智联科技(重庆)有限公司 Method and device for testing sensing system and testing equipment
CN115311761A (en) * 2022-07-15 2022-11-08 襄阳达安汽车检测中心有限公司 Non-real-time vehicle-mounted sensing system evaluation method and related equipment
CN115311761B (en) * 2022-07-15 2023-11-03 襄阳达安汽车检测中心有限公司 Non-real-time vehicle-mounted perception system evaluation method and related equipment

Also Published As

Publication number Publication date
CN113155173B (en) 2022-08-30

Similar Documents

Publication Publication Date Title
CN113155173B (en) Perception performance evaluation method and device, electronic device and storage medium
CN110364008B (en) Road condition determining method and device, computer equipment and storage medium
CN109087510B (en) Traffic monitoring method and device
CN109754594B (en) Road condition information acquisition method and equipment, storage medium and terminal thereof
WO2020042984A1 (en) Vehicle behavior detection method and apparatus
CN111462488A (en) Intersection safety risk assessment method based on deep convolutional neural network and intersection behavior characteristic model
CN109993138A (en) A kind of car plate detection and recognition methods and device
CN111290370B (en) Automatic driving performance detection method and device
US11501538B2 (en) Systems and methods for detecting vehicle tailgating
CN111291697A (en) Method and device for recognizing obstacle
Sikirić et al. Image representations on a budget: Traffic scene classification in a restricted bandwidth scenario
CN115797403A (en) Traffic accident prediction method and device, storage medium and electronic device
CN113781767A (en) Traffic data fusion method and system based on multi-source perception
US20220234588A1 (en) Data Recording for Advanced Driving Assistance System Testing and Validation
CN113723176B (en) Target object determination method and device, storage medium and electronic device
CN110909656A (en) Pedestrian detection method and system with integration of radar and camera
CN113393442A (en) Method and system for detecting abnormality of train parts, electronic device and storage medium
CN113593256B (en) Unmanned aerial vehicle intelligent driving-away control method and system based on city management and cloud platform
CN114283361A (en) Method and apparatus for determining status information, storage medium, and electronic apparatus
CN114241373A (en) End-to-end vehicle behavior detection method, system, equipment and storage medium
CN113192340B (en) Method, device, equipment and storage medium for identifying highway construction vehicles
CN113393011B (en) Method, device, computer equipment and medium for predicting speed limit information
CN114802264A (en) Vehicle control method and device and electronic equipment
CN110458459B (en) Visual analysis method, device and equipment for traffic data and readable storage medium
CN113837222A (en) Cloud-edge cooperative machine learning deployment application method and device for millimeter wave radar intersection traffic monitoring system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant