CN113222042A - Evaluation method, evaluation device, electronic equipment and storage medium


Info

Publication number
CN113222042A
CN113222042A (application CN202110570839.9A)
Authority
CN
China
Prior art keywords
frame, three-dimensional target detection, point cloud data
Prior art date
Legal status
Pending
Application number
CN202110570839.9A
Other languages
Chinese (zh)
Inventor
杨国润
王哲
石建萍
Current Assignee
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd filed Critical Shenzhen Sensetime Technology Co Ltd
Priority to CN202110570839.9A
Publication of CN113222042A


Classifications

    • G06F18/217 Validation; Performance evaluation; Active pattern learning techniques (under G06F18/00 Pattern recognition, G06F18/21 Design or setup of recognition systems)
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; recognition of vehicle lights or traffic lights
    • G06V40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G06V2201/07 Indexing scheme relating to image or video recognition or understanding: target detection
    • G06V2201/08 Indexing scheme relating to image or video recognition or understanding: detecting or categorising vehicles


Abstract

The disclosure provides an evaluation method, an evaluation device, an electronic device and a storage medium. The method includes: acquiring, for each frame of point cloud data in a point cloud data set collected by a radar device, at least one three-dimensional target true value frame obtained by labeling the frame of point cloud data, and at least one three-dimensional target detection frame obtained by performing target detection on the frame of point cloud data with a target detection network; and determining, based on the obtained at least one three-dimensional target true value frame and at least one three-dimensional target detection frame, detection evaluation results of the target detection network under a plurality of sub-scanning ranges into which the overall scanning range of the radar device is divided. By determining a detection evaluation result of the target detection network for each sub-scanning range, the disclosure evaluates the target detection result in three-dimensional space more accurately and completely.

Description

Evaluation method, evaluation device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of three-dimensional (3D) target detection, and in particular to an evaluation method, an evaluation device, an electronic device and a storage medium.
Background
Lidar is widely used in various technical fields. Taking automatic driving as an example, it is particularly important to accurately detect targets around a vehicle, such as pedestrians and other vehicles, using the point cloud data collected by a laser radar.
Compared with classical 2D detection, 3D target detection requires not only detecting the class of a target but also providing its 3D position, 3D size and 3D orientation. For evaluating 3D detection results, conventional evaluation methods mainly follow the PASCAL criteria of 2D target detection; for example, the mean average precision (mAP) metric may be used.
However, since 3D target detection is performed in three-dimensional space, which has characteristics that two-dimensional space does not have, existing 2D evaluation methods cannot evaluate the detection results of targets in three-dimensional space well.
Disclosure of Invention
The embodiment of the disclosure at least provides an evaluation method, an evaluation device, electronic equipment and a storage medium.
In a first aspect, an embodiment of the present disclosure provides an evaluation method, including:
acquiring at least one three-dimensional target true value frame obtained by labeling each frame of point cloud data in a point cloud data set acquired by radar equipment, and at least one three-dimensional target detection frame obtained by performing target detection on each frame of point cloud data based on a target detection network;
and determining, based on the obtained at least one three-dimensional target true value frame and the at least one three-dimensional target detection frame, a detection evaluation result of the target detection network under a plurality of sub-scanning ranges obtained by dividing the overall scanning range of the radar device.
Here, the three-dimensional target true value frames, which may be obtained from the labeling result, and the three-dimensional target detection frames, which may be obtained from the target detection network, may be acquired first. With the overall scanning range of the radar device divided into a plurality of sub-scanning ranges, the detection evaluation result corresponding to the point cloud data set can then be determined for each sub-scanning range. This takes into account that different scanning ranges influence the evaluation of a target detection result in three-dimensional space differently: point cloud data collected for a target closer to the radar device is denser, which to some extent makes the detection result more accurate, whereas point cloud data collected for a farther target is sparser, which to some extent makes the detection result less accurate. Evaluating each sub-scanning range separately therefore makes the evaluation of the target detection result more accurate and complete.
In a possible implementation manner, the determining, based on the obtained at least one three-dimensional target true value frame and the at least one three-dimensional target detection frame, a detection evaluation result of the target detection network under a plurality of sub-scanning ranges obtained by dividing the scanning range of the radar device includes:
for each sub-scanning range of the plurality of sub-scanning ranges, determining at least one three-dimensional target detection frame falling within the sub-scanning range from the at least one three-dimensional target detection frame, and determining at least one three-dimensional target true value frame falling within the sub-scanning range from the at least one three-dimensional target true value frame;
determining a detection evaluation result of the target detection network on each frame of point cloud data in the sub-scanning range according to the determined at least one three-dimensional target detection frame and the determined at least one three-dimensional target true value frame;
and determining the detection evaluation result of the target detection network in each sub-scanning range based on the detection evaluation result of the target detection network on each frame of point cloud data in each sub-scanning range.
Here, in order to determine the detection evaluation result of the target detection network in each sub-scanning range, the detection evaluation result corresponding to each frame of point cloud data in that sub-scanning range may be determined first; for example, the per-frame results may be averaged to obtain the detection evaluation result of the target detection network. Since the detection evaluation result corresponding to each frame of point cloud data is determined based on the degree of matching between the at least one three-dimensional target detection frame and the at least one three-dimensional target true value frame in the sub-scanning range, the detection evaluation result determined for the point cloud data set in each sub-scanning range is more accurate.
In one possible embodiment, the detection evaluation result of the target detection network includes a mean average precision (mAP); and determining the detection evaluation result of the target detection network on each frame of point cloud data in the sub-scanning range according to the determined at least one three-dimensional target detection frame and the determined at least one three-dimensional target true value frame includes:
for each determined three-dimensional target detection frame in the at least one three-dimensional target detection frame, searching whether a first three-dimensional target true value frame with the coincidence degree with the three-dimensional target detection frame being greater than a first preset threshold exists in the at least one determined three-dimensional target true value frame;
determining the detection accuracy of the target detection network on each frame of point cloud data in each sub-scanning range according to the ratio of the number of three-dimensional target detection frames for which a corresponding first three-dimensional target true value frame is found to the total number of three-dimensional target detection frames in that frame of point cloud data;
determining the detection evaluation result of the target detection network in each sub-scanning range based on the detection evaluation result of the target detection network on each frame of point cloud data in each sub-scanning range, including:
and determining the mAP of the target detection network in each sub-scanning range based on the detection accuracy of the target detection network on each frame of point cloud data in each sub-scanning range and the number of frames of the point cloud data contained in the point cloud data set.
Here, the number of three-dimensional target detection frames for which a corresponding first three-dimensional target true value frame can be found is first determined based on the matching result between the at least one three-dimensional target detection frame and the at least one three-dimensional target true value frame. The ratio of this number to the total number of three-dimensional target detection frames in each frame of point cloud data gives the detection accuracy for that frame: a detection frame with a matching target true value frame indicates an accurate detection result rather than a false detection. Using this detection accuracy, the mAP of the target detection network can be determined, making the evaluation result more accurate.
In one possible embodiment, the detection evaluation result of the target detection network includes a mean average precision (mAP); and determining the detection evaluation result of the target detection network on each frame of point cloud data in the sub-scanning range according to the determined at least one three-dimensional target detection frame and the determined at least one three-dimensional target true value frame includes:
converting, based on the correspondence between the coordinate system of the bird's eye view and the coordinate system of the point cloud data, the determined at least one three-dimensional target true value frame into corresponding two-dimensional target true value frames, and the determined at least one three-dimensional target detection frame into corresponding two-dimensional target detection frames, respectively;
determining, based on the converted at least one two-dimensional target true value frame and at least one two-dimensional target detection frame, the detection accuracy of the target detection network on each frame of point cloud data under the bird's eye view in the sub-scanning range;
and the obtaining of the detection evaluation result of the target detection network in each sub-scanning range based on the detection evaluation result of the target detection network on each frame of point cloud data in each sub-scanning range includes:
determining the mAP of the target detection network under the bird's eye view in each sub-scanning range based on the detection accuracy of the target detection network on each frame of point cloud data under the bird's eye view in that sub-scanning range and the number of frames of point cloud data contained in the point cloud data set.
Here, the target detection result in three-dimensional space may be converted into a target detection result in two-dimensional space through the conversion between coordinate systems, and the mAP of the target detection network under the bird's eye view may then be determined in two-dimensional space for each sub-scanning range. On the premise of ensuring the completeness of evaluation across different scanning ranges, the completeness of evaluation across different spaces is thus also ensured.
In a possible embodiment, the determining, based on the obtained at least one three-dimensional target true value frame and the at least one three-dimensional target detection frame, a detection evaluation result of the target detection network in each sub-scanning range includes:
aiming at each three-dimensional target detection frame in the at least one obtained three-dimensional target detection frame, searching whether a first three-dimensional target true value frame with the coincidence degree with the three-dimensional target detection frame larger than a first preset threshold exists in the at least one obtained three-dimensional target true value frame;
for each found three-dimensional target detection frame without a corresponding first three-dimensional target true value frame, searching the at least one three-dimensional target true value frame for a second three-dimensional target true value frame whose coincidence degree with the three-dimensional target detection frame is greater than a second preset threshold and smaller than the first preset threshold;
and determining the detection evaluation result of the target detection network in each sub-scanning range based on each searched three-dimensional target detection frame and the searched second three-dimensional target true value frame.
In the embodiments of the present disclosure, besides the evaluation over different scanning ranges, other detection dimensions in three-dimensional space may also be evaluated; for example, the distance deviation, angle deviation and size deviation of the detection frame may be evaluated. To make the evaluation more complete, detection frames that are only partially matched (i.e., whose coincidence degree is below the first preset threshold but above a certain value) can be assumed to carry the above deviations, so such partially matched detection frames may be evaluated to determine the evaluation results for these other detection dimensions.
In one possible implementation, the detection evaluation result of the target detection network includes an average distance error (ATE); and determining the detection evaluation result of the target detection network in each sub-scanning range based on each found three-dimensional target detection frame and the found second three-dimensional target true value frame includes:
for each found three-dimensional target detection frame, projecting, based on the correspondence between the coordinate system of the bird's eye view and the coordinate system of the point cloud data, the three-dimensional target detection frame and the found second three-dimensional target true value frame onto the coordinate system of the bird's eye view, to obtain a projected two-dimensional target detection frame and a second two-dimensional target true value frame;
determining the distance between the projected two-dimensional target detection frame and the second two-dimensional target true value frame;
determining ATE of the target detection network on each frame of the point cloud data based on the determined distance;
and determining the ATE of the target detection network in each sub-scanning range according to the ATE of the target detection network on each frame of point cloud data and the number of frames of the point cloud data contained in the point cloud data set.
Here, by determining the ATE of the target detection network in each sub-scanning range, the average distance deviation of the detection results corresponding to the point cloud data can be evaluated: the larger the deviation, the less accurate the detection result to some extent; conversely, the smaller the deviation, the more accurate the detection result to some extent. The disclosure thus provides an evaluation means of wider dimensions.
In one possible embodiment, the detection evaluation result of the target detection network includes an average size error (ASE); and determining the detection evaluation result of the target detection network in each sub-scanning range based on each found three-dimensional target detection frame and the found second three-dimensional target true value frame includes:
for each found three-dimensional target detection frame, aligning the three-dimensional target detection frame and the found corresponding second three-dimensional target true value frame in position and orientation, to obtain an aligned three-dimensional target detection frame and second three-dimensional target true value frame;
determining ASE of the target detection network on each frame of point cloud data based on the coincidence ratio between the aligned three-dimensional target detection frame and the second three-dimensional target true value frame;
and determining the ASE of the target detection network in each sub-scanning range according to the ASE of the target detection network in each frame of point cloud data and the number of frames of the point cloud data contained in the point cloud data set.
Here, by determining the ASE of the target detection network in each sub-scanning range, the average size deviation of the detection results corresponding to the point cloud data can be evaluated: the larger the deviation, the less accurate the detection result to some extent; conversely, the smaller the deviation, the more accurate the detection result to some extent. The disclosure thus provides an evaluation means of wider dimensions.
In one possible embodiment, the detection evaluation result of the target detection network includes an average orientation error (AOE); and determining the detection evaluation result of the target detection network in each sub-scanning range based on each found three-dimensional target detection frame and the found second three-dimensional target true value frame includes:
for each found three-dimensional target detection frame, determining the AOE of the target detection network on each frame of point cloud data according to the angle difference between the orientation of the three-dimensional target detection frame and the orientation of the found corresponding second three-dimensional target true value frame;
and determining the AOE of the target detection network in each sub-scanning range according to the AOE of the target detection network on each frame of point cloud data and the number of frames of the point cloud data contained in the point cloud data set.
Here, by determining the AOE of the target detection network in each sub-scanning range, the average orientation deviation of the detection results corresponding to the point cloud data can be evaluated: the larger the deviation, the less accurate the detection result to some extent; conversely, the smaller the deviation, the more accurate the detection result to some extent. It can be seen that the disclosure provides an evaluation means of wider dimensions.
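For illustration only, a minimal Python sketch of such an orientation error; the yaw field, the use of radians and the matched-pair representation are assumptions rather than the disclosure's notation:

```python
import math

def yaw_error(det_yaw, gt_yaw):
    """Smallest absolute angle between two headings, in radians (range [0, pi])."""
    diff = abs(det_yaw - gt_yaw) % (2.0 * math.pi)
    return min(diff, 2.0 * math.pi - diff)

def frame_aoe(pairs):
    """Average orientation error over the matched (detection, truth) pairs of a frame."""
    if not pairs:
        return 0.0
    errors = [yaw_error(det["yaw"], gt["yaw"]) for det, gt in pairs]
    return sum(errors) / len(errors)
```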
In one possible embodiment, the method further comprises:
determining the false detection rate of the target detection network on each frame of point cloud data according to the ratio of the number of three-dimensional target detection frames for which no corresponding first three-dimensional target true value frame is found to the total number of three-dimensional target detection frames in that frame of point cloud data; and/or,
for each three-dimensional target true value frame of the at least one three-dimensional target true value frame in each frame of point cloud data, searching the at least one three-dimensional target detection frame for a first three-dimensional target detection frame whose coincidence degree with the three-dimensional target true value frame is greater than a third preset threshold; and determining the miss rate of the target detection network on each frame of point cloud data according to the ratio of the number of three-dimensional target true value frames for which no corresponding first three-dimensional target detection frame is found to the total number of three-dimensional target true value frames in that frame of point cloud data.
Here, the false detection rate and/or the miss rate for each frame of point cloud data may be determined based on the matching result between the at least one three-dimensional target true value frame and the at least one three-dimensional target detection frame in that frame of point cloud data, so as to evaluate the detection result of each frame of point cloud data as a whole.
In a possible implementation manner, the determining the false detection rate of the target detection network on each frame of point cloud data according to the ratio of the number of three-dimensional target detection frames for which no corresponding first three-dimensional target true value frame is found to the total number of three-dimensional target detection frames in that frame of point cloud data includes:
for each sub-scanning range, determining the false detection rate of the target detection network on each frame of point cloud data in the sub-scanning range according to the ratio of the number of three-dimensional target detection frames for which no corresponding first three-dimensional target true value frame is found to the total number of three-dimensional target detection frames detected in the sub-scanning range;
and the determining the miss rate of the target detection network on each frame of point cloud data according to the ratio of the number of three-dimensional target true value frames for which no corresponding first three-dimensional target detection frame is found to the total number of three-dimensional target true value frames in that frame of point cloud data includes:
determining the miss rate of the target detection network on each frame of point cloud data in each sub-scanning range according to the ratio of the number of three-dimensional target true value frames for which no corresponding first three-dimensional target detection frame is found to the total number of three-dimensional target true value frames determined in the sub-scanning range.
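As an illustration of these two ratios, a minimal Python sketch; the is_matched predicate (applying the first or third preset coincidence threshold) and the frame representation are assumptions:

```python
def frame_rates(detections, truths, is_matched):
    """False detection rate and miss rate of one frame of point cloud data."""
    false_dets = [d for d in detections
                  if not any(is_matched(d, g) for g in truths)]
    missed_gts = [g for g in truths
                  if not any(is_matched(d, g) for d in detections)]
    false_rate = len(false_dets) / len(detections) if detections else 0.0
    miss_rate = len(missed_gts) / len(truths) if truths else 0.0
    return false_rate, miss_rate
```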
In one possible embodiment, the method further comprises:
drawing a false detection distribution map of the target detection network over each frame of point cloud data within the overall scanning range of the radar device, based on the false detection rate of the target detection network on each frame of point cloud data in each sub-scanning range;
and/or drawing a missed detection distribution map of the target detection network over each frame of point cloud data within the overall scanning range of the radar device, based on the miss rate of the target detection network on each frame of point cloud data in each sub-scanning range.
Drawing the false detection distribution map and/or the missed detection distribution map for each frame of point cloud data provides a visual display, which allows an evaluator to grasp the evaluation result intuitively and makes the method more practical.
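One possible way to render such a distribution map, as a minimal Python sketch; matplotlib, the bar layout and the example numbers are illustrative assumptions rather than the disclosure's drawing method:

```python
import matplotlib.pyplot as plt

def plot_rate_map(sub_range_names, rates, title):
    """Bar chart of a per-sub-scanning-range rate over the overall scanning range."""
    fig, ax = plt.subplots()
    ax.bar(sub_range_names, rates)
    ax.set_xlabel("sub-scanning range")
    ax.set_ylabel("rate")
    ax.set_title(title)
    fig.savefig(title.replace(" ", "_") + ".png")

# e.g. plot_rate_map(["near", "middle", "far"], [0.02, 0.05, 0.11],
#                    "false detection distribution")
```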
In a second aspect, an embodiment of the present disclosure further provides an evaluation apparatus, including:
the system comprises an acquisition module and at least one detection module, wherein the acquisition module is used for acquiring at least one three-dimensional target true value frame obtained by labeling each frame of point cloud data in a point cloud data set acquired by radar equipment and at least one three-dimensional target detection frame obtained by performing target detection on each frame of point cloud data based on a target detection network;
and the evaluation module is used for determining the detection evaluation result of the target detection network under a plurality of sub-scanning ranges obtained by dividing the integral scanning range of the radar equipment based on the obtained at least one three-dimensional target true value frame and the at least one three-dimensional target detection frame.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the method of evaluating according to the first aspect and any of its various embodiments.
In a fourth aspect, the disclosed embodiments further provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs the steps of the evaluation method according to the first aspect and any of the various embodiments thereof.
For a description of the effects of the above evaluation device, electronic device and computer-readable storage medium, reference is made to the description of the evaluation method above, which is not repeated here.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to illustrate the technical solutions of the embodiments of the present disclosure more clearly, the drawings required by the embodiments are briefly described below. The drawings, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be appreciated that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those skilled in the art can derive other related drawings from them without inventive effort.
FIG. 1 illustrates a flow chart of an evaluation method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an evaluation device according to an embodiment of the disclosure;
fig. 3 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments of the present disclosure will be described clearly and completely below with reference to the drawings in the embodiments of the present disclosure. It is obvious that the described embodiments are only a part, not all, of the embodiments of the present disclosure. The components of the embodiments of the present disclosure, as generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure is not intended to limit the scope of the claimed disclosure, but merely represents selected embodiments of the disclosure. All other embodiments derived by a person skilled in the art from the embodiments of the disclosure without creative effort shall fall within the protection scope of the disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The term "and/or" herein merely describes an associative relationship, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
It has been found that, compared with classical 2D detection, 3D target detection requires not only detecting the class of a target but also providing its 3D position, 3D size and 3D orientation. For evaluating 3D detection results, conventional evaluation methods mainly follow the PASCAL criteria of 2D target detection; for example, the mean average precision (mAP) metric may be used.
However, in many cases the mAP metric cannot well distinguish the performance of 3D target detection methods, nor evaluate the precision and accuracy of their detection results. For example, for most 3D detection methods, and especially for point cloud data, achieving a high mAP is very difficult due to errors in distance estimation. Taking the KITTI data set as an example, Kinematic3D, currently the best-performing monocular 3D algorithm, reaches an mAP of only 12.72%, so the mAP metric can hardly distinguish the performance of these methods.
On the other hand, a single mAP metric cannot comprehensively measure the effect of a 3D target detection method. Taking an automatic driving scene as an example, besides an evaluation result over the full range, other deviation metrics in three-dimensional space also reflect the detection effect.
Based on the research, the evaluation method capable of performing refined evaluation on the 3D target detection result is provided, so that the performance of 3D target detection can be evaluated more reasonably and finely.
To facilitate understanding of the present embodiment, first, an evaluation method disclosed in the embodiments of the present disclosure is described in detail, where an execution subject of the evaluation method provided in the embodiments of the present disclosure is generally a computer device with certain computing capability, and the computer device includes, for example: a terminal device, which may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle mounted device, a wearable device, or a server or other processing device. In some possible implementations, the evaluation method may be implemented by a processor calling computer-readable instructions stored in a memory.
Referring to fig. 1, a flowchart of an evaluation method provided in the embodiment of the present disclosure is shown, and the method includes steps S101 to S102, where:
s101: acquiring at least one three-dimensional target true value frame obtained by labeling each frame of point cloud data in a point cloud data set acquired by radar equipment and at least one three-dimensional target detection frame obtained by performing target detection on each frame of point cloud data based on a target detection network;
s102: and determining a detection evaluation result of the target detection network under a plurality of sub-scanning ranges obtained by dividing the integral scanning range of the radar equipment based on the obtained at least one three-dimensional target true value frame and at least one three-dimensional target detection frame.
Here, to facilitate understanding of the evaluation method provided by the embodiments of the present disclosure, its application scenario is first described. The evaluation method in the embodiments of the disclosure is mainly applicable to the field of 3D target detection. As in 2D target detection, evaluation in the field of 3D target detection may also be based on the matching result between a detection frame and a true value frame. However, in 3D target detection, matching a detection frame to a true value frame is difficult: a slight misalignment in spatial position may cause a match to fail, which leaves the evaluation metric with insufficient discrimination in three-dimensional space.
In order to solve the above problem, the embodiments of the present disclosure provide a method for evaluating a 3D target detection result based on scan range division.
In the embodiments of the disclosure, the detection evaluation result of the target detection network may be determined, based on the obtained at least one three-dimensional target true value frame and the at least one three-dimensional target detection frame, under a plurality of sub-scanning ranges obtained by dividing the overall scanning range of the radar device.
The detection evaluation result may be an evaluation result obtained by detecting the point cloud data of each frame in the point cloud data set by the target detection network, that is, the detection evaluation result may be a detection evaluation result of the target detection network on the point cloud data set.
The three-dimensional target true value frame can be obtained by labeling point cloud data, and the three-dimensional target detection frame can be obtained by performing target detection on the point cloud data. The former can rely on manual labeling means, and the latter can correspond to the prediction result of the trained target detection network.
The point cloud data in the embodiments of the disclosure may be collected by a radar device; the radar device may be a rotary scanning laser radar or another type of radar device, which is not particularly limited here. Taking a rotary scanning laser radar as an example, the laser radar collects three-dimensional point cloud data about the surrounding environment while rotating and scanning in the horizontal direction. The laser radar may adopt a multi-line scanning mode in which a plurality of laser tubes, arranged longitudinally, emit in sequence during the rotational scanning; that is, multi-layer scanning in the vertical direction is performed during the rotational scanning in the horizontal direction. There is a certain included angle between adjacent laser tubes, and the vertical emission field of view may be 30 to 40 degrees. A data packet returned from the lasers emitted by the laser tubes is obtained for each scanning angle as the radar device rotates, and the data packets obtained over all scanning angles of one full rotation (a 360-degree scan) are spliced into one frame of point cloud data.
The embodiment of the disclosure may collect the multi-frame point cloud data obtained by scanning to obtain a point cloud data set, for example, the multi-frame point cloud data collected for a preset application scene (e.g., a road traffic scene) within a preset time (e.g., 3 minutes) may be used as the point cloud data set.
The labeling process of the point cloud data may be obtained based on labeling habits of different labeling personnel, and various information such as target size, target position, target orientation and the like may be labeled. The detection process of the point cloud data can be obtained by utilizing a trained target detection network for prediction, wherein the target detection network trains the corresponding relation between the point cloud data and the three-dimensional target detection frame, so that at least one three-dimensional target detection frame corresponding to each frame of point cloud data can be rapidly determined.
When a plurality of three-dimensional target true value frames and a plurality of three-dimensional target detection frames are acquired for each frame of point cloud data, the pairing relationship between the two kinds of frames (i.e., the three-dimensional target true value frames and the three-dimensional target detection frames) is not known, because they come from different acquisition modes; that is, it cannot be directly determined whether any two such frames correspond to the same target object. The matching operation between the two is therefore a key step in evaluating the 3D target detection result (corresponding to the three-dimensional target detection frames).
Here, it is considered that the target objects at different distances have different influences on the evaluation result, and therefore, here, the detection evaluation result corresponding to the point cloud data set may be determined for each sub-scanning range under a plurality of sub-scanning ranges obtained by dividing the overall scanning range of the radar device.
In consideration of the fluctuation that may exist in the evaluation result of a single frame of point cloud data, the embodiments of the disclosure evaluate the three-dimensional target detection frames corresponding to a point cloud data set formed by multiple frames of point cloud data.
It should be noted that the number of sub-scanning ranges may be determined based on the particular radar device. In practical applications, the number of divided sub-scanning ranges should be neither too large nor too small: an excessively fine division increases the amount of computation, while an excessively coarse division cannot well show the influence of target objects at different distances on the evaluation result.
For example, the overall scanning range may be divided into 3 sub-scanning ranges by distance from the radar device, corresponding to a near range, a middle range and a far range: the near range is the region within a radius of 30 m around the radar device, the middle range is the region between radii of 30 m and 50 m, and the far range is the region between radii of 50 m and 70 m.
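A minimal Python sketch of this range division, for illustration only; the bin edges follow the example above, and representing a frame by the planar coordinates (x, y) of its box centre, with the radar device at the origin, is an assumption:

```python
import math

SUB_RANGES = [("near", 0.0, 30.0), ("middle", 30.0, 50.0), ("far", 50.0, 70.0)]

def assign_sub_range(box):
    """Return the sub-scanning range that a box centre falls within, if any."""
    dist = math.hypot(box["x"], box["y"])  # planar distance to the radar device
    for name, lo, hi in SUB_RANGES:
        if lo <= dist < hi:
            return name
    return None  # beyond the evaluated overall scanning range
```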
The evaluation method provided by the embodiment of the disclosure can determine the detection evaluation result of the target detection network for each sub-scanning range according to the following steps:
step one, for each sub-scanning range of the plurality of sub-scanning ranges, determining at least one three-dimensional target detection frame falling within the sub-scanning range from the at least one three-dimensional target detection frame, and determining at least one three-dimensional target true value frame falling within the sub-scanning range from the at least one three-dimensional target true value frame;
determining a detection evaluation result of the target detection network on each frame of point cloud data in the sub-scanning range according to the determined at least one three-dimensional target detection frame and at least one three-dimensional target true value frame;
and step three, determining the detection evaluation result of the target detection network in each sub-scanning range based on the detection evaluation result of the target detection network on each frame of point cloud data in each sub-scanning range.
Here, first, for each sub-scanning range, the corresponding at least one three-dimensional target detection frame and at least one three-dimensional target true value frame may be selected from the at least one three-dimensional target detection frame and the at least one three-dimensional target true value frame, respectively.
A three-dimensional target detection frame may be screened by the target position given in its prediction result; for example, when the target position predicted for a three-dimensional target detection frame falls within a sub-scanning range, that detection frame is selected for the sub-scanning range. A three-dimensional target true value frame may likewise be screened by the target position given in its labeling result; the specific selection process follows the description for the three-dimensional target detection frame and is not repeated here.
For the screened at least one three-dimensional target detection frame and at least one three-dimensional target true value frame, a detection evaluation result can be determined for each frame of point cloud data in each sub-scanning range; the detection evaluation result corresponding to the point cloud data set in the sub-scanning range (that is, the detection evaluation result of the target detection network on the point cloud data set) can then be determined, for example by averaging the per-frame results.
In the embodiments of the disclosure, the detection evaluation result corresponding to the point cloud data set may include the mean average precision (mAP), which represents the accuracy of the target detection results over the entire point cloud data set. It may also include the average distance error (ATE), the average size error (ASE) and the average orientation error (AOE), which evaluate the deviation of the target detection results in the distance, size and orientation dimensions, respectively; the target detection network can then be adjusted according to these deviations to perform subsequent 3D target detection better.
The mAP in the embodiment of the present disclosure may be an evaluation result determined for a three-dimensional space, or may be an evaluation result determined for a two-dimensional space, which may be described in the following two aspects.
In a first aspect: the embodiment of the disclosure may determine the mAP of the target detection network in each sub-scanning range according to the following steps:
step one, aiming at each three-dimensional target detection frame in at least one determined three-dimensional target detection frame, searching whether a first three-dimensional target true value frame with the coincidence degree larger than a first preset threshold value with the three-dimensional target detection frame exists in the at least one determined three-dimensional target true value frame;
determining the detection accuracy of the target detection network on each frame of point cloud data in each sub-scanning range according to the ratio of the number of the three-dimensional target detection frames with the corresponding first three-dimensional target true value frame to the total number of the three-dimensional target detection frames in each frame of point cloud data;
and step three, determining the mAP of the target detection network in each sub-scanning range based on the detection accuracy of the target detection network on each frame of point cloud data in each sub-scanning range and the number of frames of the point cloud data contained in the point cloud data set.
Here, the detection accuracy for each frame of point cloud data in each sub-scanning range may be determined first; then, based on the per-frame detection accuracy and the number of frames of point cloud data included in the point cloud data set, the mAP corresponding to the point cloud data set may be determined.
In specific applications, the detection accuracies of the frames of point cloud data can be summed first, and the sum is then divided by the number of frames of point cloud data, thereby determining the mAP corresponding to the point cloud data set.
The detection accuracy rate for each frame of point cloud data in the embodiment of the present disclosure may refer to an accuracy rate of a detection result determined for the frame of point cloud data relative to a labeling result.
Considering that a three-dimensional target detection frame and a three-dimensional target true value frame with a high coincidence degree are more likely to correspond to the same target object, the target detection result indicated by such a paired three-dimensional target detection frame is more accurate. Therefore, for each three-dimensional target detection frame in each frame of point cloud data, a first three-dimensional target true value frame with a high coincidence degree with the detection frame can be searched for among the three-dimensional target true value frames, and the detection accuracy is then determined as the ratio of the number of detection frames with such a first three-dimensional target true value frame to the total number of three-dimensional target detection frames in that frame of point cloud data.
The coincidence degree in the embodiment of the present disclosure may be determined based on an Intersection-over-Union (IoU) between the three-dimensional target detection frame and the three-dimensional target true value frame, where the larger the Intersection-over-Union, the higher the corresponding coincidence degree.
The search for the three-dimensional target true value frame can be performed by setting a threshold on the coincidence degree: a three-dimensional target true value frame whose coincidence degree is greater than the first preset threshold can be taken as the true value frame paired with the three-dimensional target detection frame.
The first preset threshold should not be set too large or too small. The setting may be made in conjunction with different target objects in the embodiments of the present disclosure. For example, for a vehicle target, a first preset threshold value of 0.7 may be set here, and for a pedestrian target, a first preset threshold value of 0.5 may be set here.
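A minimal Python sketch of this per-frame detection accuracy and its average over the set, for illustration only; iou3d is an assumed helper returning the 3D intersection-over-union of two frames, and the simplified accuracy-based mAP follows the averaging described above rather than a full precision-recall computation:

```python
IOU_THRESHOLDS = {"vehicle": 0.7, "pedestrian": 0.5}  # first preset thresholds

def frame_accuracy(detections, truths, iou3d):
    """Fraction of detection frames for which a matching true value frame is found."""
    if not detections:
        return 0.0
    matched = 0
    for det in detections:
        threshold = IOU_THRESHOLDS.get(det["label"], 0.5)
        if any(gt["label"] == det["label"] and iou3d(det, gt) > threshold
               for gt in truths):
            matched += 1
    return matched / len(detections)

def map_over_set(frames, iou3d):
    """Average per-frame accuracy over all frames of the point cloud data set."""
    accuracies = [frame_accuracy(dets, gts, iou3d) for dets, gts in frames]
    return sum(accuracies) / len(accuracies) if accuracies else 0.0
```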
In the bird's eye view, the outline of a frame is clearer and there is no occlusion problem. Therefore, the embodiments of the present disclosure may evaluate the target detection result in three-dimensional space according to the scheme of the first aspect, and may also evaluate the target detection result in two-dimensional space according to the following scheme, further improving the completeness of the evaluation result.
In a second aspect: the embodiment of the disclosure may determine the mAP of the target detection network in each sub-scanning range according to the following steps:
firstly, converting, based on the correspondence between the coordinate system of the bird's eye view and the coordinate system of the point cloud data, the determined at least one three-dimensional target true value frame into corresponding two-dimensional target true value frames, and the determined at least one three-dimensional target detection frame into corresponding two-dimensional target detection frames, respectively;
secondly, determining, based on the converted at least one two-dimensional target true value frame and at least one two-dimensional target detection frame, the detection accuracy of the target detection network on each frame of point cloud data under the bird's eye view in the sub-scanning range;
and thirdly, determining the mAP of the target detection network under the bird's eye view in each sub-scanning range based on the detection accuracy of the target detection network on each frame of point cloud data under the bird's eye view in that sub-scanning range and the number of frames of point cloud data contained in the point cloud data set.
Here, the detection accuracy of each frame of point cloud data under the bird's eye view in each sub-scanning range may be determined first; the mAP of the point cloud data set under the bird's eye view may then be determined based on these per-frame detection accuracies and the number of frames of point cloud data contained in the point cloud data set.
To determine the detection accuracy of each frame of point cloud data under the bird's eye view, the true value frames in three-dimensional space (i.e., the three-dimensional target true value frames) are first converted into true value frames in two-dimensional space (i.e., two-dimensional target true value frames), and the detection frames in three-dimensional space (i.e., the three-dimensional target detection frames) into detection frames in two-dimensional space (i.e., two-dimensional target detection frames), based on the correspondence between the coordinate system of the bird's eye view and the coordinate system of the point cloud data; the detection accuracy is then determined based on the converted two-dimensional frames.
The conversion from a three-dimensional target true value frame to a two-dimensional target true value frame, and from a three-dimensional target detection frame to a two-dimensional target detection frame, may be the mapping of the frame in three-dimensional space onto a top view.
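A minimal Python sketch of such a top-view mapping, for illustration; representing a frame by its centre, size and yaw, and dropping the height dimension, are assumptions about the notation rather than the disclosure's definition:

```python
def to_bev(box3d):
    """Map a 3D frame in the point cloud coordinate system to a rotated 2D frame in the bird's eye view."""
    return {
        "x": box3d["x"], "y": box3d["y"],  # ground-plane centre
        "l": box3d["l"], "w": box3d["w"],  # footprint size
        "yaw": box3d["yaw"],               # heading is unchanged by the top view
    }
```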
The mAP of the target detection network under the bird's eye view may be determined following the determination of the mAP of the target detection network described in the first aspect, which is not repeated here.
It can be known that the evaluation method provided by the embodiment of the disclosure evaluates not only in a three-dimensional space but also in a two-dimensional space, so that the determined evaluation result is more complete.
Not only can the mAP evaluation result be determined from the matching result between the three-dimensional target true value frames and the three-dimensional target detection frames; evaluation results in further dimensions can also be determined based on that matching result, which improves the completeness of the evaluation and better meets the application requirements of the 3D target detection field.
The above evaluation results of more dimensions may refer to ATE related to distance deviation, ASE related to size deviation, and AOE related to orientation deviation.
Here, the detection evaluation result of the target detection network at each sub-scanning range may be determined as follows:
step one, aiming at each three-dimensional target detection frame in at least one obtained three-dimensional target detection frame, searching whether a first three-dimensional target true value frame with the coincidence degree larger than a first preset threshold value with the three-dimensional target detection frame exists in at least one obtained three-dimensional target true value frame;
step two, for each found three-dimensional target detection frame without a corresponding first three-dimensional target true value frame, searching the at least one three-dimensional target true value frame for a second three-dimensional target true value frame whose coincidence degree with the three-dimensional target detection frame is greater than a second preset threshold and smaller than the first preset threshold;
and step three, determining the detection evaluation result of the target detection network in each sub-scanning range based on each found three-dimensional target detection frame and the found second three-dimensional target true value frame.
Here, to determine the three deviation-based evaluation results, a three-dimensional target true value frame that has a certain coincidence degree with a three-dimensional target detection frame, but not a high one, may be determined for each such detection frame; such a true value frame is very likely the data source of the various deviations.
Similarly, a three-dimensional target true value frame meeting the deviation requirement can be selected by setting thresholds on the coincidence degree. For example, the second preset threshold may be set to 0.3 and the first preset threshold to 0.5; a three-dimensional target detection frame and a three-dimensional target true value frame whose coincidence degree lies between the two thresholds can then be operated on together to determine the corresponding evaluation results.
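A minimal Python sketch of this second matching pass, for illustration; iou3d is again an assumed helper, and the 0.3 and 0.5 bounds follow the example above:

```python
def partial_matches(unmatched_dets, truths, iou3d, lo=0.3, hi=0.5):
    """Pair each still-unmatched detection frame with a partially overlapping true value frame."""
    pairs = []
    for det in unmatched_dets:
        candidates = [(iou3d(det, gt), gt) for gt in truths]
        candidates = [(v, gt) for v, gt in candidates if lo < v < hi]
        if candidates:
            # keep the true value frame with the highest partial overlap
            best = max(candidates, key=lambda c: c[0])
            pairs.append((det, best[1]))
    return pairs
```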
The determination of the ATE, ASE, and AOE results is described in the following three aspects.
In a first aspect: the embodiment of the present disclosure may determine ATE of the target detection network under each sub-scanning range according to the following steps:
step one, aiming at each searched three-dimensional target detection frame, respectively projecting the three-dimensional target detection frame and the searched second three-dimensional target true value frame to the coordinate system of the aerial view based on the corresponding relation between the coordinate system of the aerial view and the coordinate system of the point cloud data to obtain the projected two-dimensional target detection frame and the second two-dimensional target true value frame;
step two, determining the distance between the projected two-dimensional target detection frame and the second two-dimensional target true value frame;
step three, determining the ATE of the target detection network on each frame of point cloud data based on the determined distance;
and step four, determining the ATE of the target detection network in each sub-scanning range according to the ATE of the target detection network on each frame of point cloud data and the number of frames of the point cloud data contained in the point cloud data set.
Here, the ATE corresponding to each frame of point cloud data may be determined first, and then the ATE corresponding to the point cloud data set in each sub-scanning range (i.e., the ATE of the target detection network) may be determined by averaging.
For each frame of point cloud data, in the top view, the Euclidean distance between the projected two-dimensional target detection frame and the second two-dimensional target true value frame is calculated for each found three-dimensional target detection frame; the average of the Euclidean distances calculated over these detection frames gives the ATE corresponding to that frame of point cloud data.
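A minimal sketch of this computation, assuming matched detection/true-value centre pairs in the point cloud coordinate system (the array shapes and function names are illustrative, not part of the disclosure):

```python
import numpy as np

def frame_ate(det_centers_3d, gt_centers_3d):
    """Per-frame ATE sketch: project matched centres to the bird's eye
    view by dropping the height coordinate, then average the Euclidean
    distances between the paired centres. Inputs are (N, 3) arrays of
    N matched pairs; assumes N >= 1."""
    det_bev = np.asarray(det_centers_3d, dtype=float)[:, :2]
    gt_bev = np.asarray(gt_centers_3d, dtype=float)[:, :2]
    return float(np.linalg.norm(det_bev - gt_bev, axis=1).mean())

def range_ate(per_frame_ates):
    """ATE over a sub-scanning range: average of the per-frame ATEs
    across the frames of point cloud data in the data set."""
    return float(np.mean(per_frame_ates))
```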
In a second aspect: the embodiment of the present disclosure may determine the ASE of the target detection network in each sub-scanning range according to the following steps:
step one, aiming at each found three-dimensional target detection frame, aligning the three-dimensional target detection frame and the found corresponding second three-dimensional target truth value frame in position and orientation to obtain an aligned three-dimensional target detection frame and a second three-dimensional target truth value frame;
step two, determining the ASE of the target detection network on each frame of point cloud data based on the coincidence degree between the aligned three-dimensional target detection frame and the second three-dimensional target true value frame;
and step three, determining the ASE of the target detection network in each sub-scanning range according to the ASE of the target detection network on each frame of point cloud data and the number of frames of the point cloud data contained in the point cloud data set.
Here, the ASE corresponding to each frame of point cloud data may be determined first, and then the ASE corresponding to the point cloud data set in each sub-scanning range (i.e., the ASE of the target detection network) may be determined by averaging.
For each frame of point cloud data, each found three-dimensional target detection frame and the correspondingly found second three-dimensional target true value frame can first be aligned, and the coincidence degree can then be calculated to determine the size deviation between the pair of frames. Given the number of such pairs, the average of the determined size deviations gives the ASE corresponding to that frame of point cloud data.
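The following sketch illustrates one way to realize this. Note that taking the size deviation as 1 - IoU is an assumption borrowed from common practice; the text above only requires that the ASE be based on the coincidence degree of the aligned frames.

```python
import numpy as np

def frame_ase(det_sizes, gt_sizes):
    """Per-frame ASE sketch. Once two boxes are aligned in position and
    orientation, their overlap depends only on the (l, w, h) sizes, so
    the aligned 3D IoU reduces to a product of per-axis minima.
    Inputs are (N, 3) size arrays of N matched pairs; assumes N >= 1."""
    det = np.asarray(det_sizes, dtype=float)
    gt = np.asarray(gt_sizes, dtype=float)
    inter = np.prod(np.minimum(det, gt), axis=1)   # aligned intersection
    union = np.prod(det, axis=1) + np.prod(gt, axis=1) - inter
    return float(np.mean(1.0 - inter / union))     # size deviation = 1 - IoU
```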
In a third aspect: the embodiment of the present disclosure may determine the AOE of the target detection network under each sub-scanning range according to the following steps:
step one, for each found three-dimensional target detection frame, determining the AOE of the target detection network on each frame of point cloud data according to the angle difference between the orientation of the three-dimensional target detection frame and the orientation of the correspondingly found second three-dimensional target true value frame;
and step two, determining the AOE of the target detection network in each sub-scanning range according to the AOE of the target detection network on each frame of point cloud data and the number of frames of point cloud data contained in the point cloud data set.
Here, the AOE corresponding to each frame of point cloud data may be determined first, and then the AOE corresponding to the point cloud data set in each sub-scanning range (i.e., the AOE of the target detection network) may be determined by averaging.
For each frame of point cloud data, the orientation deviation between each found three-dimensional target detection frame and the correspondingly found second three-dimensional target true value frame can be determined. Given the number of such pairs, the average of the determined orientation deviations gives the AOE corresponding to that frame of point cloud data.
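A minimal sketch of the per-frame AOE, under the assumption that each frame's orientation is given as a yaw angle in radians (the names are illustrative only):

```python
import numpy as np

def frame_aoe(det_yaws, gt_yaws):
    """Per-frame AOE sketch: absolute yaw difference of each matched
    pair, wrapped into [0, pi], averaged over the pairs in the frame."""
    diff = np.abs(np.asarray(det_yaws, dtype=float)
                  - np.asarray(gt_yaws, dtype=float))
    diff = np.mod(diff, 2.0 * np.pi)
    diff = np.minimum(diff, 2.0 * np.pi - diff)   # wrap to [0, pi]
    return float(diff.mean())
```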
The evaluation method provided by the embodiment of the disclosure can also determine the corresponding false detection rate and the missed detection rate for each frame of point cloud data.
The false detection rate of the target detection network on each frame of point cloud data can be determined as the ratio of the number of three-dimensional target detection frames for which no corresponding first three-dimensional target true value frame is found to the total number of three-dimensional target detection frames in that frame of point cloud data.
In addition, for each three-dimensional target true value frame in the at least one three-dimensional target true value frame in each frame of point cloud data, the at least one three-dimensional target detection frame can be searched for a first three-dimensional target detection frame whose coincidence degree with the three-dimensional target true value frame is greater than a third preset threshold; the missed detection rate of the target detection network on each frame of point cloud data is then determined as the ratio of the number of three-dimensional target true value frames for which no corresponding first three-dimensional target detection frame is found to the total number of three-dimensional target true value frames in that frame of point cloud data. The third preset threshold may be set according to different applications and is not specifically limited here.
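A sketch of the two per-frame rates described above (the counts are assumed to have been produced by the matching step; the guards against empty frames are an implementation choice, not part of the disclosure):

```python
def frame_error_rates(n_dets, n_unmatched_dets, n_gts, n_unmatched_gts):
    """Per-frame rates:
    false detection rate = detections without a matching first true value
                           frame / total detections in the frame;
    missed detection rate = true value frames without a matching first
                            detection frame / total true value frames."""
    false_rate = n_unmatched_dets / n_dets if n_dets else 0.0
    miss_rate = n_unmatched_gts / n_gts if n_gts else 0.0
    return false_rate, miss_rate
```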
In the embodiment of the disclosure, under the condition of determining the false detection rate of each frame of point cloud data, the embodiment of the disclosure can draw a false detection distribution map of a corresponding point cloud data set by using a visualization tool; under the condition of determining the missing detection rate of each frame of point cloud data, the embodiment of the disclosure can draw the missing detection distribution map of the corresponding point cloud data set by using a visualization tool.
In addition, the false detection rate and the missed detection rate of the target detection network on each frame of point cloud data can also be determined for each sub-scanning range; in this case, the totals refer to the number of three-dimensional target detection frames detected (or true value frames determined) within the sub-scanning range.
When the false detection rate of the target detection network on each frame of point cloud data has been determined for each sub-scanning range, the embodiment of the present disclosure can draw, with a visualization tool, a false detection distribution map of the target detection network on each frame of point cloud data over the whole scanning range of the radar device; when the missed detection rate of the target detection network on each frame of point cloud data has been determined for each sub-scanning range, the embodiment of the present disclosure can likewise draw the corresponding missed detection distribution map over the whole scanning range.
For a falsely detected three-dimensional target detection frame, there may be a plurality of false detection reasons, for example, false detection caused by insufficient matching IoU, false detection caused by classification error, pure false detection (a frame detected where no target actually exists), or false detection caused by other reasons, which is not specifically limited by the embodiment of the present disclosure.
For the above false detection reasons, the false detection rate attributable to each different reason can also be determined here, and the corresponding false detection distribution map can be presented through the visualization tool, so that the user can better understand the causes of false detection and correct the target detection network in time.
For a missed three-dimensional target true value frame, there may be a plurality of missed detection reasons, for example, missed detection caused by insufficient matching IoU, missed detection caused by classification error, pure missed detection, or missed detection caused by other reasons, which is not specifically limited by the embodiment of the present disclosure.
For the above missed detection reasons, the missed detection rate attributable to each different reason can also be determined here, and the corresponding missed detection distribution map can be presented through the visualization tool, so that the user can better understand the causes of missed detection and correct the model in time.
The false detection distribution map and the missed detection distribution map may be distribution maps in a three-dimensional viewing angle or in a top viewing angle; for example, the evaluation results determined for different sub-scanning ranges and different false/missed detection reasons may be presented in a thermodynamic diagram (heat map) manner.
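As one possible rendering of such a thermodynamic diagram, a sketch using matplotlib follows; the labels and the 2D array of rates are assumed inputs, not part of the disclosure:

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_error_heatmap(rates, range_labels, reason_labels, title):
    """Sketch: one heat-map cell per (sub-scanning range, false/missed
    detection reason) pair; `rates` is a 2D array of the corresponding
    false/missed detection rates."""
    fig, ax = plt.subplots()
    im = ax.imshow(np.asarray(rates, dtype=float), cmap="hot")
    ax.set_xticks(range(len(reason_labels)))
    ax.set_xticklabels(reason_labels, rotation=45, ha="right")
    ax.set_yticks(range(len(range_labels)))
    ax.set_yticklabels(range_labels)
    ax.set_title(title)
    fig.colorbar(im, ax=ax)
    fig.tight_layout()
    plt.show()
```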
It will be understood by those skilled in the art that, in the above method, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible inherent logic.
Based on the same inventive concept, the embodiment of the present disclosure further provides an evaluation device corresponding to the evaluation method; since the principle by which the device solves the problem is similar to that of the evaluation method described above, the implementation of the device may refer to the implementation of the method, and repeated details are not described again.
Referring to fig. 2, a schematic diagram of an evaluation apparatus provided in an embodiment of the present disclosure is shown. The apparatus includes an acquisition module 201 and an evaluation module 202, wherein:
the acquisition module 201 is configured to acquire at least one three-dimensional target true value frame obtained by labeling each frame of point cloud data in a point cloud data set acquired by a radar device, and at least one three-dimensional target detection frame obtained by performing target detection on each frame of point cloud data based on a target detection network;
and the evaluation module 202 is configured to determine a detection evaluation result of the target detection network under multiple sub-scanning ranges obtained by dividing the overall scanning range of the radar device based on the obtained at least one three-dimensional target true value frame and at least one three-dimensional target detection frame.
In the embodiment of the present disclosure, three-dimensional target true value frames and three-dimensional target detection frames may be obtained first, where the former are obtained from the labeling result and the latter from detection by the target detection network. With the whole scanning range of the radar device divided into a plurality of sub-scanning ranges, the detection evaluation result corresponding to the point cloud data set can then be determined for each sub-scanning range. This takes into account that different scanning ranges influence the evaluation of a target detection result in three-dimensional space differently: for example, the point cloud data collected for a closer target is denser, which makes the detection result more accurate to a certain extent, whereas the point cloud data collected for a farther target is sparser, which makes the detection result less accurate to a certain extent.
In a possible implementation manner, the evaluation module 202 is configured to determine, based on the obtained at least one three-dimensional target true value frame and at least one three-dimensional target detection frame, a detection evaluation result of the target detection network under a plurality of sub-scanning ranges obtained by dividing a scanning range of the radar device, according to the following steps:
for each sub-scanning range in the plurality of sub-scanning ranges, determining at least one three-dimensional target detection frame falling into the sub-scanning range from at least one three-dimensional target detection frame, and determining at least one three-dimensional target real value frame falling into the sub-scanning range from at least one three-dimensional target real value frame;
determining a detection evaluation result of the target detection network on each frame of point cloud data in the sub-scanning range according to the determined at least one three-dimensional target detection frame and at least one three-dimensional target true value frame;
and determining the detection evaluation result of the target detection network in each sub-scanning range based on the detection evaluation result of the target detection network on each frame of point cloud data in each sub-scanning range.
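By way of illustration only, the assignment of frames to sub-scanning ranges may be sketched as follows, binning boxes by the bird's-eye-view distance of their centres from the radar origin (the (N, 7) box layout and the range edges are assumptions for illustration):

```python
import numpy as np

def split_by_subrange(boxes, range_edges):
    """Sketch: assign each box to a sub-scanning range by the ground-plane
    distance of its centre from the radar origin. `boxes` is an assumed
    (N, 7) array [x, y, z, l, w, h, yaw]; `range_edges` such as
    [0, 30, 50, 70] (metres) splits the overall scanning range into three
    sub-scanning ranges."""
    boxes = np.asarray(boxes, dtype=float)
    dist = np.linalg.norm(boxes[:, :2], axis=1)
    bins = np.digitize(dist, range_edges) - 1   # sub-range index per box
    return [boxes[bins == i] for i in range(len(range_edges) - 1)]
```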
In one possible implementation, the detection evaluation result of the target detection network includes a mean average precision (mAP); the evaluation module 202 is configured to determine, based on the detection evaluation result corresponding to each frame of point cloud data in each sub-scanning range, the detection evaluation result of the target detection network in the sub-scanning range according to the following steps:
aiming at each three-dimensional target detection frame in the at least one determined three-dimensional target detection frame, searching whether a first three-dimensional target true value frame with the coincidence degree with the three-dimensional target detection frame larger than a first preset threshold exists in the at least one determined three-dimensional target true value frame;
determining the detection accuracy of the target detection network on each frame of point cloud data in each sub-scanning range according to the ratio of the number of the three-dimensional target detection frames with the corresponding first three-dimensional target true value frame to the total number of the three-dimensional target detection frames in each frame of point cloud data;
and determining the mAP of the target detection network in each sub-scanning range based on the detection accuracy of the target detection network on each frame of point cloud data in each sub-scanning range and the number of frames of the point cloud data contained in the point cloud data set.
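A minimal sketch of this aggregation, under the assumption that the per-frame detection accuracy has already been computed as described above:

```python
def frame_accuracy(n_matched_dets, n_total_dets):
    """Detection accuracy of one frame: detections with a corresponding
    first true value frame / all detections in the frame."""
    return n_matched_dets / n_total_dets if n_total_dets else 0.0

def subrange_map(per_frame_accuracies):
    """mAP of the target detection network in a sub-scanning range, taken
    here as the mean of the per-frame accuracies over the number of frames
    of point cloud data in the data set; assumes a non-empty data set."""
    return sum(per_frame_accuracies) / len(per_frame_accuracies)
```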
In one possible implementation, the detection evaluation result of the target detection network includes a mean average precision (mAP); the evaluation module 202 is configured to obtain, based on the detection evaluation result corresponding to each frame of point cloud data in each sub-scanning range, the detection evaluation result of the target detection network in the sub-scanning range according to the following steps:
respectively converting the determined at least one three-dimensional target real value frame into corresponding two-dimensional target real value frames and respectively converting the determined at least one three-dimensional target detection frame into corresponding two-dimensional target detection frames based on the corresponding relation between the coordinate system of the aerial view and the coordinate system of the point cloud data;
determining the detection accuracy of the target detection network on each frame of point cloud data and under the aerial view under the sub-scanning range based on the converted at least one two-dimensional target true value frame and at least one two-dimensional target detection frame;
and determining the mAP of the target detection network under the aerial view in each sub-scanning range based on the detection accuracy of the target detection network on each frame of point cloud data, under the aerial view and the number of frames of the point cloud data contained in the point cloud data set in each sub-scanning range.
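The conversion of a three-dimensional frame to its bird's-eye-view counterpart may be sketched as follows, assuming the bird's-eye-view coordinate system shares its origin and ground-plane axes with the point cloud coordinate system (the corner layout is illustrative):

```python
import numpy as np

def box3d_to_bev_corners(x, y, l, w, yaw):
    """Sketch: drop the height dimension of a 3D frame and expand the
    remaining (x, y, l, w, yaw) footprint into the four corner points of
    the corresponding two-dimensional frame in the bird's eye view."""
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, -s], [s, c]])
    half = np.array([[l, w], [l, -w], [-l, -w], [-l, w]]) / 2.0
    return half @ rot.T + np.array([x, y])   # (4, 2) array of corners
```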
In a possible implementation manner, the evaluation module 202 is configured to determine, based on the obtained at least one three-dimensional target true value box and at least one three-dimensional target detection box, a detection evaluation result of the target detection network at each sub-scanning range according to the following steps:
aiming at each three-dimensional target detection frame in the at least one obtained three-dimensional target detection frame, searching whether a first three-dimensional target true value frame with the coincidence degree with the three-dimensional target detection frame larger than a first preset threshold exists in the at least one obtained three-dimensional target true value frame;
aiming at each found three-dimensional target detection frame without the corresponding first three-dimensional target real value frame, finding a second three-dimensional target real value frame with the coincidence degree between the second three-dimensional target real value frame and the three-dimensional target detection frame being greater than a second preset threshold value and smaller than a first preset threshold value from at least one three-dimensional target real value frame;
and determining the detection evaluation result of the target detection network in each sub-scanning range based on each searched three-dimensional target detection frame and the searched second three-dimensional target true value frame.
In one possible implementation, the detection evaluation result of the target detection network comprises an average distance error ATE; an evaluation module 202, configured to determine, based on each found three-dimensional target detection box and the corresponding found second three-dimensional target true value box, a detection evaluation result of the target detection network in each sub-scanning range according to the following steps:
for each found three-dimensional target detection frame, respectively projecting the three-dimensional target detection frame and the found second three-dimensional target true value frame to the coordinate system of the aerial view based on the corresponding relation between the coordinate system of the aerial view and the coordinate system of the point cloud data to obtain a projected two-dimensional target detection frame and the second two-dimensional target true value frame;
determining the distance between the projected two-dimensional target detection frame and the second two-dimensional target true value frame;
determining the ATE of the target detection network on each frame of point cloud data based on the determined distance;
and determining the ATE of the target detection network in each sub-scanning range according to the ATE of the target detection network on each frame of point cloud data and the number of frames of the point cloud data contained in the point cloud data set.
In a possible embodiment, the detection evaluation result of the target detection network includes an average size error (ASE); the evaluation module 202 is configured to determine, based on each found three-dimensional target detection frame and the correspondingly found second three-dimensional target true value frame, a detection evaluation result of the target detection network in each sub-scanning range according to the following steps:
aiming at each found three-dimensional target detection frame, aligning the three-dimensional target detection frame and the found corresponding second three-dimensional target truth value frame in position and orientation to obtain an aligned three-dimensional target detection frame and a second three-dimensional target truth value frame;
determining ASE of the target detection network on each frame of point cloud data based on the coincidence degree between the aligned three-dimensional target detection frame and the second three-dimensional target true value frame;
and determining the ASE of the target detection network in each sub-scanning range according to the ASE of the target detection network on each frame of point cloud data and the number of frames of the point cloud data contained in the point cloud data set.
In one possible embodiment, the detection evaluation result of the target detection network includes an average orientation error AOE; an evaluation module 202, configured to determine, based on each found three-dimensional target detection box and the corresponding found second three-dimensional target true value box, a detection evaluation result of the target detection network in each sub-scanning range according to the following steps:
aiming at each found three-dimensional target detection frame, determining the AOE of the target detection network on each frame of point cloud data according to the angle difference between the orientation of the three-dimensional target detection frame and the orientation of the correspondingly found second three-dimensional target true value frame;
and determining the AOE of the target detection network in each sub-scanning range according to the AOE of the target detection network on each frame of point cloud data and the number of frames of the point cloud data contained in the point cloud data set.
In a possible embodiment, the above apparatus further comprises:
the determining module 203 is configured to determine a false detection rate of the target detection network in each frame of point cloud data according to a ratio of the number of the three-dimensional target detection frames which are not found to have a corresponding first three-dimensional target true value frame to the total number of the three-dimensional target detection frames in each frame of point cloud data; and/or the presence of a gas in the gas,
aiming at each three-dimensional target true value frame in at least one three-dimensional target true value frame in each frame of point cloud data, searching whether a first three-dimensional target detection frame with the coincidence degree between the first three-dimensional target detection frame and the three-dimensional target true value frame being larger than a third preset threshold exists in at least one three-dimensional target detection frame; and determining the omission ratio of the target detection network in each frame of point cloud data according to the ratio of the number of the three-dimensional target true value frames which are not found to correspond to the first three-dimensional target detection frame to the total number of the three-dimensional target true value frames in each frame of point cloud data.
In a possible implementation manner, the determining module 203 is configured to determine the false detection rate of the target detection network in each frame of point cloud data, according to the ratio of the number of the three-dimensional target detection frames for which no corresponding first three-dimensional target true value frame is found to the total number of the three-dimensional target detection frames in each frame of point cloud data, according to the following steps:
for each sub-scanning range, determining the false detection rate of the target detection network on each frame of point cloud data in the sub-scanning range according to the ratio of the number of the three-dimensional target detection frames for which no corresponding first three-dimensional target true value frame is found to the total number of the three-dimensional target detection frames detected in the sub-scanning range;
the determining module 203 is configured to determine the missed detection rate of the target detection network in each frame of point cloud data, according to the ratio of the number of the three-dimensional target true value frames for which no corresponding first three-dimensional target detection frame is found to the total number of the three-dimensional target true value frames in each frame of point cloud data, according to the following steps:
determining the missed detection rate of the target detection network on each frame of point cloud data in each sub-scanning range according to the ratio of the number of the three-dimensional target true value frames for which no corresponding first three-dimensional target detection frame is found to the total number of the three-dimensional target true value frames determined in the sub-scanning range.
In a possible embodiment, the above apparatus further comprises:
the drawing module 204 is used for drawing a false detection distribution map of the target detection network on each frame of point cloud data in the integral scanning range of the radar equipment based on the false detection rate of the target detection network on each frame of point cloud data in each sub-scanning range;
and/or drawing a missing detection distribution map of the target detection network on each frame of point cloud data in the whole scanning range of the radar equipment based on the missing detection rate of the target detection network on each frame of point cloud data in each sub-scanning range.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
An embodiment of the present disclosure further provides an electronic device, as shown in fig. 3, which is a schematic structural diagram of the electronic device provided in the embodiment of the present disclosure, and the electronic device includes: a processor 301, a memory 302, and a bus 303. The memory 302 stores machine-readable instructions executable by the processor 301 (for example, execution instructions corresponding to the obtaining module 201 and the evaluating module 202 in the apparatus in fig. 2, and the like), when the electronic device is operated, the processor 301 and the memory 302 communicate through the bus 303, and when the machine-readable instructions are executed by the processor 301, the following processes are performed:
acquiring at least one three-dimensional target true value frame obtained by labeling each frame of point cloud data in a point cloud data set acquired by radar equipment and at least one three-dimensional target detection frame obtained by performing target detection on each frame of point cloud data based on a target detection network;
and determining a detection evaluation result of the target detection network under a plurality of sub-scanning ranges obtained by dividing the integral scanning range of the radar equipment based on the obtained at least one three-dimensional target true value frame and at least one three-dimensional target detection frame.
The embodiment of the present disclosure further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the evaluation method described in the above method embodiment. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiments of the present disclosure also provide a computer program product, where the computer program product carries a program code, and instructions included in the program code may be used to execute the steps of the evaluation method in the foregoing method embodiments, which may be referred to specifically for the foregoing method embodiments, and are not described herein again.
The computer program product may be implemented by hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium; in another alternative embodiment, the computer program product is embodied in a software product, such as a software development kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above-mentioned embodiments are merely specific embodiments of the present disclosure, used to illustrate the technical solutions of the present disclosure rather than to limit them, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the technical field can still modify the technical solutions described in the foregoing embodiments, or easily conceive of changes or equivalent substitutions of some of their technical features, within the technical scope disclosed herein; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure and should be construed as being included therein. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (14)

1. An evaluation method, comprising:
acquiring at least one three-dimensional target true value frame obtained by labeling each frame of point cloud data in a point cloud data set acquired by radar equipment, and at least one three-dimensional target detection frame obtained by performing target detection on each frame of point cloud data based on a target detection network;
and determining a detection evaluation result of the target detection network under a plurality of sub-scanning ranges obtained by dividing the whole scanning range of the radar equipment based on the obtained at least one three-dimensional target true value frame and the at least one three-dimensional target detection frame.
2. The evaluating method according to claim 1, wherein the determining, based on the obtained at least one three-dimensional target true value frame and the at least one three-dimensional target detection frame, a detection evaluation result of the target detection network under a plurality of sub-scanning ranges obtained by dividing a scanning range of the radar device includes:
for each sub-scanning range in the plurality of sub-scanning ranges, determining at least one three-dimensional target detection frame falling in the sub-scanning range from the at least one three-dimensional target detection frame, and determining at least one three-dimensional target real value frame falling in the sub-scanning range from the at least one three-dimensional target real value frame;
determining a detection evaluation result of the target detection network on each frame of point cloud data in the sub-scanning range according to the determined at least one three-dimensional target detection frame and the determined at least one three-dimensional target true value frame;
and determining the detection evaluation result of the target detection network in each sub-scanning range based on the detection evaluation result of the target detection network on each frame of point cloud data in each sub-scanning range.
3. The evaluation method according to claim 2, wherein the detection evaluation result of the target detection network comprises a mean average precision (mAP); determining a detection evaluation result of the target detection network on each frame of point cloud data in the sub-scanning range according to the determined at least one three-dimensional target detection frame and the determined at least one three-dimensional target true value frame, including:
for each determined three-dimensional target detection frame in the at least one three-dimensional target detection frame, searching whether a first three-dimensional target true value frame with the coincidence degree with the three-dimensional target detection frame being greater than a first preset threshold exists in the at least one determined three-dimensional target true value frame;
determining the detection accuracy of the target detection network on each frame of point cloud data in each sub-scanning range according to the ratio of the number of the found three-dimensional target detection frames corresponding to the first three-dimensional target true value frame to the total number of the three-dimensional target detection frames in each frame of point cloud data;
determining the detection evaluation result of the target detection network in each sub-scanning range based on the detection evaluation result of the target detection network on each frame of point cloud data in each sub-scanning range, including:
and determining the mAP of the target detection network in each sub-scanning range based on the detection accuracy of the target detection network on each frame of point cloud data in each sub-scanning range and the number of frames of the point cloud data contained in the point cloud data set.
4. The evaluation method according to claim 2, wherein the detection evaluation result of the target detection network comprises a mean average precision (mAP); determining a detection evaluation result of the target detection network on each frame of point cloud data in the sub-scanning range according to the determined at least one three-dimensional target detection frame and the determined at least one three-dimensional target true value frame, including:
respectively converting the determined at least one three-dimensional target real value frame into corresponding two-dimensional target real value frames and respectively converting the determined at least one three-dimensional target detection frame into corresponding two-dimensional target detection frames based on the corresponding relation between the coordinate system of the aerial view and the coordinate system of the point cloud data;
determining the detection accuracy of the target detection network on each frame of point cloud data under the aerial view under the sub-scanning range based on the converted at least one two-dimensional target true value frame and at least one two-dimensional target detection frame;
the obtaining of the detection evaluation result of the target detection network in each sub-scanning range based on the detection evaluation result of the target detection network on each frame of point cloud data in each sub-scanning range includes:
and determining the mAP of the target detection network under the aerial view under each sub-scanning range based on the detection accuracy of the target detection network on each frame of the point cloud data, under the aerial view and the number of frames of the point cloud data contained in the point cloud data set under each sub-scanning range.
5. The evaluation method according to any one of claims 1 to 4, wherein the determining the result of the detection evaluation of the target detection network in each sub-scanning range based on the obtained at least one three-dimensional target true value box and the at least one three-dimensional target detection box comprises:
aiming at each three-dimensional target detection frame in the at least one obtained three-dimensional target detection frame, searching whether a first three-dimensional target true value frame with the coincidence degree with the three-dimensional target detection frame larger than a first preset threshold exists in the at least one obtained three-dimensional target true value frame;
aiming at each found three-dimensional target detection frame without the corresponding first three-dimensional target real value frame, finding a second three-dimensional target real value frame with the coincidence degree between the second three-dimensional target real value frame and the three-dimensional target detection frame being greater than a second preset threshold value and smaller than the first preset threshold value from the at least one three-dimensional target real value frame;
and determining the detection evaluation result of the target detection network in each sub-scanning range based on each searched three-dimensional target detection frame and the searched second three-dimensional target true value frame.
6. The evaluation method according to claim 5, wherein the detection evaluation result of the target detection network comprises an average distance error ATE; determining a detection evaluation result of the target detection network in each sub-scanning range based on each found three-dimensional target detection frame and the found second three-dimensional target true value frame, including:
aiming at each found three-dimensional target detection frame, respectively projecting the three-dimensional target detection frame and the found second three-dimensional target truth value frame to the coordinate system of the aerial view based on the corresponding relation between the coordinate system of the aerial view and the coordinate system of the point cloud data to obtain a projected two-dimensional target detection frame and a second two-dimensional target truth value frame;
determining the distance between the projected two-dimensional target detection frame and the second two-dimensional target true value frame;
determining ATE of the target detection network on each frame of the point cloud data based on the determined distance;
and determining the ATE of the target detection network in each sub-scanning range according to the ATE of the target detection network on each frame of point cloud data and the number of frames of the point cloud data contained in the point cloud data set.
7. The evaluation method according to claim 5 or 6, wherein the detection evaluation result of the target detection network comprises an average size error (ASE); determining a detection evaluation result of the target detection network in each sub-scanning range based on each found three-dimensional target detection frame and the found second three-dimensional target true value frame, including:
aiming at each found three-dimensional target detection frame, aligning the three-dimensional target detection frame and the found corresponding second three-dimensional target truth value frame in position and orientation to obtain an aligned three-dimensional target detection frame and a second three-dimensional target truth value frame;
determining ASE of the target detection network on each frame of point cloud data based on the coincidence ratio between the aligned three-dimensional target detection frame and the second three-dimensional target true value frame;
and determining the ASE of the target detection network in each sub-scanning range according to the ASE of the target detection network in each frame of point cloud data and the number of frames of the point cloud data contained in the point cloud data set.
8. The evaluation method according to any one of claims 5 to 7, wherein the detection evaluation results of the target detection network comprise an average orientation error, AOE; determining a detection evaluation result of the target detection network in each sub-scanning range based on each found three-dimensional target detection frame and the found second three-dimensional target true value frame, including:
aiming at each found three-dimensional target detection frame, determining the AOE of the target detection network on each frame of point cloud data according to the angle difference between the orientation of the three-dimensional target detection frame and the orientation of the correspondingly found second three-dimensional target true value frame;
and determining the AOE of the target detection network in each sub-scanning range according to the AOE of the target detection network on each frame of point cloud data and the number of frames of the point cloud data contained in the point cloud data set.
9. The evaluation method according to any one of claims 5 to 8, further comprising:
determining the false detection rate of the target detection network in each frame of point cloud data according to the ratio of the number of the three-dimensional target detection frames for which no corresponding first three-dimensional target true value frame is found to the total number of the three-dimensional target detection frames in each frame of point cloud data; and/or,
for each three-dimensional target true value frame in the at least one three-dimensional target true value frame in each frame of point cloud data, searching the at least one three-dimensional target detection frame for a first three-dimensional target detection frame whose coincidence degree with the three-dimensional target true value frame is greater than a third preset threshold; and determining the missed detection rate of the target detection network in each frame of point cloud data according to the ratio of the number of the three-dimensional target true value frames for which no corresponding first three-dimensional target detection frame is found to the total number of the three-dimensional target true value frames in each frame of point cloud data.
10. The evaluation method according to claim 9, wherein the determining the false detection rate of the target detection network in each frame of point cloud data according to the ratio of the number of the three-dimensional target detection frames for which no corresponding first three-dimensional target true value frame is found to the total number of the three-dimensional target detection frames in each frame of point cloud data comprises:
for each sub-scanning range, determining the false detection rate of the target detection network on each frame of point cloud data in the sub-scanning range according to the ratio of the number of the three-dimensional target detection frames for which no corresponding first three-dimensional target true value frame is found to the total number of the three-dimensional target detection frames detected in the sub-scanning range;
the determining the missed detection rate of the target detection network in each frame of point cloud data according to the ratio of the number of the three-dimensional target true value frames for which no corresponding first three-dimensional target detection frame is found to the total number of the three-dimensional target true value frames in each frame of point cloud data comprises:
and determining the missed detection rate of the target detection network on each frame of point cloud data in each sub-scanning range according to the ratio of the number of the three-dimensional target true value frames for which no corresponding first three-dimensional target detection frame is found to the total number of the three-dimensional target true value frames determined in the sub-scanning range.
11. The evaluation method according to claim 10, further comprising:
drawing a false detection distribution map of the target detection network on each frame of point cloud data in the integral scanning range of the radar equipment based on the false detection rate of the target detection network on each frame of point cloud data in each sub-scanning range;
and/or drawing a missing detection distribution map of the target detection network on each frame of point cloud data in the whole scanning range of the radar equipment based on the missing detection rate of the target detection network on each frame of point cloud data in each sub-scanning range.
12. An evaluation device, comprising:
the system comprises an acquisition module and at least one detection module, wherein the acquisition module is used for acquiring at least one three-dimensional target true value frame obtained by labeling each frame of point cloud data in a point cloud data set acquired by radar equipment and at least one three-dimensional target detection frame obtained by performing target detection on each frame of point cloud data based on a target detection network;
and the evaluation module is used for determining the detection evaluation result of the target detection network under a plurality of sub-scanning ranges obtained by dividing the integral scanning range of the radar equipment based on the obtained at least one three-dimensional target true value frame and the at least one three-dimensional target detection frame.
13. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating over the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the steps of the method of evaluating according to any of claims 1 to 11.
14. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the evaluation method according to one of claims 1 to 11.
CN202110570839.9A 2021-05-25 2021-05-25 Evaluation method, evaluation device, electronic equipment and storage medium Pending CN113222042A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110570839.9A CN113222042A (en) 2021-05-25 2021-05-25 Evaluation method, evaluation device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113222042A true CN113222042A (en) 2021-08-06

Family

ID=77099521

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110570839.9A Pending CN113222042A (en) 2021-05-25 2021-05-25 Evaluation method, evaluation device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113222042A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023072055A1 (en) * 2021-10-27 2023-05-04 华为技术有限公司 Point cloud data processing method and system
CN116543271A (en) * 2023-05-24 2023-08-04 北京斯年智驾科技有限公司 Method, device, electronic equipment and medium for determining target detection evaluation index

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111179329A (en) * 2019-12-31 2020-05-19 智车优行科技(上海)有限公司 Three-dimensional target detection method and device and electronic equipment
CN112700552A (en) * 2020-12-31 2021-04-23 华为技术有限公司 Three-dimensional object detection method, three-dimensional object detection device, electronic apparatus, and medium
CN112818845A (en) * 2021-01-29 2021-05-18 深圳市商汤科技有限公司 Test method, target object detection method, driving control method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination